Dataset schema (38 columns; one row per bibliography entry):

    column               type        range / cardinality
    -----------------    --------    -----------------------------------
    entry_type           string      4 classes
    citation_key         string      length 10–110
    title                string      length 6–276
    editor               string      723 classes
    month                string      69 classes
    year                 date        1963-01-01 – 2022-01-01
    address              string      202 classes
    publisher            string      41 classes
    url                  string      length 34–62
    author               string      length 6–2.07k
    booktitle            string      861 classes
    pages                string      length 1–12
    abstract             string      length 302–2.4k
    journal              string      5 classes
    volume               string      24 classes
    doi                  string      length 20–39
    n                    string      3 classes
    wer                  string      1 class
    uas                  null        –
    language             string      3 classes
    isbn                 string      34 classes
    recall               null        –
    number               string      8 classes
    a                    null        –
    b                    null        –
    c                    null        –
    k                    null        –
    f1                   string      4 classes
    r                    string      2 classes
    mci                  string      1 class
    p                    string      2 classes
    sd                   string      1 class
    female               string      0 classes
    m                    string      0 classes
    food                 string      1 class
    f                    string      1 class
    note                 string      20 classes
    __index_level_0__    int64       22k–106k
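A row under this schema is just a mapping from the column names above to values, with most of the trailing metric/ID columns null. A minimal sketch of handling such rows (the helper name `compact` and the abridged example row are illustrative, not part of the dataset):

```python
def compact(row: dict) -> dict:
    """Return only the fields of a row that actually carry a value."""
    return {k: v for k, v in row.items() if v is not None}

# Abridged example row; per the schema, columns such as doi, wer and
# uas are null for most of these LREC entries.
row = {
    "entry_type": "inproceedings",
    "citation_key": "dale-narroway-2012-framework",
    "year": "2012",
    "doi": None,
    "wer": None,
    "__index_level_0__": 73468,
}

print(compact(row))
# keeps entry_type, citation_key, year and the index; drops the nulls
```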
@inproceedings{dale-narroway-2012-framework,
    title = {A Framework for Evaluating Text Correction},
    author = {Dale, Robert and Narroway, George},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1267/},
    pages = {3015--3018},
    abstract = {Computer-based aids for writing assistance have been around since at least the early 1980s, focussing primarily on aspects such as spelling, grammar and style. The potential audience for such tools is very large indeed, and this is a clear case where we might expect to see language processing applications having a significant real-world impact. However, existing comparative evaluations of applications in this space are often no more than impressionistic and anecdotal reviews of commercial offerings as found in software magazines, making it hard to determine which approaches are superior. More rigorous evaluation in the scholarly literature has been held back in particular by the absence of shared datasets of texts marked-up with errors, and the lack of an agreed evaluation framework. Significant collections of publicly available data are now appearing; this paper describes a complementary evaluation framework, which has been piloted in the Helping Our Own shared task. The approach, which uses stand-off annotations for representing edits to text, can be used in a wide variety of text-correction tasks, and easily accommodates different error tagsets.},
}
% __index_level_0__: 73,468
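Rows like the one above can be serialized back into BibTeX mechanically; a minimal sketch (the helper name `to_bibtex`, the field subset and the abridged entry are illustrative assumptions):

```python
def to_bibtex(row: dict) -> str:
    """Render a row as a BibTeX entry, skipping null fields and the
    dataframe index column."""
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = [
        "    {} = {{{}}},".format(k, v)
        for k, v in row.items()
        if v is not None and k not in skip
    ]
    return "@{}{{{},\n{}\n}}".format(
        row["entry_type"], row["citation_key"], "\n".join(fields)
    )

entry = {
    "entry_type": "inproceedings",
    "citation_key": "dale-narroway-2012-framework",
    "title": "A Framework for Evaluating Text Correction",
    "year": "2012",
    "pages": "3015--3018",
    "doi": None,  # null columns are dropped from the output
}
print(to_bibtex(entry))
```

Note that `month = may` in real Anthology entries is an unquoted BibTeX macro; the sketch above would brace it like any other field.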
@inproceedings{thurmair-etal-2012-large,
    title = {Large Scale Lexical Analysis},
    author = {Thurmair, Gregor and Aleksi{\'c}, Vera and Schwarz, Christoph},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1268/},
    pages = {2849--2855},
    abstract = {The following paper presents a lexical analysis component as implemented in the PANACEA project. The goal is to automatically extract lexicon entries from crawled corpora, in an attempt to use corpus-based methods for high-quality linguistic text processing, and to focus on the quality of data without neglecting quantitative aspects. Lexical analysis has the task of assigning linguistic information (like: part of speech, inflectional class, gender, subcategorisation frame, semantic properties etc.) to all parts of the input text. If tokens are ambiguous, lexical analysis must provide all possible sets of annotation for later (syntactic) disambiguation, be it tagging or full parsing. The paper presents an approach for assigning part-of-speech tags for German and English to large input corpora ({\ensuremath{>}} 50 mio tokens), providing a workflow which takes as input crawled corpora and provides POS-tagged lemmata ready for lexicon integration. Tools include sentence splitting, lexicon lookup, decomposition, and POS defaulting. Evaluation shows that the overall error rate can be brought down to about 2{\%} if language resources are properly designed. The complete workflow is implemented as a sequence of web services integrated into the PANACEA platform.},
}
% __index_level_0__: 73,469
@inproceedings{wang-etal-2012-ntusocialrec,
    title = {{NTUS}ocial{R}ec: An Evaluation Dataset Constructed from Microblogs for Recommendation Applications in Social Networks},
    author = {Wang, Chieh-Jen and Cheng, Shuk-Man and Lee, Lung-Hao and Chen, Hsin-Hsi and Liu, Wen-shen and Huang, Pei-Wen and Lin, Shih-Peng},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1269/},
    pages = {2328--2332},
    abstract = {This paper proposes a method to construct an evaluation dataset from microblogs for the development of recommendation systems. We extract the relationships among three main entities in a recommendation event, i.e., who recommends what to whom. User-to-user friend relationships and user-to-resource interesting relationships in social media and resource-to-metadata descriptions in an external ontology are employed. In the experiments, the resources are restricted to visual entertainment media, movies in particular. A sequence of ground truths varying with time is generated, reflecting the dynamics of the real world.},
}
% __index_level_0__: 73,470
@inproceedings{gravier-etal-2012-etape,
    title = {The {ETAPE} corpus for the evaluation of speech-based {TV} content processing in the {F}rench language},
    author = {Gravier, Guillaume and Adda, Gilles and Paulsson, Niklas and Carr{\'e}, Matthieu and Giraudel, Aude and Galibert, Olivier},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1270/},
    pages = {114--118},
    abstract = {The paper presents a comprehensive overview of existing data for the evaluation of spoken content processing in a multimedia framework for the French language. We focus on the ETAPE corpus which will be made publicly available by ELDA mid 2012, after completion of the evaluation campaign, and recall existing resources resulting from previous evaluation campaigns. The ETAPE corpus consists of 30 hours of TV and radio broadcasts, selected to cover a wide variety of topics and speaking styles, emphasizing spontaneous speech and multiple speaker areas.},
}
% __index_level_0__: 73,471
@inproceedings{bel-etal-2012-automatic,
    title = {Automatic lexical semantic classification of nouns},
    author = {Bel, N{\'u}ria and Romeo, Lauren and Padr{\'o}, Muntsa},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1271/},
    pages = {1448--1455},
    abstract = {The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known noun lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and the theoretical background that allows for an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool in the automatic acquisition of lexical semantic classes.},
}
% __index_level_0__: 73,472
@inproceedings{de-luca-2012-useful,
    title = {Is it Useful to Support Users with Lexical Resources? A User Study.},
    author = {De Luca, Ernesto William},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1272/},
    pages = {3184--3189},
    abstract = {Current search engines are used for retrieving relevant documents from the huge amount of data available and have become an essential tool for the majority of Web users. Standard search engines do not consider semantic information that can help in recognizing the relevance of a document with respect to the meaning of a query. In this paper, we present our system architecture and a first user study, where we show that the use of semantics can help users in finding relevant information, filtering it and facilitating quicker access to data.},
}
% __index_level_0__: 73,473
@inproceedings{marinelli-cignoni-2012-boat,
    title = {In the same boat and other idiomatic seafaring expressions},
    author = {Marinelli, Rita and Cignoni, Laura},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1273/},
    pages = {627--631},
    abstract = {This paper reports on research carried out at the Institute for Computational Linguistics (ILC) on a set of idiomatic nautical expressions in Italian and English. A total of 200 Italian expressions were first selected and examined, using both monolingual and bilingual dictionaries, as well as specific lexicographical works dealing with the subject of idiomaticity, especially of the maritime type, and a similar undertaking was then conducted for the English expressions. We discuss the possibility of including both the Italian and English idiomatic expressions in the semantic database Mariterm, which contains terms belonging to the maritime domain. We describe the terminological database and the way in which the idiomatic expressions can be organised within the system, so that, similarly to the other synsets, they are connected to other concepts represented in the database, but at the same time continue to belong to a group of particular linguistic expressions. Furthermore, we study similarities and differences in meaning and usage of some idiomatic expressions in the two languages.},
}
% __index_level_0__: 73,474
@inproceedings{andersen-etal-2012-creation,
    title = {Creation and use of Language Resources in a Question-Answering e{H}ealth System},
    author = {Andersen, Ulrich and Braasch, Anna and Henriksen, Lina and Huszka, Csaba and Johannsen, Anders and Kayser, Lars and Maegaard, Bente and Norgaard, Ole and Schulz, Stefan and Wedekind, J{\"u}rgen},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1274/},
    pages = {2536--2542},
    abstract = {ESICT (Experience-oriented Sharing of health knowledge via Information and Communication Technology) is an ongoing research project funded by the Danish Council for Strategic Research. It aims at developing a health/disease related information system based on information technology, language technology, and formalized medical knowledge. The formalized medical knowledge consists partly of the terminology database SNOMED CT and partly of authorized medical texts on the domain. The system will allow users to ask questions in Danish and will provide natural language answers. Currently, the project is pursuing three basically different methods for question answering, and they are all described to some extent in this paper. A system prototype will handle questions related to diabetes and heart diseases. This paper concentrates on the methods employed for question answering and the language resources that are utilized. Some resources were existing, such as SNOMED CT; others, such as a corpus of sample questions, have had to be created or constructed.},
}
% __index_level_0__: 73,475
@inproceedings{rojas-barahona-etal-2012-building,
    title = {Building and Exploiting a Corpus of Dialog Interactions between {F}rench Speaking Virtual and Human Agents},
    author = {Rojas-Barahona, Lina M. and Lorenzo, Alejandra and Gardent, Claire},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1275/},
    pages = {1428--1435},
    abstract = {We describe the acquisition of a dialog corpus for French based on multi-task human-machine interactions in a serious game setting. We present a tool for data collection that is configurable for multiple games; describe the data collected using this tool and the annotation schema used to annotate it; and report on the results obtained when training a classifier on the annotated data to associate each player turn with a dialog move usable by a rule based dialog manager. The collected data consists of approximately 1250 dialogs, 10454 utterances and 168509 words and will be made freely available to academic and nonprofit research.},
}
% __index_level_0__: 73,476
@inproceedings{potet-etal-2012-collection,
    title = {Collection of a Large Database of {F}rench-{E}nglish {SMT} Output Corrections},
    author = {Potet, Marion and Esperan{\c{c}}a-Rodier, Emmanuelle and Besacier, Laurent and Blanchon, Herv{\'e}},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1276/},
    pages = {4043--4048},
    abstract = {Corpus-based approaches to machine translation (MT) rely on the availability of parallel corpora. To produce user-acceptable translation outputs, such systems need high quality data to be efficiently trained, optimized and evaluated. However, building a high quality dataset is a relatively expensive task. In this paper, we describe the data collection and analysis of a large database of 10,881 manually corrected SMT translation output hypotheses. These post-editions were collected using Amazon's Mechanical Turk, following some ethical guidelines. A complete analysis of the collected data pointed out the high quality of the corrections, with more than 87 {\%} of the collected post-editions improving the hypotheses and more than 94 {\%} of the crowdsourced post-editions being at least of professional quality. We also post-edited 1,500 gold-standard reference translations (of bilingual parallel corpora generated by professionals) and noticed that 72 {\%} of these translations needed to be corrected during post-edition. We computed a proximity measure between the different kinds of translations and pointed out that reference translations are as far from the hypotheses as from the corrected hypotheses (i.e. the post-editions). In light of these last findings, we discuss the adequacy of text-based generated reference translations for training sentence-to-sentence based SMT systems.},
}
% __index_level_0__: 73,477
@inproceedings{haselbach-etal-2012-german,
    title = {{G}erman \textit{nach}-Particle Verbs in Semantic Theory and Corpus Data},
    author = {Haselbach, Boris and Seeker, Wolfgang and Eckart, Kerstin},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1277/},
    pages = {3706--3711},
    abstract = {In this paper, we present a database-supported corpus study where we combine automatically obtained linguistic information from a statistical dependency parser, namely the occurrence of a dative argument, with predictions from a theory on the argument structure of German particle verbs with ``nach''. The theory predicts five readings of ``nach'' which behave differently with respect to dative licensing in their argument structure. From a huge German web corpus, we extracted sentences for a subset of ``nach''-particle verbs for which no dative is expected by the theory. Making use of a relational database management system, we bring together the corpus sentences and the lemmas manually annotated along the lines of the theory. We validate the theoretical predictions against the syntactic structure of the corpus sentences, which we obtained from a statistical dependency parser. We find that, in principle, the theory is borne out by the data, however, manual error analysis reveals cases for which the theory needs to be extended.},
}
% __index_level_0__: 73,478
@inproceedings{lefever-etal-2012-discovering,
    title = {Discovering Missing {W}ikipedia Inter-language Links by means of Cross-lingual Word Sense Disambiguation},
    author = {Lefever, Els and Hoste, V{\'e}ronique and De Cock, Martine},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1278/},
    pages = {841--846},
    abstract = {Wikipedia pages typically contain inter-language links to the corresponding pages in other languages. These links, however, are often incomplete. This paper describes a set of experiments in which the viability of discovering such missing inter-language links for ambiguous nouns by means of a cross-lingual Word Sense Disambiguation approach is investigated. The input for the inter-language link detection system is a set of Dutch pages for a given ambiguous noun and the output of the system is a set of links to the corresponding pages in three target languages (viz. French, Spanish and Italian). The experimental results show that although it is a very challenging task, the system succeeds in detecting missing inter-language links between Wikipedia documents for a manually labeled test set. The final goal of the system is to provide a human editor with a list of possible missing links that should be manually verified.},
}
% __index_level_0__: 73,479
@inproceedings{mansour-ney-2012-arabic,
    title = {{A}rabic-Segmentation Combination Strategies for Statistical Machine Translation},
    author = {Mansour, Saab and Ney, Hermann},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1279/},
    pages = {3915--3920},
    abstract = {Arabic segmentation was already applied successfully for the task of statistical machine translation (SMT). Yet, there is no consistent comparison of the effect of different techniques and methods over the final translation quality. In this work, we use existing tools and further re-implement and develop new methods for segmentation. We compare the resulting SMT systems based on the different segmentation methods over the small IWSLT 2010 BTEC and the large NIST 2009 Arabic-to-English translation tasks. Our results show that for both small and large training data, segmentation yields strong improvements, but the differences between the top ranked segmenters are statistically insignificant. Due to the different methodologies that we apply for segmentation, we expect a complementary variation in the results achieved by each method. As done in previous work, we combine several segmentation schemes of the same model but achieve modest improvements. Next, we try a different strategy, where we combine the different segmentation methods rather than the different segmentation schemes. In this case, we achieve stronger improvements over the best single system. Finally, combining schemes and methods has another slight gain over the best combination strategy.},
}
% __index_level_0__: 73,480
@inproceedings{hajic-etal-2012-announcing,
    title = {Announcing {P}rague {C}zech-{E}nglish {D}ependency {T}reebank 2.0},
    author = {Haji{\v{c}}, Jan and Haji{\v{c}}ov{\'a}, Eva and Panevov{\'a}, Jarmila and Sgall, Petr and Bojar, Ond{\v{r}}ej and Cinkov{\'a}, Silvie and Fu{\v{c}}{\'i}kov{\'a}, Eva and Mikulov{\'a}, Marie and Pajas, Petr and Popelka, Jan and Semeck{\'y}, Ji{\v{r}}{\'i} and {\v{S}}indlerov{\'a}, Jana and {\v{S}}t{\v{e}}p{\'a}nek, Jan and Toman, Josef and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1280/},
    pages = {3153--3160},
    abstract = {We introduce a substantial update of the Prague Czech-English Dependency Treebank, a parallel corpus manually annotated at the deep syntactic layer of linguistic representation. The English part consists of the Wall Street Journal (WSJ) section of the Penn Treebank. The Czech part was translated from the English source sentence by sentence. This paper gives a high level overview of the underlying linguistic theory (the so-called tectogrammatical annotation) with some details of the most important features like valency annotation, ellipsis reconstruction or coreference.},
}
% __index_level_0__: 73,481
@inproceedings{felt-etal-2012-first,
    title = {First Results in a Study Evaluating Pre-annotation and Correction Propagation for Machine-Assisted {S}yriac Morphological Analysis},
    author = {Felt, Paul and Ringger, Eric and Seppi, Kevin and Heal, Kristian and Haertel, Robbie and Lonsdale, Deryle},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1281/},
    pages = {878--885},
    abstract = {Manual annotation of large textual corpora can be cost-prohibitive, especially for rare and under-resourced languages. One potential solution is pre-annotation: asking human annotators to correct sentences that have already been annotated, usually by a machine. Another potential solution is correction propagation: using annotator corrections to bad pre-annotations to dynamically improve the remaining pre-annotations within the current sentence. The research presented in this paper employs a controlled user study to discover under what conditions these two machine-assisted annotation techniques are effective in increasing annotator speed and accuracy and thereby reducing the cost for the task of morphologically annotating texts written in classical Syriac. A preliminary analysis of the data indicates that pre-annotations improve annotator accuracy when they are at least 60{\%} accurate, and annotator speed when they are at least 80{\%} accurate. This research constitutes the first systematic evaluation of pre-annotation and correction propagation together in a controlled user study.},
}
% __index_level_0__: 73,482
@inproceedings{kurtic-etal-2012-corpus,
    title = {A Corpus of Spontaneous Multi-party Conversation in {B}osnian {S}erbo-{C}roatian and {B}ritish {E}nglish},
    author = {Kurti{\'c}, Emina and Wells, Bill and Brown, Guy J. and Kempton, Timothy and Aker, Ahmet},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1282/},
    pages = {1323--1327},
    abstract = {In this paper we present a corpus of audio and video recordings of spontaneous, face-to-face multi-party conversation in two languages. Freely available high quality recordings of mundane, non-institutional, multi-party talk are still sparse, and this corpus aims to contribute valuable data suitable for study of multiple aspects of spoken interaction. In particular, it constitutes a unique resource for spoken Bosnian Serbo-Croatian (BSC), an under-resourced language with no spoken resources available at present. The corpus consists of just over 3 hours of free conversation in each of the target languages, BSC and British English (BE). The audio recordings have been made on separate channels using head-set microphones, as well as using a microphone array, containing 8 omni-directional microphones. The data has been segmented and transcribed using segmentation notions and transcription conventions developed from those of the conversation analysis research tradition. Furthermore, the transcriptions have been automatically aligned with the audio at the word and phone level, using the method of forced alignment. In this paper we describe the procedures behind the corpus creation and present the main features of the corpus for the study of conversation.},
}
% __index_level_0__: 73,483
@inproceedings{gurrutxaga-alegria-2012-measuring,
    title = {Measuring the compositionality of {NV} expressions in {B}asque by means of distributional similarity techniques},
    author = {Gurrutxaga, Antton and Alegria, I{\~n}aki},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1283/},
    pages = {2389--2394},
    abstract = {We present several experiments aiming at measuring the semantic compositionality of NV expressions in Basque. Our approach is based on the hypothesis that compositionality can be related to distributional similarity. The contexts of each NV expression are compared with the contexts of its corresponding components by means of different techniques: similarity measures commonly used with the Vector Space Model (VSM), Latent Semantic Analysis (LSA), and several measures implemented in the Lemur Toolkit, such as the Indri index, tf-idf, the Okapi index and Kullback-Leibler divergence. Using our previous work with cooccurrence techniques as a baseline, the results point to improvements using the Indri index or Kullback-Leibler divergence, and a slight further improvement when used in combination with cooccurrence measures such as {\$}t{\$}-score, via rank-aggregation. This work is part of a project for MWE extraction and characterization using different techniques aiming at measuring the properties related to idiomaticity, such as institutionalization, non-compositionality and lexico-syntactic fixedness.},
}
% __index_level_0__: 73,484
@inproceedings{vivaldi-etal-2012-using,
    title = {Using {W}ikipedia to Validate the Terminology found in a Corpus of Basic Textbooks},
    author = {Vivaldi, Jorge and Cabrera-Diego, Luis Adri{\'a}n and Sierra, Gerardo and Pozzi, Mar{\'i}a},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1284/},
    pages = {3820--3827},
    abstract = {A scientific vocabulary is a set of terms that designate scientific concepts. This set of lexical units can be used in several applications ranging from the development of terminological dictionaries and machine translation systems to the development of lexical databases and beyond. Even though automatic term recognition systems have existed since the 80s, this process is still mainly done by hand, since it generally yields more accurate results, albeit more slowly and at a higher cost. Some of the reasons for this are the fairly low precision and recall results obtained, the domain dependence of existing tools and the lack of available semantic knowledge needed to validate these results. In this paper we present a method that uses Wikipedia as a semantic knowledge resource, to validate term candidates from a set of scientific text books used in the last three years of high school for mathematics, health education and ecology. The proposed method may be applied to any domain or language (assuming there is a minimal coverage by Wikipedia).},
}
% __index_level_0__: 73,485
inproceedings
caminero-etal-2012-serenoa
The {SERENOA} Project: Multidimensional Context-Aware Adaptation of Service Front-Ends
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1285/
Caminero, Javier and Rodr{\'i}guez, Mari Carmen and Vanderdonckt, Jean and Patern{\`o}, Fabio and Rett, Joerg and Raggett, Dave and Comeliau, Jean-Loup and Mar{\'i}n, Ignacio
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2977--2984
The SERENOA project is aimed at developing a novel, open platform for enabling the creation of context-sensitive Service Front-Ends (SFEs). A context-sensitive SFE provides a user interface (UI) that allows users to interact with remote services, and which exhibits some capability to be aware of the context and to react to changes of this context in a continuous way. As a result, such a UI will be adapted to e.g. a person`s devices, tasks, preferences, abilities, and social relationships, as well as the conditions of the surrounding physical environment, thus improving people`s satisfaction and performance compared to traditional SFEs based on manually designed UIs. The final aim is to support humans in a more effective, personalized and consistent way, thus improving the quality of life for citizens. In this scenario, we envisage SERENOA as the reference implementation of an SFE adaptation platform for the `Future Internet'.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,486
inproceedings
todirascu-etal-2012-french
{F}rench and {G}erman Corpora for Audience-based Text Type Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1286/
Todirascu, Amalia and Pad{\'o}, Sebastian and Krisch, Jennifer and Kisselew, Max and Heid, Ulrich
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1591--1597
This paper presents some of the results of the CLASSYN project, which investigated the classification of text according to audience-related text types. We describe the design principles and the properties of the French and German linguistically annotated corpora that we have created. We report on tools used to collect the data and on the quality of the syntactic annotation. The CLASSYN corpora comprise two text collections for investigating general text type differences between scientific and popular science texts in the two domains of medical and computer science.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,487
inproceedings
marimon-etal-2012-iula
The {IULA} Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1287/
Marimon, Montserrat and Fisas, Beatriz and Bel, N{\'u}ria and Villegas, Marta and Vivaldi, Jorge and Torner, Sergi and Lorente, Merc{\`e} and V{\'a}zquez, Silvia and Villegas, Marta
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1920--1926
This paper describes ongoing work on the construction of a new treebank for Spanish, the IULA Treebank. This new resource will contain about 60,000 richly annotated sentences as an extension of the already existing IULA Technical Corpus, which is only PoS tagged. In this paper we have focused on describing the work done to define the annotation process and the treebank design principles. We report on how the framework used, the DELPH-IN processing framework, has been crucial to the design principles and the bootstrapping strategy followed, especially with regard to the use of stochastic modules for reducing parsing overgeneration. We also report on the different evaluation experiments carried out to guarantee the quality of the already available results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,488
inproceedings
hendrickx-etal-2012-modality
Modality in Text: a Proposal for Corpus Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1288/
Hendrickx, Iris and Mendes, Am{\'a}lia and Mencarelli, Silvia
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1805--1812
We present an annotation scheme for modality in Portuguese. In our annotation scheme we have tried to combine a more theoretical linguistic viewpoint with a practical annotation scheme that will also be useful for NLP research but is not geared towards one specific application. Our notion of modality focuses on the attitude and opinion of the speaker, or of the subject of the sentence. We validated the annotation scheme on a corpus sample of approximately 2000 sentences that we fully annotated with modal information using the MMAX2 annotation tool to produce XML annotation. We discuss our main findings and give attention to the difficult cases that we encountered, as they illustrate the complexity of modality and its interactions with other elements in the text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,489
inproceedings
skeppstedt-etal-2012-rule
Rule-based Entity Recognition and Coverage of {SNOMED} {CT} in {S}wedish Clinical Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1289/
Skeppstedt, Maria and Kvist, Maria and Dalianis, Hercules
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1250--1257
Named entity recognition of the clinical entities disorders, findings and body structures is needed for information extraction from unstructured text in health records. Clinical notes from a Swedish emergency unit were annotated and used for evaluating a rule- and terminology-based entity recognition system. This system used different preprocessing techniques for matching terms to SNOMED CT, and, one by one, four other terminologies were added. For the class body structure, the results improved with preprocessing, whereas only small improvements were shown for the classes disorder and finding. The best average results were achieved when all terminologies were used together. The entity body structure was recognised with a precision of 0.74 and a recall of 0.80, whereas lower results were achieved for disorder (precision: 0.75, recall: 0.55) and for finding (precision: 0.57, recall: 0.30). The proportion of entities containing abbreviations was higher for false negatives than for correctly recognised entities, and no entities containing more than two tokens were recognised by the system. Low recall for disorders and findings shows both that additional methods are needed for entity recognition and that there are many expressions in clinical text that are not included in SNOMED CT.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,490
inproceedings
chowdhury-lavelli-2012-evaluation
An Evaluation of the Effect of Automatic Preprocessing on Syntactic Parsing for Biomedical Relation Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1290/
Chowdhury, Md. Faisal Mahbub and Lavelli, Alberto
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
544--551
Relation extraction (RE) is an important text mining task which is the basis for further complex and advanced tasks. In state-of-the-art RE approaches, syntactic information obtained through parsing plays a crucial role. In the context of biomedical RE previous studies report usage of various automatic preprocessing techniques applied before parsing the input text. However, these studies do not specify to what extent such techniques improve RE results and to what extent they are corpus specific as well as parser specific. In this paper, we aim at addressing these issues by using various preprocessing techniques, two syntactic tree kernel based RE approaches and two different parsers on 5 widely used benchmark biomedical corpora of the protein-protein interaction (PPI) extraction task. We also provide analyses of various corpus characteristics to verify whether there are correlations between these characteristics and the RE results obtained. These analyses of corpus characteristics can be exploited to compare the 5 PPI corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,491
inproceedings
stehouwer-etal-2012-federated
Federated Search: Towards a Common Search Infrastructure
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1291/
Stehouwer, Herman and Durco, Matej and Auer, Eric and Broeder, Daan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3255--3259
Within scientific institutes there exist many language resources. These resources are often quite specialized and relatively unknown. The current infrastructural initiatives try to tackle this issue by collecting metadata about the resources and establishing centers with stable repositories to ensure the availability of the resources. It would be beneficial if the researcher could, by means of a simple query, determine which resources and which centers contain information useful to his or her research, or even work on a set of distributed resources as a virtual corpus. In this article we propose an architecture for a distributed search environment allowing researchers to perform searches in a set of distributed language resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,492
inproceedings
tolone-etal-2012-evaluating
Evaluating and improving syntactic lexica by plugging them within a parser
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1292/
Tolone, Elsa and Sagot, Beno{\^i}t and Villemonte de La Clergerie, {\'E}ric
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2742--2749
We present some evaluation results for four French syntactic lexica, obtained through their conversion to the Alexina format used by the Lefff lexicon, and their integration within the large-coverage TAG-based FRMG parser. The evaluations are run on two test corpora, annotated with two distinct annotation formats, namely EASy/Passage chunks and relations and CoNLL dependencies. The information provided by the evaluation results provide valuable feedback about the four lexica. Moreover, when coupled with error mining techniques, they allow us to identify how these lexica might be improved.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,493
inproceedings
han-etal-2012-herme
The Herme Database of Spontaneous Multimodal Human-Robot Dialogues
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1293/
Han, Jing Guang and Gilmartin, Emer and De Looze, Celine and Vaughan, Brian and Campbell, Nick
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1328--1331
This paper presents methodologies and tools for language resource (LR) construction. It describes a database of interactive speech collected over a three-month period at the Science Gallery in Dublin, where visitors could take part in a conversation with a robot. The system collected samples of informal, chatty dialogue -- normally difficult to capture under laboratory conditions for human-human dialogue, and particularly so for human-machine interaction. The conversations were based on a script followed by the robot consisting largely of social chat with some task-based elements. The interactions were audio-visually recorded using several cameras together with microphones. As part of the conversation the participants were asked to sign a consent form giving permission to use their data for human-machine interaction research. The multimodal corpus will be made available to interested researchers and the technology developed during the three-month exhibition is being extended for use in education and assisted-living applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,494
inproceedings
sanchez-cartagena-etal-2012-source
Source-Language Dictionaries Help Non-Expert Users to Enlarge Target-Language Dictionaries for Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1294/
S{\'a}nchez-Cartagena, V{\'i}ctor M. and Espl{\`a}-Gomis, Miquel and P{\'e}rez-Ortiz, Juan Antonio
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3422--3429
In this paper, a previous work on the enlargement of monolingual dictionaries of rule-based machine translation systems by non-expert users is extended to tackle the complete task of adding both source-language and target-language words to the monolingual dictionaries and the bilingual dictionary. In the original method, users validate whether some suffix variations of the word to be inserted are correct in order to find the most appropriate inflection paradigm. This method is now improved by taking advantage from the strong correlation detected between paradigms in both languages to reduce the search space of the target-language paradigm once the source-language paradigm is known. Results show that, when the source-language word has already been inserted, the system is able to more accurately predict which is the right target-language paradigm, and the number of queries posed to users is significantly reduced. Experiments also show that, when the source language and the target language are not closely related, it is only the source-language part-of-speech category, but not the rest of information provided by the source-language paradigm, which helps to correctly classify the target-language word.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,495
inproceedings
schmidt-2012-exmaralda
{EXMAR}a{LDA} and the {FOLK} tools {---} two toolsets for transcribing and annotating spoken language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1295/
Schmidt, Thomas
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
236--240
This paper presents two toolsets for transcribing and annotating spoken language: the EXMARaLDA system, developed at the University of Hamburg, and the FOLK tools, developed at the Institute for the German Language in Mannheim. Both systems are targeted at users interested in the analysis of spontaneous, multi-party discourse. Their main user community is situated in conversation analysis, pragmatics, sociolinguistics and related fields. The paper gives an overview of the individual tools of the two systems {\textemdash} the Partitur-Editor, a tool for multi-level annotation of audio or video recordings, the Corpus Manager, a tool for creating and administering corpus metadata, EXAKT, a query and analysis tool for spoken language corpora, FOLKER, a transcription editor optimized for speed and efficiency of transcription, and OrthoNormal, a tool for orthographical normalization of transcription data. It concludes with some thoughts about the integration of these tools into the larger tool landscape.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,496
inproceedings
bunt-etal-2012-iso
{ISO} 24617-2: A semantically-based standard for dialogue annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1296/
Bunt, Harry and Alexandersson, Jan and Choe, Jae-Woong and Fang, Alex Chengyu and Hasida, Koiti and Petukhova, Volha and Popescu-Belis, Andrei and Traum, David
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
430--437
This paper summarizes the latest, final version of ISO standard 24617-2 ``Semantic annotation framework, Part 2: Dialogue acts''. Compared to the preliminary version ISO DIS 24617-2:2010, described in Bunt et al. (2010), the final version additionally includes concepts for annotating rhetorical relations between dialogue units, defines a full-blown compositional semantics for the Dialogue Act Markup Language DiAML (resulting, as a side-effect, in a different treatment of functional dependence relations among dialogue acts and feedback dependence relations); and specifies an optimally transparent XML-based reference format for the representation of DiAML annotations, based on the systematic application of the notion of `ideal concrete syntax'. We describe these differences and briefly discuss the design and implementation of an incremental method for dialogue act recognition, which proves the usability of the ISO standard for automatic dialogue annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,497
inproceedings
gheorghita-pierrel-2012-towards
Towards a methodology for automatic identification of hypernyms in the definitions of large-scale dictionary
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1297/
Gheorghita, Inga and Pierrel, Jean-Marie
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2614--2618
The purpose of this paper is to automatically identify hypernyms for dictionary entries by exploring their definitions. In order to do this, we propose a weighting methodology that lets us assign a weight to each lexeme in a definition. This allows us to predict that the lexemes with the highest weights are the closest hypernyms of the defined lexeme in the dictionary. The extracted semantic relation “is-a” is used for the automatic construction of a thesaurus for image indexing and retrieval. We conclude the paper by showing some experimental results to validate our method and by presenting our methodology for automatic thesaurus construction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,498
inproceedings
konstantinova-etal-2012-review
A review corpus annotated for negation, speculation and their scope
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1298/
Konstantinova, Natalia and de Sousa, Sheila C.M. and Cruz, Noa P. and Ma{\~n}a, Manuel J. and Taboada, Maite and Mitkov, Ruslan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3190--3195
This paper presents a freely available resource for research on handling negation and speculation in review texts. The SFU Review Corpus, consisting of 400 documents of movie, book, and consumer product reviews, was annotated at the token level with negative and speculative keywords and at the sentence level with their linguistic scope. We report statistics on corpus size and the consistency of annotations. The annotated corpus will be useful in many applications, such as document mining and sentiment analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,499
inproceedings
basile-etal-2012-developing
Developing a large semantically annotated corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1299/
Basile, Valerio and Bos, Johan and Evang, Kilian and Venhuizen, Noortje
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3196--3200
What would be a good method to provide a large collection of semantically annotated texts with formal, deep semantics rather than shallow? We argue that a bootstrapping approach comprising state-of-the-art NLP tools for parsing and semantic interpretation, in combination with a wiki-like interface for collaborative annotation of experts, and a game with a purpose for crowdsourcing, are the starting ingredients for fulfilling this enterprise. The result is a semantic resource that anyone can edit and that integrates various phenomena, including predicate-argument structure, scope, tense, thematic roles, rhetorical relations and presuppositions, into a single semantic formalism: Discourse Representation Theory. Taking texts rather than sentences as the units of annotation results in deep semantic representations that incorporate discourse structure and dependencies. To manage the various (possibly conflicting) annotations provided by experts and non-experts, we introduce a method that stores ``Bits of Wisdom'' in a database as stand-off annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,500
inproceedings
kemps-snijders-etal-2012-dynamic
Dynamic web service deployment in a cloud environment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1300/
Kemps-Snijders, Marc and Brouwer, Matthijs and Kunst, Jan Pieter and Visser, Tom
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2941--2944
E-infrastructure projects such as CLARIN do not only make research data available to the scientific community, but also deliver a growing number of web services. While the standard methods for deploying web services using dedicated (virtual) servers may suffice in many circumstances, CLARIN centers are also faced with a growing number of services that are not frequently used and for which significant compute power needs to be reserved. This paper describes an alternative approach towards service deployment capable of delivering on-demand services in a workflow using cloud infrastructure capabilities. Services are stored as disk images and deployed in a workflow scenario only when needed, thus helping to reduce the overall service footprint.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,501
inproceedings
iosif-etal-2012-associative
Associative and Semantic Features Extracted From Web-Harvested Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1301/
Iosif, Elias and Giannoudaki, Maria and Fosler-Lussier, Eric and Potamianos, Alexandros
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2991--2998
We address the problem of automatic classification of associative and semantic relations between words, and particularly those that hold between nouns. Lexical relations such as synonymy, hypernymy/hyponymy, constitute the fundamental types of semantic relations. Associative relations are harder to define, since they include a long list of diverse relations, e.g., ``Cause-Effect'', ``Instrument-Agency''. Motivated by findings from the literature of psycholinguistics and corpus linguistics, we propose features that take advantage of general linguistic properties. For evaluation we merged three datasets assembled and validated by cognitive scientists. A proposed priming coefficient that measures the degree of asymmetry in the order of appearance of the words in text achieves the best classification results, followed by context-based similarity metrics. The web-based features achieve classification accuracy that exceeds 85{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,502
inproceedings
treurniet-etal-2012-collection
Collection of a corpus of {D}utch {SMS}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1302/
Treurniet, Maaske and De Clercq, Orph{\'e}e and van den Heuvel, Henk and Oostdijk, Nelleke
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2268--2273
In this paper we present the first freely available corpus of Dutch text messages containing data originating from the Netherlands and Flanders. This corpus has been collected in the framework of the SoNaR project and constitutes a viable part of this 500-million-word corpus. About 53,000 text messages were collected on a large scale, based on voluntary donations. These messages will be distributed as such. In this paper we focus on the data collection processes involved and after studying the effect of media coverage we show that especially free publicity in newspapers and on social media networks results in more contributions. All SMS are provided with metadata information. Looking at the composition of the corpus, it becomes visible that a small number of people have contributed a large amount of data, in total 272 people have contributed to the corpus during three months. The number of women contributing to the corpus is larger than the number of men, but male contributors submitted larger amounts of data. This corpus will be of paramount importance for sociolinguistic research and normalisation studies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,503
inproceedings
marzi-etal-2012-evaluating
Evaluating Hebbian Self-Organizing Memories for Lexical Representation and Access
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1303/
Marzi, Claudia and Ferro, Marcello and Caudai, Claudia and Pirrelli, Vito
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
886--893
The lexicon is the store of words in long-term memory. Any attempt at modelling lexical competence must take issues of string storage seriously. In the present contribution, we discuss a few desiderata that any biologically-inspired computational model of the mental lexicon has to meet, and detail a multi-task evaluation protocol for their assessment. The proposed protocol is applied to a novel computational architecture for lexical storage and acquisition, the ``Topological Temporal Hebbian SOMs'' (T2HSOMs), which are grids of topologically organised memory nodes with dedicated sensitivity to time-bound sequences of letters. These maps can provide a rigorous and testable conceptual framework within which to provide a comprehensive, multi-task protocol for testing the performance of Hebbian self-organising memories, and a comprehensive picture of the complex dynamics between lexical processing and the acquisition of morphological structure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,504
inproceedings
tsourakis-rayner-2012-corpus
A Corpus for a Gesture-Controlled Mobile Spoken Dialogue System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1304/
Tsourakis, Nikos and Rayner, Manny
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1315--1322
Speech and hand gestures offer the most natural modalities for everyday human-to-human interaction. The availability of diverse spoken dialogue applications and the proliferation of accelerometers on consumer electronics allow the introduction of new interaction paradigms based on speech and gestures. Little attention has been paid however to the manipulation of spoken dialogue systems through gestures. Situation-induced disabilities or real disabilities are determinant factors that motivate this type of interaction. In this paper we propose six concise and intuitively meaningful gestures that can be used to trigger the commands in any SDS. Using different machine learning techniques we achieve a classification error for the gesture patterns of less than 5{\%}, and we also compare our own set of gestures to ones proposed by users. Finally, we examine the social acceptability of the specific interaction scheme and encounter high levels of acceptance for public use.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,505
inproceedings
poch-etal-2012-towards
Towards a User-Friendly Platform for Building Language Resources based on Web Services
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1305/
Poch, Marc and Toral, Antonio and Hamon, Olivier and Quochi, Valeria and Bel, N{\'u}ria
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1156--1163
This paper presents the platform developed in the PANACEA project, a distributed factory that automates the stages involved in the acquisition, production, updating and maintenance of Language Resources required by Machine Translation and other Language Technologies. We adopt a set of tools that have been successfully used in the Bioinformatics field, they are adapted to the needs of our field and used to deploy web services, which can be combined to build more complex processing chains (workflows). This paper describes the platform and its different components (web services, registry, workflows, social network and interoperability). We demonstrate the scalability of the platform by carrying out a set of massive data experiments. Finally, a validation of the platform across a set of required criteria proves its usability for different types of users (non-technical users and providers).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,506
inproceedings
mccrae-etal-2012-collaborative
Collaborative semantic editing of linked data lexica
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1306/
McCrae, John and Montiel-Ponsoda, Elena and Cimiano, Philipp
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2619--2625
The creation of language resources is a time-consuming process requiring the efforts of many people. The use of resources collaboratively created by non-linguistists can potentially ameliorate this situation. However, such resources often contain more errors compared to resources created by experts. For the particular case of lexica, we analyse the case of Wiktionary, a resource created along wiki principles and argue that through the use of a principled lexicon model, namely Lemon, the resulting data could be better understandable to machines. We then present a platform called Lemon Source that supports the creation of linked lexical data along the Lemon model. This tool builds on the concept of a semantic wiki to enable collaborative editing of the resources by many users concurrently. In this paper, we describe the model, the tool and present an evaluation of its usability based on a small group of users.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,507
inproceedings
mendes-etal-2012-evaluating
Evaluating the Impact of Phrase Recognition on Concept Tagging
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1307/
Mendes, Pablo and Daiber, Joachim and Rajapakse, Rohana and Sasaki, Felix and Bizer, Christian
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1277--1280
We have developed DBpedia Spotlight, a flexible concept tagging system that is able to annotate entities, topics and other terms in natural language text. The system starts by recognizing phrases to annotate in the input text, and subsequently disambiguates them to a reference knowledge base extracted from Wikipedia. In this paper we evaluate the impact of the phrase recognition step on the ability of the system to correctly reproduce the annotations of a gold standard in an unsupervised setting. We argue that a combination of techniques is needed, and we evaluate a number of alternatives according to an existing evaluation set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,508
inproceedings
elbers-etal-2012-proper
Proper Language Resource Centers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1308/
Elbers, Willem and Broeder, Daan and van Uytvanck, Dieter
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3260--3263
Language resource centers allow researchers to reliably deposit their structured data together with associated meta data and run services operating on this deposited data. We are looking into possibilities to create long-term persistency of both the deposited data and the services operating on this data. Challenges, both technical and non-technical, that need to be solved are the need to replicate more than just the data, proper identification of the digital objects in a distributed environment by making use of persistent identifiers and the set-up of a proper authentication and authorization domain including the management of the authorization information on the digital objects. We acknowledge the investment that most language resource centers have made in their current infrastructure. Therefore one of the most important requirements is the loose coupling with existing infrastructures without the need to make many changes. This shift from a single language resource center into a federated environment of many language resource centers is discussed in the context of a real world center: The Language Archive supported by the Max Planck Institute for Psycholinguistics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,509
inproceedings
panunzi-etal-2012-ridire
{RIDIRE}-{CPI}: an Open Source Crawling and Processing Infrastructure for Supervised Web-Corpora Building
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1309/
Panunzi, Alessandro and Fabbri, Marco and Moneglia, Massimo and Gregori, Lorenzo and Paladini, Samuele
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2274--2279
This paper introduces the RIDIRE-CPI, an open source tool for the building of web corpora with a specific design through a targeted crawling strategy. The tool has been developed within the RIDIRE Project, which aims at creating a 2 billion word balanced web corpus for Italian. RIDIRE-CPI architecture integrates existing open source tools as well as modules developed specifically within the RIDIRE project. It consists of various components: a robust crawler (Heritrix), a user friendly web interface, several conversion and cleaning tools, an anti-duplicate filter, a language guesser, and a PoS tagger. The RIDIRE-CPI user-friendly interface is specifically intended for allowing collaborative work performance by users with low skills in web technology and text processing. Moreover, RIDIRE-CPI integrates a validation interface dedicated to the evaluation of the targeted crawling. Through the content selection, metadata assignment, and validation procedures, the RIDIRE-CPI allows the gathering of linguistic data with a supervised strategy that leads to a higher level of control of the corpus contents. The modular architecture of the infrastructure and its open-source distribution will assure the reusability of the tool for other corpus building initiatives.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,510
inproceedings
fort-etal-2012-analyzing
Analyzing the Impact of Prevalence on the Evaluation of a Manual Annotation Campaign
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1310/
Fort, Kar{\"e}n and Fran{\c{c}}ois, Claire and Galibert, Olivier and Ghribi, Maha
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1474--1480
This article details work aiming at evaluating the quality of the manual annotation of gene renaming couples in scientific abstracts, which generates sparse annotations. To evaluate these annotations, we compare the results obtained using the commonly advocated inter-annotator agreement coefficients such as S, {\ensuremath{\kappa}} and {\ensuremath{\pi}}, the less known R, the weighted coefficients {\ensuremath{\kappa_{\omega}}} and {\ensuremath{\alpha}} as well as the F-measure and the SER. We analyze to which extent they are relevant for our data. We then study the bias introduced by prevalence by changing the way the contingency table is built. We finally propose an original way to synthesize the results by computing distances between categories, based on the produced annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,511
inproceedings
rosner-etal-2012-last
{LAST} {MINUTE}: a Multimodal Corpus of Speech-based User-Companion Interactions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1311/
R{\"o}sner, Dietmar and Frommer, J{\"o}rg and Friesen, Rafael and Haase, Matthias and Lange, Julia and Otto, Mirko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2559--2566
We report about design and characteristics of the LAST MINUTE corpus. The recordings in this data collection are taken from a WOZ experiment that allows to investigate how users interact with a companion system in a mundane situation with the need for planning, re-planning and strategy change. The resulting corpus is distinguished with respect to aspects of size (e.g. number of subjects, length of sessions, number of channels, total length of records) as well as quality (e.g. balancedness of cohort, well designed scenario, standard based transcripts, psychological questionnaires, accompanying in-depth interviews).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,512
inproceedings
thuilier-danlos-2012-semantic
Semantic annotation of {F}rench corpora: animacy and verb semantic classes
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1312/
Thuilier, Juliette and Danlos, Laurence
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1533--1537
This paper presents a first corpus of French annotated for animacy and for verb semantic classes. The resource consists of 1,346 sentences extracted from three different corpora: the French Treebank (Abeill{\'e} and Barrier, 2004), the Est-R{\'e}publicain corpus (CNRTL) and the ESTER corpus (ELRA). It is a set of parsed sentences, containing a verbal head subcategorizing two complements, with annotations on the verb and on both complements, in the TIGER XML format (Mengel and Lezius, 2000). The resource was manually annotated and manually corrected by three annotators. Animacy has been annotated following the categories of Zaenen et al. (2004). Measures of inter-annotator agreement are good (Multi-pi = 0.82 and Multi-kappa = 0.86 (k = 3; n = 1346)). The inter-annotator agreements show that the annotated data are reliable for both animacy and verbal semantic classes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,513
inproceedings
wang-etal-2012-evaluation
Evaluation of Unsupervised Information Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1313/
Wang, Wei and Besan{\c{c}}on, Romaric and Ferret, Olivier and Grau, Brigitte
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
552--558
Unsupervised methods gain more and more attention nowadays in information extraction area, which allows to design more open extraction systems. In the domain of unsupervised information extraction, clustering methods are of particular importance. However, evaluating the results of clustering remains difficult at a large scale, especially in the absence of reliable reference. On the basis of our experiments on unsupervised relation extraction, we first discuss in this article how to evaluate clustering quality without a reference by relying on internal measures. Then we propose a method, supported by a dedicated annotation tool, for building a set of reference clusters of relations from a corpus. Moreover, we apply it to our experimental framework and illustrate in this way how to build a significant reference for unsupervised relation extraction, more precisely made of 80 clusters gathering more than 4,000 relation instances, in a short time. Finally, we present how such reference is exploited for the evaluation of clustering with external measures and analyze the results of the application of these measures to the clusters of relations produced by our unsupervised relation extraction system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,514
inproceedings
bouamor-etal-2012-contrastive
A contrastive review of paraphrase acquisition techniques
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1314/
Bouamor, Houda and Max, Aur{\'e}lien and Illouz, Gabriel and Vilnat, Anne
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2653--2658
This paper addresses the issue of what approach should be used for building a corpus of sententential paraphrases depending on one's requirements. Six strategies are studied: (1) multiple translations into a single language from another language; (2) multiple translations into a single language from different other languages; (3) multiple descriptions of short videos; (4) multiple subtitles for the same language; (5) headlines for similar news articles; and (6) sub-sentential paraphrasing in the context of a Web-based game. We report results on French for 50 paraphrase pairs collected for all these strategies, where corpora were manually aligned at the finest possible level to define oracle performance in terms of accessible sub-sentential paraphrases. The differences observed will be used as criteria for motivating the choice of a given approach before attempting to build a new paraphrase corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,515
inproceedings
maamouri-etal-2012-expanding
Expanding {A}rabic Treebank to Speech: Results from Broadcast News
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1315/
Maamouri, Mohamed and Bies, Ann and Kulick, Seth
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1856--1861
Treebanking a large corpus of relatively structured speech transcribed from various Arabic Broadcast News (BN) sources has allowed us to begin to address the many challenges of annotating and parsing a speech corpus in Arabic. The now completed Arabic Treebank BN corpus consists of 432,976 source tokens (517,080 tree tokens) in 120 files of manually transcribed news broadcasts. Because news broadcasts are predominantly scripted, most of the transcribed speech is in Modern Standard Arabic (MSA). As such, the lexical and syntactic structures are very similar to the MSA in written newswire data. However, because this is spoken news, cross-linguistic speech effects such as restarts, fillers, hesitations, and repetitions are common. There is also a certain amount of dialect data present in the BN corpus, from on-the-street interviews and similar informal contexts. In this paper, we describe the finished corpus and focus on some of the necessary additions to our annotation guidelines, along with some of the technical challenges of a treebanked speech corpus and an initial parsing evaluation for this data. This corpus will be available to the community in 2012 as an LDC publication.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,516
inproceedings
rodrigues-rytting-2012-typing
Typing Race Games as a Method to Create Spelling Error Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1316/
Rodrigues, Paul and Rytting, C. Anton
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3019--3024
This paper presents a method to elicit spelling error corpora using an online typing race game. After being tested for their native language, English-native participants were instructed to retype stimuli as quickly and as accurately as they could. The participants were informed that the system was keeping a score based on accuracy and speed, and that a high score would result in a position on a public scoreboard. Words were presented on the screen one at a time from a queue, and the queue was advanced by pressing the ENTER key following the stimulus. Responses were recorded and compared to the original stimuli. Responses that differed from the stimuli were considered a typographical or spelling error, and added to an error corpus. Collecting a corpus using a game offers several unique benefits. 1) A game attracts engaged participants, quickly. 2) The web-based delivery reduces the cost and decreases the time and effort of collecting the corpus. 3) Participants have fun. Spelling error corpora have been difficult and expensive to obtain for many languages and this research was performed to fill this gap. In order to evaluate the methodology, we compare our game data against three existing spelling corpora for English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,517
inproceedings
alicante-etal-2012-treebank
A treebank-based study on the influence of {I}talian word order on parsing performance
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1317/
Alicante, Anita and Bosco, Cristina and Corazza, Anna and Lavelli, Alberto
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1985--1992
The aim of this paper is to contribute to the debate on the issues raised by Morphologically Rich Languages, and more precisely to investigate, in a cross-paradigm perspective, the influence of the constituent order on the data-driven parsing of one of such languages (i.e. Italian). It shows therefore new evidence from experiments on Italian, a language characterized by a rich verbal inflection, which leads to a widespread diffusion of the pro-drop phenomenon and to a relatively free word order. The experiments are performed by using state-of-the-art data-driven parsers (i.e. MaltParser and Berkeley parser) and are based on an Italian treebank available in formats that vary according to two dimensions, i.e. the paradigm of representation (dependency vs. constituency) and the level of detail of linguistic information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,518
inproceedings
georgila-etal-2012-practical
Practical Evaluation of Human and Synthesized Speech for Virtual Human Dialogue Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1318/
Georgila, Kallirroi and Black, Alan and Sagae, Kenji and Traum, David
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3519--3526
The current practice in virtual human dialogue systems is to use professional human recordings or limited-domain speech synthesis. Both approaches lead to good performance but at a high cost. To determine the best trade-off between performance and cost, we perform a systematic evaluation of human and synthesized voices with regard to naturalness, conversational aspect, and likability. We vary the type (in-domain vs. out-of-domain), length, and content of utterances, and take into account the age and native language of raters as well as their familiarity with speech synthesis. We present detailed results from two studies, a pilot one and one run on Amazon's Mechanical Turk. Our results suggest that a professional human voice can supersede both an amateur human voice and synthesized voices. Also, a high-quality general-purpose voice or a good limited-domain voice can perform better than amateur human recordings. We do not find any significant differences between the performance of a high-quality general-purpose voice and a limited-domain voice, both trained with speech recorded by actors. As expected, the high-quality general-purpose voice is rated higher than the limited-domain voice for out-of-domain sentences and lower for in-domain sentences. There is also a trend for long or negative-content utterances to receive lower ratings.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,519
inproceedings
sato-2012-search
A Search Tool for {F}rame{N}et Constructicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1319/
Sato, Hiroaki
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1655--1658
The Berkeley FrameNet Project (BFN, \url{https://framenet.icsi.berkeley.edu/fndrupal/}) created descriptions of 73 “non-core” grammatical constructions, annotation of 50 of these constructions and about 1500 example sentences in its one year project “Beyond the Core: A Pilot Project on Cataloging Grammatical Constructions and Multiword Expressions in English” supported by the National Science Foundation. The project did not aim at building a full-fledged Construction Grammar, but the registry of English constructions created by this project, which is called Constructicon, provides a representative sample of the current coverage of English constructions (Lee-Goldman {\&} Rhodes 2009). CxN Viewer is a search tool which I have developed for Constructicon and the tool shows its typical English constructions on the web browser. CxN Viewer is a web application consisting of HTML files and JavaScript codes. The tool is a useful program that will benefit researchers working with the data annotated within the framework of BFN. CxN Viewer is a unicode-compliant application, and it can deal with constructions of other languages such as Spanish.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,520
inproceedings
weiser-watrin-2012-extraction
Extraction of unmarked quotations in Newspapers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1320/
Weiser, St{\'e}phanie and Watrin, Patrick
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
559--562
This paper presents work in progress to automatically extract quotation sentences from newspaper articles. The focus is the extraction and annotation of unmarked quotation sentences. A linguistic study shows that unmarked quotation sentences can be formalised into 16 patterns that can be used to develop an extraction grammar. The question of unmarked quotation boundaries identification is also raised as they are often ambiguous. An annotation scheme allowing to describe all the elements that can take place in a quotation sentence is defined. This paper presents the creation of two resources necessary to our system. A dictionary of verbs introducing quotations has been automatically built using a grammar of marked quotations sentences to identify the verbs able to introduce quotations. A grammar formalising the patterns of unmarked quotation sentences {\textemdash} using the tool Unitex, based on finite state machines {\textemdash} has been developed. A short experiment has been performed on two patterns and shows some promising results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,521
inproceedings
roche-2012-ontoterminology
{O}ntoterminology: How to unify terminology and ontology into a single paradigm
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1321/
Roche, Christophe
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2626--2630
Terminology is assigned to play a more and more important role in the Information Society. The need for a computational representation of terminology for IT applications raises new challenges for terminology. Ontology appears to be one of the most suitable solutions for such an issue. But an ontology is not a terminology as well as a terminology is not an ontology. Terminology, especially for technical domains, relies on two different semiotic systems: the linguistic one, which is directly linked to the “Language for Special Purposes” and the conceptual system that describes the domain knowledge. These two systems must be both separated and linked. The new paradigm of ontoterminology, i.e. a terminology whose conceptual system is a formal ontology, emphasizes the difference between the linguistic and conceptual dimensions of terminology while unifying them. A double semantic triangle is introduced in order to link terms (signifiers) to concept names on a first hand and meanings (signified) to concepts on the other hand. Such an approach allows two kinds of definition to be introduced. The definition of terms written in natural language is considered as a linguistic explanation while the definition of concepts written in a formal language is viewed as a formal specification that allows operationalization of terminology.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,522
inproceedings
scott-etal-2012-corpus
Corpus Annotation as a Scientific Task
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1322/
Scott, Donia and Barone, Rossano and Koeling, Rob
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1481--1485
Annotation studies in CL are generally unscientific: they are mostly not reproducible, make use of too few (and often non-independent) annotators and use guidelines that are often something of a moving target. Additionally, the notion of 'expert annotators' invariably means only that the annotators have linguistic training. While this can be acceptable in some special contexts, it is often far from ideal. This is particularly the case when subtle judgements are required or when, as increasingly, one is making use of corpora originating from technical texts that have been produced by, and intended to be consumed by, an audience of technical experts in the field. We outline a more rigorous approach to collecting human annotations, using as our example a study designed to capture judgements on the meaning of hedge words in medical records.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,523
inproceedings
mendes-etal-2012-dbpedia
{DB}pedia: A Multilingual Cross-domain Knowledge Base
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1323/
Mendes, Pablo and Jakob, Max and Bizer, Christian
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1813--1817
The DBpedia project extracts structured information from Wikipedia editions in 97 different languages and combines this information into a large multi-lingual knowledge base covering many specific domains and general world knowledge. The knowledge base contains textual descriptions (titles and abstracts) of concepts in up to 97 languages. It also contains structured knowledge that has been extracted from the infobox systems of Wikipedias in 15 different languages and is mapped onto a single consistent ontology by a community effort. The knowledge base can be queried using the SPARQL query language and all its data sets are freely available for download. In this paper, we describe the general DBpedia knowledge base and as well as the DBpedia data sets that specifically aim at supporting computational linguistics tasks. These task include Entity Linking, Word Sense Disambiguation, Question Answering, Slot Filling and Relationship Extraction. These use cases are outlined, pointing at added value that the structured data of DBpedia provides.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,524
inproceedings
dione-2012-morphological
A Morphological Analyzer For {W}olof Using Finite-State Techniques
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1324/
Dione, Cheikh M. Bamba
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
894--901
This paper reports on the design and implementation of a morphological analyzer for Wolof. The main motivation for this work is to obtain a linguistically motivated tool using finite-state techniques. The finite-state technology is especially attractive in dealing with human language morphologies. Finite-state transducers (FST) are fast, efficient and can be fully reversible, enabling users to perform analysis as well as generation. Hence, I use this approach to construct a new FST tool for Wolof, as a first step towards a computational grammar for the language in the Lexical Functional Grammar framework. This article focuses on the methods used to model complex morphological issues and on developing strategies to limit ambiguities. It discusses experimental evaluations conducted to assess the performance of the analyzer with respect to various statistical criteria. In particular, I also wanted to create morphosyntactically annotated resources for Wolof, obtained by automatically analyzing text corpora with a computational morphology.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,525
inproceedings
llanos-2012-designing
Designing a search interface for a {S}panish learner spoken corpus: the end-user`s evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1325/
Llanos, Leonardo Campillos
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
241--248
This article summarizes the evaluation process of an interface under development to consult an oral corpus of learners of Spanish as a Foreign Language. The databank comprises 40 interviews with students with over 9 different mother tongues collected for Error Analysis. XML mark-up is used to code the information about the learners and their errors (with an explanation), and the search tool makes it possible to look up these errors and to listen to the utterances where they appear. The formative evaluation was performed to improve the interface during the design stage by means of a questionnaire which addressed issues related to the teachers' beliefs about languages, their opinion about the Error Analysis methodology, and specific points about the interface design and usability. The results unveiled some deficiencies of the current prototype as well as the interests of the teaching professionals which should be considered to bridge the gap between technology development and its pedagogical applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,526
inproceedings
escartin-2012-design
Design and compilation of a specialized {S}panish-{G}erman parallel corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1326/
Escart{\'i}n, Carla Parra
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2199--2206
This paper discusses the design and compilation of the TRIS corpus, a specialized parallel corpus of Spanish and German texts. It will be used for phraseological research aimed at improving statistical machine translation. The corpus is based on the European database of Technical Regulations Information System (TRIS), containing 995 original documents written in German and Spanish and their translations into Spanish and German respectively. This parallel corpus is under development and the first version with 97 aligned file pairs was released in the first META-NORD upload of metadata and resources in November 2011. The second version of the corpus, described in the current paper, contains 205 file pairs which have been completely aligned at sentence level, which account for approximately 1,563,000 words and 70,648 aligned sentence pairs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,527
inproceedings
sasaki-shinnou-2012-detection
Detection of Peculiar Word Sense by Distance Metric Learning with Labeled Examples
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1327/
Sasaki, Minoru and Shinnou, Hiroyuki
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
601--604
In natural language processing, resolving peculiar word usages would be particularly useful in constructing a dictionary and dataset for word sense disambiguation. Hence, it is necessary to develop a method to detect such peculiar examples of a target word from a corpus. Note that, hereinafter, we define a peculiar example as an instance in which the target word or phrase has a new meaning. In this paper, we propose a new peculiar example detection method using distance metric learning from labeled example pairs. In this method, first, distance metric learning is performed by large margin nearest neighbor classification for the training data, and new training data points are generated using the distance metric in the original space. Then, peculiar examples are extracted using the local outlier factor, which is a density-based outlier detection method, from the updated training and test data. The efficiency of the proposed method was evaluated on an artificial dataset and the Semeval-2010 Japanese WSD task dataset. The results showed that the proposed method has the highest number of properly detected instances and the highest F-measure value. This shows that the label information of training data is effective for density-based peculiar example detection. Moreover, an experiment on outlier detection using a classification method such as SVM showed that it is difficult to apply the classification method to outlier detection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,528
inproceedings
habash-etal-2012-conventional
Conventional Orthography for Dialectal {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1328/
Habash, Nizar and Diab, Mona and Rambow, Owen
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
711--718
Dialectal Arabic (DA) refers to the day-to-day vernaculars spoken in the Arab world. DA lives side-by-side with the official language, Modern Standard Arabic (MSA). DA differs from MSA on all levels of linguistic representation, from phonology and morphology to lexicon and syntax. Unlike MSA, DA has no standard orthography since there are no Arabic dialect academies, nor is there a large edited body of dialectal literature that follows the same spelling standard. In this paper, we present CODA, a conventional orthography for dialectal Arabic; it is designed primarily for the purpose of developing computational models of Arabic dialects. We explain the design principles of CODA and provide a detailed description of its guidelines as applied to Egyptian Arabic.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,529
inproceedings
broeder-etal-2012-standardizing
Standardizing a Component Metadata Infrastructure
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1329/
Broeder, Daan and van Uytvanck, Dieter and Gavrilidou, Maria and Trippel, Thorsten and Windhouwer, Menzo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1387--1390
This paper describes the status of the standardization efforts of a Component Metadata approach for describing Language Resources with metadata. Different linguistic and Language {\&} Technology communities such as CLARIN, META-SHARE and NaLiDa use this component approach and see its standardization as a matter for cooperation with the potential to create a large interoperable domain of joint metadata. Starting with an overview of the component metadata approach together with the related semantic interoperability tools and services, such as the ISOcat data category registry and the relation registry, we explain the standardization plan and efforts for component metadata within ISO TC37/SC4. Finally, we present information about uptake and plans for the use of component metadata within the three mentioned linguistic and L{\&}T communities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,530
inproceedings
aker-etal-2012-assessing
Assessing Crowdsourcing Quality through Objective Tasks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1330/
Aker, Ahmet and El-Haj, Mahmoud and Albakour, M-Dyaa and Kruschwitz, Udo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1456--1461
The emergence of crowdsourcing as a commonly used approach to collect vast quantities of human assessments on a variety of tasks represents nothing less than a paradigm shift. This is particularly true in academic research where it has suddenly become possible to collect (high-quality) annotations rapidly without the need of an expert. In this paper we investigate factors which can influence the quality of the results obtained through Amazon`s Mechanical Turk crowdsourcing platform. We investigated the impact of different presentation methods (free text versus radio buttons), workers' base (USA versus India as the main bases of MTurk workers) and payment scale (about {\$}4, {\$}8 and {\$}10 per hour) on the quality of the results. For each run we assessed the results provided by 25 workers on a set of 10 tasks. We ran two different experiments using objective tasks: maths and general text questions. In both tasks the answers are unique, which eliminates the uncertainty usually present in subjective tasks, where it is not clear whether the unexpected answer is caused by a lack of the worker`s motivation, the worker`s interpretation of the task or genuine ambiguity. In this work we present our results comparing the influence of the different factors used. One of the interesting findings is that our results do not confirm previous studies which concluded that an increase in payment attracts more noise. We also find that the country of origin only has an impact in some of the categories and only in general text questions but there is no significant difference at the top pay.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,531
inproceedings
schulte-im-walde-etal-2012-association
Association Norms of {G}erman Noun Compounds
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1331/
Schulte im Walde, Sabine and Borgwaldt, Susanne and Jauch, Ronny
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
632--639
This paper introduces association norms of German noun compounds as a lexical semantic resource for cognitive and computational linguistics research on compositionality. Based on an existing database of German noun compounds, we collected human associations to the compounds and their constituents within a web experiment. The current study describes the collection process and a part-of-speech analysis of the association resource. In addition, we demonstrate that the associations provide insight into the semantic properties of the compounds, and perform a case study that predicts the degree of compositionality of the experiment compound nouns, relying on the norms. Applying a comparatively simple measure of association overlap, we reach a Spearman rank correlation coefficient of rs=0.5228; p{\ensuremath{<}}.000001, when comparing our predictions with human judgements.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,532
inproceedings
ambati-etal-2012-word
Word Sketches for {T}urkish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1332/
Ambati, Bharat Ram and Reddy, Siva and Kilgarriff, Adam
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2945--2950
Word sketches are one-page, automatic, corpus-based summaries of a word`s grammatical and collocational behaviour. In this paper we present word sketches for Turkish. Until now, word sketches have been generated using purpose-built finite-state grammars. Here, we use an existing dependency parser. We describe the process of collecting a 42 million word corpus, parsing it, and generating word sketches from it. We evaluate the word sketches in comparison with word sketches from a language-independent sketch grammar on an external evaluation task called topic coherence, using Turkish WordNet to derive an evaluation set of coherent topics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,533
inproceedings
bick-etal-2012-annotation
The annotation of the {C}-{ORAL}-{BRASIL} oral through the implementation of the Palavras Parser
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1333/
Bick, Eckhard and Mello, Heliana and Panunzi, Alessandro and Raso, Tommaso
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3382--3386
This article describes the morphosyntactic annotation of the C-ORAL-BRASIL speech corpus, using an adapted version of the Palavras parser. In order to achieve compatibility with annotation rules designed for standard written Portuguese, transcribed words were orthographically normalized, and the parsing lexicon augmented with speech-specific material, phonetically spelled abbreviations etc. Using a two-level annotation approach, speech flow markers like overlaps, retractions and non-verbal productions were separated from running, annotatable text. In the absence of punctuation, syntactic segmentation was achieved by exploiting prosodic break markers, enhanced by rule-based distinctions between pause and break functions. Under optimal conditions, the modified parsing system achieved correctness rates (F-scores) of 98.6{\%} for part of speech, 95{\%} for syntactic function and 99{\%} for lemmatization. Especially at the syntactic level, a clear connection between accessibility of prosodic break markers and annotation performance could be documented.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,534
inproceedings
koeva-etal-2012-bulgarian
{B}ulgarian {X}-language Parallel Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1334/
Koeva, Svetla and Stoyanova, Ivelina and Dekova, Rositsa and Rizov, Borislav and Genov, Angel
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2480--2486
The paper presents the methodology and the outcome of the compilation and the processing of the Bulgarian X-language Parallel Corpus (Bul-X-Cor) which was integrated as part of the Bulgarian National Corpus (BulNC). We focus on building representative parallel corpora which include a diversity of domains and genres, reflect the relations between Bulgarian and other languages and are consistent in terms of compilation methodology, text representation, metadata description and annotation conventions. The approaches implemented in the construction of Bul-X-Cor include using readily available text collections on the web, manual compilation (by means of Internet browsing) and preferably automatic compilation (by means of web crawling {\textemdash} general and focused). Certain levels of annotation applied to Bul-X-Cor are taken as obligatory (sentence segmentation and sentence alignment), while others depend on the availability of tools for a particular language (morpho-syntactic tagging, lemmatisation, syntactic parsing, named entity recognition, word sense disambiguation, etc.) or for a particular task (word and clause alignment). To achieve uniformity of the annotation we have either annotated raw data from scratch or transformed the already existing annotation to follow the conventions accepted for BulNC. Finally, actual uses of the corpora are presented and conclusions are drawn with respect to future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,535
inproceedings
passonneau-etal-2012-masc
The {MASC} Word Sense Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1335/
Passonneau, Rebecca J. and Baker, Collin F. and Fellbaum, Christiane and Ide, Nancy
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3025--3030
The MASC project has produced a multi-genre corpus with multiple layers of linguistic annotation, together with a sentence corpus containing WordNet 3.1 sense tags for 1000 occurrences of each of 100 words produced by multiple annotators, accompanied by in-depth inter-annotator agreement data. Here we give an overview of the contents of MASC and then focus on the word sense sentence corpus, describing the characteristics that differentiate it from other word sense corpora and detailing the inter-annotator agreement studies that have been performed on the annotations. Finally, we discuss the potential to grow the word sense sentence corpus through crowdsourcing and the plan to enhance the content and annotations of MASC through a community-based collaborative effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,536
inproceedings
mota-etal-2012-pagico
{P}{\'a}gico: Evaluating {W}ikipedia-based information retrieval in {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1336/
Mota, Cristina and Sim{\~o}es, Alberto and Freitas, Cl{\'a}udia and Costa, Lu{\'i}s and Santos, Diana
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2015--2022
How do people behave in their everyday information seeking tasks, which often involve Wikipedia? Are there systems which can help them, or do a similar job? In this paper we describe P{\'a}gico, an evaluation contest with the main purpose of fostering research in these topics. We describe its motivation, the collection of documents created, the evaluation setup, the topics chosen and their choice, the participation, as well as the measures used for evaluation and the gathered resources. The task{\textemdash}between information retrieval and question answering{\textemdash}can be further described as answering questions related to Portuguese-speaking culture in the Portuguese Wikipedia, in a number of different themes and geographic and temporal angles. This initiative allowed us to create interesting datasets and perform some assessment of Wikipedia, while also improving a public-domain open-source system for further wikipedia-based evaluations. In the paper, we provide examples of questions, we report the results obtained by the participants, and provide some discussion on complex issues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,537
inproceedings
denis-etal-2012-representation
Representation of linguistic and domain knowledge for second language learning in virtual worlds
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1337/
Denis, Alexandre and Falk, Ingrid and Gardent, Claire and Perez-Beltrachini, Laura
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2631--2635
There has been much debate, both theoretical and practical, on how to link ontologies and lexicons in natural language processing (NLP) applications. In this paper, we focus on an application in which lexicon and ontology are used to generate teaching material. We briefly describe the application (a serious game for language learning). We then zoom in on the representation and interlinking of the lexicon and of the ontology. We show how the use of existing standards and of good practice principles facilitates the design of our resources while satisfying the expressivity requirements set by natural language generation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,538
inproceedings
zuo-etal-2012-multilingual
A Multilingual Natural Stress Emotion Database
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1338/
Zuo, Xin and Li, Tian and Fung, Pascale
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1174--1178
In this paper, we describe an ongoing effort in collecting and annotating a multilingual speech database of natural stress emotion from university students. The goal is to detect natural stress emotions and study the differences in stress expression across languages, which may help psychologists in the future. We designed a common questionnaire of stress-inducing and non-stress-inducing questions in English, Mandarin and Cantonese and collected a first-ever multilingual corpus of natural stress emotion. All of the students are native speakers of the corresponding language. We asked native speakers to annotate the recordings according to the participants' self-labeled states and obtained very good kappa inter-labeler agreement. We carried out human perception tests where listeners who do not understand Chinese were asked to detect stress emotion from the Mandarin Chinese database. Compared to the annotation labels, these human-perceived emotions are of low accuracy, which shows a great need for research on natural stress detection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,539
inproceedings
aggarwal-etal-2012-twins
The Twins Corpus of Museum Visitor Questions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1339/
Aggarwal, Priti and Artstein, Ron and Gerten, Jillian and Katsamanis, Athanasios and Narayanan, Shrikanth and Nazarian, Angela and Traum, David
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2355--2361
The Twins corpus is a collection of utterances spoken in interactions with two virtual characters who serve as guides at the Museum of Science in Boston. The corpus contains about 200,000 spoken utterances from museum visitors (primarily children) as well as from trained handlers who work at the museum. In addition to speech recordings, the corpus contains the outputs of speech recognition performed at the time of utterance as well as the system interpretation of the utterances. Parts of the corpus have been manually transcribed and annotated for question interpretation. The corpus has been used for improving performance of the museum characters and for a variety of research projects, such as phonetic-based Natural Language Understanding, creation of conversational characters from text resources, dialogue policy learning, and research on patterns of user interaction. It has the potential to be used for research on children`s speech and on language used when talking to a virtual human.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,540
inproceedings
yu-etal-2012-development
Development of a Web-Scale {C}hinese Word N-gram Corpus with Parts of Speech Information
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1340/
Yu, Chi-Hsin and Tang, Yi-jie and Chen, Hsin-Hsi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
320--324
The web provides a large-scale corpus for researchers to study language usage in the real world. Developing a web-scale corpus requires not only substantial computational resources, but also great effort to handle the large variations in web texts, such as character encoding in processing Chinese web texts. In this paper, we aim to develop a web-scale Chinese word N-gram corpus with parts of speech information, called the NTU PN-Gram corpus, using the ClueWeb09 dataset. We focus on character encoding and other Chinese-specific issues. Statistics about the dataset are reported. We will make the resulting corpus a publicly available resource to boost Chinese language processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,541
inproceedings
samy-etal-2012-medical
Medical Term Extraction in an {A}rabic Medical Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1341/
Samy, Doaa and Moreno-Sandoval, Antonio and Bueno-D{\'i}az, Conchi and Garrote-Salazar, Marta and Guirao, Jos{\'e} M.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
640--645
This paper tests two different strategies for medical term extraction in an Arabic Medical Corpus. The experiments and the corpus are developed within the framework of the Multimedica project, funded by the Spanish Ministry of Science and Innovation and aiming at developing multilingual resources and tools for processing newswire texts in the Health domain. The first experiment uses a fixed list of medical terms; the second uses a list of Arabic equivalents of a very limited set of common Latin prefixes and suffixes used in medical terms. Results show that using equivalents of Latin suffixes and prefixes outperforms the fixed list. The paper starts with an introduction, followed by a description of the state of the art in the field of Arabic Medical Language Resources (LRs). The third section describes the corpus and its characteristics. The fourth and fifth sections explain the lists used and the results of the experiments carried out on a sub-corpus for evaluation. The last section analyzes the results, outlining conclusions and future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,542
inproceedings
kipp-2012-annotation
Annotation Facilities for the Reliable Analysis of Human Motion
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1342/
Kipp, Michael
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4103--4107
Human motion is challenging to analyze due to the many degrees of freedom of the human body. While the qualitative analysis of human motion lies at the core of many research fields, including multimodal communication, it is still hard to achieve reliable results when human coders transcribe motion with abstract categories. In this paper we tackle this problem in two respects. First, we provide facilities for qualitative and quantitative comparison of annotations. Second, we provide facilities for exploring highly precise recordings of human motion (motion capture) using a low-cost consumer device (Kinect). We present visualization and analysis methods, integrated in the existing ANVIL video annotation tool (Kipp 2001), and provide both a precision analysis and a ``cookbook'' for Kinect-based motion analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,543
inproceedings
fiser-etal-2012-addressing
Addressing polysemy in bilingual lexicon extraction from comparable corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1343/
Fi{\v{s}}er, Darja and Ljube{\v{s}}i{\'c}, Nikola and Kubelka, Ozren
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3031--3035
This paper presents an approach to extract translation equivalents from comparable corpora for polysemous nouns. As opposed to the standard approaches that build a single context vector for all occurrences of a given headword, we first disambiguate the headword with third-party sense taggers and then build a separate context vector for each sense of the headword. Since state-of-the-art word sense disambiguation tools are still far from perfect, we also tried to improve the results by combining the sense assignments provided by two different sense taggers. Evaluation of the results shows that we outperform the baseline (0.473) in all the settings we experimented with, even when using only one sense tagger, and that the best-performing results are indeed obtained by taking into account the intersection of both sense taggers (0.720).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,544
inproceedings
shaalan-etal-2012-arabic
{A}rabic Word Generation and Modelling for Spell Checking
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1344/
Shaalan, Khaled and Attia, Mohammed and Pecina, Pavel and Samih, Younes and van Genabith, Josef
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
719--725
Arabic is a language known for its rich and complex morphology. Although many research projects have focused on the problem of Arabic morphological analysis using different techniques and approaches, very few have addressed the issue of generation of fully inflected words for the purpose of text authoring. Available open-source spell checking resources for Arabic are too small and inadequate. Ayaspell, for example, the official resource used with OpenOffice applications, contains only 300,000 fully inflected words. We try to bridge this critical gap by creating an adequate, open-source and large-coverage word list for Arabic containing 9,000,000 fully inflected surface words. Furthermore, from a large list of valid forms and invalid forms we create a character-based tri-gram language model to approximate knowledge about permissible character clusters in Arabic, creating a novel method for detecting spelling errors. Testing of this language model gives a precision of 98.2{\%} at a recall of 100{\%}. We take our research a step further by creating a context-independent spelling correction tool using a finite-state automaton that measures the edit distance between input words and candidate corrections, the Noisy Channel Model, and knowledge-based rules. Our system performs significantly better than Hunspell in choosing the best solution, but it is still below the MS Spell Checker.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,545
inproceedings
den-etal-2012-annotation
Annotation of response tokens and their triggering expressions in {J}apanese multi-party conversations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1345/
Den, Yasuharu and Koiso, Hanae and Takanashi, Katsuya and Yoshida, Nao
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1332--1337
In this paper, we propose a new scheme for annotating response tokens (RTs) and their triggering expressions in Japanese multi-party conversations. In the proposed scheme, RTs are first identified and classified according to their forms, and then sub-classified according to their sequential positions in the discourse. To deeply study the contexts in which RTs are used, the scheme also provides procedures for annotating triggering expressions, which are considered to trigger the listener's production of RTs. RTs are classified according to whether or not there is a particular object or proposition in the speaker's turn for which the listener shows a positive or aligned stance. Triggering expressions are then identified in the speaker's turn; they include surprising facts and other newsworthy things, opinions and assessments, focus of a response to a question or repair initiation, keywords in narratives, and embedded propositions quoted from another's statement or thought, which are to be agreed upon, assessed, or noticed. As an illustrative application of our scheme, we present a preliminary analysis on the distribution of the latency of the listener's response to the triggering expression, showing how it differs according to the RT's form and position.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,546
inproceedings
heja-takacs-2012-automatically-generated
Automatically Generated Online Dictionaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1346/
H{\'e}ja, Enik{\H{o}} and Tak{\'a}cs, D{\'a}vid
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2487--2493
The aim of our software presentation is to demonstrate that corpus-driven bilingual dictionaries generated fully by automatic means are suitable for human use. The need for such dictionaries shows up specifically in the case of lesser used languages, where due to low demand it does not pay off for publishers to invest in the production of dictionaries. Previous experiments have proven that bilingual lexicons can be created by applying word alignment to parallel corpora. Such an approach, especially its corpus-driven nature, yields several advantages over more traditional approaches. Most importantly, automatically attained translation probabilities are able to guarantee that the most frequently used translations come first within an entry. However, the proposed technique has to face some difficulties as well. In particular, the scarce availability of parallel texts for medium-density languages imposes limitations on the size of the resulting dictionary. Our objective is to design and implement a dictionary building workflow and a query system that is apt to exploit the additional benefits of the method and overcome its disadvantages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,547
inproceedings
miyajima-etal-2012-method
Method for Collection of Acted Speech Using Various Situation Scripts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1347/
Miyajima, Takahiro and Kikuchi, Hideaki and Shirai, Katsuhiko and Okawa, Shigeki
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1179--1182
This study was carried out to improve the quality of acted emotional speech. In the recent paradigm shift in speech collection techniques, methods for collecting high-quality spontaneous speech have been strongly focused on. However, such methods involve various constraints, such as the difficulty of controlling utterances and sound quality. Hence, our study deliberately focuses on acted speech because of its high operability. In this paper, we propose a new method for speech collection by refining acting scripts. We compared the speech collected using our proposed method and that collected using an imitation of the legacy method implemented with traditional basic emotional words. The results show the advantage of our proposed method, i.e., the possibility of generating high F0 fluctuations in acoustical expressions, which is one of the important features of expressive speech, while ensuring that there is no decline in naturalness and other psychological features.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,548
inproceedings
broeder-etal-2012-citing
Citing on-line Language Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1348/
Broeder, Daan and van Uytvanck, Dieter and Senft, Gunter
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1391--1394
Although the possibility of referring to or citing on-line data from publications is seen, at least theoretically, as an important means to provide immediate testable proof or simple illustration of a line of reasoning, the practice has not yet become widespread, and no extensive experience has been gained about the possibilities and problems of referring to raw data-sets. This paper makes a case to investigate the possibility and need of persistent data visualization services that facilitate the inspection and evaluation of the cited data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,549
inproceedings
attia-etal-2012-automatic
Automatic Extraction and Evaluation of {A}rabic {LFG} Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1349/
Attia, Mohammed and Shaalan, Khaled and Tounsi, Lamia and van Genabith, Josef
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1947--1954
This paper presents the results of an approach to automatically acquire large-scale, probabilistic Lexical-Functional Grammar (LFG) resources for Arabic from the Penn Arabic Treebank (ATB). Our starting point is the earlier work of (Tounsi et al., 2009) on automatic LFG f(eature)-structure annotation for Arabic using the ATB. They exploit tree configuration, POS categories, functional tags, local heads and trace information to annotate nodes with LFG feature-structure equations. We utilize this annotation to automatically acquire grammatical function (dependency) based subcategorization frames and paths linking long-distance dependencies (LDDs). Many state-of-the-art treebank-based probabilistic parsing approaches are scalable and robust but often also shallow: they do not capture LDDs and represent only local information. Subcategorization frames and LDD paths can be used to recover LDDs from such parser output to capture deep linguistic information. Automatic acquisition of language resources from existing treebanks saves the time and effort involved in creating such resources by hand. Moreover, data-driven automatic acquisition naturally associates probabilistic information with subcategorization frames and LDD paths. Finally, based on the statistical distribution of LDD path types, we propose empirical bounds on traditional regular expression based functional uncertainty equations used to handle LDDs in LFG.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,550
inproceedings
constant-tellier-2012-evaluating
Evaluating the Impact of External Lexical Resources into a {CRF}-based Multiword Segmenter and Part-of-Speech Tagger
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1350/
Constant, Matthieu and Tellier, Isabelle
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
646--650
This paper evaluates the impact of external lexical resources into a CRF-based joint Multiword Segmenter and Part-of-Speech Tagger. We especially show different ways of integrating lexicon-based features in the tagging model. We display an absolute gain of 0.5{\%} in terms of f-measure. Moreover, we show that the integration of lexicon-based features significantly compensates the use of a small training corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,551
inproceedings
drury-almeida-2012-minho
The Minho Quotation Resource
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1351/
Drury, Brett and Almeida, Jos{\'e} Jo{\~a}o
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2280--2285
Direct quotations from business leaders can provide a rich sample of language which is in common use in the world of commerce. The language used by business leaders often features metaphors, euphemisms, slang, obscenities and invented words. In addition, the business lexicon is dynamic, because new words or terms will gain popularity with businessmen whilst obsolete words will exit their common vocabulary. Besides being a rich source of language, direct quotations from business leaders can have ``real world'' consequences. For example, Gerald Ratner nearly bankrupted his company with an infamous candid comment at an Institute of Directors meeting in 1993. Currently, there is no ``direct quotations from business leaders'' resource freely available to the research community. The ``Minho Quotation Resource'' captures the business lexicon with in excess of 500,000 quotations from individuals from the business world. The quotations were captured between October 2009 and April 2011. The resource is available in a searchable Lucene index and will be available for download in May 2012.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,552
inproceedings
carl-2012-translog
Translog-{II}: a Program for Recording User Activity Data for Empirical Reading and Writing Research
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1352/
Carl, Michael
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4108--4112
This paper presents a novel implementation of Translog-II. Translog-II is a Windows-oriented program to record and study reading and writing processes on a computer. In our research, it is an instrument to acquire objective, digital data of human translation processes. Like its predecessors, Translog 2000 and Translog 2006, Translog-II consists of two main components: Translog-II Supervisor and Translog-II User, which are used to create a project file, to run text production experiments (a user reads, writes or translates a text) and to replay the session. Translog produces a log file which contains all user activity data of the reading, writing, or translation session, and which can be evaluated by external tools. While there is a large body of translation process research based on Translog, this paper gives an overview of the Translog-II functions and its data visualization options.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,553
inproceedings
macwhinney-2012-morphosyntactic
Morphosyntactic Analysis of the {CHILDES} and {T}alk{B}ank Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1353/
MacWhinney, Brian
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2375--2380
This paper describes the construction and usage of the MOR and GRASP programs for part of speech tagging and syntactic dependency analysis of the corpora in the CHILDES and TalkBank databases. We have written MOR grammars for 11 languages and GRASP analyses for three. For English data, the MOR tagger reaches 98{\%} accuracy on adult corpora and 97{\%} accuracy on child language corpora. The paper discusses the construction of MOR lexicons with an emphasis on compounds and special conversational forms. The shape of rules for controlling allomorphy and morpheme concatenation are discussed. The analysis of bilingual corpora is illustrated in the context of the Cantonese-English bilingual corpora. Methods for preparing data for MOR analysis and for developing MOR grammars are discussed. We believe that recent computational work using this system is leading to significant advances in child language acquisition theory and theories of grammar identification more generally.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,554
inproceedings
rygl-horak-2012-similarity
Similarity Ranking as Attribute for Machine Learning Approach to Authorship Identification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1354/
Rygl, Jan and Hor{\'a}k, Ale{\v{s}}
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
726--729
In the authorship identification task, examples of short writings of N authors and an anonymous document written by one of these N authors are given. The task is to determine the authorship of the anonymous text. Practically all approaches solve this problem with machine learning methods. The input attributes for the machine learning process are usually formed by stylistic or grammatical properties of individual documents or a defined similarity between a document and an author. In this paper, we present the results of an experiment to extend the machine learning attributes by ranking the similarity between a document and an author: we transform the similarity between an unknown document and one of the N authors to the order in which the author is the most similar to the document in the set of N authors. The comparison of similarity probability and similarity ranking was made using the Support Vector Machines algorithm. The results show that machine learning methods perform slightly better with attributes based on the ranking of similarity than with the previously used similarity between an author and a document.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,555
inproceedings
kumar-2012-challenges
Challenges in the development of annotated corpora of computer-mediated communication in {I}ndian Languages: A Case of {H}indi
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1355/
Kumar, Ritesh
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
299--302
The present paper describes an ongoing effort to compile and annotate a large corpus of computer-mediated communication (CMC) in Hindi. It describes the process of compiling the corpus, the basic structure of the corpus, the annotation of the corpus and the challenges faced in the creation of such a corpus. It also gives a description of the technologies developed for the processing of the data, the addition of the metadata and the annotation of the corpus. Since it is a corpus of written communication, it provides quite a distinctive challenge for the annotation process. Besides POS annotation, it will also be annotated at higher levels of representation. Once completely developed it will be a very useful resource of Hindi for research in the areas of linguistics, NLP and other social sciences research related to communication, particularly computer-mediated communication. Besides this, the challenges discussed here and the way they are tackled could be taken as a model for developing corpora of computer-mediated communication in other Indian languages. Furthermore, the technologies developed for the construction of this corpus will also be made available publicly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,556
inproceedings
lenci-etal-2012-lexit
{L}ex{I}t: A Computational Resource on {I}talian Argument Structure
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1356/
Lenci, Alessandro and Lapesa, Gabriella and Bonansinga, Giulia
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3712--3718
The aim of this paper is to introduce LexIt, a computational framework for the automatic acquisition and exploration of distributional information about Italian verbs, nouns and adjectives, freely available through a web interface at the address \url{http://sesia.humnet.unipi.it/lexit}. LexIt is the first large-scale resource for Italian in which subcategorization and semantic selection properties are characterized fully on distributional ground: in the paper we describe both the process of data extraction and the evaluation of the subcategorization frames extracted with LexIt.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,557
inproceedings
fort-claveau-2012-annotating
Annotating Football Matches: Influence of the Source Medium on Manual Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1357/
Fort, Kar{\"e}n and Claveau, Vincent
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2567--2572
In this paper, we present an annotation campaign of football (soccer) matches, from a heterogeneous text corpus of both match minutes and video commentary transcripts, in French. The data, annotations and evaluation process are detailed, and the quality of the annotated corpus is discussed. In particular, we propose a new technique to better estimate the annotator agreement when few elements of a text are to be annotated. Based on that, we show how the source medium influenced the process and the quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,558
inproceedings
raso-etal-2012-c
The {C}-{ORAL}-{BRASIL} {I}: Reference Corpus for Spoken {B}razilian {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1358/
Raso, Tommaso and Mello, Heliana and Mittmann, Maryual{\^e} Malvessi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
106--113
C-ORAL-BRASIL I is a Brazilian Portuguese spontaneous speech corpus compiled following the same architecture adopted by the C-ORAL-ROM resource. The main goal is the documentation of the diaphasic and diastratic variations in Brazilian Portuguese. The diatopic variety represented is that of the metropolitan area of Belo Horizonte, capital city of Minas Gerais. Even though it was not a primary goal, a nice balance was achieved in terms of speakers' diastratic features (sex, age and school level). The corpus is entirely dedicated to informal spontaneous speech and comprises 139 informal speech texts, 208,130 words and 21:08:52 hours of recording, distributed into family/private (80{\%}) and public (20{\%}) contexts. The LR includes audio files, transcripts in text format and text-to-speech alignment (accessible with WinPitch Pro software). C-ORAL-BRASIL I also provides transcripts with Part-of-Speech annotation implemented through the parser system Palavras. Transcripts were validated regarding the proper application of transcription criteria and also for the annotation of prosodic boundaries. Some quantitative features of C-ORAL-BRASIL I in comparison with the informal C-ORAL-ROM are reported.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,559
inproceedings
aker-etal-2012-light
A light way to collect comparable corpora from the Web
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1359/
Aker, Ahmet and Kanoulas, Evangelos and Gaizauskas, Robert
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
15--20
Statistical Machine Translation (SMT) relies on the availability of rich parallel corpora. However, in the case of under-resourced languages, parallel corpora are not readily available. To overcome this problem previous work has recognized the potential of using comparable corpora as training data. The process of obtaining such data usually involves (1) downloading a separate list of documents for each language, (2) matching the documents between two languages usually by comparing the document contents, and finally (3) extracting useful data for SMT from the matched document pairs. This process requires a large amount of time and resources since a huge volume of documents needs to be downloaded to increase the chances of finding good document pairs. In this work we aim to reduce the amount of time and resources spent for tasks 1 and 2. Instead of obtaining full documents we first obtain just titles along with some meta-data such as time and date of publication. Titles can be obtained through Web Search and RSS News feed collections so that download of the full documents is not needed. We show experimentally that titles can be used to approximate the comparison between documents using full document contents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,560
inproceedings
melero-etal-2012-holaaa
Holaaa!! writin like u talk is kewl but kinda hard 4 {NLP}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1360/
Melero, Maite and Costa-Juss{\`a}, Marta R. and Domingo, Judith and Marquina, Montse and Quixal, Mart{\'i}
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3794--3800
We present work in progress aiming to build tools for the normalization of User-Generated Content (UGC). As we will see, the task requires the revisiting of the initial steps of NLP processing, since UGC (micro-blog, blog, and, generally, Web 2.0 user texts) presents a number of non-standard communicative and linguistic characteristics, and is in fact much closer to oral and colloquial language than to edited text. We present and characterize a corpus of UGC text in Spanish from three different sources: Twitter, consumer reviews and blogs. We motivate the need for UGC text normalization by analyzing the problems found when processing this type of text through a conventional language processing pipeline, particularly in the tasks of lemmatization and morphosyntactic tagging, and finally we propose a strategy for automatically normalizing UGC using a selector of correct forms on top of a pre-existing spell-checker.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,561
@inproceedings{damljanovic-etal-2012-applying,
    title = {Applying Random Indexing to Structured Data to Find Contextually Similar Words},
    author = {Damljanovi{\'c}, Danica and Kruschwitz, Udo and Albakour, M-Dyaa and Petrak, Johann and Lupu, Mihai},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1361/},
    pages = {2023--2030},
    abstract = {Language resources extracted from structured data (e.g. Linked Open Data) have already been used in various scenarios to improve conventional Natural Language Processing techniques. The meanings of words and the relations between them are made more explicit in RDF graphs, in comparison to human-readable text, and hence have a great potential to improve legacy applications. In this paper, we describe an approach that can be used to extend or clarify the semantic meaning of a word by constructing a list of contextually related terms. Our approach is based on exploiting the structure inherent in an RDF graph and then applying the methods from statistical semantics, and in particular, Random Indexing, in order to discover contextually related terms. We evaluate our approach in the domain of life science using the dataset generated with the help of domain experts from a large pharmaceutical company (AstraZeneca). They were involved in two phases: firstly, to generate a set of keywords of interest to them, and secondly to judge the set of generated contextually similar words for each keyword of interest. We compare our proposed approach, exploiting the semantic graph, with the same method applied on the human readable text extracted from the graph.}
}
@inproceedings{amoia-etal-2012-coreference,
    title = {Coreference in Spoken vs. Written Texts: a Corpus-based Analysis},
    author = {Amoia, Marilisa and Kunz, Kerstin and Lapshinova-Koltunski, Ekaterina},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1362/},
    pages = {158--164},
    abstract = {This paper describes an empirical study of coreference in spoken vs. written text. We focus on the comparison of two particular text types, interviews and popular science texts, as instances of spoken and written texts since they display quite different discourse structures. We believe in fact, that the correlation of difficulties in coreference resolution and varying discourse structures requires a deeper analysis that accounts for the diversity of coreference strategies or their sub-phenomena as indicators of text type or genre. In this work, we therefore aim at defining specific parameters that classify differences in genres of spoken and written texts such as the preferred segmentation strategy, the maximal allowed distance in or the length and size of coreference chains as well as the correlation of structural and syntactic features of coreferring expressions. We argue that a characterization of such genre dependent parameters might improve the performance of current state-of-art coreference resolution technology.}
}
@inproceedings{boeffard-etal-2012-towards,
    title = {Towards Fully Automatic Annotation of Audio Books for {TTS}},
    author = {Boeffard, Olivier and Charonnat, Laure and Maguer, S{\'e}bastien Le and Lolive, Damien},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1363/},
    pages = {975--980},
    abstract = {Building speech corpora is a first and crucial step for every text-to-speech synthesis system. Nowadays, the use of statistical models implies the use of huge sized corpora that need to be recorded, transcribed, annotated and segmented to be usable. The variety of corpora necessary for recent applications (content, style, etc.) makes the use of existing digital audio resources very attractive. Among all available resources, audiobooks, considering their quality, are interesting. Considering this framework, we propose a complete acquisition, segmentation and annotation chain for audiobooks that tends to be fully automatic. The proposed process relies on a data structure, Roots, that establishes the relations between the different annotation levels represented as sequences of items. This methodology has been applied successfully on 11 hours of speech extracted from an audiobook. A manual check, on a part of the corpus, shows the efficiency of the process.}
}
@inproceedings{lewin-etal-2012-centroids,
    title = {{C}entroids: Gold standards with distributional variation},
    author = {Lewin, Ian and Kafkas, {\c{S}}enay and Rebholz-Schuhmann, Dietrich},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1364/},
    pages = {3894--3900},
    abstract = {Motivation: Gold Standards for named entities are, ironically, not standard themselves. Some specify the “one perfect annotation”. Others specify “perfectly good alternatives”. The concept of Silver standard is relatively new. The objective is consensus rather than perfection. How should the two concepts be best represented and related? Approach: We examine several Biomedical Gold Standards and motivate a new representational format, centroids, which simply and effectively represents name distributions. We define an algorithm for finding centroids, given a set of alternative input annotations and we test the outputs quantitatively and qualitatively. We also define a metric of relatively acceptability on top of the centroid standard. Results: Precision, recall and F-scores of over 0.99 are achieved for the simple sanity check of giving the algorithm Gold Standard inputs. Qualitative analysis of the differences very often reveals errors and incompleteness in the original Gold Standard. Given automatically generated annotations, the centroids effectively represent the range of those contributions and the quality of the centroid annotations is highly competitive with the best of the contributors. Conclusion: Centroids cleanly represent alternative name variations for Silver and Gold Standards. A centroid Silver Standard is derived just like a Gold Standard, only from imperfect inputs.}
}
@inproceedings{navarretta-paggio-2012-multimodal,
    title = {Multimodal Behaviour and Feedback in Different Types of Interaction},
    author = {Navarretta, Costanza and Paggio, Patrizia},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1365/},
    pages = {2338--2342},
    abstract = {In this article, we compare feedback-related multimodal behaviours in two different types of interactions: first encounters between two participants who do not know each other in advance, and naturally-occurring conversations between two and three participants recorded at their homes. All participants are Danish native speakers. The interactions are transcribed using the same methodology, and the multimodal behaviours are annotated according to the same annotation scheme. In the study we focus on the most frequently occurring feedback expressions in the interactions and on feedback-related head movements and facial expressions. The analysis of the corpora, while confirming general facts about feedback-related head movements and facial expressions previously reported in the literature, also shows that the physical setting, the number of participants, the topics discussed, and the degree of familiarity influence the use of gesture types and the frequency of feedback-related expressions and gestures.}
}
@inproceedings{lewis-etal-2012-using,
    title = {On Using Linked Data for Language Resource Sharing in the Long Tail of the Localisation Market},
    author = {Lewis, David and O'Connor, Alexander and Zydro{\'n}, Andrzej and Sj{\"o}gren, Gerd and Choudhury, Rahzeb},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1366/},
    pages = {1403--1409},
    abstract = {Innovations in localisation have focused on the collection and leverage of language resources. However, smaller localisation clients and Language Service Providers are poorly positioned to exploit the benefits of language resource reuse in comparison to larger companies. Their low throughput of localised content means they have little opportunity to amass significant resources, such as Translation memories and Terminology databases, to reuse between jobs or to train statistical machine translation engines tailored to their domain specialisms and language pairs. We propose addressing this disadvantage via the sharing and pooling of language resources. However, the current localisation standards do not support multiparty sharing, are not well integrated with emerging language resource standards and do not address key requirements in determining ownership and license terms for resources. We survey standards and research in the area of Localisation, Language Resources and Language Technologies to leverage existing localisation standards via Linked Data methodologies. This points to the potential of using semantic representation of existing data models for localisation workflow metadata, terminology, parallel text, provenance and access control, which we illustrate with an RDF example.}
}