Dataset columns (name: type, value statistics):

entry_type: stringclasses, 4 values
citation_key: stringlengths, 10 to 110
title: stringlengths, 6 to 276
editor: stringclasses, 723 values
month: stringclasses, 69 values
year: stringdate, 1963-01-01 00:00:00 to 2022-01-01 00:00:00
address: stringclasses, 202 values
publisher: stringclasses, 41 values
url: stringlengths, 34 to 62
author: stringlengths, 6 to 2.07k
booktitle: stringclasses, 861 values
pages: stringlengths, 1 to 12
abstract: stringlengths, 302 to 2.4k
journal: stringclasses, 5 values
volume: stringclasses, 24 values
doi: stringlengths, 20 to 39
n: stringclasses, 3 values
wer: stringclasses, 1 value
uas: null
language: stringclasses, 3 values
isbn: stringclasses, 34 values
recall: null
number: stringclasses, 8 values
a: null
b: null
c: null
k: null
f1: stringclasses, 4 values
r: stringclasses, 2 values
mci: stringclasses, 1 value
p: stringclasses, 2 values
sd: stringclasses, 1 value
female: stringclasses, 0 values
m: stringclasses, 0 values
food: stringclasses, 1 value
f: stringclasses, 1 value
note: stringclasses, 20 values
__index_level_0__: int64, 22k to 106k
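The rows that follow are flattened in this column order, with null for absent fields. As a minimal sketch (assuming each row arrives as a Python dict keyed by the column names above, with None for nulls; the helper name, field subset, and example values here are illustrative, not part of the dataset), a row can be rendered back into an ACL-Anthology-style BibTeX entry:

```python
# Sketch: render one flattened dataset row into a BibTeX entry.
# Assumption: row is a dict keyed by the schema's column names,
# with None for null columns (journal, doi, wer, ...).

# Bibliographic fields worth emitting, in conventional order;
# the many null metric columns (n, wer, uas, f1, ...) are skipped.
BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "month", "year",
    "address", "publisher", "url", "pages", "abstract",
]

def to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is None:
            continue  # null columns are simply omitted from the entry
        if field == "month":
            lines.append(f"    month = {value},")  # month macros stay unquoted
        else:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

# Illustrative row, abbreviated from the second record below.
row = {
    "entry_type": "inproceedings",
    "citation_key": "federmann-2010-appraise",
    "title": "{A}ppraise: An Open-Source Toolkit for Manual "
             "Phrase-Based Evaluation of Translations",
    "author": "Federmann, Christian",
    "month": "may",
    "year": "2010",
    "journal": None,
}
print(to_bibtex(row))
```

Null-valued fields drop out of the rendered entry, mirroring how the BibTeX export omits fields a record does not define.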
@inproceedings{bosco-etal-2010-comparing,
    title = "Comparing the Influence of Different Treebank Annotations on Dependency Parsing",
    author = "Bosco, Cristina and Montemagni, Simonetta and Mazzei, Alessandro and Lombardo, Vincenzo and Dell{'}Orletta, Felice and Lenci, Alessandro and Lesmo, Leonardo and Attardi, Giuseppe and Simi, Maria and Lavelli, Alberto and Hall, Johan and Nilsson, Jens and Nivre, Joakim",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1132/",
    abstract = "As the NLP community's interest in developing treebanks for languages other than English grows, we observe efforts to evaluate the impact of the different annotation strategies used to represent particular languages or particular tasks. This paper contributes to the debate on the influence of the resources used for training and development on the performance of parsing systems. It presents a comparative analysis of the results achieved by three different dependency parsers developed and tested on two treebanks for the Italian language, namely TUT and ISST--TANL, which differ significantly in both corpus composition and the adopted dependency representations.",
}
__index_level_0__: 79,015
@inproceedings{federmann-2010-appraise,
    title = "{A}ppraise: An Open-Source Toolkit for Manual Phrase-Based Evaluation of Translations",
    author = "Federmann, Christian",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1133/",
    abstract = "We describe a focused effort to investigate the performance of phrase-based human evaluation of machine translation output, achieving high annotator agreement. We define phrase-based evaluation and describe the implementation of Appraise, a toolkit that supports the manual evaluation of machine translation results. Phrase ranking can be done using either a fine-grained six-way scoring scheme that allows annotators to differentiate between ``much better'' and ``slightly better'', or a reduced subset of ranking choices. Afterwards we discuss kappa values for both scoring models from several experiments conducted with human annotators. Our results show that phrase-based evaluation can be used for fast evaluation while obtaining significant agreement among annotators. The granularity of ranking choices should, however, not be too fine-grained, as this seems to confuse annotators and thus reduce the overall agreement. The work reported in this paper confirms previous work in the field and illustrates that the use of human evaluation in machine translation should be reconsidered. The Appraise toolkit is available as open source and can be downloaded from the author's website.",
}
__index_level_0__: 79,016
@inproceedings{zhao-etal-2010-large,
    title = "How Large a Corpus Do We Need: Statistical Method Versus Rule-based Method",
    author = "Zhao, Hai and Song, Yan and Kit, Chunyu",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1134/",
    abstract = "We investigate the impact of input data scale in corpus-based learning through a study in the style of Zipf’s law. In our research, Chinese word segmentation is chosen as the case study and a series of experiments is specially conducted for it, in which two types of segmentation techniques, statistical learning and rule-based methods, are examined. The empirical results show that a linear performance improvement in statistical learning requires at least an exponential increase in training corpus size. As for the rule-based method, an approximate negative inverse relationship between the performance and the size of the input lexicon can be observed.",
}
__index_level_0__: 79,017
@inproceedings{pedersen-etal-2010-merging,
    title = "Merging Specialist Taxonomies and Folk Taxonomies in Wordnets - A case Study of Plants, Animals and Foods in the {D}anish {W}ordnet",
    author = "Pedersen, Bolette S. and Nimb, Sanni and Braasch, Anna",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1135/",
    abstract = "In this paper we investigate the problem of merging specialist taxonomies with the more intuitive folk taxonomies in lexical-semantic resources like wordnets; and we focus in particular on plants, animals and foods. We show that a traditional dictionary like Den Danske Ordbog (DDO) survives well with several inconsistencies between different taxonomies of the vocabulary and that a restructuring is therefore necessary in order to compile a consistent wordnet resource on its basis. To this end, we apply Cruse’s definitions for hyponymies, namely those of natural kinds (such as plants and animals) on the one hand and functional kinds (such as foods) on the other. We pursue this distinction in the development of the Danish wordnet, DanNet, which has recently been built on the basis of DDO and is made open source for all potential users at www.wordnet.dk. Not surprisingly, we conclude that cultural background influences the structure of folk taxonomies quite radically, and that wordnet builders must therefore consider these carefully in order to capture their central characteristics in a systematic way.",
}
__index_level_0__: 79,018
@inproceedings{tatu-moldovan-2010-inducing,
    title = "Inducing Ontologies from Folksonomies using Natural Language Understanding",
    author = "Tatu, Marta and Moldovan, Dan",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1136/",
    abstract = "Folksonomies are unsystematic, unsophisticated collections of keywords associated by social bookmarking users to web content and, despite their inconsistency problems (typographical errors, spelling variations, use of space or punctuation as delimiters, the same tag applied in different contexts, synonymy of concepts, etc.), their popularity is increasing among Web 2.0 application developers. In this paper, in addition to eliminating folksonomic irregularities existing at the lexical, syntactic or semantic understanding levels, we propose an algorithm that automatically builds a semantic representation of the folksonomy by exploiting the tags, their social bookmarking associations (co-occurring tags) and, more importantly, the content of labeled documents. We derive the semantics of each tag, discover semantic links between the folksonomic tags and expose the underlying semantic structure of the folksonomy, thus enabling a number of information discovery and ontology-based reasoning applications.",
}
__index_level_0__: 79,019
@inproceedings{de-clercq-perez-2010-data,
    title = "Data Collection and {IPR} in Multilingual Parallel Corpora. {D}utch Parallel Corpus",
    author = "De Clercq, Orph{\'e}e and Perez, Maribel Montero",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1137/",
    abstract = "After three years of work, the Dutch Parallel Corpus (DPC) project has come to an end. The finalized corpus is a ten-million-word high-quality sentence-aligned bidirectional parallel corpus of Dutch, English and French, with Dutch as the central language. In this paper we present the corpus and try to formulate some basic data collection principles, based on the work that was carried out for the project. Building a corpus is a difficult and time-consuming task, especially when every text sample included has to be cleared of copyright. The DPC is balanced according to five text types (literature, journalistic texts, instructive texts, administrative texts and texts treating external communication) and four translation directions (Dutch-English, English-Dutch, Dutch-French and French-Dutch). All the text material was cleared of copyright. The data collection process necessitated the involvement of different text providers, which resulted in drawing up four different licence agreements. Problems such as an unknown source language, copyright issues and changes to the corpus design are discussed in close detail and illustrated with examples so as to be of help to future corpus compilers.",
}
__index_level_0__: 79,020
@inproceedings{cybulska-vossen-2010-event,
    title = "Event Models for Historical Perspectives: Determining Relations between High and Low Level Events in Text, Based on the Classification of Time, Location and Participants.",
    author = "Cybulska, Agata and Vossen, Piek",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1138/",
    abstract = "In this paper, we report on a study that was performed within the “Semantics of History” project on how descriptions of historical events are realized in different types of text and what the implications are for modeling the event information. We believe that the different historical perspectives of writers correspond to some degree with genre distinctions and correlate with variation in language use. To capture differences between event representations in diverse text types, and thus to identify relations between historical events, we defined an event model. We observed clear relations between particular parts of event descriptions - actors, time and location modifiers. Texts written shortly after an event happened use more specific and uniquely occurring event descriptions than texts describing the same events but written from a longer time perspective. We carried out some statistical corpus research to confirm this hypothesis. The ability to automatically determine relations between historical events and their sub-events over textual data, based on the relations between event participants, time markers and locations, will have important repercussions for the design of historical information retrieval systems.",
}
__index_level_0__: 79,021
@inproceedings{scheible-2010-evaluation,
    title = "An Evaluation of Predicate Argument Clustering using Pseudo-Disambiguation",
    author = "Scheible, Christian",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1139/",
    abstract = "Schulte im Walde et al. (2008) presented a novel approach to semantic verb classification. The predicate argument model (PAC) presented in their paper models selectional preferences by using soft clustering that incorporates the Expectation Maximization (EM) algorithm and the MDL principle. In this paper, I will show how the model handles the task of differentiating between plausible and implausible combinations of verbs, subcategorization frames and arguments by applying the pseudo-disambiguation evaluation method. The predicate argument clustering model will be evaluated in comparison with the latent semantic clustering model by Rooth et al. (1999). In particular, the influences of the model parameters, data frequency, and the individual components of the predicate argument model are examined. The results of these experiments show that (i) the selectional preference model overgeneralizes over arguments for the purpose of a pseudo-disambiguation task and that (ii) pseudo-disambiguation should not be used as a universal indicator for the quality of a model.",
}
__index_level_0__: 79,022
@inproceedings{venant-2010-meaning,
    title = "Meaning Representation: From Continuity to Discreteness",
    author = "Venant, Fabienne",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1140/",
    abstract = "This paper presents a geometric approach to meaning representation within the framework of continuous mathematics. Meaning representation is a central issue in Natural Language Processing, in particular for tasks like word sense disambiguation or information extraction. We want here to discuss the relevance of using continuous models in semantics. We do not want to debate the continuous or discrete nature of lexical meaning. We use continuity as a tool to access and manipulate lexical meaning. Following Victorri (1994), we assume that continuity and discreteness are not properties of phenomena but characterizations of theories upon phenomena. We briefly describe our theoretical framework, the dynamical construction of meaning (Victorri and Fuchs, 1996), then present the way we automatically build continuous semantic spaces from a graph of synonymy and discuss their relevance and utility. We also think that discreteness and continuity can collaborate. We show here how we can complete our geometric representations with information from discrete descriptions of meaning.",
}
__index_level_0__: 79,023
@inproceedings{vernier-etal-2010-learning,
    title = "Learning Subjectivity Phrases missing from Resources through a Large Set of Semantic Tests",
    author = "Vernier, Matthieu and Monceaux, Laura and Daille, B{\'e}atrice",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1141/",
    abstract = "In recent years, blogs and social networks have particularly boosted interest in opinion mining research. In order to satisfy real-scale applicative needs, a main task is to create or to enhance lexical and semantic resources on evaluative language. Classical resources in the area are mostly built for English; they contain simple opinion word markers and fall far short of covering the lexical richness of this linguistic phenomenon. In particular, infrequent subjective words, idiomatic expressions, and cultural stereotypes are missing from resources. We propose a new method, applied to French, to automatically enhance an opinion word lexicon. This learning method relies on the linguistic usage of internet users and on semantic tests to infer the degree of subjectivity of many new adjectives, nouns, verbs, noun phrases and verbal phrases which are usually overlooked by other resources. The final appraisal lexicon contains 3,456 entries. We evaluate the lexicon enhancement with and without textual context.",
}
__index_level_0__: 79,024
@inproceedings{guerini-etal-2010-evaluation,
    title = "Evaluation Metrics for Persuasive {NLP} with {G}oogle {A}d{W}ords",
    author = "Guerini, Marco and Strapparava, Carlo and Stock, Oliviero",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1142/",
    abstract = "Evaluating systems and theories about persuasion represents a bottleneck for both theoretical and applied fields: experiments are usually expensive and time consuming. Still, measuring the persuasive impact of a message is of paramount importance. In this paper we present a new ``cheap and fast'' methodology for measuring the persuasiveness of communication. This methodology allows conducting experiments with thousands of subjects for a few dollars in a few hours, by tweaking and using existing commercial tools for advertising on the web, such as Google AdWords. The central idea is to use AdWords features for defining message persuasiveness metrics. Along with a description of our approach we provide some pilot experiments, conducted both with text and image based ads, that confirm the effectiveness of our ideas. We also discuss the possible application of research on persuasive systems to Google AdWords in order to add more flexibility in the wearing out of persuasive messages.",
}
__index_level_0__: 79,025
@inproceedings{desmet-hoste-2010-towards,
    title = "Towards a Balanced Named Entity Corpus for {D}utch",
    author = "Desmet, Bart and Hoste, V{\'e}ronique",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1143/",
    abstract = "This paper introduces a new named entity corpus for Dutch. State-of-the-art named entity recognition systems require a substantial annotated corpus to be trained on. Such corpora exist for English, but not for Dutch. The STEVIN-funded SoNaR project aims to produce a diverse 500-million-word reference corpus of written Dutch, with four semantic annotation layers: named entities, coreference relations, semantic roles and spatiotemporal expressions. A 1-million-word subset will be manually corrected. Named entity annotation guidelines for Dutch were developed, adapted from the MUC and ACE guidelines. Adaptations include the annotation of products and events, the classification into subtypes, and the markup of metonymic usage. Inter-annotator agreement experiments were conducted to corroborate the reliability of the guidelines, which yielded satisfactory results (Kappa scores above 0.90). We are building a NER system, trained on the 1-million-word subcorpus, to automatically classify the remainder of the SoNaR corpus. To this end, experiments with various classification algorithms (MBL, SVM, CRF) and features have been carried out and evaluated.",
}
__index_level_0__: 79,026
@inproceedings{senay-etal-2010-transcriber,
    title = "Transcriber Driving Strategies for Transcription Aid System",
    author = "Senay, Gr{\'e}gory and Linar{\`e}s, Georges and Lecouteux, Benjamin and Oger, Stanislas and Michel, Thierry",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1144/",
    abstract = "Speech recognition technology suffers from a lack of robustness which limits its usability for fully automated speech-to-text transcription, and manual correction is generally required to obtain perfect transcripts. In this paper, we propose a general scheme for semi-automatic transcription, in which the system and the transcriptionist contribute jointly to the speech transcription. The proposed system relies on the editing of confusion networks and on reactive decoding, the latter being expected to benefit from the manual corrections and improve the error rates. In order to reduce the correction time, we evaluate various strategies aiming to guide the transcriptionist towards the critical areas of transcripts. These strategies are based on a graph-density criterion and two semantic consistency criteria, one using a corpus-based method and one a web search engine. They indicate to the user the areas which present a severe lack of understandability. We evaluate these driving strategies by simulating the correction process of French broadcast news transcriptions. Results show that interactive decoding improves correction efficiency with all driving strategies, and that semantic information must be integrated into the interactive decoding process.",
}
__index_level_0__: 79,027
@inproceedings{ljubesic-etal-2010-building,
    title = "Building a Gold Standard for Event Detection in {C}roatian",
    author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Tomislava and Boras, Damir",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1145/",
    abstract = "This paper describes the process of building a newspaper corpus annotated with events described in specific documents. The main difference to the corpora built as part of the TDT initiative is that documents are not annotated by topics, but by the specific events they describe. Additionally, documents are gathered from sixteen sources and all documents in the corpus are annotated with the corresponding event. The annotation process consists of a browsing and a searching step. Experiments are performed with a threshold that could be used in the browsing step, yielding the result of having to browse through only 1{\%} of document pairs for a 2{\%} loss of relevant document pairs. A statistical analysis of the annotated corpus is undertaken, showing that most events are described by few documents while just some events are reported by many documents. The inter-annotator agreement measures show high agreement concerning grouping documents into event clusters, but a much lower agreement concerning the number of events the documents are organized into. An initial experiment is described giving a baseline for further research on this corpus.",
}
__index_level_0__: 79,028
@inproceedings{zhang-etal-2010-improving,
    title = "Improving Domain-specific Entity Recognition with Automatic Term Recognition and Feature Extraction",
    author = "Zhang, Ziqi and Iria, Jos{\'e} and Ciravegna, Fabio",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1146/",
    abstract = "Domain-specific entity recognition often relies on domain-specific knowledge to improve system performance. However, such knowledge often suffers from limited domain portability and is expensive to build and maintain. Therefore, obtaining it in a generic and unsupervised manner would be a desirable feature for domain-specific entity recognition systems. In this paper, we introduce an approach that exploits the domain-specificity of words as a form of domain knowledge for entity-recognition tasks. Compared to prior work in the field, our approach is generic and completely unsupervised. We empirically show an improvement in entity extraction accuracy when features derived by our unsupervised method are used, with respect to baseline methods that do not employ domain knowledge. We also compared the results against those of existing systems that use manually crafted domain knowledge, and found them to be competitive.",
}
__index_level_0__: 79,029
@inproceedings{strakova-pecina-2010-czech,
    title = "{C}zech Information Retrieval with Syntax-based Language Models",
    author = "Strakov{\'a}, Jana and Pecina, Pavel",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1147/",
    abstract = "In recent years, considerable attention has been dedicated to language modeling methods in information retrieval. Although these approaches generally allow exploitation of any type of language model, most of the published experiments were conducted with a classical n-gram model, usually limited only to unigrams. A few works exploiting syntax in information retrieval can be cited in this context, but a significant contribution of syntax-based language modeling for information retrieval is yet to be proved. In this paper, we propose, implement, and evaluate an enrichment of a language model employing syntactic dependency information acquired automatically from both documents and queries. Our experiments are conducted on Czech, which is a morphologically rich language with a considerably free word order; a syntactic language model is therefore expected to contribute positively to the unigram and bigram language models based on surface word order. By testing our model on the Czech test collection from the Cross Language Evaluation Forum 2007 Ad-Hoc track, we show a positive contribution of using dependency syntax in this context.",
}
__index_level_0__: 79,030
@inproceedings{attardi-etal-2010-resource,
    title = "A Resource and Tool for Super-sense Tagging of {I}talian Texts",
    author = "Attardi, Giuseppe and Rossi, Stefano Dei and Di Pietro, Giulia and Lenci, Alessandro and Montemagni, Simonetta and Simi, Maria",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1148/",
    abstract = "A SuperSense Tagger is a tool for the automatic analysis of texts that associates with each noun, verb, adjective and adverb a semantic category within a general taxonomy. The developed tagger, based on a statistical model (Maximum Entropy), required the creation of an annotated Italian corpus to be used as a training set, and the improvement of various existing tools. The obtained results significantly improved the current state of the art for this particular task.",
}
__index_level_0__: 79,031
@inproceedings{aldezabal-etal-2010-building,
    title = "Building the {B}asque {P}rop{B}ank",
    author = "Aldezabal, Izaskun and Aranzabe, Mar{\'i}a Jes{\'u}s and D{\'i}az de Ilarraza, Arantza and Estarrona, Ainara",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel",
    booktitle = "Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)",
    month = may,
    year = "2010",
    address = "Valletta, Malta",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L10-1149/",
    abstract = "This paper presents the work that has been carried out to annotate semantic roles in the Basque Dependency Treebank (BDT). We describe the resources we have used and the way the annotation of 100 verbs has been done. We decided to follow the model proposed in the PropBank project, which has been deployed in other languages, such as Chinese, Spanish, Catalan and Russian. The resources used are: an in-house database with syntactic/semantic subcategorization frames for Basque verbs, an English-Basque verb mapping based on Levin’s classification, and the BDT itself. Detailed guidelines for human taggers have been established as a result of this annotation process. In addition, we have characterized the information associated with the semantic tag. Moreover, based on this study, we will define semi-automatic procedures that will facilitate the task of manual annotation for the rest of the verbs in the Treebank. We have also adapted AbarHitz, a tool used in the construction of the BDT, for the task of annotating semantic roles according to the proposed characterization.",
}
__index_level_0__: 79,032
inproceedings
bigi-etal-2010-automatic
Automatic Detection of Syllable Boundaries in Spontaneous Speech
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1150/
Bigi, Brigitte and Meunier, Christine and Nesterenko, Irina and Bertrand, Roxane
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the outline and performance of an automatic syllable boundary detection system. The syllabification of phonemes is performed with a rule-based system, implemented in a Java program. Phonemes are categorized into 6 classes. A set of specific rules is developed and categorized into general rules, which can be applied in all cases, and exception rules, which are applied in specific situations. These rules deal with a French spontaneous speech corpus. Moreover, the proposed phonemes, classes and rules are listed in an external configuration file of the tool (under GPL licence), which makes the tool very easy to adapt to a specific corpus: rules, phoneme encoding or phoneme classes can be added or modified simply by using a new configuration file. Finally, performances are evaluated and compared to 3 other French syllabification systems and show significant improvements. Automatic system output and the expert's syllabification are in agreement for most of the syllable boundaries in our corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,033
inproceedings
tsourakis-etal-2010-examining
Examining the Effects of Rephrasing User Input on Two Mobile Spoken Language Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1151/
Tsourakis, Nikos and Lisowska, Agnes and Rayner, Manny and Bouillon, Pierrette
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
During the construction of a spoken dialogue system, as much effort as possible is spent on improving the quality of speech recognition. However, even if an application perfectly recognizes the input, its understanding may be far from what the user originally meant. The user should be informed about what the system actually understood so that an error will not have a negative impact on the later stages of the dialogue. One important aspect that this work tries to address is the effect of presenting the system’s understanding during interaction with users. We argue that for specific kinds of applications it’s important to confirm the understanding of the system before obtaining the output. In this way the user can avoid misconceptions and problems occurring in the dialogue flow, and can enhance his confidence in the system. Nevertheless, this has an impact on the interaction, as the mental workload increases, and the user’s behavior may adapt to the system’s coverage. We focus on two applications that implement the notion of rephrasing the user’s input in different ways. Our study took place among 14 subjects who used both systems on a Nokia N810 Internet Tablet.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,034
inproceedings
reese-etal-2010-wikicorpus
{W}ikicorpus: A Word-Sense Disambiguated Multilingual {W}ikipedia Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1152/
Reese, Samuel and Boleda, Gemma and Cuadros, Montse and Padr{\'o}, Llu{\'i}s and Rigau, German
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: In its present version, it contains over 750 million words. The corpora have been annotated with lemma and part of speech information using the open source library FreeLing. Also, they have been sense annotated with the state of the art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely, WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: An open source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,035
inproceedings
dickinson-jochim-2010-evaluating
Evaluating Distributional Properties of Tagsets
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1153/
Dickinson, Markus and Jochim, Charles
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We investigate which distributional properties should be present in a tagset by examining different mappings of various current part-of-speech tagsets, looking at English, German, and Italian corpora. Given the importance of distributional information, we present a simple model for evaluating how a tagset mapping captures distribution, specifically by utilizing a notion of frames to capture the local context. In addition to an accuracy metric capturing the internal quality of a tagset, we introduce a way to evaluate the external quality of tagset mappings so that we can ensure that the mapping retains linguistically important information from the original tagset. Although most of the mappings we evaluate are motivated by linguistic concerns, we also explore an automatic, bottom-up way to define mappings, to illustrate that better distributional mappings are possible. Comparing our initial evaluations to POS tagging results, we find that more distributional tagsets can sometimes result in worse accuracy, underscoring the need to carefully define the properties of a tagset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,036
inproceedings
khalilov-etal-2010-towards
Towards Improving {E}nglish-{L}atvian Translation: A System Comparison and a New Rescoring Feature
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1154/
Khalilov, Maxim and Fonollosa, Jos{\'e} A. R. and Skadin̨a, Inguna and Br{\={a}}l{\={i}}tis, Edgars and Pretkalnin̨a, Lauma
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Translation into languages with relatively free word order has received far less attention than translation into fixed word order languages (English) or into analytical languages (Chinese). At the same time, this translation task is among the most difficult challenges for machine translation (MT), and intuitively it seems that there is room for improvement in reflecting the free word order structure of the target language. This paper presents a comparative study of two alternative approaches to statistical machine translation (SMT) and their application to the task of English-to-Latvian translation. Furthermore, a novel feature intended to reflect the relatively free word order scheme of the Latvian language is proposed and successfully applied in the n-best list rescoring step. Moving beyond the automatic translation quality scores classically presented in MT research papers, we also contribute a manual error analysis of the MT systems' output that helps to shed light on the advantages and disadvantages of the SMT systems under consideration.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,037
inproceedings
sidorov-etal-2010-english
{E}nglish-{S}panish Large Statistical Dictionary of Inflectional Forms
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1155/
Sidorov, Grigori and Barr{\'o}n-Cede{\~n}o, Alberto and Rosso, Paolo
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper presents an approach for constructing a weighted bilingual dictionary of inflectional forms using as input data a traditional bilingual dictionary, rather than parallel corpora. An algorithm is developed that generates all possible morphological (inflectional) forms and weights them using information on the distribution of the corresponding grammar sets (grammar information) in large corpora for each language. The algorithm also takes into account the compatibility of grammar sets in a language pair; for example, a verb in the past tense in language L is normally expected to be translated by a verb in the past tense in language L'. We consider the developed method universal, i.e. it can be applied to any pair of languages. The obtained dictionary is freely available. It can be used in several NLP tasks, for example, statistical machine translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,038
inproceedings
fernandez-martinez-etal-2010-hifi
{HIFI}-{AV}: An Audio-visual Corpus for Spoken Language Human-Machine Dialogue Research in {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1156/
Fern{\'a}ndez-Mart{\'i}nez, Fernando and Lucas-Cuesta, Juan Manuel and Chicote, Roberto Barra and Ferreiros, Javier and Mac{\'i}as-Guarasa, Javier
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we describe a new multi-purpose audio-visual database on the context of speech interfaces for controlling household electronic devices. The database comprises speech and video recordings of 19 speakers interacting with a HIFI audio box by means of a spoken dialogue system. Dialogue management is based on Bayesian Networks and the system is provided with contextual information handling strategies. Each speaker was requested to fulfil different sets of specific goals following predefined scenarios, according to both different complexity levels and degrees of freedom or initiative allowed to the user. Due to a careful design and its size, the recorded database allows comprehensive studies on speech recognition, speech understanding, dialogue modeling and management, microphone array based speech processing, and both speech and video-based acoustic source localisation. The database has been labelled for quality and efficiency studies on dialogue performance. The whole database has been validated through both objective and subjective tests.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,039
inproceedings
hois-2010-inter
Inter-Annotator Agreement on a Linguistic Ontology for Spatial Language - A Case Study for {GUM}-Space
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1157/
Hois, Joana
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present a case study for measuring inter-annotator agreement on a linguistic ontology for spatial language, namely the spatial extension of the Generalized Upper Model. This linguistic ontology specifies semantic categories, and it is used in dialogue systems for natural language of space in the context of human-computer interaction and spatial assistance systems. Its core representation for spatial language distinguishes how sentences can be structured and categorized into units that contribute certain meanings to the expression. This representation is here evaluated in terms of inter-annotator agreement: four uninformed annotators were instructed by a manual on how to annotate sentences with the linguistic ontology. They were assigned 200 sentences of varying length and complexity to annotate. Their resulting agreements are calculated together with our own `expert annotation' of the same sentences. We show that linguistic ontologies can be evaluated with respect to inter-annotator agreement, and we present encouraging results of calculating agreements for the spatial extension of the Generalized Upper Model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,040
inproceedings
lin-etal-2010-new
New Tools for Web-Scale N-grams
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1158/
Lin, Dekang and Church, Kenneth and Ji, Heng and Sekine, Satoshi and Yarowsky, David and Bergsma, Shane and Patil, Kailash and Pitler, Emily and Lathbury, Rachel and Rao, Vikram and Dalwani, Kapil and Narsale, Sushant
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
While the web provides a fantastic linguistic resource, collecting and processing data at web-scale is beyond the reach of most academic laboratories. Previous research has relied on search engines to collect online information, but this is hopelessly inefficient for building large-scale linguistic resources, such as lists of named-entity types or clusters of distributionally similar words. An alternative to processing web-scale text directly is to use the information provided in an N-gram corpus. An N-gram corpus is an efficient compression of large amounts of text. An N-gram corpus states how often each sequence of words (up to length N) occurs. We propose tools for working with enhanced web-scale N-gram corpora that include richer levels of source annotation, such as part-of-speech tags. We describe a new set of search tools that make use of these tags, and collectively lower the barrier for lexical learning and ambiguity resolution at web-scale. They will allow novel sources of information to be applied to long-standing natural language challenges.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,041
inproceedings
auer-etal-2010-elan
{ELAN} as Flexible Annotation Framework for Sound and Image Processing Detectors
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1159/
Auer, Eric and Russel, Albert and Sloetjes, Han and Wittenburg, Peter and Schreer, Oliver and Masnieri, S. and Schneider, Daniel and Tsch{\"o}pel, Sebastian
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Annotation of digital recordings in humanities research still is, to a large extent, a process that is performed manually. This paper describes the first pattern recognition based software components developed in the AVATecH project and their integration in the annotation tool ELAN. AVATecH (Advancing Video/Audio Technology in Humanities Research) is a project that involves two Max Planck Institutes (Max Planck Institute for Psycholinguistics, Nijmegen, Max Planck Institute for Social Anthropology, Halle) and two Fraunhofer Institutes (Fraunhofer-Institut f{\"u}r Intelligente Analyse- und Informationssysteme IAIS, Sankt Augustin, Fraunhofer Heinrich-Hertz-Institute, Berlin) and that aims to develop and implement audio and video technology for semi-automatic annotation of heterogeneous media collections as they occur in multimedia based research. The highly diverse nature of the digital recordings stored in the archives of both Max Planck Institutes poses a huge challenge to most of the existing pattern recognition solutions and is a motivation to make such technology available to researchers in the humanities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,042
inproceedings
vlaj-etal-2010-acquisition
Acquisition and Annotation of {S}lovenian {L}ombard Speech Database
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1160/
Vlaj, Damjan and Marku{\v{s}}, Aleksandra Z{\"o}gling and Kos, Marko and Ka{\v{c}}i{\v{c}}, Zdravko
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the acquisition and annotation of Slovenian Lombard Speech Database, the recording of which started in the year 2008. The database was recorded at the University of Maribor, Slovenia. The goal of this paper is to describe the hardware platform used for the acquisition of speech material, recording scenarios and tools used for the annotation of Slovenian Lombard Speech Database. The database consists of recordings of 10 Slovenian native speakers. Five males and five females were recorded. Each speaker pronounced a set of eight corpuses in two recording sessions with at least one week pause between recordings. The structure of the corpus is similar to SpeechDat II database. Approximately 30 minutes of speech material per speaker and per session was recorded. The manual annotation of speech material is performed with the LombardSpeechLabel tool developed at the University of Maribor. The speech and annotation material was saved on 10 DVDs (one speaker on one DVD).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,043
inproceedings
cartoni-lefer-2010-mulexfor
The {M}u{L}e{XF}o{R} Database: Representing Word-Formation Processes in a Multilingual Lexicographic Environment
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1161/
Cartoni, Bruno and Lefer, Marie-Aude
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper introduces a new lexicographic resource, the MuLeXFoR database, which aims to present word-formation processes in a multilingual environment. Morphological items represent a real challenge for lexicography, especially for the development of multilingual tools. Affixes can take part in several word-formation rules and, conversely, rules can be realised by means of a variety of affixes. Consequently, it is often difficult to provide enough information to help users understand the meaning(s) of an affix or become familiar with the most frequent strategies used to translate the meaning(s) conveyed by affixes. In fact, traditional dictionaries often fail to achieve this goal. The database introduced in this paper tries to take advantage of recent advances in electronic implementation and morphological theory. Word-formation is presented as a set of multilingual rules that users can access via different indexes (affixes, rules and constructed words). MuLeXFoR entries contain, among other things, detailed descriptions of morphological constraints and productivity notes, which are sorely lacking in currently available tools such as bilingual dictionaries.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,044
inproceedings
xu-klakow-2010-paragraph
Paragraph Acquisition and Selection for List Question Using {A}mazon`s {M}echanical {T}urk
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1162/
Xu, Fang and Klakow, Dietrich
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Creating annotated data that is more fine-grained than the previously available relevant document sets is important for evaluating individual components in automatic question answering systems. In this paper, we describe using Amazon's Mechanical Turk (AMT) to judge whether paragraphs in relevant documents answer the corresponding list questions in the TREC QA track 2004. Based on the AMT results, we build a collection of 1300 gold-standard supporting paragraphs for list questions. Our online experiments suggested that recruiting more people per task assures better annotation quality. In order to learn true labels from AMT annotations, we investigated three approaches on two datasets with different levels of annotation errors. Experimental studies show that the Naive Bayesian model and the EM-based GLAD model can generate results that agree highly with the gold-standard annotations, and significantly outperform the majority voting method for true label learning. We also suggest setting a higher HIT approval rate to assure better online annotation quality, which leads to better performance of the learning methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,045
inproceedings
ku-etal-2010-construction
Construction of a {C}hinese Opinion Treebank
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1163/
Ku, Lun-Wei and Huang, Ting-Hao and Chen, Hsin-Hsi
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, based on the syntactically annotated Chinese Treebank corpus, we construct the Chinese Opinion Treebank for research on opinion analysis. We introduce the tagging scheme and develop a tagging tool for constructing this corpus. Annotated samples are described. Information including opinions (yes or no), their polarities (positive, neutral or negative), and types (expression, status, or action) is defined and annotated. In addition, five structure trios are introduced according to the linguistic relations between two Chinese words. Four of them that are possibly related to opinions are also annotated in the constructed corpus to provide linguistic cues. The number of opinion sentences together with the numbers of their polarities, opinion types, and trio types are calculated. These statistics are compared and discussed. To assess the quality of the annotations in this corpus, the kappa values of the annotations are calculated. The substantial agreement between annotations ensures the applicability and reliability of the constructed corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,046
inproceedings
abekawa-etal-2010-community
Community-based Construction of Draft and Final Translation Corpus Through a Translation Hosting Site Minna no Hon`yaku ({MNH})
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1164/
Abekawa, Takeshi and Utiyama, Masao and Sumita, Eiichiro and Kageura, Kyo
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we report a way of constructing a translation corpus that contains not only source and target texts, but also draft and final versions of target texts, through the translation hosting site Minna no Hon'yaku (MNH). We made MNH publicly available in April 2009. Since then, more than 1,000 users have registered and over 3,500 documents have been translated, as of February 2010, from English to Japanese and from Japanese to English. MNH provides an integrated translation-aid environment, QRedit, which enables translators to look up high-quality dictionaries and Wikipedia as well as to search Google seamlessly. As MNH keeps translation logs, a corpus consisting of source texts, draft translations in several versions, and final translations is constructed naturally through MNH. As of 7 February, 764 documents with multiple translation versions had been accumulated, of which 110 were edited by more than one translator. This corpus can be used for self-learning by inexperienced translators on MNH, and potentially for improving machine translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,047
inproceedings
ambati-etal-2010-active
Active Learning and Crowd-Sourcing for Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1165/
Ambati, Vamshi and Vogel, Stephan and Carbonell, Jaime
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Large scale parallel data generation for new language pairs requires intensive human effort and availability of experts. It becomes immensely difficult and costly to provide Statistical Machine Translation (SMT) systems for most languages due to the paucity of expert translators to provide parallel data. Even if experts are present, it appears infeasible due to the impending costs. In this paper we propose Active Crowd Translation (ACT), a new paradigm where active learning and crowd-sourcing come together to enable automatic translation for low-resource language pairs. Active learning aims at reducing the cost of label acquisition by prioritizing the most informative data for annotation, while crowd-sourcing reduces cost by using the power of the crowds to make up for the lack of expensive language experts. We experiment and compare our active learning strategies with strong baselines and see significant improvements in translation quality. Similarly, our experiments with crowd-sourcing on Mechanical Turk have shown that it is possible to create parallel corpora using non-experts, and that with sufficient quality assurance, a translation system trained using this corpus approaches expert quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,048
inproceedings
dalianis-velupillai-2010-certain
How Certain are Clinical Assessments? Annotating {S}wedish Clinical Text for (Un)certainties, Speculations and Negations
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1166/
Dalianis, Hercules and Velupillai, Sumithra
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Clinical texts contain a large amount of information. Some of this information is embedded in contexts where e.g. a patient status is reasoned about, which may lead to a considerable amount of statements that indicate uncertainty and speculation. We believe that distinguishing such instances from factual statements will be very beneficial for automatic information extraction. We have annotated a subset of the Stockholm Electronic Patient Record Corpus for certain and uncertain expressions as well as speculative and negation keywords, with the purpose of creating a resource for the development of automatic detection of speculative language in Swedish clinical text. We have analyzed the results from the initial annotation trial by means of pairwise Inter-Annotator Agreement (IAA) measured with F-score. Our main findings are that IAA results for certain expressions and negations are very high, but for uncertain expressions and speculative keywords results are less encouraging. These instances need to be defined in more detail. With this annotation trial, we have created an important resource that can be used to further analyze the properties of speculative language in Swedish clinical text. Our intention is to release this subset to other research groups in the future after removing identifiable information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,049
inproceedings
anderson-etal-2010-base
Base Concepts in the {A}frican Languages Compared to Upper Ontologies and the {W}ord{N}et Top Ontology
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1167/
Anderson, Winston and Pretorius, Laurette and Kotz{\'e}, Albert
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Ontologies, and in particular upper ontologies, are foundational to the establishment of the Semantic Web. Upper ontologies are used as equivalence formalisms between domain specific ontologies. Multilingualism brings one of the key challenges to the development of these ontologies. Fundamental to the challenges of defining upper ontologies is the assumption that concepts are universally shared. The approach to developing linguistic ontologies aligned to upper ontologies, particularly in the non-Indo-European language families, has highlighted these challenges. Previously two approaches to developing new linguistic ontologies and the influence of these approaches on the upper ontologies have been well documented. These approaches are examined in a unique new context: the African, and in particular, the Bantu languages. In particular, we address the following two questions: Which approach is better for the alignment of the African languages to upper ontologies? Can the concepts that are linguistically shared amongst the African languages be aligned easily with upper ontology concepts claimed to be universally shared?
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,050
inproceedings
zhou-etal-2010-casia
{CASIA}-{CASSIL}: a {C}hinese Telephone Conversation Corpus in Real Scenarios with Multi-leveled Annotation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1168/
Zhou, Keyan and Li, Aijun and Yin, Zhigang and Zong, Chengqing
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
CASIA-CASSIL is a large-scale corpus base of Chinese human-human naturally-occurring telephone conversations in restricted domains. The first edition consists of 792 90-second conversations belonging to the tourism domain, which are selected from 7,639 spontaneous telephone recordings in real scenarios. The corpus is now being annotated with a wide range of linguistic and paralinguistic information at multiple levels. The annotations include Turns, Speaker Gender, Orthographic Transcription, Chinese Syllable, Chinese Phonetic Transcription, Prosodic Boundary, Stress of Sentence, Non-Speech Sounds, Voice Quality, Topic, Dialog-act and Adjacency Pairs, Ill-formedness, and Expressive Emotion as well, 13 levels in total. The rich annotation will be especially useful for studying spoken Chinese language phenomena. This paper describes the whole process of building the conversation corpus, including collecting and selecting the original data, and the follow-up processes such as transcribing, annotating, and so on. CASIA-CASSIL is being extended to a large-scale corpus base of annotated Chinese dialogs for spoken Chinese study.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,051
inproceedings
saykham-etal-2010-online
Online Temporal Language Model Adaptation for a {T}hai Broadcast News Transcription System
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1169/
Saykham, Kwanchiva and Chotimongkol, Ananlada and Wutiwiwatchai, Chai
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper investigates the effectiveness of online temporal language model adaptation when applied to a Thai broadcast news transcription task. Our adaptation scheme works as follows: first an initial language model is trained with broadcast news transcription available during the development period. Then the language model is adapted over time with more recent broadcast news transcription and online news articles available during deployment, especially the data from the same time period as the broadcast news speech being recognized. We found that the data that are closer in time are more similar in terms of perplexity and are more suitable for language model adaptation. The LMs that are adapted over time with more recent news data are better, both in terms of perplexity and WER, than the static LM trained from only the initial set of broadcast news data. Adaptation data from broadcast news transcription improved perplexity by 38.3{\%} and WER by 7.1{\%} relatively. Although online news articles achieved less improvement, they are still a useful resource, as they can be obtained automatically. Better data pre-processing techniques and data selection techniques based on text similarity could be applied to the news articles to obtain further improvement from this promising result.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,052
inproceedings
koolen-krahmer-2010-tuna
The {D}-{TUNA} Corpus: A {D}utch Dataset for the Evaluation of Referring Expression Generation Algorithms
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1170/
Koolen, Ruud and Krahmer, Emiel
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present the D-TUNA corpus, which is the first semantically annotated corpus of referring expressions in Dutch. Its primary function is to evaluate and improve the performance of REG algorithms. Such algorithms are computational models that automatically generate referring expressions by computing how a specific target can be identified to an addressee by distinguishing it from a set of distractor objects. We performed a large-scale production experiment, in which participants were asked to describe furniture items and people, and provided all descriptions with semantic information regarding the target and the distractor objects. Besides being useful for evaluating REG algorithms, the corpus addresses several other research goals. Firstly, the corpus contains both written and spoken referring expressions uttered in the direction of an addressee, which enables systematic analyses of how modality (text or speech) influences the human production of referring expressions. Secondly, due to its comparability with the English TUNA corpus, our Dutch corpus can be used to explore the differences between Dutch and English speakers regarding the production of referring expressions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,053
inproceedings
peris-etal-2010-adn
{ADN}-Classifier:Automatically Assigning Denotation Types to Nominalizations
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1171/
Peris, Aina and Taul{\'e}, Mariona and Boleda, Gemma and Rodr{\'i}guez, Horacio
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the ADN-Classifier, an Automatic classification system of Spanish Deverbal Nominalizations aimed at identifying their semantic denotation (i.e. event, result, underspecified, or lexicalized). The classifier can be used for NLP tasks such as coreference resolution or paraphrase detection. To our knowledge, the ADN-Classifier is the first effort in acquisition of denotations for nominalizations using Machine Learning. We compare the results of the classifier when using a decreasing number of Knowledge Sources, namely (1) the complete nominal lexicon (AnCora-Nom) that includes sense distinctions, (2) the nominal lexicon (AnCora-Nom) removing the sense-specific information, (3) nominalizations’ context information obtained from a treebank corpus (AnCora-Es) and (4) the combination of the previous linguistic resources. In a realistic scenario, that is, without sense distinction, the best results achieved are those taking into account the information declared in the lexicon (89.40{\%} accuracy). This shows that the lexicon contains crucial information (such as argument structure) that corpus-derived features cannot substitute for.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,054
inproceedings
antonsen-etal-2010-reusing
Reusing Grammatical Resources for New Languages
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1172/
Antonsen, Lene and Trosterud, Trond and Wiechetek, Linda
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Grammatical approaches to language technology are often considered less optimal than statistical approaches in multilingual settings, where large-scale portability becomes an important issue. The present paper argues that there is a notable gain in reusing grammatical resources when porting technology to new languages. The pivot language is North S{\'a}mi, and the paper discusses portability with respect to the closely related Lule and South S{\'a}mi, and to the unrelated Faroese and Greenlandic languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,055
inproceedings
adde-svendsen-2010-namedat
{N}ame{D}at: A Database of {E}nglish Proper Names Spoken by Native Norwegians
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1173/
Adde, Line and Svendsen, Torbj{\o}rn
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper describes the design and collection of NameDat, a database containing English proper names spoken by native Norwegians. The database was designed to cover the typical acoustic and phonetic variations that appear when Norwegians pronounce English names. The intended use of the database is acoustic and lexical modeling of these phonetic variations. The English names in the database have been enriched with several annotation tiers. The recorded names were selected according to three selection criteria: the familiarity of the name, the expected recognition performance and the coverage of non-native phonemes. The validity of the manual annotations was verified by means of an automatic recognition experiment of non-native names. The experiment showed that the use of the manual transcriptions from NameDat yields an increase in recognition performance over automatically generated transcriptions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,056
inproceedings
snoeren-etal-2010-study
The Study of Writing Variants in an Under-resourced Language: Some Evidence from Mobile N-Deletion in {L}uxembourgish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1174/
Snoeren, Natalie D. and Adda-Decker, Martine and Adda, Gilles
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The national language of the Grand-Duchy of Luxembourg, Luxembourgish, has often been characterized as one of Europe`s under-described and under-resourced languages. Because of a limited written production of Luxembourgish, poorly observed writing standardization (as compared to other languages such as English and French) and a large diversity of spoken varieties, the study of Luxembourgish poses many interesting challenges to automatic speech processing studies as well as to linguistic enquiries. In the present paper, we make use of large corpora to focus on typical writing and derived pronunciation variants in Luxembourgish, elicited by mobile -n deletion (hereafter shortened to MND). Using transcriptions from the House of Parliament debates and 10k words from news reports, we examine the reality of MND variants in written transcripts of speech. The goal of this study is manifold: quantify the potential of variation due to MND in written Luxembourgish, check the mandatory status of the MND rule and discuss the arising problems for automatic spoken Luxembourgish processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,057
inproceedings
glowinska-przepiorkowski-2010-design
The Design of Syntactic Annotation Levels in the {N}ational {C}orpus of {P}olish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1175/
G{\l}owi{\'n}ska, Katarzyna and Przepi{\'o}rkowski, Adam
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper presents the procedure of syntactic annotation of the National Corpus of Polish. The paper concentrates on the delimitation of syntactic words (analytical forms, reflexive verbs, discontinuous conjunctions, etc.) and syntactic groups, as well as on problems encountered during the annotation process: syntactic group boundaries, multiword entities, abbreviations, discontinuous phrases and syntactic words. It includes the complete tagset for syntactic words and the list of syntactic groups recognized in NKJP. The tagset defines grammatical classes and categories according to morphosyntactic and syntactic criteria only. Syntactic annotation in the National Corpus of Polish is limited to making constituents of combinations of words. Annotation depends on shallow parsing and manual post-editing of the results by annotators. Manual annotation is performed by two independent annotators, with a referee in cases of disagreement. The manually constructed grammar, both for syntactic words and for syntactic groups, is encoded in the shallow parsing system Spejd.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,058
inproceedings
kamiya-etal-2010-construction
Construction of Back-Channel Utterance Corpus for Responsive Spoken Dialogue System Development
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1176/
Kamiya, Yuki and Ohno, Tomohiro and Matsubara, Shigeki and Kashioka, Hideki
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In spoken dialogues, if a spoken dialogue system does not respond at all during user’s utterances, the user might feel uneasy because the user does not know whether or not the system has recognized the utterances. In particular, back-channel utterances, which the system outputs as voices such as “yeah” and “uh huh” in English, have important roles for a driver in in-car speech dialogues because the driver does not look towards a listener while driving. This paper describes the construction of a back-channel utterance corpus and its analysis to develop a system which can output back-channel utterances at the proper timing in responsive in-car speech dialogue. First, we constructed the back-channel utterance corpus by integrating the back-channel utterances that four subjects provided for the driver’s utterances in 60 dialogues in the CIAIR in-car speech dialogue corpus. Next, we analyzed the corpus and revealed the relation between back-channel utterance timings and information on bunsetsu, clause, pause and rate of speech. Based on the analysis, we examined the possibility of detecting back-channel utterance timings by machine learning techniques. As the result of the experiment, we confirmed that our technique achieved the same detection capability as a human.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,059
inproceedings
ruiter-etal-2010-human
Human Language Technology and Communicative Disabilities: Requirements and Possibilities for the Future
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1177/
Ruiter, Marina B. and Rietveld, Toni C. M. and Cucchiarini, Catia and Krahmer, Emiel J. and Strik, Helmer
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
For some years now, the Nederlandse Taalunie (Dutch Language Union) has been active in promoting the development of human language technology (HLT) applications for users of Dutch with communication disabilities. The reason is that HLT products and services may enable these users to improve their verbal autonomy and communication skills. We sought to identify a minimum common set of HLT resources that is required to develop tools for a wide range of communication disabilities. In order to reach this goal, we investigated the specific HLT needs of communicatively disabled people and related these needs to the underlying HLT software components. By analysing the availability and quality of these essential HLT resources, we were able to identify which of the crucial elements need further research and development to become usable for developing applications for communicatively disabled users of Dutch. The results obtained in the current survey can be used to inform policy institutions on how they can stimulate the development of HLT resources for this target group. In the current study results were obtained for Dutch, but a similar approach can also be used for other languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,060
inproceedings
burkhardt-etal-2010-database
A Database of Age and Gender Annotated Telephone Speech
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1178/
Burkhardt, Felix and Eckert, Martin and Johannsen, Wiebke and Stegmann, Joachim
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This article describes an age-annotated database of German telephone speech. All in all, 47 hours of prompted and free text was recorded, uttered by 954 paid participants in a style typical for automated voice services. The participants were selected based on an equal distribution of males and females within four age cluster groups: children, youth, adults and seniors. Within the children, gender is not distinguished, because it doesn’t have a strong enough effect on the voice. The textual content was designed to be typical for automated voice services and consists mainly of short commands, single words and numbers. An additional database consists of 659 speakers (368 female and 291 male) that called an automated voice portal server and answered freely on one of the two questions “What is your favourite dish?” and “What would you take to an island?” (island set, 422 speakers). This data might be used for out-of-domain testing. The data will be used to tune an age-detecting automated voice service and might be released to research institutes under controlled conditions as part of an open age and gender detection challenge.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,061
inproceedings
marx-schuth-2010-dutchparl
{D}utch{P}arl. The Parliamentary Documents in {D}utch
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1179/
Marx, Maarten and Schuth, Anne
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
A corpus called DutchParl is created which aims to contain all digitally available parliamentary documents written in the Dutch language. The first version of DutchParl contains documents from the parliaments of The Netherlands, Flanders and Belgium. The corpus is divided along three dimensions: per parliament, scanned or digital documents, written recordings of spoken text and others. The digital collection contains more than 800 million tokens, the scanned collection more than 1 billion. All documents are available as UTF-8 encoded XML files with extensive metadata in Dublin Core standard. The text itself is divided into pages which are divided into paragraphs. Every document, page and paragraph has a unique URN which resolves to a web page. Every page element in the XML files is connected to a facsimile image of that page in PDF or JPEG format. We created a viewer in which both versions can be inspected simultaneously. The corpus is available for download in several formats. The corpus can be used for corpus-linguistic and political science research, and is suitable for performing scalability tests for XML information systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,062
inproceedings
henrich-hinrichs-2010-gernedit
{G}ern{E}di{T} - The {G}erma{N}et Editing Tool
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1180/
Henrich, Verena and Hinrichs, Erhard
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper introduces GernEdiT (short for: GermaNet Editing Tool), a new graphical user interface for the lexicographers and developers of GermaNet, the German version of the Princeton WordNet. GermaNet is a lexical-semantic net that relates German nouns, verbs, and adjectives. Traditionally, lexicographic work for extending the coverage of GermaNet utilized the Princeton WordNet development environment of lexicographer files. Due to a complex data format and no opportunity for automatic consistency checks, this process was very error prone and time consuming. The GermaNet Editing Tool GernEdiT was developed to overcome these shortcomings. Besides supporting lexicographers in accessing, modifying, and extending GermaNet data in an easy and adaptive way, the main purposes of the GernEdiT tool are as follows: replace the standard editing tools with a more user-friendly tool, use a relational database as data storage, support export formats in the form of XML, and facilitate internal consistency and correctness of the linguistic resource. All these core functionalities of GernEdiT, along with the main aspects of the underlying lexical resource GermaNet and its current database format, are presented in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,063
inproceedings
hinrichs-etal-2010-sustainability
Sustainability of Linguistic Data and Analysis in the Context of a Collaborative e{S}cience Environment
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1181/
Hinrichs, Erhard and Henrich, Verena and Zastrow, Thomas
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
For researchers, it is especially important that primary research data are preserved and made available on a long-term basis and to a wide variety of researchers. In order to ensure long-term availability of the archived data, it is imperative that the data to be stored is conformant with standardized data formats and best practices followed by the relevant research communities. Storing, managing, and accessing such standard-conformant data requires a repository-based infrastructure. Two projects at the University of T{\"u}bingen are realizing a collaborative eScience research environment with the help of eSciDoc for the university that supports long-term preservation of all kinds of data as well as a fine-grained and contextualized data management: the INF project and the BW-eSci(T) project. The task of the infrastructure (INF) project within the collaborative research centre „Emergence of Meaning“ (SFB 833) is to guarantee the long-term availability of the SFB`s data. BW-eSci(T) is a joint project of the University of T{\"u}bingen and the Fachinformationszentrum (FIZ) Karlsruhe. The goal of this project is to develop a prototypical eScience research environment for the University of T{\"u}bingen.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,064
inproceedings
fritzinger-etal-2010-pattern
Pattern-Based Extraction of Negative Polarity Items from Dependency-Parsed Text
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1182/
Fritzinger, Fabienne and Richter, Frank and Weller, Marion
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We describe a new method for extracting Negative Polarity Item candidates (NPI candidates) from dependency-parsed German text corpora. Semi-automatic extraction of NPIs is a challenging task since NPIs do not have uniform categorical or other syntactic properties that could be used for detecting them; they occur as single words or as multi-word expressions of almost any syntactic category. Their defining property is of a semantic nature, they may only occur in the scope of negation and related semantic operators. In contrast to an earlier approach to NPI extraction from corpora, we specifically target multi-word expressions. Besides applying statistical methods to measure the co-occurrence of our candidate expressions with negative contexts, we also apply linguistic criteria in an attempt to determine to which degree they are idiomatic. Our method is evaluated by comparing the set of NPIs we found with the most comprehensive electronic list of German NPIs, which currently contains 165 entries. Our method retrieved 142 NPIs, 114 of which are new.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,065
inproceedings
gorog-vossen-2010-computer
Computer Assisted Semantic Annotation in the {D}utch{S}em{C}or Project
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1183/
G{\"o}r{\"o}g, Attila and Vossen, Piek
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The goal of this paper is to describe the annotation protocols and the Semantic Annotation Tool (SAT) used in the DutchSemCor project. The DutchSemCor project is aiming at aligning the Cornetto lexical database with the Dutch language corpus SoNaR. 250K corpus occurrences of the 3,000 most frequent and most ambiguous Dutch nouns, adjectives and verbs are being annotated manually using the SAT. This data is then used for bootstrapping 750K extra occurrences which in turn will be checked manually. Our main focus in this paper is the methodology applied in the project to attain the envisaged Inter-annotator Agreement (IA) of {\ensuremath{\geq}}80{\%}. We will also discuss one of the main objectives of DutchSemCor, i.e. to provide semantically annotated language data with high scores for quantity, quality and diversity. Sample data with high scores for these three features can yield better results for co-training WSD systems. Finally, we will take a brief look at our annotation tool.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,066
inproceedings
hinrichs-etal-2010-weblicht
{W}eb{L}icht: Web-based {LRT} Services in a Distributed e{S}cience Infrastructure
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1184/
Hinrichs, Marie and Zastrow, Thomas and Hinrichs, Erhard
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
eScience - enhanced science - is a new paradigm of scientific work and research. In the humanities, eScience environments can be helpful in establishing new workflows and lifecycles of scientific data. WebLicht is such an eScience environment for linguistic analysis, making linguistic tools and resources available network-wide. Today, most digital language resources and tools (LRT) are available by download only. This is inconvenient for someone who wants to use and combine several tools because these tools are normally not compatible with each other. To overcome this restriction, WebLicht makes the functionality of linguistic tools and the resources themselves available via the internet as web services. In WebLicht, several kinds of linguistic tools are available which cover the basic functionality of automatic and incremental creation of annotated text corpora. To make use of the more than 70 tools and resources currently available, the end user needs nothing more than just a common web browser.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,067
inproceedings
torreira-ernestus-2010-nijmegen
The Nijmegen Corpus of Casual {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1185/
Torreira, Francisco and Ernestus, Mirjam
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This article describes the preparation, recording and orthographic transcription of a new speech corpus, the Nijmegen Corpus of Casual Spanish (NCCSp). The corpus contains around 30 hours of recordings of 52 Madrid Spanish speakers engaged in conversations with friends. Casual speech was elicited during three different parts, which together provided around ninety minutes of speech from every group of speakers. While Parts 1 and 2 did not require participants to perform any specific task, in Part 3 participants negotiated a common answer to general questions about society. Information about how to obtain a copy of the corpus can be found online at \url{http://mirjamernestus.ruhosting.nl/Ernestus/NCCSp}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,068
inproceedings
santos-etal-2010-gikiclef
{G}iki{CLEF}: Crosscultural Issues in Multilingual Information Access
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1186/
Santos, Diana and Cabral, Lu{\'i}s Miguel and Forascu, Corina and Forner, Pamela and Gey, Fredric and Lamm, Katrin and Mandl, Thomas and Osenova, Petya and Pe{\~n}as, Anselmo and Rodrigo, {\'A}lvaro and Schulz, Julia and Skalban, Yvonne and Tjong Kim Sang, Erik
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we describe GikiCLEF, the first evaluation contest that, to our knowledge, was specifically designed to expose and investigate cultural and linguistic issues involved in structured multimedia collections and searching, and which was organized under the scope of CLEF 2009. GikiCLEF evaluated systems that answered hard questions for both human and machine, in ten different Wikipedia collections, namely Bulgarian, Dutch, English, German, Italian, Norwegian (Bokm{\aa}l and Nynorsk), Portuguese, Romanian, and Spanish. After a short historical introduction, we present the task, together with its motivation, and discuss how the topics were chosen. Then we provide another description from the point of view of the participants. Before disclosing their results, we introduce the SIGA management system explaining the several tasks which were carried out behind the scenes. We quantify in turn the GIRA resource, offered to the community for training and further evaluating systems with the help of the 50 topics gathered and the solutions identified. We end the paper with a critical discussion of what was learned, advancing possible ways to reuse the data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,069
inproceedings
van-uytvanck-etal-2010-virtual
Virtual Language Observatory: The Portal to the Language Resources and Technology Universe
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1187/
Van Uytvanck, Dieter and Zinn, Claus and Broeder, Daan and Wittenburg, Peter and Gardellini, Mariano
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Over the years, the field of Language Resources and Technology (LRT) has developed a tremendous amount of resources and tools. However, there is no ready-to-use map that researchers could use to gain a good overview and steadfast orientation when searching for, say corpora or software tools to support their studies. It is rather the case that information is scattered across project- or organisation-specific sites, which makes it hard if not impossible for less-experienced researchers to gather all relevant material. Clearly, the provision of metadata is central to resource and software exploration. However, in the LRT field, metadata comes in many forms, tastes and qualities, and therefore substantial harmonization and curation efforts are required to provide researchers with metadata-based guidance. To address this issue a broad alliance of LRT providers (CLARIN, the Linguist List, DOBES, DELAMAN, DFKI, ELRA) have initiated the Virtual Language Observatory portal to provide a low-barrier, easy-to-follow entry point to language resources and tools; it can be accessed via \url{http://www.clarin.eu/vlo}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,070
inproceedings
skrelin-etal-2010-fully
A Fully Annotated Corpus of {R}ussian Speech
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1188/
Skrelin, Pavel and Volskaya, Nina and Kocharov, Daniil and Evgrafova, Karina and Glotova, Olga and Evdokimova, Vera
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper introduces CORPRES {\textemdash} a fully annotated Russian speech corpus developed at the Department of Phonetics, St. Petersburg State University as a result of a three-year project. The corpus includes samples of different speaking styles produced by 4 male and 4 female speakers. Six levels of annotation cover all phonetic and prosodic information about the recorded speech data, including labels for pitch marks, phonetic events, narrow and wide phonetic transcription, orthographic and prosodic transcription. Precise phonetic transcription of the data provides an especially valuable resource for both research and development purposes. Overall corpus size is 528 458 running words and contains 60 hours of speech made up of 7.5 hours from each speaker. 40{\%} of the corpus was manually segmented and fully annotated on all six levels. 60{\%} of the corpus was partly annotated; there are labels for pitch period and phonetic event labels. Orthographic, prosodic and ideal phonetic transcription for this part was generated and stored as text files. The fully annotated part of the corpus covers all speaking styles included in the corpus and all speakers. The paper contains information about CORPRES design and annotation principles, overall data description and some speculation about possible use of the corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,071
inproceedings
spiegl-etal-2010-fau
{FAU} {IISAH} Corpus {--} A {G}erman Speech Database Consisting of Human-Machine and Human-Human Interaction Acquired by Close-Talking and Far-Distance Microphones
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1189/
Spiegl, Werner and Riedhammer, Korbinian and Steidl, Stefan and N{\"o}th, Elmar
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper the FAU IISAH corpus and its recording conditions are described: a new speech database consisting of human-machine and human-human interaction recordings. Beside close-talking microphones for the best possible audio quality of the recorded speech, far-distance microphones were used to acquire the interaction and communication. The recordings took place during a Wizard-of-Oz experiment in the intelligent, senior-adapted house (ISA-House). That is a living room with a speech controlled home assistance system for elderly people, based on a dialogue system, which is able to process spontaneous speech. During the studies in the ISA-House more than eight hours of interaction data were recorded including 3 hours and 27 minutes of spontaneous speech. The data were annotated in terms of human-human (off-talk) and human-machine (on-talk) interaction. The test persons used 2891 turns of off-talk and 2752 turns of on-talk including 1751 different words. Still in progress is the analysis under statistical and linguistical aspects.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,072
inproceedings
dukes-habash-2010-morphological
Morphological Annotation of {Q}uranic {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1190/
Dukes, Kais and Habash, Nizar
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The Quranic Arabic Corpus (\url{http://corpus.quran.com}) is an annotated linguistic resource with multiple layers of annotation including morphological segmentation, part-of-speech tagging, and syntactic analysis using dependency grammar. The motivation behind this work is to produce a resource that enables further analysis of the Quran, the 1,400 year old central religious text of Islam. This paper describes a new approach to morphological annotation of Quranic Arabic, a genre difficult to compare with other forms of Arabic. Processing Quranic Arabic is a unique challenge from a computational point of view, since the vocabulary and spelling differ from Modern Standard Arabic. The Quranic Arabic Corpus differs from other Arabic computational resources in adopting a tagset that closely follows traditional Arabic grammar. We made this decision in order to leverage a large body of existing historical grammatical analysis, and to encourage online collaborative annotation. In this paper, we discuss how the unique challenge of morphological annotation of Quranic Arabic is solved using a multi-stage approach. The different stages include automatic morphological tagging using diacritic edit-distance, two-pass manual verification, and online collaborative annotation. This process is evaluated to validate the appropriateness of the chosen methodology.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,073
inproceedings
schiel-2010-bastat
{BAS}tat: New Statistical Resources at the {B}avarian Archive for Speech Signals
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1191/
Schiel, Florian
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
A new type of language resource called `BAStat' has been released by the Bavarian Archive for Speech Signals at Ludwig Maximilians Universitaet, Munich. In contrast to primary resources like speech and text corpora BAStat comprises statistical estimates based on a number of primary spoken language resources: first and second order occurrence probability of phones, syllables and words, duration statistics, probabilities of pronunciation variants of words and probabilities of context information. Unlike other statistical speech resources BAStat is based solely on recordings of conversational German and therefore models spoken language not text. The resource consists of a bundle of 7-bit ASCII tables and matrices to maximize inter-operability between different operation systems and can be downloaded for free from the BAS web-site. This contribution gives a detailed description about the empirical basis, the contained data types, the format of the resulting statistical data, some interesting interpretations of grand figures and a brief comparison to the text-based statistical resource CELEX.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,074
inproceedings
dukes-etal-2010-syntactic
Syntactic Annotation Guidelines for the {Q}uranic {A}rabic Dependency Treebank
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1192/
Dukes, Kais and Atwell, Eric and Sharaf, Abdul-Baquee M.
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The Quranic Arabic Dependency Treebank (QADT) is part of the Quranic Arabic Corpus (\url{http://corpus.quran.com}), an online linguistic resource organized by the University of Leeds, and developed through online collaborative annotation. The website has become a popular study resource for Arabic and the Quran, and is now used by over 1,500 researchers and students daily. This paper presents the treebank, explains the choice of syntactic representation, and highlights key parts of the annotation guidelines. The text being analyzed is the Quran, the central religious book of Islam, written in classical Quranic Arabic (c. 600 CE). To date, all 77,430 words of the Quran have a manually verified morphological analysis, and syntactic analysis is in progress. 11,000 words of Quranic Arabic have been syntactically annotated as part of a gold standard treebank. Annotation guidelines are especially important to promote consistency for a corpus which is being developed through online collaboration, since often many people will participate from different backgrounds and with different levels of linguistic expertise. The treebank is available online for collaborative correction to improve accuracy, with suggestions reviewed by expert Arabic linguists, and compared against existing published books of Quranic Syntax.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,075
inproceedings
vatanen-etal-2010-language
Language Identification of Short Text Segments with N-gram Models
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1193/
Vatanen, Tommi and V{\"a}yrynen, Jaakko J. and Virpioja, Sami
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
There are many accurate methods for language identification of long text samples, but identification of very short strings still presents a challenge. This paper studies a language identification task, in which the test samples have only 5-21 characters. We compare two distinct methods that are well suited for this task: a naive Bayes classifier based on character n-gram models, and the ranking method by Cavnar and Trenkle (1994). For the n-gram models, we test several standard smoothing techniques, including the current state-of-the-art, the modified Kneser-Ney interpolation. Experiments are conducted with 281 languages using the Universal Declaration of Human Rights. Advanced language model smoothing techniques improve the identification accuracy and the respective classifiers outperform the ranking method. The higher accuracy is obtained at the cost of larger models and slower classification speed. However, there are several methods to reduce the size of an n-gram model, and our experiments with model pruning show that it provides an easy way to balance the size and the identification accuracy. We also compare the results to the language identifier in Google AJAX Language API, using a subset of 50 languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,076
inproceedings
polifroni-etal-2010-bootstrapping
Bootstrapping Named Entity Extraction for the Creation of Mobile Services
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1194/
Polifroni, Joseph and Kiss, Imre and Adler, Mark
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
As users become more accustomed to using their mobile devices to organize and schedule their lives, there is more of a demand for applications that can make that process easier. Automatic speech recognition technology has already been developed to enable essentially unlimited vocabulary in a mobile setting. Understanding the words that are spoken is the next challenge. In this paper, we describe efforts to develop a dataset and classifier to recognize named entities in speech. Using sets of both real and simulated data, in conjunction with a very large set of real named entities, we created a challenging corpus of training and test data. We use these data to develop a classifier to identify names and locations on a word-by-word basis. In this paper, we describe the process of creating the data and determining a set of features to use for named entity recognition. We report on our classification performance on these data, as well as point to future work in improving all aspects of the system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,077
inproceedings
reveil-etal-2010-improving
Improving Proper Name Recognition by Adding Automatically Learned Pronunciation Variants to the Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1195/
R{\'e}veil, Bert and Martens, Jean-Pierre and van den Heuvel, Henk
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper deals with the task of large vocabulary proper name recognition. In order to accomodate a wide diversity of possible name pronunciations (due to non-native name origins or speaker tongues) a multilingual acoustic model is combined with a lexicon comprising 3 grapheme-to-phoneme (G2P) transcriptions (from G2P transcribers for 3 different languages) and up to 4 so-called phoneme-to-phoneme (P2P) transcriptions. The latter are generated with (speaker tongue, name source) specific P2P converters that try to transform a set of baseline name transcriptions into a pool of transcription variants that lie closer to the `true' name pronunciations. The experimental results show that the generated P2P variants can be employed to improve name recognition, and that the obtained accuracy is comparable to what is achieved with typical (TY) transcriptions (made by a human expert). Furthermore, it is demonstrated that the P2P conversion can best be instantiated from a baseline transcription in the name source language, and that knowledge of the speaker tongue is an important input as well for the P2P transcription process.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,078
inproceedings
sawalha-atwell-2010-fine
Fine-Grain Morphological Analyzer and Part-of-Speech Tagger for {A}rabic Text
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1196/
Sawalha, Majdi and Atwell, Eric
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Morphological analyzers and part-of-speech taggers are key technologies for most text analysis applications. Our aim is to develop a part-of-speech tagger for annotating a wide range of Arabic text formats, domains and genres including both vowelized and non-vowelized text. Enriching the text with linguistic analysis will maximize the potential for corpus re-use in a wide range of applications. We foresee the advantage of enriching the text with part-of-speech tags of very fine-grained grammatical distinctions, which reflect expert interest in syntax and morphology, but not specific needs of end-users, because end-user applications are not known in advance. In this paper we review existing Arabic Part-of-Speech Taggers and tag-sets, and illustrate four different Arabic PoS tag-sets for a sample of Arabic text from the Quran. We describe the detailed fine-grained morphological feature tag set of Arabic, and the fine-grained Arabic morphological analyzer algorithm. We faced practical challenges in applying the morphological analyzer to the 100-million-word Web Arabic Corpus: we had to port the software to the National Grid Service, adapt the analyser to cope with spelling variations and errors, and utilise a Broad-Coverage Lexical Resource combining 23 traditional Arabic lexicons. Finally we outline the construction of a Gold Standard for comparative evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,079
inproceedings
perinan-pascual-arcas-tunez-2010-architecture
The Architecture of {F}un{G}ram{KB}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1197/
Peri{\~n}{\'a}n-Pascual, Carlos and Arcas-T{\'u}nez, Francisco
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Natural language understanding systems require a knowledge base provided with conceptual representations reflecting the structure of human beings’ cognitive system. Although surface semantics can be sufficient in some other systems, the construction of a robust knowledge base guarantees its use in most natural language processing applications, consolidating thus the concept of resource reuse. In this scenario, FunGramKB is presented as a multipurpose knowledge base whose model has been particularly designed for natural language understanding tasks. The theoretical basement of this knowledge engineering project lies in the construction of two complementary types of interlingua: the conceptual logical structure, i.e. a lexically-driven interlingua which can predict linguistic phenomena according to the Role and Reference Grammar syntax-semantics interface, and the COREL scheme, i.e. a concept-oriented interlingua on which our rule-based reasoning engine is able to make inferences effectively. The objective of the paper is to describe the different conceptual, lexical and grammatical modules which make up the architecture of FunGramKB, together with an exploratory outline on how to exploit such a knowledge base within an NLP system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,080
inproceedings
bauer-etal-2010-wtimit
{WTIMIT}: The {TIMIT} Speech Corpus Transmitted Over The 3{G} {AMR} Wideband Mobile Network
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1198/
Bauer, Patrick and Scheler, David and Fingscheidt, Tim
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In anticipation of upcoming mobile telephony services with higher speech quality, a wideband (50 Hz to 7 kHz) mobile telephony derivative of TIMIT has been recorded called WTIMIT. It opens up various scientific investigations; e.g., on speech quality and intelligibility, as well as on wideband upgrades of network-side interactive voice response (IVR) systems with retrained or bandwidth-extended acoustic models for automatic speech recognition (ASR). Wideband telephony could enable network-side speech recognition applications such as remote dictation or spelling without the need of distributed speech recognition techniques. The WTIMIT corpus was transmitted via two prepared Nokia 6220 mobile phones over T-Mobile`s 3G wideband mobile network in The Hague, The Netherlands, employing the Adaptive Multirate Wideband (AMR-WB) speech codec. The paper presents observations of transmission effects and phoneme recognition experiments. It turns out that in the case of wideband telephony, server-side ASR should not be carried out by simply decimating received signals to 8 kHz and applying existent narrowband acoustic models. Nor do we recommend just simulating the AMR-WB codec for training of wideband acoustic models. Instead, real-world wideband telephony channel data (such as WTIMIT) provides the best training material for wideband IVR systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,081
inproceedings
van-oosten-etal-2010-towards
Towards an Improved Methodology for Automated Readability Prediction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1199/
van Oosten, Philip and Tanghe, Dries and Hoste, V{\'e}ronique
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Since the first half of the 20th century, readability formulas have been widely employed to automatically predict the readability of an unseen text. In this article, the formulas and the text characteristics they are composed of are evaluated in the context of large Dutch and English corpora. We describe the behaviour of the formulas and the text characteristics by means of correlation matrices and a principal component analysis, and test the methodological validity of the formulas by means of collinearity tests. Both the correlation matrices and the principal component analysis show that the formulas described in this paper strongly correspond, regardless of the language for which they were designed. Furthermore, the collinearity test reveals shortcomings in the methodology that was used to create some of the existing readability formulas. All of this leads us to conclude that a new readability prediction method is needed. We finally make suggestions to come to a cleaner methodology and present web applications that will help us collect data to compile a new gold standard for readability prediction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,082
inproceedings
sawalha-atwell-2010-constructing
Constructing and Using Broad-coverage Lexical Resource for Enhancing Morphological Analysis of {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1200/
Sawalha, Majdi and Atwell, Eric
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Broad-coverage language resources which provide prior linguistic knowledge must improve the accuracy and the performance of NLP applications. We are constructing a broad-coverage lexical resource to improve the accuracy of morphological analyzers and part-of-speech taggers of Arabic text. Over the past 1200 years, many different kinds of Arabic language lexicons were constructed; these lexicons are different in ordering, size and aim or goal of construction. We collected 23 machine-readable lexicons, which are freely available on the web. We combined lexical resources into one large broad-coverage lexical resource by extracting information from disparate formats and merging traditional Arabic lexicons. To evaluate the broad-coverage lexical resource we computed coverage over the Qur’an, the Corpus of Contemporary Arabic, and a sample from the Arabic Web Corpus, using two methods. Counting exact word matches between test corpora and lexicon scored about 65-68{\%}; Arabic has a rich morphology with many combinations of roots, affixes and clitics, so about a third of words in the corpora did not have an exact match in the lexicon. The second approach is to compute coverage in terms of use in a lemmatizer program, which strips clitics to look for a match for the underlying lexeme; this scored about 82-85{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,083
inproceedings
urbain-etal-2010-avlaughtercycle
The {AVL}aughter{C}ycle Database
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1201/
Urbain, J{\'e}r{\^o}me and Bevacqua, Elisabetta and Dutoit, Thierry and Moinet, Alexis and Niewiadomski, Radoslaw and Pelachaud, Catherine and Picart, Benjamin and Tilmanne, Jo{\"e}lle and Wagner, Johannes
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the large audiovisual laughter database recorded as part of the AVLaughterCycle project held during the eNTERFACE’09 Workshop in Genova. 24 subjects participated. The freely available database includes audio signal and video recordings as well as facial motion tracking, thanks to markers placed on the subjects’ face. Annotations of the recordings, focusing on laughter description, are also provided and exhibited in this paper. In total, the corpus contains more than 1000 spontaneous laughs and 27 acted laughs. The laughter utterances are highly variable: the laughter duration ranges from 250ms to 82s and the sounds cover voiced vowels, breath-like expirations, hum-, hiccup- or grunt-like sounds, etc. However, as the subjects had no one to interact with, the database contains very few speech-laughs. Acted laughs tend to be longer than spontaneous ones and are more often composed of voiced vowels. The database can be useful for automatic laughter processing or cognitive science works. For the AVLaughterCycle project, it has served to animate a laughing virtual agent with an output laugh linked to the conversational partner’s input laugh.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,084
inproceedings
bouma-etal-2010-towards
Towards a Large Parallel Corpus of Cleft Constructions
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1202/
Bouma, Gerlof and {\O}vrelid, Lilja and Kuhn, Jonas
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present our efforts to create a large-scale, semi-automatically annotated parallel corpus of cleft constructions. The corpus is intended to reduce or make more effective the manual task of finding examples of clefts in a corpus. The corpus is being developed in the context of the Collaborative Research Centre SFB 632, which is a large, interdisciplinary research initiative to study information structure, at the University of Potsdam and the Humboldt University in Berlin. The corpus is based on the Europarl corpus (version 3). We show how state-of-the-art NLP tools, like POS taggers and statistical dependency parsers, may facilitate powerful and precise searches. We argue that identifying clefts using automatically added syntactic structure annotation is ultimately to be preferred over using lower level, though more robust, extraction methods like regular expression matching. An evaluation of the extraction method for one of the languages also offers some support for this method. We end the paper by discussing the resulting corpus itself. We present some examples of interesting clefts and translational counterparts from the corpus and suggest ways of exploiting our newly created resource in the cross-linguistic study of clefts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,085
inproceedings
zhang-etal-2010-random
A Random Graph Walk based Approach to Computing Semantic Relatedness Using Knowledge from {W}ikipedia
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1203/
Zhang, Ziqi and Gentile, Anna Lisa and Xia, Lei and Iria, Jos{\'e} and Chapman, Sam
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Determining semantic relatedness between words or concepts is a fundamental process to many Natural Language Processing applications. Approaches for this task typically make use of knowledge resources such as WordNet and Wikipedia. However, these approaches only make use of a limited number of features extracted from these resources, without investigating the usefulness of combining different features and their importance in the task of semantic relatedness. In this paper, we propose a random walk model based approach to measuring semantic relatedness between words or concepts, which seamlessly integrates various features extracted from Wikipedia to compute semantic relatedness. We empirically study the usefulness of these features in the task, and prove that by combining multiple features that are weighed according to their importance, our system obtains competitive results, and outperforms other systems on some datasets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,086
inproceedings
lardilleux-etal-2010-bilingual
Bilingual Lexicon Induction: Effortless Evaluation of Word Alignment Tools and Production of Resources for Improbable Language Pairs
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1204/
Lardilleux, Adrien and Gosme, Julien and Lepage, Yves
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present a simple protocol to evaluate word aligners on bilingual lexicon induction tasks from parallel corpora. Rather than resorting to gold standards, it relies on a comparison of the outputs of word aligners against a reference bilingual lexicon. The quality of this reference bilingual lexicon does not need to be particularly high, because evaluation quality is ensured by systematically filtering this reference lexicon with the parallel corpus the word aligners are trained on. We perform a comparison of three freely available word aligners on numerous language pairs from the Bible parallel corpus (Resnik et al., 1999): MGIZA++ (Gao and Vogel, 2008), BerkeleyAligner (Liang et al., 2006), and Anymalign (Lardilleux and Lepage, 2009). We then select the most appropriate one to produce bilingual lexicons for all language pairs of this corpus. These involve Cebuano, Chinese, Danish, English, Finnish, French, Greek, Indonesian, Latin, Spanish, Swedish, and Vietnamese. The 66 resulting lexicons are made freely available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,087
inproceedings
schraagen-bloothooft-2010-evaluating
Evaluating Repetitions, or how to Improve your Multilingual {ASR} System by doing Nothing
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1205/
Schraagen, Marijn and Bloothooft, Gerrit
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Repetition is a common concept in human communication. This paper investigates possible benefits of repetition for automatic speech recognition under controlled conditions. Testing is performed on the newly created Autonomata TOO speech corpus, consisting of multilingual names for Points-Of-Interest as spoken by both native and non-native speakers. During corpus recording, ASR was being performed under baseline conditions using a Nuance Vocon 3200 system. On failed recognition, additional attempts for the same utterances were added to the corpus. Substantial improvements in recognition results are shown for all categories of speakers and utterances, even if speakers did not noticeably alter their previously misrecognized pronunciation. A categorization is proposed for various types of differences between utterance realisations. The number of attempts and the pronunciation of an utterance over multiple attempts, compared to both previous attempts and the reference pronunciation, are analyzed for difference type and frequency. Variables such as the native language of the speaker and the languages in the lexicon are taken into account. Possible implications for ASR research are discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,088
inproceedings
utsumi-2010-exploring
Exploring the Relationship between Semantic Spaces and Semantic Relations
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1206/
Utsumi, Akira
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This study examines the relationship between two kinds of semantic spaces {\textemdash} i.e., spaces based on term frequency (tf) and word cooccurrence frequency (co) {\textemdash} and four semantic relations {\textemdash} i.e., synonymy, coordination, superordination, and collocation {\textemdash} by comparing, for each semantic relation, the performance of two semantic spaces in predicting word association. The simulation experiment demonstrates that the tf-based spaces perform better in predicting word association based on the syntagmatic relation (i.e., superordination and collocation), while the co-based semantic spaces are suited for predicting word association based on the paradigmatic relation (i.e., synonymy and coordination). In addition, the co-based space with a larger context size yields better performance for the syntagmatic relation, while the co-based space with a smaller context size tends to show better performance for the paradigmatic relation. These results indicate that different semantic spaces can be used depending on what kind of semantic relatedness should be computed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,089
inproceedings
leitner-etal-2010-example
Example-Based Automatic Phonetic Transcription
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1207/
Leitner, Christina and Schickbichler, Martin and Petrik, Stefan
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Current state-of-the-art systems for automatic phonetic transcription (APT) are mostly phone recognizers based on Hidden Markov models (HMMs). We present a different approach for APT especially designed for transcription with a large inventory of phonetic symbols. In contrast to most systems, which are model-based, our approach is non-parametric, using techniques derived from concatenative speech synthesis and template-based speech recognition. This example-based approach not only produces draft transcriptions that just need to be corrected instead of created from scratch but also provides a validation mechanism for ensuring consistency within the corpus. Implementations of this transcription framework are available as standalone Java software and as an extension to the ELAN linguistic annotation software. The transcription system was tested with audio files and reference transcriptions from the Austrian Pronunciation Database (ADABA) and compared to an HMM-based system trained on the same data set. The example-based and the HMM-based system achieve comparable phone recognition rates. A combination of rule-based and example-based APT in a constrained phone recognition scenario returned the best results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,090
inproceedings
garcia-fernandez-etal-2010-macaq
{MACAQ} : A Multi Annotated Corpus to Study how we Adapt Answers to Various Questions
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1208/
Garcia-Fernandez, Anne and Rosset, Sophie and Vilnat, Anne
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents a corpus of human answers in natural language collected in order to build a base of examples useful when generating natural language answers. We present the corpus and the way we acquired it. Answers correspond to questions with fixed linguistic form, focus, and topic. Answers to a given question exist for two modalities of interaction: oral and written. The whole corpus of answers was annotated manually and automatically on different levels including words from the questions being reused in the answer, the precise element answering the question (or information-answer), and completions. A detailed description of the annotations is presented. Two examples of corpus analyses are described. The first analysis shows some differences between oral and written modality especially in terms of length of the answers. The second analysis concerns the reuse of the question focus in the answers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,091
inproceedings
martinez-hinarejos-etal-2010-evaluation
Evaluation of {HMM}-based Models for the Annotation of Unsegmented Dialogue Turns
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1209/
Mart{\'i}nez-Hinarejos, Carlos-D. and Tamarit, Vicent and Bened{\'i}, Jos{\'e}-M.
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Corpus-based dialogue systems rely on statistical models, whose parameters are inferred from annotated dialogues. The dialogues are usually annotated in terms of Dialogue Acts (DA), and the manual annotation is difficult (as annotation rules are hard to define), error-prone and time-consuming. Therefore, several semi-automatic annotation processes have been proposed to speed up the process and consequently obtain a dialogue system in less total time. These processes are usually based on statistical models. The standard statistical annotation model is based on Hidden Markov Models (HMM). In this work, we explore the impact of different types of HMM, with different numbers of states, on annotation accuracy. We performed experiments using these models on two dialogue corpora (Dihana and SwitchBoard) of dissimilar features. The results show that some types of models improve standard HMM in a human-computer task-oriented dialogue corpus (Dihana corpus), but their impact is lower in a human-human non-task-oriented dialogue corpus (SwitchBoard corpus).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,092
inproceedings
nawaz-etal-2010-meta
Meta-Knowledge Annotation of Bio-Events
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1210/
Nawaz, Raheel and Thompson, Paul and McNaught, John and Ananiadou, Sophia
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Biomedical corpora annotated with event-level information provide an important resource for the training of domain-specific information extraction (IE) systems. These corpora concentrate primarily on creating classified, structured representations of important facts and findings contained within the text. However, bio-event annotations often do not take into account additional information (meta-knowledge) that is expressed within the textual context of the bio-event, e.g., the pragmatic/rhetorical intent and the level of certainty ascribed to a particular bio-event by the authors. Such additional information is indispensable for correct interpretation of bio-events. Therefore, an IE system that simply presents a list of “bare” bio-events, without information concerning their interpretation, is of little practical use. We have addressed this sparseness of meta-knowledge available in existing bio-event corpora by developing a multi-dimensional annotation scheme tailored to bio-events. The scheme is intended to be general enough to allow integration with different types of bio-event annotation, whilst being detailed enough to capture important subtleties in the nature of the meta-knowledge expressed about different bio-events. To our knowledge, our scheme is unique within the field with regards to the diversity of meta-knowledge aspects annotated for each event.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,093
inproceedings
kano-etal-2010-u
{U}-Compare: An Integrated Language Resource Evaluation Platform Including a Comprehensive {UIMA} Resource Library
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1211/
Kano, Yoshinobu and Dorado, Ruben and McCrohon, Luke and Ananiadou, Sophia and Tsujii, Jun{'}ichi
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Language resources, including corpora and tools, normally need to be combined in order to achieve a user’s specific task. However, resources tend to be developed independently in different, incompatible formats. In this paper we describe U-Compare, which consists of the U-Compare component repository and the U-Compare platform. We have been building a highly interoperable resource library, providing the world’s largest ready-to-use UIMA component repository, including a wide variety of corpus readers and state-of-the-art language tools. These resources can be deployed as local services or web services, and can even be hosted on clustered machines to increase performance, while users do not need to be aware of such differences. In addition to the resource library, an integrated language processing platform is provided, allowing workflow creation, comparison, evaluation and visualization, using the resources in the library or any UIMA component, without any programming, via graphical user interfaces; a command line launcher is also available without GUIs. The evaluation itself is processed in a UIMA component, and users can create and plug in their own evaluation metrics in addition to the predefined metrics. U-Compare has been successfully used in many projects including BioCreative, CoNLL and the BioNLP shared task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,094
inproceedings
johannessen-etal-2010-enhancing
Enhancing Language Resources with Maps
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1212/
Johannessen, Janne Bondi and Hagen, Kristin and N{\o}klestad, Anders and Priestley, Joel
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We will look at how maps can be integrated in research resources, such as language databases and language corpora. By using maps, search results can be illustrated in a way that immediately gives the user information that words or numbers on their own would not give. We will illustrate with two different resources, into which we have now added a Google Maps application: The Nordic Dialect Corpus (Johannessen et al. 2009) and The Nordic Syntactic Judgments Database (Lindstad et al. 2009). We have integrated Google Maps into these applications. The database contains some hundred syntactic test sentences that have been evaluated by four speakers in more than a hundred locations in Norway and Sweden. Searching for the evaluations of a particular sentence gives a list of several hundred judgments, which are difficult for a human researcher to assess. With the map option, isoglosses are immediately visible. We show in the paper that both with the maps depicting corpus hits and with the maps depicting database results, the map visualizations actually show clear geographical differences that would be very difficult to spot just by reading concordance lines or database tables.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,095
inproceedings
sukkarieh-bolge-2010-building
Building a Textual Entailment Suite for the Evaluation of Automatic Content Scoring Technologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1213/
Sukkarieh, Jana Z. and Bolge, Eleanor
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Automatic content scoring for free-text responses has started to emerge as an application of Natural Language Processing in its own right, much like question answering or machine translation. The task, in general, is reduced to comparing a student’s answer to a model answer. Although a considerable amount of work has been done, common benchmarks and evaluation measures for this application do not currently exist. It is yet impossible to perform a comparative evaluation or progress tracking of this application across systems {\textemdash} an application that we view as a textual entailment task. This paper concentrates on introducing an Educational Testing Service-built test suite that makes a step towards establishing such a benchmark. The suite can be used as regression and performance evaluations both intra-c-rater{\textregistered} or inter automatic content scoring technologies. It is important to note that existing textual entailment test suites like PASCAL RTE or FraCas, though beneficial, are not suitable for our purposes since we deal with atypical naturally-occurring student responses that need to be categorized in order to serve as regression test cases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,096
inproceedings
zargayouna-nazarenko-2010-evaluation
Evaluation of Textual Knowledge Acquisition Tools: a Challenging Task
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1214/
Zargayouna, Ha{\"i}fa and Nazarenko, Adeline
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
A large effort has been devoted to the development of textual knowledge acquisition (KA) tools, but it is still difficult to assess the progress that has been made. The results produced by these tools are difficult to compare, due to the heterogeneity of the proposed methods and of their goals. Various experiments have been made to evaluate terminological and ontological tools. They show that in terminology as well as in ontology acquisition, it remains difficult to compare existing tools and to analyse their advantages and drawbacks. From our own experiments in evaluating terminology and ontology acquisition tools, it appeared that the difficulties and solutions are similar for both tasks. We propose a unified approach for the evaluation of textual KA tools that can be instantiated in different ways for various tasks. The main originality of this approach lies in the way it takes into account the subjectivity of evaluation and the relativity of gold standards. In this paper, we highlight the major difficulties of KA evaluation, we then present a unified proposal for the evaluation of terminologies and ontologies acquisition tools and the associated experiments. The proposed protocols take into consideration the specificity of this type of evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,097
inproceedings
de-melo-weikum-2010-providing
Providing Multilingual, Multimodal Answers to Lexical Database Queries
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1215/
de Melo, Gerard and Weikum, Gerhard
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Language users are increasingly turning to electronic resources to address their lexical information needs, due to their convenience and their ability to simultaneously capture different facets of lexical knowledge in a single interface. In this paper, we discuss techniques to respond to a user's lexical queries by providing multilingual and multimodal information, and facilitating navigating along different types of links. To this end, structured information from sources like WordNet, Wikipedia, Wiktionary, as well as Web services is linked and integrated to provide a multi-faceted yet consistent response to user queries. The meanings of words in many different languages are characterized by mapping them to appropriate WordNet sense identifiers and adding multilingual gloss descriptions as well as example sentences. Relationships are derived from WordNet and Wiktionary to allow users to discover semantically related words, etymologically related words, alternative spellings, as well as misspellings. Last but not least, images, audio recordings, and geographical maps extracted from Wikipedia and Wiktionary allow for a multimodal experience.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,098
inproceedings
lenci-etal-2010-building
Building an {I}talian {F}rame{N}et through Semi-automatic Corpus Analysis
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1216/
Lenci, Alessandro and Johnson, Martina and Lapesa, Gabriella
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we outline the methodology we adopted to develop a FrameNet for Italian. The main element of novelty with respect to the original FrameNet is represented by the fact that the creation and annotation of Lexical Units is strictly grounded in distributional information (statistical distribution of verbal subcategorization frames, lexical and semantic preferences of each frame) automatically acquired from a large, dependency-parsed corpus. We claim that this approach allows us to overcome some of the shortcomings of the classical lexicographic method used to create FrameNet, by complementing the accuracy of manual annotation with the robustness of data on the global distributional patterns of a verb. In the paper, we describe our method for extracting distributional data from the corpus and the way we used it for the encoding and annotation of LUs. The long-term goal of our project is to create an electronic lexicon for Italian similar to the original English FrameNet. For the moment, we have developed a database of syntactic valences that will be made freely accessible via a web interface. This represents an autonomous resource besides the FrameNet lexicon, of which we have a beginning nucleus consisting of 791 annotated sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,099
inproceedings
sikveland-etal-2010-spontal
Spontal-N: A Corpus of Interactional Spoken {N}orwegian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1217/
Sikveland, Rein Ove and {\"O}ttl, Anton and Amdal, Ingunn and Ernestus, Mirjam and Svendsen, Torbj{\o}rn and Edlund, Jens
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Spontal-N is a corpus of spontaneous, interactional Norwegian. To our knowledge, it is the first corpus of Norwegian in which the majority of speakers have spent significant parts of their lives in Sweden, and in which the recorded speech displays varying degrees of interference from Swedish. The corpus consists of studio quality audio- and video-recordings of four 30-minute free conversations between acquaintances, and a manual orthographic transcription of the entire material. On the basis of the orthographic transcriptions, we automatically annotated approximately 50 percent of the material on the phoneme level, by means of a forced alignment between the acoustic signal and pronunciations listed in a dictionary. Approximately seven percent of the automatic transcription was manually corrected. Taking the manual correction as a gold standard, we evaluated several sources of pronunciation variants for the automatic transcription. Spontal-N is intended as a general purpose speech resource that is also suitable for investigating phonetic detail.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,100
inproceedings
koeva-etal-2010-bulgarian
{B}ulgarian National Corpus Project
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1218/
Koeva, Svetla and Blagoeva, Diana and Kolkovska, Siya
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper presents the Bulgarian National Corpus project (BulNC) - a large-scale, representative corpus of Bulgarian available online. The BulNC is also a monolingual general corpus, fully morpho-syntactically (and partially semantically) annotated, and manually provided with detailed meta-data descriptions. Presently the Bulgarian National Corpus consists of about 320 000 000 graphical words and includes more than 10 000 samples. Briefly, the corpus structure and the accepted criteria for representativeness and balance are presented. The query language for advanced search of collocations and concordances is demonstrated with some examples - it allows users to retrieve word combinations, ordered queries, inflexionally and semantically related words, and part-of-speech tags, utilising Boolean operations and grouping as well. The BulNC already plays a significant role in natural language processing of Bulgarian, contributing to scientific advances in spelling and grammar checking, word sense disambiguation, speech recognition, text categorisation, topic extraction and machine translation. The BulNC can also be used in different investigations going beyond linguistics: library studies, social sciences research, teaching methods studies, etc.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,101
inproceedings
lin-etal-2010-composing
Composing Human and Machine Translation Services: Language Grid for Improving Localization Processes
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1219/
Lin, Donghui and Murakami, Yoshiaki and Ishida, Toru and Murakami, Yohei and Tanaka, Masahiro
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
With the development of Internet environments, more and more language services become accessible to ordinary people. However, the gap between human translators and machine translators remains huge, especially for the domain of localization processes that requires high translation quality. Although efforts of combining human and machine translators for supporting multilingual communication have been reported in previous research, how to apply such approaches for improving localization processes is rarely discussed. In this paper, we aim at improving localization processes by composing human and machine translation services based on the Language Grid, which is a language service platform that we have developed. Further, we conduct experiments to compare the translation quality and translation cost using several translation processes, including purely machine translation processes, purely human translation processes and translation processes by human and machine translation services. The experiment results show that composing monolingual roles and dictionary services improves the translation quality of machine translators, and that collaboration of human and machine translators can reduce the cost compared with purely bilingual human translation. We also discuss the generality of the experimental results and further challenging issues of the proposed localization processes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,102
inproceedings
haselbach-heid-2010-development
The Development of a Morphosyntactic Tagset for {A}frikaans and its Use with Statistical Tagging
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1220/
Haselbach, Boris and Heid, Ulrich
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present a morphosyntactic tagset for Afrikaans based on the guidelines developed by the Expert Advisory Group on Language Engineering Standards (EAGLES). We compare our slim yet expressive tagset, MAATS (Morphosyntactic AfrikAans TagSet), with an existing one which primarily focuses on a detailed morphosyntactic and semantic description of word forms. MAATS will primarily be used for the extraction of lexical data from large pos-tagged corpora. We not only focus on morphosyntactic properties but also on processability with statistical tagging. We discuss the tagset design and motivate our classification of Afrikaans word forms; in particular we focus on the categorization of verbs and conjunctions. The complete tagset is presented and we briefly discuss each word class. In a case study with an Afrikaans newspaper corpus, we evaluate our tagset with four different statistical taggers. Despite a relatively small amount of training data, though with a large tagger lexicon, the TnT tagger scores 97.05 {\%} accuracy. Additionally, we present some error sources and discuss future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,103
inproceedings
lee-etal-2010-emotion
Emotion Cause Events: Corpus Construction and Analysis
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1221/
Lee, Sophia Yat Mei and Chen, Ying and Li, Shoushan and Huang, Chu-Ren
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Emotion processing has always been a great challenge. Given the fact that an emotion is triggered by cause events and that cause events are an integral part of emotion, this paper constructs a Chinese emotion cause corpus as a first step towards automatic inference of cause-emotion correlation. The corpus focuses on five primary emotions, namely happiness, sadness, fear, anger, and surprise. It is annotated with emotion cause events based on our proposed annotation scheme. Corpus data shows that most emotions are expressed with causes, and that causes mostly occur before the corresponding emotion verbs. We also examine the correlations between emotions and cause events in terms of linguistic cues: causative verbs, perception verbs, epistemic markers, conjunctions, prepositions, and others. Results show that each group of linguistic cues serves as an indicator marking the cause events in different structures of emotional constructions. We believe that the emotion cause corpus will be a useful resource for automatic emotion cause detection as well as emotion detection and classification.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,104
inproceedings
navarretta-2010-dad
The {DAD} Parallel Corpora and their Uses
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1222/
Navarretta, Costanza
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper deals with the uses of the annotations of third person singular neuter pronouns in the DAD parallel and comparable corpora of Danish and Italian texts and spoken data. The annotations contain information about the functions of these pronouns and their uses as abstract anaphora. Abstract anaphora have constructions such as verbal phrases, clauses and discourse segments as antecedents and refer to abstract objects comprising events, situations and propositions. The analysis of the annotated data shows the language specific characteristics of abstract anaphora in the two languages compared with the uses of abstract anaphora in English. Finally, the paper presents machine learning experiments run on the annotated data in order to identify the functions of third person singular neuter personal pronouns and neuter demonstrative pronouns. The results of these experiments vary from corpus to corpus. However, they are all comparable with the results obtained in similar tasks in other languages. This is very promising because the experiments have been run on both written and spoken data using a classification of the pronominal functions which is much more fine-grained than the classifications used in other studies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,105
inproceedings
thwaites-etal-2010-lips
{LIPS}: A Tool for Predicting the Lexical Isolation Point of a Word
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1223/
Thwaites, Andrew and Geertzen, Jeroen and Marslen-Wilson, William D. and Buttery, Paula
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present LIPS (Lexical Isolation Point Software), a tool for accurate lexical isolation point (IP) prediction in recordings of speech. The IP is the point in time at which a word is correctly recognised given the acoustic evidence available to the hearer. The ability to accurately determine lexical IPs is of importance to work in the field of cognitive processing, since it enables the evaluation of competing models of word recognition. IPs are also of importance in the field of neurolinguistics, where the analyses of high-temporal-resolution neuroimaging data require a precise time alignment of the observed brain activity with the linguistic input. LIPS provides an attractive alternative to costly multi-participant perception experiments by automatically computing IPs for arbitrary words. On a test set of words, the LIPS system predicts IPs with a mean difference from the actual IP of within 1ms. The differences between the predicted and actual IPs approximate a normal distribution with a standard deviation of around 80ms (depending on the model used).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,106
inproceedings
williams-etal-2010-cambridge
The {C}ambridge Cookie-Theft Corpus: A Corpus of Directed and Spontaneous Speech of Brain-Damaged Patients and Healthy Individuals
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1224/
Williams, Caroline and Thwaites, Andrew and Buttery, Paula and Geertzen, Jeroen and Randall, Billi and Shafto, Meredith and Devereux, Barry and Tyler, Lorraine
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Investigating differences in linguistic usage between individuals who have suffered brain injury (hereafter patients) and those who haven’t can yield a number of benefits. It provides a better understanding about the precise way in which impairments affect patients’ language, improves theories of how the brain processes language, and offers heuristics for diagnosing certain types of brain damage based on patients’ speech. One method for investigating usage differences involves the analysis of spontaneous speech. In the work described here we construct a text corpus consisting of transcripts of individuals’ speech produced during two tasks: the Boston-cookie-theft picture description task (Goodglass and Kaplan, 1983) and a spontaneous speech task, which elicits a semi-prompted monologue, and/or free speech. Interviews with patients from 19yrs to 89yrs were transcribed, as were interviews with a comparable number of healthy individuals (20yrs to 89yrs). Structural brain images are available for approximately 30{\%} of participants. This unique data source provides a rich resource for future research in many areas of language impairment and has been constructed to facilitate analysis with natural language processing and corpus linguistics techniques.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,107
inproceedings
van-den-heuvel-etal-2010-veterantapes
The {V}eteran{T}apes: Research Corpus, Fragment Processing Tool, and Enhanced Publications for the e-Humanities
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1225/
van den Heuvel, Henk and van Horik, Ren{\'e} and Scagliola, Stef and Sanders, Eric and Witkamp, Paula
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Enhanced Publications (EPs) are a new way to publish scientific and other results in an electronic article. The advantage of EPs is that the relation between the article and the underlying data facilitates the peer review process and other quality assessment activities. Due to the link between the publication and the research data, the publication can be much richer than a paper edition permits. We present an example of EPs in which links are made to interview fragments that include transcripts, audio segments, annotations and metadata. EPs call for a new paradigm of research methodology in which digital persistent access to research data is a central issue. In this contribution we highlight (1) the research data as it is archived and curated, (2) the concept ``enhanced publication'' and its scientific value, (3) the ``fragment fitter tool'', a language processing tool to facilitate the creation of EPs, and (4) IPR issues related to the re-use of the interview data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,108
inproceedings
bernardi-etal-2010-context
Context Fusion: The Role of Discourse Structure and Centering Theory
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1226/
Bernardi, Raffaella and Kirschner, Manuel and Ratkovic, Zorana
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Questions are not asked in isolation. Their context, viz. the preceding interactions, might be of help to understand them and retrieve the correct answer. Previous research in Interactive Question Answering showed that context fusion has a big potential to improve the performance of answer retrieval. In this paper, we study how much context, and what elements of it, should be considered to answer Follow-Up Questions (FU Qs). Following previous research, we exploit Logistic Regression Models to learn aspects of dialogue structure relevant to answering FU Qs. We enrich existing models based on shallow features with deep features, relying on the theory of discourse structure of (Chai and Jin, 2004), and on Centering Theory, respectively. Using models trained on realistic IQA data, we show which of the various theoretically motivated features hold up against empirical evidence. We also show that, while these deep features do not outperform the shallow ones on their own, an IQA system's answer correctness increases if the shallow and deep features are combined.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,109
inproceedings
okamoto-ishizaki-2010-homographic
Homographic Ideogram Understanding Using Contextual Dynamic Network
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1227/
Okamoto, Jun and Ishizaki, Shun
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Conventional methods for disambiguation problems have used statistical methods based on the co-occurrence of words in their contexts. It seems that human beings assign an appropriate sense to an ambiguous word in a sentence depending on the words that follow it when the preceding contextual information does not suffice for disambiguation. In this research, a Contextual Dynamic Network Model is developed using the Associative Concept Dictionary, which includes semantic relations among concepts/words; these relations can be represented with quantitative distances among them. In this model, an interactive activation method is used to identify a word's meaning on the Contextual Semantic Network, where the activation values on the network are calculated using the distances. The proposed method dynamically constructs the Contextual Semantic Network from the words that appear sequentially in the sentence containing an ambiguous word. Therefore, in this research, after the model calculates the activation values, if there is little difference between the activation values, it reconstructs the network depending on the next words in the input sentence. The evaluation of the proposed method showed that the accuracy rates are high when the Contextual Semantic Network has high density, with nodes extended around the ambiguous word.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,110
inproceedings
dione-etal-2010-design
Design and Development of Part-of-Speech-Tagging Resources for {W}olof ({N}iger-{C}ongo, spoken in {S}enegal)
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1228/
Dione, Cheikh M. Bamba and Kuhn, Jonas and Zarrie{\ss}, Sina
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we report on the design of a part-of-speech-tagset for Wolof and on the creation of a semi-automatically annotated gold standard. In order to achieve high-quality annotation relatively fast, we first generated an accurate lexicon that draws on existing word and name lists and takes into account inflectional and derivational morphology. The main motivation for the tagged corpus is to obtain data for training automatic taggers with machine learning approaches. Hence, we took machine learning considerations into account during tagset design and we present training experiments as part of this paper. The best automatic tagger achieves an accuracy of 95.2{\%} in cross-validation experiments. We also wanted to create a basis for experimenting with annotation projection techniques, which exploit parallel corpora. For this reason, it was useful to use a part of the Bible as the gold standard corpus, for which sentence-aligned parallel versions in many languages are easy to obtain. We also report on preliminary experiments exploiting a statistical word alignment of the parallel text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,111
inproceedings
morante-2010-descriptive
Descriptive Analysis of Negation Cues in Biomedical Texts
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1229/
Morante, Roser
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we present a description of negation cues and their scope in biomedical texts, based on the cues that occur in the BioScope corpus. We provide information about the morphological type of the cue, the characteristics of the scope in relation to the morpho-syntactic features of the cue and of the clause, and the ambiguity level of the cue by describing in which cases certain negation cues do not express negation. Additionally, we provide positive and negative examples per cue from the BioScope corpus. We show that the scope depends mostly on the part-of-speech of the cue and on the syntactic features of the clause. Although several studies have focused on processing negation in biomedical texts, we are not aware of publicly available resources that describe the scope of negation cues in detail. This paper aims at providing information for producing guidelines to annotate corpora with a negation layer, and for building resources that find the scope of negation cues automatically.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,112
inproceedings
yao-etal-2010-pdtb
{PDTB} {XML}: the {XML}ization of the {P}enn {D}iscourse {T}ree{B}ank 2.0
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1230/
Yao, Xuchen and Borisova, Irina and Alam, Mehwish
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The current study presents a conversion and unification of the Penn Discourse TreeBank 2.0 (PDTB) and the Penn TreeBank (PTB) under XML format. The main goal of the PDTB XML is to create a tool for efficient and broad querying of the syntax and discourse information simultaneously. The key stages of the project are developing proper cross-references between different data types and their representation in the modified TIGER-XML format, and then writing the required declarative languages (XML Schema). PTB XML is compatible with the TIGER-XML format. The PDTB XML is developed as a unified format for the convenience of XQuery users; it integrates discourse relations and XML structures into one unified hierarchy and builds the cross-references between the syntactic trees and the discourse relations. The syntactic and discourse elements are assigned unique IDs in order to build cross-references between them. The converted corpus allows for a simultaneous search for syntactically specified discourse information based on the XQuery standard, which is illustrated with a simple example in the article.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,113
inproceedings
mykowiecka-etal-2010-domain
Domain-related Annotation of {P}olish Spoken Dialogue Corpus {LUNA}.{PL}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1231/
Mykowiecka, Agnieszka and G{\l}owi{\'n}ska, Katarzyna and Rabiega-Wi{\'s}niewska, Joanna
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper presents a corpus of Polish spoken dialogues annotated on several levels, from transcription of dialogues and their morphosyntactic analysis, to semantic annotation. The LUNA.PL corpus is the first semantically annotated corpus of Polish spontaneous speech. It contains 500 dialogues recorded at the Warsaw Transport Authority call centre. For each dialogue, the corpus contains the recorded audio signal, its transcription and five XML files with annotations on subsequent levels. Speech transcription was done manually. Text annotation was constructed using a combination of rule-based programmes and computer-aided manual work. For morphological annotation we used an already existing analyzer and manually disambiguated the results. Morphologically annotated texts of dialogues were automatically segmented into elementary syntactic chunks. Semantic annotation was done by a set of specially designed rules and then manually corrected. The paper describes details of the domain-related semantic annotation, which consists of two levels: a concept level, at which around 200 attributes and their values are annotated, and a predicate level, at which 47 frame types are recognized. We describe the domain model accepted, and the statistics over the entire annotated set of dialogues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,114