% Column schema (as reported by the dataset viewer):
%   entry_type          string, 4 classes
%   citation_key        string, length 10-110
%   title               string, length 6-276
%   editor              string, 723 classes
%   month               string, 69 classes
%   year                date, 1963-01-01 to 2022-01-01
%   address             string, 202 classes
%   publisher           string, 41 classes
%   url                 string, length 34-62
%   author              string, length 6-2.07k
%   booktitle           string, 861 classes
%   pages               string, length 1-12
%   abstract            string, length 302-2.4k
%   journal             string, 5 classes
%   volume              string, 24 classes
%   doi                 string, length 20-39
%   n                   string, 3 classes
%   wer                 string, 1 class
%   uas                 null
%   language            string, 3 classes
%   isbn                string, 34 classes
%   recall              null
%   number              string, 8 classes
%   a                   null
%   b                   null
%   c                   null
%   k                   null
%   f1                  string, 4 classes
%   r                   string, 2 classes
%   mci                 string, 1 class
%   p                   string, 2 classes
%   sd                  string, 1 class
%   female              string, 0 classes
%   m                   string, 0 classes
%   food                string, 1 class
%   f                   string, 1 class
%   note                string, 20 classes
%   __index_level_0__   int64, range 22k-106k
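The records below can be rendered back into BibTeX from rows following this schema. The sketch assumes each row is a plain dict keyed by the column names above, with absent fields stored as `None`; the helper name and the abbreviated sample row are illustrative, not part of the dataset.

```python
# Sketch: render one schema row (dict with the columns above) as a BibTeX
# entry, skipping null fields. Field order roughly follows ACL Anthology
# style; month macros (e.g. "may") are quoted here for simplicity.

BIBTEX_FIELDS = [
    "title", "author", "editor", "booktitle", "journal", "volume",
    "number", "pages", "month", "year", "address", "publisher",
    "url", "doi", "isbn", "note", "abstract",
]

def row_to_bibtex(row):
    """Render a schema row as a BibTeX entry string, skipping nulls."""
    lines = ["@%s{%s," % (row["entry_type"], row["citation_key"])]
    for field in BIBTEX_FIELDS:
        value = row.get(field)
        if value is not None:
            lines.append('    %s = "%s",' % (field, value))
    lines.append("}")
    return "\n".join(lines)

# Abbreviated sample row (most columns omitted; "pages" is null).
sample = {
    "entry_type": "inproceedings",
    "citation_key": "pustylnikov-etal-2008-unified",
    "title": "A Unified Database of Dependency Treebanks",
    "year": "2008",
    "pages": None,
}
print(row_to_bibtex(sample))
```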
@inproceedings{pustylnikov-etal-2008-unified,
    title = "A Unified Database of Dependency Treebanks: Integrating, Quantifying {\&} Evaluating Dependency Data",
    author = "Pustylnikov, Olga and Mehler, Alexander and Gleim, R{\"u}diger",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1152/",
    abstract = "This paper describes a database of 11 dependency treebanks which were unified by means of a two-dimensional graph format. The format was evaluated with respect to storage-complexity on the one hand, and efficiency of data access on the other hand. An example of how the treebanks can be integrated within a unique interface is given by means of the DTDB interface.",
}
% __index_level_0__: 83,540
@inproceedings{bouhjar-2008-amazigh,
    title = "{A}mazigh Language Terminology in {M}orocco or Management of a {\textquotedblleft}Multidimensional{\textquotedblright} Variation",
    author = "Bouhjar, Aicha",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1153/",
    abstract = "The present communication brings to the fore the work undertaken at the Royal Institute of the Amazigh Culture (IRCAM, henceforth) within the Language Planning Center known as “Centre de l’Am{\'e}nagement Linguistique” (CAL) within the framework of the language planning of Amazigh, particularly on the side of terminology. The focus will be on the concept of “variation” that affects different levels in the course of standardizing a language: orthography, spelling, grammar and lexis. Thus, after a brief survey of the main features of the Amazigh (Berber) language in general, the missions and the projects far achieved by CAL will be presented, particularly the objectives that relate to the work on the multiply varied corpus-based terminology. It appears that eliciting the pertinent information, for the most part, requires a whole amount of work on the re-writing of corpora so that the latter become exploitable in the standardization process. It should be pointed out that this stage of data homogenization, seemingly unwieldy for optimal exploitation, cannot be undertaken Amazighist linguists being involved in theoretical and methodological presuppositions that are at the root of this variation.",
}
% __index_level_0__: 83,541
@inproceedings{yang-etal-2008-chinese-term,
    title = "{C}hinese Term Extraction Based on Delimiters",
    author = "Yang, Yuhang and Lu, Qin and Zhao, Tiejun",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1154/",
    abstract = "Existing techniques extract term candidates by looking for internal and contextual information associated with domain specific terms. The algorithms always face the dilemma that fewer features are not enough to distinguish terms from non-terms whereas more features lead to more conflicts among selected features. This paper presents a novel approach for term extraction based on delimiters which are much more stable and domain independent. The proposed approach is not as sensitive to term frequency as that of previous works. This approach has no strict limit or hard rules and thus they can deal with all kinds of terms. It also requires no prior domain knowledge and no additional training to adapt to new domains. Consequently, the proposed approach can be applied to different domains easily and it is especially useful for resource-limited domains. Evaluations conducted on two different domains for Chinese term extraction show significant improvements over existing techniques which verifies its efficiency and domain independent nature. Experiments on new term extraction indicate that the proposed approach can also serve as an effective tool for domain lexicon expansion.",
}
% __index_level_0__: 83,542
@inproceedings{boulaknadel-etal-2008-multi,
    title = "A Multi-Word Term Extraction Program for {A}rabic Language",
    author = "Boulaknadel, Siham and Daille, Beatrice and Aboutajdine, Driss",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1155/",
    abstract = "Terminology extraction commonly includes two steps: identification of term-like units in the texts, mostly multi-word phrases, and the ranking of the extracted term-like units according to their domain representativity. In this paper, we design a multi-word term extraction program for Arabic language. The linguistic filtering performs a morphosyntactic analysis and takes into account several types of variations. The domain representativity is measured by means of statistical scores. We evaluate several association measures and show that the results we obtained are consistent with those obtained for Romance languages.",
}
% __index_level_0__: 83,543
@inproceedings{butters-ciravegna-2008-using,
    title = "Using Similarity Metrics For Terminology Recognition",
    author = "Butters, Jonathan and Ciravegna, Fabio",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1156/",
    abstract = "In this paper we present an approach to terminology recognition whereby a sublanguage term (e.g. an aircraft engine component term extracted from a maintenance log) is matched to its corresponding term from a pre-defined list (such as a taxonomy representing the official break-down of the engine). Terminology recognition is addressed as a classification task whereby the extracted term is associated to one or more potential terms in the official description list via the application of string similarity metrics. The solution described in the paper uses dynamically computed similarity cut-off thresholds calculated on the basis of modeling a noise curve. Dissimilar string matches form a Gaussian distributed noise curve that can be identified and extracted leaving only mostly similar string matches. Dynamically calculated thresholds are preferable over fixed similarity thresholds as fixed thresholds are inherently imprecise, that is, there is no similarity boundary beyond which any two strings always describe the same concept.",
}
% __index_level_0__: 83,544
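The dynamic-threshold idea in the abstract above can be sketched in a few lines: treat the bulk of (dissimilar) match scores as noise, estimate its mean and spread, and keep only matches scoring well above it. The scoring function, taxonomy terms, and cutoff factor below are my own simplifications for illustration, not the paper's actual method.

```python
# Minimal sketch, assuming a string-similarity score in [0, 1]: matches
# are kept only if they exceed a cutoff derived from the score
# distribution itself (mean + k standard deviations), so the threshold
# adapts to each query rather than being fixed globally.
import statistics
from difflib import SequenceMatcher

def similarity(a, b):
    """Simple stand-in similarity metric (ratio of matching characters)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def dynamic_matches(term, taxonomy, k=1.0):
    """Return taxonomy terms scoring above a dynamically computed cutoff."""
    scores = {t: similarity(term, t) for t in taxonomy}
    mean = statistics.mean(scores.values())
    sd = statistics.stdev(scores.values())   # needs >= 2 taxonomy terms
    cutoff = mean + k * sd                   # threshold tracks the noise
    return [t for t, s in scores.items() if s >= cutoff]
```

The choice of `k` trades precision against recall; the paper fits an explicit Gaussian to the noise curve instead of this crude mean-plus-deviations rule.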
@inproceedings{pitel-grefenstette-2008-semi,
    title = "Semi-automatic Building Method for a Multidimensional Affect Dictionary for a New Language",
    author = "Pitel, Guillaume and Grefenstette, Gregory",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1158/",
    abstract = "Detecting the tone or emotive content of a text message is increasingly important in many natural language processing applications. While for the English language there exists a number of affect, emotive, or opinion computer-usable lexicons for automatically processing text, other languages rarely possess these primary resources. Here we present a semi-automatic technique for quickly building a multidimensional affect lexicon for a new language. Most of the work consists of defining 44 paired affect directions (e.g. love-hate, courage-fear, etc.) and choosing a small number of seed words for each dimension. From this initial investment, we show how a first pass affect lexicon can be created for a new language, using a SVM classifier trained on a feature space produced from Latent Semantic Analysis over a large corpus in the new language. We evaluate the accuracy of placing newly found emotive words in one or more of the defined semantic dimensions. We illustrate this technique by creating an affect lexicon for French, but the techniques can be applied to any language found on the Web and for which a large quantity of text exists.",
}
% __index_level_0__: 83,546
@inproceedings{devillers-martin-2008-coding,
    title = "Coding Emotional Events in Audiovisual Corpora",
    author = "Devillers, Laurence and Martin, Jean-Claude",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1159/",
    abstract = "The modelling of realistic emotional behaviour is needed for various applications in multimodal human-machine interaction such as the design of emotional conversational agents (Martin et al., 2005) or of emotional detection systems (Devillers and Vidrascu, 2007). Yet, building such models requires appropriate definition of various levels for representing the emotions themselves but also some contextual information such as the events that elicit these emotions. This paper presents a coding scheme that has been defined following annotations of a corpus of TV interviews (EmoTV). Deciding which events triggered or may trigger which emotion is a challenge for building efficient emotion eliciting protocols. In this paper, we present the protocol that we defined for collecting another corpus of spontaneous human-human interactions recorded in laboratory conditions (EmoTaboo). We discuss the events that we designed for eliciting emotions. Part of this scheme for coding emotional event is being included in the specifications that are currently defined by a working group of the W3C (the W3C Emotion Incubator Working group). This group is investigating the feasibility of working towards a standard representation of emotions and related states in technological contexts.",
}
% __index_level_0__: 83,547
@inproceedings{esuli-etal-2008-annotating,
    title = "Annotating Expressions of Opinion and Emotion in the {I}talian Content Annotation Bank",
    author = "Esuli, Andrea and Sebastiani, Fabrizio and Urciuoli, Ilaria",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1160/",
    abstract = "In this paper we describe the result of manually annotating I-CAB, the Italian Content Annotation Bank, by expressions of private state (EPSs), i.e., expressions that denote the presence of opinions, emotions, and other cognitive states. The aim of this effort was the generation of a standard resource for supporting the development of opinion extraction algorithms for Italian, and of a benchmark for testing such algorithms. To this end we have employed a previously existing annotation language (here dubbed WWC, from the initials of its proponents). We here describe the results of this annotation effort, including the results of a thorough inter-annotator agreement test. We conclude by discussing how WWC can be adapted to the specificities of a Romance language such as Italian.",
}
% __index_level_0__: 83,548
@inproceedings{maks-etal-2008-adjectives,
    title = "Adjectives in the {D}utch Semantic Lexical Database {CORNETTO}",
    author = "Maks, Isa and Vossen, Piek and Segers, Roxane and van der Vliet, Hennie",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1161/",
    abstract = "The goal of this paper is to describe how adjectives are encoded in Cornetto, a semantic lexical database for Dutch. Cornetto combines two existing lexical resources with different semantic organisation, i.e. Dutch Wordnet (DWN) with a synset organisation and Referentie Bestand Nederlands (RBN) with an organisation in Lexical Units. Both resources will be aligned and mapped on the formal ontology SUMO. In this paper, we will first present details of the description of adjectives in each of the two resources. We will then address the problems that are encountered during alignment to the SUMO ontology which are greatly due to the fact that SUMO has never been tested for its adequacy with respect to adjectives. We contrasted SUMO with an existing semantic classification which resulted in a further refined and extended SUMO geared for the description of adjectives.",
}
% __index_level_0__: 83,549
@inproceedings{dickinson-lee-2008-detecting,
    title = "Detecting Errors in Semantic Annotation",
    author = "Dickinson, Markus and Lee, Chong Min",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1162/",
    abstract = "We develop a method for detecting errors in semantic predicate-argument annotation, based on the variation n-gram error detection method. After establishing an appropriate data representation, we detect inconsistencies by searching for identical text with varying annotation. By remaining data-driven, we are able to detect inconsistencies arising from errors at lower layers of annotation.",
}
% __index_level_0__: 83,550
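The core of the variation-detection idea described in the abstract above ("identical text with varying annotation") reduces to grouping repeated spans by their labels. The sketch below is a toy illustration of that step only, with invented spans and labels; the actual variation n-gram method additionally examines the surrounding context of each span.

```python
# Toy sketch of the variation idea: group identical text spans and flag
# those that received more than one label across the corpus.
from collections import defaultdict

def find_variation(spans):
    """spans: iterable of (text, label) pairs.
    Returns {text: sorted labels} for texts with more than one label."""
    labels = defaultdict(set)
    for text, label in spans:
        labels[text].add(label)
    return {t: sorted(ls) for t, ls in labels.items() if len(ls) > 1}

annotated = [
    ("the bank", "ARG0"),
    ("the bank", "ARG1"),   # same text, different label: candidate error
    ("a loan", "ARG1"),
]
print(find_variation(annotated))  # → {'the bank': ['ARG0', 'ARG1']}
```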
@inproceedings{roth-schulte-im-walde-2008-corpus,
    title = "Corpus Co-Occurrence, Dictionary and {W}ikipedia Entries as Resources for Semantic Relatedness Information",
    author = "Roth, Michael and Schulte im Walde, Sabine",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1163/",
    abstract = "Distributional, corpus-based descriptions have frequently been applied to model aspects of word meaning. However, distributional models that use corpus data as their basis have one well-known disadvantage: even though the distributional features based on corpus co-occurrence were often successful in capturing meaning aspects of the words to be described, they generally fail to capture those meaning aspects that refer to world knowledge, because coherent texts tend not to provide redundant information that is presumably available knowledge. The question we ask in this paper is whether dictionary and encyclopaedic resources might complement the distributional information in corpus data, and provide world knowledge that is missing in corpora. As test case for meaning aspects, we rely on a collection of semantic associates to German verbs and nouns. Our results indicate that a combination of the knowledge resources should be helpful in work on distributional descriptions.",
}
% __index_level_0__: 83,551
@inproceedings{giovannetti-etal-2008-ontology,
    title = "Ontology Learning and Semantic Annotation: a Necessary Symbiosis",
    author = "Giovannetti, Emiliano and Marchi, Simone and Montemagni, Simonetta and Bartolini, Roberto",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1164/",
    abstract = "Semantic annotation of text requires the dynamic merging of linguistically structured information and a “world model”, usually represented as a domain-specific ontology. On the other hand, the process of engineering a domain-ontology through semi-automatic ontology learning system requires the availability of a considerable amount of semantically annotated documents. Facing this bootstrapping paradox requires an incremental process of annotation-acquisition-annotation, whereby domain-specific knowledge is acquired from linguistically-annotated texts and then projected back onto texts for extra linguistic information to be annotated and further knowledge layers to be extracted. The presented methodology is a first step in the direction of a full “virtuous” circle where the semantic annotation platform and the evolving ontology interact in symbiosis. As a case study we have chosen the semantic annotation of product catalogues. We propose a hybrid approach, combining pattern matching techniques to exploit the regular structure of product descriptions in catalogues, and Natural Language Processing techniques which are resorted to analyze natural language descriptions. The semantic annotation involves the access to the ontology, semi-automatically bootstrapped with an ontology learning tool from annotated collections of catalogues.",
}
% __index_level_0__: 83,552
@inproceedings{atserias-etal-2008-semantically,
    title = "Semantically Annotated Snapshot of the {E}nglish {W}ikipedia",
    author = "Atserias, Jordi and Zaragoza, Hugo and Ciaramita, Massimiliano and Attardi, Giuseppe",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1165/",
    abstract = "This paper describes SW1, the first version of a semantically annotated snapshot of the English Wikipedia. In recent years Wikipedia has become a valuable resource for both the Natural Language Processing (NLP) community and the Information Retrieval (IR) community. Although NLP technology for processing Wikipedia already exists, not all researchers and developers have the computational resources to process such a volume of information. Moreover, the use of different versions of Wikipedia processed differently might make it difficult to compare results. The aim of this work is to provide easy access to syntactic and semantic annotations for researchers of both NLP and IR communities by building a reference corpus to homogenize experiments and make results comparable. These resources, a semantically annotated corpus and an “entity containment” derived graph, are licensed under the GNU Free Documentation License and available from \url{http://www.yr-bcn.es/semanticWikipedia}.",
}
% __index_level_0__: 83,553
@inproceedings{nielsen-etal-2008-annotating,
    title = "Annotating Students' Understanding of Science Concepts",
    author = "Nielsen, Rodney D. and Ward, Wayne and Martin, James and Palmer, Martha",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1166/",
    abstract = "This paper summarizes the annotation of fine-grained entailment relationships in the context of student answers to science assessment questions. We annotated a corpus of 15,357 answer pairs with 145,911 fine-grained entailment relationships. We provide the rationale for such fine-grained analysis and discuss its perceived benefits to an Intelligent Tutoring System. The corpus also has potential applications in other areas, such as question answering and multi-document summarization. Annotators achieved 86.2{\%} inter-annotator agreement (Kappa=0.728, corresponding to substantial agreement) annotating the fine-grained facets of reference answers with regard to understanding expressed in student answers and labeling from one of five possible detailed relationship categories. The corpus described in this paper, which is the only one providing such detailed entailment annotations, is available as a public resource for the research community. The corpus is expected to enable application development, not only for intelligent tutoring systems, but also for general textual entailment applications, that is currently not practical.",
}
% __index_level_0__: 83,554
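The abstract above reports agreement both as raw percentage (86.2%) and chance-corrected (Kappa=0.728). The relation between the two is Cohen's kappa, which can be computed in a few lines; the label sequences below are invented toy data, not drawn from the corpus.

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance
# agreement), where chance agreement comes from each annotator's
# marginal label distribution.
from collections import Counter

def cohens_kappa(a, b):
    """a, b: equal-length label sequences from two annotators."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example: 3/4 raw agreement, kappa corrects for chance.
a = ["x", "x", "y", "y"]
b = ["x", "x", "y", "x"]
print(round(cohens_kappa(a, b), 3))  # → 0.5
```

Note how kappa (0.5) is lower than the raw agreement (0.75), mirroring the gap between 86.2% and 0.728 in the abstract.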
@inproceedings{passonneau-etal-2008-relation,
    title = "Relation between Agreement Measures on Human Labeling and Machine Learning Performance: Results from an Art History Domain",
    author = "Passonneau, Rebecca and Lippincott, Tom and Yano, Tae and Klavans, Judith",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1167/",
    abstract = "We discuss factors that affect human agreement on a semantic labeling task in the art history domain, based on the results of four experiments where we varied the number of labels annotators could assign, the number of annotators, the type and amount of training they received, and the size of the text span being labeled. Using the labelings from one experiment involving seven annotators, we investigate the relation between interannotator agreement and machine learning performance. We construct binary classifiers and vary the training and test data by swapping the labelings from the seven annotators. First, we find performance is often quite good despite lower than recommended interannotator agreement. Second, we find that on average, learning performance for a given functional semantic category correlates with the overall agreement among the seven annotators for that category. Third, we find that learning performance on the data from a given annotator does not correlate with the quality of that annotator’s labeling. We offer recommendations for the use of labeled data in machine learning, and argue that learners should attempt to accommodate human variation. We also note implications for large scale corpus annotation projects that deal with similarly subjective phenomena.",
}
% __index_level_0__: 83,555
@inproceedings{peirsman-etal-2008-construction,
    title = "The Construction and Evaluation of Word Space Models",
    author = "Peirsman, Yves and De Deyne, Simon and Heylen, Kris and Geeraerts, Dirk",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1168/",
    abstract = "Semantic similarity is a key issue in many computational tasks. This paper goes into the development and evaluation of two common ways of automatically calculating the semantic similarity between two words. On the one hand, such methods may depend on a manually constructed thesaurus like (Euro)WordNet. Their performance is often evaluated on the basis of a very restricted set of human similarity ratings. On the other hand, corpus-based methods rely on the distribution of two words in a corpus to determine their similarity. Their performance is generally quantified through a comparison with the judgements of the first type of approach. This paper introduces a new Gold Standard of more than 5,000 human intra-category similarity judgements. We show that corpus-based methods often outperform (Euro)WordNet on this data set, and that the use of the latter as a Gold Standard for the former is thus often far from ideal.",
}
% __index_level_0__: 83,556
@inproceedings{babko-malaya-2008-annotation,
    title = "Annotation of Nuggets and Relevance in {GALE} Distillation Evaluation",
    author = "Babko-Malaya, Olga",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1169/",
    abstract = "This paper presents an approach to annotation that BAE Systems has employed in the DARPA GALE Phase 2 Distillation evaluation. The purpose of the GALE Distillation evaluation is to quantify the amount of relevant and non-redundant information a distillation engine is able to produce in response to a specific, formatted query; and to compare that amount of information to the amount of information gathered by a bilingual human using commonly available state-of-the-art tools. As part of the evaluation, following NIST evaluation methodology of complex question answering (Voorhees, 2003), human annotators were asked to establish the relevancy of responses as well as the presence of atomic facts or information units, called nuggets of information. This paper discusses various challenges to the annotation of nuggets, called nuggetization, which include interaction between the granularity of nuggets and relevancy of these nuggets to the query in question. The approach proposed in the paper views nuggetization as a procedural task and allows annotators to revisit nuggetization based on the requirements imposed by the relevancy guidelines defined with a specific end-user in mind. This approach is shown in the paper to produce consistent annotations with high inter-annotator agreement scores.",
}
% __index_level_0__: 83,557
@inproceedings{white-etal-2008-statistical,
    title = "Statistical Evaluation of Information Distillation Systems",
    author = "White, J.V. and Hunter, D. and Goldstein, J.D.",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1170/",
    abstract = "We describe a methodology for evaluating the statistical performance of information distillation systems and apply it to a simple illustrative example. (An information distiller provides written English responses to English queries based on automated searches/transcriptions/translations of English and foreign-language sources. The sources include written documents and sound tracks.) The evaluation methodology extracts information nuggets from the distiller response texts and gathers them into fuzzy equivalence classes called nugs. The methodology supports the usual performance metrics, such as recall and precision, as well as a new information-theoretic metric called proficiency, which measures how much information a distiller provides relative to all of the information provided by a collection of distillers working on a common query and corpora. Unlike previous evaluation techniques, the methodology evaluates the relevance, granularity, and redundancy of information nuggets explicitly.",
}
% __index_level_0__: 83,558
@inproceedings{rieser-lemon-2008-automatic,
    title = "Automatic Learning and Evaluation of User-Centered Objective Functions for Dialogue System Optimisation",
    author = "Rieser, Verena and Lemon, Oliver",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel",
    booktitle = "Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)",
    month = may,
    year = "2008",
    address = "Marrakech, Morocco",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L08-1171/",
    abstract = "The ultimate goal when building dialogue systems is to satisfy the needs of real users, but quality assurance for dialogue strategies is a non-trivial problem. The applied evaluation metrics and resulting design principles are often obscure, emerge by trial-and-error, and are highly context dependent. This paper introduces data-driven methods for obtaining reliable objective functions for system design. In particular, we test whether an objective function obtained from Wizard-of-Oz (WOZ) data is a valid estimate of real users’ preferences. We test this in a test-retest comparison between the model obtained from the WOZ study and the models obtained when testing with real users. We can show that, despite a low fit to the initial data, the objective function obtained from WOZ data makes accurate predictions for automatic dialogue evaluation, and, when automatically optimising a policy using these predictions, the improvement over a strategy simply mimicking the data becomes clear from an error analysis.",
}
% __index_level_0__: 83,559
inproceedings
bielicky-smrz-2008-building
Building the Valency Lexicon of {A}rabic Verbs
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1172/
Bielick{\'y}, Viktor and Smr{\v{z}}, Otakar
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes the building of a valency lexicon of Arabic verbs using a morphologically and syntactically annotated corpus, the Prague Arabic Dependency Treebank (PADT), as its primary source. We present the theoretical account on valency developed within the Functional Generative Description (FGD) theory. We apply the framework to Modern Standard Arabic and discuss various valency-related phenomena with respect to examples from the corpus. We then outline the methodology and the linguistic and technical resources used in the building of the lexicon. The key concept in our scenario is that of PDT-VALLEX of Czech. Our lexicon will be developed by linking the conceivable entries with their instances in the treebank. Conversely, the treebank’s annotations will be linked to the lexicon. While a comparable scheme has been developed for Czech, our own contribution is to design and implement this model thoroughly for Arabic and the PADT data. The Arabic valency lexicon is intended for applications in computational parsing or language generation, and for use by human researchers. The proposed valency lexicon will be exploited in particular during further tectogrammatical annotations of PADT and might serve for enriching the expected second edition of the corpus-based Arabic-Czech Dictionary.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,560
inproceedings
roberts-etal-2008-combining
Combining Terminology Resources and Statistical Methods for Entity Recognition: an Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1173/
Roberts, Angus and Gaizauskas, Robert and Hepple, Mark and Guo, Yikun
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Terminologies and other knowledge resources are widely used to aid entity recognition in specialist domain texts. As well as providing lexicons of specialist terms, linkage from the text back to a resource can make additional knowledge available to applications. Use of such resources is especially pertinent in the biomedical domain, where large numbers of these resources are available, and where they are widely used in informatics applications. Terminology resources can be most readily used by simple lexical lookup of terms in the text. A major drawback with such lexical lookup, however, is poor precision caused by ambiguity between domain terms and general language words. We combine lexical lookup with simple filtering of ambiguous terms, to improve precision. We compare this lexical lookup with a statistical method of entity recognition, and to a method which combines the two approaches. We show that the combined method boosts precision with little loss of recall, and that linkage from recognised entities back to the domain knowledge resources can be maintained.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,561
inproceedings
nazar-etal-2008-suite
A Suite to Compile and Analyze an {LSP} Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1174/
Nazar, Rogelio and Vivaldi, Jorge and Cabr{\'e}, Teresa
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a series of tools for the extraction of specialized corpora from the web and its subsequent analysis mainly with statistical techniques. It is an integrated system of original as well as standard tools and has a modular conception that facilitates its re-integration on different systems. The first part of the paper describes the original techniques, which are devoted to the categorization of documents as relevant or irrelevant to the corpus under construction, considering relevant a specialized document of the selected technical domain. Evaluation figures are provided for the original part, but not for the second part involving the analysis of the corpus, which is composed of algorithms that are well known in the field of Natural Language Processing, such as Kwic search, measures of vocabulary richness, the sorting of n-grams by frequency of occurrence or by measures of statistical association, distribution or similarity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,562
inproceedings
blanco-etal-2008-causal
Causal Relation Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1175/
Blanco, Eduardo and Castell, Nuria and Moldovan, Dan
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a supervised method for the detection and extraction of Causal Relations from open domain text. First we give a brief outline of the definition of causation and how it relates to other Semantic Relations, as well as a characterization of their encoding. In this work, we only consider marked and explicit causations. Our approach first identifies the syntactic patterns that may encode a causation, then we use Machine Learning techniques to decide whether or not a pattern instance encodes a causation. We focus on the most productive pattern, a verb phrase followed by a relator and a clause, and its reverse version, a relator followed by a clause and a verb phrase. As relators we consider the words as, after, because and since. We present a set of lexical, syntactic and semantic features for the classification task, their rationale and some examples. The results obtained are discussed and the errors analyzed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,563
inproceedings
chrupala-etal-2008-learning
Learning Morphology with {M}orfette
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1176/
Chrupala, Grzegorz and Dinu, Georgiana and van Genabith, Josef
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Morfette is a modular, data-driven, probabilistic system which learns to perform joint morphological tagging and lemmatization from morphologically annotated corpora. The system is composed of two learning modules which are trained to predict morphological tags and lemmas using the Maximum Entropy classifier. The third module dynamically combines the predictions of the Maximum-Entropy models and outputs a probability distribution over tag-lemma pair sequences. The lemmatization module exploits the idea of recasting lemmatization as a classification task by using class labels which encode mappings from word forms to lemmas. Experimental evaluation results and error analysis on three morphologically rich languages show that the system achieves high accuracy with no language-specific feature engineering or additional resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,564
inproceedings
cui-etal-2008-corpus
Corpus Exploitation from {W}ikipedia for Ontology Construction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1177/
Cui, Gaoying and Lu, Qin and Li, Wenjie and Chen, Yirong
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Ontology construction usually requires a domain-specific corpus for building corresponding concept hierarchy. The domain corpus must have a good coverage of domain knowledge. Wikipedia (Wiki), the world’s largest online encyclopaedic knowledge source, is open-content, collaboratively edited, and free of charge. It covers millions of articles and still keeps on expanding continuously. These characteristics make Wiki a good candidate as domain corpus resource in ontology construction. However, the selected article collection must have considerable quality and quantity. In this paper, a novel approach is proposed to identify articles in Wiki as domain-specific corpus by using available classification information in Wiki pages. The main idea is to generate a domain hierarchy from the hyperlinked pages of Wiki. Only articles strongly linked to this hierarchy are selected as the domain corpus. The proposed approach makes use of linked category information in Wiki pages to produce the hierarchy as a directed graph for obtaining a set of pages in the same connected branch. Ranking and filtering are then done on these pages based on the classification tree generated by the traversal algorithm. The experiment and evaluation results show that Wiki is a good resource for acquiring a relatively high-quality domain-specific corpus for ontology construction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,565
inproceedings
ou-etal-2008-development
Development and Alignment of a Domain-Specific Ontology for Question Answering
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1178/
Ou, Shiyan and Pekar, Viktor and Orasan, Constantin and Spurk, Christian and Negri, Matteo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
With the appearance of Semantic Web technologies, it becomes possible to develop novel, sophisticated question answering systems, where ontologies are usually used as the core knowledge component. In the EU-funded project, QALL-ME, a domain-specific ontology was developed and applied for question answering in the domain of tourism, along with the assistance of two upper ontologies for concept expansion and reasoning. This paper focuses on the development of the QALL-ME ontology in the tourism domain and its alignment with the upper ontologies - WordNet and SUMO. The design of the ontology is presented in the paper, and a semi-automatic alignment procedure is described with some alignment results given as well. Furthermore, the aligned ontology was used to semantically annotate original data obtained from the tourism web sites and natural language questions. The storage schema of the annotated data and the data access method for retrieving answers from the annotated data are also reported in the paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,566
inproceedings
manzano-macho-etal-2008-unsupervised
Unsupervised and Domain Independent Ontology Learning: Combining Heterogeneous Sources of Evidence
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1179/
Manzano-Macho, David and G{\'o}mez-P{\'e}rez, Asunci{\'o}n and Borrajo, Daniel
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Acquiring knowledge from the Web to build domain ontologies has become a common practice in the Ontological Engineering field. The vast amount of freely available information allows collecting enough information about any domain. However, the Web usually suffers a lack of structure, untrustworthiness and ambiguity of the content. These drawbacks hamper the application of unsupervised methods of building ontologies demanded by the increasingly popular applications of the Semantic Web. We believe that the combination of several processing mechanisms and complementary information sources may potentially solve the problem. The analysis of different sources of evidence allows determining with greater reliability the validity of the detected knowledge. In this paper, we present GALeOn (General Architecture for Learning Ontologies) that combines sources and processing resources to provide complementary and redundant evidence for making better estimations about the relevance of the extracted knowledge and their relationships. Our goal in this paper is to show that, by combining several information sources and extraction mechanisms, it is possible to build a taxonomy of concepts with a higher accuracy than if only one of them is applied. The experimental results show how this combination notably increases the precision of the obtained results with minimum user intervention.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,567
inproceedings
potrich-pianta-2008-l
{L}-{ISA}: Learning Domain Specific Isa-Relations from the Web
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1180/
Potrich, Alessandra and Pianta, Emanuele
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Automated extraction of ontological knowledge from text corpora is a relevant task in Natural Language Processing. In this paper, we focus on the problem of finding hypernyms for relevant concepts in a specific domain (e.g. Optical Recording) in the context of a concrete and challenging application scenario (patent processing). To this end information available on the Web is exploited. The extraction method includes four main steps. Firstly, the Google search engine is exploited to retrieve possible instances of isa-patterns reported in the literature. Then, the returned snippets are filtered on the basis of lexico-syntactic criteria (e.g. the candidate hypernym must be expressed as a noun phrase without complex modifiers). In a further filtering step, only candidate hypernyms compatible with the target domain are kept. Finally, a candidate ranking mechanism is applied to select one hypernym as output of the algorithm. The extraction method was evaluated on 100 concepts of the Optical Recording domain. Moreover, the reliability of isa-patterns reported in the literature as predictors of isa-relations was assessed by manually evaluating the template instances remaining after lexico-syntactic filtering, for 3 concepts of the same domain. While more extensive testing is needed, the method appears promising, especially for its portability across different domains.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,568
inproceedings
hartholt-etal-2008-common
A Common Ground for Virtual Humans: Using an Ontology in a Natural Language Oriented Virtual Human Architecture
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1181/
Hartholt, Arno and Russ, Thomas and Traum, David and Hovy, Eduard and Robinson, Susan
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
When dealing with large, distributed systems that use state-of-the-art components, individual components are usually developed in parallel. As development continues, the decoupling invariably leads to a mismatch between how these components internally represent concepts and how they communicate these representations to other components: representations can get out of synch, contain localized errors, or become manageable only by a small group of experts for each module. In this paper, we describe the use of an ontology as part of a complex distributed virtual human architecture in order to enable better communication between modules while improving the overall flexibility needed to change or extend the system. We focus on the natural language understanding capabilities of this architecture and the relationship between language and concepts within the entire system in general and the ontology in particular.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,569
inproceedings
agirre-soroa-2008-using
Using the Multilingual Central Repository for Graph-Based Word Sense Disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1182/
Agirre, Eneko and Soroa, Aitor
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents the results of a graph-based method for performing knowledge-based Word Sense Disambiguation (WSD). The technique exploits the structural properties of the graph underlying the chosen knowledge base. The method is general, in the sense that it is not tied to any particular knowledge base, but in this work we have applied it to the Multilingual Central Repository (MCR). The evaluation has been performed on the Senseval-3 all-words task. The main contributions of the paper are twofold: (1) We have evaluated the separate and combined performance of each type of relation in the MCR, and thus indirectly validated the contents of the MCR and their potential for WSD. (2) We obtain state-of-the-art results, and in fact yield the best results that can be obtained using publicly available data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,570
inproceedings
gey-etal-2008-japanese
A {J}apanese-{E}nglish Technical Lexicon for Translation and Language Research
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1183/
Gey, Fredric and Evans, David Kirk and Kando, Noriko
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present a Japanese-English Bilingual lexicon of technical terms. The lexicon was derived from the first and second NTCIR evaluation collections for research into cross-language information retrieval for Asian languages. While it can be utilized for translation between Japanese and English, the lexicon is also suitable for language research and language engineering. Since it is collection-derived, it contains instances of word variants and miss-spellings which make it eminently suitable for further research. For a subset of the lexicon we make available the collection statistics. In addition we make available a Katakana subset suitable for transliteration research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,571
inproceedings
ha-etal-2008-mutual
Mutual Bilingual Terminology Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1184/
Ha, Le An and Fernandez, Gabriela and Mitkov, Ruslan and Corpas, Gloria
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a novel methodology to perform bilingual terminology extraction, in which automatic alignment is used to improve the performance of terminology extraction for each language. The strengths of monolingual terminology extraction for each language are exploited to improve the performance of terminology extraction in the other language, thanks to the availability of a sentence-level aligned bilingual corpus, and an automatic noun phrase alignment mechanism. The experiment indicates that weaknesses in monolingual terminology extraction due to the limitation of resources in certain languages can be overcome by using another language which has no such limitation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,572
inproceedings
graca-etal-2008-building
Building a Golden Collection of Parallel Multi-Language Word Alignment
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1185/
Gra{\c{c}}a, Jo{\~a}o and Pardal, Joana Paulo and Coheur, Lu{\'i}sa and Caseiro, Diamantino
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper reports an experience on producing manual word alignments over six different language pairs (all combinations between Portuguese, English, French and Spanish) (Gra{\c{c}}a et al., 2008). Word alignment of each language pair is made over the first 100 sentences of the common test set from the Europarl corpora (Koehn, 2005), corresponding to 600 new annotated sentences. This collection is publicly available at http://www.l2f.inesc-id.pt/resources/translation/. It contains, to our knowledge, the first word alignment gold set for the Portuguese language, with three other languages. Besides, it is to our knowledge, the first multi-language manual word aligned parallel corpus, where the same sentences are annotated for each language pair. We started by using the guidelines presented at (Mari{\~n}o, 2005) and performed several refinements: some due to under-specifications on the original guidelines, others because of disagreement on some choices. This led to the development of an extensive new set of guidelines for multi-lingual word alignment annotation that, we believe, makes the alignment process less ambiguous. We evaluate the inter-annotator agreement obtaining an average of 91.6{\%} agreement between the different language pairs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,573
inproceedings
cabrio-etal-2008-qall
The {QALL}-{ME} Benchmark: a Multilingual Resource of Annotated Spoken Requests for Question Answering
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1186/
Cabrio, Elena and Kouylekov, Milen and Magnini, Bernardo and Negri, Matteo and Hasler, Laura and Orasan, Constantin and Tom{\'a}s, David and Vicedo, Jose Luis and Neumann, Guenter and Weber, Corinna
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents the QALL-ME benchmark, a multilingual resource of annotated spoken requests in the tourism domain, freely available for research purposes. The languages currently involved in the project are Italian, English, Spanish and German. It introduces a semantic annotation scheme for spoken information access requests, specifically derived from Question Answering (QA) research. In addition to pragmatic and semantic annotations, we propose three QA-based annotation levels: the Expected Answer Type, the Expected Answer Quantifier and the Question Topical Target of a request, to fully capture the content of a request and extract the sought-after information. The QALL-ME benchmark is developed under the EU-FP6 QALL-ME project which aims at the realization of a shared and distributed infrastructure for Question Answering (QA) systems on mobile devices (e.g. mobile phones). Questions are formulated by the users in free natural language input, and the system returns the actual sequence of words which constitutes the answer from a collection of information sources (e.g. documents, databases). Within this framework, the benchmark has the twofold purpose of training machine learning based applications for QA, and testing their actual performance with a rapid turnaround in a controlled laboratory setting.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,574
inproceedings
campbell-2008-tools
Tools {\&} Resources for Visualising Conversational-Speech Interaction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1187/
Campbell, Nick
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes tools and techniques for accessing large quantities of speech data and for the visualisation of discourse interactions and events at levels above that of linguistic content. We are working with large quantities of dialogue speech including business meetings, friendly discourse, and telephone conversations, and have produced web-based tools for the visualisation of non-verbal and paralinguistic features of the speech data. In essence, they provide higher-level displays so that specific sections of speech, text, or other annotation can be accessed by the researcher and provide an interactive interface to the large amount of data through an Archive Browser.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,575
inproceedings
pazienza-etal-2008-web
A Web Browser Extension for Growing-up Ontological Knowledge from Traditional Web Content
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1188/
Pazienza, Maria Teresa and Pennacchiotti, Marco and Stellato, Armando
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
While the Web is facing interesting new changes in the way users access, interact and even participate to its growth, the most traditional applications dedicated to its fruition: web browsers, are not responding with the same euphoric boost for innovation, mostly relying on third party or open-source community-driven extensions for addressing the new Social and Semantic Web trends and technologies. This technological and decisional gap, which is probably due to the lack of a strong standardization commitment on the one side (Web 2.0/Social Web) and in the delay of massive adherence to new officially approved standards (W3C approved Semantic Web languages), has to be filled by successful stories which could lay the path for the evolution of browsers. In this work we present a novel web browser extension which combines several features coming from the worlds of terminology and information extraction, semantic annotation and knowledge management, to support users in the process of both keeping track of interesting information they find on the web, and organizing its associated content following knowledge representation standards offered by the Semantic Web
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,576
inproceedings
drissi-etal-2008-development
A Development Environment for Configurable Meta-Annotators in a Pipelined {NLP} Architecture
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1189/
Drissi, Youssef and Boguraev, Branimir and Ferrucci, David and Keyser, Paul and Levas, Anthony
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Information extraction from large data repositories is critical to Information Management solutions. In addition to prerequisite corpus analysis, to determine domain-specific characteristics of text resources, developing, refining and evaluating analytics entails a complex and lengthy process, typically requiring more than just domain expertise. Modern architectures for text processing, while facilitating reuse and (re-)composition of analytical pipelines, do place additional constraints upon the analytics development, as domain experts need not only configure individual annotator components, but situate these within a fully functional annotator pipeline. We present the design, and current status, of a tool for configuring model-driven annotators, which abstracts away from annotator implementation details, pipeline composition constraints, and data management. Instead, the tool embodies support for all stages of ontology-centric model development cycle from corpus analysis and concept definition, to model development and testing, to large scale evaluation, to easy and rapid composition of text applications deploying these concept models. With our design, we aim to meet the needs of domain experts, who are not necessarily expert NLP practitioners.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,577
inproceedings
rehm-etal-2008-ontology
Ontology-Based {XQ}uerying of {XML}-Encoded Language Resources on Multiple Annotation Layers
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1190/
Rehm, Georg and Eckart, Richard and Chiarcos, Christian and Dellert, Johannes
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present an approach for querying collections of heterogeneous linguistic corpora that are annotated on multiple layers using arbitrary XML-based markup languages. An OWL ontology provides a homogenising view on the conceptually different markup languages so that a common querying framework can be established using the method of ontology-based query expansion. In addition, we present a highly flexible web-based graphical interface that can be used to query corpora with regard to several different linguistic properties such as, for example, syntactic tree fragments. This interface can also be used for ontology-based querying of multiple corpora simultaneously.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,578
inproceedings
evert-2008-lightweight
A Lightweight and Efficient Tool for Cleaning Web Pages
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1191/
Evert, Stefan
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Originally conceived as a “na{\"i}ve” baseline experiment using traditional n-gram language models as classifiers, the NCleaner system has turned out to be a fast and lightweight tool for cleaning Web pages with state-of-the-art accuracy (based on results from the CLEANEVAL competition held in 2007). Despite its simplicity, the algorithm achieves a significant improvement over the baseline (i.e. plain, uncleaned text dumps), trading off recall for substantially higher precision. NCleaner is available as an open-source software package. It is pre-configured for English Web pages, but can be adapted to other languages with minimal amounts of manually cleaned training data. Since NCleaner does not make use of HTML structure, it can also be applied to existing Web corpora that are only available in plain text format, with a minor loss in classification accuracy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,579
inproceedings
melnar-liu-2008-borrowing
Borrowing Language Resources for Development of Automatic Speech Recognition for Low- and Middle-Density Languages
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1192/
Melnar, Lynette and Liu, Chen
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we describe an approach that both creates crosslingual acoustic monophone model sets for speech recognition tasks and objectively predicts their performance without target-language speech data or acoustic measurement techniques. This strategy is based on a series of linguistic metrics characterizing the articulatory phonetic and phonological distances of target-language phonemes from source-language phonemes. We term these algorithms the Combined Phonetic and Phonological Crosslingual Distance (CPP-CD) metric and the Combined Phonetic and Phonological Crosslingual Prediction (CPP-CP) metric. The particular motivations for this project are the current unavailability and often prohibitively high production cost of speech databases for many strategically important low- and middle-density languages. First, we describe the CPP-CD approach and compare the performance of CPP-CD-specified models to both native language models and crosslingual models selected by the Bhattacharyya acoustic-model distance metric in automatic speech recognition (ASR) experiments. Results confirm that the CPP-CD approach nearly matches those achieved by the acoustic distance metric. We then test the CPP-CP algorithm on the CPP-CD models by comparing the CPP-CP scores to the recognition phoneme error rates. Based on this comparison, we conclude that the CPP-CP algorithm is a reliable indicator of crosslingual model performance in speech recognition tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,580
inproceedings
moller-etal-2008-corpus
Corpus Analysis of Spoken Smart-Home Interactions with Older Users
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1193/
M{\"o}ller, Sebastian and G{\"o}dde, Florian and Wolters, Maria
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we present the collection and analysis of a spoken dialogue corpus obtained from interactions of older and younger users with a smart-home system. Our aim is to identify the amount and the origin of linguistic differences in the way older and younger users address the system. In addition, we investigate changes in the users’ linguistic behaviour after exposure to the system. The results show that the two user groups differ in their speaking style as well as their vocabulary. In contrast to younger users, who adapt their speaking style to the expected limitations of the system, older users tend to use a speaking style that is closer to human-human communication in terms of sentence complexity and politeness. However, older users are far less easy to stereotype than younger users.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,581
inproceedings
georgila-etal-2008-fully
A Fully Annotated Corpus for Studying the Effect of Cognitive Ageing on Users' Interactions with Spoken Dialogue Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1194/
Georgila, Kallirroi and Wolters, Maria and Karaiskos, Vasilis and Kronenthal, Melissa and Logie, Robert and Mayo, Neil and Moore, Johanna and Watson, Matt
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present a corpus of interactions of older and younger users with nine different dialogue systems. The corpus has been fully transcribed and annotated with dialogue acts and “Information State Update” (ISU) representations of dialogue context. Users not only underwent a comprehensive battery of cognitive assessments, but they also rated the usability of each dialogue system on a standardised questionnaire. In this paper, we discuss the corpus collection and outline the semi-automatic methods we used for discourse-level annotations. We expect that the corpus will provide a key resource for modelling older people’s interaction with spoken dialogue systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,582
inproceedings
cucchiarini-etal-2008-recording
Recording Speech of Children, Non-Natives and Elderly People for {HLT} Applications: the {JASMIN}-{CGN} Corpus.
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1195/
Cucchiarini, Catia and Driesen, Joris and Van hamme, Hugo and Sanders, Eric
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Within the framework of the Dutch-Flemish programme STEVIN, the JASMIN-CGN (‘Jongeren, Anderstaligen en Senioren in Mens-machine Interactie’ Corpus Gesproken Nederlands) project was carried out, which was aimed at collecting speech of children, non-natives and elderly people. The JASMIN-CGN project is an extension of the Spoken Dutch Corpus (CGN) along three dimensions. First, by collecting a corpus of contemporary Dutch as spoken by children of different age groups, elderly people and non-natives with different mother tongues, an extension along the age and mother tongue dimensions was achieved. In addition, we collected speech material in a communication setting that was not envisaged in the CGN: human-machine interaction. One third of the data was collected in Flanders and two thirds in the Netherlands. In this paper we report on our experiences in collecting this corpus and we describe some of the important decisions that we made in the attempt to combine efficiency and high quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,583
inproceedings
draxler-etal-2008-f0
{F}0 of Adolescent Speakers - First Results for the {G}erman "{P}h@tt{S}essionz" Database
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1196/
Draxler, Christoph and Schiel, Florian and Ellbogen, Tania
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The first release of the German Ph@ttSessionz speech database contains read and spontaneous speech from 864 adolescent speakers and is the largest database of its kind for German. It was recorded via the WWW in over 40 public schools in all dialect regions of Germany. In this paper, we present a cross-sectional study of f0 measurements on this database. The study documents the profound changes in male voices at ages 13-15. Furthermore, it shows that on a perceptive mel-scale, there is little difference in the relative f0 variability for male and female speakers. A closer analysis reveals that f0 variability is dependent on the speech style and both the length and the type of the utterance. The study provides statistically reliable voice parameters of adolescent speakers for German. The results may contribute to making spoken dialog systems more robust by restricting user input to utterances with low f0 variability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,585
inproceedings
wilks-etal-2008-dialogue
Dialogue, Speech and Images: the Companions Project Data Set
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1197/
Wilks, Yorick and Benyon, David and Brewster, Christopher and Ircing, Pavel and Mival, Oli
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes part of the corpus collection efforts underway in the EC funded Companions project. The Companions project is collecting substantial quantities of dialogue, a large part of which focuses on reminiscing about photographs. The texts are in English and Czech. We describe the context and objectives for which this dialogue corpus is being collected, the methodology being used and make observations on the resulting data. The corpora will be made available to the wider research community through the Companions Project web site.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,586
inproceedings
goldstein-stewart-etal-2008-creating
Creating and Using a Correlated Corpus to Glean Communicative Commonalities
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1198/
Goldstein-Stewart, Jade and Goodwin, Kerri and Sabin, Roberta and Winder, Ransom
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a collection of correlated communicative samples collected from the same individuals across six diverse genres. Three of the genres were computer mediated: email, blog, and chat, and three non-computer-mediated: essay, interview, and discussion. Participants were drawn from a college student population with an equal number of males and females recruited. All communication expressed opinion on six pre-selected, current topics that had been determined to stimulate communication. The experimental design including methods of collection, randomization of scheduling of genre order and topic order is described. Preliminary results for two descriptive metrics, word count and Flesch readability, are presented. Interesting and, in some cases, significant effects were observed across genres by topic and by gender of participant. This corpus will provide a resource to investigate communication stylistics of individuals across genres, the identification of individuals from correlated data, as well as commonalities and differences across samples that agree in genre, topic, and/or gender of participant.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,587
inproceedings
catizone-etal-2008-information
Information Extraction Tools and Methods for Understanding Dialogue in a Companion
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1199/
Catizone, Roberta and Dingli, Alexiei and Pinto, Hugo and Wilks, Yorick
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper discusses how Information Extraction is used to understand and manage Dialogue in the EU-funded Companions project. This will be discussed with respect to the Senior Companion, one of two applications under development in the EU-funded Companions project. Over the last few years, research in human-computer dialogue systems has increased and much attention has focused on applying learning methods to improving a key part of any dialogue system, namely the dialogue manager. Since the dialogue manager in all dialogue systems relies heavily on the quality of the semantic interpretation of the user’s utterance, our research in the Companions project focuses on how to improve the semantic interpretation and combine it with knowledge from the Knowledge Base to increase the performance of the Dialogue Manager. Traditionally the semantic interpretation of a user utterance is handled by a natural language understanding module which embodies a variety of natural language processing techniques, from sentence splitting, to full parsing. In this paper we discuss the use of a variety of NLU processes and in particular Information Extraction as a key part of the NLU module in order to improve performance of the dialogue manager and hence the overall dialogue system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,588
inproceedings
gallo-etal-2008-production
Production in a Multimodal Corpus: how Speakers Communicate Complex Actions
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1200/
Gallo, Carlos G{\'o}mez and Jaeger, T. Florian and Allen, James and Swift, Mary
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We describe a new multimodal corpus currently under development. The corpus consists of videos of task-oriented dialogues that are annotated for speaker’s verbal requests and domain action executions. This resource provides data for new research on language production and comprehension. The corpus can be used to study speakers’ decisions as to how to structure their utterances given the complexity of the message they are trying to convey.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,589
inproceedings
bunt-overbeeke-2008-towards
Towards Formal Interpretation of Semantic Annotation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1201/
Bunt, Harry and Overbeeke, Chwhynny
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper we present a novel approach to the incremental incorporation of semantic information in natural language processing which does not fall victim to the notorious problems of ambiguity and lack of robustness, namely through the formal interpretation of semantic annotation. We present a formal semantics for a language for the integrated annotation of several types of semantic information, such as (co-)reference relations, temporal information, and semantic roles. This semantics has the form of a compositional translation into second-order logic. We show that a truly semantic approach to the annotation of different types of semantic information raises interesting issues relating to the borders between these areas of semantics, and to the consistency of semantic annotations in multiple areas or in multiple annotation layers. The approach is compositional, in the sense that every well-formed subexpression of the annotation language can be translated to formal logic (and hence interpreted) independent of the rest of the annotation structure. The approach is also incremental in the sense that it is designed to be extendable to the semantic annotation of many other types of semantic information, such as spatial information, noun-noun relations, or quantification and modification structures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,590
inproceedings
pennacchiotti-etal-2008-towards
Towards a Vector Space Model for {F}rame{N}et-like Resources
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1202/
Pennacchiotti, Marco and De Cao, Diego and Marocco, Paolo and Basili, Roberto
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we present an original framework to model frame semantic resources (namely, FrameNet) using minimal supervision. This framework can be leveraged both to expand an existing FrameNet with new knowledge, and to induce a FrameNet in a new language. Our hypothesis is that a frame semantic resource can be modeled and represented by a suitable semantic space model. The intuition is that semantic spaces are an effective model of the notion of “being characteristic of a frame” for both lexical elements and full sentences. The paper gives two main contributions. First, it shows that our hypothesis is valid and can be successfully implemented. Second, it explores different types of semantic VSMs, outlining which one is more suitable for representing a frame semantic resource. In the paper, VSMs are used for modeling the linguistic core of a frame, the lexical units. Indeed, if the hypothesis is verified for these units, the proposed framework has a much wider application.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,591
inproceedings
smrz-2008-knofusius
{K}no{F}usius: a New Knowledge Fusion System for Interpretation of Gene Expression Data
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1203/
Smr{\v{z}}, Pavel
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper introduces a new architecture that aims at combining molecular biology data with information automatically extracted from relevant scientific literature (using text mining techniques on PubMed abstracts and fulltext papers) to help biomedical experts to interpret experimental results in hand. The infrastructural level bears on semantic-web technologies and standards that facilitate the actual fusion of the multi-source knowledge.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,592
inproceedings
heylen-etal-2008-modelling
Modelling Word Similarity: an Evaluation of Automatic Synonymy Extraction Algorithms.
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1204/
Heylen, Kris and Peirsman, Yves and Geeraerts, Dirk and Speelman, Dirk
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Vector-based models of lexical semantics retrieve semantically related words automatically from large corpora by exploiting the property that words with a similar meaning tend to occur in similar contexts. Despite their increasing popularity, it is unclear which kind of semantic similarity they actually capture and for which kind of words. In this paper, we use three vector-based models to retrieve semantically related words for a set of Dutch nouns and we analyse whether three linguistic properties of the nouns influence the results. In particular, we compare results from a dependency-based model with those from a 1st and 2nd order bag-of-words model and we examine the effect of the nouns’ frequency, semantic specificity and semantic class. We find that all three models find more synonyms for high-frequency nouns and those belonging to abstract semantic classes. Semantic specificity does not have a clear influence.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,593
inproceedings
cleuren-etal-2008-childrens
Children`s Oral Reading Corpus ({CHOREC}): Description and Assessment of Annotator Agreement
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1205/
Cleuren, Leen and Duchateau, Jacques and Ghesqui{\`e}re, Pol and Van hamme, Hugo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Within the scope of the SPACE project, the CHildren’s Oral REading Corpus (CHOREC) is developed. This database contains recorded, transcribed and annotated read speech (42 GB or 130 hours) of 400 Dutch speaking elementary school children with or without reading difficulties. Analyses of inter- and intra-annotator agreement are carried out in order to investigate the consistency with which reading errors are detected, orthographic and phonetic transcriptions are made, and reading errors and reading strategies are labeled. Percentage agreement scores and kappa values both show that agreement between annotations, and therefore the quality of the annotations, is high. Taken all double or triple annotations (for 10{\%} resp. 30{\%} of the corpus) together, {\%} agreement varies between 86.4{\%} and 98.6{\%}, whereas kappa varies between 0.72 and 0.97 depending on the annotation tier that is being assessed. School type and reading type seem to account for systematic differences in {\%} agreement, but these differences disappear when kappa values are calculated that correct for chance agreement. To conclude, an analysis of the annotation differences with respect to the ’*s’ label (i.e. a label that is used to annotate undistinguishable spelling behaviour), phoneme labels, reading strategy and error labels is given.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,594
inproceedings
caselli-etal-2008-bilingual
A Bilingual Corpus of Inter-linked Events
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1206/
Caselli, Tommaso and Ide, Nancy and Bartolini, Roberto
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes the creation of a bilingual corpus of inter-linked events for Italian and English. Linkage is accomplished through the Inter-Lingual Index (ILI) that links ItalWordNet with WordNet. The availability of this resource, on the one hand, enables contrastive analysis of the linguistic phenomena surrounding events in both languages, and on the other hand, can be used to perform multilingual temporal analysis of texts. In addition to describing the methodology for construction of the inter-linked corpus and the analysis of the data collected, we demonstrate that the ILI could potentially be used to bootstrap the creation of comparable corpora by exporting layers of annotation for words that have the same sense.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,595
inproceedings
strassel-etal-2008-new
New Resources for Document Classification, Analysis and Translation Technologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1207/
Strassel, Stephanie and Friedman, Lauren and Ismael, Safa and Brandschain, Linda
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The goal of the DARPA MADCAT (Multilingual Automatic Document Classification Analysis and Translation) Program is to automatically convert foreign language text images into English transcripts, for use by humans and downstream applications. The first phase of the program focuses on translation of handwritten Arabic documents. Linguistic Data Consortium (LDC) is creating publicly available linguistic resources for MADCAT technologies, on a scale and richness not previously available. Corpora will consist of existing LDC corpora and data donations from MADCAT partners, plus new data collection to provide high quality material for evaluation and to address strategic gaps (for genre, dialect, image quality, etc.) in the existing resources. Training and test data properties will expand over time to encompass a wide range of topics and genres: letters, diaries, training manuals, brochures, signs, ledgers, memos, instructions, postcards and forms among others. Data will be ground truthed, with line, word and token segmentation and zoning, and translations and word alignments will be produced for a subset. Evaluation data will be carefully selected from the available data pools and high quality references will be produced, which can be used to compare MADCAT system performance against the human-produced gold standard.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,596
inproceedings
tomanek-hahn-2008-approximating
Approximating Learning Curves for Active-Learning-Driven Annotation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1208/
Tomanek, Katrin and Hahn, Udo
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Active learning (AL) is getting more and more popular as a methodology to considerably reduce the annotation effort when building training material for statistical learning methods for various NLP tasks. A crucial issue rarely addressed, however, is when to actually stop the annotation process to profit from the savings in effort. This question is tightly related to estimating the classifier performance after a certain amount of data has already been annotated. While learning curves are the default means to monitor the progress of the annotation process in terms of classifier performance, this requires a labeled gold standard which - in realistic annotation settings, at least - is often unavailable. We here propose a method for committee-based AL to approximate the progression of the learning curve based on the disagreement among the committee members. This method relies on a separate, unlabeled corpus and is thus well suited for situations where a labeled gold standard is not available or would be too expensive to obtain. Considering named entity recognition as a test case we provide empirical evidence that this approach works well under simulation as well as under real-world annotation conditions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,597
inproceedings
trippel-etal-2008-lexicon
Lexicon Schemas and Related Data Models: when Standards Meet Users
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1209/
Trippel, Thorsten and Maxwell, Michael and Corbett, Greville and Prince, Cambell and Manning, Christopher and Grimes, Stephen and Moran, Steve
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Lexicon schemas and their use are discussed in this paper from the perspective of lexicographers and field linguists. A variety of lexicon schemas have been developed, with goals ranging from computational lexicography (DATR) through archiving (LIFT, TEI) to standardization (LMF, FSR). A number of requirements for lexicon schemas are given. The lexicon schemas are introduced and compared to each other in terms of conversion and usability for this particular user group, using a common lexicon entry and providing examples for each schema under consideration. The formats are assessed and the final recommendation is given for the potential users, namely to request standard compliance from the developers of the tools used. This paper should foster a discussion between authors of standards, lexicographers and field linguists.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,598
inproceedings
messiant-etal-2008-lexschem
{L}ex{S}chem: a Large Subcategorization Lexicon for {F}rench Verbs
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1210/
Messiant, C{\'e}dric and Poibeau, Thierry and Korhonen, Anna
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents LexSchem - the first large, fully automatically acquired subcategorization lexicon for French verbs. The lexicon includes subcategorization frame and frequency information for 3297 French verbs. When evaluated on a set of 20 test verbs against a gold standard dictionary, it shows 0.79 precision, 0.55 recall and 0.65 F-measure. We have made this resource freely available to the research community on the web.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,599
inproceedings
rodriguez-etal-2008-arabic
{A}rabic {W}ord{N}et: Semi-automatic Extensions using {B}ayesian Inference
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1211/
Rodr{\'i}guez, Horacio and Farwell, David and Ferreres, Javi and Bertran, Manuel and Alkhalifa, Musa and Mart{\'i}, M. Antonia
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This presentation focuses on the semi-automatic extension of Arabic WordNet (AWN) using lexical and morphological rules and applying Bayesian inference. We briefly report on the current status of AWN and propose a way of extending its coverage by taking advantage of a limited set of highly productive Arabic morphological rules for deriving a range of semantically related word forms from verb entries. The application of this set of rules, combined with the use of bilingual Arabic-English resources and Princeton’s WordNet, allows the generation of a graph representing the semantic neighbourhood of the original word. In previous work, a set of associations between the hypothesized Arabic words and English synsets was proposed on the basis of this graph. Here, a novel approach to extending AWN is presented whereby a Bayesian Network is automatically built from the graph and then the net is used as an inferencing mechanism for scoring the set of candidate associations. Both on its own and in combination with the previous technique, this new approach has led to improved results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,600
inproceedings
sainz-etal-2008-subjective
Subjective Evaluation of an Emotional Speech Database for {B}asque
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1212/
Sainz, I{\~n}aki and Saratxaga, Ibon and Navas, Eva and Hern{\'a}ez, Inmaculada and Sanchez, Jon and Luengo, Iker and Odriozola, Igor
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes the evaluation process of an emotional speech database recorded for standard Basque, in order to determine its adequacy for the analysis of emotional models and its use in speech synthesis. The corpus consists of seven hundred semantically neutral sentences that were recorded for the Big Six emotions and neutral style, by two professional actors. The test results show that every emotion is readily recognized far above chance level for both speakers. Therefore the database is a valid linguistic resource for the research and development purposes it was designed for.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,601
inproceedings
kubler-etal-2008-compare
How to Compare Treebanks
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1213/
K{\"u}bler, Sandra and Maier, Wolfgang and Rehbein, Ines and Versley, Yannick
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Recent years have seen an increasing interest in developing standards for linguistic annotation, with a focus on the interoperability of the resources. This effort, however, requires a profound knowledge of the advantages and disadvantages of linguistic annotation schemes in order to avoid importing the flaws and weaknesses of existing encoding schemes into the new standards. This paper addresses the question how to compare syntactically annotated corpora and gain insights into the usefulness of specific design decisions. We present an exhaustive evaluation of two German treebanks with crucially different encoding schemes. We evaluate three different parsers trained on the two treebanks and compare results using EvalB, the Leaf-Ancestor metric, and a dependency-based evaluation. Furthermore, we present TePaCoC, a new testsuite for the evaluation of parsers on complex German grammatical constructions. The testsuite provides a well thought-out error classification, which enables us to compare parser output for parsers trained on treebanks with different encoding schemes and provides interesting insights into the impact of treebank annotation schemes on specific constructions like PP attachment or non-constituent coordination.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,602
inproceedings
besancon-etal-2008-infile
The {INFILE} Project: a Crosslingual Filtering Systems Evaluation Campaign
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1214/
Besan{\c{c}}on, Romaric and Chaudiron, St{\'e}phane and Mostefa, Djamel and Timimi, Isma{\"i}l and Choukri, Khalid
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The InFile project (INformation, FILtering, Evaluation) is a cross-language adaptive filtering evaluation campaign, sponsored by the French National Research Agency. The campaign is organized by the CEA LIST, ELDA and the University of Lille3-GERiiCO. It has an international scope as it is a pilot track of the CLEF 2008 campaigns. The corpus is built from a collection of about 1.4 million newswires (10 GB) in three languages, Arabic, English and French, provided by the French news agency Agence France-Presse (AFP) and selected from a 3-year period. The profiles corpus is made of 50 profiles, of which 30 concern general news and events (national and international affairs, politics, sports, etc.) and 20 concern scientific and technical subjects.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,603
inproceedings
tufis-ceausu-2008-diac
{DIAC}+: a Professional Diacritics Recovering System
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1215/
Tufi{\c{s}}, Dan and Ceau{\c{s}}u, Alexandru
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In languages that use diacritical characters, if these special signs are stripped-off from a word, the resulted string of characters may not exist in the language, and therefore its normative form is, in general, easy to recover. However, this is not always the case, as the presence or absence of a diacritical sign attached to a base letter of a word which exists in both variants may change its grammatical properties or even the meaning, making the recovery of the missing diacritics a difficult task, not only for a program but sometimes even for a human reader. We describe and evaluate an accurate knowledge-based system for automatic recovery of the missing diacritics in MS-Office documents written in Romanian. For the rare cases when the system is not able to make a reliable decision, it either provides the user a list of words with their recovery suggestions, or probabilistically chooses one of the possible changes, but leaves a trace (a highlighted comment) on each word the modification of which was uncertain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,604
inproceedings
abuhakema-etal-2008-annotating
Annotating an {A}rabic Learner Corpus for Error
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1216/
Abuhakema, Ghazi and Faraj, Reem and Feldman, Anna and Fitzpatrick, Eileen
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes an ongoing project in which we are collecting a learner corpus of Arabic, developing a tagset for error annotation and performing Computer-aided Error Analysis (CEA) on the data. We adapted the French Interlanguage Database FRIDA tagset (Granger, 2003a) to the data. We chose FRIDA in order to follow a known standard and to see whether the changes needed to move from a French to an Arabic tagset would give us a measure of the distance between the two languages with respect to learner difficulty. The current collection of texts, which is constantly growing, contains intermediate and advanced-level student writings. We describe the need for such corpora, the learner data we have collected and the tagset we have developed. We also describe the error frequency distribution of both proficiency levels and the ongoing work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,605
inproceedings
reynaert-2008-errors
All, and only, the Errors: more Complete and Consistent Spelling and {OCR}-Error Correction Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1217/
Reynaert, Martin
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Some time in the future, some spelling error correction system will correct all the errors, and only the errors. We need evaluation metrics that will tell us when this has been achieved and that can help guide us there. We survey the current practice in the form of the evaluation scheme of the latest major publication on spelling correction in a leading journal. We are forced to conclude that while the metric used there can tell us exactly when the ultimate goal of spelling correction research has been achieved, it offers little in the way of directions to be followed to eventually get there. We propose to consistently use the well-known metrics Recall and Precision, as combined in the F score, on 5 possible levels of measurement that should guide us more informatively along that path. We describe briefly what is then measured or measurable at these levels and propose a framework that should allow for concisely stating what it is one performs in one’s evaluations. We finally contrast our preferred metrics to Accuracy, which is widely used in this field to this day, and to the Area-Under-the-Curve, which is increasingly finding acceptance in other fields.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,606
inproceedings
itamar-itai-2008-using
Using Movie Subtitles for Creating a Large-Scale Bilingual Corpora
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1218/
Itamar, Einav and Itai, Alon
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents a method for compiling a large-scale bilingual corpus from a database of movie subtitles. To create the corpus, we propose an algorithm based on Gale and Church’s sentence alignment algorithm (1993). However, our algorithm not only relies on character length information, but also uses subtitle-timing information, which is encoded in the subtitle files. Timing is highly correlated between subtitles in different versions (for the same movie), since subtitles that match should be displayed at the same time. However, the absolute time values can’t be used for alignment, since the timing is usually specified by frame numbers and not by real time, and converting it to real time values is not always possible, hence we use normalized subtitle duration instead. This results in a significant reduction in the alignment error rate.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,607
inproceedings
van-son-etal-2008-ifadv
The {IFADV} Corpus: a Free Dialog Video Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1219/
van Son, Rob and Wesseling, Wieneke and Sanders, Eric and van den Heuvel, Henk
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Research into spoken language has become more visual over the years. Both fundamental and applied research have progressively included gestures, gaze, and facial expression. Corpora of multi-modal conversational speech are rare and frequently difficult to use due to privacy and copyright restrictions. A freely available annotated corpus is presented, gratis and libre, of high quality video recordings of face-to-face conversational speech. Annotations include orthography, POS tags, and automatically generated phoneme transcriptions and word boundaries. In addition, labeling of both simple conversational function and gaze direction has been performed. Within the bounds of the law, everything has been done to remove copyright and use restrictions. Annotations have been processed to RDBMS tables that allow SQL queries and direct connections to statistical software. From our experiences we would like to advocate the formulation of “best practices” for both legal handling and database storage of recordings and annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,608
inproceedings
brutti-etal-2008-woz
{WOZ} Acoustic Data Collection for Interactive {TV}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1220/
Brutti, Alessio and Cristoforetti, Luca and Kellermann, Walter and Marquardt, Lutz and Omologo, Maurizio
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a multichannel acoustic data collection recorded under the European DICIT project, during the Wizard of Oz (WOZ) experiments carried out at FAU and FBK-irst laboratories. The scenario is a distant-talking interface for interactive control of a TV. The experiments involve the acquisition of multichannel data for signal processing front-end and were carried out due to the need to collect a database for testing acoustic pre-processing algorithms. In this way, realistic scenarios can be simulated at a preliminary stage, instead of real-time implementations, allowing for repeatable experiments. To match the project requirements, the WOZ experiments were recorded in three languages: English, German and Italian. Besides the user inputs, the database also contains non-speech related acoustic events, room impulse response measurements and video data, the latter used to compute 3D labels. Sessions were manually transcribed and segmented at word level, introducing also specific labels for acoustic events.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,609
inproceedings
lounela-2008-process
Process Model for Composing High-quality Text Corpora
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1221/
Lounela, Mikko
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The Teko corpus composing model offers a decentralized, dynamic way of collecting high-quality text corpora for linguistic research. The resulting corpus consists of independent text sets. The sets are composed in cooperation with linguistic research projects, so each of them responds to a specific research need. The corpora are morphologically annotated and XML-based, with in-built compatibility with the Kaino user interface used in the corpus server of the Research Institute for the Languages of Finland. Furthermore, software for extracting standard quantitative reports from the text sets has been created during the project. The paper describes the project, and estimates its benefits and problems. It also gives an overview of the technical qualities of the corpora and corpus interface connected to the Teko project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,610
inproceedings
taule-etal-2008-ancora
{A}n{C}ora: Multilevel Annotated Corpora for {C}atalan and {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1222/
Taul{\'e}, Mariona and Mart{\'i}, M. Ant{\`o}nia and Recasens, Marta
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper presents AnCora, a multilingual corpus annotated at different linguistic levels consisting of 500,000 words in Catalan (AnCora-Ca) and in Spanish (AnCora-Es). At present AnCora is the largest multilayer annotated corpus of these languages freely available from \url{http://clic.ub.edu/ancora}. The two corpora consist mainly of newspaper texts annotated at different levels of linguistic description: morphological (PoS and lemmas), syntactic (constituents and functions), and semantic (argument structures, thematic roles, semantic verb classes, named entities, and WordNet nominal senses). All resulting layers are independent of each other, thus making data management easier. The annotation was performed manually, semiautomatically, or fully automatically, depending on the encoded linguistic information. The development of these basic resources constituted a primary objective, since there was a lack of such resources for these languages. A second goal was the definition of a consistent methodology that can be followed in further annotations. The current versions of AnCora have been used in several international evaluation competitions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,611
inproceedings
purpura-etal-2008-u
The {U}.{S}. Policy Agenda Legislation Corpus Volume 1 - a Language Resource from 1947 - 1998
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1223/
Purpura, Stephen and Wilkerson, John and Hillard, Dustin
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We introduce the corpus of United States Congressional bills from 1947 to 1998 for use by language research communities. The U.S. Policy Agenda Legislation Corpus Volume 1 (USPALCV1) includes more than 375,000 legislative bills annotated with a hierarchical policy area category. The human annotations in USPALCV1 have been reliably applied over time to enable social science analysis of legislative trends. The corpus is a member of an emerging family of corpora that are annotated by policy area to enable comparative parallel trend recognition across countries and domains (legislation, political speeches, newswire articles, budgetary expenditures, web sites, etc.). This paper describes the origins of the corpus, its creation, ways to access it, design criteria, and an analysis with common supervised machine learning methods. The use of machine learning methods establishes a proposed baseline model for the topic classification of legal documents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,612
inproceedings
bensley-hickl-2008-unsupervised
Unsupervised Resource Creation for Textual Inference Applications
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1224/
Bensley, Jeremy and Hickl, Andrew
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper explores how a battery of unsupervised techniques can be used in order to create large, high-quality corpora for textual inference applications, such as systems for recognizing textual entailment (TE) and textual contradiction (TC). We show that it is possible to automatically generate sets of positive and negative instances of textual entailment and contradiction from textual corpora with greater than 90{\%} precision. We describe how we generated more than 1 million TE pairs - and a corresponding set of 500,000 TC pairs - from the documents found in the 2 GB AQUAINT-2 newswire corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,613
inproceedings
dickinson-jochim-2008-simple
A Simple Method for Tagset Comparison
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1225/
Dickinson, Markus and Jochim, Charles
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Based on the idea that local contexts predict the same basic category across a language, we develop a simple method for comparing tagsets across corpora. The principal differences between tagsets are evidenced by variation in categories in one corpus in the same contexts where another corpus exhibits only a single tag. Such mismatches highlight differences in the definitions of tags which are crucial when porting technology from one annotation scheme to another.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,614
inproceedings
oostdijk-etal-2008-coi
From {D}-Coi to {S}o{N}a{R}: a reference corpus for {D}utch
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1226/
Oostdijk, Nelleke and Reynaert, Martin and Monachesi, Paola and Van Noord, Gertjan and Ordelman, Roeland and Schuurman, Ineke and Vandeghinste, Vincent
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The computational linguistics community in The Netherlands and Belgium has long recognized the dire need for a major reference corpus of written Dutch. In part to answer this need, the STEVIN programme was established. To pave the way for the effective building of a 500-million-word reference corpus of written Dutch, a pilot project was established. The Dutch Corpus Initiative project or D-Coi was highly successful in that it not only realized about 10{\%} of the projected large reference corpus, but also established the best practices and developed all the protocols and the necessary tools for building the larger corpus within the confines of a necessarily limited budget. We outline the steps involved in an endeavour of this kind, including the major highlights and possible pitfalls. Once converted to a suitable XML format, further linguistic annotation based on the state-of-the-art tools developed either before or during the pilot by the consortium partners proved easily and fruitfully applicable. Linguistic enrichment of the corpus includes PoS tagging, syntactic parsing and semantic annotation, involving both semantic role labeling and spatiotemporal annotation. D-Coi is expected to be followed by SoNaR, during which the 500-million-word reference corpus of Dutch should be built.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,615
inproceedings
ozaku-etal-2008-relationships
Relationships between Nursing Conversations and Activities
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1227/
Ozaku, Hiromi Itoh and Abe, Akinori and Sagara, Kaoru and Kogure, Kiyoshi
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this paper, we determine the relationships between nursing activities and nursing conversations based on the principle of maximum entropy. For analysis of the features of nursing activities, we built nursing corpora from actual nursing conversation sets collected in hospitals, which include various information about nursing activities. Ex-nurses manually assigned nursing activity information to the nursing conversations in the corpora. Since it is inefficient and too expensive to attach all information manually, we introduced an automatic nursing activity determination method for which we built models of relationships between nursing conversations and activities. In this paper, we adopted a maximum entropy approach for learning. Even though the conversation data set is not large enough for learning, acceptable results were obtained.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,616
inproceedings
glenn-etal-2008-management
Management of Large Annotation Projects Involving Multiple Human Judges: a Case Study of {GALE} Machine Translation Post-editing
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1228/
Glenn, Meghan Lammie and Strassel, Stephanie and Friedman, Lauren and Lee, Haejoong and Medero, Shawn
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Managing large groups of human judges to perform any annotation task is a challenge. Linguistic Data Consortium coordinated the creation of manual machine translation post-editing results for the DARPA Global Autonomous Language Exploitation (GALE) Program. Machine translation is one of three core technology components for GALE, which includes an annual MT evaluation administered by the National Institute of Standards and Technology. Among the training and test data LDC creates for the GALE program are gold standard translations for system evaluation. The GALE machine translation system evaluation metric is edit distance, measured by HTER (human translation edit rate), which calculates the minimum number of changes required for highly-trained human editors to correct MT output so that it has the same meaning as the reference translation. LDC has been responsible for overseeing the post-editing process for GALE. We describe some of the accomplishments and challenges of completing the post-editing effort, including developing a new web-based annotation workflow system, and recruiting and training human judges for the task. In addition, we suggest that the workflow system developed for post-editing could be ported efficiently to other annotation efforts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,617
inproceedings
hammarstrom-etal-2008-bootstrapping
Bootstrapping Language Description: the case of {M}piemo ({B}antu {A}, {C}entral {A}frican {R}epublic)
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1229/
Hammarstr{\"o}m, Harald and Thornell, Christina and Petzell, Malin and Westerlund, Torbj{\"o}rn
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Linguists have long been producing grammatical descriptions of yet undescribed languages. This is a time-consuming process, which has already adapted to improved technology for recording and storage. We present here a novel application of NLP techniques to bootstrap analysis of collected data and speed up manual selection work. To be more precise, we argue that unsupervised induction of morphology and part-of-speech analysis from raw text data is mature enough to produce useful results. Experiments with Latent Semantic Analysis were less fruitful. We exemplify this on Mpiemo, a so-far essentially undescribed Bantu language of the Central African Republic, for which raw text data was available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,618
inproceedings
sato-etal-2008-automatic
Automatic Assessment of {J}apanese Text Readability Based on a Textbook Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1230/
Sato, Satoshi and Matsuyoshi, Suguru and Kondoh, Yohsuke
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes a method of readability measurement of Japanese texts based on a newly compiled textbook corpus. The textbook corpus consists of 1,478 sample passages extracted from 127 textbooks of elementary school, junior high school, high school, and university; it is divided into thirteen grade levels and the total size is about a million characters. For a given text passage, the readability measurement method determines the grade level to which the passage is the most similar by using character-unigram models, which are constructed from the textbook corpus. Because this method does not require sentence-boundary analysis and word-boundary analysis, it is applicable to texts that include incomplete sentences and non-regular text fragments. The performance of this method, which is measured by the correlation coefficient, is considerably high (R {\ensuremath{>}} 0.9); when the length of a text passage is limited to 25 characters, the correlation coefficient is still high (R = 0.83).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,619
inproceedings
thompson-etal-2008-building
Building a Bio-Event Annotated Corpus for the Acquisition of Semantic Frames from Biomedical Corpora
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1231/
Thompson, Paul and Cotter, Philip and McNaught, John and Ananiadou, Sophia and Montemagni, Simonetta and Trabucco, Andrea and Venturi, Giulia
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper reports on the design and construction of a bio-event annotated corpus which was developed with a specific view to the acquisition of semantic frames from biomedical corpora. We describe the adopted annotation scheme and the annotation process, which is supported by a dedicated annotation tool. The annotated corpus contains 677 abstracts of biomedical research articles.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,620
inproceedings
rupp-etal-2008-language
Language Resources and Chemical Informatics
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1232/
Rupp, C.J. and Copestake, Ann and Corbett, Peter and Murray-Rust, Peter and Siddharthan, Advaith and Teufel, Simone and Waldron, Benjamin
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Chemistry research papers are a primary source of information about chemistry, as in any scientific field. The presentation of the data is, predominantly, unstructured information, and so not immediately susceptible to processes developed within chemical informatics for carrying out chemistry research by information processing techniques. At one level, extracting the relevant information from research papers is a text mining task, requiring both extensive language resources and specialised knowledge of the subject domain. However, the papers also encode information about the way the research is conducted and the structure of the field itself. Applying language technology to research papers in chemistry can facilitate eScience on several different levels. The SciBorg project sets out to provide an extensive, analysed corpus of published chemistry research. This relies on the cooperation of several journal publishers to provide papers in an appropriate form. The work is carried out as a collaboration involving the Computer Laboratory, Chemistry Department and eScience Centre at Cambridge University, and is funded under the UK eScience programme.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,621
inproceedings
hahn-etal-2008-semantic
Semantic Annotations for Biology: a Corpus Development Initiative at the Jena University Language {\&} Information Engineering ({JULIE}) Lab
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1233/
Hahn, Udo and Beisswanger, Elena and Buyko, Ekaterina and Poprat, Michael and Tomanek, Katrin and Wermter, Joachim
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We provide an overview of corpus building efforts at the Jena University Language {\&} Information Engineering (JULIE) Lab which are focused on life science documents. Special emphasis is laid on semantic annotations in terms of a large amount of biomedical named entities (almost 100 entity types), semantic relations, as well as discourse phenomena, reference relations in particular.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,622
inproceedings
quochi-etal-2008-lexicon
A lexicon for biology and bioinformatics: the {BOOTS}trep experience.
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1234/
Quochi, Valeria and Monachini, Monica and Del Gratta, Riccardo and Calzolari, Nicoletta
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
This paper describes the design, implementation and population of a lexical resource for biology and bioinformatics (the BioLexicon) developed within an ongoing European project. The aim of this project is text-based knowledge harvesting for support to information extraction and text mining in the biomedical domain. The BioLexicon is a large-scale lexical-terminological resource encoding different information types in one single integrated resource. In the design of the resource we follow the ISO/DIS 24613 “Lexical Mark-up Framework” standard, which ensures reusability of the information encoded and easy exchange of both data and architecture. The design of the resource also takes into account the needs of our text mining partners who automatically extract syntactic and semantic information from texts and feed it to the lexicon. The present contribution first describes in detail the model of the BioLexicon along its three main layers: morphology, syntax and semantics; then, it briefly describes the database implementation of the model and the population strategy followed within the project, together with an example. The BioLexicon database in fact comes equipped with automatic uploading procedures based on a common exchange XML format, which guarantees that the lexicon can be properly populated with data coming from different sources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,623
inproceedings
rinaldi-etal-2008-dependency
Dependency-Based Relation Mining for Biomedical Literature
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1235/
Rinaldi, Fabio and Schneider, Gerold and Kaljurand, Kaarel and Hess, Michael
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We describe techniques for the automatic detection of relationships among domain entities (e.g. genes, proteins, diseases) mentioned in the biomedical literature. Our approach is based on the adaptive selection of candidate interaction sentences, which are then parsed using our own dependency parser. Specific syntax-based filters are used to limit the number of possible candidate interacting pairs. The approach has been implemented as a demonstrator over a corpus of 2000 richly annotated MedLine abstracts, and later tested by participation in a text mining competition. In both cases, the results obtained have proved the adequacy of the proposed approach to the task of interaction detection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,624
inproceedings
kokkinakis-2008-mesh
{M}e{SH}{\textcopyright}: from a Controlled Vocabulary to a Processable Resource
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1236/
Kokkinakis, Dimitrios
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Large repositories of life science data in the form of domain-specific literature and large specialised textual collections increase on a daily basis to a level beyond what the human mind can grasp and interpret. As the volume of data continues to increase, substantial support from new information technologies and computational techniques grounded in the mining paradigm is becoming apparent. These emerging technologies play a critical role in aiding research productivity, and they provide the means for reducing the workload for information access and decision support and for speeding up and enhancing the knowledge discovery process. In order to accomplish these higher level goals a fundamental and unavoidable starting point is the identification and mapping of terminology from unstructured data to biomedical knowledge sources and concept hierarchies. This paper provides a description of the work regarding terminology recognition using the Swedish MeSH{\textcopyright} thesaurus and its corresponding English source. The various transformation and refinement steps applied to the original database tables into a fully-fledged processing-oriented annotating resource are explained. Particular attention has been given to a number of these steps in order to automatically map the extensive variability of lexical terms to structured MeSH{\textcopyright} nodes. Issues on annotation and coverage are also discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,625
inproceedings
kokkinakis-2008-semantically
A Semantically Annotated {S}wedish Medical Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1237/
Kokkinakis, Dimitrios
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
With the information overload in the life sciences there is an increasing need for annotated corpora, particularly with biological and biomedical entities, which is the driving force for data-driven language processing applications and the empirical approach to language study. Inspired by the work in the GENIA Corpus, which is one of the very few of such corpora, extensively used in the biomedical field, and in order to fulfil the needs of our research, we have collected a Swedish medical corpus, the MEDLEX Corpus. MEDLEX is a large structurally and linguistically annotated document collection, consisting of a variety of text documents related to various medical text subfields, and does not focus at a particular medical genre, due to the lack of large Swedish resources within a particular medical subdomain. Out of this collection we selected 300 documents which were manually examined by two human experts who inspected, corrected and/or accordingly modified the automatically provided annotations according to a set of provided labelling guidelines. The annotations consist of medical terminology provided by the Swedish and English MeSH{\textcopyright} (Medical Subject Headings) thesauri as well as named entity labels provided by an enhanced named entity recognition software.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,626
inproceedings
embarek-ferret-2008-learning
Learning Patterns for Building Resources about Semantic Relations in the Medical Domain
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1238/
Embarek, Mehdi and Ferret, Olivier
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
In this article, we present a method for extracting automatically semantic relations from texts in the medical domain using linguistic patterns. These patterns refer to three levels of information about words: inflected form, lemma and part-of-speech. The method we present consists first in identifying the entities that are part of the relations to extract, that is to say diseases, exams, treatments, drugs or symptoms. Thereafter, sentences that contain couples of entities are extracted and the presence of a semantic relation is validated by applying linguistic patterns. These patterns were previously learnt automatically from a manually annotated corpus by relying on an algorithm based on the edit distance. We first report the results of an evaluation of our medical entity tagger for the five types of entities we have mentioned above and then, more globally, the results of an evaluation of our extraction method for four relations between these entities. Both evaluations were done for French.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,627
inproceedings
ienco-etal-2008-automatic
Automatic extraction of subcategorization frames for {I}talian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1239/
Ienco, Dino and Villata, Serena and Bosco, Cristina
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Subcategorization is a kind of knowledge which can be considered as crucial in several NLP tasks, such as Information Extraction or parsing, but the collection of very large resources including subcategorization representation is difficult and time-consuming. Various experiences show that the automatic extraction can be a practical and reliable solution for acquiring such a kind of knowledge. The aim of this paper is to investigate the relationships between subcategorization frame extraction and the nature of data from which the frames have to be extracted, e.g. how much the task can be influenced by the richness/poorness of the annotation. Therefore, we present some experiments that apply statistical subcategorization extraction methods, known in literature, on an Italian treebank that exploits a rich set of dependency relations that can be annotated at different degrees of specificity. Benefiting from the availability of relation sets that implement different granularity in the representation of relations, we evaluate our results with reference to previous works in a cross-linguistic perspective.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,628
inproceedings
francom-hulden-2008-parallel
Parallel Multi-Theory Annotations of Syntactic Structure
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1240/
Francom, Jerid and Hulden, Mans
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present an approach to creating a treebank of sentences using multiple notations or linguistic theories simultaneously. We illustrate the method by annotating sentences from the Penn Treebank II in three different theories in parallel: the original PTB notation, a Functional Dependency Grammar notation, and a Government and Binding style notation. Sentences annotated with all of these theories are represented in XML as a directed acyclic graph where nodes and edges may carry extra information depending on the theory encoded.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,629
inproceedings
adler-etal-2008-tagging
Tagging a {H}ebrew Corpus: the Case of Participles
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1241/
Adler, Meni and Netzer, Yael and Goldberg, Yoav and Gabay, David and Elhadad, Michael
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We report on an effort to build a corpus of Modern Hebrew tagged with part-of-speech and morphology. We designed a tagset specific to Hebrew while focusing on four aspects: the tagset should be consistent with common linguistic knowledge; there should be maximal agreement among taggers as to the tags assigned to maintain consistency; the tagset should be useful for machine taggers and learning algorithms; and the tagset should be effective for applications relying on the tags as input features. In this paper, we illustrate these issues by explaining our decision to introduce a tag for beinoni forms in Hebrew. We explain how this tag is defined, and how it helped us improve manual tagging accuracy to a high-level, while improving automatic tagging and helping in the task of syntactic chunking.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,630
inproceedings
nath-etal-2008-unsupervised
Unsupervised Parts-of-Speech Induction for {B}engali
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1242/
Nath, Joydeep and Choudhury, Monojit and Mukherjee, Animesh and Biemann, Christian and Ganguly, Niloy
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We present a study of the word interaction networks of Bengali in the framework of complex networks. The topological properties of these networks reveal interesting insights into the morpho-syntax of the language, whereas clustering helps in the induction of the natural word classes leading to a principled way of designing POS tagsets. We compare different network construction techniques and clustering algorithms based on the cohesiveness of the word clusters. Cohesiveness is measured against two gold-standard tagsets by means of the novel metric of tag-entropy. The approach presented here is a generic one that can be easily extended to any language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,631
inproceedings
de-cea-etal-2008-tagging
Tagging {S}panish Texts: the Problem of {\textquotedblleft}{SE}{\textquotedblright}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1243/
de Cea, Guadalupe Aguado and Puche, Javier and Ramos, Jos{\'e} {\'A}ngel
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Automatic tagging in Spanish has historically faced many problems because of some specific grammatical constructions. One of these traditional pitfalls is the “se” particle. This particle is a multifunctional and polysemous word used in many different contexts. Many taggers do not distinguish the possible uses of “se” and thus provide poor results at this point. In tune with the philosophy of free software, we have taken a free annotation tool as a basis, we have improved and enhanced its behaviour by adding new rules at different levels and by modifying certain parts in the code to allow for its possible implementation in other EAGLES-compliant tools. In this paper, we present the analysis carried out with different annotators for selecting the tool, the results obtained in all cases as well as the improvements added and the advantages of the modified tagger.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,632
inproceedings
mirovsky-2008-netgraph-fit
Does Netgraph Fit {P}rague Dependency Treebank?
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1244/
M{\'i}rovsk{\'y}, Ji{\v{r}}{\'i}
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Through many examples, we present a query language of Netgraph - a fully graphical tool for searching in the Prague Dependency Treebank 2.0. To demonstrate that the query language fits the treebank well, we study an annotation manual for the most complex layer of the treebank - the tectogrammatical layer - and show that linguistic phenomena annotated on the layer can be searched for using the query language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,633
inproceedings
by-2008-kalshnikov
The Kalshnikov 691 Dependency Bank
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1245/
By, Tomas
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The PARC 700 dependency bank has a number of features that would seem to make it less than optimally suited for its intended purpose, parser evaluation. However, it is difficult to know precisely what impact these problems have on the evaluation results, and as a first step towards making comparison possible, a subset of the same sentences is presented here, marked up using a different format that avoids them. In this new representation, the tokens contain exactly the same sequence of characters as the original text, word order is encoded explicitly, and there is no artificial distinction between full tokens and attribute tokens. There is also a clear division between word tokens and empty nodes, and the token attributes are stored together with the word, instead of being spread out individually in the file. A standard programming language syntax is used for the data, so there is little room for markup errors. Finally, the dependency links are closer to standard grammatical terms, which presumably makes it easier to understand what they mean and to convert any particular parser output format to the Kalashnikov 691 representation. The data is provided both in machine-readable format and as graphical dependency trees.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,634
inproceedings
schluter-van-genabith-2008-treebank
Treebank-Based Acquisition of {LFG} Parsing Resources for {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1246/
Schluter, Natalie and van Genabith, Josef
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Motivated by the expense in time and other resources to produce hand-crafted grammars, there has been increased interest in automatically obtained wide-coverage grammars from treebanks for natural language processing. In particular, recent years have seen the growth in interest in automatically obtained deep resources that can represent information absent from simple CFG-type structured treebanks and which are considered to produce more language-neutral linguistic representations, such as dependency syntactic trees. As is often the case in early pioneering work on natural language processing, English has provided the focus of first efforts towards acquiring deep-grammar resources, followed by successful treatments of, for example, German, Japanese, Chinese and Spanish. However, no comparable large-scale automatically acquired deep-grammar resources have been obtained for French to date. The goal of this paper is to present the application of treebank-based language acquisition to the case of French. We show that with modest changes to the established parsing architectures, encouraging results can be obtained for French, with an overall best dependency structure f-score of 86.73{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,635
inproceedings
koeva-etal-2008-chooser
{C}hooser: a Multi-Task Annotation Tool
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1247/
Koeva, Svetla and Rizov, Borislav and Leseva, Svetlozara
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The paper presents a tool assisting manual annotation of linguistic data developed at the Department of Computational linguistics, IBL-BAS. Chooser is a general-purpose modular application for corpus annotation based on the principles of commonality and reusability of the created resources, language and theory independence, extendibility and user-friendliness. These features have been achieved through a powerful abstract architecture within the Model-View-Controller paradigm that is easily tailored to task-specific requirements and readily extendable to new applications. The tool is to a considerable extent independent of data format and representation and produces outputs that are largely consistent with existing standards. The annotated data are therefore reusable in tasks requiring different levels of annotation and are accessible to external applications. The tool incorporates edit functions, pass and arrangement strategies that facilitate annotators’ work. The relevant module produces tree-structured and graph-based representations in respective annotation modes. Another valuable feature of the application is concurrent access by multiple users and centralised storage of lexical resources underlying annotation schemata, as well as of annotations, including frequency of selection, updates in the lexical database, etc. Chooser has been successfully applied to a number of tasks: POS tagging, WS and syntactic annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,636
inproceedings
fragkou-etal-2008-boemie
{BOEMIE} Ontology-Based Text Annotation Tool
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1248/
Fragkou, Pavlina and Petasis, Georgios and Theodorakos, Aris and Karkaletsis, Vangelis and Spyropoulos, Constantine
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
The huge amount of the available information in the Web creates the need of effective information extraction systems that are able to produce metadata that satisfy user’s information needs. The development of such systems, in the majority of cases, depends on the availability of an appropriately annotated corpus in order to learn extraction models. The production of such corpora can be significantly facilitated by annotation tools that are able to annotate, according to a defined ontology, not only named entities but most importantly relations between them. This paper describes the BOEMIE ontology-based annotation tool which is able to locate blocks of text that correspond to specific types of named entities, fill tables corresponding to ontology concepts with those named entities and link the filled tables based on relations defined in the domain ontology. Additionally, it can perform annotation of blocks of text that refer to the same topic. The tool has a user-friendly interface, supports automatic pre-annotation, annotation comparison as well as customization to other annotation schemata. The annotation tool has been used in a large scale annotation task involving 3,000 web pages regarding athletics. It has also been used in another annotation task involving 503 web pages with medical information, in different languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,637
inproceedings
krestel-etal-2008-minding
Minding the Source: Automatic Tagging of Reported Speech in Newspaper Articles
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1249/
Krestel, Ralf and Bergler, Sabine and Witte, Ren{\'e}
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Reported speech in the form of direct and indirect reported speech is an important indicator of evidentiality in traditional newspaper texts, but also increasingly in the new media that rely heavily on citation and quotation of previous postings, as for instance in blogs or newsgroups. This paper details the basic processing steps for reported speech analysis and reports on performance of an implementation in form of a GATE resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,638
inproceedings
vossen-etal-2008-kyoto
{KYOTO}: a System for Mining, Structuring and Distributing Knowledge across Languages and Cultures
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1250/
Vossen, Piek and Agirre, Eneko and Calzolari, Nicoletta and Fellbaum, Christiane and Hsieh, Shu-kai and Huang, Chu-Ren and Isahara, Hitoshi and Kanzaki, Kyoko and Marchetti, Andrea and Monachini, Monica and Neri, Federico and Raffaelli, Remo and Rigau, German and Tesconi, Maurizio and VanGent, Joop
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We outline work performed within the framework of a current EC project. The goal is to construct a language-independent information system for a specific domain (environment/ecology/biodiversity) anchored in a language-independent ontology that is linked to wordnets in seven languages. For each language, information extraction and identification of lexicalized concepts with ontological entries is carried out by text miners (“Kybots”). The mapping of language-specific lexemes to the ontology allows for crosslinguistic identification and translation of equivalent terms. The infrastructure developed within this project enables long-range knowledge sharing and transfer across many languages and cultures, addressing the need for global and uniform transition of knowledge beyond the specific domains addressed here.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,639
inproceedings
schafer-etal-2008-extracting
Extracting and Querying Relations in Scientific Papers on Language Technology
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1251/
Sch{\"a}fer, Ulrich and Uszkoreit, Hans and Federmann, Christian and Marek, Torsten and Zhang, Yajing
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
We describe methods for extracting interesting factual relations from scientific texts in computational linguistics and language technology taken from the ACL Anthology. We use a hybrid NLP architecture with shallow preprocessing for increased robustness and domain-specific, ontology-based named entity recognition, followed by a deep HPSG parser running the English Resource Grammar (ERG). The extracted relations in the MRS (minimal recursion semantics) format are simplified and generalized using WordNet. The resulting “quriples” are stored in a database from where they can be retrieved (again using abstraction methods) by relation-based search. The query interface is embedded in a web browser-based application we call the Scientist’s Workbench. It supports researchers in editing and searching scientific papers online.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,640
inproceedings
iftene-balahur-dobrescu-2008-named
Named Entity Relation Mining using {W}ikipedia
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel
may
2008
Marrakech, Morocco
European Language Resources Association (ELRA)
https://aclanthology.org/L08-1252/
Iftene, Adrian and Balahur-Dobrescu, Alexandra
Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08)
null
Discovering relations among Named Entities (NEs) from large corpora is both a challenging and a useful task in the domain of Natural Language Processing, with applications in Information Retrieval (IR), Summarization (SUM), Question Answering (QA) and Textual Entailment (TE). The work we present resulted from the attempt to solve practical issues we were confronted with while building systems for the tasks of Textual Entailment Recognition and Question Answering, respectively. The approach consists in applying grammar-induced extraction patterns on a large corpus - Wikipedia - for the extraction of relations between a given Named Entity and other Named Entities. The results obtained are high in precision, determining a reliable and useful application of the built resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
83,641