Dataset schema (one record per row; column name, type, and value statistics):

entry_type: string, 4 distinct values
citation_key: string, length 10–110
title: string, length 6–276
editor: string, 723 distinct values
month: string, 69 distinct values
year: date, 1963-01-01 to 2022-01-01
address: string, 202 distinct values
publisher: string, 41 distinct values
url: string, length 34–62
author: string, length 6–2.07k
booktitle: string, 861 distinct values
pages: string, length 1–12
abstract: string, length 302–2.4k
journal: string, 5 distinct values
volume: string, 24 distinct values
doi: string, length 20–39
n: string, 3 distinct values
wer: string, 1 distinct value
uas: null
language: string, 3 distinct values
isbn: string, 34 distinct values
recall: null
number: string, 8 distinct values
a: null
b: null
c: null
k: null
f1: string, 4 distinct values
r: string, 2 distinct values
mci: string, 1 distinct value
p: string, 2 distinct values
sd: string, 1 distinct value
female: string, 0 distinct values
m: string, 0 distinct values
food: string, 1 distinct value
f: string, 1 distinct value
note: string, 20 distinct values
__index_level_0__: int64, range 22k–106k
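The schema above is sparse: a bibliographic row populates the citation fields and leaves the metric-style columns (wer, uas, f1, recall, and so on) null. A minimal sketch, using a hypothetical row, of collapsing a record to its populated fields:

```python
# Sketch only: `row` is a hypothetical record following the schema above;
# the metric-style columns are null (None) for bibliographic entries.
row = {
    "entry_type": "inproceedings",
    "citation_key": "seddah-2010-exploring",
    "year": "2010",
    "wer": None,
    "uas": None,
    "f1": None,
}

def compact(record):
    """Keep only the columns that are actually populated."""
    return {k: v for k, v in record.items() if v is not None}

print(compact(row))  # only the three populated citation fields remain
```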
entry_type: inproceedings
citation_key: seddah-2010-exploring
title: Exploring the Spinal-{STIG} Model for Parsing {F}rench
author: Seddah, Djam{\'e}
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1534/
abstract:
We evaluate statistical parsing of French using two probabilistic models derived from the Tree Adjoining Grammar framework: a Stochastic Tree Insertion Grammars model (STIG) and a specific instance of this formalism, called the Spinal Tree Insertion Grammar model, which exhibits interesting properties with regard to the data sparseness issues common to small treebanks such as the Paris 7 French Treebank. Using David Chiang’s STIG parser (Chiang, 2003), we present results of various experiments we conducted to explore those models for French parsing. The grammar induction makes use of a head percolation table tailored for the French Treebank, which is provided in this paper. Using two evaluation metrics, we found that the parsing performance of a STIG model is tied to the size of the underlying Tree Insertion Grammar, with a more compact grammar, a spinal STIG, outperforming a genuine STIG. We finally note that a ``spinal'' framework seems to emerge in the literature. Indeed, the use of vertical grammars such as Spinal STIG instead of horizontal grammars such as PCFGs, afflicted with well-known data sparseness issues, seems to be a promising path toward better parsing performance.
__index_level_0__: 79,417

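Records like the one above originate as BibTeX entries. A hedged sketch of rendering a compacted record back into BibTeX (the helper name `to_bibtex` is illustrative, not part of the dataset):

```python
def to_bibtex(record):
    """Render a flat record dict as a BibTeX entry string.

    Assumes `entry_type` and `citation_key` are present; every other
    key becomes a `field = {value}` line.
    """
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = ",\n".join(
        f"  {k} = {{{v}}}" for k, v in record.items() if k not in skip
    )
    return f"@{record['entry_type']}{{{record['citation_key']},\n{fields}\n}}"

entry = to_bibtex({
    "entry_type": "inproceedings",
    "citation_key": "seddah-2010-exploring",
    "year": "2010",
    "address": "Valletta, Malta",
})
print(entry)
```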
entry_type: inproceedings
citation_key: caselli-prodanof-2010-annotating
title: Annotating Event Anaphora: A Case Study
author: Caselli, Tommaso and Prodanof, Irina
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1535/
abstract:
In recent years we have registered a renewed interest in event detection and temporal processing of text/discourse. TimeML (Pustejovsky et al., 2003a) has shed new light on the notion of event and developed a new methodology for its annotation. In parallel, work on anaphora resolution has developed a reliable annotation methodology and pointed out the core role of this phenomenon for the improvement of NLP systems. This paper tries to put together these two lines of research by describing a case study for the creation of an annotation scheme for event anaphora. We claim that this work could have consequences for the annotation of eventualities as proposed in TimeML, for the use of the tag, and for the study of anaphora and its annotation. The annotation scheme and its guidelines have been developed on the basis of a coarse-grained bottom-up approach. To this end, we have performed a small sampling annotation which has highlighted shortcomings and open issues that need to be resolved.
__index_level_0__: 79,418

entry_type: inproceedings
citation_key: maegaard-etal-2010-cooperation
title: Cooperation for {A}rabic Language Resources and Tools {---} The {MEDAR} Project
author: Maegaard, Bente and Attia, Mohamed and Choukri, Khalid and Hamon, Olivier and Krauwer, Steven and Yaseen, Mustafa
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1536/
abstract:
The paper describes some of the work carried out within the European funded project MEDAR. The project has three streams of activity: the technical stream, the cooperation stream and the dissemination stream. MEDAR first updated the existing surveys and BLARK for Arabic, and then the technical stream focused on machine translation. The consortium identified a number of freely available MT systems and then customized two versions of the well-known MOSES package. The consortium addressed the need to package MOSES for English to Arabic (while the main MT stream is on Arabic to English). For performance assessment purposes, the partners produced test data that allowed carrying out an evaluation campaign with 5 different systems (including systems from outside the consortium) and two online ones. Both the MT baselines and the collected data will be made available via the ELRA catalogue. The cooperation stream focuses mostly on a cooperation roadmap for Human Language Technologies for Arabic, directed towards Arabic HLT in the region in general. It is the purpose of the roadmap to outline areas and priorities for collaboration between EU countries and Arabic speaking countries, as well as cooperation in general: between countries, between universities, and last but not least between universities and industry.
__index_level_0__: 79,419

entry_type: inproceedings
citation_key: pastra-etal-2010-poeticon
title: The {POETICON} Corpus: Capturing Language Use and Sensorimotor Experience in Everyday Interaction
author: Pastra, Katerina and Wallraven, Christian and Schultze, Michael and Vataki, Argyro and Kaulard, Kathrin
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1537/
abstract:
Natural language use, acquisition, and understanding usually take place in multisensory and multimedia communication environments. Therefore, to model language in its interaction and integration with sensorimotor experiences, one needs a representative corpus of such interplay. In this paper, we present the first corpus of language use and sensorimotor experience recordings in everyday human:human interaction, in which spontaneous language communication has been recorded along with corresponding multiview video recordings, recordings of 3D full body kinematics, and 3D tracking of objects in focus. It is a twelve-hour corpus which comprises six everyday human:human interaction scenes, each one performed 3 times by 4 different English-speaking couples (interaction between a male and a female actor), each couple acting each scene in two settings: a fully naturalistic setting in which 5-camera multi-view video recordings take place, and a high-tech setting, with full body motion capture for both individuals, a 2-camera multiview video recording, and 3D tracking of focus objects. The corpus has been developed within an EU-funded cognitive systems research project, POETICON (\url{http://www.poeticon.eu}), and represents a new type of language resource for cognitive systems: a corpus that reveals the dynamic role of language in its interplay with sensorimotor experiences and allows one to computationally model this interplay.
__index_level_0__: 79,420

entry_type: inproceedings
citation_key: blache-etal-2010-otim
title: The {OTIM} Formal Annotation Model: A Preliminary Step before Annotation Scheme
author: Blache, Philippe and Bertrand, Roxane and Guardiola, Mathilde and Gu{\'e}not, Marie-Laure and Meunier, Christine and Nesterenko, Irina and Pallaud, Berthille and Pr{\'e}vot, Laurent and Priego-Valverde, B{\'e}atrice and Rauzy, St{\'e}phane
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1538/
abstract:
Large annotation projects, typically those addressing the question of multimodal annotation in which many different kinds of information have to be encoded, have to elaborate precise and high-level annotation schemes. Doing this first requires defining the structure of the information: the different objects and their organization. This stage has to be as independent as possible from coding-language constraints. This is the reason why we propose a preliminary formal annotation model, represented with typed feature structures. This representation requires a precise definition of the different objects, their properties (or features) and their relations, represented in terms of type hierarchies. This approach has been used to specify the annotation scheme of a large multimodal annotation project (OTIM) and experimented with in the annotation of a multimodal corpus (CID, Corpus of Interactional Data). This project aims at collecting, annotating and exploiting a dialogue video corpus in a multimodal perspective (including speech and gesture modalities). The corpus itself is made of 8 hours of dialogues, fully transcribed and richly annotated (phonetics, syntax, pragmatics, gestures, etc.).
__index_level_0__: 79,421

entry_type: inproceedings
citation_key: jabbari-etal-2010-evaluating
title: Evaluating Lexical Substitution: Analysis and New Measures
author: Jabbari, Sanaz and Hepple, Mark and Guthrie, Louise
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1539/
abstract:
Lexical substitution is the task of finding a replacement for a target word in a sentence so as to preserve, as closely as possible, the meaning of the original sentence. It has been proposed that lexical substitution be used as a basis for assessing the performance of word sense disambiguation systems, an idea realised in the English Lexical Substitution Task of SemEval-2007. In this paper, we examine the evaluation metrics used for the English Lexical Substitution Task and identify some problems that arise for them. We go on to propose some alternative measures for this purpose, that avoid these problems, and which in turn can be seen as redefining the key tasks that lexical substitution systems should be expected to perform. We hope that these new metrics will better serve to guide the development of lexical substitution systems in future work. One of the new metrics addresses how effective systems are in ranking substitution candidates, a key ability for lexical substitution systems, and we report some results concerning the assessment of systems produced by this measure as compared to the relevant measure from SemEval-2007.
__index_level_0__: 79,422

entry_type: inproceedings
citation_key: shamsfard-etal-2010-extracting
title: Extracting Lexico-conceptual Knowledge for Developing {P}ersian {W}ord{N}et
author: Shamsfard, Mehrnoush and Fadaei, Hakimeh and Fekri, Elham
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1540/
abstract:
Semantic lexicons and lexical ontologies are major resources in natural language processing. Developing such resources is a time-consuming task for which some automatic methods have been proposed. This paper describes methods used in the semi-automatic development of FarsNet, a lexical ontology for the Persian language. FarsNet includes the Persian WordNet with more than 10000 synsets of nouns, verbs and adjectives. In this paper we discuss the extraction of lexico-conceptual relations such as synonymy, antonymy, hyperonymy, hyponymy, meronymy, holonymy and other lexical or conceptual relations between words and concepts (synsets) from Persian resources. Relations are extracted from different resources such as the web, corpora, Wikipedia, Wiktionary, dictionaries and WordNet. In the system presented in this paper a variety of approaches are applied to the task of relation extraction to extract labeled or unlabeled relations. They exploit the texts, structures, hyperlinks and statistics of web documents as well as the relations of English WordNet and entries of mono- and bilingual dictionaries.
__index_level_0__: 79,423

entry_type: inproceedings
citation_key: lobo-de-matos-2010-fairy
title: Fairy Tale Corpus Organization Using Latent Semantic Mapping and an Item-to-item Top-n Recommendation Algorithm
author: Lobo, Paula Vaz and de Matos, David Martins
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1541/
abstract:
In this paper we present a fairy tale corpus that was semantically organized and tagged. The proposed method uses latent semantic mapping to represent the stories and a top-n item-to-item recommendation algorithm to define clusters of similar stories. Each story can be placed in more than one cluster and stories in the same cluster are related to the same concepts. The results were manually evaluated regarding the groupings as perceived by human judges. The evaluation resulted in a precision of 0.81, a recall of 0.69, and an f-measure of 0.75 when using tf*idf for word frequency. Our method is topic- and language-independent, and, contrary to traditional clustering methods, automatically defines the number of clusters based on the set of documents. This method can be used as a setup for traditional clustering or classification. The resulting corpus will be used for recommendation purposes, although it can also be used for emotion extraction, semantic role extraction, meaning extraction, text classification, among others.
__index_level_0__: 79,424

entry_type: inproceedings
citation_key: willis-etal-2010-xml
title: From {XML} to {XML}: The Why and How of Making the Biodiversity Literature Accessible to Researchers
author: Willis, Alistair and King, David and Morse, David and Dil, Anton and Lyal, Chris and Roberts, Dave
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1542/
abstract:
We present the ABLE document collection, which consists of a set of annotated volumes of the Bulletin of the British Museum (Natural History). These were developed during our ongoing work on automating the markup of scanned copies of the biodiversity literature. Such automation is required if historic literature is to be used to inform contemporary issues in biodiversity research. We consider an enhanced TEI XML markup language, which is used as an intermediate stage in translating from the initial XML obtained from Optical Character Recognition to taXMLit, the target annotation schema. The intermediate representation allows additional information from external sources such as a taxonomic thesaurus to be incorporated before the final translation into taXMLit. We give an overview of the project workflow in automating the markup process, and consider what extensions to existing markup schema will be required to best support working taxonomists. Finally, we discuss some of the particular issues which were encountered in converting between different XML formats.
__index_level_0__: 79,425

entry_type: inproceedings
citation_key: brandschain-etal-2010-greybeard
title: Greybeard Longitudinal Speech Study
author: Brandschain, Linda and Graff, David and Cieri, Christopher and Walker, Kevin and Caruso, Chris and Neely, Abby
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1543/
abstract:
The Greybeard Project was designed to enable research in speaker recognition using data that have been collected over a long period of time. Since 1994, LDC has been collecting speech samples for use in research and evaluations. By mining our earlier collections we assembled a list of subjects who had participated in multiple studies. These participants were then contacted and asked to take part in the Greybeard Project. The only constraints were that the participants must have made numerous calls in prior studies and that the calls had to be a minimum of two years old. The archived data were sorted by participant and subsequent calls were added to their files. This is the first longitudinal study of its kind. The resulting corpus contains multiple calls for each participant that span anywhere from two to twelve years. It is our hope that these data will enable speaker recognition researchers to explore the effects of aging on voice.
__index_level_0__: 79,426

entry_type: inproceedings
citation_key: campillo-etal-2010-building
title: Building High Quality Databases for Minority Languages such as {G}alician
author: Campillo, Francisco and Braga, Daniela and Mour{\'i}n, Ana Bel{\'e}n and Garc{\'i}a-Mateo, Carmen and Silva, Pedro and Dias, Miguel Sales and M{\'e}ndez, Francisco
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1544/
abstract:
This paper describes the result of a joint R{\&}D project between Microsoft Portugal and the Signal Theory Group of the University of Vigo (Spain), where a set of language resources was developed with application to Text{\textemdash}to{\textemdash}Speech synthesis. First, a large Corpus of 10000 Galician sentences was designed and recorded by a professional female speaker. Second, a lexicon with phonetic and grammatical information of over 90000 entries was collected and reviewed manually by a linguist expert. And finally, these resources were used for a MOS (Mean Opinion Score) perceptual test to compare two state{\textemdash}of{\textemdash}the{\textemdash}art speech synthesizers of both groups, the one from Microsoft based on HMM, and the one from the University of Vigo based on unit selection.
__index_level_0__: 79,427

entry_type: inproceedings
citation_key: lewis-etal-2010-achieving
title: Achieving Domain Specificity in {SMT} without Overt Siloing
author: Lewis, William D. and Wendt, Chris and Bullock, David
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1545/
abstract:
We examine pooling data as a method for improving Statistical Machine Translation (SMT) quality for narrowly defined domains, such as data for a particular company or public entity. By pooling all available data, building large SMT engines, and using domain-specific target language models, we see boosts in quality, and can achieve the generalizability and resiliency of a larger SMT but with the precision of a domain-specific engine.
__index_level_0__: 79,428

entry_type: inproceedings
citation_key: brandschain-etal-2010-mixer
title: Mixer 6
author: Brandschain, Linda and Graff, David and Cieri, Chris and Walker, Kevin and Caruso, Chris and Neely, Abby
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1546/
abstract:
Linguistic Data Consortium’s Human Subjects Data Collection lab conducts multi-modal speech collections to develop corpora for use in speech, speaker and language research and evaluations. The Mixer collections have evolved over the years to best accommodate the ever-changing needs of the research community and, hopefully, to keep one step ahead by providing increasingly challenging data. Over the years, Mixer collections have grown to include socio-linguistic interviews, a wide variety of telephone conditions and multiple languages, recording conditions, channels and speech acts. Mixer 6 was the most recent collection. This paper describes the Mixer 6 Phase 1 project, a study supporting linguistic research, technology development and education. The object of this study was to record speech in a variety of situations that vary in formality, and to model multiple naturally occurring interactions as well as a variety of channel conditions.
__index_level_0__: 79,429

entry_type: inproceedings
citation_key: egg-redeker-2010-complex
title: How Complex is Discourse Structure?
author: Egg, Markus and Redeker, Gisela
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1547/
abstract:
This paper contributes to the question of which degree of complexity is called for in representations of discourse structure. We review recent claims that tree structures do not suffice as a model for discourse structure, with a focus on the work done on the Discourse Graphbank (DGB) of Wolf and Gibson (2005, 2006). We will show that much of the additional complexity in the DGB is not inherent in the data, but due to specific design choices that underlie W{\&}G’s annotation. Three kinds of configuration are identified whose DGB analysis violates tree-structure constraints, but for which an analysis in terms of tree structures is possible, viz., crossed dependencies that are eventually based on lexical or referential overlap, multiple-parent structures that could be handled in terms of Marcu’s (1996) Nuclearity Principle, and potential list structures, in which whole lists of segments are related to a preceding segment in the same way. We also discuss the recent results which Lee et al. (2008) adduce as evidence for a complexity of discourse structure that cannot be handled in terms of tree structures.
__index_level_0__: 79,430

entry_type: inproceedings
citation_key: attia-etal-2010-automatically
title: An Automatically Built Named Entity Lexicon for {A}rabic
author: Attia, Mohammed and Toral, Antonio and Tounsi, Lamia and Monachini, Monica and van Genabith, Josef
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1548/
abstract:
We have adapted and extended the automatic Multilingual, Interoperable Named Entity Lexicon approach to Arabic, using Arabic WordNet (AWN) and Arabic Wikipedia (AWK). First, we extract AWN’s instantiable nouns and identify the corresponding categories and hyponym subcategories in AWK. Then, we exploit Wikipedia inter-lingual links to locate correspondences between articles in ten different languages in order to identify Named Entities (NEs). We apply keyword search on AWK abstracts to provide for Arabic articles that do not have a correspondence in any of the other languages. In addition, we perform a post-processing step to fetch further NEs from AWK not reachable through AWN. Finally, we investigate diacritization using matching with geonames databases, MADA-TOKAN tools and different heuristics for restoring vowel marks of Arabic NEs. Using this methodology, we have extracted approximately 45,000 Arabic NEs and built, to the best of our knowledge, the largest, most mature and well-structured Arabic NE lexical resource to date. We have stored and organised this lexicon following the LMF ISO standard. We conduct a quantitative and qualitative evaluation against a manually annotated gold standard and achieve precision scores from 95.83{\%} (with 66.13{\%} recall) to 99.31{\%} (with 61.45{\%} recall) according to different values of a threshold.
__index_level_0__: 79,431

entry_type: inproceedings
citation_key: song-etal-2010-enhanced
title: Enhanced Infrastructure for Creation and Collection of Translation Resources
author: Song, Zhiyi and Strassel, Stephanie and Krug, Gary and Maeda, Kazuaki
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1549/
abstract:
Statistical Machine Translation (MT) systems have achieved impressive results in recent years, due in large part to the increasing availability of parallel text for system training and development. This paper describes recent efforts at Linguistic Data Consortium to create linguistic resources for MT, including corpora, specifications and resource infrastructure. We review LDC's three-pronged approach to parallel text corpus development (acquisition of existing parallel text from known repositories, harvesting and aligning of potential parallel documents from the web, and manual creation of parallel text by professional translators), and describe recent adaptations that have enabled significant expansions in the scope, variety, quality, efficiency and cost-effectiveness of translation resource creation at LDC.
__index_level_0__: 79,432

entry_type: inproceedings
citation_key: laparra-rigau-2010-extended
title: e{X}tended {W}ord{F}rame{N}et
author: Laparra, Egoitz and Rigau, German
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1550/
abstract:
This paper presents a novel automatic approach to partially integrate FrameNet and WordNet. In that way we expect to extend FrameNet coverage, to enrich WordNet with frame semantic information and possibly to extend FrameNet to languages other than English. The method uses a knowledge-based Word Sense Disambiguation algorithm for matching the FrameNet lexical units to WordNet synsets. Specifically, we exploit a graph-based Word Sense Disambiguation algorithm that uses a large-scale knowledge-base derived from existing semantic resources. We have developed and tested additional versions of this algorithm showing substantial improvements over state-of-the-art results. Finally, we show some examples and figures of the resulting semantic resource.
__index_level_0__: 79,433

entry_type: inproceedings
citation_key: plank-2010-improved
title: Improved Statistical Measures to Assess Natural Language Parser Performance across Domains
author: Plank, Barbara
editor: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
booktitle: Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
month: may
year: 2010
address: Valletta, Malta
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L10-1551/
abstract:
We examine the performance of three dependency parsing systems, in particular their performance variation across Wikipedia domains. We assess the performance variation of (i) Alpino, a deep grammar-based system coupled with statistical disambiguation, versus (ii) MST and Malt, two purely data-driven statistical dependency parsing systems. The question is how the performance of each parser correlates with simple statistical measures of the text (e.g. sentence length, unknown word rate, etc.). This would give us an idea of how sensitive the different systems are to domain shifts, i.e. which system is more in need of domain adaptation techniques. To this end, we extend the statistical measures used by Zhang and Wang (2009) for English and evaluate the systems on several Wikipedia domains, focusing on a freer word-order language, Dutch. The results confirm the general findings of Zhang and Wang (2009), i.e. different parsing systems have different sensitivities to the various statistical measures of the text; the highest correlation with parsing accuracy was found for the measure we added, sentence perplexity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,434
inproceedings
ji-etal-2010-annotating
Annotating Event Chains for Carbon Sequestration Literature
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1552/
Ji, Heng and Li, Xiang and Lucia, Angelo and Zhang, Jianting
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we present a project on annotating event chains for an important scientific domain {\textemdash} carbon sequestration. This domain aims to reduce carbon emissions and has been identified by the U.S. National Academy of Engineering (NAE) as a grand challenge problem for the 21st century. Given a collection of scientific literature, we identify a set of centroid experiments, and then link and order the observations and events centered around these experiments on temporal or causal chains. We describe the fundamental challenges of such annotation and our general solutions to address them. We expect that our annotation efforts will produce significant advances in inter-operability through new information extraction techniques and permit scientists to build knowledge that will provide better understanding of important scientific challenges in this domain, and to share and re-use diverse data sets and experimental results in a more efficient manner. In addition, the annotation of metadata and ontology for this literature will provide important support for data lifecycle activities.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,435
inproceedings
ramisch-etal-2010-mwetoolkit
mwetoolkit: a Framework for Multiword Expression Identification
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1553/
Ramisch, Carlos and Villavicencio, Aline and Boitet, Christian
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the Multiword Expression Toolkit (mwetoolkit), an environment for type- and language-independent MWE identification from corpora. The mwetoolkit provides a targeted list of MWE candidates, extracted and filtered according to a number of user-defined criteria and a set of standard statistical association measures. For generating corpus counts, the toolkit provides both a corpus indexation facility and a tool for integration with web search engines, while for evaluation, it provides validation and annotation facilities. The mwetoolkit also allows easy integration with a machine learning tool for the creation and application of supervised MWE extraction models if annotated data is available. In our experiment, the mwetoolkit was tested and evaluated in the context of MWE extraction in the biomedical domain. Our preliminary results show that the toolkit performs better than other approaches, especially concerning recall. Moreover, this first version can also be extended in several ways in order to improve the quality of the results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,436
inproceedings
rehbein-ruppenhofer-2010-theres
There's no Data like More Data? Revisiting the Impact of Data Size on a Classification Task
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1554/
Rehbein, Ines and Ruppenhofer, Josef
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In the paper we investigate the impact of data size on a Word Sense Disambiguation task (WSD). We question the assumption that the knowledge acquisition bottleneck, which is known as one of the major challenges for WSD, can be solved by simply obtaining more and more training data. Our case study on 1,000 manually annotated instances of the German verb ``drohen'' (threaten) shows that the best performance is not obtained when training on the full data set, but by carefully selecting new training instances with regard to their informativeness for the learning process (Active Learning). We present a thorough evaluation of the impact of different sampling methods on the data sets and propose an improved method for uncertainty sampling which dynamically adapts the selection of new instances to the learning progress of the classifier, resulting in more robust results during the initial stages of learning. A qualitative error analysis identifies problems for automatic WSD and discusses the reasons for the great gap in performance between human annotators and our automatic WSD system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,437
inproceedings
hana-feldman-2010-positional
A Positional Tagset for {R}ussian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1555/
Hana, Jirka and Feldman, Anna
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Fusional languages have rich inflection. As a consequence, tagsets capturing their morphological features are necessarily large. A natural way to make a tagset manageable is to use a structured system. In this paper, we present a positional tagset for describing morphological properties of Russian. The tagset was inspired by the Czech positional system (Hajic, 2004). We have used preliminary versions of this tagset in our previous work (e.g., Hana et al. (2004, 2006); Feldman (2006); Feldman and Hana (2010)). Here, we both systematize and extend these preliminary versions (by adding information about animacy, aspect and reflexivity), give a more detailed description of the tagset and provide a comparison with the Czech system. Each tag of the tagset consists of 16 positions, each encoding one morphological feature (part-of-speech, detailed part-of-speech, gender, animacy, number, case, possessor's gender and number, person, reflexivity, tense, aspect, degree of comparison, negation, voice, variant). The tagset contains approximately 2,000 tags.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,438
inproceedings
petasis-petasis-2010-blogbuster
{B}log{B}uster: A Tool for Extracting Corpora from the Blogosphere
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1556/
Petasis, Georgios and Petasis, Dimitrios
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents BlogBuster, a tool for extracting a corpus from the blogosphere. The topic of cleaning arbitrary web pages, with the goal of extracting a corpus from web data suitable for linguistic and language technology research and development, has attracted significant research interest recently. Several general-purpose approaches for removing boilerplate have been presented in the literature; however, the blogosphere poses additional requirements, such as finer control over the extracted textual segments in order to accurately identify important elements, i.e. individual blog posts, titles, posting dates or comments. BlogBuster tries to provide such additional details along with boilerplate removal, following a rule-based approach. A small set of rules was manually constructed by observing a limited set of blogs from the Blogger and Wordpress hosting platforms. These rules operate on the DOM tree of an HTML page, as constructed by a popular browser, Mozilla Firefox. Evaluation results suggest that BlogBuster is very accurate when extracting corpora from blogs hosted on Blogger and Wordpress, while exhibiting reasonable precision when applied to blogs not hosted on these two popular blogging platforms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,439
inproceedings
shamsfard-etal-2010-step
{ST}e{P}-1: A Set of Fundamental Tools for {P}ersian Text Processing
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1557/
Shamsfard, Mehrnoush and Jafari, Hoda Sadat and Ilbeygi, Mahdi
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Many NLP applications need fundamental tools to convert the input text into an appropriate form or format and extract the primary linguistic knowledge of words and sentences. These tools perform segmentation of text into sentences, words and phrases, checking and correcting spelling, lexical and morphological analysis, POS tagging and so on. Persian is among the languages with complex preprocessing tasks. Different writing prescriptions, spacings between or within words, character encodings and spellings are some of the difficulties and challenges in converting various texts into a standard one. The lack of fundamental text processing tools such as a morphological analyser (especially for derivational morphology) and a POS tagger is another problem in Persian text processing. This paper introduces a set of fundamental tools for Persian text processing in the STeP-1 package. STeP-1 (Standard Text Preparation for Persian language) performs a combination of tokenization, spell checking, morphological analysis and POS tagging. It also turns all Persian texts with different prescribed forms of writing into a series of tokens in the standard style introduced by the Academy of Persian Language and Literature (APLL). Experimental results show high performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,440
inproceedings
spoustova-etal-2010-building
Building a Web Corpus of {C}zech
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1558/
Spoustov{\'a}, Drahom{\'i}ra {\quotedblbase}johanka{\textquotedblleft} and Spousta, Miroslav and Pecina, Pavel
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Large corpora are essential to modern methods of computational linguistics and natural language processing. In this paper, we describe an ongoing project whose aim is to build the largest corpus of Czech texts. We are building the corpus from Czech Internet web pages, using (and, if needed, developing) advanced downloading, cleaning and automatic linguistic processing tools. Our concern is to keep the whole process language-independent and thus applicable to building web corpora of other languages as well. In the paper, we briefly describe the crawling, cleaning, and part-of-speech tagging procedures. Using a prototype corpus, we provide a comparison with current corpora (in particular, SYN2005, part of the Czech National Corpus). We analyse part-of-speech tag distribution, OOV word ratio, average sentence length and the Spearman rank correlation coefficient of the distance of ranks of the 500 most frequent words. Our results show that our prototype corpus is now quite homogeneous. The challenging task is to find a way to decrease the homogeneity of the text while keeping the high quality of the data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,441
inproceedings
vertan-2010-towards
Towards the Integration of Language Tools Within Historical Digital Libraries
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1559/
Vertan, Cristina
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
During the last years the campaign of mass digitization has made catalogues and valuable rare manuscripts and old printed books available via the Internet. The Manuscriptorium digital library has ingested hundreds of volumes and it is expected that this number will grow in the next years. Other European initiatives like Europeana and Monasterium also have the online presentation of cultural heritage as a central activity. With the growth of the available online volumes, special attention has been paid to the management and retrieval of documents within digital libraries. Enabling semantic technologies and intelligent linking and search are a big step forward, but they still do not succeed in making the content of old rare books intelligible to the broad public or to specialists in other domains or languages. In this paper we argue that multilingual language technologies have the potential to fill this gap. We give an overview of the existing language resources for historical documents, and present an architecture which aims at presenting such texts to the normal user without altering the character of the texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,442
inproceedings
boyd-2010-eagle
{EAGLE}: an Error-Annotated Corpus of Beginning Learner {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1560/
Boyd, Adriane
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper describes the Error-Annotated German Learner Corpus (EAGLE), a corpus of beginning learner German with grammatical error annotation. The corpus contains online workbook and hand-written essay data from learners in introductory German courses at The Ohio State University. We introduce an error typology developed for beginning learners of German that focuses on linguistic properties of lexical items present in the learner data, and present detailed error typologies for selection, agreement, and word order errors. The corpus uses an error annotation format that extends the multi-layer standoff format proposed by Luedeling et al. (2005) to include incremental target hypotheses for each error. In this format, each annotated error includes information about the location of tokens affected by the error, the error type, and the proposed target correction. The multi-layer standoff format allows us to annotate ambiguous errors with more than one possible target correction and to annotate the multiple, overlapping errors common in beginning learner productions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,443
inproceedings
ferret-2010-testing
Testing Semantic Similarity Measures for Extracting Synonyms from a Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1561/
Ferret, Olivier
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The definition of lexical semantic similarity measures has been the subject of a large body of work for many years. In this article, we focus more specifically on distributional semantic similarity measures. Although several evaluations of this kind of measure have already been carried out to determine whether they actually capture semantic relatedness, it is still difficult to determine if a measure that performs well in an evaluation framework can be applied more widely with the same success. In the work we present here, we first select a semantic similarity measure by testing a large set of such measures against the WordNet-based Synonymy Test, an extended TOEFL test proposed in (Freitag et al., 2005), and we show that its accuracy is comparable to the accuracy of the best state-of-the-art measures while it has less demanding requirements. Then, we apply this measure to automatically extracting synonyms from a corpus and we evaluate the relevance of this process against two reference resources, WordNet and the Moby thesaurus. Finally, we compare our results in detail to those of (Curran and Moens, 2002).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,444
inproceedings
de-luca-2010-corpus
A Corpus for Evaluating Semantic Multilingual Web Retrieval Systems: The Sense Folder Corpus
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1562/
De Luca, Ernesto William
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present the multilingual Sense Folder Corpus. After the analysis of different corpora, we describe the requirements that have to be satisfied for evaluating semantic multilingual retrieval approaches. Motivated by these unfulfilled requirements, we created a small bilingual hand-tagged corpus of 502 documents retrieved from Web searches. The documents contained in this collection have been created using Google queries. A single ambiguous word has been searched and the related documents (approx. the first 60 documents for every keyword) have been retrieved. The document collection has been extended at the query word level, using single ambiguous words for English (argument, bank, chair, network and rule) and for Italian (argomento, lingua, regola, rete and stampa). The search and annotation process has been carried out monolingually for both English and Italian. 252 English and 250 Italian documents have been retrieved from Google and saved in their original rank. The performance of semantic multilingual retrieval systems has been evaluated using this corpus with three baselines (“Random”, “First Sense” and “Most Frequent Sense”) that are formally presented and discussed. The fine-grained evaluation of the Sense Folder approach is discussed in detail.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,445
inproceedings
catizone-etal-2010-using
Using Dialogue Corpora to Extend Information Extraction Patterns for Natural Language Understanding of Dialogue
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1563/
Catizone, Roberta and Dingli, Alexiei and Gaizauskas, Robert
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper examines how Natural Language Processing (NLP) resources and online dialogue corpora can be used to extend the coverage of Information Extraction (IE) templates in a Spoken Dialogue system. IE templates are used as part of a Natural Language Understanding module for identifying meaning in a user utterance. The use of NLP tools in dialogue systems is a difficult task given that 1) spoken dialogue is often not well-formed and 2) there is a serious lack of dialogue data. In spite of that, we have devised a method for extending IE patterns using standard NLP tools and available dialogue corpora found on the web. In this paper, we explain our method, which includes using a set of NLP modules developed using GATE (a General Architecture for Text Engineering), as well as a general-purpose editing tool that we built to facilitate the IE rule creation process. Lastly, we present directions for future work in this area.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,446
inproceedings
tounsi-van-genabith-2010-arabic
{A}rabic Parsing Using Grammar Transforms
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1564/
Tounsi, Lamia and van Genabith, Josef
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We investigate Arabic Context Free Grammar parsing with dependency annotation, comparing lexicalised and unlexicalised parsers. We study how morphosyntactic as well as function tag information percolation in the form of grammar transforms (Johnson, 1998; Kulick et al., 2006) affects the performance of a parser and helps dependency assignment. We focus on the three most frequent functional tags in the Arabic Penn Treebank: subjects, direct objects and predicates. We merge these functional tags with their phrasal categories and (where appropriate) percolate case information to the non-terminal (POS) category to train the parsers. We then automatically enrich the output of these parsers with full dependency information in order to annotate trees with Lexical Functional Grammar (LFG) f-structure equations which produce f-structures, i.e. attribute-value matrices approximating basic predicate-argument-adjunct structure representations. We present a series of experiments evaluating how well lexicalized, history-based, generative (Bikel) as well as latent variable PCFG (Berkeley) parsers cope with the enriched Arabic data. We measure the quality and coverage of both the output trees and the generated LFG f-structures. We show that joint functional and morphological information percolation improves both the recovery of trees as well as dependency results in the form of LFG f-structures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,447
inproceedings
wang-sporleder-2010-constructing
Constructing a Textual Semantic Relation Corpus Using a Discourse Treebank
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1565/
Wang, Rui and Sporleder, Caroline
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present our work on constructing a textual semantic relation corpus by making use of an existing treebank annotated with discourse relations. We extract adjacent text span pairs and group them into six categories according to the different discourse relations between them. After that, we present the details of our annotation scheme, which includes six textual semantic relations: `backward entailment', `forward entailment', `equality', `contradiction', `overlapping', and `independent'. We also discuss some ambiguous examples to show the difficulty of this annotation task, which cannot be easily done by an automatic mapping between discourse relations and semantic relations. We have two annotators and each of them performs the task twice. The basic statistics on the constructed corpus look promising: we achieve 81.17{\%} agreement on the six-way semantic relation annotation with a .718 kappa score, which increases to 91.21{\%} with a .775 kappa score if we collapse the last two labels.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,448
inproceedings
han-etal-2010-using
Using an Error-Annotated Learner Corpus to Develop an {ESL}/{EFL} Error Correction System
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1566/
Han, Na-Rae and Tetreault, Joel and Lee, Soo-Hwa and Ha, Jin-Young
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents research on building a model of grammatical error correction, for preposition errors in particular, in English text produced by language learners. Unlike most previous work which trains a statistical classifier exclusively on well-formed text written by native speakers, we train a classifier on a large-scale, error-tagged corpus of English essays written by ESL learners, relying on contextual and grammatical features surrounding preposition usage. First, we show that such a model can achieve high performance values: 93.3{\%} precision and 14.8{\%} recall for error detection and 81.7{\%} precision and 13.2{\%} recall for error detection and correction when tested on preposition replacement errors. Second, we show that this model outperforms models trained on well-edited text produced by native speakers of English. We discuss the implications of our approach in the area of language error modeling and the issues stemming from working with a noisy data set whose error annotations are not exhaustive.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,449
inproceedings
mcgraw-etal-2010-collecting
Collecting Voices from the Cloud
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1567/
McGraw, Ian and Lee, Chia-ying and Hetherington, Lee and Seneff, Stephanie and Glass, Jim
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The collection and transcription of speech data is typically an expensive and time-consuming task. Voice over IP and cloud computing are poised to greatly reduce this impediment to research on spoken language interfaces in many domains. This paper documents our efforts to deploy speech-enabled web interfaces to large audiences over the Internet via Amazon Mechanical Turk, an online marketplace for work. Using the open source WAMI Toolkit, we collected corpora in two different domains which collectively constitute over 113 hours of speech. The first corpus contains 100,000 utterances of read speech, and was collected by asking workers to record street addresses in the United States. For the second task, we collected conversations with FlightBrowser, a multimodal spoken dialogue system. The resulting FlightBrowser corpus contains 10,651 utterances comprising 1,113 individual dialogue sessions from 101 distinct users. The aggregate time spent collecting the data for both corpora was just under two weeks. At times, our servers were logging audio from workers at rates faster than real time. We describe the collection and transcription process for these corpora and provide an analysis of the advantages and limitations of this data collection method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,450
inproceedings
max-etal-2010-contrastive
Contrastive Lexical Evaluation of Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1568/
Max, Aur{\'e}lien and Crego, Josep Maria and Yvon, Fran{\c{c}}ois
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper advocates a complementary measure of translation performance that focuses on the contrastive ability of two or more systems, or system versions, to adequately translate source words. This is motivated by three main reasons: 1) existing automatic metrics sometimes do not show significant differences that can be revealed by fine-grained, focussed human evaluation; 2) these metrics are based on direct comparisons between system hypotheses and the corresponding reference translations, thus ignoring the input words that were actually translated; and 3) as these metrics do not take input hypotheses from several systems at once, fine-grained contrastive evaluation can only be done indirectly. This proposal is illustrated on a multi-source Machine Translation scenario where multiple translations of a source text are available. Significant gains (up to +1.3 BLEU points) are achieved in these experiments, and contrastive lexical evaluation is shown to provide new information that can help to better analyse a system's performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,451
inproceedings
ui-dhonnchadha-van-genabith-2010-partial
Partial Dependency Parsing for {I}rish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1569/
U{\'i} Dhonnchadha, Elaine and Van Genabith, Josef
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present a partial dependency parser for Irish. Constraint Grammar (CG) based rules are used to annotate dependency relations and grammatical functions. Chunking is performed using a regular-expression grammar which operates on the dependency tagged sentences. As this is the first implementation of a parser for unrestricted Irish text (to our knowledge), there were no guidelines or precedents available. Therefore deciding what constitutes a syntactic unit, and how it should be annotated, accounts for a major part of the early development effort. Currently, all tokens in a sentence are tagged for grammatical function and local dependency. Long-distance dependencies, prepositional attachments or coordination are not handled, resulting in a partial dependency analysis. Evaluations show that the partial dependency analysis achieves an f-score of 93.60{\%} on development data and 94.28{\%} on unseen test data, while the chunker achieves an f-score of 97.20{\%} on development data and 93.50{\%} on unseen test data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,452
inproceedings
monachesi-markus-2010-socially
Socially Driven Ontology Enrichment for e{L}earning
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1570/
Monachesi, Paola and Markus, Thomas
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
One of the objectives of the Language Technologies for Life-Long Learning (LTfLL) project, is to develop a knowledge sharing system that connects learners to resources and learners to other learners. To this end, we complement the formal knowledge represented by existing domain ontologies with the informal knowledge emerging from social tagging. More specifically, we crawl data from social media applications such as Delicious, Slideshare and YouTube. Similarity measures are employed to select possible lexicalizations of concepts that are related to the ones present in the given ontology and which are assumed to be socially relevant with respect to the input lexicalisation. In order to identify the appropriate relationships which exist between the extracted related terms and the existing domain ontology, we employ several heuristics that rely on the use of a large background knowledge base, such as DBpedia. An evaluation of the resulting ontology has been carried out. The methodology proposed allows for an appropriate enrichment process and produces a complementary vocabulary to that of a domain expert.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,453
inproceedings
max-wisniewski-2010-mining
Mining Naturally-occurring Corrections and Paraphrases from {W}ikipedia`s Revision History
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1571/
Max, Aur{\'e}lien and Wisniewski, Guillaume
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Naturally-occurring instances of linguistic phenomena are important both for training and for evaluating automatic text processing. When available in large quantities, they also prove interesting material for linguistic studies. In this article, we present WiCoPaCo (Wikipedia Correction and Paraphrase Corpus), a new freely-available resource built by automatically mining Wikipedia’s revision history. The WiCoPaCo corpus focuses on local modifications made by human revisors and includes various types of corrections (such as spelling error or typographical corrections) and rewritings, which can be categorized broadly into meaning-preserving and meaning-altering revisions. We present an initial hand-built typology of these revisions, but the resource allows for any possible annotation scheme. We discuss the main motivations for building such a resource and describe the main technical details guiding its construction. We also present applications and data analysis on French and report initial results on spelling error correction and morphosyntactic rewriting. The WiCoPaCo corpus can be freely downloaded from \url{http://wicopaco.limsi.fr}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,454
inproceedings
rosenthal-etal-2010-towards
Towards Semi-Automated Annotation for Prepositional Phrase Attachment
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1572/
Rosenthal, Sara and Lipovsky, William and McKeown, Kathleen and Thadani, Kapil and Andreas, Jacob
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper investigates whether high-quality annotations for tasks involving semantic disambiguation can be obtained without a major investment in time or expense. We examine the use of untrained human volunteers from Amazon's Mechanical Turk in disambiguating prepositional phrase (PP) attachment over sentences drawn from the Wall Street Journal corpus. Our goal is to compare the performance of these crowdsourced judgments to the annotations supplied by trained linguists for the Penn Treebank project in order to indicate the viability of this approach for annotation projects that involve contextual disambiguation. The results of our experiments on a sample of the Wall Street Journal corpus show that invoking majority agreement between multiple human workers can yield PP attachments with fairly high precision. This confirms that a crowdsourcing approach to syntactic annotation holds promise for the generation of training corpora in new domains and genres where high-quality annotations are not available and are difficult to obtain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,455
inproceedings
lopez-romary-2010-grisp
{GRISP}: A Massive Multilingual Terminological Database for Scientific and Technical Domains
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1573/
Lopez, Patrice and Romary, Laurent
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The development of a multilingual terminology is a very long and costly process. We present the creation of a multilingual terminological database called GRISP covering multiple technical and scientific fields from various open resources. A crucial aspect is the merging of the different resources, which in our proposal is based on the definition of a sound conceptual model, different domain mappings, and the use of structural constraints and machine learning techniques for controlling the fusion process. The result is a massive terminological database of several million terms, concepts, semantic relations and definitions. The accuracy of the concept merging between several resources has been evaluated following several methods. This resource has allowed us to significantly improve the mean average precision of an information retrieval system applied to a large collection of multilingual and multidomain patent documents. New specialized terminologies, not specifically created for text processing applications, can be aggregated and merged into GRISP with minimal manual effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,456
inproceedings
marinelli-2010-lexical
Lexical Resources and Ontological Classifications for the Recognition of Proper Names Sense Extension
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1574/
Marinelli, Rita
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Particular uses of PNs with sense extension are focused on and inspected, taking into account the presence of PNs in lexical semantic databases and electronic corpora. A methodology to select and include PNs in semantic databases is described; the use of PNs in corpora of the Italian language is examined and evaluated, analyzing the behaviour of a set of PNs in different periods of time. Computational resources can facilitate our study in this field in an effective way by helping to codify, translate and handle particular cases of polysemy, but also by guiding metaphorical and metonymic sense recognition, supported by the ontological classification of the lexical semantic entities. The relationship between the “abstract” and the “concrete”, which is at the basis of the Conceptual Metaphor perspective, can be considered strictly related to the variation of the ontological values found in our analysis of the PNs and their belonging classes, which are codified in the ItalWordNet database.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,457
inproceedings
declerck-lendvai-2010-towards
Towards a Standardized Linguistic Annotation of the Textual Content of Labels in Knowledge Representation Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1575/
Declerck, Thierry and Lendvai, Piroska
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We propose applying standardized linguistic annotation to terms included in labels of knowledge representation schemes (taxonomies or ontologies), hypothesizing that this would help improve ontology-based semantic annotation of texts. We share the view that currently used methods for including lexical and terminological information in such hierarchical networks of concepts are not satisfactory, and thus put forward {\textemdash} as a preliminary step to our annotation goal {\textemdash} a model for modular representation of conceptual, terminological and linguistic information within knowledge representation systems. Our CTL model is based on two recent initiatives that describe the representation of terminologies and lexicons in ontologies: the Terminae method for building terminological and ontological models from text (Aussenac-Gilles et al., 2008), and the LexInfo metamodel for ontology lexica (Buitelaar et al., 2009). CTL goes beyond the mere fusion of the two models and introduces an additional level of representation for the linguistic objects, which are no longer limited to lexical information but cover the full range of linguistic phenomena, including constituency and dependency. We also show that the approach benefits linguistic and semantic analysis of external documents that are often to be linked to semantic resources for enrichment with concepts that are newly extracted or inferred.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,458
inproceedings
murakami-etal-2010-language
Language Service Management with the Language Grid
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1576/
Murakami, Yohei and Lin, Donghui and Tanaka, Masahiro and Nakaguchi, Takao and Ishida, Toru
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
As the number of language resources accessible on the Internet increases, many efforts have been made to combine language resources and language processing tools to create new services. However, existing language resource coordination frameworks cannot manage issues of intellectual property associated with language resources, which makes it difficult for most end-users to get support for their intercultural collaborations because they always have to deal with the issues by themselves. In this paper, we aim at constructing a new language service management architecture on the Language Grid, which enables language resource providers to control access to their resources in accordance with their own policies. Furthermore, we apply the proposed architecture to the operating Language Grid in order to validate the effectiveness of the architecture. As a result, several service management models utilizing the monitoring and access constraints have emerged to satisfy various requirements from language resource providers. These models can handle paid-for language resources as well as free language resources. Finally, we discuss further challenging issues of combining language resources under different policies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,459
inproceedings
vuckovic-etal-2010-improving
Improving Chunking Accuracy on {C}roatian Texts by Morphosyntactic Tagging
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1577/
Vu{\v{c}}kovi{\'c}, Kristina and Agi{\'c}, {\v{Z}}eljko and Tadi{\'c}, Marko
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present the results of an experiment with utilizing a stochastic morphosyntactic tagger as a pre-processing module of a rule-based chunker and partial parser for Croatian in order to raise its overall chunking and partial parsing accuracy on Croatian texts. In order to conduct the experiment, we have manually chunked and partially parsed 459 sentences from the Croatia Weekly 100 kw newspaper sub-corpus taken from the Croatian National Corpus, that were previously also morphosyntactically disambiguated and lemmatized. Due to the lack of resources of this type, these sentences were designated as a temporary chunking and partial parsing gold standard for Croatian. We have then evaluated the chunker and partial parser in three different scenarios: (1) chunking previously morphosyntactically untagged text, (2) chunking text that was tagged using the stochastic morphosyntactic tagger for Croatian and (3) chunking manually tagged text. The obtained F1-scores for the three scenarios were, respectively, 0.874 (P: 0.825, R: 0.930), 0.891 (P: 0.856, R: 0.928) and 0.914 (P: 0.904, R: 0.925). The paper provides the description of language resources and tools used in the experiment, its setup and discussion of results and perspectives for future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,460
inproceedings
elson-mckeown-2010-building
Building a Bank of Semantically Encoded Narratives
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1578/
Elson, David K. and McKeown, Kathleen R.
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We propose a methodology for a novel type of discourse annotation whose model is tuned to the analysis of a text as narrative. This is intended to be the basis of a “story bank” resource that would facilitate the automatic analysis of narrative structure and content. The methodology calls for annotators to construct propositions that approximate a reference text, by selecting predicates and arguments from among controlled vocabularies drawn from resources such as WordNet and VerbNet. Annotators then integrate the propositions into a conceptual graph that maps out the entire discourse; the edges represent temporal, causal and other relationships at the level of story content. Because annotators must identify the recurring objects and themes that appear in the text, they also perform coreference resolution and word sense disambiguation as they encode propositions. We describe a collection experiment and a method for determining inter-annotator agreement when multiple annotators encode the same short story. Finally, we describe ongoing work toward extending the method to integrate the annotator’s interpretations of character agency (the goals, plans and beliefs that are relevant, yet not explicitly stated in the text).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,461
inproceedings
wong-2010-semantic
Semantic Evaluation of Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1579/
Wong, Billy Tak-Ming
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
It is recognized that many machine translation evaluation metrics in use that focus on the surface word level suffer from a lack of tolerance of linguistic variance, and that the incorporation of linguistic features can improve their performance. To this end, WordNet is widely utilized by recent evaluation metrics as a thesaurus for identifying synonym pairs. On this basis, however, word pairs with similar meaning are still neglected. We investigate the significance of this particular word group for the performance of evaluation metrics. In our experiments we integrate eight different measures of lexical semantic similarity into an evaluation metric based on standard measures of unigram precision, recall and F-measure. It is found that a knowledge-based measure proposed by Wu and Palmer and a corpus-based measure, namely Latent Semantic Analysis, lead to an observable gain in correlation with human judgments of translation quality, to a greater extent than the use of WordNet for synonyms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,462
inproceedings
dias-da-silva-di-felippo-2010-rebeca
{REBECA}: Turning {W}ord{N}et Databases into {\textquotedblleft}Ontolexicons{\textquotedblright}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1580/
Dias-da-Silva, Bento Carlos and Di Felippo, Ariani
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we outline the design and present a sample of the REBECA bilingual lexical-conceptual database constructed by linking two monolingual lexical resources, in which a set of lexicalized concepts of the North-American English database, the Princeton WordNet (WN.Pr) synsets, is aligned with its corresponding set of lexicalized concepts of the Brazilian Portuguese database, the Brazilian Portuguese WordNet synsets under construction, by means of the MultiNet-based interlingual schema, the concepts of which are the ones represented by the Princeton WordNet synsets. Implemented in the Prot{\'e}g{\'e}-OWL editor, the alignment of the two databases illustrates how wordnets can be turned into ontolexicons. At the current stage of development, the “wheeled-vehicle” conceptual domain was modeled to develop and to test REBECA's design and contents, respectively. The collection of 205 ontological concepts worked out, i.e. REBECA's alignment indexes, is exemplified in the “wheeled-vehicle” conceptual domain, e.g. [CAR], [RAILCAR], etc., and was selected from the WN.Pr database, version 2.0. Future work includes the population of the database with more lexical data and other conceptual domains so that the intricacies of adding more concepts and devising the spreading or pruning of the relationships between them can be properly evaluated.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,463
inproceedings
karasimos-petropoulou-2010-crash
A Crash Test with Linguistica in {M}odern {G}reek: The Case of Derivational Affixes and Bound Stems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1581/
Karasimos, Athanasios and Petropoulou, Evanthia
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper attempts to participate in the ongoing discussion in search of a suitable model for the computational treatment of Greek morphology. Focusing on the unsupervised morphology learning technique, and particularly on the model of Linguistica by Goldsmith (2001), we attempt a computational treatment of specific word formation phenomena in Modern Greek (MG), such as suffixation and compounding with bound stems, through the use of various corpora. The inability of the system to accept any morphological rule as input, hence the term `unsupervised', interferes to a great extent with its efficiency in parsing, especially in languages with rich morphology, such as MG, among others. Specifically, neither the rich allomorphy, nor the complex combinability of morphemes in MG appear to be treated efficiently through this technique, resulting in low scores of proper word segmentation (22{\%} in inflectional suffixes and 13{\%} in derivational ones), as well as the recognition of false morphemes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,464
inproceedings
federmann-declerck-2010-extraction
Extraction, Merging, and Monitoring of Company Data from Heterogeneous Sources
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1582/
Federmann, Christian and Declerck, Thierry
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We describe the implementation of an enterprise monitoring system that builds on an ontology-based information extraction (OBIE) component applied to heterogeneous data sources. The OBIE component consists of several IE modules - each extracting on a regular temporal basis a specific fraction of company data from a given data source - and a merging tool, which is used to aggregate all the extracted information about a company. The full set of information about companies, which is to be extracted and merged by the OBIE component, is given in the schema of a domain ontology, which is guiding the information extraction process. The monitoring system, in case it detects changes in the extracted and merged information on a company with respect to the actual state of the knowledge base of the underlying ontology, ensures the update of the population of the ontology. As we are using an ontology extended with temporal information, the system is able to assign time intervals to any of the object instances. Additionally, detected changes can be communicated to end-users, who can validate and possibly correct the resulting updates in the knowledge base.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,465
inproceedings
wang-zhang-2010-hybrid
Hybrid Constituent and Dependency Parsing with {T}singhua {C}hinese Treebank
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1583/
Wang, Rui and Zhang, Yi
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we describe our hybrid parsing model for Mandarin Chinese processing. In particular, we work on the Tsinghua Chinese Treebank (TCT), whose annotation has both constituents and the head information of each constituent. The model we design combines mainstream constituent parsing and dependency parsing. We present in detail 1) how to (partially) encode the head information into the constituent parsing, 2) how to encode constituent information into the dependency parsing, and 3) how to restore the head information using the dependency structure. For each of them, we take different strategies to deal with different cases. In an open shared task evaluation, we achieve an f1-score of 85.23{\%} for the constituent parsing, 82.35{\%} with partial head information, and 74.27{\%} with complete head information. The error analysis shows the challenge of restoring multiple-headed constituents and also some potential to use the dependency structure to guide the constituent parsing, which will be our future work to explore.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,466
inproceedings
kordjamshidi-etal-2010-spatial
Spatial Role Labeling: Task Definition and Annotation Scheme
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1584/
Kordjamshidi, Parisa and Van Otterlo, Martijn and Moens, Marie-Francine
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
One of the essential functions of natural language is to talk about spatial relationships between objects. Linguistic constructs can express highly complex, relational structures of objects, spatial relations between them, and patterns of motion through spaces relative to some reference point. Learning how to map this information onto a formal representation from a text is a challenging problem. At present no well-defined framework for automatic spatial information extraction exists that can handle all of these issues. In this paper we introduce the task of spatial role labeling and propose an annotation scheme that is language-independent and facilitates the application of machine learning techniques. Our framework consists of a set of spatial roles based on the theory of holistic spatial semantics with the intent of covering all aspects of spatial concepts, including both static and dynamic spatial relations. We illustrate our annotation scheme with many examples throughout the paper, and in addition we highlight how to connect to spatial calculi such as region connection calculus and also how our approach fits into related work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,467
inproceedings
russo-2010-discovering
Discovering Polarity for Ambiguous and Objective Adjectives through Adverbial Modification
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1585/
Russo, Irene
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The field of opinion mining has emerged in recent years as an exciting challenge for computational linguistics: investigating how humans express subjective judgments through linguistic means paves the way for automatic recognition and summarization of opinionated texts, with the possibility of determining the polarities and strengths of the opinions asserted. Sentiment lexicons are basic resources for investigating the orientation of a text based on the polarized words it contains, but they encode the polarity of word types rather than the polarity of word tokens. The expression of an opinion through the choice of lexical items is context-sensitive, and sentiment lexicons could be integrated with syntagmatic patterns that emerge as significant in statistical analyses. In this paper, a corpus analysis is proposed of adverbially modified ambiguous adjectives (e.g. fast, rich) and objective adjectives (e.g. chemical, political), which can occasionally be exploited to express subjective judgments. Comparing the polarity encoded in sentiment lexicons with the results of a logistic regression analysis, the role of adverbial cues for polarity detection is evaluated on the basis of a small sample of manually annotated sentences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,468
inproceedings
simov-osenova-2010-constructing
Constructing of an Ontology-based Lexicon for {B}ulgarian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1586/
Simov, Kiril and Osenova, Petya
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we report on the progress in the creation of an Ontology-based lexicon for Bulgarian. We have started with the concept set from an upper ontology (DOLCE). Then it was extended with concepts selected from the OntoWordNet, which correspond to Core WordNet and EuroWordNet Basic concepts. The underlying idea behind the ontology-based lexicon is its organization via two semantic relations - equivalence and subsumption. These relations reflect the distribution of lexical unit senses with respect to the concepts in the ontology. The lexical unit candidates for concept mapping have been selected from two large and well-developed lexical resources for Bulgarian - a machine readable explanatory dictionary and a morphological lexicon. In the initial step, the lexical units were handled that have equivalent senses to the concepts in the ontology (2500 at the moment). Then, in the second stage, we are proceeding with lexical units selected on their frequency distribution in a large Bulgarian corpus. This step is the more challenging one, since it might require also additions of concepts to the ontology. The main applications of the lexicon are envisaged to be the semantic annotation and semantic IR for Bulgarian.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,469
inproceedings
glenn-etal-2010-transcription
Transcription Methods for Consistency, Volume and Efficiency
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1587/
Glenn, Meghan Lammie and Strassel, Stephanie M. and Lee, Haejoong and Maeda, Kazuaki and Zakhary, Ramez and Li, Xuansong
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper describes recent efforts at Linguistic Data Consortium at the University of Pennsylvania to create manual transcripts as a shared resource for human language technology research and evaluation. Speech recognition and related technologies in particular call for substantial volumes of transcribed speech for use in system development, and for human gold standard references for evaluating performance over time. Over the past several years LDC has developed a number of transcription approaches to support the varied goals of speech technology evaluation programs in multiple languages and genres. We describe each transcription method in detail, and report on the results of a comparative analysis of transcriber consistency and efficiency, for two transcription methods in three languages and five genres. Our findings suggest that transcripts for planned speech are generally more consistent than those for spontaneous speech, and that careful transcription methods result in higher rates of agreement when compared to quick transcription methods. We conclude with a general discussion of factors contributing to transcription quality, efficiency and consistency.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,470
inproceedings
mihaila-etal-2010-romanian
{R}omanian Zero Pronoun Distribution: A Comparative Study
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1588/
Mih{\u{a}}il{\u{a}}, Claudiu and Ilisei, Iustina and Inkpen, Diana
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Anaphora resolution is still a challenging research field in natural language processing, lacking an algorithm that correctly resolves anaphoric pronouns. Anaphoric zero pronouns pose an even greater challenge, since this category is not lexically realised; thus, their resolution is conditioned by a prior identification stage. This paper reports on the distribution of zero pronouns in Romanian in various genres: encyclopaedic, legal, literary, and news-wire texts. For this purpose, the RoZP corpus has been created, containing almost 50,000 tokens and 800 manually annotated zero pronouns. The distribution patterns are compared across genres, and exceptional cases are presented in order to facilitate the methodological process of developing a future zero pronoun identification and resolution algorithm. The evaluation results emphasise that zero pronouns appear frequently in Romanian, and that their distribution depends largely on the genre. Additionally, possible features are revealed for their identification, and a search scope for the antecedent has been determined, increasing the chances of correct resolution.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,471
inproceedings
savy-2010-pr
{P}r.{A}.{T}i.{D}: A Coding Scheme for Pragmatic Annotation of Dialogues.
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1589/
Savy, Renata
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Our purpose is to propose and discuss the latest version of an integrated method for dialogue analysis, annotation and evaluation, using a set of different pragmatic parameters. The annotation scheme Pr.A.Ti.D was built up on task-oriented dialogues. The dialogues are part of the CLIPS corpus of spoken Italian, which consists of spoken material stratified with regard to diatopic variation. A description of the multilevel annotation scheme is provided, discussing some problems of its design and formalisation in a DTD for XML mark-up. A further goal was to extend the use of Pr.A.Ti.D to other typologies of task-oriented texts and to verify the necessity and the amount of possible changes to the scheme, in order to make it more general and less oriented to specific purposes: a test on map task dialogues and the consequent modifications of the scheme are presented. The application of the scheme allowed us to extract pragmatic indexes typical of each text type, and to perform both a qualitative and a quantitative analysis of texts. Finally, from a linguistic perspective, a comparative analysis of conversational and communicative styles in dialogues performed by speakers belonging to different linguistic cultures and areas is proposed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,472
inproceedings
kolachina-etal-2010-grammar
Grammar Extraction from Treebanks for {H}indi and {T}elugu
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1590/
Kolachina, Prasanth and Kolachina, Sudheer and Singh, Anil Kumar and Husain, Samar and Naidu, Viswanath and Sangal, Rajeev and Bharati, Akshar
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Grammars play an important role in many Natural Language Processing (NLP) applications. The traditional approach of creating grammars manually, besides being labor-intensive, has several limitations. With the availability of large-scale syntactically annotated treebanks, it is now possible to automatically extract from a treebank an approximate grammar of a language in any of the existing formalisms. In this paper, we present a basic approach to extract grammars from dependency treebanks of two Indian languages, Hindi and Telugu. The process of grammar extraction requires a generalization mechanism. Towards this end, we explore an approach which relies on generalization of argument structure over verbs based on their syntactic similarity. Such a generalization counters the effect of data sparseness in the treebanks. A grammar extracted using this system can not only expand already existing knowledge bases for NLP tasks such as parsing, but also aid in the creation of grammars for languages where none exist. Further, we show that the grammar extraction process can help in identifying annotation errors and thus aid in the task of treebank validation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,473
inproceedings
vasiljevs-balodis-2010-corpus
Corpus Based Analysis for Multilingual Terminology Entry Compounding
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1591/
Vasiljevs, Andrejs and Balodis, Kaspars
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper proposes statistical analysis methods for the improvement of terminology entry compounding. Terminology entry compounding is a mechanism that identifies matching entries across multiple multilingual terminology collections: bilingual or trilingual term entries are unified in a compounded multilingual entry. We suggest that corpus analysis can improve entry compounding results by analysing the contextual terms of a given term in the corpus data. The proposed algorithm is described and implemented in an experimental setup. Results of an experiment on compounding Latvian and Lithuanian terminology resources are provided. These results encourage further research for different language pairs and in different domains.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,474
inproceedings
maeda-etal-2010-technical
Technical Infrastructure at {L}inguistic {D}ata {C}onsortium: Software and Hardware Resources for Linguistic Data Creation
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1592/
Maeda, Kazuaki and Lee, Haejoong and Grimes, Stephen and Wright, Jonathan and Parker, Robert and Lee, David and Mazzucchi, Andrea
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Linguistic Data Consortium (LDC) at the University of Pennsylvania has participated as a data provider in a variety of government-sponsored programs that support the development of Human Language Technologies. As the number of projects increases, the quantity and variety of the data LDC produces have increased dramatically in recent years. In this paper, we describe the technical infrastructure, both hardware and software, that LDC has built to support these complex, large-scale linguistic data creation efforts. As it would not be possible to cover all aspects of LDC’s technical infrastructure in one paper, this paper focuses on recent development. We also report on our plans for making our custom-built software resources available to the community as open source software, and introduce an initiative to collaborate with software developers outside LDC. We hope that our approaches and software resources will be useful to community members who take on similar challenges.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,475
inproceedings
garcia-miguel-etal-2010-adesse
{ADESSE}, a Database with Syntactic and Semantic Annotation of a Corpus of {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1593/
Garc{\'i}a-Miguel, Jos{\'e} M. and Vaamonde, Gael and Dom{\'i}nguez, Fita Gonz{\'a}lez
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This is an overall description of ADESSE (``Base de datos de verbos, Alternancias de Di{\'a}tesis y Esquemas Sintactico-Sem{\'a}nticos del Espa{\~n}ol''), an online database (\url{http://adesse.uvigo.es/}) with syntactic and semantic information for all clauses in a corpus of Spanish. The manually annotated corpus has 1.5 million words, 159,000 clauses and 3,450 different verb lemmas. ADESSE is an expanded version of BDS (``Base de datos sint{\'a}cticos del espa{\~n}ol actual''), which contains the grammatical features of verbs and verb arguments in the corpus. ADESSE has added semantic features such as verb sense, verb class and semantic role of arguments to make possible a detailed syntactic and semantic corpus-based characterization of verb valency. Each verb entry in the database is described in terms of valency potential and valency realizations (diatheses). The former includes a set of semantic roles of participants in a particular event type and a classification into a conceptual hierarchy of process types. Valency realizations are described in terms of correspondences of voice, syntactic functions and categories, and semantic roles. Verb senses are discriminated at two levels: a more abstract level linked to a valency potential, and more specific verb senses taking into account particular lexical instantiations of arguments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,476
inproceedings
guthrie-etal-2010-efficient
Efficient Minimal Perfect Hash Language Models
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1594/
Guthrie, David and Hepple, Mark and Liu, Wei
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The availability of large collections of text have made it possible to build language models that incorporate counts of billions of n-grams. This paper proposes two new methods of efficiently storing large language models that allow O(1) random access and use significantly less space than all known approaches. We introduce two novel data structures that take advantage of the distribution of n-grams in corpora and make use of various numbers of minimal perfect hashes to compactly store language models containing full frequency counts of billions of n-grams using 2.5 Bytes per n-gram and language models of quantized probabilities using 2.26 Bytes per n-gram. These methods allow language processing applications to take advantage of much larger language models than previously was possible using the same hardware and we additionally describe how they can be used in a distributed environment to store even larger models. We show that our approaches are simple to implement and can easily be combined with pruning and quantization to achieve additional reductions in the size of the language model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,477
inproceedings
strassel-etal-2010-darpa
The {DARPA} Machine Reading Program - Encouraging Linguistic and Reasoning Research with a Series of Reading Tasks
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1595/
Strassel, Stephanie and Adams, Dan and Goldberg, Henry and Herr, Jonathan and Keesing, Ron and Oblinger, Daniel and Simpson, Heather and Schrag, Robert and Wright, Jonathan
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The goal of DARPA’s Machine Reading (MR) program is nothing less than making the world’s natural language corpora available for formal processing. Most text processing research has focused on locating mission-relevant text (information retrieval) and on techniques for enriching text by transforming it to other forms of text (translation, summarization) {\textemdash} always for use by humans. In contrast, MR will make knowledge contained in text available in forms that machines can use for automated processing. This will be done with little human intervention. Machines will learn to read from a few examples and they will read to learn what they need in order to answer questions or perform some reasoning task. Three independent Reading Teams are building universal text engines which will capture knowledge from naturally occurring text and transform it into the formal representations used by Artificial Intelligence. An Evaluation Team is selecting and annotating text corpora with task domain concepts, creating model reasoning systems with which the reading systems will interact, and establishing question-answer sets and evaluation protocols to measure progress toward this goal. We describe development of the MR evaluation framework, including test protocols, linguistic resources and technical infrastructure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,478
inproceedings
simpson-etal-2010-wikipedia
{W}ikipedia and the Web of Confusable Entities: Experience from Entity Linking Query Creation for {TAC} 2009 Knowledge Base Population
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1596/
Simpson, Heather and Strassel, Stephanie and Parker, Robert and McNamee, Paul
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The Text Analysis Conference (TAC) is a series of Natural Language Processing evaluation workshops organized by the National Institute of Standards and Technology. The Knowledge Base Population (KBP) track at TAC 2009, a hybrid descendant of the TREC Question Answering track and the Automated Content Extraction (ACE) evaluation program, is designed to support development of systems that are capable of automatically populating a knowledge base with information about entities mined from unstructured text. An important component of the KBP evaluation is the Entity Linking task, where systems must accurately associate text mentions of unknown Person (PER), Organization (ORG), and Geopolitical (GPE) names to entries in a knowledge base. Linguistic Data Consortium (LDC) at the University of Pennsylvania creates and distributes linguistic resources including data, annotations, system assessment, tools and specifications for the TAC KBP evaluations. This paper describes the 2009 resource creation efforts, with particular focus on the selection and development of named entity mentions for the Entity Linking task evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,479
inproceedings
nouvel-etal-2010-analysis
An Analysis of the Performances of the {C}as{EN} Named Entities Recognition System in the Ester2 Evaluation Campaign
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1597/
Nouvel, Damien and Antoine, Jean-Yves and Friburger, Nathalie and Maurel, Denis
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present a detailed and critical analysis of the behaviour of the CasEN named entity recognition system during the French Ester2 evaluation campaign. In this project, CasEN was confronted with the task of detecting and categorizing named entities in manual and automatic transcriptions of radio broadcasts. We first give a general presentation of the Ester2 campaign. Then, we describe our system, which is based on transducers. Next, we explain how systems were evaluated during this campaign and report the main official results. Afterwards, we investigate in detail the influence of some annotation biases which have significantly affected the estimation of the performance of the systems. Finally, we conduct an in-depth analysis of the actual errors of the CasEN system, providing us with some useful indications about the phenomena that gave rise to errors (e.g. metonymy, encapsulation, detection of right boundaries) and that remain challenges for named entity recognition systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,480
inproceedings
materna-pala-2010-using
Using Ontologies for Semi-automatic Linking {V}erba{L}ex with {F}rame{N}et
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1598/
Materna, Ji{\v{r}}{\'i} and Pala, Karel
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This work presents a method of linking verbs and their valency frames in the VerbaLex database, developed at the Centre for NLP at the Faculty of Informatics, Masaryk University, to the frames in Berkeley FrameNet. While completely manual work may take a long time, the proposed semi-automatic approach requires a smaller amount of human effort to reach sufficient results. The method of linking VerbaLex frames to FrameNet frames consists of two phases. The goal of the first one is to find an appropriate FrameNet frame for each frame in VerbaLex. The second phase includes assigning FrameNet frame elements to the deep semantic roles in VerbaLex. In this work, the main emphasis is put on the exploitation of the ontologies behind VerbaLex and FrameNet. In particular, the method of linking FrameNet frame elements with VerbaLex semantic roles is built using the information provided by the ontology of semantic types in FrameNet. Based on the proposed technique, a semi-automatic linking tool has been developed. By linking FrameNet to VerbaLex, we are able to find a non-trivial subset of the interlingual FrameNet frames (including their frame-to-frame relations), which could be used as a core for building FrameNet in Czech.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,481
inproceedings
supnithi-etal-2010-autotagtcg
{A}uto{T}ag{TCG} : A Framework for Automatic {T}hai {CG} Tagging
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1599/
Supnithi, Thepchai and Ruangrajitpakorn, Taneth and Trakultaweekool, Kanokorn and Porkaew, Peerachet
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper aims to develop a framework for automatic CG tagging. We investigated two main algorithms: CRF and a statistical alignment model based on information theory (SAM). We found that SAM gives the best results at both the word and sentence levels, achieving an accuracy of 89.25{\%} at the word level and 82.49{\%} at the sentence level. Combining both methods is suitable for both known and unknown words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,482
inproceedings
blancafort-2010-learning
Learning Morphology of {R}omance, {G}ermanic and {S}lavic Languages with the Tool Linguistica
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1600/
Blancafort, Helena
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we present preliminary work on the semi-automatic induction of inflectional paradigms from non-annotated corpora using the open-source tool Linguistica (Goldsmith 2001), which can be used without any prior knowledge of the language. The aim is to induce morphological information from corpora so as to compare languages and to foresee the difficulty of developing morphosyntactic lexica. We report on a series of corpus-based experiments run with Linguistica on Romance languages (Catalan, French, Italian, Portuguese, and Spanish), Germanic languages (Dutch, English and German), and the Slavic language Polish. For each language we obtained interesting clusters of stems sharing the same suffixes. They can be seen as mini inflectional paradigms that include productive derivative suffixes. We ranked the results per language depending on the size of the paradigms (maximum number of suffixes per stem). The results show that the tool is useful for getting a first idea of the role and complexity of inflection and derivation in a language, for comparing results with other languages, and potentially for building lexicographic resources from scratch. Still, special post-processing is needed to face the two principal drawbacks of the tool: no clear distinction between inflection and derivation, and no account of allomorphy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,483
inproceedings
loukil-etal-2010-syntactic
A Syntactic Lexicon for {A}rabic Verbs
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1601/
Loukil, Noureddine and Haddar, Kais and Benhamadou, Abdelmajid
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper, we present a model of a syntactic lexicon for Arabic verbs. The structure of the lexicon is based on the recently introduced ISO standard called the Lexical Markup Framework (LMF). This standard enables us to describe lexical information in a versatile way using general guidelines, and makes it possible to share resources developed in compliance with it. We discuss the syntactic information associated with verbs and the model we propose to structure and represent the entries within the lexicon. To study the usability of the lexicon in a real application, we designed a rule-based system that translates an LMF syntactic resource into a Type Description Language (TDL) compliant resource. The rules map information from LMF entries and types to TDL types. The generated lexicon is used as input for a previously written HPSG grammar for Arabic built within the Linguistic Knowledge Builder platform. Finally, we discuss improvements in parsing results and possible perspectives of this work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,484
inproceedings
jha-2010-tdil
The {TDIL} Program and the {I}ndian Langauge Corpora Intitiative ({ILCI})
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1602/
Jha, Girish Nath
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
India is considered a linguistic ocean, with 4 language families, 22 scheduled national languages, and 100 un-scheduled languages reported by the 2001 census. This puts tremendous pressure on the Indian government not only to have comprehensive language policies, but also to create resources for their maintenance and development. In the age of information technology, there is a greater need for a fine balance in the allocation of resources to each language, keeping in view the political compulsions, the electoral potential of a linguistic community and other issues. In this connection, the government of India, through various ministries and a think tank consisting of eminent linguists and policy makers, has done a commendable job despite the obvious roadblocks. This paper describes the Indian government’s policies towards language development and maintenance in the age of technology, implemented by the Ministry of HRD through its various agencies and by the Ministry of Communications {\&} Information Technology (MCIT) through its dedicated program called TDIL (Technology Development for Indian Languages). The paper also describes some of the recent activities of TDIL in general and, in particular, an innovative corpora project called ILCI - the Indian Languages Corpora Initiative.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,485
inproceedings
agic-etal-2010-towards
Towards Sentiment Analysis of Financial Texts in {C}roatian
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1603/
Agi{\'c}, {\v{Z}}eljko and Ljube{\v{s}}i{\'c}, Nikola and Tadi{\'c}, Marko
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The paper presents results of an experiment dealing with sentiment analysis of Croatian text from the domain of finance. The goal of the experiment was to design a system model for automatic detection of general sentiment and polarity phrases in these texts. We have assembled a document collection from web sources writing on the financial market in Croatia and manually annotated articles from a subset of that collection for general sentiment. Additionally, we have manually annotated a number of these articles for phrases encoding positive or negative sentiment within a text. In the paper, we provide an analysis of the compiled resources. We show a statistically significant correspondence (1) between the overall market trend on the Zagreb Stock Exchange and the number of positively and negatively accented articles within periods of trend and (2) between the general sentiment of articles and the number of polarity phrases within those articles. We use this analysis as an input for designing a rule-based local grammar system for automatic detection of polarity phrases and evaluate it on held out data. The system achieves F1-scores of 0.61 (P: 0.94, R: 0.45) and 0.63 (P: 0.97, R: 0.47) on positive and negative polarity phrases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,486
inproceedings
ozdowska-claveau-2010-inferring
Inferring Syntactic Rules for Word Alignment through Inductive Logic Programming
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1604/
Ozdowska, Sylwia and Claveau, Vincent
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents and evaluates an original approach to automatically align bitexts at the word level. It relies on a syntactic dependency analysis of the source and target texts and is based on a machine-learning technique, namely inductive logic programming (ILP). We show that ILP is particularly well suited for this task in which the data can only be expressed by (translational and syntactic) relations. It allows us to infer easily rules called syntactic alignment rules. These rules make the most of the syntactic information to align words. A simple bootstrapping technique provides the examples needed by ILP, making this machine learning approach entirely automatic. Moreover, through different experiments, we show that this approach requires a very small amount of training data, and its performance rivals some of the best existing alignment systems. Furthermore, cases of syntactic isomorphisms or non-isomorphisms between the source language and the target language are easily identified through the inferred rules.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,487
inproceedings
savary-etal-2010-towards
Towards the Annotation of Named Entities in the {N}ational {C}orpus of {P}olish
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1605/
Savary, Agata and Waszczuk, Jakub and Przepi{\'o}rkowski, Adam
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present the named entity annotation task within the on-going project of the National Corpus of Polish. To the best of our knowledge, this is the first attempt at a large-scale corpus annotation of Polish named entities. We describe the scope and the TEI-inspired hierarchy of named entities admitted for this task, as well as the TEI-conformant multi-level stand-off annotation format. We also discuss some methodological strategies, including the annotation of embedded, coordinated and discontinuous names. Our annotation platform consists of two main tools interconnected by converting facilities. A rule-based natural language processing platform, SProUT, is used for the automatic pre-annotation of named entities, due to the previously created Polish extraction grammars adapted to the annotation task. A customizable graphical tree editor, TrEd, extended to our needs, provides an ergonomic environment for manual correction of annotations. Despite some difficult cases encountered in the early annotation phase, about 2,600 named entities in 1,800 corpus sentences have presently been annotated, which allowed us to validate the project methodology and tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,488
inproceedings
stewart-etal-2010-cross
Cross-Corpus Textual Entailment for Sublanguage Analysis in Epidemic Intelligence
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1606/
Stewart, Avar{\'e} and Denecke, Kerstin and Nejdl, Wolfgang
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Textual entailment has been recognized as a generic task that captures major semantic inference needs across many natural language processing applications. However, to date, textual entailment has not been considered in a cross-corpus setting, nor for user generated content. Given the emergence of Medicine 2.0, medical blogs are becoming an increasingly accepted source of information. However, given the characteristics of blogs (which tend to be noisy and informal, or contain an interspersing of subjective and factual sentences), a potentially large amount of irrelevant information may be present. Given the potential noise, the overarching problem with respect to information extraction from social media is achieving the correct level of sentence filtering - as opposed to document or blog post level - specifically for the task of medical intelligence gathering. In this paper, we propose an approach to textual entailment which uses the text from one source of user generated content (T text) for sentence-level filtering within a new and less amenable one (H text), when the underlying domain, tasks or semantic information is the same, or overlaps.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,489
inproceedings
couto-etal-2010-oal
{OAL}: A {NLP} Architecture to Improve the Development of Linguistic Resources for {NLP}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1607/
Couto, Javier and Blancafort, Helena and Seng, Somara and Kuchmann-Beauger, Nicolas and Talby, Anass and de Loupy, Claude
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The performance of most NLP applications relies upon the quality of linguistic resources. The creation, maintenance and enrichment of those resources are a labour-intensive task, especially when no tools are available. In this paper we present the NLP architecture OAL, designed to assist computational linguists in the whole process of the development of resources in an industrial context: from corpora compilation to quality assurance. To add new words more easily to the morphosyntactic lexica, a guesser that lemmatizes and assigns morphosyntactic tags as well as inflection paradigms to a new word has been developed. Moreover, different control mechanisms are set up to check the coherence and consistency of the resources. Today OAL manages resources in five European languages: French, English, Spanish, Italian and Polish. Chinese and Portuguese are in process. The development of OAL has followed an incremental strategy. At present, semantic lexica, a named entities guesser and a named entities phonetizer are being developed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,490
inproceedings
pala-etal-2010-lexical
Lexical Resources for Noun Compounds in {C}zech, {E}nglish and {Z}ulu
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1608/
Pala, Karel and Fellbaum, Christiane and Bosch, Sonja
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we discuss noun compounding, a highly generative, productive process, in three distinct languages: Czech, English and Zulu. Derivational morphology presents a large grey area between regular, compositional and idiosyncratic, non-compositional word forms. The structural properties of compounds in each of the languages are reviewed and contrasted. Whereas English compounds are head-final and thus left-branching, Czech and Zulu compounds usually consist of a leftmost governing head and a rightmost dependent element. Semantic properties of compounds are discussed with special reference to semantic relations between compound members which cross-linguistically show universal patterns, but idiosyncratic, language specific compounds are also identified. The integration of compounds into lexical resources, and WordNets in particular, remains a challenge that needs to be considered in terms of the compounds’ syntactic idiosyncrasy and semantic compositionality. Experiments with processing compounds in Czech, English and Zulu are reported and partly evaluated. The obtained partial lists of the Czech, English and Zulu compounds are also described.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,491
inproceedings
rebholz-schuhmann-etal-2010-calbc
The {CALBC} Silver Standard Corpus for Biomedical Named Entities {---} A Study in Harmonizing the Contributions from Four Independent Named Entity Taggers
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1609/
Rebholz-Schuhmann, Dietrich and Jimeno Yepes, Antonio Jos{\'e} and van Mulligen, Erik M. and Kang, Ning and Kors, Jan and Milward, David and Corbett, Peter and Buyko, Ekaterina and Tomanek, Katrin and Beisswanger, Elena and Hahn, Udo
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The production of gold standard corpora is time-consuming and costly. We propose an alternative: the 'silver standard corpus' (SSC), a corpus that has been generated by the harmonisation of the annotations that have been delivered from a selection of annotation systems. The systems have to share the type system for the annotations, and the harmonisation solution has to use a suitable similarity measure for the pair-wise comparison of the annotations. The annotation systems have been evaluated against the harmonised set (630,324 sentences, 15,956,841 tokens). We can demonstrate that the annotation of proteins and genes shows higher diversity across all used annotation solutions, leading to a lower agreement against the harmonised set in comparison to the annotations of diseases and species. An analysis of the most frequent annotations from all systems shows that a high agreement amongst systems leads to the selection of terms that are suitable to be kept in the harmonised set. This is the first large-scale approach to generate an annotated corpus from automated annotation systems. Further research is required to understand how the annotations from different systems have to be combined to produce the best annotation result for a harmonised corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,492
inproceedings
melli-2010-concept
Concept Mentions within {KDD}-2009 Abstracts (kdd09cma1) Linked to a {KDD} Ontology (kddo1)
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1610/
Melli, Gabor
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We introduce the kddo1 ontology and semantically annotated kdd09cma1 corpus from the field of knowledge discovery in databases (KDD) research. The corpus is based on the abstracts for the papers accepted into the KDD-2009 conference. Each abstract has its concept mentions identified and, where possible, linked to the appropriate concept in the ontology. The ontology is based on a human generated and readable semantic wiki focused on concepts and relationships for the domain along with other related topics, papers and researchers from information sciences. To our knowledge this is the first ontology and interlinked corpus for a subdiscipline within computing science. The dataset enables the evaluation of supervised approaches to semantic annotation of documents that contain a large number of high-level concepts relative to the number of named entity mentions. We plan to continue to evolve the ontology based on the discovered relations within the corpus and to extend the corpus to cover other research paper abstracts from the domain. Both resources are publicly available at \url{http://www.gabormelli.com/Projects/kdd/data/}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,493
inproceedings
strauss-etal-2010-evaluation
Evaluation of the {PIT} Corpus Or What a Difference a Face Makes?
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1611/
Strau{\ss}, Petra-Maria and Scherer, Stefan and Layher, Georg and Hoffmann, Holger
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents the evaluation of the PIT Corpus of multi-party dialogues recorded in a Wizard-of-Oz environment. An evaluation has been performed with two different foci: First, a usability evaluation was used to take a look at the overall ratings of the system. A shortened version of the SASSI questionnaire, namely the SASSISV, and the well established AttrakDiff questionnaire assessing the hedonistic and pragmatic dimension of computer systems have been analysed. In a second evaluation, the user`s gaze direction was analysed in order to assess the difference in the user`s (gazing) behaviour if interacting with the computer versus the other dialogue partner. Recordings have been performed in different setups of the system, e.g. with and without avatar. Thus, the presented evaluation further focuses on the difference in the interaction caused by deploying an avatar. The quantitative analysis of the gazing behaviour has resulted in several encouraging significant differences. As a possible interpretation it could be argued that users are more attentive towards systems with an avatar - the difference a face makes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,494
inproceedings
nerima-etal-2010-recursive
A Recursive Treatment of Collocations
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1612/
Nerima, Luka and Wehrli, Eric and Seretan, Violeta
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This article discusses the treatment of collocations in the context of a long-term project on the development of multilingual NLP tools. Besides “classical” two-word collocations, we will focus on the case of complex collocations (3 words or more), for which a recursive design is presented in the form of collocations of collocations. Although comparatively less numerous than two-word collocations, complex collocations pose important challenges for NLP. The article discusses how these collocations are retrieved from corpora, inserted and stored in a lexical database, how the parser uses such knowledge, and what advantages are offered by a recursive approach to complex collocations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,495
inproceedings
coppola-moschitti-2010-general
A General Purpose {F}rame{N}et-based Shallow Semantic Parser
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1613/
Coppola, Bonaventura and Moschitti, Alessandro
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In this paper we present a new FrameNet-based Shallow Semantic Parser. Shallow Semantic Parsing has been a popular Natural Language Processing task since the 2004 and 2005 CoNLL Shared Task editions on Semantic Role Labeling, which were based on the PropBank lexical-semantic resource. Nonetheless, efforts in extending such task to the FrameNet setting have been constrained by practical software engineering issues. We hereby analyze these issues, identify desirable requirements for a practical parsing framework, and show the results of our software implementation. In particular, we attempt at meeting requirements arising from both a) the need of a flexible environment supporting current ongoing research, and b) the willingness of providing an effective platform supporting preliminary application prototypes in the field. After introducing the task of FrameNet-based Shallow Semantic Parsing, we sketch the system processing workflow and summarize a set of successful experimental results, directing the reader to previous published papers for extended experiment descriptions and wider discussion of the achieved results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,496
inproceedings
sowa-etal-2010-dicit
{DICIT}: Evaluation of a Distant-talking Speech Interface for Television
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1614/
Sowa, Timo and Arisio, Fiorenza and Cristoforetti, Luca
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
The EC-funded project DICIT developed distant-talking interfaces for interactive TV. The final DICIT prototype system processes multimodal user input by speech and remote control. It was designed to understand both natural language and command-and-control-style speech input. We conducted an evaluation campaign to examine the usability and performance of the prototype. The task-oriented evaluation involved naive test persons and consisted of a subjective part with a usability questionnaire and an objective part. We used three groups of objective metrics to assess the system: one group related to speech component performance, one related to interface design and user awareness, and a final group related to task-based effectiveness and usability. These metrics were acquired with a dedicated transcription and annotation tool. The evaluation revealed quite positive subjective assessments of the system and reasonable objective results. We report how the objective metrics helped us to determine problems in specific areas and to distinguish design-related issues from technical problems. The metrics computed over modality-specific groups also show that speech input gives a usability advantage over remote control for certain types of tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,497
inproceedings
reimerink-etal-2010-ecolexicon
{E}co{L}exicon: An Environmental {TKB}
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1615/
Reimerink, Arianne and Ara{\'u}z, Pilar Le{\'o}n and Redondo, Pedro J. Maga{\~n}a
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
EcoLexicon, a multilingual knowledge resource on the environment, provides an internally coherent information system covering a wide range of specialized linguistic and conceptual needs. Data in our terminological knowledge base (TKB) are primarily hosted in a relational database which is now linked to an ontology in order to apply reasoning techniques and enhance user queries. The advantages of ontological reasoning can only be obtained if conceptual description is based on systematic criteria and a wide inventory of non-hierarchical relations, which confer dynamism to knowledge representation. Thus, our research has mainly focused on conceptual modelling and providing a user-friendly multimodal interface. The dynamic interface, which combines conceptual (networks and definitions), linguistic (contexts, concordances) and graphical information offers users the freedom to surf it according to their needs. Furthermore, dynamism is also present at the representational level. Contextual constraints have been applied to reconceptualise versatile concepts that cause a great deal of information overload.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,498
inproceedings
almeida-etal-2010-bigorna
Bigorna {--} A Toolkit for Orthography Migration Challenges
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1616/
Almeida, Jos{\'e} Jo{\~a}o and Santos, Andr{\'e} and Sim{\~o}es, Alberto
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Languages are born, evolve and, eventually, die. During this evolution their spelling rules (and sometimes the syntactic and semantic ones) change, putting old documents out of use. In Portugal, a pair of political agreements with Brazil forced relevant changes on the way the Portuguese language is written. In this article we will detail these two Orthographic Agreements (one in the thirties and the other more recently, in the nineties), and the challenges present in the automatic migration of old documents' spelling to the current one. We will present Bigorna, a toolkit for the classification of language variants, their comparison and the conversion of texts between different language versions. These tools will be explained together with examples of migration issues. As Bigorna relies on a set of conversion rules, we will also discuss how to infer conversion rules from a set of documents (texts with different ages). The document concludes with a brief evaluation of the conversion and classification tool results and their relevance in the current Portuguese language scenario.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,499
inproceedings
javorsek-erjavec-2010-experimental
Experimental Deployment of a Grid Virtual Organization for Human Language Technologies
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1617/
Javor{\v{s}}ek, Jan Jona and Erjavec, Toma{\v{z}}
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We propose to create a grid virtual organization for human language technologies, at first chiefly with the task of enabling linguistic researchers to use existing distributed computing facilities of the European grid infrastructure for more efficient processing of large data sets. After a brief overview of modern grid computing, a number of common use-cases of natural language processing tasks running on the grid are presented, notably corpus annotation with morpho-syntactic tagging (600+ million-word corpus annotated in less than a day), {\$}n{\$}-gram statistics processing of a corpus, and creation of grid-backed web-accessible services, with annotation and term-extraction as examples. Implementation considerations and common problems of using the grid for this type of task are laid out. We conclude with an outline of a simple action plan for evolving the infrastructure created for these experiments into a fully functional Human Language Technology grid Virtual Organization, with the goal of making the power of European grid infrastructure available to the linguistic community.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,500
inproceedings
charton-torres-moreno-2010-nlgbase
{NLG}b{A}se: A Free Linguistic Resource for Natural Language Processing Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1618/
Charton, Eric and Torres-Moreno, Juan-Manuel
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Availability of labeled language resources, such as annotated corpora and domain dependent labeled language resources, is crucial for experiments in the field of Natural Language Processing. Most often, due to lack of resources, manual verification and annotation of electronic text material is a prerequisite for the development of NLP tools. In the context of under-resourced languages, the lack of corpora becomes a crucial problem because most of the research efforts are supported by organizations with limited funds. Using free, multilingual and highly structured corpora like Wikipedia to produce automatically labeled language resources can be an answer to those needs. This paper introduces NLGbAse, a multilingual linguistic resource built from the Wikipedia encyclopedic content. This system produces structured metadata which make possible the automatic annotation of corpora with syntactic and semantic labels. A metadata record contains semantic and statistical information related to an encyclopedic document. To validate our approach, we built and evaluated a Named Entity Recognition tool, trained with Wikipedia corpora annotated by our system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,501
inproceedings
bosma-vossen-2010-bootstrapping
Bootstrapping Language Neutral Term Extraction
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1619/
Bosma, Wauter and Vossen, Piek
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
A variety of methods exist for extracting terms and relations between terms from a corpus, each of them having strengths and weaknesses. Rather than just using the joint results, we apply different extraction methods in a way that the results of one method are input to another. This gives us the leverage to find terms and relations that otherwise would not be found. Our goal is to create a semantic model of a domain. To that end, we aim to find the complete terminology of the domain, consisting of terms and relations such as hyponymy and meronymy, and connected to generic wordnets and ontologies. Terms are ranked by domain-relevance only as a final step, after terminology extraction is completed. Because term relations are a large part of the semantics of a term, we estimate the relevance from its relation to other terms, in addition to occurrence and document frequencies. In the KYOTO project, we apply language-neutral terminology extraction from a parsed corpus for seven languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,502
inproceedings
choi-etal-2010-propbank-instance
{P}ropbank Instance Annotation Guidelines Using a Dedicated Editor, Jubilee
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1620/
Choi, Jinho D. and Bonial, Claire and Palmer, Martha
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper gives guidelines on how to annotate Propbank instances using a dedicated editor, Jubilee. Propbank is a corpus in which the arguments of each verb predicate are annotated with their semantic roles in relation to the predicate. Propbank annotation also requires the choice of a sense ID for each predicate. Jubilee facilitates this annotation process by displaying several resources of syntactic and semantic information simultaneously: the syntactic structure of a sentence is displayed in the main frame, the available senses with their corresponding argument structures are displayed in another frame, all available Propbank arguments are displayed for the annotator's choice, and example annotations of each sense of the predicate are available to the annotator for viewing. Easy access to each of these resources allows the annotator to quickly absorb and apply the necessary syntactic and semantic information pertinent to each predicate for consistent and efficient annotation. Jubilee has been successfully adapted to many Propbank projects in several universities. The tool runs platform independently, is light enough to run as an X11 application and supports multiple languages such as Arabic, Chinese, English, Hindi and Korean.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,503
inproceedings
swampillai-stevenson-2010-inter
Inter-sentential Relations in Information Extraction Corpora
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1621/
Swampillai, Kumutha and Stevenson, Mark
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
In natural language, relationships between entities can be asserted within a single sentence or over many sentences in a document. Many information extraction systems are constrained to extracting binary relations that are asserted within a single sentence (single-sentence relations), and this limits the proportion of relations they can extract since those expressed across multiple sentences (inter-sentential relations) are not considered. The analysis in this paper focuses on finding the distribution of inter-sentential and single-sentence relations in two corpora used for the evaluation of Information Extraction systems: the MUC6 corpus and the ACE corpus from 2003. In order to carry out this analysis we had to manually mark up all the management succession relations described in the MUC6 corpus. It was found that inter-sentential relations constitute 28.5{\%} and 9.4{\%} of the total number of relations in MUC6 and ACE03 respectively. This places upper bounds on the recall of information extraction systems that do not consider relations that are asserted across multiple sentences (71.5{\%} and 90.6{\%} respectively).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,504
inproceedings
nabende-2010-applying
Applying a Dynamic {B}ayesian Network Framework to Transliteration Identification
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1622/
Nabende, Peter
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Identification of transliterations is aimed at enriching multilingual lexicons and improving performance in various Natural Language Processing (NLP) applications including Cross Language Information Retrieval (CLIR) and Machine Translation (MT). This paper describes work aimed at using the widely applied graphical models approach of Dynamic Bayesian Networks (DBNs) for transliteration identification. The task of estimating transliteration similarity is not very different from specific identification tasks where DBNs have been successfully applied; it is also possible to adapt DBN models from the other identification domains to the transliteration identification domain. In particular, we investigate the applicability of a DBN framework initially proposed by Filali and Bilmes (2005) to learn edit distance estimation parameters for use in pronunciation classification. The DBN framework enables the specification of a variety of models representing different factors that can affect string similarity estimation. Three DBN models associated with two of the DBN classes originally specified by Filali and Bilmes (2005) have been tested on an experimental setup of Russian-English transliteration identification. Two of the DBN models result in high transliteration identification accuracy, and combining the models leads to even better transliteration identification accuracy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,505
inproceedings
balahur-etal-2010-sentiment
Sentiment Analysis in the News
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1623/
Balahur, Alexandra and Steinberger, Ralf and Kabadjov, Mijail and Zavarella, Vanni and van der Goot, Erik and Halkia, Matina and Pouliquen, Bruno and Belyaeva, Jenya
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Recent years have brought a significant growth in the volume of research in sentiment analysis, mostly on highly subjective text types (movie or product reviews). The main difference these texts have with news articles is that their target is clearly defined and unique across the text. Following different annotation efforts and the analysis of the issues encountered, we realised that news opinion mining is different from that of other text types. We identified three subtasks that need to be addressed: definition of the target; separation of the good and bad news content from the good and bad sentiment expressed on the target; and analysis of clearly marked opinion that is expressed explicitly, not needing interpretation or the use of world knowledge. Furthermore, we distinguish three different possible views on newspaper articles {\textemdash} author, reader and text, which have to be addressed differently at the time of analysing sentiment. Given these definitions, we present work on mining opinions about entities in English language news, in which we apply these concepts. Results showed that this idea is more appropriate in the context of news opinion mining and that the approaches taking this into consideration produce a better performance.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,506
inproceedings
sonntag-sacaleanu-2010-speech
Speech Grammars for Textual Entailment Patterns in Multimodal Question Answering
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1624/
Sonntag, Daniel and Sacaleanu, Bogdan
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Over the last several years, speech-based question answering (QA) has become very popular in contrast to pure search engine based approaches on a desktop. Open-domain QA systems are now much more powerful and precise, and they can be used in speech applications. Speech-based question answering systems often rely on predefined grammars for speech understanding. In order to improve the coverage of such complex AI systems, we reused speech patterns used to generate textual entailment patterns. These can make multimodal question understanding more robust. We exemplify this in the context of a domain-specific dialogue scenario. As a result, written text input components (e.g., in a textual input field) can deal with more flexible input according to the derived textual entailment patterns. A multimodal QA dialogue spanning over several domains of interest, i.e., personal address book entries, questions about the music domain and politicians and other celebrities, demonstrates how the textual input mode can be used in a multimodal dialogue shell.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,507
inproceedings
strunk-2010-enriching
Enriching a Treebank to Investigate Relative Clause Extraposition in {G}erman
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1625/
Strunk, Jan
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
I describe the construction of a corpus for research on relative clause extraposition in German based on the treebank T{\"u}Ba-D/Z. I also define an annotation scheme for the relations between relative clauses and their antecedents which is added as a second annotation level to the syntactic trees. This additional annotation level allows for a direct representation of the relevant parts of the relative construction and also serves as a locus for the annotation of additional features which are partly automatically derived from the underlying treebank and partly added manually. Finally, I also report on the results of two pilot studies using this enriched treebank. The first study tests claims made in the theoretical literature on relative clause extraposition with regard to syntactic locality, definiteness, and restrictiveness. It shows that although the theoretical claims often go in the right direction, they go too far by positing categorical constraints that are not supported by the corpus data and thus underestimate the complexity of the data. The second pilot study goes one step in the direction of taking this complexity into account by demonstrating the potential of the enriched treebank for building a multivariate model of relative clause extraposition as a syntactic alternation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,508
inproceedings
de-loupy-etal-2010-french
A {F}rench Human Reference Corpus for Multi-Document Summarization and Sentence Compression
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1626/
de Loupy, Claude and Gu{\'e}gan, Marie and Ayache, Christelle and Seng, Somara and Moreno, Juan-Manuel Torres
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This paper presents two corpora produced within the RPM2 project: a multi-document summarization corpus and a sentence compression corpus. Both corpora are in French. The first is the only such corpus we know of in this language. It contains 20 topics with 20 documents each. A first set of 10 documents per topic is summarized, and the second set is then used to produce an update summarization (new information). 4 annotators were involved and produced a total of 160 abstracts. The second corpus contains all the sentences of the first one. 4 annotators were asked to compress the 8432 sentences. This is the biggest corpus of compressed sentences we know of, in any language. The paper provides some figures in order to compare the different annotators: compression rates, number of tokens per sentence, percentage of tokens kept according to their POS, position of dropped tokens in the sentence compression phase, etc. These figures show important differences from one annotator to another. A further point is the different compression strategies used according to the length of the sentence.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,509
inproceedings
xia-etal-2010-problems
The Problems of Language Identification within Hugely Multilingual Data Sets
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1627/
Xia, Fei and Lewis, Carrie and Lewis, William D.
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
As the data for more and more languages is finding its way into digital form, with an increasing amount of this data being posted to the Web, it has become possible to collect language data from the Web and create large multilingual resources, covering hundreds or even thousands of languages. ODIN, the Online Database of INterlinear text (Lewis, 2006), is such a resource. It currently consists of nearly 200,000 data points for over 1,000 languages, the data for which was harvested from linguistic documents on the Web. We identify a number of issues with language identification for such broad-coverage resources including the lack of training data, ambiguous language names, incomplete language code sets, and incorrect uses of language names and codes. After providing a short overview of existing language code sets maintained by the linguistic community, we discuss what linguists and the linguistic community can do to make the process of language identification easier.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,510
inproceedings
passonneau-etal-2010-word
Word Sense Annotation of Polysemous Words by Multiple Annotators
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1628/
Passonneau, Rebecca J. and Salleb-Aoussi, Ansaf and Bhardwaj, Vikas and Ide, Nancy
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We describe results of a word sense annotation task using WordNet, involving half a dozen well-trained annotators on ten polysemous words for three parts of speech. One hundred sentences for each word were annotated. Annotators had the same level of training and experience, but interannotator agreement (IA) varied across words. There was some effect of part of speech, with higher agreement on nouns and adjectives, but within the words for each part of speech there was wide variation. This variation in IA does not correlate with number of senses in the inventory, or the number of senses actually selected by annotators. In fact, IA was sometimes quite high for words with many senses. We claim that the IA variation is due to the word meanings, contexts of use, and individual differences among annotators. We find some correlation of IA with sense confusability as measured by a sense confusion threshold (CT). Data mining for association rules on a flattened data representation indicating each annotator's sense choices identifies outliers for some words, and systematic differences among pairs of annotators on others.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,511
inproceedings
gasser-2010-expanding
Expanding the Lexicon for a Resource-Poor Language Using a Morphological Analyzer and a Web Crawler
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1629/
Gasser, Michael
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Resource-poor languages may suffer from a lack of any of the basic resources that are fundamental to computational linguistics, including an adequate digital lexicon. Given the relatively small corpus of texts that exists for such languages, extending the lexicon presents a challenge. Languages with complex morphology present a special case, however, because individual words in these languages provide a great deal of information about the grammatical properties of the roots that they are based on. Given a morphological analyzer, it is even possible to extract novel roots from words. In this paper, we look at the case of Tigrinya, a Semitic language with limited lexical resources for which a morphological analyzer is available. It is shown that this analyzer applied to the list of more than 200,000 Tigrinya words that is extracted by a web crawler can extend the lexicon in two ways, by adding new roots and by inferring some of the derivational constraints that apply to known roots.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,512
inproceedings
brown-etal-2010-number
Number or Nuance: Which Factors Restrict Reliable Word Sense Annotation?
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1630/
Brown, Susan Windisch and Rood, Travis and Palmer, Martha
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
This study attempts to pinpoint the factors that restrict reliable word sense annotation, focusing on the influence of the number of senses annotators use and the semantic granularity of those senses. Both of these factors may be possible causes of low interannotator agreement (ITA) when tagging with fine-grained word senses, and, consequently, low WSD system performance (Ng et al., 1999; Snyder {\&} Palmer, 2004; Chklovski {\&} Mihalcea, 2002). If number of senses is the culprit, modifying the task to show fewer senses at a time could improve annotator reliability. However, if overly nuanced distinctions are the problem, then more general, coarse-grained distinctions may be necessary for annotator success and may be all that is needed to supply systems with the types of distinctions that people make. We describe three experiments that explore the role of these factors in annotation performance. Our results indicate that of these two factors, only the granularity of the senses restricts interannotator agreement, with broader senses resulting in higher annotation reliability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,513
inproceedings
gordon-passonneau-2010-evaluation
An Evaluation Framework for Natural Language Understanding in Spoken Dialogue Systems
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1631/
Gordon, Joshua B. and Passonneau, Rebecca J.
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present an evaluation framework to enable developers of information seeking, transaction based spoken dialogue systems to compare the robustness of natural language understanding (NLU) approaches across varying levels of word error rate and contrasting domains. We develop statistical and semantic parsing based approaches to dialogue act identification and concept retrieval. Voice search is used in each approach to ultimately query the database. Included in the framework is a method for developers to bootstrap a representative pseudo-corpus, which is used to estimate NLU performance in a new domain. We illustrate the relative merits of these NLU techniques by contrasting our statistical NLU approach with a semantic parsing method over two contrasting applications, our CheckItOut library system and the deployed Let’s Go Public! system, across four levels of word error rate. We find that with respect to both dialogue act identification and concept retrieval, our statistical NLU approach is more likely to robustly accommodate the freer form, less constrained utterances of CheckItOut at higher word error rates than is possible with semantic parsing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,514
inproceedings
witte-etal-2010-flexible
Flexible Ontology Population from Text: The {O}wl{E}xporter
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1633/
Witte, Ren{\'e} and Khamis, Ninus and Rilling, Juergen
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
Ontology population from text is becoming increasingly important for NLP applications. Ontologies in OWL format provide for a standardized means of modeling, querying, and reasoning over large knowledge bases. Populated from natural language texts, they offer significant advantages over traditional export formats, such as plain XML. The development of text analysis systems has been greatly facilitated by modern NLP frameworks, such as the General Architecture for Text Engineering (GATE). However, ontology population is not currently supported by a standard component. We developed a GATE resource called the OwlExporter that allows one to easily map existing NLP analysis pipelines to OWL ontologies, thereby allowing language engineers to create ontology population systems without requiring extensive knowledge of ontology APIs. A particular feature of our approach is the concurrent population and linking of a domain- and an NLP-ontology, including NLP-specific features such as safe reasoning over coreference chains.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,516
inproceedings
prasad-etal-2010-exploiting
Exploiting Scope for Shallow Discourse Parsing
Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
may
2010
Valletta, Malta
European Language Resources Association (ELRA)
https://aclanthology.org/L10-1634/
Prasad, Rashmi and Joshi, Aravind and Webber, Bonnie
Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10)
null
We present an approach to automatically identifying the arguments of discourse connectives based on data from the Penn Discourse Treebank. Of the two arguments of connectives, called Arg1 and Arg2, we focus on Arg1, which has proven more challenging to identify. Our approach employs a sentence-based representation of arguments, and distinguishes ``intra-sentential connectives'', which take both their arguments in the same sentence, from ``inter-sentential connectives'', whose arguments are found in different sentences. The latter are further distinguished by paragraph position into ``ParaInit'' connectives, which appear in a paragraph-initial sentence, and ``ParaNonInit'' connectives, which appear elsewhere. The paper focuses on predicting Arg1 of Inter-sentential ParaNonInit connectives, presenting a set of scope-based filters that reduce the search space for Arg1 from all the previous sentences in the paragraph to a subset of them. For cases where these filters do not uniquely identify Arg1, coreference-based heuristics are employed. Our analysis shows an absolute 3{\%} performance improvement over the high baseline of 83.3{\%} for identifying Arg1 of Inter-sentential ParaNonInit connectives.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
79,517