entry_type (stringclasses, 4 values) | citation_key (stringlengths 10-110) | title (stringlengths 6-276 ⌀) | editor (stringclasses, 723 values) | month (stringclasses, 69 values) | year (stringdate 1963-01-01 to 2022-01-01) | address (stringclasses, 202 values) | publisher (stringclasses, 41 values) | url (stringlengths 34-62) | author (stringlengths 6-2.07k ⌀) | booktitle (stringclasses, 861 values) | pages (stringlengths 1-12 ⌀) | abstract (stringlengths 302-2.4k) | journal (stringclasses, 5 values) | volume (stringclasses, 24 values) | doi (stringlengths 20-39 ⌀) | n (stringclasses, 3 values) | wer (stringclasses, 1 value) | uas (null) | language (stringclasses, 3 values) | isbn (stringclasses, 34 values) | recall (null) | number (stringclasses, 8 values) | a (null) | b (null) | c (null) | k (null) | f1 (stringclasses, 4 values) | r (stringclasses, 2 values) | mci (stringclasses, 1 value) | p (stringclasses, 2 values) | sd (stringclasses, 1 value) | female (stringclasses, 0 values) | m (stringclasses, 0 values) | food (stringclasses, 1 value) | f (stringclasses, 1 value) | note (stringclasses, 20 values) | __index_level_0__ (int64, 22k-106k) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | otrusina-smrz-2010-new | A New Approach to Pseudoword Generation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1232/ | Otrusina, Lubomir and Smrz, Pavel | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Sense-tagged corpora are used to evaluate word sense disambiguation (WSD) systems. Manual creation of such resources is often prohibitively expensive. That is why the concept of pseudowords - conflations of two or more unambiguous words - has been integrated into WSD evaluation experiments. This paper presents a new method of pseudoword generation which takes into account semantic-relatedness of the candidate words forming parts of the pseudowords to the particular senses of the word to be disambiguated. We compare the new approach to its alternatives and show that the results on pseudowords, that are more similar to real ambiguous words, better correspond to the actual results. Two techniques assessing the similarity are studied - the first one takes advantage of manually created dictionaries (wordnets), the second one builds on the automatically computed statistical data obtained from large corpora. Pros and cons of the two techniques are discussed and the results on a standard task are demonstrated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,115 |
inproceedings | saggion-etal-2010-nlp | {NLP} Resources for the Analysis of Patient/Therapist Interviews | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1233/ | Saggion, Horacio and Stein-Sparvieri, Elena and Maldavsky, David and Szasz, Sandra | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a set of tools and resources for the analysis of interviews during psychotherapy sessions. One of the main components of the work is a dictionary-based text interpretation tool for the Spanish language. The tool is designed to identify a subset of Freudian drives in patient and therapist discourse. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,116 |
inproceedings | gibbon-etal-2010-medefaidrin | {M}edefaidrin: Resources Documenting the Birth and Death Language Life-cycle | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1234/ | Gibbon, Dafydd and Ekpenyong, Moses and Urua, Eno-Abasi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Language resources are typically defined and created for application in speech technology contexts, but the documentation of languages which are unlikely ever to be provided with enabling technologies nevertheless plays an important role in defining the heritage of a speech community and in the provision of basic insights into the language oriented components of human cognition. This is particularly true of endangered languages. The present case study concerns the documentation both of the birth and of the endangerment within a rather short space of time of a spirit language, Medefaidrin, created and used as a vehicular language by a religious community in South-Eastern Nigeria. The documentation shows phonological, orthographic, morphological, syntactic and textual typological features of Medefaidrin which indicate that typological properties of English were a model for the creation of the language, rather than typological properties of the enclaving language, Ibibio. The documentation is designed as part of the West African Language Archive (WALA), following OLAC metadata standards. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,117 |
inproceedings | sato-kaide-2010-person | A Person-Name Filter for Automatic Compilation of Bilingual Person-Name Lexicons | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1235/ | Sato, Satoshi and Kaide, Sayoko | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper proposes a simple and fast person-name filter, which plays an important role in automatic compilation of a large bilingual person-name lexicon. This filter is based on pn{\_}score, which is the sum of two component scores, the score of the first name and that of the last name. Each score is calculated from two term sets: one is a dense set in which most of the members are person names; another is a baseline set that contains less person names. The pn{\_}score takes one of five values, {\{}+2, +1, 0, -1, -2{\}}, which correspond to strong positive, positive, undecidable, negative, and strong negative, respectively. This pn{\_}score can be easily extended to bilingual pn{\_}score that takes one of nine values, by summing scores of two languages. Experimental results show that our method works well for monolingual person names in English and Japanese; the F-score of each language is 0.929 and 0.939, respectively. The performance of the bilingual person-name filter is better; the F-score is 0.955. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,118 |
inproceedings | al-sabbagh-girju-2010-mining | Mining the Web for the Induction of a Dialectical {A}rabic Lexicon | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1236/ | Al-Sabbagh, Rania and Girju, Roxana | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the first phase of building a lexicon of Egyptian Cairene Arabic (ECA) {\textemdash} one of the most widely understood dialects in the Arab World {\textemdash} and Modern Standard Arabic (MSA). Each ECA entry is mapped to its MSA synonym, Part-of-Speech (POS) tag and top-ranked contexts based on Web queries; and thus each entry is provided with basic syntactic and semantic information for a generic lexicon compatible with multiple NLP applications. Moreover, through their MSA synonyms, ECA entries acquire access to MSA available NLP tools and resources which are considerably available. Using an associationist approach based on the correlations between word co-occurrence patterns in both dialects, we change the direction of the acquisition process from parallel to circular to overcome a bottleneck of current research on Arabic dialects, namely the lack of parallel corpora, and to alleviate accuracy rates for using unrelated Web documents which are more frequently available. Manually evaluated for 1,000 word entries by two native speakers of the ECA-MSA varieties, the proposed approach achieves a promising F-measured performance rate of 70.9{\%}. In discussion to the proposed algorithm, different semantic issues are highlighted for upcoming phases of the induction of a more comprehensive ECA-MSA lexicon. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,119 |
inproceedings | heinroth-etal-2010-efficient | Efficient Spoken Dialogue Domain Representation and Interpretation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1237/ | Heinroth, Tobias and Denich, Dan and Schmitt, Alexander and Minker, Wolfgang | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We provide a detailed look on the functioning of the OwlSpeak Spoken Dialogue Manager, which is part of the EU-funded project ATRACO. OwlSpeak interprets Spoken Dialogue Ontologies and on this basis generates VoiceXML dialogue snippets. The dialogue snippets can be interpreted by all speech servers that provide VoiceXML support and therefore make the dialogue management independent from the hosting systems providing speech recognition and synthesis. Ontologies are used within the framework of our prototype to represent specific spoken dialogue domains that can dynamically be broadened or tightened during an ongoing dialogue. We provide an exemplary dialogue encoded as OWL model and explain how this model is interpreted by the dialogue manager. The combination of a unified model for dialogue domains and the strict model-view-controller architecture that underlies the dialogue manager lead to an efficient system that allows for a new way of spoken dialogue system development and can be used for further research on adaptive spoken dialogue strategies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,120 |
inproceedings | dreuw-etal-2010-signspeak | The {S}ign{S}peak Project - Bridging the Gap Between Signers and Speakers | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1238/ | Dreuw, Philippe and Ney, Hermann and Martinez, Gregorio and Crasborn, Onno and Piater, Justus and Moya, Jose Miguel and Wheatley, Mark | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The SignSpeak project will be the first step to approach sign language recognition and translation at a scientific level already reached in similar research fields such as automatic speech recognition or statistical machine translation of spoken languages. Deaf communities revolve around sign languages as they are their natural means of communication. Although deaf, hard of hearing and hearing signers can communicate without problems amongst themselves, there is a serious challenge for the deaf community in trying to integrate into educational, social and work environments. The overall goal of SignSpeak is to develop a new vision-based technology for recognizing and translating continuous sign language to text. New knowledge about the nature of sign language structure from the perspective of machine recognition of continuous sign language will allow a subsequent breakthrough in the development of a new vision-based technology for continuous sign language recognition and translation. Existing and new publicly available corpora will be used to evaluate the research progress throughout the whole project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,121 |
inproceedings | kubo-etal-2010-automatic | Automatic Term Recognition Based on the Statistical Differences of Relative Frequencies in Different Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1239/ | Kubo, Junko and Tsuji, Keita and Sugimoto, Shigeo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we propose a method for automatic term recognition (ATR) which uses the statistical differences of relative frequencies of terms in target domain corpus and elsewhere. Generally, the target terms appear more frequently in target domain corpus than in other domain corpora. Utilizing such characteristics will lead to the improvement of extraction performance. Most of the ATR methods proposed so far only use the target domain corpus and do not take such characteristics into account. For the extraction experiment, we used the abstracts of a women's studies journal as a target domain corpus and those of academic journals of 39 domains as other domain corpora. The women's studies terms which were used for extraction evaluation were manually identified terms in the abstracts. The extraction performance was analyzed and we found that our method outperformed earlier methods. The previous methods were based on C-value, FLR and methods which were also used with other domain corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,122 |
inproceedings | seretan-etal-2010-fipsromanian | {F}ips{R}omanian: Towards a {R}omanian Version of the Fips Syntactic Parser | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1240/ | Seretan, Violeta and Wehrli, Eric and Nerima, Luka and Soare, Gabriela | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe work in progress on the development of a full syntactic parser for Romanian. This work is part of a larger project of multilingual extension of the Fips parser (Wehrli, 2007), already available for French, English, German, Spanish, Italian, and Greek, to four new languages (Romanian, Romansh, Russian and Japanese). The Romanian version was built by starting with the Fips generic parsing architecture for the Romance languages and customising the grammatical component, in close relation to the development of the lexical component. We describe this process and report on preliminary results obtained for journalistic texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,123 |
inproceedings | edlund-etal-2010-spontal | {S}pontal: A {S}wedish Spontaneous Dialogue Corpus of Audio, Video and Motion Capture | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1241/ | Edlund, Jens and Beskow, Jonas and Elenius, Kjell and Hellmer, Kahl and Str{\"o}mbergsson, Sofia and House, David | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present the Spontal database of spontaneous Swedish dialogues. 120 dialogues of at least 30 minutes each have been captured in high-quality audio, high-resolution video and with a motion capture system. The corpus is currently being processed and annotated, and will be made available for research at the end of the project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,124 |
inproceedings | magdy-etal-2010-building | Building a Domain-specific Document Collection for Evaluating Metadata Effects on Information Retrieval | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1242/ | Magdy, Walid and Min, Jinming and Leveling, Johannes and Jones, Gareth J. F. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the development of a structured document collection containing user-generated text and numerical metadata for exploring the exploitation of metadata in information retrieval (IR). The collection consists of more than 61,000 documents extracted from YouTube video pages on basketball in general and NBA (National Basketball Association) in particular, together with a set of 40 topics and their relevance judgements. In addition, a collection of nearly 250,000 user profiles related to the NBA collection is available. Several baseline IR experiments report the effect of using video-associated metadata on retrieval effectiveness. The results surprisingly show that searching the videos titles only performs significantly better than searching additional metadata text fields of the videos such as the tags or the description. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,125 |
inproceedings | saggion-funk-2010-interpreting | Interpreting {S}enti{W}ord{N}et for Opinion Classification | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1243/ | Saggion, Horacio and Funk, Adam | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a set of tools, resources, and experiments for opinion classification in business-related datasources in two languages. In particular we concentrate on SentiWordNet text interpretation to produce word, sentence, and text-based sentiment features for opinion classification. We achieve good results in experiments using supervised learning machine over syntactic and sentiment-based features. We also show preliminary experiments where the use of summaries before opinion classification provides competitive advantage over the use of full documents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,126 |
inproceedings | baum-etal-2010-disco | {D}i{SC}o - A {G}erman Evaluation Corpus for Challenging Problems in the Broadcast Domain | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1244/ | Baum, Doris and Schneider, Daniel and Bardeli, Rolf and Schwenninger, Jochen and Samlowski, Barbara and Winkler, Thomas and K{\"o}hler, Joachim | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Typical broadcast material contains not only studio-recorded texts read by trained speakers, but also spontaneous and dialect speech, debates with cross-talk, voice-overs, and on-site reports with difficult acoustic environments. Standard approaches to speech and speaker recognition usually deteriorate under such conditions. This paper reports on the design, construction, and experimental analysis of DiSCo, a German corpus for the evaluation of speech and speaker recognition on challenging material from the broadcast domain. One of the key requirements for the design of this corpus was a good coverage of different types of serious programmes beyond clean speech and planned speech broadcast news. Corpus annotation encompasses manual segmentation, an orthographic transcription, and labelling with speech mode, dialect, and noise type. We indicate typical use cases for the corpus by reporting results from ASR, speech search, and speaker recognition on the new corpus, thereby obtaining insights into the difficulty of audio recognition on the various classes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,127 |
inproceedings | balvet-etal-2010-creagest | The Creagest Project: a Digitized and Annotated Corpus for {F}rench {S}ign {L}anguage ({LSF}) and Natural Gestural Languages | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1245/ | Balvet, Antonio and Courtin, Cyril and Boutet, Dominique and Cuxac, Christian and Fusellier-Souza, Ivani and Garcia, Brigitte and L{'}Huillier, Marie-Th{\'e}r{\`e}se and Sallandre, Marie-Anne | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we discuss the theoretical, sociolinguistic, methodological and technical objectives and issues of the French Creagest Project (2007-2012) in setting up, documenting and annotating a large corpus of adult and child French Sign Language (LSF) and of natural gestural language. The main objective of this ANR-funded research project is to set up a collaborative web-based platform for the study of semiogenesis in LSF (French Sign Language), i.e. the study of emerging structures and signs, be they used by Deaf adult signers, Deaf children, or even by Deaf and hearing subjects in interaction. In section 2, we address theoretical and practical issues, emphasizing the outstanding features of the Creagest Project. In section 3, we deal with methodological issues for data collection. Finally, in section 4, we examine technical aspects of LSF video data editing and corpus annotation, in the perspective of setting up a corpus-based formalized description of LSF. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,128 |
inproceedings | tomas-etal-2010-speech | Speech Translation in Pedagogical Environment Using Additional Sources of Knowledge | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1246/ | Tom{\'a}s, Jes{\'u}s and Canovas, Alejandro and Lloret, Jaime and Pineda, Miguel Garc{\'i}a and Abad, Jose L. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | A key aspect in the development of statistical translators is the synergic combination of different sources of knowledge. This work describes the effect and implications that would have adding additional other-than-voice information in a voice translation system. In the model discussed the additional information serves as the bases for the log-linear combination of several statistical models. A prototype that implements a real-time speech translation system from Spanish to English that is adapted to specific teaching-related environments is presented. In the scenario of analysis a teacher as speaker giving an educational class could use a real time translation system with foreign students. The teacher could add slides or class notes as additional reference to the voice translation system. Should notes be already translated into the destination language the system could have even more accuracy. We present the theoretical framework of the problem, summarize the overall architecture of the system, show how the system is enhanced with capabilities related to capturing the additional information; and finally present the initial performance results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,129 |
inproceedings | grishina-etal-2010-design | Design and Data Collection for the Accentological Corpus of the {R}ussian Language | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1247/ | Grishina, Elena and Savchuk, Svetlana and Poljakov, Alexej | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Accentological corpus provides a researcher an opportunity to study word stress and stress variation, which are very important for the Russian language. Moreover, Accentological corpus allows studying the history of the Russian language stress development. The research presents the main characteristics of Accentological corpus available at ruscorpora.ru. Corpora size, type and sources of text material, the way it is represented in the corpora, types of linguistic annotation, corpora composition and ways of their effective use according to their purposes are described. There are two zones in the Accentological corpus. 1) The zone of prose includes oral texts and films transcripts, in which stressed syllables are marked according to the real pronunciation. 2) The zone of poetry contains texts with marked accented syllables, so it is possible to define the exact word stress using special rules. The Accentological corpus has four types of annotations (metatextual, morphological, semantic and sociological) and also has its own accentological mark-up. Due to accentological annotation each word is supplied with stress marks, so a user can make queries and retrieve the stressed or unstressed word forms in combination with grammatical and semantic features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,130 |
inproceedings | felt-etal-2010-ccash | {CCASH}: A Web Application Framework for Efficient, Distributed Language Resource Development | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1248/ | Felt, Paul and Merkling, Owen and Carmen, Marc and Ringger, Eric and Lemmon, Warren and Seppi, Kevin and Haertel, Robbie | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We introduce CCASH (Cost-Conscious Annotation Supervised by Humans), an extensible web application framework for cost-efficient annotation. CCASH provides a framework in which cost-efficient annotation methods such as Active Learning can be explored via user studies and afterwards applied to large annotation projects. CCASH's architecture is described as well as the technologies that it is built on. CCASH allows custom annotation tasks to be built from a growing set of useful annotation widgets. It also allows annotation methods (such as AL) to be implemented in any language. Being a web application framework, CCASH offers secure centralized data and annotation storage and facilitates collaboration among multiple annotators. By default it records timing information about each annotation and provides facilities for recording custom statistics. The CCASH framework has been used to evaluate a novel annotation strategy presented in a concurrently published paper, and will be used in the future to annotate a large Syriac corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,131 |
inproceedings | pucher-etal-2010-resources | Resources for Speech Synthesis of Viennese Varieties | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1249/ | Pucher, Michael and Neubarth, Friedrich and Strom, Volker and Moosm{\"u}ller, Sylvia and Hofer, Gregor and Kranzler, Christian and Schuchmann, Gudrun and Schabus, Dietmar | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes our work on developing corpora of three varieties of Viennese for unit selection speech synthesis. The synthetic voices for Viennese varieties, implemented with the open domain unit selection speech synthesis engine Multisyn of Festival will also be released within Festival. The paper especially focuses on two questions: how we selected the appropriate speakers and how we obtained the text sources needed for the recording of these non-standard varieties. Regarding the first one, it turned out that working with a prototypical professional speaker was much more preferable than striving for authenticity. In addition, we give a brief outline about the differences between the Austrian standard and its dialectal varieties and how we solved certain technical problems that are related to these differences. In particular, the specific set of phones applicable to each variety had to be determined by applying various constraints. Since such a set does not serve any descriptive purposes but rather is influencing the quality of speech synthesis, a careful design of such a set was an important task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,132 |
inproceedings | wiegand-klakow-2010-predictive | Predictive Features for Detecting Indefinite Polar Sentences | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1250/ | Wiegand, Michael and Klakow, Dietrich | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In recent years, text classification in sentiment analysis has mostly focused on two types of classification, the distinction between objective and subjective text, i.e. subjectivity detection, and the distinction between positive and negative subjective text, i.e. polarity classification. So far, there has been little work examining the distinction between definite polar subjectivity and indefinite polar subjectivity. While the former are utterances which can be categorized as either positive or negative, the latter cannot be categorized as either of these two categories. This paper presents a small set of domain independent features to detect indefinite polar sentences. The features reflect the linguistic structure underlying these types of utterances. We give evidence for the effectiveness of these features by incorporating them into an unsupervised rule-based classifier for sentence-level analysis and compare its performance with supervised machine learning classifiers, i.e. Support Vector Machines (SVMs) and Nearest Neighbor Classifier (kNN). The data used for the experiments are web-reviews collected from three different domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,133 |
inproceedings | heid-etal-2010-term | Term and Collocation Extraction by Means of Complex Linguistic Web Services | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1251/ | Heid, Ulrich and Fritzinger, Fabienne and Hinrichs, Erhard and Hinrichs, Marie and Zastrow, Thomas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a web service-based environment for the use of linguistic resources and tools to address issues of terminology and language varieties. We discuss the architecture, corpus representation formats, components and a chainer supporting the combination of tools into task-specific services. Integrated into this environment, single web services also become part of complex scenarios for web service use. Our web services take for example corpora of several million words as an input on which they perform preprocessing, such as tokenisation, tagging, lemmatisation and parsing, and corpus exploration, such as collocation extraction and corpus comparison. Here we present an example on extraction of single and multiword items typical of a specific domain or typical of a regional variety of German. We also give a critical review on needs and available functions from a user's point of view. The work presented here is part of ongoing experimentation in the D-SPIN project, the German national counterpart of CLARIN. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,134 |
inproceedings | calzolari-soria-2010-preparing | Preparing the field for an Open Resource Infrastructure: the role of the {FL}a{R}e{N}et Network of Excellence | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1252/ | Calzolari, Nicoletta and Soria, Claudia | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In order to overcome the fragmentation that affects the field of Language Resources and Technologies, an Open and Distributed Resource Infrastructure is the necessary step for building on each other achievements, integrating resources and technologies and avoiding dispersed or conflicting efforts. Since this endeavour represents a true cultural turnpoint in the LRs field, it needs a careful preparation, both in terms of acceptance by the community and thoughtful investigation of the various technical, organisational and practical aspects implied. To achieve this, we need to act as a community able to join forces on a set of shared priorities and we need to act at a worldwide level. FLaReNet {\textemdash} Fostering Language Resources Network {\textemdash} is a Thematic Network funded under the EU eContent program that aims at developing the needed common vision and fostering a European and International strategy for consolidating the sector, thus enhancing competitiveness at EU level and worldwide. In this paper we present the activities undertaken by FLaReNet in order to prepare and support the establishment of such an Infrastructure, which is becoming now a reality within the new MetaNet initiative. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,135 |
inproceedings | calzolari-etal-2010-lrec | The {LREC} Map of Language Resources and Technologies | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1253/ | Calzolari, Nicoletta and Soria, Claudia and Del Gratta, Riccardo and Goggi, Sara and Quochi, Valeria and Russo, Irene and Choukri, Khalid and Mariani, Joseph and Piperidis, Stelios | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we present the LREC Map of Language Resources and Tools, an innovative feature introduced with this LREC. The purpose of the Map is to shed light on the vast amount of resources and tools that represent the background of the research presented at LREC, in the attempt to fill in a gap in the community knowledge about the resources and tools that are used or created worldwide. It also aims at a change of culture in the field, actively engaging each researcher in the documentation task about resources. The Map has been developed on the basis of the information provided by LREC authors during the submission of papers to the LREC 2010 conference and the LREC workshops, and contains information about almost 2000 resources. The paper illustrates the motivation behind this initiative, its main characteristics, its relevance and future impact in the field, the metadata used to describe the resources, and finally presents some of the most relevant findings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,136 |
inproceedings | moreau-etal-2010-evaluation | Evaluation Protocol and Tools for Question-Answering on Speech Transcripts | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1254/ | Moreau, Nicolas and Hamon, Olivier and Mostefa, Djamel and Rosset, Sophie and Galibert, Olivier and Lamel, Lori and Turmo, Jordi and Comas, Pere R. and Rosso, Paolo and Buscaldi, Davide and Choukri, Khalid | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Question Answering (QA) technology aims at providing relevant answers to natural language questions. Most Question Answering research has focused on mining document collections containing written texts to answer written questions. In addition to written sources, a large (and growing) amount of potentially interesting information appears in spoken documents, such as broadcast news, speeches, seminars, meetings or telephone conversations. The QAST track (Question-Answering on Speech Transcripts) was introduced in CLEF to investigate the problem of question answering in such audio documents. This paper describes in detail the evaluation protocol and tools designed and developed for the CLEF-QAST evaluation campaigns that have taken place between 2007 and 2009. We first remind the data, question sets, and submission procedures that were produced or set up during these three campaigns. As for the evaluation procedure, the interface that was developed to ease the assessors' work is described. In addition, this paper introduces a methodology for a semi-automatic evaluation of QAST systems based on time slot comparisons. Finally, the QAST Evaluation Package 2007-2009 resulting from these evaluation campaigns is also introduced. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,137 |
inproceedings | sanroma-boleda-2010-database | The Database of {C}atalan Adjectives | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1255/ | Sanrom{\`a}, Roser and Boleda, Gemma | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present the Database of Catalan Adjectives (DCA), a database with 2,296 adjective lemmata enriched with morphological, syntactic and semantic information. This set of adjectives has been collected from a fragment of the Corpus Textual Informatitzat de la Llengua Catalana of the Institut d'Estudis Catalans and constitutes a representative sample of the adjective class in Catalan as a whole. The database includes both manually coded and automatically extracted information regarding the most prominent properties used in the literature regarding the semantics of adjectives, such as morphological origin, suffix (if any), predicativity, gradability, adjective position with respect to the head noun, adjective modifiers, or semantic class. The DCA can be useful for NLP applications using adjectives (from POS-taggers to Opinion Mining applications) and for linguistic analysis regarding the morphological, syntactic, and semantic properties of adjectives. We now make it available to the research community under a Creative Commons Attribution Share Alike 3.0 Spain license. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,138 |
inproceedings | zouaq-etal-2010-syntactic | Can Syntactic and Logical Graphs help Word Sense Disambiguation? | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1256/ | Zouaq, Amal and Gagnon, Michel and Ozell, Benoit | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents a word sense disambiguation (WSD) approach based on syntactic and logical representations. The objective here is to run a number of experiments to compare standard contexts (word windows, sentence windows) with contexts provided by a dependency parser (syntactic context) and a logical analyzer (logico-semantic context). The approach presented here relies on a dependency grammar for the syntactic representations. We also use a pattern knowledge base over the syntactic dependencies to extract flat predicative logical representations. These representations (syntactic and logical) are then used to build context vectors that are exploited in the WSD process. Various state-of-the-art algorithms including Simplified Lesk, Banerjee and Pedersen and frequency of co-occurrences are tested with these syntactic and logical contexts. Preliminary results show that defining context vectors based on these features may improve WSD by comparison with classical word and sentence context windows. However, future experiments are needed to provide more evidence over these issues. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,139 |
inproceedings | wang-etal-2010-automatic | Automatic Acquisition of {C}hinese Novel Noun Compounds | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1257/ | Wang, Meng and Huang, Chu-Ren and Yu, Shiwen and Sun, Weiwei | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Automatic acquisition of novel compounds is notoriously difficult because most novel compounds have relatively low frequency in a corpus. The current study proposes a new method to deal with the novel compound acquisition challenge. We model this task as a two-class classification problem in which a candidate compound is either classified as a compound or a non-compound. A machine learning method using SVM, incorporating two types of linguistically motivated features: semantic features and character features, is applied to identify rare but valid noun compounds. We explore two kinds of training data: one is virtual training data which is obtained by three statistical scores, i.e. co-occurrence frequency, mutual information and dependent ratio, from the frequent compounds; the other is real training data which is randomly selected from the infrequent compounds. We conduct comparative experiments, and the experimental results show that even with limited direct evidence in the corpus for the novel compounds, we can make full use of the typical frequent compounds to help in the discovery of the novel compounds. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,140 |
inproceedings | oostdijk-etal-2010-constructing | Constructing a Broad-coverage Lexicon for Text Mining in the Patent Domain | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1258/ | Oostdijk, Nelleke and Verberne, Suzan and Koster, Cornelis | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | For mining intellectual property texts (patents), a broad-coverage lexicon that covers general English words together with terminology from the patent domain is indispensable. The patent domain is very diffuse as it comprises a variety of technical domains (e.g. Human Necessities, Chemistry {\&} Metallurgy and Physics in the International Patent Classification). As a result, collecting a lexicon that covers the language used in patent texts is not a straightforward task. In this paper we describe the approach that we have developed for the semi-automatic construction of a broad-coverage lexicon for classification and information retrieval in the patent domain and which combines information from multiple sources. Our contribution is twofold. First, we provide insight into the difficulties of developing lexical resources for information retrieval and text mining in the patent domain, a research and development field that is expanding quickly. Second, we create a broad coverage lexicon annotated with rich lexical information and containing both general English word forms and domain terminology for various technical domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,141 |
inproceedings | bedaride-gardent-2010-syntactic | Syntactic Testsuites and Textual Entailment Recognition | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1259/ | Bedaride, Paul and Gardent, Claire | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We focus on textual entailments mediated by syntax and propose a new methodology to evaluate textual entailment recognition systems on such data. The main idea is to generate a syntactically annotated corpus of pairs of (non-)entailments and to use error mining methodology from the parsing field to identify the most likely sources of errors. To generate the evaluation corpus we use a template based generation approach where sentences, semantic representations and syntactic annotations are all created at the same time. Furthermore, we adapt the error mining methodology initially proposed for parsing to the field of textual entailment. To illustrate the approach, we apply the proposed methodology to the Afazio RTE system (an hybrid system focusing on syntactic entailment) and show how it permits identifying the most likely sources of errors made by this system on a testsuite of 10 000 (non-)entailment pairs which is balanced in term of (non-)entailment and in term of syntactic annotations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,142 |
inproceedings | stepanek-pajas-2010-querying | Querying Diverse Treebanks in a Uniform Way | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1260/ | {\v{S}}t{\v{e}}p{\'a}nek, Jan and Pajas, Petr | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents a system for querying treebanks in a uniform way. The system is able to work with both dependency and constituency based treebanks in any language. We demonstrate its abilities on 11 different treebanks. The query language used by the system provides many features not available in other existing systems while still keeping the performance efficient. The paper also describes the conversion of ten treebanks into a common XML-based format used by the system, touching the question of standards and formats. The paper then shows several examples of linguistically interesting questions that the system is able to answer, for example browsing verbal clauses without subjects or extraposed relative clauses, generating the underlying grammar in a constituency treebank, searching for non-projective edges in a dependency treebank, or word-order typology of a language based on the treebank. The performance of several implementations of the system is also discussed by measuring the time requirements of some of the queries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,143 |
inproceedings | delmonte-etal-2010-deep | Deep Linguistic Processing with {GETARUNS} for Spoken Dialogue Understanding | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1261/ | Delmonte, Rodolfo and Bristot, Antonella and Pallotta, Vincenzo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we will present work carried out to scale up the system for text understanding called GETARUNS, and port it to be used in dialogue understanding. The current goal is that of extracting automatically argumentative information in order to build argumentative structure. The long term goal is using argumentative structure to produce automatic summarization of spoken dialogues. Very much like other deep linguistic processing systems, our system is a generic text/dialogue understanding system that can be used in connection with an ontology {\textemdash} WordNet - and other similar repositories of commonsense knowledge. We will present the adjustments we made in order to cope with transcribed spoken dialogues like those produced in the ICSI Berkeley project. In a final section we present preliminary evaluation of the system on two tasks: the task of automatic argumentative labeling and another frequently addressed task: referential vs. non-referential pronominal detection. Results obtained fair much higher than those reported in similar experiments with machine learning approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,144 |
inproceedings | mohamed-kubler-2010-arabic | {A}rabic Part of Speech Tagging | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1262/ | Mohamed, Emad and K{\"u}bler, Sandra | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Arabic is a morphologically rich language, which presents a challenge for part of speech tagging. In this paper, we compare two novel methods for POS tagging of Arabic without the use of gold standard word segmentation but with the full POS tagset of the Penn Arabic Treebank. The first approach uses complex tags that describe full words and does not require any word segmentation. The second approach is segmentation-based, using a machine learning segmenter. In this approach, the words are first segmented, then the segments are annotated with POS tags. Because of the word-based approach, we evaluate full word accuracy rather than segment accuracy. Word-based POS tagging yields better results than segment-based tagging (93.93{\%} vs. 93.41{\%}). Word based tagging also gives the best results on known words, the segmentation-based approach gives better results on unknown words. Combining both methods results in a word accuracy of 94.37{\%}, which is very close to the result obtained by using gold standard segmentation (94.91{\%}). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,145 |
inproceedings | pak-paroubek-2010-twitter | {T}witter as a Corpus for Sentiment Analysis and Opinion Mining | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1263/ | Pak, Alexander and Paroubek, Patrick | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Microblogging today has become a very popular communication tool among Internet users. Millions of users share opinions on different aspects of life everyday. Therefore microblogging web-sites are rich sources of data for opinion mining and sentiment analysis. Because microblogging has appeared relatively recently, there are a few research works that were devoted to this topic. In our paper, we focus on using Twitter, the most popular microblogging platform, for the task of sentiment analysis. We show how to automatically collect a corpus for sentiment analysis and opinion mining purposes. We perform linguistic analysis of the collected corpus and explain discovered phenomena. Using the corpus, we build a sentiment classifier, that is able to determine positive, negative and neutral sentiments for a document. Experimental evaluations show that our proposed techniques are efficient and performs better than previously proposed methods. In our research, we worked with English, however, the proposed technique can be used with any other language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,146 |
inproceedings | nemoto-etal-2010-word | Word Boundaries in {F}rench: Evidence from Large Speech Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1264/ | Nemoto, Rena and Adda-Decker, Martine and Durand, Jacques | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The goal of this paper is to investigate French word segmentation strategies using phonemic and lexical transcriptions as well as prosodic and part-of-speech annotations. Average fundamental frequency (f0) profiles and phoneme duration profiles are measured using 13 hours of broadcast news speech to study prosodic regularities of French words. Some influential factors are taken into consideration for f0 and duration measurements: word syllable length, word-final schwa, part-of-speech. Results from average f0 profiles confirm word-final syllable accentuation, and from average duration profiles we observe long word-final syllable length. Both are common tendencies in French. From noun phrase studies, results of average f0 profiles illustrate a higher f0 on the noun's first syllable after a determiner. Inter-vocalic duration profile results show long inter-vocalic duration between the determiner vowel and the preceding word's vowel. These results reveal measurable cues contributing to word boundary location. Further studies will include more detailed within-syllable f0 patterns, other speaking styles and languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,147
inproceedings | sagot-etal-2010-lexicon | A Lexicon of {F}rench Quotation Verbs for Automatic Quotation Extraction | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1265/ | Sagot, Beno{\^i}t and Danlos, Laurence and Stern, Rosa | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Quotation extraction is an important information extraction task, especially when dealing with news wires. Quotations can be found in various configurations. In this paper, we focus on direct quotations introduced by a parenthetical clause, headed by a "quotation verb". Our study is based on a large French news wire corpus from the Agence France-Presse. We introduce and motivate an analysis at the discursive level of such quotations, which differs from the syntactic analyses generally proposed. We show how we enriched the Lefff syntactic lexicon so that it provides an account for quotation verbs heading a quotation parenthetical, especially those extracted from a news wire corpus. We also sketch how these lexical entries can be extended to the discursive level in order to model quotations introduced in a parenthetical clause in a complete way. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,148
inproceedings | mikulova-stepanek-2010-ways | Ways of Evaluation of the Annotators in Building the {P}rague {C}zech-{E}nglish {D}ependency {T}reebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1266/ | Mikulov{\'a}, Marie and {\v{S}}t{\v{e}}p{\'a}nek, Jan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present several ways to measure and evaluate the annotation and annotators, proposed and used during the building of the Czech part of the Prague Czech-English Dependency Treebank. First, the basic principles of the treebank annotation project are introduced (division into three layers: morphological, analytical and tectogrammatical). The main part of the paper describes in detail one of the important phases of the annotation process: three ways of evaluating the annotators - inter-annotator agreement, error rate and performance. Measuring inter-annotator agreement is complicated by the fact that the data contain added and deleted nodes, making the alignment between annotations non-trivial. The error rate is measured by a set of automatic checking procedures that guard the validity of some invariants in the data. The performance of the annotators is measured by a booking web application. All three measures are later compared and related to each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,149
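Inter-annotator agreement of the kind measured in mikulova-stepanek-2010-ways is commonly reported as Cohen's kappa, which corrects raw agreement for chance. The sketch below assumes the two annotators' labels are already aligned one-to-one (the entry notes that this alignment is non-trivial when nodes are added or deleted; that step is omitted here), and the functor-like labels are invented.

```python
# Minimal Cohen's kappa sketch for two aligned annotations.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of positions where both annotators match.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: expected match rate from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["ACT", "PAT", "ACT", "EFF", "PAT"]  # invented labels
b = ["ACT", "PAT", "PAT", "EFF", "PAT"]
print(round(cohens_kappa(a, b), 3))  # 0.688
```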
inproceedings | vanopstal-etal-2010-assessing | Assessing the Impact of {E}nglish Language Skills and Education Level on {P}ub{M}ed Searches by {D}utch-speaking Users | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1267/ | Vanopstal, Klaar and Vander Stichele, Robert and Laureys, Godelieve and Buysschaert, Joost | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The aim of this study was to assess the retrieval effectiveness of nursing students in the Dutch-speaking part of Belgium. We tested two groups: students from the master of Nursing and Midwifery training, and students from the bachelor of Nursing program. The test consisted of five parts: first, the students completed an enquiry about their computer skills, experiences with PubMed and how they assessed their own language skills. Secondly, an introduction into the use of MeSH in PubMed was given, followed by a PubMed search. After the literature search, a second enquiry was completed in which the students were asked to give their opinion about the test. To conclude, an official language test was completed. The results of the PubMed search, i.e. a list of articles the students deemed relevant for a particular question, were compared to a gold standard. Precision, recall and F-score were calculated in order to evaluate the efficiency of the PubMed search. We used information from the search process, such as search term formulation and MeSH term selection to evaluate the search process and examined their relationship with the results of the language test and the level of education. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,150 |
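The retrieval-effectiveness scores used in vanopstal-etal-2010-assessing (precision, recall and F-score of a student's article list against a gold standard) reduce to simple set arithmetic, sketched below; the PubMed IDs are invented.

```python
# Minimal precision/recall/F-score sketch over sets of retrieved article IDs.
def prf(selected, gold):
    tp = len(selected & gold)                       # correctly retrieved
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

student = {"12345", "23456", "34567"}               # invented PubMed IDs
gold = {"12345", "34567", "45678", "56789"}
print("P=%.2f R=%.2f F=%.2f" % prf(student, gold))  # P=0.67 R=0.50 F=0.57
```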
inproceedings | den-etal-2010-two | Two-level Annotation of Utterance-units in {J}apanese Dialogs: An Empirically Emerged Scheme | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1268/ | Den, Yasuharu and Koiso, Hanae and Maruyama, Takehiko and Maekawa, Kikuo and Takanashi, Katsuya and Enomoto, Mika and Yoshida, Nao | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we propose a scheme for annotating utterance-level units in Japanese dialogs, which emerged from an analysis of the interrelationship among four schemes, i) inter-pausal units, ii) intonation units, iii) clause units, and iv) pragmatic units. The associations among the labels of these four units were illustrated by multiple correspondence analysis and hierarchical cluster analysis. Based on these results, we prescribe utterance-unit identification rules, which identify two sorts of utterance-units with different granularities: short and long utterance-units. Short utterance-units are identified by acoustic and prosodic disjuncture, and they are considered to constitute units of speaker's planning and hearer's understanding. Long utterance-units, on the other hand, are recognized by syntactic and pragmatic disjuncture, and they are regarded as units of interaction. We explore some characteristics of these utterance-units, focusing particularly on unit duration and syntactic property, other participants' responses, and mismatch between the two levels. We also discuss how our two-level utterance-units are useful in analyzing cognitive and communicative aspects of spoken dialogs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,151
inproceedings | candito-etal-2010-statistical | Statistical {F}rench Dependency Parsing: Treebank Conversion and First Results | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1269/ | Candito, Marie and Crabb{\'e}, Beno{\^i}t and Denis, Pascal | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We first describe the automatic conversion of the French Treebank (Abeill{\'e} and Barrier, 2004), a constituency treebank, into typed projective dependency trees. In order to evaluate the overall quality of the resulting dependency treebank, and to quantify the cases where the projectivity constraint leads to wrong dependencies, we compare a subset of the converted treebank to manually validated dependency trees. We then compare the performance of two treebank-trained parsers that output typed dependency parses. The first parser is the MST parser (McDonald et al., 2006), which we directly train on dependency trees. The second parser is a combination of the Berkeley parser (Petrov et al., 2006) and a functional role labeler: trained on the original constituency treebank, the Berkeley parser first outputs constituency trees, which are then labeled with functional roles, and then converted into dependency trees. We found that used in combination with a high-accuracy French POS tagger, the MST parser performs a little better for unlabeled dependencies (UAS=90.3{\%} versus 89.6{\%}), and better for labeled dependencies (LAS=87.6{\%} versus 85.6{\%}). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,152
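The UAS and LAS figures quoted in candito-etal-2010-statistical can be computed as sketched below: UAS counts tokens whose predicted head is correct, while LAS additionally requires the correct dependency label. The toy trees and label names are invented.

```python
# Minimal UAS/LAS sketch. Each token is (head_index, dependency_label),
# with 0 denoting the artificial root.
def uas_las(gold, pred):
    assert len(gold) == len(pred)
    n = len(gold)
    head_ok = sum(g[0] == p[0] for g, p in zip(gold, pred))  # head correct
    both_ok = sum(g == p for g, p in zip(gold, pred))        # head + label correct
    return head_ok / n, both_ok / n

gold = [(2, "suj"), (0, "root"), (2, "obj"), (3, "mod")]  # invented trees
pred = [(2, "suj"), (0, "root"), (2, "aux"), (2, "mod")]
uas, las = uas_las(gold, pred)
print(f"UAS={uas:.2f} LAS={las:.2f}")  # UAS=0.75 LAS=0.50
```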
inproceedings | ma-etal-2010-formal | Formal Description of Resources for Ontology-based Semantic Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1270/ | Ma, Yue and Nazarenko, Adeline and Audibert, Laurent | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Ontology-based semantic annotation aims at putting fragments of a text in correspondence with proper elements of an ontology such that the formal semantics encoded by the ontology can be exploited to represent text interpretation. In this paper, we formalize a resource for this goal. The main difficulty in achieving good semantic annotations consists in identifying fragments to be annotated and labels to be associated with them. To this end, our approach takes advantage of standard web ontology languages as well as rich linguistic annotation platforms. This in turn raises the question of how to formalize the combination of ontological and linguistic information, a topical issue that has received increasing attention recently. Different from existing formalizations, our purpose is to extend ontologies with semantic annotation rules whose complexity increases along two dimensions: linguistic complexity and rule syntactic complexity. This solution allows reusing the best NLP tools for the production of various levels of linguistic annotations. It also has the merit of clearly distinguishing the process of linguistic analysis from the ontological interpretation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,153
inproceedings | rodriguez-fuentes-etal-2010-kalaka | {KALAKA}: A {TV} Broadcast Speech Database for the Evaluation of Language Recognition Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1271/ | Rodr{\'i}guez-Fuentes, Luis Javier and Penagarikano, Mikel and Bordel, Germ{\'a}n and Varona, Amparo and D{\'i}ez, Mireia | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | A speech database, named KALAKA, was created to support the Albayzin 2008 Evaluation of Language Recognition Systems, organized by the Spanish Network on Speech Technologies from May to November 2008. This evaluation, designed according to the criteria and methodology applied in the NIST Language Recognition Evaluations, involved four target languages: Basque, Catalan, Galician and Spanish (official languages in Spain), and included speech signals in other (unknown) languages to allow open-set verification trials. In this paper, the process of designing, collecting data and building the train, development and evaluation datasets of KALAKA is described. Results attained in the Albayzin 2008 LRE are presented as a means of evaluating the database. The performance of a state-of-the-art language recognition system on a closed-set evaluation task is also presented for reference. Future work includes extending KALAKA by adding Portuguese and English as target languages and renewing the set of unknown languages needed to carry out open-set evaluations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,154 |
inproceedings | panevova-sevcikova-2010-annotation | Annotation of Morphological Meanings of Verbs Revisited | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1272/ | Panevov{\'a}, Jarmila and {\v{S}}ev{\v{c}}{\'i}kov{\'a}, Magda | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Meanings of morphological categories are an indispensable component of representation of sentence semantics. In the Prague Dependency Treebank 2.0, sentence semantics is represented as a dependency tree consisting of labeled nodes and edges. Meanings of morphological categories are captured as attributes of tree nodes; these attributes are called grammatemes. The present paper focuses on morphological meanings of verbs, i.e. on the meanings of the morphological categories of tense, mood, aspect, etc. After several introductory remarks, seven verbal grammatemes used in the PDT 2.0 annotation scenario are briefly introduced. After that, each of the grammatemes is examined. Three verbal grammatemes of the original set were included in the new set without changes, one of the grammatemes was extended, and three of them were replaced by three new ones. The revised grammateme set is to be included in the forthcoming version of PDT (tentatively called PDT 3.0). Rules for automatic and manual assignment of the revised grammatemes are further discussed in the paper. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,155
inproceedings | huang-etal-2010-predicting | Predicting Morphological Types of {C}hinese Bi-Character Words by Machine Learning Approaches | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1274/ | Huang, Ting-Hao and Ku, Lun-Wei and Chen, Hsin-Hsi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents an overview of Chinese bi-character words' morphological types, and proposes a set of features for machine learning approaches to predict these types based on information about the composite characters. First, eight morphological types were defined, and 6,500 Chinese bi-character words were annotated with these types. After pre-processing, 6,178 words were selected to construct a corpus named Reduced Set. We analyzed the Reduced Set and conducted an inter-annotator agreement test. The average kappa value of 0.67 indicates substantial agreement. Second, bi-character words' morphological types are considered in this paper to be strongly related to the composite characters' parts of speech, so we proposed a set of features which can simply be extracted from dictionaries to indicate the characters' part-of-speech tendencies. Finally, we used these features and adopted three machine learning algorithms, SVM, CRF, and Na{\"i}ve Bayes, to predict the morphological types. On average, the best algorithm, CRF, achieved 75{\%} of the annotators' performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,157
inproceedings | novielli-strapparava-2010-studying | Studying the Lexicon of Dialogue Acts | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1275/ | Novielli, Nicole and Strapparava, Carlo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Dialogue Acts have been well studied in linguistics and have attracted computational linguistics research for a long time: they constitute the basis of everyday conversations and can be identified with the communicative goal of a given utterance (e.g. asking for information, stating facts, expressing opinions, agreeing or disagreeing). Even if it does not constitute any deep understanding of the dialogue, automatic dialogue act labeling is a task that can be relevant for a wide range of applications in both human-computer and human-human interaction. We present a qualitative analysis of the lexicon of Dialogue Acts: we explore the relationship between the communicative goal of an utterance and its affective content, as well as the salience of specific word classes for each speech act. The experiments described in this paper fit in the scope of a research study whose long-term goal is to build an unsupervised classifier that simply exploits the lexical semantics of utterances to automatically annotate dialogues with the proper speech acts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,158
inproceedings | vivaldi-etal-2010-automatic | Automatic Summarization Using Terminological and Semantic Resources | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1276/ | Vivaldi, Jorge and da Cunha, Iria and Torres-Moreno, Juan-Manuel and Vel{\'a}zquez-Morales, Patricia | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents a new algorithm for automatic summarization of specialized texts combining terminological and semantic resources: a term extractor and an ontology. The term extractor provides the list of the terms that are present in the text together with their corresponding termhood. The ontology is used to calculate the semantic similarity among the terms found in the main body and those present in the document title. The general idea is to obtain a relevance score for each sentence taking into account both the termhood of the terms found in that sentence and the similarity between those terms and the terms present in the title of the document. The sentences with the highest scores are chosen to form the final summary. We evaluate the algorithm with Rouge, comparing the resulting summaries with the summaries of other summarizers. The sentence selection algorithm was also tested as part of a standalone summarizer. In both cases it obtains quite good results, although the perception is that there is room for improvement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,159
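The sentence-scoring idea in vivaldi-etal-2010-automatic (termhood weighted by similarity to the title terms) can be sketched as below. The termhood table and the flat similarity function are invented stand-ins for the paper's term extractor and ontology, so this is an assumption-laden illustration rather than the authors' algorithm.

```python
# Minimal sketch: rank sentences by termhood * similarity-to-title-terms.
termhood = {"myocardial infarction": 0.9, "aspirin": 0.7, "patient": 0.2}
title_terms = {"myocardial infarction"}  # invented title terms

def similarity(term, title_terms):
    # Stand-in for an ontology-based measure: 1.0 for a title term,
    # a flat 0.5 for everything else.
    return 1.0 if term in title_terms else 0.5

def score(sentence):
    # Sum termhood weighted by similarity over terms found in the sentence.
    return sum(w * similarity(t, title_terms)
               for t, w in termhood.items() if t in sentence)

sentences = [
    "Aspirin reduces mortality after myocardial infarction.",
    "The patient was discharged the next day.",
]
ranked = sorted(sentences, key=lambda s: score(s.lower()), reverse=True)
print(ranked[0])  # the term-dense sentence wins
```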
inproceedings | hamon-2010-judge | Is my Judge a good One? | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1277/ | Hamon, Olivier | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper aims at measuring the reliability of judges in MT evaluation. The study covers two evaluation campaigns from the CESTA project, during which human evaluations were carried out on fluency and adequacy criteria for English-to-French documents. Our objectives were threefold: observe inter-judge agreement, observe intra-judge agreement, and study the influence of the evaluation design especially implemented for the needs of the campaigns. Indeed, a web interface was especially developed to help with the human judgments and store the results, but some design changes were made between the first and the second campaign. Considering the low agreement observed, the judges' behaviour was analysed in that specific context. We also asked several judges to repeat their own evaluations a few times after the first judgments made during the official evaluation campaigns. Even if judges did not seem to agree fully at first sight, a less strict comparison led to a strong agreement. Furthermore, the evolution of the design during the project seems to have been a source of the difficulties judges encountered in keeping a consistent interpretation of quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,160
inproceedings | brendel-etal-2010-building | Building a System for Emotions Detection from Speech to Control an Affective Avatar | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1278/ | Brendel, M{\'a}ty{\'a}s and Zaccarelli, Riccardo and Devillers, Laurence | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we describe a corpus assembled from two sub-corpora. The CINEMO corpus contains acted emotional expression obtained by playing dubbing exercises. This new protocol is a way to collect mood-induced data in large amounts, showing several complex and shaded emotions. JEMO is a corpus collected with an emotion-detection game and contains more prototypical emotions than CINEMO. We show how the two sub-corpora balance and enrich each other and result in better performance. We built male and female emotion models and used Sequential Fast Forward Feature Selection to improve detection performance. After feature selection we obtain good results even with our strict speaker-independent testing method. The global corpus contains 88 speakers (38 females, 50 males). This study has been done within the scope of the ANR (National Research Agency) Affective Avatar project, which deals with building a system of emotion detection for monitoring an Artificial Agent by voice. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,161
inproceedings | segers-vossen-2010-facilitating | Facilitating Non-expert Users of the {KYOTO} Platform: the {TMEKO} Editing Protocol for Synset to Ontology Mappings | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1279/ | Segers, Roxane and Vossen, Piek | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the general architecture of the TMEKO protocol (Tutoring Methodology for Enriching the Kyoto Ontology) that guides non-expert users through the process of creating mappings from domain wordnet synsets to a shared ontology by answering natural language questions. TMEKO will be part of a Wiki-like community platform currently developed in the Kyoto project (\url{http://www.kyoto-project.eu}). The platform provides the architecture for ontology based fact mining to enable knowledge sharing across languages and cultures. A central part of the platform is the Wikyoto editing environment in which users can create their own domain wordnet for seven different languages and define relations to the central and shared ontology based on DOLCE. A substantial part of the mappings will involve important processes and qualities associated with the concept. Therefore, the TMEKO protocol provides specific interviews for creating complex mappings that go beyond subclass and equivalence relations. The Kyoto platform and the TMEKO protocol are developed and applied to the environment domain for seven different languages (English, Dutch, Italian, Spanish, Basque, Japanese and Chinese), but can easily be extended and adapted to other languages and domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,162 |
inproceedings | buyko-etal-2010-genereg | The {G}ene{R}eg Corpus for Gene Expression Regulation Events {---} An Overview of the Corpus and its In-Domain and Out-of-Domain Interoperability | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1280/ | Buyko, Ekaterina and Beisswanger, Elena and Hahn, Udo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Despite the large variety of corpora in the biomedical domain their annotations differ in many respects, e.g., the coverage of different, highly specialized knowledge domains, varying degrees of granularity of targeted relations, the specificity of linguistic anchoring of relations and named entities in documents, etc. We here present GeneReg (Gene Regulation Corpus), the result of an annotation campaign led by the Jena University Language {\&} Information Engineering (JULIE) Lab. The GeneReg corpus consists of 314 abstracts dealing with the regulation of gene expression in the model organism E. coli. Our emphasis in this paper is on the compatibility of the GeneReg corpus with the alternative Genia event corpus and with several in-domain and out-of-domain lexical resources, e.g., the Specialist Lexicon, FrameNet, and WordNet. The links we established from the GeneReg corpus to these external resources will help improve the performance of the automatic relation extraction engine JREx trained and evaluated on GeneReg. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,163 |
inproceedings | neubig-mori-2010-word | Word-based Partial Annotation for Efficient Corpus Construction | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1281/ | Neubig, Graham and Mori, Shinsuke | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In order to utilize the corpus-based techniques that have proven effective in natural language processing in recent years, costly and time-consuming manual creation of linguistic resources is often necessary. Traditionally these resources are created on the document or sentence-level. In this paper, we examine the benefit of annotating only particular words with high information content, as opposed to the entire sentence or document. Using the task of Japanese pronunciation estimation as an example, we devise a machine learning method that can be trained on data annotated word-by-word. This is done by dividing the estimation process into two steps (word segmentation and word-based pronunciation estimation), and introducing a point-wise estimator that is able to make each decision independent of the other decisions made for a particular sentence. In an evaluation, the proposed strategy is shown to provide greater increases in accuracy using a smaller number of annotated words than traditional sentence-based annotation techniques. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,164 |
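The point-wise estimation described in neubig-mori-2010-word makes each decision independently from local context features, so a single annotated word (rather than a whole sentence) yields one training instance. The sketch below illustrates this with an invented English disambiguation toy task rather than the paper's Japanese pronunciation task, and is not the authors' implementation.

```python
# Minimal point-wise classification sketch: one independent decision per word,
# trained from partially annotated data (selected positions only).
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def features(words, i):
    # Local context only: the word itself plus its neighbors.
    return {
        "w": words[i],
        "prev": words[i - 1] if i > 0 else "<s>",
        "next": words[i + 1] if i < len(words) - 1 else "</s>",
    }

# Partial annotation: (sentence, position) -> label, nothing else labeled.
train = [
    (["the", "lead", "singer"], 1, "NOUN"),
    (["they", "lead", "the", "way"], 1, "VERB"),
]
X = [features(words, i) for words, i, _ in train]
y = [label for _, _, label in train]

clf = make_pipeline(DictVectorizer(), LogisticRegression())
clf.fit(X, y)
# Illustrative only; two training examples give no reliable prediction.
print(clf.predict([features(["results", "lead", "nowhere"], 1)]))
```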
inproceedings | faass-etal-2010-design | Design and Application of a Gold Standard for Morphological Analysis: {SMOR} as an Example of Morphological Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1282/ | Faa{\ss}, Gertrud and Heid, Ulrich and Schmid, Helmut | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes general requirements for evaluating and documenting NLP tools with a focus on morphological analysers and the design of a Gold Standard. It is argued that any evaluation must be measurable and documentation thereof must be made accessible to any user of the tool. The documentation must be of a kind that enables the user to compare different tools offering the same service; hence the descriptions must contain measurable values. A Gold Standard is a vital part of any measurable evaluation process; therefore, the corpus-based design of a Gold Standard, its creation, and problems that occur are reported upon here. Our project concentrates on SMOR, a morphological analyser for German that is to be offered as a web service. We not only utilize this analyser for designing the Gold Standard, but also evaluate the tool itself at the same time. Note that the project is ongoing; therefore, we cannot present final results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,165
inproceedings | schwarz-etal-2010-identification | Identification of Rare {\&} Novel Senses Using Translations in a Parallel Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1283/ | Schwarz, Richard and Sch{\"u}tze, Hinrich and Martin, Fabienne and Stein, Achim | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The identification of rare and novel senses is a challenge in lexicography. In this paper, we present a new method for finding such senses using a word aligned multilingual parallel corpus. We use the Europarl corpus and therein concentrate on French verbs. We represent each occurrence of a French verb as a high dimensional term vector. The dimensions of such a vector are the possible translations of the verb according to the underlying word alignment. The dimensions are weighted by a weighting scheme to adjust to the significance of any particular translation. After collecting these vectors we apply forms of the K-means algorithm on the resulting vector space to produce clusters of distinct senses, so that standard uses produce large homogeneous clusters while rare and novel uses appear in small or heterogeneous clusters. We show in a qualitative and quantitative evaluation that the method can successfully find rare and novel senses. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,166
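The clustering step in schwarz-etal-2010-identification represents each occurrence of a verb as a vector over its aligned translations and applies K-means, so that small clusters become candidates for rare or novel senses. The translation matrix below is invented and the weighting scheme is omitted; this is a sketch of the idea, not the authors' pipeline.

```python
# Minimal sketch: cluster occurrence-level translation vectors with K-means
# and flag singleton clusters as candidate rare senses.
import numpy as np
from sklearn.cluster import KMeans

# Rows: occurrences of one French verb; columns: possible translations.
translations = ["carry", "wear", "take", "lead"]
X = np.array([
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],   # isolated use -> candidate rare/novel sense
])
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
for cluster in range(3):
    members = np.where(km.labels_ == cluster)[0]
    tag = "candidate rare sense" if len(members) == 1 else ""
    print(cluster, members.tolist(), tag)
```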
inproceedings | freitas-etal-2010-second | Second {HAREM}: Advancing the State of the Art of Named Entity Recognition in {P}ortuguese | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1284/ | Freitas, Cl{\'a}udia and Mota, Cristina and Santos, Diana and Oliveira, Hugo Gon{\c{c}}alo and Carvalho, Paula | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present Second HAREM, the second edition of an evaluation campaign for Portuguese, addressing named entity recognition (NER). This second edition also included two new tracks: the recognition and normalization of temporal entities (proposed by a group of participants, and hence not covered on this paper) and ReRelEM, the detection of semantic relations between named entities. We summarize the setup of Second HAREM by showing the preserved distinctive features and discussing the changes compared to the first edition. Furthermore, we present the main results achieved and describe the available resources and tools developed under this evaluation, namely, (i) the golden collections, i.e. a set of documents whose named entities and semantic relations between those entities were manually annotated, (ii) the Second HAREM collection (which contains the unannotated version of the golden collection), as well as the participating systems results on it, (iii) the scoring tools, and (iv) SAHARA, a Web application that allows interactive evaluation. We end the paper by offering some remarks about what was learned. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,167
inproceedings | kupietz-etal-2010-german | The {G}erman Reference Corpus {D}e{R}e{K}o: A Primordial Sample for Linguistic Research | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1285/ | Kupietz, Marc and Belica, Cyril and Keibel, Holger and Witt, Andreas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes DeReKo (Deutsches Referenzkorpus), the Archive of General Reference Corpora of Contemporary Written German at the Institut f{\"u}r Deutsche Sprache (IDS) in Mannheim, and the rationale behind its development. We discuss its design, its legal background, how to access it, available metadata, linguistic annotation layers, underlying standards, ongoing developments, and aspects of using the archive for empirical linguistic research. The focus of the paper is on the advantages of DeReKo's design as a primordial sample from which virtual corpora can be drawn for the specific purposes of individual studies. Both concepts, primordial sample and virtual corpus, are explained and illustrated in detail. Furthermore, we describe in more detail how DeReKo deals with the fact that all its texts are subject to third parties' intellectual property rights, and how it deals with the issue of replicability, which is particularly challenging given DeReKo's dynamic growth and the possibility to construct from it an open number of virtual corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,168
inproceedings | sassolini-cinini-2010-cultural | Cultural Heritage: Knowledge Extraction from Web Documents | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1286/ | Sassolini, Eva and Cinini, Alessandra | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This article presents the use of NLP techniques (text mining, text analysis) to develop specific tools that allow the creation of linguistic resources for the cultural heritage domain. The aim of our approach is to create tools for the building of an online knowledge network, automatically extracted from text materials concerning this domain. A particular methodology was tested by dividing the automatic acquisition of texts, and consequently the creation of the reference corpus, into two phases. In the first phase, online documents were extracted from lists of links provided by human experts. All documents extracted from the web by means of an automatic spider were stored in a repository of text materials. On the basis of these documents, automatic parsers create the reference corpus for the cultural heritage domain. Relevant information and semantic concepts are then extracted from this corpus. In a second phase, all these semantically relevant elements (such as proper names, names of institutions, names of places, and other relevant terms) were used as the basis for a new search strategy for text materials from heterogeneous sources. In this case, specialized crawlers (TP-crawler) were also used to work on a bulk of text materials available online. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,169
inproceedings | villegas-etal-2010-case | A Case Study on Interoperability for Language Resources and Applications | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1287/ | Villegas, Marta and Bel, N{\'u}ria and Bel, Santiago and Rodr{\'i}guez, V{\'i}ctor | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper reports our experience when integrating different resources and services into a grid environment. The use case we address involves the deployment of several NLP applications as web services. The ultimate objective of this task was to create a scenario where researchers have access to a variety of services they can operate. These services should be easy to invoke and able to interoperate with one another. We essentially describe the interoperability problems we faced, which involve metadata interoperability, data interoperability and service interoperability. We devote special attention to service interoperability and explore the possibility of defining common interfaces and semantic descriptions of services. While the web services paradigm suits the integration of different services very well, it requires mutual understanding and the accommodation to common interfaces that not only provide a technical solution but also ease the user's work. Defining common interfaces benefits interoperability but requires agreement on the operations and the set of inputs/outputs. Semantic annotation allows defining some sort of taxonomy that organizes and collects the set of admissible operations and the types of input/output parameters. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,170
inproceedings | cartoni-zweigenbaum-2010-semi | Semi-Automated Extension of a Specialized Medical Lexicon for {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1288/ | Cartoni, Bruno and Zweigenbaum, Pierre | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the development of a specialized lexical resource for a specialized domain, namely medicine. First, in order to assess the linguistic phenomena that need to be addressed, we based our observation on a large collection of more than 300,000 terms, organised around conceptual identifiers. Based on these observations, we highlight the specificities that such a lexicon should take into account, namely in terms of inflectional and derivational knowledge. In a first experiment, we show that general resources lack a large part of the words needed to process specialized language. Second, we describe an experiment to semi-automatically extend a medical lexicon and populate it with inflectional information. This experiment is based on a semi-automatic method that tries to acquire inflectional knowledge from frequent endings of words recorded in the existing lexicon. Thanks to this, we increased the coverage of the target vocabulary from 14.1{\%} to 25.7{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,171
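The ending-based acquisition in cartoni-zweigenbaum-2010-semi can be sketched as follows: inflectional classes are indexed by frequent word endings taken from the existing lexicon, and a new word inherits the class of its longest known ending. The lexicon entries and class names below are invented; the paper's actual frequency thresholds and validation step are omitted.

```python
# Minimal sketch of ending-based inflection-class guessing.
from collections import Counter, defaultdict

# Invented French-medical-style entries: word -> inflection class.
lexicon = {"gastrite": "noun-fem", "hepatite": "noun-fem",
           "carcinome": "noun-masc", "fibrome": "noun-masc"}

# Index inflection classes by word endings of length 2..4.
endings = defaultdict(Counter)
for word, infl in lexicon.items():
    for k in range(2, 5):
        endings[word[-k:]][infl] += 1

def guess_inflection(word):
    # Back off from the longest known ending to shorter ones.
    for k in range(4, 1, -1):
        counts = endings.get(word[-k:])
        if counts:
            return counts.most_common(1)[0][0]
    return None

print(guess_inflection("meningite"))  # shares "-ite" -> noun-fem
```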
inproceedings | duarte-gibet-2010-heterogeneous | Heterogeneous Data Sources for Signed Language Analysis and Synthesis: The {S}ign{C}om Project | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1289/ | Duarte, Kyle and Gibet, Sylvie | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes how heterogeneous data sources captured in the SignCom project may be used for the analysis and synthesis of French Sign Language (LSF) utterances. The captured data combine video data and multimodal motion capture (mocap) data, including body and hand movements as well as facial expressions. These data are pre-processed, synchronized, and enriched by text annotations of signed language elicitation sessions. The addition of mocap data to traditional data structures provides additional phonetic data to linguists who desire to better understand the various parts of signs (handshape, movement, orientation, etc.) to very exacting levels, as well as their interactions and relative timings. We show how the phonologies of hand configurations and articulator movements may be studied using signal processing and statistical analysis tools to highlight regularities or temporal schemata between the different modalities. Finally, mocap data allows us to replay signs using a computer animation engine, specifically editing and rearranging movements and configurations in order to create novel utterances. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,172 |
inproceedings | paroubek-etal-2010-annotations | Annotations for Opinion Mining Evaluation in the Industrial Context of the {DOXA} project | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1290/ | Paroubek, Patrick and Pak, Alexander and Mostefa, Djamel | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | After presenting the state of the art in opinion and sentiment analysis and the DOXA project, we review the few evaluation campaigns that have dealt with opinion mining in the past. Then we present the two-level opinion and sentiment model that we will use for evaluation in the DOXA project and the annotation interface we use for hand-annotating a reference corpus. We then present the corpus which will be used in DOXA and report on the hand-annotation task on a corpus of comments on video games and the solution adopted to obtain a sufficient level of inter-annotator agreement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,173
inproceedings | kouylekov-etal-2010-mining | Mining {W}ikipedia for Large-scale Repositories of Context-Sensitive Entailment Rules | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1291/ | Kouylekov, Milen and Mehdad, Yashar and Negri, Matteo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper focuses on the central role played by lexical information in the task of Recognizing Textual Entailment. In particular, the usefulness of lexical knowledge extracted from several widely used static resources, represented in the form of entailment rules, is compared with a method to extract lexical information from Wikipedia as a dynamic knowledge resource. The proposed acquisition method aims at maximizing two key features of the resulting entailment rules: coverage (i.e. the proportion of rules successfully applied over a dataset of TE pairs), and context sensitivity (i.e. the proportion of rules applied in appropriate contexts). Evaluation results show that Wikipedia can be effectively used as a source of lexical entailment rules, featuring both higher coverage and context sensitivity with respect to other resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,174 |
inproceedings | stymne-ahrenberg-2010-using | Using a Grammar Checker for Evaluation and Postprocessing of Statistical Machine Translation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1292/ | Stymne, Sara and Ahrenberg, Lars | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | One problem in statistical machine translation (SMT) is that the output often is ungrammatical. To address this issue, we have investigated the use of a grammar checker for two purposes in connection with SMT: as an evaluation tool and as a postprocessing tool. To assess the feasibility of the grammar checker on SMT output, we performed an error analysis, which showed that the precision of error identification in general was higher on SMT output than in previous studies on human texts. Using the grammar checker as an evaluation tool gives a complementary picture to standard metrics such as Bleu, which do not account well for grammaticality. We use the grammar checker as a postprocessing tool by automatically applying the error correction suggestions it gives. There are only small overall improvements of the postprocessing on automatic metrics, but the sentences that are affected by the changes are improved, as shown both by automatic metrics and by a human error analysis. These results indicate that grammar checker techniques are a useful complement to SMT. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,175 |
inproceedings | weller-heid-2010-extraction | Extraction of {G}erman Multiword Expressions from Parsed Corpora Using Context Features | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1293/ | Weller, Marion and Heid, Ulrich | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We report on tools for the extraction of German multiword expressions (MWEs) from text corpora; we extract word pairs, but also longer MWEs of different patterns, e.g. verb-noun structures with an additional prepositional phrase or adjective. Next to standard association-based extraction, we focus on morpho-syntactic, syntactic and lexical-choice features of the MWE candidates. A broad range of such properties (e.g. number and definiteness of nouns, adjacency of the MWE components and their position in the sentence, preferred lexical modifiers, etc.), along with relevant example sentences, are extracted from dependency-parsed text and stored in a database. A sample precision evaluation and an analysis of extraction errors are provided along with the discussion of our extraction architecture. We furthermore measure the contribution of the features to the precision of the extraction: by using both morpho-syntactic and syntactic features, we achieve a higher precision in the identification of idiomatic MWEs than by using only properties of one type. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,176
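The association-based step mentioned in weller-heid-2010-extraction is typically computed with a measure such as pointwise mutual information over verb-noun co-occurrence counts, sketched below. The counts are invented, and the paper's morpho-syntactic context features are omitted from this illustration.

```python
# Minimal PMI sketch over verb-noun pair counts.
import math

# Invented counts; "Entscheidung treffen" (to make a decision) is a
# well-known German verb-noun MWE.
pair_count = {("treffen", "Entscheidung"): 40, ("sehen", "Entscheidung"): 3}
word_count = {"treffen": 500, "sehen": 2000, "Entscheidung": 120}
total_pairs = 100_000

def pmi(verb, noun):
    # PMI = log2( P(verb, noun) / (P(verb) * P(noun)) )
    p_pair = pair_count[(verb, noun)] / total_pairs
    p_v = word_count[verb] / total_pairs
    p_n = word_count[noun] / total_pairs
    return math.log2(p_pair / (p_v * p_n))

for v, n in pair_count:
    print(v, n, round(pmi(v, n), 2))  # the MWE pair scores clearly higher
```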
inproceedings | rodriguez-etal-2010-anaphoric | Anaphoric Annotation of {W}ikipedia and Blogs in the Live Memories Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1295/ | Rodr{\'i}guez, Kepa Joseba and Delogu, Francesca and Versley, Yannick and Stemle, Egon W. and Poesio, Massimo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Live Memories corpus is an Italian corpus annotated for anaphoric relations. This annotation effort aims to contribute to two significant issues for CL research: the lack of annotated anaphoric resources for Italian and the increasing interest in the social Web. The Live Memories Corpus contains texts from the Italian Wikipedia about the region Trentino/S{\"u}d Tirol and from blog sites with users' comments. It is planned to add a set of articles from local newspapers. The corpus includes manually annotated information about morphosyntactic agreement, anaphoricity, and semantic class of the NPs. The anaphoric annotation includes discourse deixis and bridging relations, and marks cases of ambiguity with the annotation of alternative interpretations. For the annotation of the anaphoric links the corpus takes into account specific phenomena of the Italian language like incorporated clitics and phonetically non-realized pronouns. Reliability studies for the annotation of the mentioned phenomena and for the annotation of anaphoric links in general offer satisfactory results. The Wikipedia and blogs dataset will be distributed under a Creative Commons Attribution licence. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,178
inproceedings | flickinger-etal-2010-wikiwoods | {W}iki{W}oods: Syntacto-Semantic Annotation for {E}nglish {W}ikipedia | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1296/ | Flickinger, Dan and Oepen, Stephan and Ytrest{\o}l, Gisle | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | WikiWoods is an ongoing initiative to provide rich syntacto-semantic annotations for English Wikipedia. We sketch an automated processing pipeline to extract relevant textual content from Wikipedia sources, segment documents into sentence-like units, parse and disambiguate using a broad-coverage precision grammar, and support the export of syntactic and semantic information in various formats. The full parsed corpus is accompanied by a subset of Wikipedia articles for which gold-standard annotations in the same format were produced manually. This subset was selected to represent a coherent domain, Wikipedia entries on the broad topic of Natural Language Processing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,179 |
inproceedings | ruppenhofer-etal-2010-speaker | Speaker Attribution in Cabinet Protocols | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1297/ | Ruppenhofer, Josef and Sporleder, Caroline and Shirokov, Fabian | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Historical cabinet protocols are a useful resource which enable historians to identify the opinions expressed by politicians on different subjects and at different points of time. While cabinet protocols are often available in digitized form, so far the only method to access their information content is by keyword-based search, which often returns sub-optimal results. We present a method for enriching German cabinet protocols with information about the originators of statements. This requires automatic speaker attribution. Unlike many other approaches, our method can also deal with cases in which the speaker is not explicitly identified in the sentence itself. Such cases are very common in our domain. To avoid costly manual annotation of training data, we design a rule-based system which exploits morpho-syntactic cues. We show that such a system obtains good results, especially with respect to recall which is particularly important for information access. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,180 |
inproceedings | webb-etal-2010-wizard | {W}izard of {O}z Experiments for a Companion Dialogue System: Eliciting Companionable Conversation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1298/ | Webb, Nick and Benyon, David and Bradley, Jay and Hansen, Preben and Mival, Oli | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Within the EU-funded COMPANIONS project, we are working to evaluate new collaborative conversational models of dialogue. Such an evaluation requires us to benchmark approaches to companionable dialogue. In order to determine the impact of system strategies on our evaluation paradigm, we need to generate a range of companionable conversations, using dialogue strategies such as `empathy' and `positivity'. By companionable dialogue, we mean interactions that take user input of some scenario, and respond in a manner appropriate to the emotional content of the user utterance. In this paper, we describe our working Wizard of Oz (WoZ) system for systematically creating dialogues that fulfil these potential strategies, which enables us to deploy a range of techniques for selecting which parts of user input to address in which order, and to inform the wizard response to the user based on a manual, on-the-fly assessment of the polarity of the user input. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,181
inproceedings | gallo-etal-2010-database | A Database for the Exploration of {S}panish Planning | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1299/ | Gallo, Carlos G{\'o}mez and Jaeger, T. Florian and Furth, Katrina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a new task-based corpus in the Spanish language. The corpus consists of videos, transcripts, and annotations of the interaction between a naive speaker and a confederate listener. The speaker instructs the listener to MOVE, ROTATE, or PAINT objects on a computer screen. This resource can be used to study how participants produce instructions in a collaborative goal-oriented scenario, in Spanish. The data set is ideally suited for investigating incremental processes of the production and interpretation of language. We demonstrate here how to use this corpus to explore language-specific differences in utterance planning, for English and Spanish speakers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,182
inproceedings | temnikova-2010-cognitive | Cognitive Evaluation Approach for a Controlled Language Post-Editing Experiment | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1300/ | Temnikova, Irina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In emergency situations it is crucial that instructions are straightforward to understand. For this reason a controlled language for crisis management (CLCM), based on psycholinguistic studies of human comprehension under stress, was developed. In order to test the impact of CLCM on the machine translatability of this particular kind of sub-language text, a previous experiment involving machine translation and human post-editing was conducted. Employing two automatic evaluation metrics, a previous evaluation of the experiment proved that instructions written according to this CL can improve machine translation (MT) performance. This paper presents a new cognitive evaluation approach for MT post-editing, which is tested on the previous controlled and uncontrolled textual data. The presented evaluation approach allows a deeper look into the post-editing process and specifically how much effort post-editors put into correcting the different kinds of MT errors. The method is based on an existing MT error classification, which is enriched with a new error ranking motivated by the cognitive effort involved in the detection and correction of these MT errors. The preliminary results of applying this approach to a subset of the original data confirmed once again the positive impact of CLCM on emergency instructions' machine translatability and thus the validity of the approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,183
inproceedings | allwood-etal-2010-work | Work on Spoken (Multimodal) Language Corpora in {S}outh {A}frica | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1301/ | Allwood, Jens and Hammarstr{\"o}m, Harald and Hendrikse, Andries and Ngcobo, Mtholeni N. and Nomdebevana, Nozibele and Pretorius, Laurette and van der Merwe, Mac | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes past, ongoing and planned work on the collection and transcription of spoken language samples for all the South African official languages and as part of this the training of researchers in corpus linguistic research skills. More specifically the work has involved (and still involves) establishing an international corpus linguistic network linked to a network hub at a UNISA website and the development of research tools, a corpus research guide and workbook for multimodal communication and spoken language corpus research. As an example of the work we are doing and hope to do more of in the future, we present a small pilot study of the influence of English and Afrikaans on the 100 most frequent words in spoken Xhosa as this is evidenced in the corpus of spoken interaction we have gathered so far. Other planned work, besides work on spoken language phenomena, involves comparison of spoken and written language and work on communicative body movements (gestures) and their relation to speech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,184
inproceedings | reiter-etal-2010-using | Using {NLP} Methods for the Analysis of Rituals | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1302/ | Reiter, Nils and Hellwig, Oliver and Mishra, Anand and Frank, Anette and Burkhardt, Jens | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper gives an overview of an interdisciplinary research project that is concerned with the application of computational linguistics methods to the analysis of the structure and variance of rituals, as investigated in ritual science. We present motivation and prospects of a computational approach to ritual research, and explain the choice of specific analysis techniques. We discuss design decisions for data collection and processing and present the general NLP architecture. For the analysis of ritual descriptions, we apply the frame semantics paradigm with newly invented frames where appropriate. Using scientific ritual research literature, we experimented with several techniques of automatic extraction of domain terms for the domain of rituals. As ritual research is a highly interdisciplinary endeavour, a vocabulary common to all sub-areas of ritual research is hard to specify and highly controversial. The domain terms extracted from ritual research literature are used as a basis for a common vocabulary and thus help the creation of ritual-specific frames. We applied the tf*idf, chi-square and PageRank algorithms to our ritual research literature corpus and two non-domain corpora: the British National Corpus and the British Academic Written English corpus. All corpora have been part-of-speech tagged and lemmatized. The domain terms have been evaluated by two ritual experts independently. Interestingly, the results of the algorithms were different for different parts of speech. This finding is in line with the fact that the inter-annotator agreement also differs between parts of speech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,185
inproceedings | rytting-etal-2010-error | Error Correction for {A}rabic Dictionary Lookup | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1303/ | Rytting, C. Anton and Rodrigues, Paul and Buckwalter, Tim and Zajic, David and Hirsch, Bridget and Carnes, Jeff and Lynn, Nathanael and Wayland, Sarah and Taylor, Chris and White, Jason and Blake III, Charles and Browne, Evelyn and Miller, Corey and Purvis, Tristan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a new Arabic spelling correction system which is intended for use with electronic dictionary search by learners of Arabic. Unlike other spelling correction systems, this system does not depend on a corpus of attested student errors but on student- and teacher-generated ratings of confusable pairs of phonemes or letters. Separate error modules for keyboard mistypings, phonetic confusions, and dialectal confusions are combined to create a weighted finite-state transducer that calculates the likelihood that an input string could correspond to each citation form in a dictionary of Iraqi Arabic. Results are ranked by the estimated likelihood that a citation form could be misheard, mistyped, or mistranscribed for the input given by the user. To evaluate the system, we developed a noisy-channel model trained on students' speech errors and used it to perturb citation forms from a dictionary. We compare our system to a baseline based on Levenshtein distance and find that, when evaluated on single-error queries, our system performs 28{\%} better than the baseline (overall MRR) and is twice as good at returning the correct dictionary form as the top-ranked result. We believe this to be the first spelling correction system designed for a spoken, colloquial dialect of Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,186
inproceedings | walker-copperman-2010-evaluating | Evaluating Complex Semantic Artifacts | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1304/ | Walker, Christopher R and Copperman, Hannah | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Evaluating complex Natural Language Processing (NLP) systems can prove extremely difficult. In many cases, the best one can do is to evaluate these systems indirectly, by looking at the impact they have on the performance of the downstream use case. For complex end-to-end systems, these metrics are not always enlightening, especially from the perspective of NLP failure analysis, as component interaction can obscure issues specific to the NLP technology. We present an evaluation program for complex NLP systems designed to produce meaningful aggregate accuracy metrics with sufficient granularity to support active development by NLP specialists. Our goals were threefold: to produce reliable metrics, to produce useful metrics and to produce actionable data. Our use case is a graph-based Wikipedia search index. Since the evaluation of a complex graph structure is beyond the conceptual grasp of a single human judge, the problem needs to be broken down. Slices of complex data reflective of coherent Decision Points provide a good framework for evaluation using human judges (Medero et al., 2006). For NL semantics, there really is no substitute. Leveraging Decision Points allows complex semantic artifacts to be tracked with judge-driven evaluations that are accurate, timely and actionable. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,187 |
inproceedings | altantawy-etal-2010-morphological | Morphological Analysis and Generation of {A}rabic Nouns: A Morphemic Functional Approach | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1305/ | Altantawy, Mohamed and Habash, Nizar and Rambow, Owen and Saleh, Ibrahim | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | MAGEAD is a morphological analyzer and generator for Modern Standard Arabic (MSA) and its dialects. We introduced MAGEAD in previous work with an implementation of MSA and Levantine Arabic verbs. In this paper, we port that system to MSA nominals (nouns and adjectives), which are far more complex to model than verbs. Our system is a functional morphological analyzer and generator, i.e., it analyzes to and generates from a representation consisting of a lexeme and linguistic feature-value pairs, where the features are syntactically (and perhaps semantically) meaningful, rather than just morphologically. A detailed evaluation of the current implementation comparing it to a commonly used morphological analyzer shows that it has good morphological coverage with precision and recall scores in the 90s. An error analysis reveals that the majority of recall and precision errors are problems in the gold standard or a result of the discrepancy between different models of form-based/functional morphology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,188 |
inproceedings | kaji-etal-2010-using | Using Comparable Corpora to Adapt a Translation Model to Domains | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1306/ | Kaji, Hiroyuki and Tsunakawa, Takashi and Okada, Daisuke | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Statistical machine translation (SMT) requires a large parallel corpus, which is available only for restricted language pairs and domains. To expand the language pairs and domains to which SMT is applicable, we created a method for estimating translation pseudo-probabilities from bilingual comparable corpora. The essence of our method is to calculate pairwise correlations between the words associated with a source-language word, presently restricted to a noun, and its translations; word translation pseudo-probabilities are calculated based on the assumption that the more associated words a translation is correlated with, the higher its translation probability. We also describe a method we created for calculating noun-sequence translation pseudo-probabilities based on occurrence frequencies of noun sequences and constituent-word translation pseudo-probabilities. Then, we present a framework for merging the translation pseudo-probabilities estimated from in-domain comparable corpora with a translation model learned from an out-of-domain parallel corpus. Experiments using Japanese and English comparable corpora of scientific paper abstracts and a Japanese-English parallel corpus of patent abstracts showed promising results; the BLEU score was improved to some degree by incorporating the pseudo-probabilities estimated from the in-domain comparable corpora. Future work includes an optimization of the parameters and an extension to estimate translation pseudo-probabilities for verbs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,189 |
inproceedings | copperman-walker-2010-freds | Fred`s Reusable Evaluation Device: Providing Support for Quick and Reliable Linguistic Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1307/ | Copperman, Hannah and Walker, Christopher R. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an interface that was developed for processing large amounts of human judgments of linguistically annotated data. Fred's Reusable Evaluation Device (Fred) provides administrators with a tool to submit linguistic evaluation tasks to judges. Each evaluation task is then presented to exactly two judges, who can submit their judgments at their own leisure. Fred then provides several metrics to administrators. The most important metric is precision, which is provided for each evaluation task and each annotator. Administrators can look at precision for a given data set over time, as well as by evaluation type, data set, or annotator. Inter-annotator agreement is also reported, and that can be tracked over time as well. The interface was developed to provide a tool for evaluating semantically marked-up text. The types of evaluations Fred has been used for so far include correctness of subject-relation identification and correctness of temporal relations. However, Fred's full versatility has not yet been fully exploited. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,190
inproceedings | baird-walker-2010-creation | The Creation of a Large-Scale {LFG}-Based Gold Parsebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1308/ | Baird, Alexis and Walker, Christopher R. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Systems for syntactically parsing sentences have long been recognized as a priority in Natural Language Processing. Statistics-based systems require large amounts of high quality syntactically parsed data. Using the XLE toolkit developed at PARC and the LFG Parsebanker interface developed at Bergen, the Parsebank Project at Powerset has generated a rapidly increasing volume of syntactically parsed data. By using these tools, we are able to leverage the LFG framework to provide richer analyses via both constituent (c-) and functional (f-) structures. Additionally, the Parsebanking Project uses source data from Wikipedia rather than source data limited to a specific genre, such as the Wall Street Journal. This paper outlines the process we used in creating a large-scale LFG-Based Parsebank to address many of the shortcomings of previously-created parse banks such as the Penn Treebank. While the Parsebank corpus is still in progress, preliminary results using the data in a variety of contexts already show promise. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,191 |
inproceedings | baker-etal-2010-modality | A Modality Lexicon and its use in Automatic Tagging | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1309/ | Baker, Kathryn and Bloodgood, Michael and Dorr, Bonnie and Filardo, Nathaniel W. and Levin, Lori and Piatko, Christine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes our resource-building results for an eight-week JHU Human Language Technology Center of Excellence Summer Camp for Applied Language Exploration (SCALE-2009) on Semantically-Informed Machine Translation. Specifically, we describe the construction of a modality annotation scheme, a modality lexicon, and two automated modality taggers that were built using the lexicon and annotation scheme. Our annotation scheme is based on identifying three components of modality: a trigger, a target and a holder. We describe how our modality lexicon was produced semi-automatically, expanding from an initial hand-selected list of modality trigger words and phrases. The resulting expanded modality lexicon is being made publicly available. We demonstrate that one tagger{\textemdash}a structure-based tagger{\textemdash}results in precision around 86{\%} (depending on genre) for tagging of a standard LDC data set. In a machine translation application, using the structure-based tagger to annotate English modalities on an English-Urdu training corpus improved the translation quality score for Urdu by 0.3 Bleu points in the face of sparse training data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,192 |
inproceedings | tanenblatt-etal-2010-conceptmapper | The {C}oncept{M}apper Approach to Named Entity Recognition | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1310/ | Tanenblatt, Michael and Coden, Anni and Sominsky, Igor | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | ConceptMapper is an open source tool we created for classifying mentions in an unstructured text document based on concept terminologies (dictionaries) and yielding named entities as output. It is implemented as a UIMA (Unstructured Information Management Architecture) annotator and is highly configurable: concepts can come from standardised or proprietary terminologies; arbitrary attributes can be associated with dictionary entries, and those attributes can then be associated with the named entities in the output; numerous search strategies and search options can be specified; any tokenizer packaged as a UIMA annotator can be used to tokenize the dictionary, so the same tokenization can be guaranteed for the input and dictionary, minimising tokenization mismatch errors; and the types and features of UIMA annotations used as input and generated as output can also be controlled. We describe ConceptMapper and its configuration parameters and their trade-offs, then describe the results of an experiment wherein some of these parameters are varied and precision and recall are subsequently measured in the task of identifying concepts in a collection of English-language clinical reports (colon cancer pathology). ConceptMapper is available from the Apache UIMA Sandbox, covered by the Apache Open Source license. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,193
inproceedings | hayashi-etal-2010-laf | {LAF}/{G}r{AF}-grounded Representation of Dependency Structures | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1311/ | Hayashi, Yoshihiko and Declerck, Thierry and Narawa, Chiharu | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper shows that a LAF/GrAF-based annotation schema can be used for the adequate representation of syntactic dependency structures possibly in many languages. We first argue that there are at least two types of textual units that can be annotated with dependency information: words/tokens and chunks/phrases. We especially focus on importance of the latter dependency unit: it is particularly useful for representing Japanese dependency structures, known as Kakari-Uke structure. Based on this consideration, we then discuss a sub-typing of GrAF to represent the corresponding dependency structures. We derive three node types, two edge types, and the associated constraints for properly representing both the token-based and the chunk-based dependency structures. We finally propose a wrapper program that, as a proof of concept, converts output data from different dependency parsers in proprietary XML formats to the GrAF-compliant XML representation. It partially proves the value of an international standard like LAF/GrAF in the Web service context: an existing dependency parser can be, in a sense, standardized, once wrapped by a data format conversion process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,194 |
inproceedings | carmen-etal-2010-tag | Tag Dictionaries Accelerate Manual Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1312/ | Carmen, Marc and Felt, Paul and Haertel, Robbie and Lonsdale, Deryle and McClanahan, Peter and Merkling, Owen and Ringger, Eric and Seppi, Kevin | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Expert human input can contribute in various ways to facilitate automatic annotation of natural language text. For example, a part-of-speech tagger can be trained on labeled input provided offline by experts. In addition, expert input can be solicited by way of active learning to make the most of annotator expertise. However, hiring individuals to perform manual annotation is costly both in terms of money and time. This paper reports on a user study that was performed to determine the degree of effect that a part-of-speech dictionary has on a group of subjects performing the annotation task. The user study was conducted using a modular, web-based interface created specifically for text annotation tasks. The user study found that for both native and non-native English speakers a dictionary with greater than 60{\%} coverage was effective at reducing annotation time and increasing annotator accuracy. On the basis of this study, we predict that using a part-of-speech tag dictionary with coverage greater than 60{\%} can reduce the cost of annotation in terms of both time and money. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,195 |
inproceedings | konstantopoulos-2010-learning | Learning Language Identification Models: A Comparative Analysis of the Distinctive Features of Names and Common Words | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1313/ | Konstantopoulos, Stasinos | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The intuition and basic hypothesis that this paper explores is that names are more characteristic of their language than common words are, and that a single name can have enough clues to confidently identify its language where random text of the same length wouldn't. To test this hypothesis, n-gram modelling is used to learn language models which identify the language of isolated names and equally short document fragments. As the empirical results corroborate the prior intuition, an explanation is sought for the higher accuracy at which the language of names can be identified. The results of the application of these models, as well as the models themselves, are quantitatively and qualitatively analysed and a hypothesis is formed about the explanation of this difference. The conclusions derived are both technologically useful in information extraction or text-to-speech tasks, and theoretically interesting as a tool for improving our understanding of the morphology and phonology of the languages involved in the experiments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,196
inproceedings | davis-moldovan-2010-feasibility | Feasibility of Automatically Bootstrapping a {P}ersian {W}ord{N}et | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1314/ | Davis, Chris Irwin and Moldovan, Dan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we describe a proof-of-concept for the bootstrapping of a Persian WordNet. This effort was motivated by previous work done at Stanford University on bootstrapping an Arabic WordNet using a parallel corpus and an English WordNet. The principle of that work is based on the premise that paradigmatic relations are by nature deeply semantic, and as such, are likely to remain intact between languages. We performed our task on a Persian-English bilingual corpus of George Orwell's Nineteen Eighty-Four. The corpus was neither aligned nor sense-tagged, so it was necessary that these were undertaken first. A combination of manual and semiautomated methods was used to tag and sentence-align the corpus. Actual mapping of English word senses onto Persian was done using automated techniques. Although Persian is written in Arabic script, it is an Indo-European language, while Arabic is a Central Semitic language. Despite their linguistic differences, we endeavor to test the applicability of the Stanford strategy to our task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,197
inproceedings | grover-etal-2010-south | The {S}outh {A}frican Human Language Technologies Audit | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1315/ | Grover, Aditi Sharma and van Huyssteen, Gerhard B. and Pretorius, Marthinus W. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Human language technologies (HLT) can play a vital role in bridging the digital divide and thus the HLT field has been recognised as a priority area by the South African government. We present our work on conducting a technology audit of the South African HLT landscape across the country's eleven official languages. The process and the instruments employed in conducting the audit are described and an overview of the various complementary approaches used in the results analysis is provided. We find that a number of HLT language resources (LRs) are available in SA, but they are of a very basic and exploratory nature. Lessons learnt in conducting a technology audit in a young and multilingual context are also discussed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,198
inproceedings | poesio-etal-2010-babyexp | {B}aby{E}xp: Constructing a Huge Multimodal Resource to Acquire Commonsense Knowledge Like Children Do | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1316/ | Poesio, Massimo and Baroni, Marco and Lanz, Oswald and Lenci, Alessandro and Potamianos, Alexandros and Sch{\"u}tze, Hinrich and Schulte im Walde, Sabine and Surian, Luca | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | There is by now widespread agreement that the most realistic way to construct the large-scale commonsense knowledge repositories required by natural language and artificial intelligence applications is by letting machines learn such knowledge from large quantities of data, like humans do. A lot of attention has consequently been paid to the development of increasingly sophisticated machine learning algorithms for knowledge extraction. However, the nature of the input that humans are exposed to while learning commonsense knowledge has received much less attention. The BabyExp project is collecting very dense audio and video recordings of the first 3 years of life of a baby. The corpus constructed in this way will be transcribed with automated techniques and made available to the research community. Moreover, techniques to extract commonsense conceptual knowledge incrementally from these multimodal data are also being explored within the project. The current paper describes BabyExp in general, and presents pilot studies on the feasibility of the automated audio and video transcriptions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,199
inproceedings | sainz-etal-2010-tts | {TTS} Evaluation Campaign with a Common {S}panish Database | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1317/ | Sainz, I{\~n}aki and Navas, Eva and Hern{\'a}ez, Inma and Bonafonte, Antonio and Campillo, Francisco | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the first TTS evaluation campaign designed for Spanish. Seven research institutions took part in the evaluation campaign and developed a voice from a common speech database provided by the organisation. Each participating team had a period of seven weeks to generate a voice. Next, a set of sentences were released and each team had to synthesise them within a week period. Finally, some of the synthesised test audio files were subjectively evaluated via an online test according to the following criteria: similarity to the original voice, naturalness and intelligibility. Box-plots, Wilcoxon tests and WER have been generated in order to analyse the results. Two main conclusions can be drawn: On the one hand, there is considerable margin for improvement to reach the quality level of the natural voice. On the other hand, two systems get significantly better results than the rest: one is based on statistical parametric synthesis and the other one is a concatenative system that makes use of a sinusoidal model to modify both prosody and smooth spectral joints. Therefore, it seems that some kind of spectral control is needed when building voices with a medium size database for unrestricted domains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,200 |
inproceedings | santos-mota-2010-experiments | Experiments in Human-computer Cooperation for the Semantic Annotation of {P}ortuguese Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1318/ | Santos, Diana and Mota, Cristina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present a system, called corte-e-costura, to aid human annotation of semantic information in the scope of the AC/DC project. This system leverages the human annotation effort by providing the annotator with a simple system that applies rules incrementally. Our goal was twofold: first, to develop an easy-to-use system that required a minimum of learning on the part of the linguist; second, one that provided a straightforward way of checking the results obtained, in order to immediately evaluate the results of the rules devised. After explaining the motivation for its development from scratch, we present the current status of the AC/DC project and provide a quantitative description of its material in what concerns semantic annotation. We then present the corte-e-costura system in detail, providing the results of our first experiments with the semantic fields of colour and clothing. We end the paper with some discussion of future work as well as of the experience gained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,201
inproceedings | honda-akiba-2010-language | Language Modeling Approach for Retrieving Passages in Lecture Audio Data | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1319/ | Honda, Koichiro and Akiba, Tomoyosi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Spoken Document Retrieval (SDR) is a promising technology for enhancing the utility of spoken materials. After the spoken documents have been transcribed by using a Large Vocabulary Continuous Speech Recognition (LVCSR) decoder, a text-based ad hoc retrieval method can be applied directly to the transcribed documents. However, recognition errors will significantly degrade the retrieval performance. To address this problem, we have previously proposed a method that aimed to fill the gap between automatically transcribed text and correctly transcribed text by using a statistical translation technique. In this paper, we extend the method by (1) using neighboring context to index the target passage, and (2) applying a language modeling approach for document retrieval. Our experimental evaluation shows that context information can improve retrieval performance, and that the language modeling approach is effective in incorporating context information into the proposed SDR method, which uses a translation model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,202 |
inproceedings | forner-etal-2010-evaluating | Evaluating Multilingual Question Answering Systems at {CLEF} | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1320/ | Forner, Pamela and Giampiccolo, Danilo and Magnini, Bernardo and Pe{\~n}as, Anselmo and Rodrigo, {\'A}lvaro and Sutcliffe, Richard | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The paper offers an overview of the key issues raised during the seven years of activity of the Multilingual Question Answering Track at the Cross Language Evaluation Forum (CLEF). The general aim of the Multilingual Question Answering Track has been to test both monolingual and cross-language Question Answering (QA) systems that process queries and documents in several European languages, also drawing attention to a number of challenging issues for research in multilingual QA. The paper gives a brief description of how the task has evolved over the years and of the way in which the data sets have been created, also presenting a brief summary of the different types of questions developed. The document collections adopted in the competitions are sketched as well, and some data about the participation are provided. Moreover, the main evaluation measures used to evaluate system performances are explained and an overall analysis of the results achieved is presented. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,203
inproceedings | vincze-etal-2010-hungarian | {H}ungarian Dependency Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1321/ | Vincze, Veronika and Szauter, D{\'o}ra and Alm{\'a}si, Attila and M{\'o}ra, Gy{\"o}rgy and Alexin, Zolt{\'a}n and Csirik, J{\'a}nos | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Herein, we present the process of developing the first Hungarian Dependency TreeBank. First, short references are made to dependency grammars we considered important in the development of our Treebank. Second, mention is made of existing dependency corpora for other languages. Third, we present the steps of converting the Szeged Treebank into dependency-tree format: from the originally phrase-structured treebank, we produced dependency trees by automatic conversion, checked and corrected them thereby creating the first manually annotated dependency corpus for Hungarian. We also go into detail about the two major sets of problems, i.e. coordination and predicative nouns and adjectives. Fourth, we give statistics on the treebank: by now, we have completed the annotation of business news, newspaper articles, legal texts and texts in informatics, at the same time, we are planning to convert the entire corpus into dependency tree format. Finally, we give some hints on the applicability of the system: the present database may be utilized {\textemdash} among others {\textemdash} in information extraction and machine translation as well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,204
inproceedings | fallucchi-etal-2010-generic | Generic Ontology Learners on Application Domains | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1322/ | Fallucchi, Francesca and Pazienza, Maria Teresa and Zanzotto, Fabio Massimo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In ontology learning from texts, we have ontology-rich domains where we have large structured domain knowledge repositories or we have large general corpora with large general structured knowledge repositories such as WordNet (Miller, 1995). Ontology learning methods are more useful in ontology-poor domains. Yet, in these conditions, these methods do not have particularly high performance as training material is not sufficient. In this paper we present an LSP ontology learning method that can exploit models learned from a generic domain to extract new information in a specific domain. In our model, we firstly learn a model from training data and then we use the learned model to discover knowledge in a specific domain. We tested our model adaptation strategy using a background domain that is applied to learn the isa networks in the Earth Observation Domain as a specific domain. We will demonstrate that our method captures domain knowledge better than other generic models: our model better captures what is expected by domain experts than a baseline method based only on WordNet. The latter is better correlated with non-domain annotators asked to produce the ontology for the specific domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,205
inproceedings | oltramari-etal-2010-senso | Senso Comune | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1323/ | Oltramari, Alessandro and Vetere, Guido and Lenzerini, Maurizio and Gangemi, Aldo and Guarino, Nicola | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper introduces the general features of Senso Comune, an open knowledge base for the Italian language, focusing on the interplay of lexical and ontological knowledge, and outlining our approach to conceptual knowledge elicitation. Senso Comune consists of a machine-readable lexicon constrained by an ontological infrastructure. The idea at the basis of Senso Comune is that natural languages exist in use, and they belong to their users. In the line of Saussure's linguistics, natural languages are seen as a social product and their main strength relies on the users' consensus. At the same time, language has specific goals: i.e. referring to entities that belong to the users' world (be it physical or not) and that are made up in social environments where expressions are produced and understood. This usage leverages the creativity of those who produce words and try to understand them. This is the reason why ontology, i.e. a shared conceptualization of the world, can be regarded as the soil in which the speakers' consensus may be rooted. Some final remarks concerning future work and applications are also given. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,206
inproceedings | nazar-janssen-2010-combining | Combining Resources: Taxonomy Extraction from Multiple Dictionaries | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1324/ | Nazar, Rogelio and Janssen, Maarten | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The idea that dictionaries are a good source for (computational) information has been around for a long while, and the extraction of taxonomic information from them is something that has been attempted several times. However, such information extraction was typically based on the systematic analysis of the text of a single dictionary. In this paper, we demonstrate how it is possible to extract taxonomic information without any analysis of the specific text, by comparing the same lexical entry in a number of different dictionaries. Counting word frequencies in the dictionary entry for the same word in different dictionaries leads to a surprisingly good recovery of taxonomic information, without the need for any syntactic analysis of the entries in question nor any kind of language-specific treatment. As a case in point, we will show in this paper an experiment extracting hyperonymy relations from several Spanish dictionaries, measuring the effect that the different number of dictionaries have on the results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,207 |
inproceedings | zhao-van-noord-2010-pos | {POS} Multi-tagging Based on Combined Models | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1325/ | Zhao, Yan and van Noord, Gertjan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the POS tagging task, there are two kinds of statistical models: one is the generative model, such as the HMM; the others are discriminative models, such as the Maximum Entropy Model (MEM). POS multi-tagging decoding methods include the N-best paths method and the forward-backward method. In this paper, we use the forward-backward decoding method based on a combined model of HMM and MEM. If P(t) is the forward-backward probability of each possible tag t, we first calculate P(t) according to HMM and MEM separately. For all tag options at a certain position in a sentence, we normalize P(t) for HMM and MEM separately. The probability of the combined model is the sum of the normalized forward-backward probabilities P_norm(t) from HMM and MEM. For each word w, we select the best tag, for which the probability of the combined model is the highest. In the experiments, we use the combined model and get higher accuracy than any single model on POS tagging tasks for three languages: Chinese, English and Dutch. The result indicates that our combined model is effective. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,208
inproceedings | saratxaga-etal-2010-ahotransf | {A}ho{T}ransf: A Tool for Multiband Excitation Based Speech Analysis and Modification | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1326/ | Saratxaga, Ibon and Hern{\'a}ez, Inmaculada and Navas, Eva and Sainz, I{\~n}aki and Luengo, Iker and S{\'a}nchez, Jon and Odriozola, Igor and Erro, Daniel | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we present AhoTransf, a tool that enables analysis, visualization, modification and synthesis of speech. AhoTransf integrates a speech signal analysis model with a graphical user interface to allow visualization and modification of the parameters of the model. The synthesis capability allows hearing the modified signal, thus providing a quick way to understand the perceptual effect of the changes in the parameters of the model. The speech analysis/synthesis algorithm is based on the Multiband Excitation technique, but uses a novel phase information representation, the Relative Phase Shifts (RPSs). With this representation, not only the amplitudes but also the phases of the harmonic components of the speech signal reveal their structured patterns in the visualization tool. AhoTransf is modularly conceived so that it can be used with different harmonic speech models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,209
inproceedings | deleger-zweigenbaum-2010-identifying | Identifying Paraphrases between Technical and Lay Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1327/ | Del{\'e}ger, Louise and Zweigenbaum, Pierre | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In previous work, we presented a preliminary study to identify paraphrases between technical and lay discourse types from medical corpora dedicated to the French language. In this paper, we test the hypothesis that the same kinds of paraphrases as for French can be detected between English technical and lay discourse types and report the adaptation of our method from French to English. Starting from the constitution of monolingual comparable corpora, we extract two kinds of paraphrases: paraphrases between nominalizations and verbal constructions and paraphrases between neo-classical compounds and modern-language phrases. We do this relying on morphological resources and a set of extraction rules we adapt from the original approach for French. Results show that paraphrases could be identified with a rather good precision, and that these types of paraphrase are relevant in the context of the opposition between technical and lay discourse types. These observations are consistent with the results obtained for French, which demonstrates the portability of the approach as well as the similarity of the two languages as regards the use of those kinds of expressions in technical and lay discourse types. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,210 |
inproceedings | ntalampiras-etal-2010-heterogeneous | Heterogeneous Sensor Database in Support of Human Behaviour Analysis in Unrestricted Environments: The Audio Part | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1328/ | Ntalampiras, Stavros and Ganchev, Todor and Potamitis, Ilyas and Fakotakis, Nikos | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the present paper we report on a recent effort that resulted in the establishment of a unique multimodal database, referred to as the PROMETHEUS database. This database was created in support of research and development activities, performed within the European Commission FP7 PROMETHEUS project, aiming at the creation of a framework for monitoring and interpretation of human behaviours in unrestricted indoors and outdoors environments. Here we discuss the design and the implementation of the audio part of the database and offer statistical information about the audio content. Specifically, it contains single-person and multi-person scenarios, but also covers scenarios with interactions between groups of people. The database design was conceived with extended support of research and development activities devoted to the detection of typical and atypical events, emergency and crisis situations, which assists in achieving situational awareness and more reliable interpretation of the context in which humans behave. The PROMETHEUS database allows for embracing a wide range of real-world applications, including smart-home and human-robot interaction interfaces, indoors/outdoors public areas surveillance, airport terminals or city park supervision, etc. A major portion of the PROMETHEUS database will be made publicly available by the end of year 2010. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,211
inproceedings | dahab-belz-2010-game | A Game-based Approach to Transcribing Images of Text | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1329/ | Dahab, Khalil and Belz, Anja | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Creating language resources is expensive and time-consuming, and this forms a bottleneck in the development of language technology, for less-studied non-European languages in particular. The recent internet phenomenon of crowd-sourcing offers a cost-effective and potentially fast way of overcoming such language resource acquisition bottlenecks. We present a methodology that takes as its input scanned documents of typed or hand-written text, and produces transcriptions of the text as its output. The methodology does not use Optical Character Recognition (OCR) technology; instead, it is game-based and produces such transcriptions as a by-product. The approach is intended particularly for languages for which language technology and resources are scarce and reliable OCR technology may not exist. It can be used in place of OCR for transcribing individual documents, or to create corpora of paired images and transcriptions required to train OCR tools. We present Minefield, a prototype implementation of the approach which is currently collecting Arabic transcriptions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,212
inproceedings | serrano-etal-2010-rodrigo | The {RODRIGO} Database | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1330/ | Serrano, Nicolas and Castro, Francisco and Juan, Alfons | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Annotation of digitized pages from historical document collections is very important for research on automatic extraction of text blocks and lines, and on handwriting recognition. We have recently introduced a new handwritten text database, GERMANA, which is based on a Spanish manuscript from 1891. To our knowledge, GERMANA is the first publicly available database mostly written in Spanish and comparable in size to standard databases. In this paper, we present another handwritten text database, RODRIGO, completely written in Spanish and comparable in size to GERMANA. However, RODRIGO comes from a much older manuscript, from 1545, where the typical difficult characteristics of historical documents are more evident. In particular, the writing style, which has clear Gothic influences, is significantly more complex than that of GERMANA. We also provide baseline handwriting recognition results for reference in future studies, using standard techniques and tools for preprocessing, feature extraction, HMM-based image modelling, and language modelling. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,213
inproceedings | bentivogli-etal-2010-building | Building Textual Entailment Specialized Data Sets: a Methodology for Isolating Linguistic Phenomena Relevant to Inference | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1331/ | Bentivogli, Luisa and Cabrio, Elena and Dagan, Ido and Giampiccolo, Danilo and Leggio, Medea Lo and Magnini, Bernardo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The expected benefits derive from the intuition that investigating the linguistic phenomena separately, i.e. decomposing the complexity of the TE problem, would yield an improvement in the development of specific strategies to cope with them. The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena into both fine-grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the understanding of the linguistic phenomena relevant to entailment judgments and to take a first step towards the creation of large-scale specialized data sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,214
inproceedings | al-saif-markert-2010-leeds | The {L}eeds {A}rabic Discourse Treebank: Annotating Discourse Connectives for {A}rabic | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1332/ | Al-Saif, Amal and Markert, Katja | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present the first effort towards producing an Arabic Discourse Treebank, a news corpus where all discourse connectives are identified and annotated with the discourse relations they convey as well as with the two arguments they relate. We discuss our collection of Arabic discourse connectives as well as principles for identifying and annotating them in context, taking into account properties specific to Arabic. In particular, we deal with the fact that Arabic has a rich morphology: we therefore include clitics as connectives as well as a wide range of nominalizations as potential arguments. We present a dedicated discourse annotation tool for Arabic and a large-scale annotation study. We show that both the human identification of discourse connectives and the determination of the discourse relations they convey are reliable. Our current annotated corpus encompasses a final total of 5651 annotated discourse connectives in 537 news texts. In the future, we will release the annotated corpus to other researchers and use it for training and testing automated methods for discourse connective and relation recognition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,215
inproceedings | vasilescu-etal-2010-role | On the Role of Discourse Markers in Interactive Spoken Question Answering Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1333/ | Vasilescu, Ioana and Rosset, Sophie and Adda-Decker, Martine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents a preliminary analysis of the role of some discourse markers and the vocalic hesitation "euh" in a corpus of spoken human utterances collected with the Ritel system, an open-domain spoken dialog system. The frequency and contextual combinatorial patterns of classical discourse markers and of the vocalic hesitation have been studied. This analysis pointed out some specificities in the combinatorial patterns of the analyzed items. The classical discourse markers seem to help initiate larger discursive blocks both at initial and medial positions of the ongoing turns. The vocalic hesitation also marks the user's embarrassment and wish to close the dialog. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,216