Column schema (from the dataset viewer): `entry_type` string (4 classes); `citation_key` string (10-110 chars); `title` string (6-276 chars, nullable); `editor` string (723 classes); `month` string (69 classes); `year` date (1963-2022); `address` string (202 classes); `publisher` string (41 classes); `url` string (34-62 chars); `author` string (6-2.07k chars, nullable); `booktitle` string (861 classes); `pages` string (1-12 chars, nullable); `abstract` string (302-2.4k chars); `journal` string (5 classes); `volume` string (24 classes); `doi` string (20-39 chars, nullable); a set of sparsely populated or all-null extra columns (`n`, `wer`, `uas`, `language`, `isbn`, `recall`, `number`, `a`, `b`, `c`, `k`, `f1`, `r`, `mci`, `p`, `sd`, `female`, `m`, `food`, `f`, `note`); and `__index_level_0__` int64 (22k-106k).

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | schulte-im-walde-2010-comparing | Comparing Computational Models of Selectional Preferences - Second-order Co-Occurrence vs. Latent Semantic Clusters | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1434/ | Schulte im Walde, Sabine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents a comparison of three computational approaches to selectional preferences: (i) an intuitive distributional approach that uses second-order co-occurrence of predicates and complement properties; (ii) an EM-based clustering approach that models the strengths of predicate--noun relationships by latent semantic clusters (Rooth et al., 1999); and (iii) an extension of the latent semantic clusters by incorporating the MDL principle into the EM training, thus explicitly modelling the predicate--noun selectional preferences by WordNet classes (Schulte im Walde et al., 2008). Concerning the distributional approach, we were interested not only in how well the model describes selectional preferences, but also in which second-order properties are most salient. For example, a typical direct object of the verb `drink' is usually fluid, might be hot or cold, can be bought, might be bottled, etc. The general question we ask is: what characterises the predicate`s restrictions to the semantic realisation of its complements? Our second interest lies in the actual comparison of the models: How does a very simple distributional model compare to much more complex approaches, and which representation of selectional preferences is more appropriate, using (i) second-order properties, (ii) an implicit generalisation of nouns (by clusters), or (iii) an explicit generalisation of nouns by WordNet classes within clusters? We describe various experiments on German data and two evaluations, and demonstrate that the simple distributional model outperforms the more complex cluster-based models in most cases, but does not itself always beat the powerful frequency baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,317 |
inproceedings | mcnamee-etal-2010-evaluation | An Evaluation of Technologies for Knowledge Base Population | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1435/ | McNamee, Paul and Dang, Hoa Trang and Simpson, Heather and Schone, Patrick and Strassel, Stephanie M. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Previous content extraction evaluations have neglected to address problems which complicate the incorporation of extracted information into an existing knowledge base. Previous question answering evaluations have likewise avoided tasks such as explicit disambiguation of target entities and handling a fixed set of questions about entities without previous determination of possible answers. In 2009 NIST conducted a Knowledge Base Population track at its Text Analysis Conference to unite the content extraction and question answering communities and jointly explore some of these issues. This exciting new evaluation attracted 13 teams from 6 countries that submitted results in two tasks, Entity Linking and Slot Filling. This paper explains the motivation and design of the tasks, describes the language resources that were developed for this evaluation, offers comparisons to previous community evaluations, and briefly summarizes the performance obtained by systems. We also identify relevant issues pertaining to target selection, challenging queries, and performance measures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,318 |
inproceedings | ferrandez-etal-2010-aligning | Aligning {F}rame{N}et and {W}ord{N}et based on Semantic Neighborhoods | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1436/ | Ferr{\'a}ndez, {\'O}scar and Ellsworth, Michael and Mu{\~n}oz, Rafael and Baker, Collin F. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents an algorithm for aligning FrameNet lexical units to WordNet synsets. Both FrameNet and WordNet are well-known resources, widely used by the entire research community. They help systems in the comprehension of the semantics of texts, and therefore, finding strategies to link FrameNet and WordNet involves challenges related to a better understanding of the human language. Such deep analysis is exploited by researchers to improve the performance of their applications. The alignment is achieved by exploiting the particular characteristics of each lexical-semantic resource, with special emphasis on the explicit, formal semantic relations in each. Semantic neighborhoods are computed for each alignment of lemmas, and the algorithm calculates correlation scores by comparing such neighborhoods. The results suggest that the proposed algorithm is appropriate for aligning the FrameNet and WordNet hierarchies. Furthermore, the algorithm can aid research on increasing the coverage of FrameNet, building FrameNets in other languages, and creating a system for querying a joint FrameNet-WordNet hierarchy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,319 |
inproceedings | aliane-etal-2010-al | Al {---}{K}halil : The {A}rabic Linguistic Ontology Project | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1437/ | Aliane, Hassina and Alimazighi, Zaia and Mazari, Ahmed Cherif | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Although Arabic is the language of hundreds of millions of people around the world, little has been done in terms of computerized linguistic resources, tools or applications. In this paper we describe a project whose aim is to contribute to filling this gap. The project consists in building an ontology-centered infrastructure for Arabic Language resources and applications. The core of this infrastructure is a linguistic ontology that is founded on Arabic Traditional Grammar. The methodology we have chosen consists in reusing an existing ontology, namely the GOLD linguistic ontology. GOLD is the first ontology designed for linguistic description on the semantic web. We first construct our ontology manually by relating our concepts from Arabic Linguistics to the upper concepts of GOLD; furthermore, an information extraction algorithm is implemented to automatically enrich the ontology. We discuss the development of the ontology and present our vision for the whole project, which aims at using this ontology for creating tools and resources for both linguists and NLP researchers. Indeed, the ontology is seen not only as a domain ontology but also as a resource for different linguistic and NLP applications. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,320 |
inproceedings | kemps-snijders-etal-2010-lat | {LAT} Bridge: Bridging Tools for Annotation and Exploration of Rich Linguistic Data | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1438/ | Kemps-Snijders, Marc and Koller, Thomas and Sloetjes, Han and Verwey, Huib | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a software module, the LAT Bridge, which enables bidirectional communication between the annotation and exploration tools developed at the Max Planck Institute for Psycholinguistics as part of our Language Archiving Technology (LAT) tool suite. These existing annotation and exploration tools enable the annotation, enrichment, exploration and archive management of linguistic resources. The user community has expressed the desire to use different combinations of LAT tools in conjunction with each other. The LAT Bridge is designed to cater for a number of basic data interaction scenarios between the LAT annotation and exploration tools. These interaction scenarios (e.g. bootstrapping a wordlist, searching for annotation examples or lexical entries) have been identified in collaboration with researchers at our institute. We had to take into account that the LAT tools for annotation and exploration represent a heterogeneous application scenario with desktop-installed and web-based tools. Additionally, the LAT Bridge has to work in situations where the Internet is not available or is available only in an unreliable manner (i.e. with a slow connection or with frequent interruptions). As a result, the LAT Bridge's architecture supports both online and offline communication between the LAT annotation and exploration tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,321 |
inproceedings | bojar-etal-2010-evaluating | Evaluating Utility of Data Sources in a Large Parallel {C}zech-{E}nglish Corpus {C}z{E}ng 0.9 | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1439/ | Bojar, Ond{\v{r}}ej and Li{\v{s}}ka, Adam and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | CzEng 0.9 is the third release of a large parallel corpus of Czech and English. For the current release, CzEng was extended by a significant amount of text from various types of sources, including parallel web pages, electronically available books and subtitles. This paper describes and evaluates filtering techniques employed in the process in order to avoid misaligned or otherwise damaged parallel sentences in the collection. We estimate the precision and recall of two sets of filters. The first set was used to process the data before their inclusion into CzEng. The filters from the second set were newly created to improve the filtering process for future releases of CzEng. Given the overall amount and variance of sources of the data, our experiments illustrate the utility of parallel data sources with respect to extractable parallel segments. As a similar behaviour can be expected for other language pairs, our results can be interpreted as guidelines indicating which sources other researchers should exploit first. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,322 |
inproceedings | liakata-etal-2010-corpora | Corpora for the Conceptualisation and Zoning of Scientific Papers | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1440/ | Liakata, Maria and Teufel, Simone and Siddharthan, Advaith and Batchelor, Colin | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present two complementary annotation schemes for sentence based annotation of full scientific papers, CoreSC and AZ-II, applied to primary research articles in chemistry. AZ-II is the extension of AZ for chemistry papers. AZ has been shown to have been reliably annotated by independent human coders and useful for various information access tasks. Like AZ, AZ-II follows the rhetorical structure of a scientific paper and the knowledge claims made by the authors. The CoreSC scheme takes a different view of scientific papers, treating them as the humanly readable representations of scientific investigations. It seeks to retrieve the structure of the investigation from the paper as generic high-level Core Scientific Concepts (CoreSC). CoreSCs have been annotated by 16 chemistry experts over a total of 265 full papers in physical chemistry and biochemistry. We describe the differences and similarities between the two schemes in detail and present the two corpora produced using each scheme. There are 36 shared papers in the corpora, which allows us to quantitatively compare aspects of the annotation schemes. We show the correlation between the two schemes, their strengths and weaknesses and discuss the benefits of combining a rhetorical based analysis of the papers with a content-based one. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,323 |
inproceedings | tretti-di-eugenio-2010-analysis | Analysis and Presentation of Results for Mobile Local Search | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1441/ | Tretti, Alberto and Di Eugenio, Barbara | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Aggregation of long lists of concepts is important to avoid overwhelming a small display. Focusing on the domain of mobile local search, this paper presents the development of an application to perform filtering and aggregation of results obtained through the Yahoo! Local web service. First, we performed an analysis of the data available through Yahoo! Local by crawling its database with over 170 thousand local listings located in Chicago. Then, we compiled resources and developed algorithms to filter and aggregate local search results. The methods developed exploit Yahoo!'s listings categorization to reduce the result space and pinpoint the category containing the most relevant results. Finally, we evaluated a prototype through a user study, which pitted our system against Yahoo! Local and against a plain list of search results. The results obtained from the study show that our aggregation methods are quite effective, cutting down the number of entries returned to the user by 43{\%} on average, but leaving search efficiency and user satisfaction unaffected. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,324 |
inproceedings | esteve-etal-2010-epac | The {EPAC} Corpus: Manual and Automatic Annotations of Conversational Speech in {F}rench Broadcast News | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1442/ | Est{\`e}ve, Yannick and Bazillon, Thierry and Antoine, Jean-Yves and B{\'e}chet, Fr{\'e}d{\'e}ric and Farinas, J{\'e}r{\^o}me | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the EPAC corpus, which is composed of a set of 100 hours of conversational speech manually transcribed and of the outputs of automatic tools (automatic segmentation, transcription, POS tagging, etc.) applied on the entire French ESTER 1 audio corpus: this concerns about 1700 hours of audio recordings from radiophonic shows. This corpus was built during the EPAC project funded by the French Research Agency (ANR) from 2007 to 2010. This corpus increases significantly the amount of French manually transcribed audio recordings easily available and it is now included as a part of the ESTER 1 corpus in the ELRA catalog without additional cost. By providing a large set of automatic outputs of speech processing tools, the EPAC corpus should be useful to researchers who want to work on such data without having to develop and deal with such tools. These automatic annotations are various: segmentation and speaker diarization, one-best hypotheses from the LIUM automatic speech recognition system with confidence measures, but also word-lattices and confusion networks, named entities, part-of-speech tags, chunks, etc. The 100 hours of speech manually transcribed were split into three data sets in order to get an official training corpus, an official development corpus and an official test corpus. These data sets were used to develop and to evaluate some automatic tools which have been used to process the 1700 hours of audio recording. For example, on the EPAC test data set our ASR system yields a word error rate of 17.25{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,325 |
inproceedings | tomanek-hahn-2010-annotation | Annotation Time Stamps {---} Temporal Metadata from the Linguistic Annotation Process | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1443/ | Tomanek, Katrin and Hahn, Udo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe the re-annotation of selected types of named entities (persons, organizations, locations) from the Muc7 corpus. The focus of this annotation initiative is on recording the time needed for the linguistic process of named entity annotation. Annotation times are measured on two basic annotation units -- sentences vs. complex noun phrases. We gathered evidence that decision times are non-uniformly distributed over the annotation units, while they do not substantially deviate among annotators. This data seems to support the hypothesis that annotation times very much depend on the inherent ``hardness'' of each single annotation decision. We further show how such time-stamped information can be used for empirically grounded studies of selective sampling techniques, such as Active Learning. We directly compare Active Learning costs on the basis of token-based vs. time-based measurements. The data reveals that Active Learning keeps its competitive advantage over random sampling in both scenarios though the difference is less marked for the time metric than for the token metric. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,326 |
inproceedings | moore-etal-2010-annotating | Annotating the {E}nron Email Corpus with Number Senses | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1444/ | Moore, Stuart and Buchholz, Sabine and Korhonen, Anna | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Enron Email Corpus provides ``Real World'' text in the business email domain, which is a target domain for many speech and language applications. We present a section of this corpus annotated with number senses - labelling each number as a date, time, year, telephone number etc. We show that sense categories and their frequencies are very different in this domain than in newswire text. The annotated corpus can provide valuable material for the development of number sense disambiguation techniques. We have released the annotations into the public domain, to allow other researchers to perform comparisons. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,327 |
inproceedings | lendvai-etal-2010-integration | Integration of Linguistic Markup into Semantic Models of Folk Narratives: The Fairy Tale Use Case | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1445/ | Lendvai, Piroska and Declerck, Thierry and Dar{\'a}nyi, S{\'a}ndor and Gerv{\'a}s, Pablo and Herv{\'a}s, Raquel and Malec, Scott and Peinado, Federico | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Propp`s influential structural analysis of fairy tales created a powerful schema for representing storylines in terms of character functions, which is directly exploitable for computational semantic analysis, and procedural generation of stories of this genre. We tackle two resources that draw on the Proppian model - one formalizes it as a semantic markup scheme and the other as an ontology - both lacking linguistic phenomena explicitly represented in them. The need for integrating linguistic information into structured semantic resources is motivated by the emergence of suitable standards that facilitate this, as well as the benefits such joint representation would create for transdisciplinary research across Digital Humanities, Computational Linguistics, and Artificial Intelligence. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,328 |
inproceedings | savas-etal-2010-lmf | An {LMF}-based Web Service for Accessing {W}ord{N}et-type Semantic Lexicons | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1446/ | Savas, Bora and Hayashi, Yoshihiko and Monachini, Monica and Soria, Claudia and Calzolari, Nicoletta | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes a Web service for accessing WordNet-type semantic lexicons. The central idea behind the service design is: given a query, the primary functionality of lexicon access is to present a partial lexicon by extracting the relevant part of the target lexicon. Based on this idea, we implemented the system as a RESTful Web service whose input query is specified by the access URI and whose output is presented in a standardized XML data format. LMF, an ISO standard for modeling lexicons, plays the most prominent role: the access URI pattern basically reflects the lexicon structure as defined by LMF; the access results are rendered based on Wordnet-LMF, which is a version of LMF XML-serialization. The Web service currently provides access to Princeton WordNet, Japanese WordNet, as well as the EDR Electronic Dictionary as a trial. To accommodate the EDR dictionary within the same framework, we modeled it also as a WordNet-type semantic lexicon. This paper thus argues for possible alternatives to model innately bilingual/multilingual lexicons like EDR with LMF, and proposes possible revisions to Wordnet-LMF. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,329 |
inproceedings | atserias-etal-2010-active | Active Learning for Building a Corpus of Questions for Parsing | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1447/ | Atserias, Jordi and Attardi, Giuseppe and Simi, Maria and Zaragoza, Hugo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes how we built a dependency Treebank for questions. The questions for the Treebank were drawn from questions from the TREC 10 QA task and from Yahoo! Answers. Among the uses for the corpus is to train a dependency parser achieving good accuracy on parsing questions without hurting its overall accuracy. We also explore active learning techniques to determine the suitable size for a corpus of questions in order to achieve adequate accuracy while minimizing the annotation efforts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,330 |
inproceedings | cook-stevenson-2010-automatically | Automatically Identifying Changes in the Semantic Orientation of Words | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1448/ | Cook, Paul and Stevenson, Suzanne | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The meanings of words are not fixed but in fact undergo change, with new word senses arising and established senses taking on new aspects of meaning or falling out of usage. Two types of semantic change are amelioration and pejoration; in these processes a word sense changes to become more positive or negative, respectively. In this first computational study of amelioration and pejoration we adapt a web-based method for determining semantic orientation to the task of identifying ameliorations and pejorations in corpora from differing time periods. We evaluate our proposed method on a small dataset of known historical ameliorations and pejorations, and find it to perform better than a random baseline. Since this test dataset is small, we conduct a further evaluation on artificial examples of amelioration and pejoration, and again find evidence that our proposed method is able to identify changes in semantic orientation. Finally, we conduct a preliminary evaluation in which we apply our methods to the task of finding words which have recently undergone amelioration or pejoration. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,331 |
inproceedings | leuski-traum-2010-npceditor | {NPCE}ditor: A Tool for Building Question-Answering Characters | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1449/ | Leuski, Anton and Traum, David | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | NPCEditor is a system for building and deploying virtual characters capable of engaging a user in spoken dialog on a limited domain. The dialogue may take any form as long as the character responses can be specified a priori. For example, NPCEditor has been used for constructing question answering characters where a user asks questions and the character responds, but other scenarios are possible. At the core of the system is a state of the art statistical language classification technology for mapping from user`s text input to system responses. NPCEditor combines the classifier with a database that stores the character information and relevant language data, a server that allows the character designer to deploy the completed characters, and a user-friendly editor that helps the designer to accomplish both character design and deployment tasks. In the paper we define the overall system architecture, describe individual NPCEditor components, and guide the reader through the steps of building a virtual character. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,332 |
inproceedings | quan-ren-2010-automatic | Automatic Annotation of Word Emotion in Sentences Based on {R}en-{CEC}ps | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1450/ | Quan, Changqin and Ren, Fuji | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Textual information is an important communication medium containing rich expressions of emotion, and emotion recognition on text has wide applications. Word emotion analysis is fundamental in the problem of textual emotion recognition. Through an analysis of the characteristics of word emotion expression, we use a word emotion vector to describe the combined basic emotions in a word, which can be used to distinguish direct and indirect emotion words, express emotion ambiguity in words, and express multiple emotions in words. Based on Ren-CECps (a Chinese emotion corpus), we do an experiment to explore the role of emotion words for sentence emotion recognition and we find that the emotions of a simple sentence (sentence without negative words, conjunctions, or question mark) can be approximated by an addition of the word emotions. Then MaxEnt modeling is used to find which context features are effective for recognizing word emotion in sentences. The features of word, N-words, POS, Pre-N-words emotion, Pre-is-degree-word, Pre-is-negativeword, Pre-is-conjunction and their combination have been experimented with. After that, we use the two metrics: Kappa coefficient of agreement and Voting agreement to measure the word annotation agreement of Ren-CECps. The experiments on the above context features showed promising results compared with word emotion agreement on people`s judgments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,333 |
inproceedings | zablotskaya-etal-2010-speech | Speech Data Corpus for Verbal Intelligence Estimation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1451/ | Zablotskaya, Kseniya and Walter, Steffen and Minker, Wolfgang | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The goal of our research is the development of algorithms for automatic estimation of a person`s verbal intelligence based on the analysis of transcribed spoken utterances. In this paper we present the corpus of German native speakers' monologues and dialogues about the same topics collected at the University of Ulm, Germany. The monologues were descriptions of two short films; the dialogues were discussions about problems of German education. The data corpus contains the verbal intelligence quotients of each speaker, which were measured with the Hamburg Wechsler Intelligence Test for Adults. In this paper we describe our corpus, why we decided to create it, and how it was collected. We also describe some approaches which can be applied to the transcribed spoken utterances for extraction of different features which could have a correlation with a person`s verbal intelligence. The data corpus consists of 71 monologues and 30 dialogues (about 10 hours of audio data). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,334 |
inproceedings | liu-etal-2010-large | A Very Large Scale {M}andarin {C}hinese Broadcast Corpus for {GALE} Project | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1452/ | Liu, Yi and Fung, Pascale and Yang, Yongsheng and DiPersio, Denise and Glenn, Meghan and Strassel, Stephanie and Cieri, Christopher | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present the design, collection, transcription and analysis of a Mandarin Chinese Broadcast Collection of over 3000 hours. The data was collected by Hong Kong University of Science and Technology (HKUST) in China on a cable TV and satellite transmission platform established in support of the DARPA Global Autonomous Language Exploitation (GALE) program. The collection includes broadcast news (BN) and broadcast conversation (BC) including talk shows, roundtable discussions, call-in shows, editorials and other conversational programs that focus on news and current events. HKUST also collects detailed information about all recorded programs. A subset of BC and BN recordings are manually transcribed with standard Chinese characters in UTF-8 encoding, using specific mark-ups for a small set of spontaneous and conversational speech phenomena. The collection is among the largest and first of its kind for Mandarin Chinese Broadcast speech, providing abundant and diverse samples for Mandarin speech recognition and other application-dependent tasks, such as spontaneous speech processing and recognition, topic detection, information retrieval, and speaker recognition. HKUST's acoustic analysis of 500 hours of the speech and transcripts demonstrates the positive impact this data could have on system performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,335 |
inproceedings | dinu-2010-building | Building a {G}enerative {L}exicon for {R}omanian | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1453/ | Dinu, Anca | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we present on-going research: the construction and annotation of a Romanian Generative Lexicon (RoGL). Our system follows the specifications of the CLIPS project for the Italian language. It contains a corpus, a type ontology, a graphical interface and a database from which we generate data in XML format. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,336 |
inproceedings | francom-etal-2010-specialized | How Specialized are Specialized Corpora? Behavioral Evaluation of Corpus Representativeness for {M}altese. | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1454/ | Francom, Jerid and LaCross, Amy and Ussishkin, Adam | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we bring to light a novel intersection between corpus linguistics and behavioral data that can be employed as an evaluation metric for resources for low-density languages, drawing on well-established psycholinguistic factors. Using the low-density language Maltese as a test case, we highlight the challenges that face researchers developing resources for languages with sparsely available data and identify a key empirical link between corpus and psycholinguistic research as a tool to evaluate corpus resources. Specifically, we compare two robust variables identified in the psycholinguistic literature: word frequency (as measured in a corpus) and word familiarity (as measured in a rating task). We then apply statistical methods to evaluate the extent to which familiarity ratings predict corpus frequency for verbs in the Maltese corpus from three angles: 1) token frequency, 2) frequency distributions and 3) morpho-syntactic type (binyan). This research provides a multidisciplinary approach to corpus development and evaluation, in particular for less-resourced languages that lack a wide access to diverse language data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,337 |
inproceedings | walker-etal-2010-large | Large Scale Multilingual Broadcast Data Collection to Support Machine Translation and Distillation Technology Development | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1455/ | Walker, Kevin and Caruso, Christopher and DiPersio, Denise | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The development of technologies to address machine translation and distillation of multilingual broadcast data depends heavily on the collection of large volumes of material from modern data providers. To address the needs of GALE researchers, the Linguistic Data Consortium (LDC) developed a system for collecting broadcast news and conversation from a variety of Arabic, Chinese and English broadcasters. The system is highly automated, easily extensible and robust and is capable of collecting, processing and evaluating hundreds of hours of content from several dozen sources per day. In addition to this extensive system, LDC manages three remote collection sites to maximize the variety of available broadcast data and has designed a portable broadcast collection platform to facilitate remote collection. This paper will present a detailed description of the design and implementation of LDC's collection system, the technical challenges and solutions to large scale broadcast data collection efforts and an overview of the system's operation. This paper will also discuss the challenges of managing remote collections, in particular, the strategies used to normalize data formats, naming conventions and delivery methods to achieve optimal integration of remotely-collected data into LDC's collection database and downstream tasking workflow. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,338 |
inproceedings | street-etal-2010-like | Like Finding a Needle in a Haystack: Annotating the {A}merican National Corpus for Idiomatic Expressions | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1456/ | Street, Laura and Michalov, Nathan and Silverstein, Rachel and Reynolds, Michael and Ruela, Lurdes and Flowers, Felicia and Talucci, Angela and Pereira, Priscilla and Morgon, Gabriella and Siegel, Samantha and Barousse, Marci and Anderson, Antequa and Carroll, Tashom and Feldman, Anna | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Our paper presents the details of a pilot study in which we tagged portions of the American National Corpus (ANC) for idioms composed of verb-noun constructions, prepositional phrases, and subordinate clauses. The three data sets we analyzed included 1,500-sentence samples from the spoken, the nonfiction, and the fiction portions of the ANC. Our paper provides the details of the tagset we developed, the motivation behind our choices, and the inter-annotator agreement measures we deemed appropriate for this task. In tagging the ANC for idiomatic expressions, our annotators achieved a high level of agreement ({\ensuremath{>}} .80) on the tags but a low level of agreement ({\ensuremath{<}} .00) on what constituted an idiom. These findings support the claim that identifying idiomatic and metaphorical expressions is a highly difficult and subjective task. In total, 135 idiom types and 154 idiom tokens were identified. Based on the total tokens found for each idiom class, we suggest that future research on idiom detection and idiom annotation include prepositional phrases as this class of idioms occurred frequently in the nonfiction and spoken samples of our corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,339 |
inproceedings | zaghouani-etal-2010-adapting | Adapting a resource-light highly multilingual Named Entity Recognition system to {A}rabic | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1457/ | Zaghouani, Wajdi and Pouliquen, Bruno and Ebrahim, Mohamed and Steinberger, Ralf | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a fully functional Arabic information extraction (IE) system that is used to analyze large volumes of news texts every day to extract the named entity (NE) types person, organization, location, date and number, as well as quotations (direct reported speech) by and about people. The Named Entity Recognition (NER) system was not developed for Arabic, but - instead - a highly multilingual, almost language-independent NER system was adapted to also cover Arabic. The Semitic language Arabic substantially differs from the Indo-European and Finno-Ugric languages currently covered. This paper thus describes what Arabic language-specific resources had to be developed and what changes needed to be made to the otherwise language-independent rule set in order to be applicable to the Arabic language. The achieved evaluation results are generally satisfactory, but could be improved for certain entity types. The results of the IE tools can be seen on the Arabic pages of the freely accessible Europe Media Monitor (EMM) application NewsExplorer, which can be found at \url{http://press.jrc.it/overview.html}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,340 |
inproceedings | li-etal-2010-enriching | Enriching Word Alignment with Linguistic Tags | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1458/ | Li, Xuansong and Ge, Niyu and Grimes, Stephen and Strassel, Stephanie M. and Maeda, Kazuaki | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Incorporating linguistic knowledge into word alignment is becoming increasingly important for current approaches in statistical machine translation research. To improve automatic word alignment and ultimately machine translation quality, an annotation framework is jointly proposed by LDC (Linguistic Data Consortium) and IBM. The framework enriches word alignment corpora to capture contextual, syntactic and language-specific features by introducing linguistic tags to the alignment annotation. Two annotation schemes constitute the framework: alignment and tagging. The alignment scheme aims to identify minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. The framework produces a solid ground-level alignment base upon which larger translation unit alignment can be automatically induced. To test the soundness of this work, evaluation is performed on a pilot annotation, resulting in inter- and intra- annotator agreement of above 90{\%}. To date LDC has produced manual word alignment and tagging on 32,823 Chinese-English sentences following this framework. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,341 |
inproceedings | eberhard-etal-2010-indiana | The {I}ndiana {\textquotedblleft}Cooperative Remote Search Task{\textquotedblright} ({CR}e{ST}) Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1459/ | Eberhard, Kathleen and Nicholson, Hannele and K{\"u}bler, Sandra and Gundersen, Susan and Scheutz, Matthias | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper introduces a novel corpus of natural language dialogues obtained from humans performing a cooperative, remote, search task (CReST) as it occurs naturally in a variety of scenarios (e.g., search and rescue missions in disaster areas). This corpus is unique in that it involves remote collaborations between two interlocutors who each have to perform tasks that require the other`s assistance. In addition, one interlocutor`s tasks require physical movement through an indoor environment as well as interactions with physical objects within the environment. The multi-modal corpus contains the speech signals as well as transcriptions of the dialogues, which are additionally annotated for dialog structure, disfluencies, and for constituent and dependency syntax. On the dialogue level, the corpus was annotated for separate dialogue moves, based on the classification developed by Carletta et al. (1997) for coding task-oriented dialogues. Disfluencies were annotated using the scheme developed by Lickley (1998). The syntactic annotation comprises POS annotation, Penn Treebank style constituent annotations as well as dependency annotations based on the dependencies of pennconverter. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,342 |
inproceedings | christensen-etal-2010-principled | Principled Construction of Elicited Imitation Tests | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1460/ | Christensen, Carl and Hendrickson, Ross and Lonsdale, Deryle | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we discuss the methodology behind the construction of elicited imitation (EI) test items. First we examine varying uses for EI tests in research and in testing overall oral proficiency. We also mention criticisms of previous test items. Then we identify the factors that contribute to the difficulty of an EI item as shown in previous studies. Based on this discussion, we describe a way of automating the creation of test items in order to better evaluate language learners' oral proficiency while improving item naturalness. We present a new item construction tool and the process that it implements in order to create test items from a corpus, identifying relevant features needed to compile a database of EI test items. We examine results from administration of a new EI test engineered in this manner, illustrating the effect that standard language resources can have on creating an effective EI test item repository. We also sketch ongoing work on test item generation for other languages and an adaptive test that will use this collection of test items. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,343 |
inproceedings | ambati-etal-2010-high | A High Recall Error Identification Tool for {H}indi Treebank Validation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1461/ | Ambati, Bharat Ram and Gupta, Mridul and Husain, Samar and Sharma, Dipti Misra | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the development of a hybrid tool for a semi-automated process for validation of treebank annotation at various levels. The tool is developed for error detection at the part-of-speech, chunk and dependency levels of a Hindi treebank, currently under development. The tool aims to identify as many errors as possible at these levels to achieve consistency in the task of annotation. Consistency in treebank annotation is a must for making data as error-free as possible and for providing quality assurance. The tool is aimed at ensuring consistency and to make manual validation cost effective. We discuss a rule based and a hybrid approach (statistical methods combined with rule-based methods) by which a high-recall system can be developed and used to identify errors in the treebank. We report some results of using the tool on a sample of data extracted from the Hindi treebank. We also argue how the tool can prove useful in improving the annotation guidelines which would in turn, better the quality of annotation in subsequent iterations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,344 |
inproceedings | robinson-etal-2010-dialogues | Dialogues in Context: An Objective User-Oriented Evaluation Approach for Virtual Human Dialogue | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1462/ | Robinson, Susan and Roque, Antonio and Traum, David | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | As conversational agents are now being developed to encounter more complex dialogue situations it is increasingly difficult to find satisfactory methods for evaluating these agents. Task-based measures are insufficient where there is no clearly defined task. While user-based evaluation methods may give a general sense of the quality of an agent`s performance, they shed little light on the relative quality or success of specific features of dialogue that are necessary for system improvement. This paper examines current dialogue agent evaluation practices and motivates the need for a more detailed approach for defining and measuring the quality of dialogues between agent and user. We present a framework for evaluating the dialogue competence of artificial agents involved in complex and underspecified tasks when conversing with people. A multi-part coding scheme is proposed that provides a qualitative analysis of human utterances, and rates the appropriateness of the agent`s responses to these utterances. The scheme is outlined, and then used to evaluate Staff Duty Officer Moleno, a virtual guide in Second Life. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,345 |
inproceedings | yao-etal-2010-practical | Practical Evaluation of Speech Recognizers for Virtual Human Dialogue Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1463/ | Yao, Xuchen and Bhutada, Pravin and Georgila, Kallirroi and Sagae, Kenji and Artstein, Ron and Traum, David | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We perform a large-scale evaluation of multiple off-the-shelf speech recognizers across diverse domains for virtual human dialogue systems. Our evaluation is aimed at speech recognition consumers and potential consumers with limited experience with readily available recognizers. We focus on practical factors to determine what levels of performance can be expected from different available recognizers in various projects featuring different types of conversational utterances. Our results show that there is no single recognizer that outperforms all other recognizers in all domains. The performance of each recognizer may vary significantly depending on the domain, the size and perplexity of the corpus, the out-of-vocabulary rate, and whether acoustic and language model adaptation has been used or not. We expect that our evaluation will prove useful to other speech recognition consumers, especially in the dialogue community, and will shed some light on the key problem in spoken dialogue systems of selecting the most suitable available speech recognition system for a particular application, and what impact training will have. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,346 |
inproceedings | ohtake-etal-2010-dialogue | Dialogue Acts Annotation for {NICT} {K}yoto Tour Dialogue Corpus to Construct Statistical Dialogue Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1464/ | Ohtake, Kiyonori and Misu, Teruhisa and Hori, Chiori and Kashioka, Hideki and Nakamura, Satoshi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper introduces a new corpus of consulting dialogues designed for training a dialogue manager that can handle consulting dialogues through spontaneous interactions from the tagged dialogue corpus. We have collected more than 150 hours of consulting dialogues in the tourist guidance domain. We are developing a corpus that consists of speech, transcripts, speech act (SA) tags, morphological analysis results, dependency analysis results, and semantic content tags. This paper outlines our taxonomy of dialogue act (DA) annotation, which describes two aspects of an utterance: its communicative function (SA) and its semantic content. We provide an overview of the Kyoto tour dialogue corpus and a preliminary analysis using the DA tags. We also report the results of a preliminary experiment on SA tagging via Support Vector Machines (SVMs). We describe the current state of the corpus development. In addition, we discuss the use of our corpus in the spoken dialogue system that is being developed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,347
inproceedings | bal-saint-dizier-2010-towards | Towards Building Annotated Resources for Analyzing Opinions and Argumentation in News Editorials | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1465/ | Bal, Bal Krishna and Saint Dizier, Patrick | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an annotation scheme for argumentation in opinionated texts such as newspaper editorials, developed from a corpus of approximately 500 English texts from Nepali and international newspaper sources. We present the results of the analysis and evaluation of the corpus annotation: the inter-annotator agreement currently stands at a kappa value of 0.80, which indicates substantial agreement between the annotators. We also discuss some of the linguistic resources (key factors for distinguishing facts from opinions, an opinion lexicon, an intensifier lexicon, a pre-modifier lexicon, a modal verb lexicon, a reporting verb lexicon, general opinion patterns from the corpus, etc.) developed as a result of our corpus analysis, which can be used to identify an opinion or a controversial issue, arguments supporting an opinion, the orientation of the supporting arguments, and their strength (intrinsic, relative, and in terms of persuasion). These resources form the backbone of our work, especially for performing opinion analysis at the lower levels, i.e., the lexical and sentence levels. Finally, we discuss the perspectives of this work, clearly outlining the challenges. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,348
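A side note on the agreement figure in the entry above: a kappa of 0.80 between two annotators is Cohen's kappa. The following sketch is purely illustrative and not code from the paper; it assumes two parallel label sequences, and the fact/opinion labels are invented for the example.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators labelling the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: both annotators independently pick the same label.
    expected = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented fact/opinion annotations for ten editorial sentences.
ann_a = ["opinion", "fact", "opinion", "opinion", "fact",
         "fact", "opinion", "fact", "opinion", "fact"]
ann_b = ["opinion", "fact", "opinion", "fact", "fact",
         "fact", "opinion", "fact", "opinion", "fact"]
print(round(cohens_kappa(ann_a, ann_b), 2))  # 0.8 for this toy pair
```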
inproceedings | eshkol-etal-2010-eslo | {E}slo: From Transcription to Speakers' Personal Information Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1466/ | Eshkol, Iris and Maurel, Denis and Friburger, Nathalie | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the preliminary work to put a French oral corpus and its transcription online. The corpus is the Socio-Linguistic Survey in Orleans, carried out in 1968. First, we digitized the corpus and then transcribed it manually with the Transcriber software, adding different tags about speakers, time, noise, etc. Each document (the audio file and the XML file of the transcription) was described by a set of metadata stored in XML format to allow easy consultation. Second, we added different levels of annotation: recognition of named entities and annotation of personal information about speakers. These two annotation tasks used the CasSys system of transducer cascades. We used and modified a first cascade to recognize named entities. We then built a second cascade to annotate the designating entities, i.e. information about the speaker. This second cascade parses the named-entity-annotated corpus. The objective is to locate information about the speaker and also to determine what kind of information can designate him/her. Both cascades were evaluated with precision and recall measures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,349
inproceedings | wittenburg-etal-2010-resource | Resource and Service Centres as the Backbone for a Sustainable Service Infrastructure | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1467/ | Wittenburg, Peter and Bel, Nuria and Borin, Lars and Budin, Gerhard and Calzolari, Nicoletta and Hajicova, Eva and Koskenniemi, Kimmo and Lemnitzer, Lothar and Maegaard, Bente and Piasecki, Maciej and Pierrel, Jean-Marie and Piperidis, Stelios and Skadina, Inguna and Tufis, Dan and van Veenendaal, Remco and V{\'a}radi, Tamas and Wynne, Martin | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Currently, research infrastructures are being designed and established in many disciplines, since they all suffer from an enormous fragmentation of their resources and tools. In the domain of language resources and tools, the CLARIN initiative has been funded since 2008 to overcome many of the integration and interoperability hurdles. CLARIN can build on knowledge and work from many projects carried out during recent years and aims to build stable and robust services that can be used by researchers. Here, service centres that have the potential of being persistent and that adhere to the criteria established by CLARIN will play an important role. In the last year of the so-called preparatory phase, these centres are developing four use cases that demonstrate how the various pillars CLARIN has been working on can be integrated. All four use cases fulfil the criterion of being cross-national. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,350
inproceedings | spina-2010-dictionary | The Dictionary of {I}talian Collocations: Design and Integration in an Online Learning Environment | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1468/ | Spina, Stefania | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, I introduce the DICI, an electronic dictionary of Italian collocations designed to support the acquisition of the collocational competence in learners of Italian as a second or foreign language. I briefly describe the composition of the reference Italian corpus from which the collocations are extracted, and the methodology of extraction and filtering of candidate collocations. It is an experimental methodology, based on POS filtering, frequency and statistical measures, and tested on a 12-million-word sample from the reference corpus. Furthermore, I explain the main criteria for the composition of the dictionary, in addition to its integration with a Virtual Learning Environment (VLE), aimed at supporting learning activities on collocations. I briefly describe some of the main features of this integration with the VLE, such as the automatic recognition of collocations in written Italian texts, the possibility for students to obtain further linguistic information on selected collocations, and the automatic generation of tests for collocational competence assessment of language learners. While the main goal of the DICI is pedagogical, it is also intended to contribute to research in the field of collocations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,351 |
inproceedings | matsuyoshi-etal-2010-annotating | Annotating Event Mentions in Text with Modality, Focus, and Source Information | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1469/ | Matsuyoshi, Suguru and Eguchi, Megumi and Sao, Chitose and Murakami, Koji and Inui, Kentaro and Matsumoto, Yuji | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Many natural language processing tasks, including information extraction, question answering and recognizing textual entailment, require analysis of the polarity, focus of polarity, tense, aspect, mood and source of the event mentions in a text in addition to its predicate-argument structure analysis. We refer to modality, polarity and other associated information as extended modality. In this paper, we propose a new annotation scheme for representing the extended modality of event mentions in a sentence. Our extended modality consists of the following seven components: Source, Time, Conditional, Primary modality type, Actuality, Evaluation and Focus. We reviewed the literature about extended modality in Linguistics and Natural Language Processing (NLP) and defined appropriate labels of each component. In the proposed annotation scheme, information of extended modality of an event mention is summarized at the core predicate of the event mention for immediate use in NLP applications. We also report on the current progress of our manual annotation of a Japanese corpus of about 50,000 event mentions, showing a reasonably high ratio of inter-annotator agreement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,352 |
inproceedings | adugna-eisele-2010-english | {E}nglish {---} {O}romo Machine Translation: An Experiment Using a Statistical Approach | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1470/ | Adugna, Sisay and Eisele, Andreas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper deals with translation of English documents to Oromo using statistical methods. Whereas English is the lingua franca of online information, Oromo, despite its relative wide distribution within Ethiopia and neighbouring countries like Kenya and Somalia, is one of the most resource scarce languages. The paper has two main goals: one is to test how far we can go with the available limited parallel corpus for the English {\textemdash} Oromo language pair and the applicability of existing Statistical Machine Translation (SMT) systems on this language pair. The second goal is to analyze the output of the system with the objective of identifying the challenges that need to be tackled. Since the language is resource scarce as mentioned above, we cannot get as many parallel documents as we want for the experiment. However, using a limited corpus of 20,000 bilingual sentences and 163,000 monolingual sentences, translation accuracy in terms of BLEU Score of 17.74{\%} was achieved. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,353 |
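The BLEU score quoted in the entry above can be reproduced with standard tooling today. Here is a minimal sketch using the sacreBLEU library, which postdates the paper and is only an illustration; the sentences are invented stand-ins for English-Oromo output.

```python
import sacrebleu  # pip install sacrebleu

# Invented hypothesis/reference pairs; the actual experiment used a
# 20,000-sentence English-Oromo parallel corpus.
hypotheses = ["the cat sits on the mat", "he reads a book"]
references = [["the cat is sitting on the mat", "he is reading a book"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```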
inproceedings | fujii-2010-modeling | Modeling {W}ikipedia Articles to Enhance Encyclopedic Search | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1471/ | Fujii, Atsushi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. To integrate the advantages of both tools, we have been proposing methods for encyclopedic search targeting information on the Web and patent information. In this paper, we propose a method to categorize multiple expository texts for a single term based on viewpoints. Because viewpoints required for explanation are different depending on the type of a term, such as animals and diseases, it is difficult to manually produce a large scale system. We use Wikipedia to extract a prototype of a viewpoint structure for each term type. We also use articles in Wikipedia for a machine learning method, which categorizes a given text into an appropriate viewpoint. We evaluate the effectiveness of our method experimentally. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,354 |
inproceedings | hartung-frank-2010-semi | A Semi-supervised Type-based Classification of Adjectives: Distinguishing Properties and Relations | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1472/ | Hartung, Matthias and Frank, Anette | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a semi-supervised machine-learning approach for the classification of adjectives into property- vs. relation-denoting adjectives, a distinction that is highly relevant for ontology learning. The feasibility of this classification task is evaluated in a human annotation experiment. We observe that token-level annotation of these classes is expensive and difficult. Yet, a careful corpus analysis reveals that adjective classes tend to be stable, with few occurrences of class shifts observed at the token level. As a consequence, we opt for a type-based semi-supervised classification approach. The class labels obtained from manual annotation are projected to large amounts of unannotated token samples. Training on heuristically labeled data yields high classification performance on our own data and on a data set compiled from WordNet. Our results suggest that it is feasible to automatically distinguish adjectives denoting properties and relations, using small amounts of annotated data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,355 |
inproceedings | eisele-chen-2010-multiun | {M}ulti{UN}: A Multilingual Corpus from United Nation Documents | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1473/ | Eisele, Andreas and Chen, Yu | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the acquisition, preparation and properties of a corpus extracted from the official documents of the United Nations (UN). This corpus is available in all 6 official languages of the UN, consisting of around 300 million words per language. We describe the methods we used for crawling, document formatting, and sentence alignment. This corpus also includes a common test set for machine translation. We present the results of a French-Chinese machine translation experiment performed on this corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,356 |
inproceedings | rakho-constant-2010-evaluating | Evaluating the Impact of Some Linguistic Information on the Performances of a Similarity-based and Translation-oriented Word-Sense Disambiguation Method | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1474/ | Rakho, Myriam and Constant, Matthieu | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this article, we present an experiment on linguistic parameter tuning in the representation of the semantic space of polysemous words. We quantitatively evaluate the influence of some basic linguistic knowledge (lemmas, multi-word expressions, grammatical tags and syntactic relations) on the performance of a similarity-based Word-Sense Disambiguation method. The question we try to answer with this experiment is which kinds of linguistic knowledge are most useful for the semantic disambiguation of polysemous words in a multilingual framework. The experiment covers 20 French polysemous words (16 nouns and 4 verbs), and we make use of the French-English part of the sentence-aligned EuroParl Corpus for training and testing. Our results show a strong correlation between the system accuracy and the degree of precision of the linguistic features used, particularly the syntactic dependency relations. Furthermore, the lemma-based approach clearly outperforms the word-form-based approach. The best accuracy achieved by our system amounts to 90{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,357
inproceedings | bick-2010-frag | {F}r{AG}, a Hybrid Constraint Grammar Parser for {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1475/ | Bick, Eckhard | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes a hybrid system (FrAG) for tagging / parsing French text, and presents results from ongoing development work, corpus annotation and evaluation. The core of the system is a sentence scope Constraint Grammar (CG), with linguist-written rules. However, unlike traditional CG, the system uses hybrid techniques on both its morphological input side and its syntactic output side. Thus, FrAG draws on a pre-existing probabilistic Decision Tree Tagger (DTT) before and in parallel with its own lexical stage, and feeds its output into a Phrase Structure Grammar (PSG) that uses CG syntactic function tags rather than ordinary terminals in its rewriting rules. As an alternative architecture, dependency tree structures are also supported. In the newest version, dependencies are assigned within the CG-framework itself, and can interact with other rules. To provide semantic context, a semantic prototype ontology for nouns is used, covering a large part of the lexicon. In a recent test run on Parliamentary debate transcripts, FrAG achieved F-scores of 98.7 {\%} for part of speech (PoS) and between 93.1 {\%} and 96.2 {\%} for syntactic function tags. Dependency links were correct in 95.9 {\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,358 |
inproceedings | schulz-etal-2010-multilingual | Multilingual Corpus Development for Opinion Mining | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1476/ | Schulz, Julia Maria and Womser-Hacker, Christa and Mandl, Thomas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Opinion Mining is a discipline that has attracted some attention lately. Most of the research in this field has been done for English or Asian languages, due to the lack of resources in other languages. In this paper we describe an approach of building a manually annotated multilingual corpus for the domain of product reviews, which can be used as a basis for fine-grained opinion analysis also considering direct and indirect opinion targets. For each sentence in a review, the mentioned product features with their respective opinion polarity and strength on a scale from 0 to 3 are labelled manually by two annotators. The languages represented in the corpus are English, German and Spanish and the corpus consists of about 500 product reviews per language. After a short introduction and a description of related work, we illustrate the annotation process, including a description of the annotation methodology and the developed tool for the annotation process. Then first results on the inter-annotator agreement for opinions and product features are presented. We conclude the paper with an outlook on future work. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,359 |
inproceedings | broda-etal-2010-building | Building a Node of the Accessible Language Technology Infrastructure | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1477/ | Broda, Bartosz and Marci{\'n}czuk, Micha{\l} and Piasecki, Maciej | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | A limited prototype of the CLARIN Language Technology Infrastructure (LTI) node is presented. The node prototype provides several types of web services for Polish. The functionality encompasses morpho-syntactic processing, shallow semantic processing of corpora on the basis of the SuperMatrix system, and plWordNet browsing. We take the prototype as the starting point for the discussion of requirements that must be fulfilled by the LTI. Some possible solutions are proposed for less frequently discussed problems, e.g. streaming processing of language data on the remote processing node. We experimentally investigate how to tackle several of the many requirements discussed. Aspects such as processing large volumes of data, an asynchronous mode of processing, and the scalability of the architecture to a large number of users received special attention in the constructed prototype of the Web Service for morpho-syntactic processing of Polish, called TaKIPI-WS (\url{http://plwordnet.pwr.wroc.pl/clarin/ws/takipi/}). TaKIPI-WS is a distributed system with a three-layer architecture, an asynchronous model of request handling and multi-agent-based processing. TaKIPI-WS consists of three layers: WS Interface, Database and Daemons. The role of the Database is to store and exchange data between the Interface and the Daemons. The Daemons (i.e. taggers) are responsible for executing the requests queued in the database. Results of the performance tests are also presented in the paper. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,360
inproceedings | trojahn-etal-2010-api | An {API} for Multi-lingual Ontology Matching | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1478/ | Trojahn, C{\'a}ssia and Quaresma, Paulo and Vieira, Renata | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Ontology matching consists of generating a set of correspondences between the entities of two ontologies. This process is seen as a solution to data heterogeneity in ontology-based applications, enabling interoperability between them. However, existing matching systems are designed under the assumption that the entities of both source and target ontologies are written in the same language (English, for instance). Multi-lingual ontology matching is an open research issue. This paper describes an API for multi-lingual matching that implements two strategies, direct translation-based and indirect. The first strategy considers direct matching between two ontologies (i.e., without intermediary ontologies), with the help of external resources, i.e., translations. The indirect alignment strategy, proposed by Jung et al. (2009), is based on the composition of alignments. We evaluate these strategies using simple string-similarity-based matchers and three ontologies written in English, French, and Portuguese, an extension of the OAEI benchmark test 206. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,361
inproceedings | fritzsch-etal-2010-open | An Open Source Process Engine Framework for Realtime Pattern Recognition and Information Fusion Tasks | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1479/ | Fritzsch, Volker and Scherer, Stefan and Schwenker, Friedhelm | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The process engine for pattern recognition and information fusion tasks, the pepr framework, aims to empower the researcher to develop novel solutions in the field of pattern recognition and information fusion in a timely manner, by supporting the reuse and combination of well-tested and established components in an environment that eases the wiring of distinct algorithms and the description of the control flow through graphical tooling. The framework consists not only of the runtime environment, but also comes with several highly useful components that can be leveraged as a starting point for creating new solutions, as well as a graphical process builder that allows for the easy development of pattern recognition processes in a graphical, modeled manner. Additionally, considerable work has been invested in keeping the entry barrier for extending the framework as low as possible, enabling developers to add functionality to the framework in as little time as possible. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,362
inproceedings | aswani-gaizauskas-2010-english | {E}nglish-{H}indi Transliteration using Multiple Similarity Metrics | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1480/ | Aswani, Niraj and Gaizauskas, Robert | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present an approach to measure the transliteration similarity of English-Hindi word pairs. Our approach has two components. First, we propose a bi-directional mapping between one or more characters in the Devanagari script and one or more characters in the Roman script (pronounced as in English). This allows a given Hindi word written in Devanagari to be transliterated into the Roman script and vice-versa. Second, we present an algorithm for computing a similarity measure that is a variant of Dice`s coefficient measure and the LCSR measure and which also takes into account the constraints needed to match English-Hindi transliterated words. Finally, by evaluating various similarity metrics individually and together under a multiple measure agreement scenario, we show that it is possible to achieve a 0.92 f-measure in identifying English-Hindi word pairs that are transliterations. In order to assess the portability of our approach to other similar languages, we adapt our system to the Gujarati language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,363
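The similarity measure in the entry above combines a variant of Dice's coefficient with the LCSR measure. The sketch below shows only the plain, unconstrained versions of both measures on already-romanized strings; the paper's constraint handling and Devanagari-to-Roman mapping are not reproduced, and the example words are illustrative.

```python
def dice_bigram(s, t):
    """Plain bigram Dice coefficient (the paper uses a constrained variant)."""
    bi_s = {s[i:i + 2] for i in range(len(s) - 1)}
    bi_t = {t[i:i + 2] for i in range(len(t) - 1)}
    if not bi_s or not bi_t:
        return 0.0
    return 2 * len(bi_s & bi_t) / (len(bi_s) + len(bi_t))

def lcs_len(s, t):
    """Length of the longest common subsequence, by dynamic programming."""
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i, cs in enumerate(s, 1):
        for j, ct in enumerate(t, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if cs == ct else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def lcsr(s, t):
    """Longest common subsequence ratio: LCS length over the longer string."""
    return lcs_len(s, t) / max(len(s), len(t))

# A Hindi loanword romanized via some character mapping, vs. its English source.
print(dice_bigram("kampyutar", "computer"), lcsr("kampyutar", "computer"))
```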
inproceedings | agerri-garcia-serrano-2010-q | {Q}-{W}ord{N}et: Extracting Polarity from {W}ord{N}et Senses | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1481/ | Agerri, Rodrigo and Garc{\'i}a-Serrano, Ana | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents Q-WordNet, a lexical resource consisting of WordNet senses automatically annotated by positive and negative polarity. Polarity classification amounts to deciding whether a text (sense, sentence, etc.) may be associated with positive or negative connotations. Polarity classification is becoming important within the fields of Opinion Mining and Sentiment Analysis for determining opinions about commercial products, for company reputation management and brand monitoring, or to track attitudes by mining online forums, blogs, etc. Inspired by work on classification of word senses by polarity (e.g., SentiWordNet), and taking WordNet as a starting point, we build Q-WordNet. Instead of applying external tools such as supervised classifiers to annotate WordNet synsets by polarity, we try to effectively maximize the linguistic information contained in WordNet, thereby taking advantage of the human effort put in by lexicographers and annotators. The resulting resource is a subset of WordNet senses classified as positive or negative. In this approach, neutral polarity is seen as the absence of positive or negative polarity. The evaluation of Q-WordNet shows an improvement with respect to previous approaches. We believe that Q-WordNet can be used as a starting point for data-driven approaches in sentiment analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,364
inproceedings | nagasaka-etal-2010-utilizing | Utilizing Semantic Equivalence Classes of {J}apanese Functional Expressions in Translation Rule Acquisition from Parallel Patent Sentences | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1482/ | Nagasaka, Taiji and Shimanouchi, Ran and Sakamoto, Akiko and Suzuki, Takafumi and Morishita, Yohei and Utsuro, Takehito and Matsuyoshi, Suguru | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the ``Sandglass'' MT architecture, we identify the class of monosemous Japanese functional expressions and utilize it in the task of translating Japanese functional expressions into English. We employ the semantic equivalence classes of a recently compiled large scale hierarchical lexicon of Japanese functional expressions. We then study whether functional expressions within a class can be translated into a single canonical English expression. Based on the results of identifying monosemous semantic equivalence classes, this paper studies how to extract rules for translating functional expressions in Japanese patent documents into English. In this study, we use about 1.8M Japanese-English parallel sentences automatically extracted from Japanese-English patent families, which are distributed through the Patent Translation Task at the NTCIR-7 Workshop. Then, Moses, a toolkit for phrase-based SMT (Statistical Machine Translation), is applied, and Japanese-English translation pairs are obtained in the form of a phrase translation table. Finally, we extract translation pairs of Japanese functional expressions from the phrase translation table. Through this study, we found that most of the semantic equivalence classes judged as monosemous based on manual translation into English have only one translation rule even in the patent domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,365
inproceedings | mille-wanner-2010-syntactic | Syntactic Dependencies for Multilingual and Multilevel Corpus Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1483/ | Mille, Simon and Wanner, Leo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The relevance of syntactic dependency annotated corpora is nowadays unquestioned. However, a broad debate on the optimal set of dependency relation tags did not take place yet. As a result, largely varying tag sets of a largely varying size are used in different annotation initiatives. We propose a hierarchical dependency structure annotation schema that is more detailed and more flexible than the known annotation schemata. The schema allows us to choose the level of the desired detail of annotation, which facilitates the use of the schema for corpus annotation for different languages and for different NLP applications. Thanks to the inclusion of semantico-syntactic tags into the schema, we can annotate a corpus not only with syntactic dependency structures, but also with valency patterns as they are usually found in separate treebanks such as PropBank and NomBank. Semantico-syntactic tags and the level of detail of the schema furthermore facilitate the derivation of deep-syntactic and semantic annotations, leading to truly multilevel annotated dependency corpora. Such multilevel annotations can be readily used for the task of ML-based acquisition of grammar resources that map between the different levels of linguistic representation {\textemdash} something which forms part of, for instance, any natural language text generator. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,366 |
inproceedings | sato-2010-framesql | How {F}rame{SQL} Shows the {J}apanese {F}rame{N}et Data | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1484/ | Sato, Hiroaki | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | FrameSQL is a web-based application which the author (Sato, 2003; Sato 2008) created originally for searching the Berkeley FrameNet lexical database. FrameSQL can now handle the Japanese lexical database built by the Japanese FrameNet project (JFN) of Keio University in Japan. FrameSQL can search and view the JFN data released in March of 2009 on a standard web browser. Users do not need to install any additional software tools to use FrameSQL, nor do they even need to download the JFN data to their local computer, because FrameSQL accesses the database of the server computer and executes searches. FrameSQL not only shows a clear view of the headword`s grammar and combinatorial properties in the database, but also relates a Japanese word to its counterparts in English. FrameSQL puts together the Japanese and English lexical databases, and the user can access them seamlessly, as if they were a unified database. Mutual hyperlinks among these databases and the bilingual search mode make it easy to compare semantic structures of corresponding lexical units between these languages, and this could be useful for building multilingual lexical resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,367
inproceedings | nishikawa-etal-2010-context | A Context Sensitive Variant Dictionary for Supporting Variant Selection | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1485/ | Nishikawa, Aya and Nishimura, Ryo and Watanabe, Yasuhiko and Okada, Yoshihiro | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In Japanese, there are a large number of notational variants of words. This is because Japanese words are written in three kinds of characters: kanji (Chinese) characters, hiragana letters, and katakana letters. Japanese students study basic rules of Japanese writing in school for many years. However, it is difficult to learn which variant is suitable for a certain context in official, business, and technical documents because the rules have many exceptions. Previous Japanese writing support systems did not address these variants sufficiently, because their main purpose was misspelling detection. Students often use variants which are not misspellings but are unsuitable for the context in official, business, and technical documents. To solve this problem, we developed a context sensitive variant dictionary. A writing support system based on the context sensitive variant dictionary detects variants unsuitable for the contexts in students' reports and shows suitable ones to the students. In this study, we first show how to develop a context sensitive variant dictionary by which our system determines which variant is suitable for a context in official, business, and technical documents. Finally, we conduct a control experiment and show the effectiveness of our dictionary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,368
inproceedings | sagot-walther-2010-morphological | A Morphological Lexicon for the {P}ersian Language | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1486/ | Sagot, Beno{\^i}t and Walther, G{\'e}raldine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We introduce PerLex, a large-coverage and freely-available morphological lexicon for the Persian language. We describe the main features of the Persian morphology, and the way we have represented it within the Alexina formalism, on which PerLex is based. We focus on the methodology we used for constructing lexical entries from various sources, as well as the problems related to typographic normalisation. The resulting lexicon shows a satisfying coverage on a reference corpus and should therefore be a good starting point for developing a syntactic lexicon for the Persian language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,369 |
inproceedings | sagot-2010-lefff | The Lefff, a Freely Available and Large-coverage Morphological and Syntactic Lexicon for {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1487/ | Sagot, Beno{\^i}t | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we introduce the Lefff, a freely available, accurate and large-coverage morphological and syntactic lexicon for French, used in many NLP tools such as large-coverage parsers. We first describe Alexina, the lexical framework in which the Lefff is developed as well as the linguistic notions and formalisms it is based on. Next, we describe the various sources of lexical data we used for building the Lefff, in particular semi-automatic lexical development techniques and conversion and merging of existing resources. Finally, we illustrate the coverage and precision of the resource by comparing it with other resources and by assessing its impact in various NLP tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,370 |
inproceedings | cuadros-etal-2010-integrating | Integrating a Large Domain Ontology of Species into {W}ord{N}et | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1488/ | Cuadros, Montse and Laparra, Egoitz and Rigau, German and Vossen, Piek and Bosma, Wauter | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | With the proliferation of applications sharing information represented in multiple ontologies, the development of automatic methods for robust and accurate ontology matching will be crucial to their success. Connecting and merging already existing semantic networks is perhaps one of the most challenging tasks related to knowledge engineering. This paper presents a new approach for automatically aligning a very large domain ontology of Species to WordNet in the framework of the KYOTO project. The approach relies on the use of a knowledge-based Word Sense Disambiguation algorithm which accurately assigns WordNet synsets to the concepts represented in Species 2000. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,371
inproceedings | rouas-etal-2010-comparison | Comparison of Spectral Properties of Read, Prepared and Casual Speech in {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1489/ | Rouas, Jean-Luc and Beppu, Mayumi and Adda-Decker, Martine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we investigate the acoustic properties of phonemes in three speaking styles: read speech, prepared speech and spontaneous speech. Our aim is to better understand why speech recognition systems still fail to achieve good performance on spontaneous speech. This work follows the work of Nakamura et al. on Japanese speaking styles, with the difference that we focus here on French. Using Nakamura`s method, we use classical speech recognition features (MFCCs) and try to represent the effects of the speaking styles on the spectral space. Two measurements are defined in order to represent the spectral space reduction and the spectral variance extension. Experiments are then carried out to investigate whether we indeed find differences between the three speaking styles using these measurements. We finally compare our results to those obtained by Nakamura on Japanese to see if the same phenomenon appears. We do find some cues, and it also seems that phone duration plays an important role regarding spectral reduction, especially for spontaneous speech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,372
inproceedings | koeva-2010-lexicon | Lexicon and Grammar in {B}ulgarian {F}rame{N}et | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1490/ | Koeva, Svetla | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we report on our attempt at assigning semantic information from the English FrameNet to lexical units in the Bulgarian valency lexicon. The paper briefly presents the model underlying the Bulgarian FrameNet (BulFrameNet): each lexical entry consists of a lexical unit; a semantic frame from the English FrameNet, expressing abstract semantic structure; a grammatical class, defining the inflexional paradigm; a valency frame describing (some of) the syntactic and lexical-semantic combinatory properties (an optional component); and (semantically and syntactically) annotated examples. The target is a corpus-based lexicon giving an exhaustive account of the semantic and syntactic combinatory properties of an extensive number of Bulgarian lexical units. The Bulgarian FrameNet database so far contains unique descriptions of over 3 000 Bulgarian lexical units, approx. one tenth of them aligned with appropriate semantic frames, supports XML import and export and will be accessible, i.e., displayed and queried via the web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,373 |
inproceedings | spyns-dhalleweyn-2010-flemish | {F}lemish-{D}utch {HLT} Policy: Evolving to New Forms of Collaboration | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1491/ | Spyns, Peter and D{'}Halleweyn, Elisabeth | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the last decade, the Dutch Language Union has taken a serious interest in digital language resources and human language technologies (HLT), because they are crucial for a language to be able to survive in the information society. In this paper we report on the current state of the joint Flemish-Dutch efforts in the field of HLT for Dutch (HLTD) and how follow-up activities are being prepared. We explain the overall mechanism of evaluating an R{\&}D programme and the role of evaluation in the policy cycle to establish new R{\&}D funding activities. This is applied to the joint Flemish-Dutch STEVIN programme. Outcomes of the STEVIN scientific midterm review are shortly discussed as the overall final evaluation is currently still on-going. As part of preparing for future policy plans, an HLTD forecast is presented. Also new opportunities are outlined, in particular in the context of the European CLARIN infrastructure project that can lead to new avenues for joint Flemish-Dutch cooperation on HLTD. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,374 |
inproceedings | jezek-quochi-2010-capturing | Capturing Coercions in Texts: a First Annotation Exercise | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1492/ | Jezek, Elisabetta and Quochi, Valeria | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we report the first results of an annotation exercise of argument coercion phenomena performed on Italian texts. Our corpus consists of ca 4000 sentences from the PAROLE sottoinsieme corpus (Bindi et al. 2000) annotated with Selection and Coercion relations among verb-noun pairs formatted in XML according to the Generative Lexicon Mark-up Language (GLML) format (Pustejovsky et al., 2008). For the purposes of coercion annotation, we selected 26 Italian verbs that impose semantic typing on their arguments in either Subject, Direct Object or Complement position. Every sentence of the corpus is annotated with the source type for the noun arguments by two annotators plus a judge. An overall agreement of 0.87 kappa indicates that the annotation methodology is reliable. A qualitative analysis of the results allows us to outline some suggestions for improvement of the task: 1) a different account of complex types for nouns has to be devised and 2) a more comprehensive account of coercion mechanisms requires annotation of the deeper meaning dimensions that are targeted in coercion operations, such as those captured by Qualia relations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,375 |
inproceedings | jorg-etal-2010-lt | {LT} World: Ontology and Reference Information Portal | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1493/ | J{\"o}rg, Brigitte and Uszkoreit, Hans and Burt, Alastair | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | LT World (www.lt-world.org) is an ontology-driven web portal aimed at serving the global language technology community. Ontology-driven means that the system is driven by an ontological schema to manage the research information and knowledge life-cycles: identify relevant concepts of information, structure and formalize them, assign relationships, functions and views, add states and rules, modify them. For modelling such a complex structure, we employ (i) concepts from the research domain, such as person, organisation, project, tool, data, patent, news, event; (ii) concepts from the LT domain, such as technology and resource; (iii) concepts from closely related domains, such as language, linguistics, and mathematics. Whereas the research entities represent the general context, that is, a research environment as such, the LT entities define the information and knowledge space of the field, enhanced by entities from closely related areas. By managing information holistically {\textemdash} that is, within a research context {\textemdash} its inherent semantics becomes much more transparent. This paper introduces LT World as a reference information portal through ontological eyes: its content, its system, its method for maintaining knowledge-rich items, its ontology as an asset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,376
inproceedings | tadeu-etal-2010-extracting | Extracting Surface Realisation Templates from Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1494/ | Tadeu, Thiago D. and de Novais, Eder M. and Paraboni, Ivandr{\'e} | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In Natural Language Generation (NLG), template-based surface realisation is an effective solution to the problem of producing surface strings from a given semantic representation, but many applications may not be able to provide the input knowledge in the required level of detail, which in turn may limit the use of the available NLG resources. However, if we know in advance what the most likely output sentences are (e.g., because a corpus on the relevant application domain happens to be available), then corpus knowledge may be used to quickly deploy a surface realisation engine for small-scale applications, for which it may be sufficient to select a sentence (in natural language) that resembles the desired output, and then modify some or all of its constituents accordingly. In other words, the application may simply `point to' an existing sentence in the corpus and specify only the changes that need to take place to obtain the desired surface string. In this paper we describe one such approach to surface realisation, in which we extract syntactically-structured templates from a target corpus, and use these templates to produce existing and modified versions of the target sentences by a combination of canned text and basic dependency-tree operations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,377 |
inproceedings | bramantoro-etal-2010-towards | Towards an Integrated Architecture for Composite Language Services and Multiple Linguistic Processing Components | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1495/ | Bramantoro, Arif and Sch{\"a}fer, Ulrich and Ishida, Toru | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Web services are increasingly being used in the natural language processing community as a way to increase the interoperability amongst language resources. This paper extends our previous work on integrating two different platforms, i.e. Heart of Gold and Language Grid. The Language Grid is an infrastructure built on top of the Internet to provide distributed language services. Heart of Gold is known as a middleware architecture for integrating deep and shallow natural language processing components. The new feature of the integrated architecture is the combination of composite language services in the Language Grid and the multiple linguistic processing components in Heart of Gold to provide a better quality of language resources available on the Web. Thus, language resources with different characteristics can be combined based on the concept of service-oriented computing, with different treatment for each combination. Having Heart of Gold fully integrated into the Language Grid environment would contribute to the heterogeneity of language services. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,378
inproceedings | ekbal-saha-2010-maximum | Maximum Entropy Classifier Ensembling using Genetic Algorithm for {NER} in {B}engali | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1496/ | Ekbal, Asif and Saha, Sriparna | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we propose classifier ensemble selection for Named Entity Recognition (NER) as a single-objective optimization problem. Thereafter, we develop a method based on a genetic algorithm (GA) to solve this problem. Our underlying assumption is that rather than searching for the best feature set for a particular classifier, ensembling several classifiers trained using different feature representations could be a more fruitful approach. The Maximum Entropy (ME) framework is used to generate a number of classifiers by considering the various combinations of the available features. In the proposed approach, classifiers are encoded in the chromosomes. A single measure of classification quality, namely F-measure, is used as the objective function. Evaluation results on a resource-constrained language like Bengali yield recall, precision and F-measure values of 71.14{\%}, 84.07{\%} and 77.11{\%}, respectively. Experiments also show that the classifier ensemble identified by the proposed GA-based approach attains higher performance than all the individual classifiers and two different conventional baseline ensembles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,379
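To make the selection scheme in the preceding abstract concrete, here is a minimal, self-contained sketch (not the authors' implementation): chromosomes are bit masks over a pool of trained classifiers, and fitness is the F-measure of the resulting majority-vote ensemble. The pool size, labels, and data below are toy stand-ins.

    # Hypothetical GA-based classifier ensemble selection; all data is synthetic.
    import random

    random.seed(0)
    N_CLS, N_TOK = 8, 200
    gold = [random.choice("OPL") for _ in range(N_TOK)]        # toy NE tags
    preds = [[g if random.random() < 0.7 else random.choice("OPL")
              for g in gold] for _ in range(N_CLS)]            # toy classifier outputs

    def f_measure(mask):
        chosen = [p for p, m in zip(preds, mask) if m]
        if not chosen:
            return 0.0
        vote = [max(set(col), key=col.count) for col in zip(*chosen)]  # majority vote
        tp = sum(v == g != "O" for v, g in zip(vote, gold))
        prec = tp / max(sum(v != "O" for v in vote), 1)
        rec = tp / max(sum(g != "O" for g in gold), 1)
        return 2 * prec * rec / max(prec + rec, 1e-9)

    pop = [[random.randint(0, 1) for _ in range(N_CLS)] for _ in range(20)]
    for _ in range(30):                                        # generations
        pop.sort(key=f_measure, reverse=True)
        parents, children = pop[:10], []
        for _ in range(10):                                    # one-point crossover
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_CLS)
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                          # mutation: flip one bit
                i = random.randrange(N_CLS)
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=f_measure)
    print(best, round(f_measure(best), 3))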
inproceedings | belgacem-etal-2010-automatic | Automatic Identification of {A}rabic Dialects | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1497/ | Belgacem, Mohamed and Antoniadis, Georges and Besacier, Laurent | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this work, automatic recognition of Arabic dialects is proposed. An acoustic survey of the proportion of vocalic intervals and the standard deviation of consonantal intervals in nine dialects (Tunisia, Morocco, Algeria, Egypt, Syria, Lebanon, Yemen, Gulf Countries and Iraq) is performed using the platform Alize and Gaussian Mixture Models (GMM). The results show the complexity of automatically identifying Arabic dialects: no clear border can be found between the dialects, only a gradual transition between them. They can even vary slightly from one city to another. The existence of this gradual change is easy to understand: it corresponds to a human and social reality, to the contacts, friendships and affinities forged in the individual's more or less immediate environment. This document also raises questions about the classes or macro-classes of Arabic dialects observed in the confusion matrix and in the hierarchical tree obtained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,380
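The identification setup above lends itself to a short sketch: one Gaussian Mixture Model per dialect over the two rhythm features named in the abstract (proportion of vocalic intervals, standard deviation of consonantal intervals), with the most likely model winning. This is an illustrative stand-in for the Alize-based system; the dialect names, means, and data are toy values.

    # Per-dialect GMMs scored by average log-likelihood; toy features throughout.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    dialects = {"tunisian": (0.42, 0.050), "egyptian": (0.45, 0.055)}  # toy (%V, deltaC)
    models = {}
    for name, (v, dc) in dialects.items():
        feats = rng.normal([v, dc], [0.02, 0.005], size=(200, 2))      # toy training set
        models[name] = GaussianMixture(n_components=4, random_state=0).fit(feats)

    def identify(utterance_feats):
        # pick the dialect whose GMM assigns the highest average log-likelihood
        return max(models, key=lambda d: models[d].score(utterance_feats))

    test = rng.normal([0.45, 0.055], [0.02, 0.005], size=(30, 2))
    print(identify(test))  # 'egyptian' on this toy data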
inproceedings | pammi-etal-2010-multilingual | Multilingual Voice Creation Toolkit for the {MARY} {TTS} Platform | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1498/ | Pammi, Sathish and Charfuelan, Marcela and Schr{\"o}der, Marc | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an open source voice creation toolkit that supports the creation of unit selection and HMM-based voices, for the MARY (Modular Architecture for Research on speech Synthesis) TTS platform. We aim to provide the tools and generic reusable run-time system modules so that people interested in supporting a new language and creating new voices for MARY TTS can do so. The toolkit has been successfully applied to the creation of British English, Turkish, Telugu and Mandarin Chinese language components and voices. These languages are now supported by MARY TTS as well as German and US English. The toolkit can be easily employed to create voices in the languages already supported by MARY TTS. The voice creation toolkit is mainly intended to be used by research groups on speech technology throughout the world, notably those who do not have their own pre-existing technology yet. We try to provide them with a reusable technology that lowers the entrance barrier for them, making it easier to get started. The toolkit is developed in Java and includes an intuitive Graphical User Interface (GUI) for most of the common tasks in the creation of a synthetic voice. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,381
inproceedings | osenova-etal-2010-exploring | Exploring Co-Reference Chains for Concept Annotation of Domain Texts | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1499/ | Osenova, Petya and Laskova, Laska and Simov, Kiril | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The paper explores co-reference chains as a way of improving the density of concept annotation over domain texts. The idea extends the authors' previous work on relating the ontology to text terms in two domains {\textemdash} IT and textile; here the IT domain is used. The challenge is to enhance relations among concepts instead of text entities, the latter being the focus of most works. Our ultimate goal is to exploit these additional chains for concept disambiguation as well as sparseness resolution at concept level. First, a gold standard was prepared with manually connected links among concepts, anaphoric pronouns and contextual equivalents. This step was necessary not only for test purposes, but also for better orientation in the co-referent types and distribution. Then, two automatic systems were tested on the gold standard. Note that these systems were not designed specially for concept chaining. The conclusion is that state-of-the-art co-reference resolution systems might address the concept sparseness problem, but not so much the concept disambiguation task. For the latter, word-sense disambiguation systems have to be integrated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,382
inproceedings | spreyer-etal-2010-training | Training Parsers on Partial Trees: A Cross-language Comparison | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1500/ | Spreyer, Kathrin and {\O}vrelid, Lilja and Kuhn, Jonas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a study that compares data-driven dependency parsers obtained by means of annotation projection between language pairs of varying structural similarity. We show how the partial dependency trees projected from English to Dutch, Italian and German can be exploited to train parsers for the target languages. We evaluate the parsers against manual gold standard annotations and find that the projected parsers substantially outperform our heuristic baselines by 9{\textemdash}25{\%} UAS, which corresponds to a 21{\textemdash}43{\%} reduction in error rate. A comparative error analysis focuses on how the projected target language parsers handle subjects, which is especially interesting for Italian as an instance of a pro-drop language. For Dutch, we further present experiments with German as an alternative source language. In both source languages, we contrast standard baseline parsers with parsers that are enhanced with the predictions from large-scale LFG grammars through a technique of parser stacking, and show that improvements of the source language parser can directly lead to similar improvements of the projected target language parser. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,383 |
inproceedings | menke-mehler-2010-ariadne | The Ariadne System: A Flexible and Extensible Framework for the Modeling and Storage of Experimental Data in the Humanities. | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1501/ | Menke, Peter and Mehler, Alexander | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | During the last decades, interdisciplinarity has become a central keyword in research. As a consequence, many concepts, theories and scientific methods come into contact with each other, resulting in many different strategies and variants of acquiring, structuring, and sharing data sets. To handle these kinds of data sets, this paper introduces the Ariadne Corpus Management System, which allows researchers to manage and create multimodal corpora from multiple heterogeneous data sources. After an introductory demarcation from other annotation and corpus management tools, the underlying data model is presented, which enables users to represent and process heterogeneous data sets within a single, consistent framework. Secondly, a set of automated procedures is described that offers assistance to researchers in various data-related use cases. Thirdly, an approach to easy yet powerful data retrieval is introduced in the form of a specialised querying language for multimodal data. Finally, the web-based graphical user interface and its advantages are illustrated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,384
inproceedings | giordani-moschitti-2010-corpora | Corpora for Automatically Learning to Map Natural Language Questions into {SQL} Queries | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1502/ | Giordani, Alessandra and Moschitti, Alessandro | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Automatically translating natural language into machine-readable instructions is one of the most interesting and challenging tasks in Natural Language (NL) Processing. This problem can be addressed by using machine learning algorithms to generate a function that finds mappings between natural language and programming language semantics. For this purpose suitable annotated and structured data are required. In this paper, we describe our method to construct and semi-automatically annotate these kinds of data, consisting of pairs of NL questions and SQL queries. Additionally, we describe two different datasets obtained by applying our annotation method to two well-known corpora, GeoQueries and RestQueries. Since we believe that syntactic levels are important, we also generate and make available relational pairs represented by means of their syntactic trees whose lexical content has been generalized. We validate the quality of our corpora by experimenting with them and our machine learning models to derive automatic NL/SQL translators. Our promising results suggest that our corpora can be effectively used to carry out research in the field of natural language interfaces to databases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,385
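A tiny retrieval baseline illustrates how such NL/SQL pairs can be put to work even without learning a full translator: answer a new question with the SQL of the most lexically similar training question. The two GeoQueries-style pairs below are hypothetical examples, not corpus entries.

    # Nearest-question lookup over toy NL/SQL pairs via token overlap.
    pairs = [
        ("what is the capital of texas",
         "SELECT capital FROM state WHERE name = 'texas'"),
        ("what rivers run through colorado",
         "SELECT name FROM river WHERE traverse = 'colorado'"),
    ]

    def nearest_sql(question):
        q = set(question.lower().split())
        nl, sql = max(pairs, key=lambda p: len(q & set(p[0].split())))
        return sql

    print(nearest_sql("which rivers run through colorado"))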
inproceedings | ishikawa-etal-2010-detection | Detection of submitters suspected of pretending to be someone else in a community site | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1503/ | Ishikawa, Naoki and Nishimura, Ryo and Watanabe, Yasuhiko and Okada, Yoshihiro and Murata, Masaki | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | One of the essential factors in community sites is anonymous submission. This is because anonymity gives users chances to submit messages (questions, problems, answers, opinions, etc.) without concern for shame or reputation. However, some users abuse the anonymity and disrupt communications in a community site. These users and their submissions discourage other users, keep them from retrieving good communication records, and decrease the credibility of the community site. To solve this problem, we conducted an experimental study to detect submitters suspected of pretending to be someone else in order to manipulate communications in a community site, using machine learning techniques. In this study, we used messages from the Yahoo! chiebukuro data for training and testing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,386
inproceedings | fritzinger-etal-2010-survey | A Survey of Idiomatic Preposition-Noun-Verb Triples on Token Level | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1504/ | Fritzinger, Fabienne and Weller, Marion and Heid, Ulrich | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Most of the research on the extraction of idiomatic multiword expressions (MWEs) focused on the acquisition of MWE types. In the present work we investigate whether a text instance of a potentially idiomatic MWE is actually used idiomatically in a given context or not. Inspired by the dataset provided by Cook et al. (2008), we manually analysed 9,700 instances of potentially idiomatic preposition-noun-verb triples (a frequent pattern among German MWEs) to identify, on token level, idiomatic vs. literal uses. In our dataset, all sentences are provided along with their morpho-syntactic properties. We describe our data extraction and annotation steps, and we discuss quantitative results from both EUROPARL and a German newspaper corpus. We discuss the relationship between idiomaticity and morpho-syntactic fixedness, and we address issues of ambiguity between literal and idiomatic use of MWEs. Our data show that EUROPARL is particularly well suited for MWE extraction, as most MWEs in this corpus are indeed used only idiomatically. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,387
inproceedings | cer-etal-2010-parsing | Parsing to {S}tanford Dependencies: Trade-offs between Speed and Accuracy | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1505/ | Cer, Daniel and de Marneffe, Marie-Catherine and Jurafsky, Dan and Manning, Chris | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We investigate a number of approaches to generating Stanford Dependencies, a widely used semantically-oriented dependency representation. We examine algorithms specifically designed for dependency parsing (Nivre, Nivre Eager, Covington, Eisner, and RelEx) as well as dependencies extracted from constituent parse trees created by phrase structure parsers (Charniak, Charniak-Johnson, Bikel, Berkeley and Stanford). We found that constituent parsers systematically outperform algorithms designed specifically for dependency parsing. The most accurate method for generating dependencies is the Charniak-Johnson reranking parser, with 89{\%} (labeled) attachment F1 score. The fastest methods are Nivre, Nivre Eager, and Covington, used with a linear classifier to make local parsing decisions, which can parse the entire Penn Treebank development set (section 22) in less than 10 seconds on an Intel Xeon E5520. However, this speed comes with a substantial drop in F1 score (about 76{\%} for labeled attachment) compared to competing methods. By tuning how much of the search space is explored by the Charniak-Johnson parser, we are able to arrive at a balanced configuration that is both fast and nearly as good as the most accurate approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,388 |
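The headline numbers in the entry above rest on labeled attachment F1, which can be computed directly once each analysis is reduced to (head, dependent, label) triples; a minimal sketch with toy dependencies:

    # Labeled attachment F1 over sets of (head, dependent, label) triples.
    def attachment_f1(system, gold):
        sys_set, gold_set = set(system), set(gold)
        if not sys_set or not gold_set:
            return 0.0
        tp = len(sys_set & gold_set)
        prec, rec = tp / len(sys_set), tp / len(gold_set)
        return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

    gold = {("saw", "dog", "dobj"), ("saw", "I", "nsubj")}
    system = {("saw", "dog", "dobj"), ("dog", "I", "nsubj")}  # one wrong head
    print(round(attachment_f1(system, gold), 2))              # 0.5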
inproceedings | reyes-etal-2010-evaluating | Evaluating Humour Features on Web Comments | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1506/ | Reyes, Antonio and Potthast, Martin and Rosso, Paolo and Stein, Benno | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Research on automatic humor recognition has developed several features which discriminate funny text from ordinary text. The features have been demonstrated to work well when classifying the funniness of single sentences up to entire blogs. In this paper we focus on evaluating a set of the best humor features reported in the literature over a corpus retrieved from the Slashdot Web site. The corpus is categorized in a community-driven process according to the following tags: funny, informative, insightful, offtopic, flamebait, interesting and troll. These kinds of comments can be found on almost every large Web site; therefore, they pose a new challenge for humor retrieval since they come along with unique characteristics compared to other text types. If funny comments were retrieved accurately, they would be of great entertainment value for the visitors of a given Web page. Our objective, thus, is to distinguish an implicitly funny comment from a non-funny one. Our experiments are preliminary but nonetheless large-scale: 600,000 Web comments. We evaluate the classification accuracy of naive Bayes classifiers, decision trees, and support vector machines. The results suggest interesting findings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,389
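The classifier comparison above boils down to a standard text-categorization pipeline; a hedged sketch with a bag-of-words naive Bayes model (the decision trees and support vector machines from the study slot into the same pipeline), using invented Slashdot-style comments:

    # Toy funny-vs-informative comment classifier; texts and labels are invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    comments = ["so funny I forgot to laugh", "kernel 2.6 released today",
                "my code works and I have no idea why", "security patch announced"]
    labels = ["funny", "informative", "funny", "informative"]

    clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
    clf.fit(comments, labels)
    print(clf.predict(["new driver release announced"]))  # toy prediction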
inproceedings | kawahara-kurohashi-2010-acquiring | Acquiring Reliable Predicate-argument Structures from Raw Corpora for Case Frame Compilation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1507/ | Kawahara, Daisuke and Kurohashi, Sadao | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a method for acquiring reliable predicate-argument structures from raw corpora for automatic compilation of case frames. Such lexicon compilation requires highly reliable predicate-argument structures to practically contribute to Natural Language Processing (NLP) applications, such as paraphrasing, text entailment, and machine translation. However, to precisely identify predicate-argument structures, case frames are required. This issue is similar to the question ``what came first: the chicken or the egg?'' In this paper, we propose the first step in the extraction of reliable predicate-argument structures without using case frames. We first apply chunking to raw corpora and then extract reliable chunks to ensure that high-quality predicate-argument structures are obtained from the chunks. We conducted experiments to confirm the effectiveness of our approach. We successfully extracted reliable chunks with an accuracy of 98{\%} and high-quality predicate-argument structures with an accuracy of 97{\%}. Our experiments confirmed that we succeeded in acquiring highly reliable predicate-argument structures that can be used to compile case frames. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,390
inproceedings | giovannetti-2010-unsupervised | An Unsupervised Approach for Semantic Relation Interpretation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1508/ | Giovannetti, Emiliano | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this work we propose a hybrid unsupervised approach for semantic relation extraction from Italian and English texts. The system takes as input pairs of ``distributionally similar'' terms, possibly involved in a semantic relation. To validate and label the anonymous relations holding between the terms in input, the candidate pairs of terms are looked for on the Web in the context of reliable lexico-syntactic patterns. This paper focuses on the definition of the patterns, on the measures used to assess the reliability of the suggested specific semantic relation and on the evaluation of the implemented system. So far, the system is able to extract the following types of semantic relations: hyponymy, meronymy, and co-hyponymy. The approach can however be easily extended to manage other relations by defining the appropriate battery of reliable lexico-syntactic patterns. Accuracy of the system was measured with scores of 83.3{\%} for hyponymy, 75{\%} for meronymy and 72.2{\%} for co-hyponymy extraction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,391
inproceedings | kwong-2010-constructing | Constructing an Annotated Story Corpus: Some Observations and Issues | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1509/ | Kwong, Oi Yee | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper discusses our ongoing work on constructing an annotated corpus of children's stories for further studies on the linguistic, computational, and cognitive aspects of story structure and understanding. Given its semantic nature and the need for extensive common sense and world knowledge, story understanding has been a notoriously difficult topic in natural language processing. In particular, the notion of story structure for maintaining coherence has received much attention, while its strong version in the form of story grammar has triggered much debate. The relation between discourse coherence and the interestingness, or the point, of a story has not been satisfactorily settled. Introspective analysis on story comprehension has led to some important observations, based on which we propose a preliminary annotation scheme covering the structural, functional, and emotional aspects connecting discourse segments in stories. The annotation process will shed light on how story structure interacts with story point via various linguistic devices, and the annotated corpus is expected to be a useful resource for computational discourse processing, especially for studying various issues regarding the interface between coherence and interestingness of stories. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,392
inproceedings | vanopstal-etal-2010-towards | Towards a Learning Approach for Abbreviation Detection and Resolution. | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1510/ | Vanopstal, Klaar and Desmet, Bart and Hoste, V{\'e}ronique | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The explosion of biomedical literature and with it the {\textemdash}uncontrolled{\textemdash} creation of abbreviations presents some special challenges for both human readers and computer applications. We developed an annotated corpus of Dutch medical text, and experimented with two approaches to abbreviation detection and resolution. Our corpus is composed of abstracts from two medical journals from the Low Countries in which approximately 65 percent (NTvG) and 48 percent (TvG) of the abbreviations have a corresponding full form in the abstract. Our first approach, a pattern-based system, consists of two steps: abbreviation detection and definition matching. This system achieves an average F-score of 0.82 for the detection of both defined and undefined abbreviations; an average F-score of 0.77 was obtained for the definitions. For our second approach, an SVM-based classifier was used on the preprocessed data sets, leading to an average F-score of 0.93 for the abbreviations; for the definitions an average F-score of 0.82 was obtained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,393
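The pattern-based first step described above can be approximated with a Schwartz-Hearst-style heuristic: find "long form (SHORT)" spans and check the initials of the preceding words against the short form. This mirrors the general idea, not the authors' exact rules; the Dutch example is invented.

    # Heuristic abbreviation-definition matcher (illustrative, not the paper's rules).
    import re

    def find_abbreviations(text):
        out = []
        for m in re.finditer(r"([\w -]{3,60})\s\((\w{2,10})\)", text):
            words, short = m.group(1).split(), m.group(2)
            cand = words[-len(short):]                 # try one word per letter
            if [w[0].lower() for w in cand] == list(short.lower()):
                out.append((short, " ".join(cand)))
        return out

    print(find_abbreviations("chronisch obstructief longlijden (COL) ..."))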
inproceedings | magro-2010-cordial | When {CORDIAL} Becomes Friendly: Endowing the {CORDIAL} Corpus with a Syntactic Annotation Layer | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1511/ | Magro, Catarina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper reports on the syntactic annotation of a previously compiled and tagged corpus of European Portuguese (EP) dialects {\textemdash} The Syntax-oriented Corpus of Portuguese Dialects (CORDIAL-SIN). The parsed version of CORDIAL-SIN is intended to be a more efficient resource for the purpose of studying dialect syntax by allowing automated searches for various syntactic constructions of interest. To achieve this goal we adopted a rich annotation system (the UPenn corpora annotation system) which codifies syntactic information of high relevance. The annotation produces tree representations, in the form of labelled parentheses, that are integrally searchable with CorpusSearch, a search engine for parsed corpora (Randall, 2005-2007). The present paper focuses on CORDIAL-SIN annotation issues, namely, it presents the general principles and guidelines of the adopted annotation system and describes the methodology for constructing the parsed version of the corpus and for searching it (tools and procedures). The last section addresses the question of how an annotation system originally designed for Middle English can be adapted to meet the particular needs of a Portuguese corpus of dialectal speech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,394
inproceedings | gupta-etal-2010-partial | Partial Parsing as a Method to Expedite Dependency Annotation of a {H}indi Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1512/ | Gupta, Mridul and Yadav, Vineet and Husain, Samar and Sharma, Dipti Misra | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The paper describes an approach to expedite the process of manual annotation of a Hindi dependency treebank which is currently under development. We propose a way by which consistency among a set of manual annotators could be improved. Furthermore, we show that our setup can also prove useful for evaluating when an inexperienced annotator is ready to start participating in the production of the treebank. We test our approach on sample sets of data obtained from ongoing work on the creation of this treebank. Results supporting our proposal are reported in this paper. We report results from a semi-automated dependency annotation experiment. We measure the rate of agreement between annotators using Cohen's Kappa. We also compare results with respect to the total time taken to annotate sample data-sets using a completely manual approach as opposed to a semi-automated approach. It is observed from the results that this semi-automated approach, when carried out with experienced and trained human annotators, improves the overall quality of treebank annotation and also speeds up the process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,395
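For reference, the agreement statistic named above, Cohen's Kappa, corrects observed agreement for the agreement expected from each annotator's label distribution; a compact sketch with toy label sequences:

    # Cohen's kappa for two annotators over the same items.
    from collections import Counter

    def cohens_kappa(ann1, ann2):
        n = len(ann1)
        po = sum(a == b for a, b in zip(ann1, ann2)) / n       # observed agreement
        c1, c2 = Counter(ann1), Counter(ann2)
        pe = sum(c1[lab] * c2[lab] for lab in c1) / (n * n)    # chance agreement
        return (po - pe) / (1 - pe)

    a = ["nsubj", "obj", "obj", "nmod", "nsubj", "obj"]
    b = ["nsubj", "obj", "nmod", "nmod", "nsubj", "obj"]
    print(round(cohens_kappa(a, b), 2))                        # 0.75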
inproceedings | verhagen-2010-brandeis | The {B}randeis Annotation Tool | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1513/ | Verhagen, Marc | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Brandeis Annotation Tool (BAT) is a web-based text annotation tool that is centered around the notions of layered annotation and task decomposition. It allows annotations to refer to other annotations and to take a complicated task and split it into easier subtasks. The central organizing concept of BAT is the annotation layer. A corpus administrator can create annotation layers that involve annotation of extents, attributes or relations. The layer definition includes the labels used, the attributes that are available and restrictions on the values for those attributes. For each annotation layer, files can be assigned to one or more annotators and one judge. When annotators log in, the assigned layers and files therein are presented. When selecting a file to annotate, the interface uses the layer definition to display the annotation interface. The web-interface connects administrators and annotators to a central repository for all data and simplifies many of the housekeeping tasks while keeping requirements at a minimum (that is, users only need an internet connection and a well-behaved browser). BAT has been used mainly for temporal annotation, but can be considered a more general tool for several kinds of textual annotation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,396 |
inproceedings | luengo-etal-2010-modified | Modified {LTSE}-{VAD} Algorithm for Applications Requiring Reduced Silence Frame Misclassification | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1514/ | Luengo, Iker and Navas, Eva and Odriozola, Igor and Saratxaga, Ibon and Hernaez, Inmaculada and Sainz, I{\~n}aki and Erro, Daniel | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The LTSE-VAD is one of the best known algorithms for voice activity detection. In this paper we present a modified version of this algorithm that makes the VAD decision based not on the estimated background noise level but on the signal-to-noise ratio (SNR). This makes the algorithm robust not only to noise level changes, but also to signal level changes. We compare the modified algorithm with the original one, and with three other standard VAD systems. The results show that the modified version achieves the lowest silence misclassification rate, while maintaining a reasonably low speech misclassification rate. As a result, this algorithm is more suitable for identification tasks, such as speaker or emotion recognition, where silence misclassification can be very harmful. A series of automatic emotion identification experiments is also carried out, proving that the modified version of the algorithm helps increase the correct emotion classification rate. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,397
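A hedged sketch of the core modification above, assuming a simple frame-power stand-in for the long-term spectral envelope: the speech/silence call is driven by each frame's SNR against a running noise floor, so a fixed threshold works regardless of the absolute signal level.

    # SNR-thresholded VAD decision; noise floor is updated on silence frames only.
    import numpy as np

    def vad_snr(frame_power, snr_threshold_db=6.0, alpha=0.98):
        noise = frame_power[0]                     # init noise floor from first frame
        decisions = []
        for p in frame_power:
            snr_db = 10 * np.log10(max(p, 1e-12) / max(noise, 1e-12))
            speech = snr_db > snr_threshold_db
            if not speech:                         # track the noise during silence
                noise = alpha * noise + (1 - alpha) * p
            decisions.append(speech)
        return decisions

    power = np.concatenate([np.full(50, 1e-4), np.full(50, 1e-2), np.full(50, 1e-4)])
    print(sum(vad_snr(power)))                     # 50 frames flagged as speech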
inproceedings | ide-etal-2010-anc2go | {ANC}2{G}o: A Web Application for Customized Corpus Creation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1515/ | Ide, Nancy and Suderman, Keith and Simms, Brian | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a web application called ANC2Go that enables the user to select data from the Open American National Corpus (OANC) and the Manually Annotated Sub-corpus (MASC) together with some or all of the annotations available. The user may also select from among a variety of options for the output format, or may receive the selected portions of the corpus and annotations in their original GrAF XML standoff format. The request is processed by merging the annotations selected and rendering them in the desired output format, then bundling the results and making them available for download. Thus, users can create a customized corpus with data and annotations of their choosing, delivered in the format that is most convenient for their use. ANC2Go will be released as a web service in the near future. Both the OANC and MASC are freely available for any use from the American National Corpus website and may be accessed through the ANC2Go application, or they may be downloaded in their entirety. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,398
inproceedings | kozawa-etal-2010-collection | Collection of Usage Information for Language Resources from Academic Articles | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1516/ | Kozawa, Shunsuke and Tohyama, Hitomi and Uchimoto, Kiyotaka and Matsubara, Shigeki | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Recently, language resources (LRs) have become indispensable for linguistic research. However, existing LRs are often not fully utilized because their variety of usage is not well known, indicating that their intrinsic value is not recognized very well either. Regarding this issue, lists of usage information might improve LR searches and lead to their efficient use. In this research, therefore, we collect a list of usage information for each LR from academic articles to promote the efficient utilization of LRs. This paper proposes to construct a text corpus annotated with usage information (UI corpus). In particular, we automatically extract sentences containing LR names from academic articles. Then, the extracted sentences are annotated with usage information by two annotators in a cascaded manner. We show that the UI corpus contributes to efficient LR searches by combining the UI corpus with a metadata database of LRs and comparing the number of LRs retrieved with and without the UI corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,399
inproceedings | rizzolo-roth-2010-learning | Learning Based {J}ava for Rapid Development of {NLP} Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1517/ | Rizzolo, Nick and Roth, Dan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Today`s natural language processing systems are growing more complex with the need to incorporate a wider range of language resources and more sophisticated statistical methods. In many cases, it is necessary to learn a component with input that includes the predictions of other learned components or to assign simultaneously the values that would be assigned by multiple components with an expressive, data dependent structure among them. As a result, the design of systems with multiple learning components is inevitably quite technically complex, and implementations of conceptually simple NLP systems can be time consuming and prone to error. Our new modeling language, Learning Based Java (LBJ), facilitates the rapid development of systems that learn and perform inference. LBJ has already been used to build state of the art NLP systems. In this paper, we first demonstrate that there exists a theoretical model that describes most NLP approaches adeptly. Second, we show how our improvements to the LBJ language enable the programmer to describe the theoretical model succinctly. Finally, we introduce the concept of data driven compilation, a translation process in which the efficiency of the generated code benefits from the data given as input to the learning algorithms. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,400 |
inproceedings | vivaldi-rodriguez-2010-finding | Finding Domain Terms using {W}ikipedia | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1518/ | Vivaldi, Jorge and Rodr{\'i}guez, Horacio | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we present a new approach for obtaining the terminology of a given domain using the category and page structures of Wikipedia in a language-independent way. Our approach consists basically, for each domain, of navigating the category graph of Wikipedia starting from the root nodes associated with the domain. A heavy filtering mechanism is employed to prevent as much as possible the inclusion of spurious categories. For each selected category all the pages belonging to it are then recovered and filtered. This procedure is iterated several times until convergence is achieved. Both category names and page names are considered candidates to belong to the terminology of the domain. This approach has been applied to three broad-coverage domains: astronomy, chemistry and medicine, and two languages, English and Spanish, showing a promising performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,401
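The traversal above is easy to picture: a breadth-first walk of the category graph from a domain root, with a filter dropping suspicious categories, collecting category and page names as term candidates. The subgraph and the filter below are toy stand-ins for the paper's heavy filtering mechanism.

    # Bounded BFS over a toy category graph; category -> (subcategories, pages).
    from collections import deque

    category_graph = {
        "Astronomy": (["Galaxies", "Astronomers"], ["Telescope"]),
        "Galaxies": ([], ["Spiral galaxy", "Milky Way"]),
        "Astronomers": ([], ["Galileo Galilei"]),
    }

    def looks_spurious(category):
        # stand-in for the real filter (e.g., dropping people categories)
        return category.endswith("ers")

    def harvest_terms(root, max_depth=3):
        terms, seen, queue = set(), {root}, deque([root])
        for _ in range(max_depth):
            for _ in range(len(queue)):            # one graph level per iteration
                cat = queue.popleft()
                subcats, pages = category_graph.get(cat, ([], []))
                terms.add(cat)
                terms.update(pages)
                for sub in subcats:
                    if sub not in seen and not looks_spurious(sub):
                        seen.add(sub)
                        queue.append(sub)
        return terms

    print(sorted(harvest_terms("Astronomy")))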
inproceedings | brierley-atwell-2010-proposec | {P}ro{POSEC}: A Prosody and {P}o{S} Annotated Spoken {E}nglish Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1519/ | Brierley, Claire and Atwell, Eric | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We have previously reported on ProPOSEL, a purpose-built Prosody and PoS English Lexicon compatible with the Python Natural Language ToolKit. ProPOSEC is a new corpus research resource built using this lexicon, intended for distribution with the Aix-MARSEC dataset. ProPOSEC comprises multi-level parallel annotations, juxtaposing prosodic and syntactic information from different versions of the Spoken English Corpus, with canonical dictionary forms, in a query format optimized for Perl, Python, and text processing programs. The order and content of fields in the text file is as follows: (1) Aix-MARSEC file number; (2) word; (3) LOB PoS-tag; (4) C5 PoS-tag; (5) Aix SAM-PA phonetic transcription; (6) SAM-PA phonetic transcription from ProPOSEL; (7) syllable count; (8) lexical stress pattern; (9) default content or function word tag; (10) DISC stressed and syllabified phonetic transcription; (11) alternative DISC representation, incorporating lexical stress pattern; (12) nested arrays of phonemes and tonic stress marks from Aix. As an experimental dataset, ProPOSEC can be used to study correlations between these annotation tiers, where significant findings are then expressed as additional features for phrasing models integral to Text-to-Speech and Speech Recognition. As a training set, ProPOSEC can be used for machine learning tasks in Information Retrieval and Speech Understanding systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,402 |
inproceedings | ramos-etal-2010-towards | Towards a Motivated Annotation Schema of Collocation Errors in Learner Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1520/ | Ramos, Margarita Alonso and Wanner, Leo and Vincze, Orsolya and del Bosque, Gerard Casamayor and Veiga, Nancy V{\'a}zquez and Su{\'a}rez, Estela Mosqueira and Gonz{\'a}lez, Sabela Prieto | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Collocations play a significant role in second language acquisition. In order to be able to offer efficient support to learners, an NLP-based CALL environment for learning collocations should be based on a representative collocation error annotated learner corpus. However, so far, no theoretically-motivated collocation error tag set is available. Existing learner corpora tag collocation errors simply as lexical errors {\textemdash} which is clearly insufficient given the wide range of different collocation errors that the learners make. In this paper, we present a fine-grained three-dimensional typology of collocation errors that has been derived in an empirical study from the learner corpus CEDEL2 compiled by a team at the Autonomous University of Madrid. The first dimension captures whether the error concerns the collocation as a whole or one of its elements; the second dimension captures the language-oriented error analysis, while the third exemplifies the interpretative error analysis. To facilitate a smooth annotation along this typology, we adapted Knowtator, a flexible off-the-shelf annotation tool implemented as a Prot{\'e}g{\'e} plugin. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,403 |
inproceedings | lo-wu-2010-evaluating | Evaluating Machine Translation Utility via Semantic Role Labels | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1521/ | Lo, Chi-kiu and Wu, Dekai | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present the methodology that underlies new metrics for semantic machine translation evaluation that we are developing. Unlike widely-used lexical and n-gram based MT evaluation metrics, the aim of semantic MT evaluation is to measure the utility of translations. We discuss the design of empirical studies to evaluate the utility of machine translation output by assessing the accuracy for key semantic roles. These roles are from the English 5W templates (who, what, when, where, why) used in recent GALE distillation evaluations. Recent work by Wu and Fung (2009) introduced semantic role labeling into statistical machine translation to enhance the quality of MT output. However, this approach has so far only been evaluated using lexical and n-gram based SMT evaluation metrics like BLEU, which are not aimed at evaluating the utility of MT output. Direct data analysis is still needed to understand how semantic models can be leveraged to evaluate the utility of MT output. In this paper, we discuss a new methodology for evaluating the utility of machine translation output, by assessing the accuracy with which human readers are able to complete the English 5W templates. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,404
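The scoring idea above reduces to comparing 5W role fillers between MT output and a reference; a minimal sketch, assuming exact-match fillers (real studies would credit partial or paraphrased matches):

    # Accuracy over the 5W roles filled in the reference; toy fillers throughout.
    def role_accuracy(mt_roles, ref_roles):
        keys = ["who", "what", "when", "where", "why"]
        filled = [k for k in keys if ref_roles.get(k)]
        if not filled:
            return 0.0
        correct = sum(mt_roles.get(k) == ref_roles.get(k) for k in filled)
        return correct / len(filled)

    ref = {"who": "the minister", "what": "signed the decree", "when": "on Monday"}
    mt = {"who": "the minister", "what": "signed the decree", "when": "Monday"}
    print(round(role_accuracy(mt, ref), 2))  # 0.67: the 'when' filler mismatches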
inproceedings | chen-eisele-2010-integrating | Integrating a Rule-based with a Hierarchical Translation System | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1522/ | Chen, Yu and Eisele, Andreas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Recent developments on hybrid systems that combine rule-based machine translation (RBMT) systems with statistical machine translation (SMT) generally neglect the fact that RBMT systems tend to produce more syntactically well-formed translations than data-driven systems. This paper proposes a method that alleviates this issue by preserving more useful structures produced by RBMT systems and utilizing them in a SMT system that operates on hierarchical structures instead of flat phrases alone. For our experiments, we use Joshua as the decoder. It is the first attempt towards a tighter integration of MT systems from different paradigms that both support hierarchical analysis. Preliminary results show consistent improvements over the previous approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,405 |
inproceedings | poesio-etal-2010-creating | Creating a Coreference Resolution System for {I}talian | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1523/ | Poesio, Massimo and Uryupina, Olga and Versley, Yannick | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper summarizes our work on creating a full-scale coreference resolution (CR) system for Italian, using BART {\textemdash} an open-source modular CR toolkit initially developed for English corpora. We discuss our experiments on language-specific issues of the task. As our evaluation experiments show, a language-agnostic system (designed primarily for English) can achieve a performance level in the high forties (MUC F-score) when re-trained and tested on a new language, at least on gold mention boundaries. Compared to this level, we can improve our F-score by around 10{\%} by introducing a small number of language-specific changes. This shows that, with a modular coreference resolution platform, such as BART, one can straightforwardly develop a family of robust and reliable systems for various languages. We hope that our experiments will encourage researchers working on coreference in other languages to create their own full-scale coreference resolution systems {\textemdash} as we have mentioned above, at the moment such modules exist only for very few languages other than English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,406
inproceedings | bojar-etal-2010-data | Data Issues in {E}nglish-to-{H}indi Machine Translation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1524/ | Bojar, Ond{\v{r}}ej and Stra{\v{n}}{\'a}k, Pavel and Zeman, Daniel | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Statistical machine translation to morphologically richer languages is a challenging task and more so if the source and target languages differ in word order. Current state-of-the-art MT systems thus deliver mediocre results. Adding more parallel data often helps improve the results; if it doesn`t, it may be caused by various problems such as different domains, bad alignment or noise in the new data. In this paper we evaluate the English-to-Hindi MT task from this data perspective. We discuss several available parallel data sources and provide cross-evaluation results on their combinations using two freely available statistical MT systems. We demonstrate various problems encountered in the data and describe automatic methods of data cleaning and normalization. We also show that the contents of two independently distributed data sets can unexpectedly overlap, which negatively affects translation quality. Together with the error analysis, we also present a new tool for viewing aligned corpora, which makes it easier to detect difficult parts in the data even for a developer not speaking the target language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,407 |
inproceedings | vanallemeersch-2010-belgisch | Belgisch Staatsblad Corpus: Retrieving {F}rench-{D}utch Sentences from Official Documents | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1525/ | Vanallemeersch, Tom | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe the compilation of a large corpus of French-Dutch sentence pairs from official Belgian documents which are available in the online version of the publication Belgisch Staatsblad/Moniteur belge, and which have been published between 1997 and 2006. After downloading files in batch, we filtered out documents which have no translation in the other language, documents which contain several languages (by checking for discriminating words), and pairs of documents with a substantial difference in length. We segmented the documents into sentences and aligned the latter, which resulted in 5 million sentence pairs (only one-to-one links were included in the parallel corpus); there are 2.4 million unique pairs. Sample-based evaluation of the sentence alignment results indicates a near 100{\%} accuracy, which can be explained by the text genre, the procedure filtering out weakly parallel articles and the restriction to one-to-one links. The corpus is larger than a number of well-known French-Dutch resources. It is made available to the community. Further investigation is needed in order to determine the original language in which documents were written. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,408
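The length-based filter mentioned above is simple to state: drop document pairs whose token counts differ by more than a fixed ratio before alignment. A minimal sketch; the threshold and example sentences are illustrative, not the ones used for the corpus.

    # Length-ratio filter for candidate French-Dutch document or sentence pairs.
    def plausible_pair(fr_text, nl_text, max_ratio=1.5):
        a, b = len(fr_text.split()), len(nl_text.split())
        return min(a, b) > 0 and max(a, b) / min(a, b) <= max_ratio

    fr = "Le ministre a signe l'arrete royal le 3 mai."
    nl = "De minister heeft het koninklijk besluit op 3 mei ondertekend."
    print(plausible_pair(fr, nl))  # True: comparable lengths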
inproceedings | zikanova-etal-2010-typical | Typical Cases of Annotators' Disagreement in Discourse Annotations in {P}rague Dependency Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1526/ | Zik{\'a}nov{\'a}, {\v{S}}{\'a}rka and Mladov{\'a}, Lucie and M{\'i}rovsk{\'y}, Ji{\v{r}}{\'i} and J{\'i}nov{\'a}, Pavl{\'i}na | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present the first results of the parallel Czech discourse annotation in the Prague Dependency Treebank 2.0. Having established an annotation scenario for capturing semantic relations crossing the sentence boundary in a discourse, and having annotated the first sections of the treebank according to these guidelines, we now report on the results of the first evaluation of these manual annotations. We give an overview of the process of the annotation itself, which we believe is to a large degree language-independent and therefore accessible to any discourse researcher. Next, we describe the inter-annotator agreement measurement, and, most importantly, we classify and analyze the most common types of annotators' disagreement and propose solutions for the next phase of the annotation. The annotation is carried out on dependency trees (on the tectogrammatical layer); this approach is quite novel and brings us some advantages when interpreting the syntactic structure of the discourse units. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,409
inproceedings | fu-etal-2010-determining | Determining the Origin and Structure of Person Names | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1527/ | Fu, Yu and Xu, Feiyu and Uszkoreit, Hans | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents HENNA (Hybrid Person Name Analyzer), a novel system for identifying the language origin and analyzing the linguistic structures of person names. We apply ME-based classification methods to language origin identification and achieve very promising performance. We show that word-internal character sequences provide surprisingly strong evidence for predicting the language origin of person names. Our approach is context-, language- and domain-independent and can thus be easily adapted to person names in or from other languages. Furthermore, we provide a novel strategy for handling origin ambiguities or multiple origins in a name. HENNA also provides a person name parser for the analysis of the linguistic and knowledge structures of person names. All the knowledge about a person name in HENNA is modelled in a person-name ontology, including relationships between language origins, linguistic features and grammars of person names of a specific language, and the interpretation of name elements. The approaches presented here are useful extensions of the named entity recognition task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,410
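The ME-based origin identification above relies on word-internal character sequences. A minimal sketch, assuming scikit-learn is available: logistic regression (the standard maximum-entropy classifier) over character n-gram features. The six training names and origin labels are purely illustrative, not the paper's data.

```python
# Illustrative maximum-entropy origin classifier over character n-grams.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names = ["Schneider", "Hoffmann", "Rossi", "Esposito", "Dupont", "Lefevre"]
origins = ["de", "de", "it", "it", "fr", "fr"]  # toy origin labels

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # char n-grams
    LogisticRegression(max_iter=1000),  # logistic regression == maxent
)
clf.fit(names, origins)
print(clf.predict(["Bianchi"]))  # character evidence suggests Italian
```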
inproceedings | riester-etal-2010-recursive | A Recursive Annotation Scheme for Referential Information Status | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1528/ | Riester, Arndt and Lorenz, David and Seemann, Nina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We provide a robust and detailed annotation scheme for information status, which is easy to use, follows a semantic rather than cognitive motivation, and achieves reasonable inter-annotator scores. Our annotation scheme is based on two main assumptions: firstly, that information status strongly depends on (in)definiteness, and secondly, that it ought to be understood as a property of referents rather than words. Therefore, our scheme banks on overt (in)definiteness marking and provides different categories for each class. Definites are grouped according to the information source by which the referent is identified. A special aspect of the scheme is that non-anaphoric expressions (e.g. names) are classified as to whether their referents are likely to be known or unknown to an expected audience. The annotation scheme provides a solution for annotating complex nominal expressions which may recursively contain embedded expressions. In annotating a corpus of German radio news bulletins, a kappa score of .66 for the full scheme was achieved; a core scheme of six top-level categories yields kappa = .78. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,411
inproceedings | mousser-2010-large | A Large Coverage Verb Taxonomy for {A}rabic | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1529/ | Mousser, Jaouad | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this article I present a lexicon for Arabic verbs which exploits Levin's verb classes (Levin, 1993) and the basic development procedure used by Schuler (2005). The verb lexicon in its current state has 173 classes containing 4392 verbs and 498 frames, providing information about the verb root, the deverbal form of the verb, the participle, thematic roles, subcategorisation frames, and syntactic and semantic descriptions of each verb. The taxonomy is available in XML format. It can be ported to MySQL, YAML or JSON and accessed either in Arabic characters or in the Buckwalter transliteration. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,412
inproceedings | spilkova-etal-2010-kachna | The Kachna {L}1/{L}2 Picture Replication Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1530/ | Spilkov{\'a}, Helena and Brenner, Daniel and {\"O}ttl, Anton and Vond{\v{r}}i{\v{c}}ka, Pavel and van Dommelen, Wim and Ernestus, Mirjam | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the Kachna corpus of spontaneous speech, in which ten Czech and ten Norwegian speakers were recorded both in their native language and in English. The dialogues are elicited using a picture replication task that requires active cooperation and interaction of speakers by asking them to produce a drawing as close to the original as possible. The corpus is appropriate for the study of interactional features and speech reduction phenomena across native and second languages. The combination of productions in non-native English and in the speakers' native language is advantageous for the investigation of L2 issues while providing an L1 behaviour reference from all the speakers. The corpus consists of 20 dialogues comprising 12 hours 53 minutes of recording, and was collected in 2008. Preparation of the transcriptions, including a manual orthographic transcription and an automatically generated phonetic transcription, is currently in progress. The phonetic transcriptions are automatically generated by aligning acoustic models with the speech signal on the basis of the orthographic transcriptions and a dictionary of pronunciation variants compiled for the relevant language. Upon completion, the corpus will be made available via the European Language Resources Association (ELRA). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,413
inproceedings | baccianella-etal-2010-sentiwordnet | {S}enti{W}ord{N}et 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1531/ | Baccianella, Stefano and Esuli, Andrea and Sebastiani, Fabrizio | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this work we present SENTIWORDNET 3.0, a lexical resource explicitly devised for supporting sentiment classification and opinion mining applications. SENTIWORDNET 3.0 is an improved version of SENTIWORDNET 1.0, a lexical resource publicly available for research purposes, now currently licensed to more than 300 research groups and used in a variety of research projects worldwide. Both SENTIWORDNET 1.0 and 3.0 are the result of automatically annotating all WORDNET synsets according to their degrees of positivity, negativity, and neutrality. SENTIWORDNET 1.0 and 3.0 differ (a) in the versions of WORDNET which they annotate (WORDNET 2.0 and 3.0, respectively), (b) in the algorithm used for automatically annotating WORDNET, which now includes (additionally to the previous semi-supervised learning step) a random-walk step for refining the scores. We here discuss SENTIWORDNET 3.0, especially focussing on the improvements concerning aspect (b) that it embodies with respect to version 1.0. We also report the results of evaluating SENTIWORDNET 3.0 against a fragment of WORDNET 3.0 manually annotated for positivity, negativity, and neutrality; these results indicate accuracy improvements of about 20{\%} with respect to SENTIWORDNET 1.0. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,414 |
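SENTIWORDNET 3.0 is distributed with NLTK, so the per-synset scores described above can be inspected directly. A minimal sketch, assuming the `wordnet` and `sentiwordnet` NLTK data packages have already been downloaded:

```python
from nltk.corpus import sentiwordnet as swn

# Each SentiSynset carries positivity, negativity and objectivity scores;
# the three values sum to 1 for every synset.
for s in swn.senti_synsets("happy", "a"):
    print(s.synset.name(), s.pos_score(), s.neg_score(), s.obj_score())
```

Note that these are scores per synset, not per word: a polysemous word can have strongly positive senses alongside neutral ones, which is exactly why the resource annotates WORDNET synsets rather than lemmas.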
inproceedings | tirilly-etal-2010-news | News Image Annotation on a Large Parallel Text-image Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1532/ | Tirilly, Pierre and Claveau, Vincent and Gros, Patrick | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present a multimodal parallel text-image corpus, and propose an image annotation method that exploits the textual information associated with images. Our corpus contains news articles composed of a text, images and image captions, and is significantly larger than the other news corpora proposed in image annotation papers (27,041 articles and 42,568 captioned images). In our experiments, we use the text of the articles as a textual information source to annotate images, and image captions as a ground truth to evaluate our annotation algorithm. Our annotation method identifies relevant named entities in the texts, and associates them with high-level visual concepts detected in the images (in this paper, faces and logos). The named entities most suited to image annotation are selected using an unsupervised score based on their statistics, inspired by the weights used in information retrieval. Our experiments show that, although it is very simple, our annotation method achieves an acceptable accuracy on our real-world news corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,415
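The unsupervised entity score above is described only as inspired by information retrieval weights. The sketch below uses a tf-idf-style weight as an illustrative stand-in for the paper's exact score; the entity names and frequency counts are invented, while 27,041 is the corpus size the abstract reports.

```python
import math
from collections import Counter

def entity_scores(doc_entities, entity_doc_freq, n_docs):
    """Score a document's entities: frequent locally, rare across the corpus."""
    tf = Counter(doc_entities)
    return {e: tf[e] * math.log(n_docs / (1 + entity_doc_freq.get(e, 0)))
            for e in tf}

# Invented entity mentions from one article, plus corpus document frequencies.
doc = ["Obama", "Obama", "Merkel"]
df = {"Obama": 500, "Merkel": 40}
print(entity_scores(doc, df, n_docs=27041))
```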
inproceedings | de-cao-etal-2010-extensive | Extensive Evaluation of a {F}rame{N}et-{W}ord{N}et mapping resource | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1533/ | De Cao, Diego and Croce, Danilo and Basili, Roberto | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Lexical resources are basic components of many text processing systems devoted to information extraction, question answering or dialogue. In past years, many resources have been developed, such as FrameNet and WordNet. FrameNet describes prototypical situations (i.e. Frames) while WordNet defines lexical meaning (senses) for the majority of English nouns, verbs, adjectives and adverbs. A major difference between FrameNet and WordNet concerns their coverage. Due to this lack of coverage, in recent years several approaches have been studied to build a bridge between these two resources, so that one resource can be used to extend the coverage of the other. These approaches range from unsupervised to supervised methods. The major problem is that there is no standard for evaluating the mapping: each work has tested its own approach against a custom gold standard. This work gives an extensive evaluation of the model proposed in De Cao et al. (2008), using the gold standards proposed in other works. Moreover, this work gives an empirical comparison with other available resources. As an outcome of this work, we also release the full mapping resource built according to the model proposed in De Cao et al. (2008). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,416
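The gold-standard evaluation described above reduces, in its simplest form, to comparing predicted synset assignments against manually mapped pairs. A minimal sketch under that reading; the lexical-unit and synset identifiers are hypothetical.

```python
# Accuracy of a predicted FrameNet-lexical-unit -> WordNet-synset mapping
# against a manually built gold standard (identifiers are hypothetical).
def mapping_accuracy(predicted: dict, gold: dict) -> float:
    """Fraction of gold lexical units whose predicted synset matches."""
    hits = sum(predicted.get(lu) == syn for lu, syn in gold.items())
    return hits / len(gold)

gold = {"abandon.v": "abandon.v.01", "leave.v": "leave.v.02"}
pred = {"abandon.v": "abandon.v.01", "leave.v": "leave.v.05"}
print(mapping_accuracy(pred, gold))  # 0.5
```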