Dataset schema as reported by the dataset viewer (one row per bibliography entry; nullable fields were marked with ⌀):

- `entry_type`: string, 4 distinct values
- `citation_key`: string, 10–110 characters
- `title`: string, 6–276 characters, nullable
- `editor`: string, 723 distinct values
- `month`: string, 69 distinct values
- `year`: string date, ranging 1963-01-01 to 2022-01-01
- `address`: string, 202 distinct values
- `publisher`: string, 41 distinct values
- `url`: string, 34–62 characters
- `author`: string, 6–2.07k characters, nullable
- `booktitle`: string, 861 distinct values
- `pages`: string, 1–12 characters, nullable
- `abstract`: string, 302–2.4k characters
- `journal`: string, 5 distinct values
- `volume`: string, 24 distinct values
- `doi`: string, 20–39 characters, nullable
- `n`: string, 3 distinct values
- `wer`: string, 1 distinct value
- `uas`: always null
- `language`: string, 3 distinct values
- `isbn`: string, 34 distinct values
- `recall`: always null
- `number`: string, 8 distinct values
- `a`, `b`, `c`, `k`: always null
- `f1`: string, 4 distinct values
- `r`: string, 2 distinct values
- `mci`: string, 1 distinct value
- `p`: string, 2 distinct values
- `sd`: string, 1 distinct value
- `female`: string, 0 distinct values
- `m`: string, 0 distinct values
- `food`: string, 1 distinct value
- `f`: string, 1 distinct value
- `note`: string, 20 distinct values
- `__index_level_0__`: int64, ranging from 22k to 106k

All rows in this excerpt are `inproceedings` entries from a single volume and share the following values:

- `editor`: Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel
- `month`: may
- `year`: 2010
- `address`: Valletta, Malta
- `publisher`: European Language Resources Association (ELRA)
- `booktitle`: Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

All remaining fields (`pages`, `journal`, `volume`, `doi`, `isbn`, `note`, and the metric columns) are null for these rows, so the table below keeps only the columns that vary.
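For working with the full dataset rather than this excerpt, the schema above suggests the usual `datasets` loading pattern. A minimal sketch, not the card's own code: the repository id `your-org/nlp-bibliography` is a placeholder, and the exact string serialization of `year` is an assumption.

```python
from datasets import load_dataset

# Placeholder repository id: substitute the actual dataset path.
ds = load_dataset("your-org/nlp-bibliography", split="train")

# The features object mirrors the schema listed above.
print(ds.features)

# Select the LREC 2010 rows shown in the table below; whether `year`
# is stored as "2010" or as a full timestamp is assumed, so we match
# by prefix and guard against nulls.
lrec10 = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and (row["year"] or "").startswith("2010")
    and "LREC" in (row["booktitle"] or "")
)
print(len(lrec10), lrec10[0]["citation_key"])
```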
citation_key | title | author | url | abstract | __index_level_0__
---|---|---|---|---|---
schuller-etal-2010-cinemo | CINEMO - A French Spoken Language Resource for Complex Emotions: Facts and Baselines | Schuller, Björn and Zaccarelli, Riccardo and Rollet, Nicolas and Devillers, Laurence | https://aclanthology.org/L10-1334/ | The CINEMO corpus of French emotional speech provides a richly annotated resource to help overcome the apparent lack of learning and testing speech material for complex, i.e. blended or mixed, emotions. The protocol for its collection was dubbing selected emotional scenes from French movies. It contains 51 speakers, and the total speech time amounts to 2 hours and 13 minutes, or 4k speech chunks after segmentation. Extensive labelling was carried out in 16 categories for major and minor emotions and in 6 continuous dimensions. In this contribution we give insight into the corpus statistics, focusing in particular on the topic of complex emotions, and provide benchmark recognition results obtained in exemplary large feature space evaluations. The labelling of the collected speech clearly demonstrates that a complex handling of emotion is needed, and the automatic recognition experiments provide evidence that the automatic recognition of blended emotions appears to be feasible. | 79,217
romano-cutugno-2010-new | New Features in Spoken Language Search Hawk (SpLaSH): Query Language and Query Sequence | Romano, Sara and Cutugno, Francesco | https://aclanthology.org/L10-1335/ | In this work we present further developments of the SpLaSH (Spoken Language Search Hawk) project. SpLaSH implements a data model for annotated speech corpora integrated with textual markup (i.e. POS tagging, syntax, pragmatics), including a toolkit used to perform complex queries across speech and text labels. The integration of time-aligned annotations (TMA), represented using Annotation Graphs, with text-aligned ones (TXA), stored in generic XML files, is provided by a data structure, the Connector Frame, acting as a table lookup linking temporal data to words in the text. SpLaSH imposes a very limited number of constraints on the data model design, allowing the integration of annotations developed separately within the same dataset and without any relative dependency. It also provides a GUI allowing three types of queries: simple query on TXA or TMA structures, sequence query on the TMA structure, and cross query on both TXA and TMA integrated structures. In this work new SpLaSH features are presented: the SpLaSH Query Language (SpLaSHQL) and Query Sequence. | 79,218
mouton-etal-2010-framenet | FrameNet Translation Using Bilingual Dictionaries with Evaluation on the English-French Pair | Mouton, Claire and de Chalendar, Gaël and Richert, Benoit | https://aclanthology.org/L10-1336/ | Semantic Role Labeling cannot be performed without an associated linguistic resource. A key resource for such a task is FrameNet, based on Fillmore's theory of frame semantics. Like many linguistic resources, FrameNet has been built by English native speakers for the English language. To overcome the lack of such resources in other languages, we propose a new approach to FrameNet translation that uses bilingual dictionaries and filters out wrong translations. We define six filtering scores based on translation redundancy and FrameNet structure. We also present our work on enriching the obtained resource with nouns. This enrichment uses semantic spaces built on syntactic dependencies and a multi-represented k-NN classifier. We evaluate both tasks for French on a subset of ten frames and show improved results compared to the existing French FrameNet. Our final resource contains 15,132 lexical unit-frame associations with an estimated precision of 86%. | 79,219
mirovsky-etal-2010-annotation | Annotation Tool for Extended Textual Coreference and Bridging Anaphora | Mírovský, Jiří and Pajas, Petr and Nedoluzhko, Anna | https://aclanthology.org/L10-1337/ | We present an annotation tool for extended textual coreference and bridging anaphora in the Prague Dependency Treebank 2.0 (PDT 2.0). After very briefly describing the annotation scheme, we focus on details of the annotation process from the technical point of view. We present the ways the tool helps the annotators through several useful features, such as the possibility to combine surface and deep syntactic representations of sentences during the annotation, automatic maintenance of the coreferential chain, underlining of candidates for antecedents, etc. For studying differences among parallel annotations, the tool offers simultaneous depiction of several annotations of the same data. The annotation tool can be used for other corpora too, as long as they have been transformed to the PML format. We present modifications of the tool for working with coreference relations on other layers of language description, namely the analytical layer and the morphological layer of PDT. | 79,220
abad-etal-2010-resource | A Resource for Investigating the Impact of Anaphora and Coreference on Inference | Abad, Azad and Bentivogli, Luisa and Dagan, Ido and Giampiccolo, Danilo and Mirkin, Shachar and Pianta, Emanuele and Stern, Asher | https://aclanthology.org/L10-1338/ | Discourse phenomena play a major role in text processing tasks. However, so far relatively little study has been devoted to the relevance of discourse phenomena for inference. Therefore, an experimental study was carried out to assess the relevance of anaphora and coreference for Textual Entailment (TE), a prominent inference framework. First, the annotation of anaphoric and coreferential links in the RTE-5 Search data set was performed according to a specifically designed annotation scheme. As a result, a new data set was created where all anaphora and coreference instances in the entailing sentences which are relevant to the entailment judgment are resolved and annotated. A by-product of the annotation is a new augmented data set, where all the referring expressions which need to be resolved in the entailing sentences are replaced by explicit expressions. Starting from the final output of the annotation, the actual impact of discourse phenomena on inference engines was investigated, identifying the kinds of operations that systems need to apply to address discourse phenomena and trying to find direct mappings between these operations and annotation types. | 79,221
remus-etal-2010-sentiws | SentiWS - A Publicly Available German-language Resource for Sentiment Analysis | Remus, Robert and Quasthoff, Uwe and Heyer, Gerhard | https://aclanthology.org/L10-1339/ | SentimentWortschatz, or SentiWS for short, is a publicly available German-language resource for sentiment analysis, opinion mining, etc. It lists positive and negative sentiment-bearing words weighted within the interval [-1, 1], plus their part-of-speech tag and, if applicable, their inflections. The current version of SentiWS (v1.8b) contains 1,650 negative and 1,818 positive words, which sum up to 16,406 positive and 16,328 negative word forms, respectively. It contains not only adjectives and adverbs explicitly expressing a sentiment, but also nouns and verbs implicitly containing one. The present work describes the resource's structure and the three sources utilised to assemble it, as well as the semi-supervised method incorporated to weight the strength of its entries. Furthermore, the resource's contents are extensively evaluated using a German-language evaluation set we constructed. The evaluation set is verified to be reliable, and it is shown that SentiWS provides a beneficial lexical resource for German-language sentiment analysis tasks to build on. | 79,222
panicheva-etal-2010-personal | Personal Sense and Idiolect: Combining Authorship Attribution and Opinion Analysis | Panicheva, Polina and Cardiff, John and Rosso, Paolo | https://aclanthology.org/L10-1340/ | Subjectivity analysis and authorship attribution are very popular areas of research. However, work in these two areas has been done separately. We believe that by combining information about subjectivity in texts and authorship, the performance of both tasks can be improved. In this paper a personalized approach to opinion mining is presented, in which the notions of personal sense and idiolect are introduced; the approach is applied to the polarity classification task. It is assumed that different authors express their private states in text individually, and that opinion mining results could be improved by analyzing texts by different authors separately. The hypothesis is tested on a corpus of movie reviews by ten authors. The results of applying the personalized approach to opinion mining are presented, confirming that the approach increases the performance of the opinion mining task. Automatic authorship attribution is further applied to model the personalized approach, classifying documents by their assumed authorship. Although automatic authorship classification imposes a number of limitations on the dataset for further experiments, after overcoming these issues the authorship attribution technique modeling the personalized approach confirms the improvement over a baseline that uses no authorship information. | 79,223
goldhahn-quasthoff-2010-automatic | Automatic Annotation of Co-Occurrence Relations | Goldhahn, Dirk and Quasthoff, Uwe | https://aclanthology.org/L10-1341/ | We introduce a method for automatically labelling the edges of word co-occurrence graphs with semantic relations, making use only of training data already contained within the graph. The starting point of this work is a graph based on word co-occurrence in the German language, created by applying iterated co-occurrence analysis. The edges of the graph have been partially annotated by hand with semantic relationships. In our approach we make use of the commonly appearing network motif of three words forming a triangular pattern. We assume that the fully annotated occurrences of these structures contain information useful for our purpose. Based on these patterns, rules for reasoning are learned. The obtained rules are then combined using Dempster-Shafer theory to infer new semantic relations between words. Iteration of the annotation process is possible to increase the number of obtained relations. By applying the described process the graph can be enriched with semantic information at high precision. | 79,224
jakob-etal-2010-mapping | Mapping between Dependency Structures and Compositional Semantic Representations | Jakob, Max and Lopatková, Markéta and Kordoni, Valia | https://aclanthology.org/L10-1342/ | This paper investigates the mapping between two semantic formalisms, namely the tectogrammatical layer of the Prague Dependency Treebank 2.0 (PDT) and (Robust) Minimal Recursion Semantics ((R)MRS). It is a first attempt to relate the dependency-based annotation scheme of PDT to a compositional semantics approach like (R)MRS. A mapping algorithm that converts PDT trees to (R)MRS structures is developed, associating (R)MRSs with each node on the dependency tree. Furthermore, composition rules are formulated and the relation between dependency in PDT and semantic heads in (R)MRS is analyzed. It turns out that structure and dependencies, morphological categories and some coreferences can be preserved in the target structures. Moreover, valency and free modifications are distinguished using the valency dictionary of PDT as an additional resource. The validation results show that systematically correct underspecified target representations can be obtained by a rule-based mapping approach, which is an indicator that (R)MRS is indeed robust with respect to the formal representation of Czech data. This finding is novel, for Czech, with its free word order and rich morphology, is typologically different from the languages analyzed with (R)MRS to date. | 79,225
ben-gera-etal-2010-semantic | Semantic Feature Engineering for Enhancing Disambiguation Performance in Deep Linguistic Processing | Ben-Gera, Danielle and Zhang, Yi and Kordoni, Valia | https://aclanthology.org/L10-1343/ | The task of parse disambiguation has gained in importance over the last decade as the complexity of grammars used in deep linguistic processing has been increasing. In this paper we propose to employ the fine-grained HPSG formalism in order to investigate the contribution of deeper linguistic knowledge to the task of ranking the different trees the parser outputs. In particular, we focus on the incorporation of semantic features in the disambiguation component and the stability of our model across domains. Our work is carried out within DELPH-IN (http://www.delph-in.net), using the LinGO Redwoods and the WeScience corpora, parsed with the English Resource Grammar and the PET parser. | 79,226
gala-etal-2010-tool | A Tool for Linking Stems and Conceptual Fragments to Enhance Word Access | Gala, Nuria and Rey, Véronique and Zock, Michael | https://aclanthology.org/L10-1344/ | Electronic dictionaries offer many possibilities unavailable in paper dictionaries for viewing, displaying or accessing information. However, even these resources fall short when it comes to accessing words sharing semantic features and certain aspects of form: few applications offer the possibility to access a word via a morphologically or semantically related word. In this paper, we present such an application, Polymots, a lexical database for contemporary French containing 20,000 words grouped in 2,000 families. The purpose of this resource is to group words into families on the basis of shared morpho-phonological and semantic information. Words with a common stem form a family; words in a family also share a set of common conceptual fragments (in some families there is a continuity of meaning, in others meaning is distributed). With this approach, we capitalize on the bidirectional link between semantics and morpho-phonology: the user can thus access words not only on the basis of ideas, but also on the basis of formal characteristics of the word, i.e. its morphological features. The resulting lexical database should help people learn French vocabulary and assist them in finding words they are looking for, going thus beyond other existing lexical resources. | 79,227
kordoni-zhang-2010-disambiguating | Disambiguating Compound Nouns for a Dynamic HPSG Treebank of Wall Street Journal Texts | Kordoni, Valia and Zhang, Yi | https://aclanthology.org/L10-1345/ | The aim of this paper is twofold. We focus, on the one hand, on the task of dynamically annotating English compound nouns, and on the other hand we propose disambiguation methods and techniques which facilitate the annotation task. Both of the aforementioned are part of a larger ongoing effort which aims to create HPSG annotation for the texts from the Wall Street Journal (henceforward WSJ) sections of the Penn Treebank (henceforward PTB) with the help of a hand-written large-scale and wide-coverage grammar of English, the English Resource Grammar (henceforward ERG; Flickinger (2002)). As we show in this paper, such annotations are linguistically very rich, since apart from syntax they also incorporate semantics. This not only ensures that the treebank is a truly sharable, re-usable and multi-functional linguistic resource, but also calls for better disambiguation of the internal (syntactic) structure of larger units of words, such as compound nouns, since this has an impact on the representation of their meaning, which is of utmost interest if the linguistic annotation of a given corpus is to be understood as the practice of adding interpretative linguistic information of the highest quality in order to give added value to the corpus. | 79,228
michelbacher-etal-2010-building | Building a Cross-lingual Relatedness Thesaurus using a Graph Similarity Measure | Michelbacher, Lukas and Laws, Florian and Dorow, Beate and Heid, Ulrich and Schütze, Hinrich | https://aclanthology.org/L10-1346/ | The Internet is an ever-growing source of information stored in documents of different languages. Hence, cross-lingual resources are needed for more and more NLP applications. This paper presents (i) a graph-based method for creating one such resource and (ii) a resource created using the method, a cross-lingual relatedness thesaurus. Given a word in one language, the thesaurus suggests words in a second language that are semantically related. The method requires two monolingual corpora and a basic dictionary. Our general approach is to build two monolingual word graphs, with nodes representing words and edges representing linguistic relations between words. A bilingual dictionary containing basic vocabulary provides seed translations relating nodes from both graphs. We then use an inter-graph node-similarity algorithm to discover related words. Evaluation with three human judges revealed that 49% of the English and 57% of the German words discovered by our method are semantically related to the target words. We publish two resources in conjunction with this paper: first, noun coordinations extracted from the German and English Wikipedias; second, the cross-lingual relatedness thesaurus, which can be used in experiments involving interactive cross-lingual query expansion. | 79,229
broscheit-etal-2010-extending | Extending BART to Provide a Coreference Resolution System for German | Broscheit, Samuel and Ponzetto, Simone Paolo and Versley, Yannick and Poesio, Massimo | https://aclanthology.org/L10-1347/ | We present a flexible toolkit-based approach to automatic coreference resolution on German text. We start with our previous work aimed at reimplementing the system from Soon et al. (2001) for English, and extend it to duplicate a version of the state-of-the-art proposal from Klenner and Ailloud (2009). Evaluation performed on a benchmarking dataset, namely the TueBa-D/Z corpus (Hinrichs et al., 2005b), shows that machine learning based coreference resolution can be robustly performed in a language other than English. | 79,230
heid-etal-2010-corpus | A Corpus Representation Format for Linguistic Web Services: The D-SPIN Text Corpus Format and its Relationship with ISO Standards | Heid, Ulrich and Schmid, Helmut and Eckart, Kerstin and Hinrichs, Erhard | https://aclanthology.org/L10-1348/ | In the framework of preparing linguistic web services for corpus processing, the need was felt for a representation format which supports interoperability between different web services in a corpus processing pipeline, but also provides a well-defined interface both to legacy tools and their data formats and to upcoming international standards. We present the D-SPIN text corpus format, TCF, which was designed for this purpose. It is a stand-off XML format, inspired by the philosophy of the emerging standard LAF (Linguistic Annotation Framework) and its "instances" MAF for morpho-syntactic annotation and SynAF for syntactic annotation. Tools for exchange with existing (best practice) formats are available, and a converter from MAF to TCF is being tested in spring 2010. We describe the usage scenario in which TCF is embedded and the properties and architecture of TCF. We also give examples of TCF-encoded data and describe the aspects of syntactic and semantic interoperability already addressed. | 79,231
specia-etal-2010-dataset | A Dataset for Assessing Machine Translation Evaluation Metrics | Specia, Lucia and Cancedda, Nicola and Dymetman, Marc | https://aclanthology.org/L10-1349/ | We describe a dataset containing 16,000 translations produced by four machine translation systems and manually annotated for quality by professional translators. This dataset can be used in a range of tasks assessing machine translation evaluation metrics, from basic correlation analysis to training and testing of machine learning-based metrics. By providing a standard dataset for such tasks, we hope to encourage the development of better MT evaluation metrics. | 79,232
halskov-etal-2010-quality | Quality Indicators of LSP Texts - Selection and Measurements Measuring the Terminological Usefulness of Documents for an LSP Corpus | Halskov, Jakob and Hansen, Dorte Haltrup and Braasch, Anna and Olsen, Sussi | https://aclanthology.org/L10-1350/ | This paper describes and evaluates a prototype quality assurance system for LSP corpora. The system will be employed in compiling a corpus of 11 M tokens for various linguistic and terminological purposes. The system utilizes a number of linguistic features as quality indicators. These represent two dimensions of quality, namely readability/formality (e.g. word length and passive constructions) and density of specialized knowledge (e.g. out-of-vocabulary items). Threshold values for each indicator are induced from a reference corpus of general (fiction, magazines and newspapers) and specialized language (the domains of Health/Medicine and Environment/Climate). In order to test the efficiency of the indicators, a number of terminologically relevant, irrelevant and possibly relevant texts were manually selected from target web sites as candidate texts. By applying the indicators to these candidate texts, the system is able to filter out non-LSP and poor LSP texts with a precision of 100% and a recall of 55%. Thus, the experiment described in this paper constitutes fundamental work towards a formulation of best practice for implementing quality assurance when selecting appropriate texts for an LSP corpus. The domain independence of the quality indicators still remains to be thoroughly tested on more than just two domains. | 79,233
bertrand-etal-2010-towards | Towards Investigating Effective Affective Dialogue Strategies | Bertrand, Gregor and Nothdurft, Florian and Walter, Steffen and Scheck, Andreas and Kessler, Henrik and Minker, Wolfgang | https://aclanthology.org/L10-1351/ | We describe an experimental Wizard-of-Oz setup for the integration of emotional strategies into spoken dialogue management. With this setup we seek to evaluate different approaches to emotional dialogue strategies in human-computer interaction with a spoken dialogue system. The study aims to analyse what kinds of emotional strategies work best in spoken dialogue management, especially in the face of the problem that users may not be honest about their emotions. Therefore, both direct evidence (the user is asked about his state) and indirect evidence (measurements of psychophysiological features) are considered in the evaluation of our strategies. | 79,234
gleim-mehler-2010-computational | Computational Linguistics for Mere Mortals - Powerful but Easy-to-use Linguistic Processing for Scientists in the Humanities | Gleim, Rüdiger and Mehler, Alexander | https://aclanthology.org/L10-1352/ | Delivering linguistic resources and easy-to-use methods to a broad public in the humanities is a challenging task. On the one hand, users rightly demand easy-to-use interfaces, but on the other hand they want access to the full flexibility and power of the functions being offered. Even though a growing number of excellent systems exist which offer convenient means to use linguistic resources and methods, they usually focus on a specific domain, for example corpus exploration or text categorization. Architectures which address a broad scope of applications are still rare. This article introduces the eHumanities Desktop, an online system for corpus management, processing and analysis which aims at bridging the gap between powerful command line tools and intuitive user interfaces. | 79,235
holmqvist-2010-heuristic | Heuristic Word Alignment with Parallel Phrases | Holmqvist, Maria | https://aclanthology.org/L10-1353/ | We present a heuristic method for word alignment, the task of identifying corresponding words in parallel text. The heuristic method is based on parallel phrases extracted from manually word-aligned sentence pairs. Word alignment is performed by matching parallel phrases to new sentence pairs and adding word links from the parallel phrase to words in the matching sentence segment. Experiments on an English-Swedish parallel corpus showed that the heuristic phrase-based method produced word alignments with high precision but low recall. In order to improve alignment recall, phrases were generalized by replacing words with part-of-speech categories. The generalization improved recall, but at the expense of precision. Two filtering strategies were investigated to prune the large set of generalized phrases. Finally, the phrase-based method was compared to statistical word alignment with Giza++, and we found that although statistical alignments based on large datasets will outperform phrase-based word alignment, a combination of phrase-based and statistical word alignment outperformed pure statistical alignment in terms of Alignment Error Rate (AER). | 79,236
goudbeek-broersma-2010-demo | The Demo / Kemo Corpus: A Principled Approach to the Study of Cross-cultural Differences in the Vocal Expression and Perception of Emotion | Goudbeek, Martijn and Broersma, Mirjam | https://aclanthology.org/L10-1354/ | This paper presents the Demo / Kemo corpus of Dutch and Korean emotional speech. The corpus has been specifically developed for the purpose of cross-linguistic comparison, and is more balanced than any similar corpus available so far: a) it contains expressions by both Dutch and Korean actors as well as judgments by both Dutch and Korean listeners; b) the same elicitation technique and recording procedure was used for recordings of both languages; c) the same nonsense sentence, which was constructed to be permissible in both languages, was used for recordings of both languages; and d) the emotions present in the corpus are balanced in terms of valence, arousal, and dominance. The corpus contains a comparatively large number of emotions (eight) uttered by a large number of speakers (eight Dutch and eight Korean). The counterbalanced nature of the corpus will enable a stricter investigation of language-specific versus universal aspects of emotional expression than was possible so far. Furthermore, given the carefully controlled phonetic content of the expressions, it allows for analysis of the role of specific phonetic features in emotional expression in Dutch and Korean. | 79,237
stellato-etal-2010-maskkot | Maskkot - An Entity-centric Annotation Platform | Stellato, Armando and Stoermer, Heiko and Bortoli, Stefano and Scarpato, Noemi and Turbati, Andrea and Bouquet, Paolo and Pazienza, Maria Teresa | https://aclanthology.org/L10-1355/ | The Semantic Web is facing the important challenge of maintaining its promise of a real world-wide graph of interconnected resources. Unfortunately, while URIs almost guarantee a direct reference to entities, the relation between the two is not bijective. Many different URI references to the same concepts and entities can arise when, in such a heterogeneous setting as the WWW, people independently build new ontologies or populate shared ones with new arbitrarily identified individuals. The proliferation of URIs is an unwanted, though natural, effect strictly bound to the same principles which characterize the Semantic Web; reducing this phenomenon will improve the recall of semantic search engines, which could rely on explicit links between heterogeneous information sources. To address this problem, in this paper we present an integrated environment combining the semantic annotation and ontology building features available in the Semantic Turkey web browser extension with globally unique identifiers for entities provided by the okkam Entity Name System, thus realizing a valuable resource for preventing the diffusion of multiple URIs on the (Semantic) Web. | 79,238
pollak-rajnoha-2010-multi | Multi-Channel Database of Spontaneous Czech with Synchronization of Channels Recorded by Independent Devices | Pollák, Petr and Rajnoha, Josef | https://aclanthology.org/L10-1356/ | This paper describes a Czech spontaneous speech database of lectures on digital signal processing collected at the Czech Technical University in Prague, together with the procedure of its recording and annotation. The database contains 21.7 hours of speech material from 22 speakers recorded in 4 channels with 3 principally different microphones. The annotation of the database comprises basic time segmentation, orthographic transcription including marks for speaker and environmental non-speech events, a pronunciation lexicon in the SAMPA alphabet, session and speaker information describing recording conditions, and the documentation. The orthographic transcription with time segmentation is saved in an XML format supported by the frequently used annotation tool Transcriber. In this article, special attention is also paid to the description of the time synchronization of signals recorded by two independent devices: a computer-based recording platform using two external sound cards and the commercial audio recorder Edirol R09. This synchronization is based on cross-correlation analysis with simple automated selection of suitable short signal subparts. The collection and annotation of this database is now complete and its availability via ELRA is currently under preparation. | 79,239
bernard-etal-2010-question | A Question-answer Distance Measure to Investigate QA System Progress | Bernard, Guillaume and Rosset, Sophie and Adda-Decker, Martine and Galibert, Olivier | https://aclanthology.org/L10-1357/ | The performance of question answering systems is evaluated through successive evaluation campaigns. A set of questions is given to the participating systems, which are to find the correct answer in a collection of documents. The creation process of the questions may change from one evaluation to the next, which may entail an uncontrolled shift in question difficulty. For the QAst 2009 evaluation campaign, a new procedure was adopted to build the questions. Comparing results of the QAst 2008 and QAst 2009 evaluations, a strong performance loss could be measured in 2009 for French and English, while the Spanish systems globally made progress. The measured loss might be related to this new way of elaborating questions. The general purpose of this paper is to propose a measure to calibrate the difficulty of a question set. In particular, a reasonable measure should output higher values for 2009 than for 2008. The proposed measure relies on a distance between the critical elements of a question and those of the associated correct answer. An increase of the proposed distance measure for the French and English 2009 evaluations as compared to 2008 could be established. This increase correlates with the previously observed degraded performances. We conclude on the potential of this evaluation criterion: such a measure is important for the elaboration of new question corpora for question answering systems, and serves as a tool to control the level of difficulty across successive evaluation campaigns. | 79,240
blessing-schutze-2010-fine | Fine-Grained Geographical Relation Extraction from Wikipedia | Blessing, Andre and Schütze, Hinrich | https://aclanthology.org/L10-1358/ | In this paper, we present work on enhancing the basic data resource of a context-aware system. Electronic text offers a wealth of information about geospatial data and can be used to improve the completeness and accuracy of geospatial resources (e.g., gazetteers). First, we introduce a supervised approach to extracting geographical relations on a fine-grained level. Second, we present a novel way of using Wikipedia as a corpus based on self-annotation. A self-annotation is an automatically created high-quality annotation that can be used for training and evaluation. Wikipedia contains two different types of context: (i) unstructured text and (ii) structured data: templates (e.g., infoboxes about cities), lists and tables. We use the structured data to annotate the unstructured text. Finally, the extracted fine-grained relations are used to complete gazetteer data. Precision and recall scores of more than 97 percent confirm that a statistical IE pipeline can be used to improve the data quality of community-based resources. | 79,241
ayari-etal-2010-fine | Fine-grained Linguistic Evaluation of Question Answering Systems | Ayari, Sarra El and Grau, Brigitte and Ligozat, Anne-Laure | https://aclanthology.org/L10-1359/ | Question answering systems are complex systems using natural language processing. Evaluation campaigns are organized to evaluate such systems and to propose a classification of systems based on final results (number of correct answers). Nevertheless, teams need to evaluate the results obtained by their systems more precisely if they want to perform a diagnostic evaluation, and there are no tools or methods to do these evaluations systematically. We present REVISE, a tool for glass-box evaluation based on diagnostics of question answering system results. | 79,242
tatsumi-etal-2010-evaluating | Evaluating Semantic Relations and Distances in the Associative Concept Dictionary using NIRS-imaging | Tatsumi, Nao and Okamoto, Jun and Ishizaki, Shun | https://aclanthology.org/L10-1360/ | In this study, we extracted brain activities related to semantic relations and distances to improve the precision of distance calculation among concepts in the Associative Concept Dictionary (ACD). For the experiments, we used a multi-channel near-infrared spectroscopy (NIRS) device to measure the response properties of the changes in hemoglobin concentration during word-concept association tasks. The experimental stimuli were selected from pairs of stimulus words and associated words in the ACD and presented as visual stimulation to the subjects. In our experiments, we obtained subject response data and brain activation data in Broca's area (a human brain region that is active in linguistic/word-concept decision tasks), and these data imply relations with the length of associative distance. This study showed that it is possible to connect brain activities to the semantic relations among concepts, and that this would improve the method for concept distance calculation in order to build a more human-like ontology model. | 79,243
damljanovic-etal-2010-identification | Identification of the Question Focus: Combining Syntactic Analysis and Ontology-based Lookup through the User Interaction | Damljanovic, Danica and Agatonovic, Milan and Cunningham, Hamish | https://aclanthology.org/L10-1361/ | Most question-answering systems contain a classifier module which determines a question category, based on which each question is assigned an answer type. However, setting up syntactic patterns for this classification is a big challenge. In addition, in the case of ontology-based systems, the answer type should be aligned to the queried knowledge structure. In this paper, we present an approach for determining the answer type semi-automatically. We first identify the question focus using syntactic parsing, and then try to identify the answer type by combining the head of the focus with ontology-based lookup. When this combination is not enough to draw conclusions automatically, the user is engaged in a dialog in order to resolve the answer type. User selections are saved and used for training the system in order to improve its performance over time. Further on, the answer type is used to show the feedback and the concise answer to the user. Our approach is evaluated using 250 questions from the Mooney Geoquery dataset. | 79,244
grappy-etal-2010-corpus | A Corpus for Studying Full Answer Justification | Grappy, Arnaud and Grau, Brigitte and Ferret, Olivier and Grouin, Cyril and Moriceau, Véronique and Robba, Isabelle and Tannier, Xavier and Vilnat, Anne and Barbier, Vincent | https://aclanthology.org/L10-1362/ | Question answering (QA) systems aim at retrieving precise information from a large collection of documents. To be considered reliable by users, a QA system must provide elements to evaluate the answer. This notion of answer justification can also be useful when developing a QA system, in order to give criteria for selecting correct answers. An answer justification can be found in a sentence, a passage made of several consecutive sentences, or several passages of a document or several documents. Thus, we are interested in pinpointing the set of information that allows verification of the correctness of the answer in a candidate passage, and the question elements that are missing in this passage. Moreover, the relevant information is often given in texts in a form different from that of the question: anaphora, paraphrases, synonyms. In order to get a better idea of the importance of all the phenomena we underlined, and to provide enough examples at the QA developer's disposal to study them, we decided to build an annotated corpus. | 79,245
biggio-etal-2010-entity | Entity Mention Detection using a Combination of Redundancy-Driven Classifiers | Biggio, Silvana Marianela Bernaola and Speranza, Manuela and Zanoli, Roberto | https://aclanthology.org/L10-1363/ | We present an experimental framework for entity mention detection in which two different classifiers are combined to exploit data redundancy, attained through the annotation of a large text corpus, as well as a number of patterns extracted automatically from the same corpus. In order to recognize proper name, nominal, and pronominal mentions, we exploit not only the information given by mentions recognized within the corpus being annotated, but also that given by mentions occurring in an external, unannotated corpus. The system was first evaluated in the Evalita 2009 evaluation campaign, obtaining good results. The current version is being used in a number of applications. On the one hand, it is used in the LiveMemories project, which aims at scaling up content extraction techniques towards very large scale extraction from multimedia sources. On the other hand, it is used to annotate corpora, such as the Italian Wikipedia, thus providing easy access to syntactic and semantic annotation for both the Natural Language Processing and Information Retrieval communities. Moreover, a web service version of the system is available, and the system is going to be integrated into the TextPro suite of NLP tools. | 79,246
recski-etal-2010-np | NP Alignment in Bilingual Corpora | Recski, Gábor and Rung, András and Zséder, Attila and Kornai, András | https://aclanthology.org/L10-1364/ | Aligning the NPs of parallel corpora is logically halfway between the sentence- and word-alignment tasks that occupy much of the MT literature, but has received far less attention. NP alignment is a challenging problem, capable of rapidly exposing flaws both in the word-alignment and in the NP chunking algorithms one may bring to bear. It is also a very rewarding problem, in that NPs are semantically natural translation units, which means that (i) word alignments will cross NP boundaries only exceptionally, and (ii) within sentences already aligned, the proportion of 1-1 alignments will be higher for NPs than for words. We created a simple gold standard for English-Hungarian from Orwell's 1984 (which already exists in manually verified POS-tagged format in many languages thanks to the Multex and MultexEast projects) by manually verifying the automatically generated NP chunking (we used the yamcha, mallet and hunchunk taggers) and manually aligning the maximal NPs and PPs. The maximal NP chunking problem is much harder than base NP chunking, with F-measure in the .7 range (as opposed to over .94 for base NPs). Since the results are highly impacted by the quality of the NP chunking, we tested our alignment algorithms both with real-world (machine-obtained) chunkings, where results are in the .35 range for the baseline algorithm which propagates GIZA++ word alignments to the NP level, and on idealized (manually obtained) chunkings, where the baseline reaches .4 and our current system reaches .64. | 79,247
inproceedings | gargett-etal-2010-give | The {GIVE}-2 Corpus of Giving Instructions in Virtual Environments | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1365/ | Gargett, Andrew and Garoufi, Konstantina and Koller, Alexander and Striegnitz, Kristina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present the GIVE-2 Corpus, a new corpus of human instruction giving. The corpus was collected by asking one person in each pair of subjects to guide the other person towards completing a task in a virtual 3D environment with typed instructions. This is the same setting as that of the recent GIVE Challenge, and thus the corpus can serve as a source of data and as a point of comparison for NLG systems that participate in the GIVE Challenge. The instruction-giving data we collect is multilingual (45 German and 63 English dialogues), and can easily be extended to further languages by using our software, which we have made available. We analyze the corpus to study the effects of learning by repeated participation in the task and the effects of the participants' spatial navigation abilities. Finally, we present a novel annotation scheme for situated referring expressions and compare the referring expressions in the German and English data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,248 |
inproceedings | vorwerk-etal-2010-wapusk20 | {WAPUSK}20 - A Database for Robust Audiovisual Speech Recognition | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1366/ | Vorwerk, Alexander and Wang, Xiaohui and Kolossa, Dorothea and Zeiler, Steffen and Orglmeister, Reinhold | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Audiovisual speech recognition (AVSR) systems have proven superior to audio-only speech recognizers in noisy environments by incorporating features of the visual modality. In order to develop reliable AVSR systems, appropriate simultaneously recorded speech and video data is needed. In this paper, we will introduce a corpus (WAPUSK20) that consists of audiovisual data of 20 speakers uttering 100 sentences each with four channels of audio and a stereoscopic video. The latter is intended to support more accurate lip tracking and the development of stereo data based normalization techniques for greater robustness of the recognition results. The sentence design has been adopted from the GRID corpus that has been widely used for AVSR experiments. Recordings have been made under acoustically realistic conditions in a typical office room. Affordable hardware equipment has been used, such as a pre-calibrated stereo camera and standard PC components. The software written to create this corpus was designed in MATLAB with help of hardware specific software provided by the hardware manufacturers and freely available open source software. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,249
inproceedings | agirre-etal-2010-exploring | Exploring Knowledge Bases for Similarity | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1367/ | Agirre, Eneko and Cuadros, Montse and Rigau, German and Soroa, Aitor | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Graph-based similarity over WordNet has been previously shown to perform very well on word similarity. This paper presents a study of the performance of such a graph-based algorithm when using different relations and versions of WordNet. The graph algorithm is based on Personalized PageRank, a random-walk based algorithm which computes the probability of a random-walk initiated in the target word to reach any synset following the relations in WordNet (Haveliwala, 2002). Similarity is computed as the cosine of the probability distributions for each word over WordNet. The best combination of relations includes all relations in WordNet 3.0, including disambiguated glosses, and automatically disambiguated topic signatures called KnowNets. All relations are part of the official release of WordNet, except KnowNets, which have been derived automatically. The results over the WordSim 353 dataset show that using the appropriate relations the performance improves over previously published WordNet-based results on the WordSim353 dataset (Finkelstein et al., 2002). The similarity software and some graphs used in this paper are publicly available at \url{http://ixa2.si.ehu.es/ukb}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,250
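The similarity computation this abstract describes (Personalized PageRank per word, then cosine over the resulting distributions) can be sketched on a toy graph. The relation graph and the use of networkx below are assumptions; the authors' released UKB software operates over the full WordNet graph.

```python
# Minimal sketch: run Personalized PageRank from each target word over a
# toy relation graph and compare the distributions with cosine similarity.
import math
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("car", "vehicle"), ("vehicle", "transport"),
    ("bicycle", "vehicle"), ("banana", "fruit"),
])

def ppr_vector(graph, seed):
    # Personalization mass concentrated on the seed node
    weights = {n: (1.0 if n == seed else 0.0) for n in graph}
    return nx.pagerank(graph, alpha=0.85, personalization=weights)

def cosine(p, q):
    dot = sum(p[n] * q[n] for n in p)
    norm = math.sqrt(sum(v * v for v in p.values()))
    norm *= math.sqrt(sum(v * v for v in q.values()))
    return dot / norm

sim = cosine(ppr_vector(G, "car"), ppr_vector(G, "bicycle"))
print(f"car~bicycle: {sim:.3f}")  # related words share probability mass
```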
inproceedings | sanchez-marco-etal-2010-annotation | Annotation and Representation of a Diachronic Corpus of {S}panish | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1368/ | S{\'a}nchez-Marco, Cristina and Boleda, Gemma and Fontana, Josep Maria and Domingo, Judith | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this article we describe two different strategies for the automatic tagging of a Spanish diachronic corpus involving the adaptation of existing NLP tools developed for modern Spanish. In the initial approach we follow a state-of-the-art strategy, which consists of standardizing the spelling and the lexicon. This approach boosts POS-tagging accuracy to 90{\%}, which represents a raw improvement of over 20{\%} with respect to the results obtained without any pre-processing. In order to enable non-expert users in NLP to use this new resource, the corpus has been integrated into IAC (Corpora Interface Access). We discuss the shortcomings of the initial approach and propose a new one, which does not consist in adapting the source texts to the tagger, but rather in modifying the tagger for the direct treatment of the old variants. This second strategy addresses some important shortcomings in the previous approach and is likely to be useful not only in the creation of diachronic linguistic resources but also in the treatment of dialectal or non-standard variants of synchronic languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,251
inproceedings | raza-2010-inferring | Inferring Subcat Frames of Verbs in {U}rdu | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1369/ | Raza, Ghulam | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an approach for inferring syntactic frames of verbs in Urdu from an untagged corpus. Urdu, like many other South Asian languages, is a free word order and case-rich language. Separable lexical units mark different constituents for case in phrases and clauses and are called case clitics. There is not always a one to one correspondence between case clitic form and case, and case and grammatical function in Urdu. Case clitics, therefore, can not serve as direct clues for extracting the syntactic frames of verbs. So a two-step approach has been implemented. In a first step, all case clitic combinations for a verb are extracted and the unreliable ones are filtered out by applying the inferential statistics. In a second step, the information of occurrences of case clitic forms in different combinations as a whole and on individual level is processed to infer all possible syntactic frames of the verb. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,252 |
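The first step of the approach above collects case clitic combinations per verb and filters out unreliable ones. A hedged sketch follows, with a simple relative-frequency cut-off standing in for the paper's inferential statistics; the verb, the clitic tuples and the threshold are invented for illustration.

```python
from collections import Counter

# Invented observations: (verb, case-clitic combination) pairs extracted
# from clauses of an untagged corpus.
observations = [
    ("kha", ("ne", "ko")), ("kha", ("ne", "ko")), ("kha", ("ne",)),
    ("kha", ("se",)),
]

def reliable_combinations(obs, verb, min_rel_freq=0.3):
    """Keep only combinations whose relative frequency for this verb
    clears a cut-off (a stand-in for the paper's inferential statistics)."""
    counts = Counter(combo for v, combo in obs if v == verb)
    total = sum(counts.values())
    return {combo: n for combo, n in counts.items()
            if n / total >= min_rel_freq}

print(reliable_combinations(observations, "kha"))  # {('ne', 'ko'): 2}
```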
inproceedings | besancon-etal-2010-lima | {LIMA} : A Multilingual Framework for Linguistic Analysis and Linguistic Resources Development and Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1370/ | Besan{\c{c}}on, Romaric and de Chalendar, Ga{\"e}l and Ferret, Olivier and Gara, Faiza and Mesnard, Olivier and La{\"i}b, Meriama and Semmar, Nasredine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The increasing amount of available textual information makes necessary the use of Natural Language Processing (NLP) tools. These tools have to be used on large collections of documents in different languages. But NLP is a complex task that relies on many processes and resources. As a consequence, NLP tools must be both configurable and efficient: specific software architectures must be designed for this purpose. We present in this paper the LIMA multilingual analysis platform, developed at CEA LIST. This configurable platform has been designed to develop NLP based industrial applications while keeping enough flexibility to integrate various processes and resources. This design makes LIMA a linguistic analyzer that can handle languages as different as French, English, German, Arabic or Chinese. Beyond its architecture principles and its capabilities as a linguistic analyzer, LIMA also offers a set of tools dedicated to the test and the evaluation of linguistic modules and to the production and the management of new linguistic resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,253
inproceedings | chrupala-klakow-2010-named | A Named Entity Labeler for {G}erman: Exploiting {W}ikipedia and Distributional Clusters | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1371/ | Chrupa{\l}a, Grzegorz and Klakow, Dietrich | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Named Entity Recognition is a relatively well-understood NLP task, with many publicly available training resources and software for processing English data. Other languages tend to be underserved in this area. For German, CoNLL-2003 Shared Task provided training data, but there are no publicly available, ready-to-use tools. We fill this gap and develop a German NER system with state-of-the-art performance. In addition to CoNLL 2003 labeled training data, we use two additional resources: (i) 32 million words of unlabeled news article text and (ii) infobox labels from German Wikipedia articles. From the unlabeled text we derive distributional word clusters. Then we use cluster membership features and Wikipedia infobox label features to train a supervised model on the labeled training data. This approach allows us to deal better with word-types unseen in the training data and achieve good performance on German with little engineering effort. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,254 |
inproceedings | rosell-2010-text | Text Cluster Trimming for Better Descriptions and Improved Quality | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1372/ | Rosell, Magnus | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Text clustering is potentially very useful for exploration of text sets that are too large to study manually. The success of such a tool depends on whether the results can be explained to the user. An automatically extracted cluster description usually consists of a few words that are deemed representative for the cluster. It is preferably short in order to be easily grasped. However, text cluster content is often diverse. We introduce a trimming method that removes texts that do not contain any, or a few of the words in the cluster description. The result is clusters that match their descriptions better. In experiments on two quite different text sets we obtain significant improvements in both internal and external clustering quality for the trimmed clustering compared to the original. The trimming thus has two positive effects: it forces the clusters to agree with their descriptions (resulting in better descriptions) and improves the quality of the trimmed clusters. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,255 |
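The trimming rule in the preceding abstract is simple enough to sketch directly: drop cluster members that contain too few of the description words. The tokenization, threshold and example cluster below are assumptions, not the paper's exact setup.

```python
# A minimal sketch of text cluster trimming against a cluster description.
def trim_cluster(texts, description, min_hits=1):
    """Keep only texts containing at least min_hits description words."""
    desc = {w.lower() for w in description}
    kept = []
    for text in texts:
        tokens = set(text.lower().split())
        if len(tokens & desc) >= min_hits:
            kept.append(text)
    return kept

cluster = ["stock markets fell", "markets rally on earnings",
           "recipe for apple pie"]
# The off-topic recipe text is trimmed away:
print(trim_cluster(cluster, ["markets", "stocks", "earnings"]))
```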
inproceedings | gonzalez-rubio-etal-2010-saturnalia | {S}aturnalia: A {L}atin-{C}atalan Parallel Corpus for Statistical {MT} | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1373/ | Gonz{\'a}lez-Rubio, Jes{\'u}s and Civera, Jorge and Juan, Alfons and Casacuberta, Francisco | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Currently, a great effort is being carried out in the digitalisation of large historical document collections for preservation purposes. The documents in these collections are usually written in ancient languages, such as Latin or Greek, which limits the access of the general public to their content due to the language barrier. Therefore, digital libraries aim not only at storing raw images of digitalised documents, but also to annotate them with their corresponding text transcriptions and translations into modern languages. Unfortunately, ancient languages have at their disposal scarce electronic resources to be exploited by natural language processing techniques. This paper describes the compilation process of a novel Latin-Catalan parallel corpus as a new task for statistical machine translation (SMT). Preliminary experimental results are also reported using a state-of-the-art phrase-based SMT system. The results presented in this work reveal the complexity of the task and its challenging, but interesting nature for future development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,256 |
inproceedings | apostolova-etal-2010-djangology | {D}jangology: A Light-weight Web-based Tool for Distributed Collaborative Text Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1374/ | Apostolova, Emilia and Neilan, Sean and An, Gary and Tomuro, Noriko and Lytinen, Steven | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Manual text annotation is a resource-consuming endeavor necessary for NLP systems when they target new tasks or domains for which there are no existing annotated corpora. Distributing the annotation work across multiple contributors is a natural solution to reduce and manage the effort required. Although there are a few publicly available tools which support distributed collaborative text annotation, most of them have complex user interfaces and require a significant amount of involvement from the annotators/contributors as well as the project developers and administrators. We present a light-weight web application for highly distributed annotation projects - Djangology. The application takes advantage of the recent advances in web framework architecture that allow rapid development and deployment of web applications thus minimizing development time for customization. The application's web-based interface gives project administrators the ability to easily upload data, define project schemas, assign annotators, monitor progress, and review inter-annotator agreement statistics. The intuitive web-based user interface encourages annotator participation as contributors are not burdened by tool manuals, local installation, or configuration. The system has achieved a user response rate of 70{\%} in two annotation projects involving more than 250 medical experts from various geographic locations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,257
inproceedings | derczynski-gaizauskas-2010-analysing | Analysing Temporally Annotated Corpora with {CAV}a{T} | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1375/ | Derczynski, Leon and Gaizauskas, Robert | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present CAVaT, a tool that performs Corpus Analysis and Validation for TimeML. CAVaT is an open source, modular checking utility for statistical analysis of features specific to temporally-annotated natural language corpora. It provides reporting, highlights salient links between a variety of general and time-specific linguistic features, and also validates a temporal annotation to ensure that it is logically consistent and sufficiently annotated. Uniquely, CAVaT provides analysis specific to TimeML-annotated temporal information. TimeML is a standard for annotating temporal information in natural language text. In this paper, we present the reporting part of CAVaT, and then its error-checking ability, including the workings of several novel TimeML document verification methods. This is followed by the execution of some example tasks using the tool to show relations between times, events, signals and links. We also demonstrate inconsistencies in a TimeML corpus (TimeBank) that have been detected with CAVaT. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,258 |
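CAVaT validates that a temporal annotation is logically consistent. One concrete example of such a check is sketched below: detecting cycles in the BEFORE relation, which would make a TLINK graph unsatisfiable. The graph encoding is an assumption; CAVaT itself handles the full TimeML relation set.

```python
# Toy consistency check: is the BEFORE relation acyclic?
def has_before_cycle(tlinks):
    """tlinks: iterable of (earlier, later) event/time id pairs."""
    graph = {}
    for a, b in tlinks:
        graph.setdefault(a, set()).add(b)
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True  # back edge found: a temporal contradiction
        visiting.discard(node)
        done.add(node)
        return False

    return any(node not in done and dfs(node) for node in list(graph))

print(has_before_cycle([("e1", "e2"), ("e2", "e3"), ("e3", "e1")]))  # True
```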
inproceedings | reynaert-etal-2010-balancing | Balancing {S}o{N}a{R}: {IPR} versus Processing Issues in a 500-Million-Word Written {D}utch Reference Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1376/ | Reynaert, Martin and Oostdijk, Nelleke and De Clercq, Orph{\'e}e and van den Heuvel, Henk and de Jong, Franciska | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In The Low Countries, a major reference corpus for written Dutch is being built. We discuss the interplay between data acquisition and data processing during the creation of the SoNaR Corpus. Based on developments in traditional corpus compiling and new web harvesting approaches, SoNaR is designed to contain 500 million words, balanced over 36 text types including both traditional and new media texts. Besides its balanced design, every text sample included in SoNaR will have its IPR issues settled to the largest extent possible. This data collection task presents many challenges because every decision taken on the level of text acquisition has ramifications for the level of processing and the general usability of the corpus. As far as the traditional text types are concerned, each text brings its own processing requirements and issues. For new media texts - SMS, chat - the problem is even more complex: issues such as anonymity, recognizability and citation rights all present problems that have to be tackled. The solutions actually lead to the creation of two corpora: a gigaword SoNaR, IPR-cleared for research purposes, and the smaller - of commissioned size - more privacy compliant SoNaR, IPR-cleared for commercial purposes as well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,259
inproceedings | cruz-lara-etal-2010-mlif | {MLIF} : A Metamodel to Represent and Exchange Multilingual Textual Information | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1377/ | Cruz-Lara, Samuel and Francopoulo, Gil and Romary, Laurent and Semmar, Nasredine | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The fast evolution of language technology has produced pressing needs in standardization. The multiplicity of language resource representation levels and the specialization of these representations complicate the interaction between linguistic resources and the components manipulating them. In this paper, we describe the MultiLingual Information Framework (MLIF {\textemdash} ISO CD 24616). MLIF is a metamodel which allows the representation and the exchange of multilingual textual information. This generic metamodel is designed to provide a common platform for all the tools developed around the existing multilingual data exchange formats. This platform provides, on the one hand, a set of generic data categories for various application domains, and on the other hand, strategies for the interoperability with existing standards. The objective is to reach a better convergence between heterogeneous standardization activities that are taking place in the domain of data modeling (XML; W3C), text management (TEI; TEIC), multilingual information (TMX-LISA; XLIFF-OASIS) and multimedia (SMILText; W3C). This is a work in progress within ISO-TC37 in order to define a new ISO standard. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,260
inproceedings | ruppenhofer-etal-2010-generating | Generating {F}rame{N}ets of Various Granularities: The {F}rame{N}et Transformer | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1378/ | Ruppenhofer, Josef and Sunde, Jonas and Pinkal, Manfred | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a method and a software tool, the FrameNet Transformer, for deriving customized versions of the FrameNet database based on frame and frame element relations. The FrameNet Transformer allows users to iteratively coarsen the FrameNet sense inventory in two ways. First, the tool can merge entire frames that are related by user-specified relations. Second, it can merge word senses that belong to frames related by specified relations. Both methods can be interleaved. The Transformer automatically outputs format-compliant FrameNet versions, including modified corpus annotation files that can be used for automatic processing. The customized FrameNet versions can be used to determine which granularity is suitable for particular applications. In our evaluation of the tool, we show that our method increases accuracy of statistical semantic parsers by reducing the number of word-senses (frames) per lemma, and increasing the number of annotated sentences per lexical unit and frame. We further show in an experiment on the FATE corpus that by coarsening FrameNet we do not incur a significant loss of information that is relevant to the Recognizing Textual Entailment task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,261 |
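The coarsening step described above, merging entire frames related by user-specified relations, is naturally modelled with a union-find structure. The frame names, relations and merge policy below are invented examples in the spirit of the FrameNet Transformer, not FrameNet's actual data or the tool's implementation.

```python
# Sketch: merge frames connected by user-chosen relation types.
parent = {}

def find(x):
    parent.setdefault(x, x)
    if parent[x] != x:
        parent[x] = find(parent[x])  # path compression
    return parent[x]

def union(x, y):
    parent[find(x)] = find(y)

relations = [
    ("Commerce_buy", "Getting", "Inheritance"),
    ("Commerce_sell", "Giving", "Inheritance"),
    ("Getting", "Transfer", "Uses"),
]
merge_on = {"Inheritance"}  # user-specified relation types to coarsen over

for child, parent_frame, rel in relations:
    if rel in merge_on:
        union(child, parent_frame)

print(find("Commerce_buy") == find("Getting"))   # True: frames merged
print(find("Commerce_buy") == find("Transfer"))  # False: Uses not merged
```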
inproceedings | bonin-etal-2010-contrastive | A Contrastive Approach to Multi-word Extraction from Domain-specific Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1379/ | Bonin, Francesca and Dell{'}Orletta, Felice and Montemagni, Simonetta and Venturi, Giulia | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present a novel approach to multi-word terminology extraction combining a well-known automatic term recognition approach, the C--NC value method, with a contrastive ranking technique, aimed at refining obtained results either by filtering noise due to common words or by discerning between semantically different types of terms within heterogeneous terminologies. Differently from other contrastive methods proposed in the literature that focus on single terms to overcome the multi-word terms' sparsity problem, the proposed contrastive function is able to handle variation in low frequency events by directly operating on pre-selected multi-word terms. This methodology has been tested in two case studies carried out in the History of Art and Legal domains. Evaluation of achieved results showed that the proposed two--stage approach improves significantly multi--word term extraction results. In particular, for what concerns the legal domain it provides an answer to a well-known problem in the semi--automatic construction of legal ontologies, namely that of singling out law terms from terms of the specific domain being regulated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,262 |
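The first stage of the approach above builds on the C-value measure from the automatic term recognition literature (Frantzi et al.). A compact rendering of the standard formula with invented counts is given below; it illustrates only the C-value core, not the paper's full two-stage contrastive system.

```python
# C-value(a) = log2(|a|) * (f(a) - mean frequency of the longer
# candidates that contain a), where |a| is the candidate's length in words.
import math

def c_value(term_len, freq, nested_freqs=()):
    """term_len: number of words in the candidate; freq: its frequency;
    nested_freqs: frequencies of longer candidates containing it."""
    if nested_freqs:
        freq = freq - sum(nested_freqs) / len(nested_freqs)
    return math.log2(term_len) * freq

# "floor lamp" occurs 60 times and is nested in two longer candidates:
print(c_value(2, 60, nested_freqs=(12, 8)))  # 1 * (60 - 10) = 50.0
```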
inproceedings | blanc-etal-2010-partial | Partial Parsing of Spontaneous Spoken {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1380/ | Blanc, Olivier and Constant, Matthieu and Dister, Anne and Watrin, Patrick | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the process and the resources used to automatically annotate a French corpus of spontaneous speech transcriptions in super-chunks. Super-chunks are enhanced chunks that can contain lexical multiword units. This partial parsing is based on a preprocessing stage of the spoken data that consists in reformatting and tagging utterances that break the syntactic structure of the text, such as disfluencies. Spoken specificities were formalized thanks to a systematic linguistic study of a 40-hour-long speech transcription corpus. The chunker uses large-coverage and fine-grained language resources for general written language that have been augmented with resources specific to spoken French. It consists in iteratively applying finite-state lexical and syntactic resources and outputting a finite automaton representing all possible chunk analyses. The best path is then selected thanks to a hybrid disambiguation stage. We show that our system reaches scores that are comparable with state-of-the-art results in the field. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,263
inproceedings | braffort-etal-2010-sign | Sign Language Corpora for Analysis, Processing and Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1381/ | Braffort, Annelies and Bolot, Laurence and Ch{\'e}telat-Pel{\'e}, Emilie and Choisier, Annick and Delorme, Maxime and Filhol, Michael and Segouat, J{\'e}r{\'e}mie and Verrecchia, Cyril and Badin, Flora and Devos, Nad{\`e}ge | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Sign Languages (SLs) are the visuo-gestural languages practised by the deaf communities. Research on SLs requires to build, to analyse and to use corpora. The aim of this paper is to present various kinds of new uses of SL corpora. The way the data are used takes advantage of the new capabilities of annotation software for visualisation, numerical annotation, and processing. The nature of the data can be video-based or motion capture-based. The aims of the studies include language analysis, animation processing, and evaluation. We describe here some of LIMSI's studies, as well as some studies from other laboratories, as examples. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,264
inproceedings | tonelli-etal-2010-venpro | {V}en{P}ro: A Morphological Analyzer for Venetan | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1382/ | Tonelli, Sara and Pianta, Emanuele and Delmonte, Rodolfo and Brunelli, Michele | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This document reports the process of extending MorphoPro for Venetan, a lesser-used language spoken in the North-Eastern part of Italy. MorphoPro is the morphological component of TextPro, a suite of tools oriented towards a number of NLP tasks. In order to extend this component to Venetan, we developed a declarative representation of the morphological knowledge necessary to analyze and synthesize Venetan words. This task was challenging for several reasons, which are common to a number of lesser-used languages: although Venetan is widely used as an oral language in everyday life, its written usage is very limited; efforts for defining a standard orthography and grammar are very recent and not well established; despite recent attempts to propose a unified orthography, no Venetan standard is widely used. Besides, there are different geographical varieties and it is strongly influenced by Italian. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,265
inproceedings | maamouri-etal-2010-speech | From Speech to Trees: Applying Treebank Annotation to {A}rabic Broadcast News | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1383/ | Maamouri, Mohamed and Bies, Ann and Kulick, Seth and Zaghouani, Wajdi and Graff, Dave and Ciul, Mike | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Arabic Treebank (ATB) Project at the Linguistic Data Consortium (LDC) has embarked on a large corpus of Broadcast News (BN) transcriptions, and this has led to a number of new challenges for the data processing and annotation procedures that were originally developed for Arabic newswire text (ATB1, ATB2 and ATB3). The corpus requirements currently posed by the DARPA GALE Program, including English translation of Arabic BN transcripts, word-level alignment of Arabic and English data, and creation of a corresponding English Treebank, place significant new constraints on ATB corpus creation, and require careful coordination among a wide assortment of concurrent activities and participants. Nonetheless, in spite of the new challenges posed by BN data, the ATB's newly improved pipeline and revised annotation guidelines for newswire have proven to be robust enough that very few changes were necessary to account for the new genre of data. This paper presents the points where some adaptation has been necessary, and the overall pipeline as used in the production of BN ATB data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,266
inproceedings | heja-2010-role | The Role of Parallel Corpora in Bilingual Lexicography | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1384/ | H{\'e}ja, Enik{\H{o}} | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an approach based on word alignment on parallel corpora, which aims at facilitating the lexicographic work of dictionary building. Although this method has been widely used in the MT community for at least 16 years, as far as we know, it has not been applied to facilitate the creation of bilingual dictionaries for human use. The proposed corpus-driven technique, in particular the exploitation of parallel corpora, proved to be helpful in the creation of such dictionaries for several reasons. Most importantly, a parallel corpus of appropriate size guarantees that the most relevant translations are included in the dictionary. Moreover, based on the translational probabilities it is possible to rank translation candidates, which ensures that the most frequently used translation variants go first within an entry. A further advantage is that all the relevant example sentences from the parallel corpora are easily accessible, thus facilitating the selection of the most appropriate translations from possible translation candidates. Due to these properties the method is particularly apt to enable the production of active or encoding dictionaries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,267 |
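Ranking translation candidates by translational probability, as described above, can be sketched as relative frequencies over word-alignment links: estimate p(target | source) from link counts and sort so the most frequent variants head the entry. The Hungarian-English counts below are invented for illustration.

```python
# Sketch: rank translation candidates from word-aligned parallel data.
from collections import Counter, defaultdict

# Invented alignment links (source word, target word):
links = [("kutya", "dog")] * 30 + [("kutya", "hound")] * 5

counts = defaultdict(Counter)
for src, tgt in links:
    counts[src][tgt] += 1

def ranked_translations(src):
    """Return candidates sorted by estimated p(target | source)."""
    total = sum(counts[src].values())
    return [(tgt, n / total) for tgt, n in counts[src].most_common()]

print(ranked_translations("kutya"))
# [('dog', 0.857...), ('hound', 0.142...)] -> 'dog' goes first in the entry
```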
inproceedings | bunt-etal-2010-towards | Towards an {ISO} Standard for Dialogue Act Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1385/ | Bunt, Harry and Alexandersson, Jan and Carletta, Jean and Choe, Jae-Woong and Fang, Alex Chengyu and Hasida, Koiti and Lee, Kiyong and Petukhova, Volha and Popescu-Belis, Andrei and Romary, Laurent and Soria, Claudia and Traum, David | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes an ISO project which aims at developing a standard for annotating spoken and multimodal dialogue with semantic information concerning the communicative functions of utterances, the kind of semantic content they address, and their relations with what was said and done earlier in the dialogue. The project, ISO 24617-2 "Semantic annotation framework, Part 2: Dialogue acts", is currently at DIS stage. The proposed annotation schema distinguishes 9 orthogonal dimensions, allowing each functional segment in dialogue to have a function in each of these dimensions, thus accounting for the multifunctionality that utterances in dialogue often have. A number of core communicative functions are defined in the form of ISO data categories, available at \url{http://semantic-annotation.uvt.nl/dialogue-acts/iso-datcats.pdf}; they are divided into "dimension-specific" functions, which can be used only in a particular dimension, such as Turn Accept in the Turn Management dimension, and "general-purpose" functions, which can be used in any dimension, such as Inform and Request. An XML-based annotation language, "DiAML", is defined, with an abstract syntax, a semantics, and a concrete syntax. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,268
inproceedings | bhatia-etal-2010-empty | Empty Categories in a {H}indi Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1386/ | Bhatia, Archna and Bhatt, Rajesh and Narasimhan, Bhuvana and Palmer, Martha and Rambow, Owen and Sharma, Dipti Misra and Tepper, Michael and Vaidya, Ashwini and Xia, Fei | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We are in the process of creating a multi-representational and multi-layered treebank for Hindi/Urdu (Palmer et al., 2009), which has three main layers: dependency structure, predicate-argument structure (PropBank), and phrase structure. This paper discusses an important issue in treebank design which is often neglected: the use of empty categories (ECs). All three levels of representation make use of ECs. We make a high-level distinction between two types of ECs, trace and silent, on the basis of whether they are postulated to mark displacement or not. Each type is further refined into several subtypes based on the underlying linguistic phenomena which the ECs are introduced to handle. This paper discusses the stages at which we add ECs to the Hindi/Urdu treebank and why. We investigate methodically the different types of ECs and their role in our syntactic and semantic representations. We also examine our decisions whether or not to coindex each type of ECs with other elements in the representation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,269 |
inproceedings | lloberes-etal-2010-spanish | {S}panish {F}ree{L}ing Dependency Grammar | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1387/ | Lloberes, Marina and Castell{\'o}n, Irene and Padr{\'o}, Llu{\'i}s | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the development of an open-source Spanish Dependency Grammar implemented in the FreeLing environment. This grammar was designed as a resource for NLP applications that require a step further in automatic natural language analysis, as is the case of Spanish-to-Basque translation. The development of wide-coverage rule-based grammars using linguistic knowledge contributes to extend the existing Spanish deep parsers collection, which is sometimes limited. The Spanish FreeLing Dependency Grammar, named EsTxala, provides deep and robust parse trees, solving attachments for any structure and assigning syntactic functions to dependencies. These steps are dealt with hand-written rules based on linguistic knowledge. As a result, the FreeLing Dependency Parser gives a unique analysis as a dependency tree for each sentence analyzed. Since it is a resource open to the scientific community, exhaustive grammar evaluation is being done to determine its accuracy as well as strategies for its maintenance and improvement. In this paper, we show the results of an experimental evaluation carried out over EsTxala in order to test our evaluation methodology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,270
inproceedings | duran-etal-2010-assigning | Assigning Wh-Questions to Verbal Arguments: Annotation Tools Evaluation and Corpus Building | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1388/ | Duran, Magali Sanches and Am{\^a}ncio, Marcelo Adriano and Alu{\'i}sio, Sandra Maria | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This work reports the evaluation and selection of annotation tools to assign wh-question labels to verbal arguments in a sentence. Wh-question assignment discussed herein is a kind of semantic annotation which involves two tasks: delimiting verbs and arguments, and linking verbs to their arguments by question labels. As this is a new type of semantic annotation, there is no prior report on the requirements an annotation tool should meet for it. For this reason, we decided to select the most appropriate tool in two phases. In the first phase, we executed the task with an annotation tool we had used before in another task. This phase helped us test the task and identify which features were or were not desirable in an annotation tool for our purpose. In the second phase, guided by these requirements, we evaluated several tools and selected a tool for the real task. After concluding the corpus annotation, we report some of the annotation results and comment on the improvements that should be made to an annotation tool to better support this kind of annotation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,271
inproceedings | grishman-2010-impact | The Impact of Task and Corpus on Event Extraction Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1389/ | Grishman, Ralph | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The term event extraction covers a wide range of information extraction tasks, and methods developed and evaluated for one task may prove quite unsuitable for another. Understanding these task differences is essential to making broad progress in event extraction. We look back at the MUC and ACE tasks in terms of one characteristic, the breadth of the scenario {\textemdash} how wide a range of information is subsumed in a single extraction task. We examine how this affects strategies for collecting information and methods for semi-supervised training of new extractors. We also consider the heterogeneity of corpora {\textemdash} how varied the topics of documents in a corpus are. Extraction systems may be intended in principle for general news but are typically evaluated on topic-focused corpora, and this evaluation context may affect system design. As one case study, we examine the task of identifying physical attack events in news corpora, observing the effect on system performance of shifting from an attack-event-rich corpus to a more varied corpus and considering how the impact of this shift may be mitigated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,272 |
inproceedings | kulick-etal-2010-consistent | Consistent and Flexible Integration of Morphological Annotation in the {A}rabic Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1390/ | Kulick, Seth and Bies, Ann and Maamouri, Mohamed | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Complications arise for standoff annotation when the annotation is not on the source text itself, but on a more abstract representation. This is particularly the case in a language such as Arabic with morphological and orthographic challenges, and we discuss various aspects of these issues in the context of the Arabic Treebank. The Standard Arabic Morphological Analyzer (SAMA) is closely integrated into the annotation workflow, as the basis for the abstraction between the explicit source text and the more abstract token representation. However, this integration with SAMA gives rise to various problems for the annotation workflow and for maintaining the link between the Treebank and SAMA. In this paper we discuss how we have overcome these problems with consistent and more precise categorization of all of the tokens for their relationship with SAMA. We also discuss how we have improved the creation of several distinct alternative forms of the tokens used in the syntactic trees. As a result, the Treebank provides a resource relating the different forms of the same underlying token with varying degrees of vocalization, in terms of how they relate (1) to each other, (2) to the syntactic structure, and (3) to the morphological analyzer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,273 |
inproceedings | zaninello-nissim-2010-creation | Creation of Lexical Resources for a Characterisation of Multiword Expressions in {I}talian | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1391/ | Zaninello, Andrea and Nissim, Malvina | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The theoretical characterisation of multiword expressions (MWEs) is tightly connected to their actual occurrences in data and to their representation in lexical resources. We present three lexical resources for Italian MWEs, namely an electronic lexicon, a series of example corpora and a database of MWEs represented around morphosyntactic patterns. These resources are matched against, and created from, a very large web-derived corpus for Italian that spans across registers and domains. We can thus test expressions coded by lexicographers in a dictionary, thereby discarding unattested expressions, revisiting lexicographers' choices on the basis of frequency information, and at the same time creating an example sub-corpus for each entry. We organise MWEs on the basis of the morphosyntactic information obtained from the data in an electronic, flexible knowledge-base containing structured annotation exploitable for multiple purposes. We also suggest further work directions towards characterising MWEs by analysing the data organised in our database through lexico-semantic information available in WordNet or MultiWordNet-like resources, also in the perspective of expanding their set through the extraction of other similar compact expressions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,274
inproceedings | sindlerova-bojar-2010-building | Building a Bilingual {V}al{L}ex Using Treebank Token Alignment: First Observations | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1392/ | {\v{S}}indlerov{\'a}, Jana and Bojar, Ond{\v{r}}ej | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We explore the potential and limitations of a concept of building a bilingual valency lexicon based on the alignment of nodes in a parallel treebank. Our aim is to build an electronic Czech-{\ensuremath{>}}English Valency Lexicon by collecting equivalences from bilingual treebank data and storing them in two already existing electronic valency lexicons, PDT-VALLEX and Engvallex. For this task a special annotation interface has been built upon the TrEd editor, allowing quick and easy collecting of frame equivalences in either of the source lexicons. The issues encountered so far include limitations of technical character, theory-dependent limitations and limitations concerning the achievable degree of quality of human annotation. The issues of special interest for both linguists and MT specialists involved in the project include linguistically motivated non-balance between the frame equivalents, either in number or in type of valency participants. The first phases of annotation so far attest the assumption that there is a unique correspondence between the functors of the translation-equivalent frames. Also, hardly any linguistically significant non-balance between the frames has been found, which is partly promising considering the linguistic theory used and partly caused by little stylistic variety of the annotated corpus texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,275 |
inproceedings | diaz-etal-2010-development | Development and Use of an Evaluation Collection for Personalisation of Digital Newspapers | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1393/ | D{\'i}az, Alberto and Gerv{\'a}s, Pablo and Garc{\'i}a, Antonio and Plaza, Laura | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the process of development and the characteristics of an evaluation collection for a personalisation system for digital newspapers. This system selects, adapts and presents contents according to a user model that defines information needs. The collection presented here contains data that are cross-related over four different axes: a set of news items from an electronic newspaper, collected into subsets corresponding to a particular sequence of days, packaged together and cross-indexed with a set of user profiles that represent the particular evolution of interests of a set of real users over the given days, expressed in each case according to four different representation frameworks: newspaper sections, Yahoo categories, keywords, and relevance feedback over the set of news items for the previous day. This information provides a minimum starting material over which one can evaluate for a given system how it addresses the first two observations - adapting to different users and adapting to particular users over time - provided that the particular system implements the representation of information needs according to the four frameworks employed in the collection. This collection has been successfully used to perform several experiments to determine the effectiveness of the personalization system presented. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,276
inproceedings | clark-lavie-2010-loonybin | {L}oony{B}in: Keeping Language Technologists Sane through Automated Management of Experimental (Hyper)Workflows | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1394/ | Clark, Jonathan H. and Lavie, Alon | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Many contemporary language technology systems are characterized by long pipelines of tools with complex dependencies. Too often, these workflows are implemented by ad hoc scripts; or, worse, tools are run manually, making experiments difficult to reproduce. These practices are difficult to maintain in the face of rapidly evolving workflows while they also fail to expose and record important details about intermediate data. Further complicating these systems are hyperparameters, which often cannot be directly optimized by conventional methods, requiring users to determine which combination of values is best via trial and error. We describe LoonyBin, an open-source tool that addresses these issues by providing: 1) a visual interface for the user to create and modify workflows; 2) a well-defined mechanism for tracking metadata and provenance; 3) a script generator that compiles visual workflows into shell scripts; and 4) a new workflow representation we call a HyperWorkflow, which intuitively and succinctly encodes small experimental variations within a larger workflow. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,277 |
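The HyperWorkflow idea described above, encoding small experimental variations inside a larger workflow, can be illustrated by expanding a parameter grid into one concrete command per configuration. The hyperparameters and the command template below are invented; LoonyBin itself compiles visual workflows into full shell scripts rather than operating on Python dicts.

```python
# Toy analogue of a HyperWorkflow: a base pipeline plus hyperparameter
# variations, compiled into one shell command per configuration.
import itertools

hyperparams = {
    "lm_order": [3, 5],
    "alignment": ["grow-diag-final", "intersection"],
}

def expand(template, grid):
    """Yield one formatted command for every point in the grid."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        config = dict(zip(keys, values))
        yield template.format(**config)

template = "train.sh --lm-order {lm_order} --alignment {alignment}"
for cmd in expand(template, hyperparams):
    print(cmd)  # four commands, one per point in the 2x2 grid
```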
inproceedings | miller-etal-2010-improving | Improving Personal Name Search in the {TIGR} System | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1395/ | Miller, Keith J. and McLeod, Sarah and Schroeder, Elizabeth and Arehart, Mark and Samuel, Kenneth and Finley, James and Jurica, Vanesa and Polk, John | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the development and evaluation of enhancements to the specialized information retrieval capabilities of a multimodal reporting system. The system enables collection and dissemination of information through a distributed data architecture by allowing users to input free text documents, which are indexed for subsequent search and retrieval by other users. This unstructured data entry method is essential for users of this system, but it requires an intelligent support system for processing queries against the data. The system, known as TIGR (Tactical Ground Reporting), allows keyword searching and geospatial filtering of results, but lacked the ability to efficiently index and search person names and perform approximate name matching. To improve TIGR's ability to provide accurate, comprehensive results for queries on person names we iteratively updated existing entity extraction and name matching technologies to better align with the TIGR use case. We evaluated each version of the entity extraction and name matching components to find the optimal configuration for the TIGR context, and combined those pieces into a named entity extraction, indexing, and search module that integrates with the current TIGR system. By comparing system-level evaluations of the original and updated TIGR search processes, we show that our enhancements to personal name search significantly improved the performance of the overall information retrieval capabilities of the TIGR system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,278
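Approximate personal-name matching of the kind the TIGR enhancements require can be sketched with token-order normalization plus a string-similarity score. Here difflib stands in for the project's actual name-matching technology, which the abstract does not specify; the query pairs are invented.

```python
# Sketch: order-insensitive approximate matching of personal names.
from difflib import SequenceMatcher

def name_similarity(a, b):
    # Lowercase, drop commas, and sort tokens so "Smith, John" ~ "John Smith"
    norm = lambda s: " ".join(sorted(s.lower().replace(",", " ").split()))
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

queries = [("Mohammed al-Rashid", "Muhammad Al Rashid"),
           ("Smith, John", "John Smith")]
for a, b in queries:
    print(f"{a!r} vs {b!r}: {name_similarity(a, b):.2f}")
```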
inproceedings | laskowski-edlund-2010-snack | A Snack Implementation and Tcl/Tk Interface to the Fundamental Frequency Variation Spectrum Algorithm | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1396/ | Laskowski, Kornel and Edlund, Jens | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Intonation is an important aspect of vocal production, used for a variety of communicative needs. Its modeling is therefore crucial in many speech understanding systems, particularly those requiring inference of speaker intent in real-time. However, the estimation of pitch, traditionally the first step in intonation modeling, is computationally inconvenient in such scenarios. This is because it is often, and most optimally, achieved only after speech segmentation and recognition. A consequence is that earlier speech processing components, in today's state-of-the-art systems, lack intonation awareness by fiat; it is not known to what extent this circumscribes their performance. In the current work, we present a freely available implementation of an alternative to pitch estimation, namely the computation of the fundamental frequency variation (FFV) spectrum, which can be easily employed at any level within a speech processing system. It is our hope that the implementation we describe aids in the understanding of this novel acoustic feature space, and that it facilitates its inclusion, as desired, in the front-end routines of speech recognition, dialog act recognition, and speaker recognition systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,279
inproceedings | viethen-etal-2010-dialogue | Dialogue Reference in a Visual Domain | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1397/ | Viethen, Jette and Zwarts, Simon and Dale, Robert and Guhe, Markus | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | A central purpose of referring expressions is to distinguish intended referents from other entities that are in the context; but how is this context determined? This paper draws a distinction between discourse context {\textemdash}other entities that have been mentioned in the dialogue{\textemdash} and visual context {\textemdash}visually available objects near the intended referent. It explores how these two different aspects of context have an impact on subsequent reference in a dialogic situation where the speakers share both discourse and visual context. In addition we take into account the impact of the reference history {\textemdash}forms of reference used previously in the discourse{\textemdash} on forming what have been called conceptual pacts. By comparing the output of different parameter settings in our model to a data set of human-produced referring expressions, we determine that an approach to subsequent reference based on conceptual pacts provides a better explanation of our data than previously proposed algorithmic approaches which compute a new distinguishing description for the intended referent every time it is mentioned. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,280 |
inproceedings | hara-etal-2010-estimation | Estimation Method of User Satisfaction Using N-gram-based Dialog History Model for Spoken Dialog System | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1398/ | Hara, Sunao and Kitaoka, Norihide and Takeda, Kazuya | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we propose an estimation method for user satisfaction with a spoken dialog system using an N-gram-based dialog history model. We have collected a large amount of spoken dialog data accompanied by usability evaluation scores given by users in real environments. The database was built through a field test in which naive users used a client-server music retrieval system with a spoken dialog interface on their own PCs. An N-gram model is trained from the sequences that consist of users' dialog acts and/or the system's dialog acts for each one of six user satisfaction levels: from 1 to 5 and {\ensuremath{\varphi}} (task not completed). Then, the satisfaction level is estimated based on the N-gram likelihood. Experiments were conducted on this large set of real data, and the results show that our proposed method achieved good classification performance; the classification accuracy was 94.7{\%} in the experiment classifying dialogs into those with and those without task completion. Even when the classifier detected all of the task-incomplete dialogs correctly, our proposed method achieved a false detection rate of only 6{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,281
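To make the likelihood-based estimation in the abstract above concrete, the following is a minimal sketch, assuming toy dialog-act sequences and add-alpha-smoothed bigram models; the paper's actual N-gram order, smoothing method, and dialog-act inventory are not given here, and train_bigram / classify are hypothetical names.

from collections import defaultdict
import math

def train_bigram(sequences, alpha=1.0):
    # Add-alpha smoothed bigram model over dialog-act sequences (a sketch,
    # not the authors' configuration).
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for seq in sequences:
        padded = ["<s>"] + seq + ["</s>"]
        vocab.update(padded)
        for prev, cur in zip(padded, padded[1:]):
            counts[prev][cur] += 1
    V = len(vocab)
    def logprob(seq):
        padded = ["<s>"] + seq + ["</s>"]
        lp = 0.0
        for prev, cur in zip(padded, padded[1:]):
            total = sum(counts[prev].values())
            lp += math.log((counts[prev][cur] + alpha) / (total + alpha * V))
        return lp
    return logprob

def classify(dialog, models):
    # Pick the satisfaction level whose model assigns the highest likelihood.
    return max(models, key=lambda level: models[level](dialog))

# Hypothetical toy data: one dialog-act sequence per satisfaction level.
train = {"5": [["greet", "request", "play", "thanks"]],
         "1": [["greet", "request", "reject", "request", "reject"]]}
models = {lvl: train_bigram(seqs) for lvl, seqs in train.items()}
print(classify(["greet", "request", "play"], models))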
inproceedings | chen-etal-2010-language | A Language Approach to Modeling Human Behaviors | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1399/ | Chen, Peng-Wen and Chennuru, Snehal Kumar and Zhang, Ying | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The modeling of human behavior is becoming more and more important due to the increasing popularity of context-aware computing and people-centric mobile applications. Inspired by the principle of action-as-language, we propose that human ambulatory behavior shares similar properties with natural languages. In addition, by exploiting this similarity, we will be able to index, recognize, cluster, retrieve, and infer high-level semantic meanings of human behaviors via the use of natural language processing techniques. In this paper, we developed a Life Logger system to help build the behavior language corpus which supports our ``Behavior as Language'' research. The constructed behavior corpus shows Zipf's distribution over the frequency of vocabularies, which is aligned with our ``Behavior as Language'' assumption. Our preliminary results of using a smoothed n-gram language model for activity recognition achieved an average accuracy rate of 94{\%} in distinguishing among human ambulatory behaviors including walking, running, and cycling. This behavior-as-language corpus will enable researchers to study higher level human behavior based on the syntactic and semantic analysis of the corpus data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,282
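As a rough illustration of the Zipf check mentioned above, here is a small sketch (an assumption on my part, not code from the paper) that fits the slope of log-frequency against log-rank for a token sequence; a slope near -1 suggests a Zipf-like distribution.

import math
from collections import Counter

def zipf_slope(tokens):
    # Least-squares slope of log(frequency) vs. log(rank).
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den  # close to -1 for Zipfian data

# Toy behavior "words"; real input would be the logged activity tokens.
print(zipf_slope("walk walk walk run run cycle".split()))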
inproceedings | murata-etal-2010-construction | Construction of Chunk-Aligned Bilingual Lecture Corpus for Simultaneous Machine Translation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1400/ | Murata, Masaki and Ohno, Tomohiro and Matsubara, Shigeki and Inagaki, Yasuyoshi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | With the development of speech and language processing, speech translation systems have been developed. These systems target spoken dialogues and employ consecutive interpretation, which uses the sentence as the translation unit. On the other hand, there has been little research on simultaneous interpreting, although language resources for promoting such research, such as an analytical large-scale corpus, have recently been prepared. For the future, it is necessary to make the corpora more practical toward the realization of a simultaneous interpreting system. In this paper, we describe the construction of a bilingual corpus which can be used for simultaneous lecture interpreting research. Simultaneous lecture interpreting systems are required to recognize translation units in the middle of a sentence, and to generate their translation at the proper timing. We constructed the bilingual lecture corpus by the following steps. First, we segmented sentences in the lecture data into semantically meaningful units for simultaneous interpreting. Then, we assigned translations to these units from the viewpoint of simultaneous interpreting. In addition, we investigated the possibility of automatically detecting the simultaneous interpreting timing from our corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,283
inproceedings | afantenos-etal-2010-learning | Learning Recursive Segments for Discourse Parsing | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1401/ | Afantenos, Stergos and Denis, Pascal and Muller, Philippe and Danlos, Laurence | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Automatically detecting discourse segments is an important preliminary step towards full discourse parsing. Previous research on discourse segmentation has relied on the assumption that elementary discourse units (EDUs) in a document always form a linear sequence (i.e., they can never be nested). Unfortunately, this assumption turns out to be too strong, for some theories of discourse, like the ``Segmented Discourse Representation Theory'' or SDRT, allow for nested discourse units. In this paper, we present a simple approach to discourse segmentation that is able to produce nested EDUs. Our approach builds on standard multi-class classification techniques making use of a regularized maximum entropy model, combined with a simple repairing heuristic that enforces global coherence. Our system was developed and evaluated on the first round of annotations provided by the French Annodis project (an ongoing effort to create a discourse bank for French). Cross-validated on only 47 documents (1,445 EDUs), our system achieves encouraging performance results with an F-score of 73{\%} for finding EDUs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,284
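The repairing heuristic in the abstract above is only named, not specified; the following is a hedged sketch of one plausible bracket-repair pass for nested EDU predictions, an assumed simplification rather than the Annodis system's actual heuristic. Per-token predictions are over "[" (open a segment), "]" (close), and "-" (neither).

def repair(labels):
    # Drop closes with no matching open, then force any still-open
    # segments closed at the end, so brackets stay balanced and nested.
    depth, fixed = 0, []
    for lab in labels:
        if lab == "]" and depth == 0:
            lab = "-"          # discard an unmatched close
        if lab == "[":
            depth += 1
        elif lab == "]":
            depth -= 1
        fixed.append(lab)
    fixed.extend("]" * depth)  # close remaining open segments
    return fixed

print(repair(["[", "-", "[", "-", "]"]))  # -> ['[', '-', '[', '-', ']', ']']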
inproceedings | zhang-etal-2010-extracting | Extracting Product Features and Sentiments from {C}hinese Customer Reviews | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1402/ | Zhang, Shu and Jia, Wenjie and Xia, Yingju and Meng, Yao and Yu, Hao | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | With the growing interest in opinion mining from web data, more work is focusing on mining English and Chinese reviews. Probing into the problem of product opinion mining, this paper describes the details of our language resources and applies them to the task of extracting product features and sentiments. Different from the traditional unsupervised methods, a supervised method is utilized to identify product features, combining the domain knowledge and lexical information. Nearest vicinity match and syntactic tree based methods are proposed to identify the opinions regarding the product features. A multi-level analysis module is proposed to determine the sentiment orientation of the opinions. In experiments on the electronic reviews of COAE 2008, the validity of the product features identified by CRFs and of the two opinion word identification methods is tested and compared. The results show the resource is well utilized in this task and our proposed method is valid. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,285
inproceedings | bohnet-wanner-2010-open | Open Source Graph Transducer Interpreter and Grammar Development Environment | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1403/ | Bohnet, Bernd and Wanner, Leo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Graph and tree transducers have been applied in many NLP areas{\textemdash}among them, machine translation, summarization, parsing, and text generation. In particular, the successful use of tree rewriting transducers for the introduction of syntactic structures in statistical machine translation contributed to their popularity. However, the potential of such transducers is limited because they do not handle graphs and because they consume the source structure in that they rewrite it instead of leaving it intact for intermediate consultations. In this paper, we describe an open source tree and graph transducer interpreter, which combines the advantages of graph transducers and two-tape Finite State Transducers and surpasses the limitations of state-of-the-art tree rewriting transducers. Along with the transducer, we present a graph grammar development environment that supports the compilation and maintenance of graph transducer grammatical and lexical resources. Such an environment is indispensable for any effort to create consistent large-coverage NLP resources by human experts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,286
inproceedings | kwon-etal-2010-linking | Linking {K}orean Words with an Ontology | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1404/ | Kwon, Min-Jae and Lee, Hae-Yun and Chae, Hee-Rahk | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The need for ontologies in computer science and information science has increased recently. In particular, NLP systems such as information retrieval, machine translation, etc. require ontologies whose concepts are connected to natural language words. There are a few Korean wordnets such as U-WIN, KorLex, CoreNet, etc. Most of them, however, stand alone without any link to an ontology. Hence, we need a Korean wordnet which is linked to a language-neutral ontology such as SUMO, OpenCyc, DOLCE, etc. In this paper, we will present a method of linking Korean word senses with the concepts of an ontology, which is part of an ongoing project. We use a Korean-English bilingual dictionary, Princeton WordNet (Fellbaum 1998), and the ontology SmartSUMO (Oberle et al. 2007). The current version of WordNet is mapped into SUMO, which constitutes a major part of SmartSUMO. We focus on mapping Korean word senses to corresponding English word senses by way of Princeton WordNet, which is mapped into SUMO. This paper will show that we need to apply different algorithms of linking, depending on the information types that a bilingual dictionary contains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,287
inproceedings | ploux-etal-2010-semantic | The Semantic Atlas: an Interactive Model of Lexical Representation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1405/ | Ploux, Sabine and Boussidan, Armelle and Ji, Hyungsuk | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we describe two geometrical models of meaning representation, the Semantic Atlas (SA) and the Automatic Contexonym Organizing Model (ACOM). The SA provides maps of meaning generated through correspondence factor analysis. The models can handle different types of word relations: synonymy in the SA and co-occurrence in ACOM. Their originality lies in an artifact called `cliques' - a fine-grained infra-linguistic sub-unit of meaning. The SA is composed of several dictionaries and thesauri enhanced with a process of symmetrisation. It is currently available for French and English in monolingual versions as well as in a bilingual translation version. Other languages are under development and testing. ACOM deals with unannotated corpora. The models are used by research teams worldwide that investigate synonymy, translation processes, genre comparison, psycholinguistics and polysemy modeling. Both models can be consulted online via a flexible interface allowing for interactive navigation on \url{http://dico.isc.cnrs.fr}. This site is the most consulted address in the domain of the French National Center for Scientific Research (CNRS), one of the major research bodies in France. The international interest it has triggered led us to initiate the process of going open source. In the meantime, all our databases are freely available on request. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,288
inproceedings | araujo-etal-2010-sinotas | {SIN}otas: the Evaluation of a {NLG} Application | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1406/ | Araujo, Roberto P. A. and de Oliveira, Rafael L. and de Novais, Eder M. and Tadeu, Thiago D. and Pereira, Daniel B. and Paraboni, Ivandr{\'e} | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | SINotas is a data-to-text NLG application intended to produce short textual reports on students' academic performance from a database conveying their grades, weekly attendance rates and related academic information. Although developed primarily as a testbed for Portuguese Natural Language Generation, SINotas generates reports of interest both to students keen to learn how their professors would describe their efforts, and to the professors themselves, who may benefit from an at-a-glance view of the students' performance. In a traditional machine learning approach, SINotas uses a data-text aligned corpus as training data for decision-tree induction. The current system comprises a series of classifiers that implement major Document Planning subtasks (namely, data interpretation, content selection, within- and between-sentence structuring), and a small surface realisation grammar of Brazilian Portuguese. In this paper we focus on the evaluation of the system, applying a number of intrinsic and user-based evaluation metrics to a collection of text reports generated from real application data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,289
inproceedings | poggi-etal-2010-types | Types of Nods. The Polysemy of a Social Signal | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1407/ | Poggi, Isabella and D{'}Errico, Francesca and Vincze, Laura | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The work analyses the head nod, a down-up movement of the head, as a polysemic social signal, that is, a signal with a number of different meanings which all share some common semantic element. Based on the analysis of 100 nods drawn from the SSPNet corpus of TV political debates, a typology of nods is presented that distinguishes Speaker's, Interlocutor's and Third Listener's nods, with their subtypes (confirmation, agreement, approval, submission and permission, greeting and thanks, backchannel giving and backchannel request, emphasis, ironic agreement, literal and rhetoric question, and others). For each nod the analysis specifies: 1. characteristic features of how it is produced, including main direction, amplitude, velocity and number of repetitions; 2. cues in other modalities, like direction and duration of gaze; 3. conversational context in which the nod typically occurs. For the Interlocutor's or Third Listener's nod, the preceding speech act is relevant: yes/no answer or information for a nod of confirmation, expression of opinion for one of agreement, prosocial action for greetings and thanks; for the Speaker's nods, instead, their meanings are mainly distinguished by accompanying signals. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,290
inproceedings | altosaar-etal-2010-speech | A Speech Corpus for Modeling Language Acquisition: {CAREGIVER} | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1408/ | Altosaar, Toomas and ten Bosch, Louis and Aimetti, Guillaume and Koniaris, Christos and Demuynck, Kris and van den Heuvel, Henk | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | CAREGIVER, a multi-lingual speech corpus for modeling language acquisition, has been designed and recorded within the framework of the EU-funded Acquisition of Communication and Recognition Skills (ACORNS) project. The paper describes the motivation behind the corpus and its design by relying on current knowledge regarding infant language acquisition. Instead of recording infants and children, the voices of their primary and secondary caregivers were captured in both infant-directed and adult-directed speech modes over four languages in a read speech manner. The challenges and methods applied to obtain similar prompts in terms of complexity and semantics across different languages, as well as the normalized recording procedures employed at different locations, are covered. The corpus contains nearly 66000 utterance based audio files spoken over a two-year period by 17 male and 17 female native speakers of Dutch, English, Finnish, and Swedish. An orthographical transcription is available for every utterance. Time-aligned word and phone annotations also exist for many of the sub-corpora. The CAREGIVER corpus will be published via ELRA. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,291
inproceedings | seljan-etal-2010-corpus | Corpus Aligner ({C}or{A}l) Evaluation on {E}nglish-{C}roatian Parallel Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1409/ | Seljan, Sanja and Tadi{\'c}, Marko and Agi{\'c}, {\v{Z}}eljko and {\v{S}}najder, Jan and Ba{\v{s}}i{\'c}, Bojana Dalbelo and Osmann, Vjekoslav | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | An increasing demand for new language resources of recent EU members and acceding countries has in turn initiated the development of different language tools and resources, such as alignment tools and corresponding translation memories for new language pairs. The primary goal of this paper is to provide a description of CorAl (Corpus Aligner), a free sentence alignment tool developed at the Faculty of Electrical Engineering and Computing, University of Zagreb. The tool performs paragraph alignment as the first step of the alignment process, which is followed by sentence alignment. Description of the tool is followed by its evaluation. The paper describes an experiment applying the CorAl aligner to an English-Croatian parallel corpus from the legislative domain, using the metrics of precision, recall and F1-measure. Results are presented, and the concluding sections discuss future directions of CorAl development. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,292
inproceedings | orasmaa-etal-2010-information | Information Retrieval of Word Form Variants in Spoken Language Corpora Using Generalized Edit Distance | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1410/ | Orasmaa, Siim and K{\"a}{\"a}rik, Reina and Vilo, Jaak and Hennoste, Tiit | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | An important feature of spoken language corpora is the existence of different spelling variants of words in transcription. So there is an important problem for linguists who work with large spoken corpora: how can all variants of a word be found without annotating them manually? Our work describes a search engine that enables finding different spelling variants (true positives) from a corpus of spoken language, and reduces efficiently the amount of false positives returned during the search. Our search engine uses a generalized variant of the edit distance algorithm that allows defining text-specific string-to-string transformations in addition to the default edit operations defined in edit distance. We have extended our algorithm with the capability to block transformations in specific substrings of search words. Users can mark certain regions (blocked regions) of the search word where edit operations are not allowed. Our material comes from the Corpus of Spoken Estonian of the University of Tartu which consists of about 2000 dialogues and texts, about 1.4 million running text units in total. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,293
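A minimal sketch of the idea described above: a generalized edit distance with text-specific substitution costs and blocked positions of the search word where no edit may apply. The exact blocking semantics and cost scheme of the authors' engine are not given here, so this is an assumed simplification (insertions are allowed everywhere; substitutions and deletions are forbidden at blocked indices).

def gen_edit_distance(search, cand, blocked=frozenset(), subs=None):
    # blocked: indices of `search` that must match exactly.
    # subs: optional {(a, b): cost} text-specific substitution costs.
    subs = subs or {}
    INF = float("inf")
    n, m = len(search), len(cand)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            cur = d[i][j]
            if cur == INF:
                continue
            if i < n and j < m:
                if search[i] == cand[j]:                       # match
                    d[i + 1][j + 1] = min(d[i + 1][j + 1], cur)
                elif i not in blocked:                         # substitute
                    cost = subs.get((search[i], cand[j]), 1.0)
                    d[i + 1][j + 1] = min(d[i + 1][j + 1], cur + cost)
            if i < n and i not in blocked:                     # delete
                d[i + 1][j] = min(d[i + 1][j], cur + 1.0)
            if j < m:                                          # insert
                d[i][j + 1] = min(d[i][j + 1], cur + 1.0)
    return d[n][m]

# One insertion matches the lengthened variant; the first two characters
# are protected from edits. Toy Estonian-like strings, not corpus data.
print(gen_edit_distance("siis", "siiis", blocked={0, 1}))  # -> 1.0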
inproceedings | marimon-2010-spanish | The {S}panish Resource Grammar | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1411/ | Marimon, Montserrat | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the Spanish Resource Grammar, an open-source multi-purpose broad-coverage precise grammar for Spanish. The grammar is implemented on the Linguistic Knowledge Builder (LKB) system, it is grounded in the theoretical framework of Head-driven Phrase Structure Grammar (HPSG), and it uses Minimal Recursion Semantics (MRS) for the semantic representation. We have developed a hybrid architecture which integrates shallow processing functionalities -- morphological analysis, and Named Entity recognition and classification -- into the parsing process. The SRG has a full coverage lexicon of closed word classes and it contains 50,852 lexical entries for open word classes. The grammar also has 64 lexical rules to perform valence changing operations on lexical items, and 191 phrase structure rules that combine words and phrases into larger constituents and compositionally build up their semantic representation. The annotation of each parsed sentence in an LKB grammar simultaneously represents a traditional phrase structure tree, and a MRS semantic representation. We provide evaluation results on sentences from newspaper texts and discuss future work. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,294 |
inproceedings | vilnat-etal-2010-passage | {PASSAGE} Syntactic Representation: a Minimal Common Ground for Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1412/ | Vilnat, Anne and Paroubek, Patrick and Villemonte de la Clergerie, Eric and Francopoulo, Gil and Gu{\'e}not, Marie-Laure | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The current PASSAGE syntactic representation is the result of 9 years of constant evolution with the aim of providing a common ground for evaluating parsers of French whatever their type and supporting theory. In this paper we present the latest developments concerning the formalism and show first through a review of basic linguistic phenomena that it is a plausible minimal common ground for representing French syntax in the context of generic black box quantitative objective evaluation. For the phenomena reviewed, which include: the notion of syntactic head, apposition, control and coordination, we explain how PASSAGE representation relates to other syntactic representation schemes for French and English, slightly extending the annotation to address English when needed. Second, we describe the XML format chosen for PASSAGE and show that it is compliant with the latest propositions in terms of linguistic annotation standard. We conclude discussing the influence that corpus-based evaluation has on the characteristics of syntactic representation when willing to assess the performance of any kind of parser. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,295 |
inproceedings | fishel-kirik-2010-linguistically | Linguistically Motivated Unsupervised Segmentation for Machine Translation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1413/ | Fishel, Mark and Kirik, Harri | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we use statistical machine translation and morphology information from two different morphological analyzers to try to improve translation quality by linguistically motivated segmentation. The morphological analyzers we use are the unsupervised Morfessor morpheme segmentation and analyzer toolkit and the rule-based morphological analyzer T3. Our translations are done using the Moses statistical machine translation toolkit, with training on the JRC-Acquis corpora and translating in the Estonian-to-English and English-to-Estonian directions. In our work we model linguistic phenomena such as word lemmas and endings, and split compound words into simpler parts. Lemma information was also used to introduce new factors to the corpora, supporting better word alignment or alternative-path back-off translation. From the results we find that even though these methods have previously shown and keep showing promise of improved translation quality, their success still largely depends on the corpora and language pairs used. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,296
inproceedings | strapparava-etal-2010-predicting | Predicting Persuasiveness in Political Discourses | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1414/ | Strapparava, Carlo and Guerini, Marco and Stock, Oliviero | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In political speeches, the audience tends to react or resonate to signals of persuasive communication, including an expected theme, a name or an expression. Automatically predicting the impact of such discourses is a challenging task. In fact nowadays, with the huge amount of textual material that flows on the Web (news, discourses, blogs, etc.), it can be useful to have a measure for testing the persuasiveness of what we retrieve or possibly of what we want to publish on the Web. In this paper we exploit a corpus of political discourses collected from various Web sources, tagged with audience reactions, such as applause, as indicators of persuasive expressions. In particular, we use this data set in a machine learning framework to explore the possibility of classifying the transcripts of political discourses according to their persuasive power, predicting the sentences that possibly trigger applause. We also explore differences between Democratic and Republican speeches, and test the resulting classifiers by grading some of the discourses from the Obama-McCain presidential campaign available on the Web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,297
inproceedings | cadic-etal-2010-towards | Towards Optimal {TTS} Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1415/ | Cadic, Didier and Boidin, C{\'e}dric and d{'}Alessandro, Christophe | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Unit selection text-to-speech systems currently produce very natural synthesized phrases by concatenating speech segments from a large database. Recently, increasing demand for designing high quality voices with less data has created a need for further optimization of the textual corpus recorded by the speaker. This corpus is traditionally the result of a condensation process: sentences are selected from a reference corpus, using an optimization algorithm (generally greedy) guided by the coverage rate of classic units (diphones, triphones, words, etc.). Such an approach is, however, strongly constrained by the finite content of the reference corpus, providing limited language possibilities. To gain flexibility in the optimization process, in this paper, we introduce a new corpus building procedure based on sentence construction rather than sentence selection. Sentences are generated using Finite State Transducers, assisted by a human operator and guided by a new frequency-weighted coverage criterion based on Vocalic Sandwiches. This semi-automatic process requires time-consuming human intervention but seems to give access to much denser corpora, with a density increase of 30 to 40{\%} for a given coverage rate. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,298
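To illustrate the greedy condensation baseline that the abstract above contrasts with its construction-based approach, here is a hedged sketch: repeatedly pick the sentence covering the most not-yet-covered units, weighting each unit by corpus frequency. Using character bigrams as the "units" is a toy assumption standing in for diphones or vocalic sandwiches.

from collections import Counter

def units(sentence):
    # Toy unit extractor: character bigrams (a stand-in for diphones etc.).
    s = sentence.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def greedy_select(sentences, budget):
    # Frequency-weighted greedy coverage over the reference corpus.
    freq = Counter(u for s in sentences for u in units(s))
    covered, chosen = set(), []
    for _ in range(budget):
        best = max(sentences,
                   key=lambda s: sum(freq[u] for u in units(s) - covered))
        chosen.append(best)
        covered |= units(best)
    return chosen

corpus = ["the cat sat", "a dog barked", "the dog sat down"]
print(greedy_select(corpus, 2))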
inproceedings | borg-etal-2010-automatic | Automatic Grammar Rule Extraction and Ranking for Definitions | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1416/ | Borg, Claudia and Rosner, Mike and Pace, Gordon J. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Plain text corpora contain much information which can only be accessed through human annotation and semantic analysis, which is typically very time consuming to perform. Analysis of such texts at a syntactic or grammatical structure level can however extract some of this information in an automated manner, even if identifying effective rules can be extremely difficult. One such type of implicit information present in texts is that of definitional phrases and sentences. In this paper, we investigate the use of evolutionary algorithms to learn classifiers to discriminate between definitional and non-definitional sentences in non-technical texts, and show how effective grammar-based definition discriminators can be automatically learnt with minor human intervention. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,299 |
inproceedings | rayner-etal-2010-multilingual | A Multilingual {CALL} Game Based on Speech Translation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1417/ | Rayner, Manny and Bouillon, Pierrette and Tsourakis, Nikos and Gerlach, Johanna and Georgescul, Maria and Nakao, Yukie and Baur, Claudia | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a multilingual Open Source CALL game, CALL-SLT, which reuses speech translation technology developed using the Regulus platform to create an automatic conversation partner that allows intermediate-level language students to improve their fluency. We contrast CALL-SLT with Wang's and Seneff's ``translation game'' system, in particular focussing on three issues. First, we argue that the grammar-based recognition architecture offered by Regulus is more suitable for this type of application; second, that it is preferable to prompt the student in a language-neutral form, rather than in the L1; and third, that we can profitably record successful interactions by native speakers and store them to be reused as online help for students. The current system, which will be demoed at the conference, supports four L2s (English, French, Japanese and Swedish) and two L1s (English and French). We conclude by describing an evaluation exercise, where a version of CALL-SLT configured for English L2 and French L1 was used by several hundred high school students. About half of the subjects reported positive impressions of the system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,300
inproceedings | adolphs-etal-2010-question | Question Answering Biographic Information and Social Network Powered by the Semantic Web | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1418/ | Adolphs, Peter and Cheng, Xiwen and Kl{\"u}wer, Tina and Uszkoreit, Hans and Xu, Feiyu | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | After several years of development, the vision of the Semantic Web is gradually becoming reality. Large data repositories have been created and offer semantic information in a machine-processable form for various domains. Semantic Web data can be published on the Web, gathered automatically, and reasoned about. All these developments open interesting perspectives for building a new class of domain-specific, broad-coverage information systems that overcome a long-standing bottleneck of AI systems, the notoriously incomplete knowledge base. We present a system that shows how the wealth of information in the Semantic Web can be interfaced with humans once again, using natural language for querying and answering rather than technical formalisms. Whereas current Question Answering systems typically select snippets from Web documents retrieved by a search engine, we utilize Semantic Web data, which allows us to provide natural-language answers that are tailored to the current dialog context. Furthermore, we show how to use natural language processing technologies to acquire new data and enrich existing data in a Semantic Web framework. Our system has acquired a rich biographic data resource by combining existing Semantic Web resources, which are discovered from semi-structured textual data in Web pages, with information extracted from free natural language texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,301
inproceedings | shutova-teufel-2010-metaphor | Metaphor Corpus Annotated for Source - Target Domain Mappings | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1419/ | Shutova, Ekaterina and Teufel, Simone | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Besides making our thoughts more vivid and filling our communication with richer imagery, metaphor also plays an important structural role in our cognition. Although there is a consensus in the linguistics and NLP research communities that the phenomenon of metaphor is not restricted to similarity-based extensions of meanings of isolated words, but rather involves reconceptualization of a whole area of experience (target domain) in terms of another (source domain), there still has been no proposal for a comprehensive procedure for annotation of cross-domain mappings. However, a corpus annotated for conceptual mappings could provide a new starting point for both linguistic and cognitive experiments. The annotation scheme we present in this paper is a step towards filling this gap. We test our procedure in an experimental setting involving multiple annotators and estimate their agreement on the task. The associated corpus annotated for source {\textemdash} target domain mappings will be publicly available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,302 |
inproceedings | sangati-etal-2010-efficiently | Efficiently Extract Recurring Tree Fragments from Large Treebanks | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1420/ | Sangati, Federico and Zuidema, Willem and Bod, Rens | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper we describe FragmentSeeker, a tool capable of identifying all tree constructions that recur multiple times in a large Phrase Structure treebank. The tool is based on an efficient kernel-based dynamic algorithm, which compares every pair of trees of a given treebank and computes the list of fragments which they both share. We describe two different notions of fragments we will use, i.e. standard and partial fragments, and provide the implementation details on how to extract them from a syntactically annotated corpus. We have tested our system on the Penn Wall Street Journal treebank, for which we present quantitative and qualitative analyses of the obtained recurring structures, as well as empirical time performance. Finally we propose possible ways our tool could contribute to different research fields related to corpus analysis and processing, such as parsing, corpus statistics, annotation guidance, and automatic detection of argument structure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,303
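As a much-simplified sketch of recurring-fragment extraction: the code below counts only depth-1 fragments (CFG productions) across a toy treebank, whereas FragmentSeeker compares all tree pairs with a kernel-based dynamic algorithm to find arbitrarily large shared fragments. Trees are represented as nested tuples, an assumption for the example.

from collections import Counter

def productions(tree):
    # Tree as (label, child, ...); leaves are plain strings.
    label, children = tree[0], tree[1:]
    yield (label, tuple(c if isinstance(c, str) else c[0] for c in children))
    for c in children:
        if not isinstance(c, str):
            yield from productions(c)

def recurring(treebank, min_count=2):
    # Keep fragments (here: productions) seen at least min_count times.
    counts = Counter(p for t in treebank for p in productions(t))
    return {frag: n for frag, n in counts.items() if n >= min_count}

t1 = ("S", ("NP", "he"), ("VP", "runs"))
t2 = ("S", ("NP", "she"), ("VP", "sleeps"))
print(recurring([t1, t2]))  # ('S', ('NP', 'VP')) recurs in both trees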
inproceedings | lazaridis-etal-2010-vergina | {V}ergina: A {M}odern {G}reek Speech Database for Speech Synthesis | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1421/ | Lazaridis, Alexandros and Kostoulas, Theodoros and Ganchev, Todor and Mporas, Iosif and Fakotakis, Nikos | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The present paper outlines the Vergina speech database, which was developed in support of research and development of corpus-based unit selection and statistical parametric speech synthesis systems for the Modern Greek language. In the following, we describe the design, development and implementation of the recording campaign, as well as the annotation of the database. Specifically, a text corpus of approximately 5 million words, collected from newspaper articles, periodicals, and paragraphs of literature, was processed in order to select the utterances-sentences needed for producing the speech database and to achieve a reasonable phonetic coverage. The broad coverage and contents of the selected utterances-sentences of the database {\textemdash} text corpus collected from different domains and writing styles {\textemdash} make this database appropriate for various application domains. The database, recorded in an audio studio, consists of approximately 3,000 phonetically balanced Modern Greek utterances corresponding to approximately four hours of speech. Annotation of the Vergina speech database was performed using task-specific tools based on a hidden Markov model (HMM) segmentation method, followed by manual inspection and correction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,304
inproceedings | nastase-etal-2010-wikinet | {W}iki{N}et: A Very Large Scale Multi-Lingual Concept Network | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1422/ | Nastase, Vivi and Strube, Michael and Boerschinger, Benjamin and Zirn, Caecilia and Elghafari, Anas | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes a multi-lingual large-scale concept network obtained automatically by mining for concepts and relations and exploiting a variety of sources of knowledge from Wikipedia. Concepts and their lexicalizations are extracted from Wikipedia pages, in particular from article titles, hyperlinks, disambiguation pages and cross-language links. Relations are extracted from the category and page network, from the category names, from infoboxes and the body of the articles. The resulting network has two main components: (i) a central, language independent index of concepts, which serves to keep track of the concepts' lexicalizations both within a language and across languages, and to separate linguistic expressions of concepts from the relations in which they are involved (concepts themselves are represented as numeric IDs); (ii) a large network built on the basis of the relations extracted, represented as relations between concepts (more specifically, the numeric IDs). The various stages of obtaining the network were separately evaluated, and the results show a qualitative resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,305 |
inproceedings | aswani-gaizauskas-2010-developing | Developing Morphological Analysers for {S}outh {A}sian Languages: Experimenting with the {H}indi and {G}ujarati Languages | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1423/ | Aswani, Niraj and Gaizauskas, Robert | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | A considerable amount of work has been put into the development of stemmers and morphological analysers. The majority of these approaches use hand-crafted suffix-replacement rules but a few try to discover such rules from corpora. While most of the approaches remove or replace suffixes, there are examples of derivational stemmers which are based on prefixes as well. In this paper we present a rule-based morphological analyser. We propose an approach that takes both prefixes as well as suffixes into account. Given a corpus and a dictionary, our method can be used to obtain a set of suffix-replacement rules for deriving an inflected word's root form. We developed an approach for the Hindi language but show that the approach is portable, at least to related languages, by adapting it to the Gujarati language. Given that the entire process of developing such a ruleset is simple and fast, our approach can be used for rapid development of morphological analysers, and yet it can obtain results competitive with analysers built on human-authored rules. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,306
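A hedged sketch of the core idea above: derive suffix-replacement rules from (inflected, root) pairs by stripping their longest common prefix. The paper's method additionally handles prefixes and obtains the pairs from a corpus and a dictionary; the transliterated pairs below are hypothetical toy data.

from collections import Counter

def suffix_rule(inflected, root):
    # Strip the longest common prefix; what remains on each side is the
    # "replace this suffix with that one" rule.
    i = 0
    while i < min(len(inflected), len(root)) and inflected[i] == root[i]:
        i += 1
    return inflected[i:], root[i:]

def learn_rules(pairs, min_count=1):
    # Keep rules seen often enough, most frequent first.
    counts = Counter(suffix_rule(w, r) for w, r in pairs)
    return [rule for rule, n in counts.most_common() if n >= min_count]

pairs = [("ladkiyan", "ladki"), ("kitabein", "kitab")]
print(learn_rules(pairs))  # e.g. [('yan', ''), ('ein', '')]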
inproceedings | hanaoka-etal-2010-japanese | A {J}apanese Particle Corpus Built by Example-Based Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1424/ | Hanaoka, Hiroki and Mima, Hideki and Tsujii, Jun{'}ichi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper is a report on an on-going project of creating a new corpus focusing on Japanese particles. The corpus will provide deeper syntactic/semantic information than the existing resources. The initial target particle is ``to'', which occurs 22,006 times in 38,400 sentences of the existing corpus: the Kyoto Text Corpus. In this annotation task, an ``example-based'' methodology is adopted for the corpus annotation, which is different from the traditional annotation style. This approach provides the annotators with an example sentence rather than a linguistic category label. By avoiding linguistic technical terms, it is expected that any native speaker, with no special knowledge of linguistic analysis, can be an annotator without long training, and hence it can reduce the annotation cost. So far, 10,475 occurrences have already been annotated, with an inter-annotator agreement of 0.66 calculated by Cohen's kappa. The initial disagreement analyses and future directions are discussed in the paper. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,307
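For reference, the agreement figure cited above can be computed as follows; this is a small sketch of Cohen's kappa for two annotators' label sequences, run here on toy labels rather than the actual annotations.

from collections import Counter

def cohen_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in ca) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

print(cohen_kappa(["A", "A", "B", "B"], ["A", "B", "B", "B"]))  # -> 0.5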
inproceedings | sporleder-etal-2010-idioms | Idioms in Context: The {IDIX} Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1425/ | Sporleder, Caroline and Li, Linlin and Gorinski, Philip and Koch, Xaver | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Idioms and other figuratively used expressions pose considerable problems to natural language processing applications because they are very frequent and often behave idiosyncratically. Consequently, there has been much research on the automatic detection and extraction of idiomatic expressions. Most studies focus on type-based idiom detection, i.e., distinguishing whether a given expression can (potentially) be used idiomatically. However, many expressions such as ``break the ice'' can have both literal and non-literal readings and need to be disambiguated in a given context (token-based detection). So far relatively few approaches have attempted context-based idiom detection. One reason for this may be that few annotated resources are available that disambiguate expressions in context. With the IDIX corpus, we aim to address this. IDIX is available as an add-on to the BNC and disambiguates different usages of a subset of idioms. We believe that this resource will be useful both for linguistic and computational linguistic studies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,308
inproceedings | kostoulas-etal-2010-playmancer | The {P}lay{M}ancer Database: A Multimodal Affect Database in Support of Research and Development Activities in Serious Game Environment | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1426/ | Kostoulas, Theodoros and Kocsis, Otilia and Ganchev, Todor and Fern{\'a}ndez-Aranda, Fernando and Santamar{\'i}a, Juan J. and Jim{\'e}nez-Murcia, Susana and Moussa, Maher Ben and Magnenat-Thalmann, Nadia and Fakotakis, Nikos | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The present paper reports on a recent effort that resulted in the establishment of a unique multimodal affect database, referred to as the PlayMancer database. This database was created in support of the research and development activities taking place within the PlayMancer project, which aim at the development of a serious game environment in support of the treatment of patients with behavioural and addictive disorders, such as eating disorders and gambling addictions. Specifically, for the purpose of data collection, we designed and implemented a pilot trial with healthy test subjects. Speech, video and bio-signals (pulse-rate, SpO2) were captured synchronously, during the interaction of healthy people with a number of video games. The collected data were annotated by the test subjects (self-annotation), targeting proper interpretation of the underlying affective states. The broad-shouldered design of the PlayMancer database allows its use for the needs of research on multimodal affect-emotion recognition and multimodal human-computer interaction in serious games environment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,309
inproceedings | lata-kumar-2010-development | Development of Linguistic Resources and Tools for Providing Multilingual Solutions in {I}ndian Languages {---} A Report on National Initiative | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1427/ | Lata, Swaran and Kumar, Somnath Chandra Vijay | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The multilingual diversity of India is one of the most unique in the world. Currently there are 22 constitutionally recognized languages with 12 scripts. Apart from these, there are at least 35 different languages and 2000 dialects in 4 major language families. It is thus evident that the development and proliferation of software solutions in the Indic multilingual environment require continuous and sustained effort to address challenges in all core areas, namely storage and encoding, input mechanisms, browser support and data exchange. Linguistic Resources and Tools are the key building blocks to develop multilingual solutions. In this paper, we shall present an overview of the major national initiative in India for the development and standardization of Linguistic Resources and Tools for the development and deployment of multilingual ICT solutions in India. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,310
inproceedings | nicolae-etal-2010-c | {C}-3: Coherence and Coreference Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1428/ | Nicolae, Cristina and Nicolae, Gabriel and Roberts, Kirk | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The phenomenon of coreference, covering entities, their mentions and their properties, is intricately linked to the phenomenon of coherence, covering the structure of rhetorical relations in a discourse. A text corpus that has both phenomena annotated can be used to test hypotheses about their interrelation or to detect other phenomena. We present the process by which C-3, a new corpus, was obtained by annotating the Discourse GraphBank coherence corpus with entity and mention information. The annotation followed a set of ACE guidelines adapted to favor coreference and to include entities of unknown types in the annotation. Together with the corpus we offer a new annotation tool specifically designed to annotate entity and mention information within a simple and functional graphical interface that combines the best features of available annotation tools. The potential usefulness of C-3 is discussed, as well as an application in which the corpus proved to be a valuable resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,311
inproceedings | boxwell-brew-2010-pilot | A Pilot {A}rabic {CCG}bank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1429/ | Boxwell, Stephen A. and Brew, Chris | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe a process for converting the Penn Arabic Treebank into the CCG formalism. Previous efforts have yielded CCGbanks in English, German, and Turkish, thus opening these languages to the sophisticated computational tools developed for CCG and enabling further cross-linguistic development. Conversion from a context free grammar treebank to a CCGbank is a four stage process: head finding, argument classification, binarization, and category conversion. In the process of implementing a basic CCGbank conversion algorithm, we reveal properties of Arabic grammar that interfere with conversion, such as subject topicalization, genitive constructions, relative clauses, and optional pronominal subjects. All of these problematic phenomena can be resolved in a variety of ways - we discuss advantages and disadvantages of each in their respective sections. We detail these and describe our categorial analysis of each of these Arabic grammatical phenomena in depth, as well as technical details on their integration into the conversion algorithm. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,312 |
inproceedings | fougeron-etal-2010-despho | The {D}es{P}ho-{AP}a{D}y Project: Developing an Acoustic-phonetic Characterization of Dysarthric Speech in {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1430/ | Fougeron, C{\'e}cile and Crevier-Buchman, Lise and Fredouille, Corinne and Ghio, Alain and Meunier, Christine and Chevrie-Muller, Claude and Bonastre, Jean-Francois and Colazo Simon, Antonia and Delooze, C{\'e}line and Duez, Danielle and Gendrot, C{\'e}dric and Legou, Thierry and Lev{\`e}que, Nathalie and Pillot-Loiseau, Claire and Pinto, Serge and Pouchoulin, Gilles and Robert, Dani{\`e}le and Vaissiere, Jacqueline and Viallet, Fran{\c{c}}ois and Vincent, Coralie | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents the rationale, objectives and advances of an on-going project (the DesPho-APaDy project, funded by the French National Agency of Research) which aims to provide a systematic and quantified description of French dysarthric speech over a large population of patients and three dysarthria types (related to Parkinson's disease, Amyotrophic Lateral Sclerosis, and a pure cerebellar alteration). The two French corpora of dysarthric patients, from which the speech data have been selected for analysis purposes, are firstly described. Secondly, this paper discusses and outlines the requirement for a structured and organized computerized platform to store, organize and make accessible (for selected and protected usage) dysarthric speech corpora and associated patients' clinical information (mostly disseminated across different locations: labs, hospitals, etc.). The design of both a computer database and a multi-field query interface is proposed for the clinical context. Finally, advances of the project related to the selection of the population used for the dysarthria analysis, the preprocessing of the speech files, their orthographic transcription and their automatic alignment are also presented. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,313
inproceedings | balakrishna-etal-2010-semi | Semi-Automatic Domain Ontology Creation from Text Resources | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1431/ | Balakrishna, Mithun and Moldovan, Dan and Tatu, Marta and Olteanu, Marian | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Analysts in various domains, especially intelligence and financial, have to constantly extract useful knowledge from large amounts of unstructured or semi-structured data. Keyword-based search, faceted search, question-answering, etc. are some of the automated methodologies that have been used to help analysts in their tasks. General-purpose and domain-specific ontologies have been proposed to help these automated methods in organizing data and providing access to useful information. However, problems in ontology creation and maintenance have resulted in expensive procedures for expanding/maintaining the ontology library available to support the growing and evolving needs of analysts. In this paper, we present a generalized and improved procedure to automatically extract deep semantic information from text resources and rapidly create semantically-rich domain ontologies while keeping the manual intervention to a minimum. We also present evaluation results for the intelligence and financial ontology libraries, semi-automatically created by our proposed methodologies using freely-available textual resources from the Web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,314 |
inproceedings | melero-etal-2010-language | Language Technology Challenges of a {\textquoteleft}Small{\textquoteright} Language ({C}atalan) | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1432/ | Melero, Maite and Boleda, Gemma and Cuadros, Montse and Espa{\~n}a-Bonet, Cristina and Padr{\'o}, Llu{\'i}s and Quixal, Mart{\'i} and Rodr{\'i}guez, Carlos and Saur{\'i}, Roser | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present a brief snapshot of the state of affairs in computational processing of Catalan and the initiatives that are starting to take place in an effort to bring the field a step forward, by making a better and more efficient use of the already existing resources and tools, by bridging the gap between research and market, and by establishing periodical meeting points for the community. In particular, we present the results of the First Workshop on the Computational Processing of Catalan, which succeeded in putting together a fair representation of the research in the area, and received attention from both the industry and the administration. Aside from facilitating communication among researchers and between developers and users, the Workshop provided the organizers with valuable information about existing resources, tools, developers and providers. This information has allowed us to go a step further by setting up a harvesting procedure which will hopefully build the seed of a portal-catalogue-observatory of language resources and technologies in Catalan. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,315
inproceedings | lee-haug-2010-porting | Porting an {A}ncient {G}reek and {L}atin Treebank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1433/ | Lee, John and Haug, Dag | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We have recently converted a dependency treebank, consisting of ancient Greek and Latin texts, from one annotation scheme to another that was independently designed. This paper makes two observations about this conversion process. First, we show that, despite significant surface differences between the two treebanks, a number of straightforward transformation rules yield a substantial level of compatibility between them, giving evidence for their sound design and high quality of annotation. Second, we analyze some linguistic annotations that require further disambiguation, proposing some simple yet effective machine learning methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 79,316 |