Dataset schema (one row per column: name, dtype, and length range or number of distinct values):

| Column | Dtype | Range / distinct values |
| --- | --- | --- |
| entry_type | stringclasses | 4 values |
| citation_key | stringlengths | 10–110 |
| title | stringlengths | 6–276 |
| editor | stringclasses | 723 values |
| month | stringclasses | 69 values |
| year | stringdate | 1963-01-01 00:00:00 – 2022-01-01 00:00:00 |
| address | stringclasses | 202 values |
| publisher | stringclasses | 41 values |
| url | stringlengths | 34–62 |
| author | stringlengths | 6–2.07k |
| booktitle | stringclasses | 861 values |
| pages | stringlengths | 1–12 |
| abstract | stringlengths | 302–2.4k |
| journal | stringclasses | 5 values |
| volume | stringclasses | 24 values |
| doi | stringlengths | 20–39 |
| n | stringclasses | 3 values |
| wer | stringclasses | 1 value |
| uas | null | (all null) |
| language | stringclasses | 3 values |
| isbn | stringclasses | 34 values |
| recall | null | (all null) |
| number | stringclasses | 8 values |
| a | null | (all null) |
| b | null | (all null) |
| c | null | (all null) |
| k | null | (all null) |
| f1 | stringclasses | 4 values |
| r | stringclasses | 2 values |
| mci | stringclasses | 1 value |
| p | stringclasses | 2 values |
| sd | stringclasses | 1 value |
| female | stringclasses | 0 values |
| m | stringclasses | 0 values |
| food | stringclasses | 1 value |
| f | stringclasses | 1 value |
| note | stringclasses | 20 values |
| __index_level_0__ | int64 | 22k–106k |
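Read as a table, the schema maps each column to a dtype plus either a length range or a distinct-value count. A minimal sketch (illustrative only; `check_record` is a hypothetical helper, and the bounds are the min/max lengths from the table above) of validating one record against a few of these constraints:

```python
# Illustrative only: validate a record dict against a few of the
# length constraints listed in the schema table above.
LENGTH_BOUNDS = {
    "citation_key": (10, 110),  # stringlengths 10-110
    "title": (6, 276),          # stringlengths 6-276
    "pages": (1, 12),           # stringlengths 1-12
}

def check_record(record: dict) -> list[str]:
    """Return a list of constraint violations for one record."""
    problems = []
    for field, (lo, hi) in LENGTH_BOUNDS.items():
        value = record.get(field)
        if value is not None and not lo <= len(value) <= hi:
            problems.append(f"{field}: length {len(value)} outside [{lo}, {hi}]")
    return problems

sample = {
    "citation_key": "tanasijevic-etal-2012-multimedia",
    "title": "Multimedia database of the cultural heritage of the Balkans",
    "pages": "2874--2881",
}
print(check_record(sample))  # → [] (no violations for this row)
```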
@inproceedings{tanasijevic-etal-2012-multimedia,
    title = {Multimedia database of the cultural heritage of the {B}alkans},
    author = {Tanasijevi{\'c}, Ivana and Sikimi{\'c}, Biljana and Pavlovi{\'c}-La{\v{z}}eti{\'c}, Gordana},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1367/},
    pages = {2874--2881},
    abstract = {This paper presents a system that is designed to make possible the organization and search within the collected digitized material of intangible cultural heritage. The motivation for building the system was a vast quantity of multimedia documents collected by a team from the Institute for Balkan Studies in Belgrade. The main topic of their research were linguistic properties of speeches that are used in various places in the Balkans by different groups of people. This paper deals with a prototype system that enables the annotation of the collected material and its organization into a native XML database through a graphical interface. The system enables the search of the database and the presentation of digitized multimedia documents and spatial as well as non-spatial information of the queried data. The multimedia content can be read, listened to or watched while spatial properties are presented on the graphics that consists of geographic regions in the Balkans. The system also enables spatial queries by consulting the graph of geographic regions.}
}
% __index_level_0__: 73,568 (all remaining columns -- journal, volume, doi, the metric fields, note -- are null for this row)
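Each row of the dataset corresponds to one BibTeX record like the one above. A minimal stdlib sketch (illustrative only, not a general BibTeX parser; it assumes one `field = {value}` pair per line) of pulling the entry type, key, and fields out of such a flat entry:

```python
import re

# Illustrative only: extract fields from a simple, flat BibTeX entry
# with one "field = {value}" pair per line. Nested braces within a
# single line (e.g. {B}alkans) are kept intact by the greedy match.
def parse_flat_bibtex(entry: str) -> dict:
    header = re.match(r"@(\w+)\{([^,]+),", entry.strip())
    record = {"entry_type": header.group(1), "citation_key": header.group(2)}
    for field, value in re.findall(r"(\w+)\s*=\s*\{(.*)\}", entry):
        record[field] = value
    return record

entry = """@inproceedings{tanasijevic-etal-2012-multimedia,
    title = {Multimedia database of the cultural heritage of the {B}alkans},
    year = {2012},
    pages = {2874--2881},
}"""
rec = parse_flat_bibtex(entry)
print(rec["citation_key"], rec["pages"])  # → tanasijevic-etal-2012-multimedia 2874--2881
```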
@inproceedings{landragin-etal-2012-analec,
    title = {{ANALEC}: a New Tool for the Dynamic Annotation of Textual Data},
    author = {Landragin, Fr{\'e}d{\'e}ric and Poibeau, Thierry and Victorri, Bernard},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1368/},
    pages = {357--362},
    abstract = {We introduce ANALEC, a tool which aim is to bring together corpus annotation, visualization and query management. Our main idea is to provide a unified and dynamic way of annotating textual data. ANALEC allows researchers to dynamically build their own annotation scheme and use the possibilities of scheme revision, data querying and graphical visualization during the annotation process. Each query result can be visualized using a graphical representation that puts forward a set of annotations that can be directly corrected or completed. Text annotation is then considered as a cyclic process. We show that statistics like frequencies and correlations make it possible to verify annotated data on the fly during the annotation. In this paper we introduce the annotation functionalities of ANALEC, some of the annotated data visualization functionalities, and three statistical modules: frequency, correlation and geometrical representations. Some examples dealing with reference and coreference annotation illustrate the main contributions of ANALEC.}
}
% __index_level_0__: 73,569 (remaining columns null)
@inproceedings{navarretta-etal-2012-feedback,
    title = {Feedback in {N}ordic First-Encounters: a Comparative Study},
    author = {Navarretta, Costanza and Ahls{\'e}n, Elisabeth and Allwood, Jens and Jokinen, Kristiina and Paggio, Patrizia},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1369/},
    pages = {2494--2499},
    abstract = {The paper compares how feedback is expressed via speech and head movements in comparable corpora of first encounters in three Nordic languages: Danish, Finnish and Swedish. The three corpora have been collected following common guidelines, and they have been annotated according to the same scheme in the NOMCO project. The results of the comparison show that in this data the most frequent feedback-related head movement is Nod in all three languages. Two types of Nods were distinguished in all corpora: Down-nods and Up-nods; the participants from the three countries use Down- and Up-nods with different frequency. In particular, Danes use Down-nods more frequently than Finns and Swedes, while Swedes use Up-nods more frequently than Finns and Danes. Finally, Finns use more often single Nods than repeated Nods, differing from the Swedish and Danish participants. The differences in the frequency of both Down-nods and Up-Nods in the Danish, Finnish and Swedish interactions are interesting given that Nordic countries are not only geographically near, but are also considered to be very similar culturally. Finally, a comparison of feedback-related words in the Danish and Swedish corpora shows that Swedes and Danes use common feedback words corresponding to yes and no with similar frequency.}
}
% __index_level_0__: 73,570 (remaining columns null)
@inproceedings{li-etal-2012-annotating,
    title = {Annotating Opinions in {G}erman Political News},
    author = {Li, Hong and Cheng, Xiwen and Adson, Kristina and Kirshboim, Tal and Xu, Feiyu},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1370/},
    pages = {1183--1188},
    abstract = {This paper presents an approach to construction of an annotated corpus for German political news for the opinion mining task. The annotated corpus has been applied to learn relation extraction rules for extraction of opinion holders, opinion content and classification of polarities. An adapted annotated schema has been developed on top of the state-of-the-art research. Furthermore, a general tool for annotating relations has been utilized for the annotation task. An evaluation of the inter-annotator agreement has been conducted. The rule learning is realized with the help of a minimally supervised machine learning framework DARE.}
}
% __index_level_0__: 73,571 (remaining columns null)
@inproceedings{chen-eisele-2012-multiun,
    title = {{M}ulti{UN} v2: {UN} Documents with Multilingual Alignments},
    author = {Chen, Yu and Eisele, Andreas},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1371/},
    pages = {2500--2504},
    abstract = {MultiUN is a multilingual parallel corpus extracted from the official documents of the United Nations. It is available in the six official languages of the UN and a small portion of it is also available in German. This paper presents a major update on the first public version of the corpus released in 2010. This version 2 consists of over 513,091 documents, including more than 9{\%} of new documents retrieved from the United Nations official document system. We applied several modifications to the corpus preparation method. In this paper, we describe the methods we used for processing the UN documents and aligning the sentences. The most significant improvement compared to the previous release is the newly added multilingual sentence alignment information. The alignment information is encoded together with the text in XML instead of additional files. Our representation of the sentence alignment allows quick construction of aligned texts parallel in arbitrary number of languages, which is essential for building machine translation systems.}
}
% __index_level_0__: 73,572 (remaining columns null)
@inproceedings{saggion-szasz-2012-concisus,
    title = {The {CONCISUS} Corpus of Event Summaries},
    author = {Saggion, Horacio and Szasz, Sandra},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1372/},
    pages = {2031--2037},
    abstract = {Text summarization and information extraction systems require adaptation to new domains and languages. This adaptation usually depends on the availability of language resources such as corpora. In this paper we present a comparable corpus in Spanish and English for the study of cross-lingual information extraction and summarization: the CONCISUS Corpus. It is a rich human-annotated dataset composed of comparable event summaries in Spanish and English covering four different domains: aviation accidents, rail accidents, earthquakes, and terrorist attacks. In addition to the monolingual summaries in English and Spanish, we provide automatic translations and ``comparable'' full event reports of the events. The human annotations are concepts marked in the textual sources representing the key event information associated to the event type. The dataset has also been annotated using text processing pipelines. It is being made freely available to the research community for research purposes.}
}
% __index_level_0__: 73,573 (remaining columns null)
@inproceedings{carvalho-etal-2012-building,
    title = {Building and Exploring Semantic Equivalences Resources},
    author = {Carvalho, Gracinda and de Matos, David Martins and Rocio, Vitor},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1373/},
    pages = {2038--2042},
    abstract = {Language resources that include semantic equivalences at word level are common, and its usefulness is well established in text processing applications, as in the case of search. Named entities also play an important role for text based applications, but are not usually covered by the previously mentioned resources. The present work describes the WES base, Wikipedia Entity Synonym base, a freely available resource based on the Wikipedia. The WES base was built for the Portuguese Language, with the same format of another freely available thesaurus for the same language, the TeP base, which allows integration of equivalences both at word level and entity level. The resource has been built in a language independent way, so that it can be extended to different languages. The WES base was used in a Question Answering system, enhancing significantly its performance.}
}
% __index_level_0__: 73,574 (remaining columns null)
@inproceedings{larasati-2012-identic,
    title = {{IDENTIC} Corpus: Morphologically Enriched {I}ndonesian-{E}nglish Parallel Corpus},
    author = {Larasati, Septina Dian},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1374/},
    pages = {902--906},
    abstract = {This paper describes the creation process of an Indonesian-English parallel corpus (IDENTIC). The corpus contains 45,000 sentences collected from different sources in different genres. Several manual text preprocessing tasks, such as alignment and spelling correction, are applied to the corpus to assure its quality. We also apply language specific text processing such as tokenization on both sides and clitic normalization on the Indonesian side. The corpus is available in two different formats: ‘plain', stored in text format and ‘morphologically enriched', stored in CoNLL format. Some parts of the corpus are publicly available at the IDENTIC homepage.}
}
% __index_level_0__: 73,575 (remaining columns null)
@inproceedings{bojar-etal-2012-joy,
    title = {The Joy of Parallelism with {C}z{E}ng 1.0},
    author = {Bojar, Ond{\v{r}}ej and {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and Du{\v{s}}ek, Ond{\v{r}}ej and Galu{\v{s}}{\v{c}}{\'a}kov{\'a}, Petra and Majli{\v{s}}, Martin and Mare{\v{c}}ek, David and Mar{\v{s}}{\'i}k, Ji{\v{r}}{\'i} and Nov{\'a}k, Michal and Popel, Martin and Tamchyna, Ale{\v{s}}},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1375/},
    pages = {3921--3928},
    abstract = {CzEng 1.0 is an updated release of our Czech-English parallel corpus, freely available for non-commercial research or educational purposes. In this release, we approximately doubled the corpus size, reaching 15 million sentence pairs (about 200 million tokens per language). More importantly, we carefully filtered the data to reduce the amount of non-matching sentence pairs. CzEng 1.0 is automatically aligned at the level of sentences as well as words. We provide not only the plain text representation, but also automatic morphological tags, surface syntactic as well as deep syntactic dependency parse trees and automatic co-reference links in both English and Czech. This paper describes key properties of the released resource including the distribution of text domains, the corpus data formats, and a toolkit to handle the provided rich annotation. We also summarize the procedure of the rich annotation (incl. co-reference resolution) and of the automatic filtering. Finally, we provide some suggestions on exploiting such an automatically annotated sentence-parallel corpus.}
}
% __index_level_0__: 73,576 (remaining columns null)
@inproceedings{dukes-atwell-2012-lamp,
    title = {{LAMP}: A Multimodal Web Platform for Collaborative Linguistic Analysis},
    author = {Dukes, Kais and Atwell, Eric},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1376/},
    pages = {3268--3275},
    abstract = {This paper describes the underlying software platform used to develop and publish annotations for the Quranic Arabic Corpus (QAC). The QAC (Dukes, Atwell and Habash, 2011) is a multimodal language resource that integrates deep tagging, interlinear translation, multiple speech recordings, visualization and collaborative analysis for the Classical Arabic language of the Quran. Available online at \url{http://corpus.quran.com}, the website is a popular study guide for Quranic Arabic, used by over 1.2 million visitors over the past year. We provide a description of the underlying software system that has been used to develop the corpus annotations. The multimodal data is made available online through an accessible cross-referenced web interface. Although our Linguistic Analysis Multimodal Platform (LAMP) has been applied to the Classical Arabic language of the Quran, we argue that our annotation model and software architecture may be of interest to other related corpus linguistics projects. Work related to LAMP includes recent efforts for annotating other Classical languages, such as Ancient Greek and Latin (Bamman, Mambrini and Crane, 2009), as well as commercial systems (e.g. Logos Bible study) that provide access to syntactic tagging for the Hebrew Bible and Greek New Testament (Brannan, 2011).}
}
% __index_level_0__: 73,577 (remaining columns null)
@inproceedings{ogrodniczuk-lenart-2012-web,
    title = {Web Service integration platform for {P}olish linguistic resources},
    author = {Ogrodniczuk, Maciej and Lenart, Micha{\l}},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1377/},
    pages = {1164--1168},
    abstract = {This paper presents a robust linguistic Web service framework for Polish, combining several mature offline linguistic tools in a common online platform. The toolset comprise paragraph-, sentence- and token-level segmenter, morphological analyser, disambiguating tagger, shallow and deep parser, named entity recognizer and coreference resolver. Uniform access to processing results is provided by means of a stand-off packaged adaptation of National Corpus of Polish TEI P5-based representation and interchange format. A concept of asynchronous handling of requests sent to the implemented Web service (Multiservice) is introduced to enable processing large amounts of text by setting up language processing chains of desired complexity. Apart from a dedicated API, a simple Web interface to the service is presented, allowing to compose a chain of annotation services, run it and periodically check for execution results, made available as plain XML or in a simple visualization. Usage examples and results from performance and scalability tests are also included.}
}
% __index_level_0__: 73,578 (remaining columns null)
@inproceedings{kennington-etal-2012-suffix,
    title = {Suffix Trees as Language Models},
    author = {Kennington, Casey Redd and Kay, Martin and Friedrich, Annemarie},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1378/},
    pages = {446--453},
    abstract = {Suffix trees are data structures that can be used to index a corpus. In this paper, we explore how some properties of suffix trees naturally provide the functionality of an n-gram language model with variable n. We explain these properties of suffix trees, which we leverage for our Suffix Tree Language Model (STLM) implementation and explain how a suffix tree implicitly contains the data needed for n-gram language modeling. We also discuss the kinds of smoothing techniques appropriate to such a model. We then show that our suffix-tree language model implementation is competitive when compared to the state-of-the-art language model SRILM (Stolke, 2002) in statistical machine translation experiments.}
}
% __index_level_0__: 73,579 (remaining columns null)
@inproceedings{dinu-etal-2012-romanian,
    title = {The {R}omanian Neuter Examined Through A Two-Gender N-Gram Classification System},
    author = {Dinu, Liviu P. and Niculae, Vlad and {\c{S}}ulea, Octavia-Maria},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1379/},
    pages = {907--910},
    abstract = {Romanian has been traditionally seen as bearing three lexical genders: masculine, feminine and neuter, although it has always been known to have only two agreement patterns (for masculine and feminine). A recent analysis of the Romanian gender system described in (Bateman and Polinsky, 2010), based on older observations, argues that there are two lexically unspecified noun classes in the singular and two different ones in the plural and that what is generally called neuter in Romanian shares the class in the singular with masculines, and the class in the plural with feminines based not only on agreement features but also on form. Previous machine learning classifiers that have attempted to discriminate Romanian nouns according to gender have so far taken as input only the singular form, presupposing the traditional tripartite analysis. We propose a classifier based on two parallel support vector machines using n-gram features from the singular and from the plural which outperforms previous classifiers in its high ability to distinguish the neuter. The performance of our system suggests that the two-gender analysis of Romanian, on which it is based, is on the right track.}
}
% __index_level_0__: 73,580 (remaining columns null)
@inproceedings{eom-etal-2012-using,
    title = {Using semi-experts to derive judgments on word sense alignment: a pilot study},
    author = {Eom, Soojeong and Dickinson, Markus and Katz, Graham},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1380/},
    pages = {605--611},
    abstract = {The overall goal of this project is to evaluate the performance of word sense alignment (WSA) systems, focusing on obtaining examples appropriate to language learners. Building a gold standard dataset based on human expert judgments is costly in time and labor, and thus we gauge the utility of using semi-experts in performing the annotation. In an online survey, we present a sense of a target word from one dictionary with senses from the other dictionary, asking for judgments of relatedness. We note the difficulty of agreement, yet the utility in using such results to evaluate WSA work. We find that one`s treatment of related senses heavily impacts the results for WSA.}
}
% __index_level_0__: 73,581 (remaining columns null)
@inproceedings{ogrodniczuk-2012-polish,
    title = {The {P}olish Sejm Corpus},
    author = {Ogrodniczuk, Maciej},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1381/},
    pages = {2219--2223},
    abstract = {This document presents the first edition of the Polish Sejm Corpus -- a new specialized resource containing transcribed, automatically annotated utterances of the Members of Polish Sejm (lower chamber of the Polish Parliament). The corpus data encoding is inherited from the National Corpus of Polish and enhanced with session metadata and structure. The multi-layered stand-off annotation contains sentence- and token-level segmentation, disambiguated morphosyntactic information, syntactic words and groups resulting from shallow parsing and named entities. The paper also outlines several novel ideas for corpus preparation, e.g. the notion of a live corpus, constantly populated with new data or the concept of linking corpus data with external databases to enrich content. Although initial statistical comparison of the resource with the balanced corpus of general Polish reveals substantial differences in language richness, the resource makes a valuable source of linguistic information as a large (300 M segments) collection of quasi-spoken data ready to be aligned with the audio/video recording of sessions, currently being made publicly available by Sejm.}
}
% __index_level_0__: 73,582 (remaining columns null)
@inproceedings{lawrie-etal-2012-creating,
    title = {Creating and Curating a Cross-Language Person-Entity Linking Collection},
    author = {Lawrie, Dawn and Mayfield, James and McNamee, Paul and Oard, Douglas},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1382/},
    pages = {3106--3110},
    abstract = {To stimulate research in cross-language entity linking, we present a new test collection for evaluating the accuracy of cross-language entity linking in twenty-one languages. This paper describes an efficient way to create and curate such a collection, judiciously exploiting existing language resources. Queries are created by semi-automatically identifying person names on the English side of a parallel corpus, using judgments obtained through crowdsourcing to identify the entity corresponding to the name, and projecting the English name onto the non-English document using word alignments. Name projections are then curated, again through crowdsourcing. This technique resulted in the first publicly available multilingual cross-language entity linking collection. The collection includes approximately 55,000 queries, comprising between 875 and 4,329 queries for each of twenty-one non-English languages.}
}
% __index_level_0__: 73,583 (remaining columns null)
@inproceedings{verhagen-pustejovsky-2012-tarsqi,
    title = {The {TARSQI} Toolkit},
    author = {Verhagen, Marc and Pustejovsky, James},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1383/},
    pages = {2043--2048},
    abstract = {We present and demonstrate the updated version of the TARSQI Toolkit, a suite of temporal processing modules that extract temporal information from natural language texts. It parses the document and identifies temporal expressions, recognizes events, anchor events to temporal expressions and orders events relative to each other. The toolkit was previously demonstrated at COLING 2008, but has since seen substantial changes including: (1) incorporation of a new time expression tagger, (2){\textasciitilde}embracement of stand-off annotation, (3) application to the medical domain and (4) introduction of narrative containers.}
}
% __index_level_0__: 73,584 (remaining columns null)
@inproceedings{louis-nenkova-2012-corpus,
    title = {A corpus of general and specific sentences from news},
    author = {Louis, Annie and Nenkova, Ani},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1384/},
    pages = {1818--1821},
    abstract = {We present a corpus of sentences from news articles that are annotated as general or specific. We employed annotators on Amazon Mechanical Turk to mark sentences from three kinds of news articles{\textemdash}reports on events, finance news and science journalism. We introduce the resulting corpus, with focus on annotator agreement, proportion of general/specific sentences in the articles and results for automatic classification of the two sentence types.}
}
% __index_level_0__: 73,585 (remaining columns null)
@inproceedings{wright-etal-2012-annotation,
    title = {Annotation Trees: {LDC}`s customizable, extensible, scalable, annotation infrastructure},
    author = {Wright, Jonathan and Griffitt, Kira and Ellis, Joe and Strassel, Stephanie and Callahan, Brendan},
    editor = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
    booktitle = {Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)},
    month = may,
    year = {2012},
    address = {Istanbul, Turkey},
    publisher = {European Language Resources Association (ELRA)},
    url = {https://aclanthology.org/L12-1385/},
    pages = {479--485},
    abstract = {In recent months, LDC has developed a web-based annotation infrastructure centered around a tree model of annotations and a Ruby on Rails application called the LDC User Interface (LUI). The effort aims to centralize all annotation into this single platform, which means annotation is always available remotely, with no more software required than a web browser. While the design is monolithic in the sense of handling any number of annotation projects, it is also scalable, as it is distributed over many physical and virtual machines. Furthermore, minimizing customization was a core design principle, and new functionality can be plugged in without writing a full application. The creation and customization of GUIs is itself done through the web interface, without writing code, with the aim of eventually allowing project managers to create a new task without developer intervention. Many of the desirable features follow from the model of annotations as trees, and the operationalization of annotation as tree modification.}
}
% __index_level_0__: 73,586 (remaining columns null)
inproceedings
filatova-2012-irony
Irony and Sarcasm: Corpus Generation and Analysis Using Crowdsourcing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1386/
Filatova, Elena
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
392--398
The ability to reliably identify sarcasm and irony in text can improve the performance of many Natural Language Processing (NLP) systems including summarization, sentiment analysis, etc. The existing sarcasm detection systems have focused on identifying sarcasm on a sentence level or for a specific phrase. However, often it is impossible to identify a sentence containing sarcasm without knowing the context. In this paper we describe a corpus generation experiment where we collect regular and sarcastic Amazon product reviews. We perform qualitative and quantitative analysis of the corpus. The resulting corpus can be used for identifying sarcasm on two levels: a document and a text utterance (where a text utterance can be as short as a sentence and as long as a whole document).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,587
inproceedings
al-sabbagh-girju-2012-yadac
{YADAC}: Yet another Dialectal {A}rabic Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1387/
Al-Sabbagh, Rania and Girju, Roxana
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2882--2889
This paper presents the first phase of building YADAC {\textemdash} a multi-genre Dialectal Arabic (DA) corpus {\textemdash} that is compiled using Web data from microblogs (i.e. Twitter), blogs/forums and online knowledge market services in which both questions and answers are user-generated. In addition to introducing two new genres to the current efforts of building DA corpora (i.e. microblogs and question-answer pairs extracted from online knowledge market services), the paper highlights and tackles several new issues related to building DA corpora that have not been handled in previous studies: function-based Web harvesting and dialect identification, vowel-based spelling variation, linguistic hypercorrection and its effect on spelling variation, unsupervised Part-of-Speech (POS) tagging and base phrase chunking for DA. Although the algorithms for both POS tagging and base-phrase chunking are still under development, the results are promising.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,588
inproceedings
clarke-etal-2012-nlp
An {NLP} Curator (or: How {I} Learned to Stop Worrying and Love {NLP} Pipelines)
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1388/
Clarke, James and Srikumar, Vivek and Sammons, Mark and Roth, Dan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3276--3283
Natural Language Processing continues to grow in popularity in a range of research and commercial applications, yet managing the wide array of potential NLP components remains a difficult problem. This paper describes Curator, an NLP management framework designed to address some common problems and inefficiencies associated with building NLP process pipelines; and Edison, an NLP data structure library in Java that provides streamlined interactions with Curator and offers a range of useful supporting functionality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,589
inproceedings
marujo-etal-2012-supervised
Supervised Topical Key Phrase Extraction of News Stories using Crowdsourcing, Light Filtering and Co-reference Normalization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1389/
Marujo, Lu{\'i}s and Gershman, Anatole and Carbonell, Jaime and Frederking, Robert and Neto, Jo{\~a}o P.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
399--403
Fast and effective automated indexing is critical for search and personalized services. Key phrases that consist of one or more words and represent the main concepts of the document are often used for the purpose of indexing. In this paper, we investigate the use of additional semantic features and pre-processing steps to improve automatic key phrase extraction. These features include the use of signal words and freebase categories. Some of these features lead to significant improvements in the accuracy of the results. We also experimented with 2 forms of document pre-processing that we call light filtering and co-reference normalization. Light filtering removes sentences from the document, which are judged peripheral to its main content. Co-reference normalization unifies several written forms of the same named entity into a unique form. We also needed a “Gold Standard” {\textemdash} a set of labeled documents for training and evaluation. While the subjective nature of key phrase selection precludes a true “Gold Standard”, we used Amazon`s Mechanical Turk service to obtain a useful approximation. Our data indicates that the biggest improvements in performance were due to shallow semantic features, news categories, and rhetorical signals (nDCG 78.47{\%} vs. 68.93{\%}). The inclusion of deeper semantic features such as Freebase sub-categories was not beneficial by itself, but in combination with pre-processing, did cause slight improvements in the nDCG scores.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,590
inproceedings
bakliwal-etal-2012-hindi
{H}indi Subjective Lexicon: A Lexical Resource for {H}indi Adjective Polarity Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1390/
Bakliwal, Akshat and Arora, Piyush and Varma, Vasudeva
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1189--1196
With recent developments in web technologies, the percentage of web content in Hindi is growing at lightning speed. This information can prove to be very useful for researchers, governments and organizations to learn what`s on the public`s mind and to make sound decisions. In this paper, we present a graph based wordnet expansion method to generate a full (adjective and adverb) subjective lexicon. We used synonym and antonym relations to expand the initial seed lexicon. We show three different evaluation strategies to validate the lexicon. We achieve 70.4{\%} agreement with human annotators and {\textasciitilde}79{\%} accuracy on product review classification. Main contributions of our work: 1) Developing a lexicon of adjectives and adverbs with polarity scores using Hindi Wordnet. 2) Developing an annotated corpus of Hindi Product Reviews.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,591
inproceedings
recasens-etal-2012-annotating
Annotating Near-Identity from Coreference Disagreements
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1391/
Recasens, Marta and Mart{\'i}, M. Ant{\`o}nia and Orasan, Constantin
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
165--172
We present an extension of the coreference annotation in the English NP4E and the Catalan AnCora-CA corpora with near-identity relations, which are borderline cases of coreference. The annotated subcorpora have 50K tokens each. Near-identity relations, as presented by Recasens et al. (2010; 2011), build upon the idea that identity is a continuum rather than an either/or relation, thus introducing a middle ground category to explain currently problematic cases. The first annotation effort that we describe shows that it is not possible to annotate near-identity explicitly because subjects are not fully aware of it. Therefore, our second annotation effort used an indirect method, and arrived at near-identity annotations by inference from the disagreements between five annotators who had only a two-alternative choice between coreference and non-coreference. The results show that whereas as little as 2-6{\%} of the relations were explicitly annotated as near-identity in the former effort, up to 12-16{\%} of the relations turned out to be near-identical following the indirect method of the latter effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,592
inproceedings
tokunaga-etal-2012-rex
The {REX} corpora: A collection of multimodal corpora of referring expressions in collaborative problem solving dialogues
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1392/
Tokunaga, Takenobu and Iida, Ryu and Terai, Asuka and Kuriyama, Naoko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
422--429
This paper describes a collection of multimodal corpora of referring expressions, the REX corpora. The corpora have two notable features, namely (1) they include time-aligned extra-linguistic information such as participant actions and eye-gaze on top of linguistic information, (2) dialogues were collected with various configurations in terms of the puzzle type, hinting and language. After describing how the corpora were constructed and sketching out each, we present an analysis of various statistics for the corpora with respect to the various configurations mentioned above. The analysis showed that the corpora have different characteristics in the number of utterances and referring expressions in a dialogue, the task completion time and the attributes used in the referring expressions. In this respect, we succeeded in constructing a collection of corpora that included a variety of characteristics by changing the configurations for each set of dialogues, as originally planned. The corpora are now under preparation for publication, to be used for research on human reference behaviour.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,593
inproceedings
kusumoto-akiba-2012-statistical
Statistical Machine Translation without Source-side Parallel Corpus Using Word Lattice and Phrase Extension
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1393/
Kusumoto, Takanori and Akiba, Tomoyosi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3929--3932
Statistical machine translation (SMT) requires a parallel corpus between the source and target languages. Although a pivot-translation approach can be applied to a language pair that does not have a parallel corpus directly between them, it requires both source{\textemdash}pivot and pivot{\textemdash}target parallel corpora. We propose a novel approach to apply SMT to a resource-limited source language that has no parallel corpus but has only a word dictionary for the pivot language. The problems with dictionary-based translations lie in their ambiguity and incompleteness. The proposed method uses a word lattice representation of the pivot-language candidates and word lattice decoding to deal with the ambiguity; the lattice expansion is accomplished by using a pivot{\textemdash}target phrase translation table to compensate for the incompleteness. Our experimental evaluation showed that this approach is promising for applying SMT, even when a source-side parallel corpus is lacking.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,594
inproceedings
schumann-2012-knowledge
Knowledge-Rich Context Extraction and Ranking with {K}now{P}ipe
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1394/
Schumann, Anne-Kathrin
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3626--3630
This paper presents ongoing PhD thesis work dealing with the extraction of knowledge-rich contexts from text corpora for terminographic purposes. Although notable progress in the field has been made over recent years, there is as yet no methodology or integrated workflow that is able to deal with multiple, typologically different languages and different domains, and that can be handled by non-expert users. Moreover, while a lot of work has been carried out to research the KRC extraction step, the selection and further analysis of results still involves considerable manual work. In this view, the aim of this paper is two-fold. Firstly, the paper presents a ranking algorithm geared at supporting the selection of high-quality contexts once the extraction has been finished and describes ranking experiments with Russian context candidates. Secondly, it presents the KnowPipe framework for context extraction: KnowPipe aims at providing a processing environment that allows users to extract knowledge-rich contexts from text corpora in different languages using shallow and deep processing techniques. In its current state of development, KnowPipe provides facilities for preprocessing Russian and German text corpora, for pattern-based knowledge-rich context extraction from these corpora using shallow analysis, as well as tools for ranking Russian context candidates.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,595
inproceedings
ozbal-etal-2012-brand
Brand Pitt: A Corpus to Explore the Art of Naming
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1395/
{\"O}zbal, G{\"o}zde and Strapparava, Carlo and Guerini, Marco
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1822--1828
The name of a company or a brand is the key element to a successful business. A good name is able to state the area of competition and communicate the promise given to customers by evoking semantic associations. Although various resources provide distinct tips for inventing creative names, little research was carried out to investigate the linguistic aspects behind the naming mechanism. Besides, there might be latent methods that copywriters unconsciously use. In this paper, we describe the annotation task that we have conducted on a dataset of creative names collected from various resources to create a gold standard for linguistic creativity in naming. Based on the annotations, we compile common and latent methods of naming and explore the correlations among linguistic devices, provoked effects and business domains. This resource represents a starting point for a corpus based approach to explore the art of naming.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,596
inproceedings
de-clercq-etal-2012-evaluating
Evaluating automatic cross-domain {D}utch semantic role annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1396/
De Clercq, Orph{\'e}e and Hoste, Veronique and Monachesi, Paola
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
88--93
In this paper we present the first corpus where one million Dutch words from a variety of text genres have been annotated with semantic roles. 500K have been completely manually verified and used as training material to automatically label another 500K. All data has been annotated following an adapted version of the PropBank guidelines. The corpus`s rich text type diversity and the availability of manually verified syntactic dependency structures allowed us to experiment with an existing semantic role labeler for Dutch. In order to test the system`s portability across various domains, we experimented with training on individual domains and compared this with training on multiple domains by adding more data. Our results show that training on large data sets is necessary but that including genre-specific training material is also crucial to optimize classification. We observed that a small amount of in-domain training data is already sufficient to improve our semantic role labeler.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,597
inproceedings
bazillon-etal-2012-syntactic
Syntactic annotation of spontaneous speech: application to call-center conversation data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1397/
Bazillon, Thierry and Deplano, Melanie and Bechet, Frederic and Nasr, Alexis and Favre, Benoit
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1338--1342
This paper describes the syntactic annotation process of the DECODA corpus. This corpus contains manual transcriptions of spoken conversations recorded in the French call-center of the Paris Public Transport Authority (RATP). Three levels of syntactic annotation have been performed with a semi-supervised approach: POS tags, Syntactic Chunks and Dependency parses. The main idea is to use off-the-shelf NLP tools and models, originally developed and trained on written text, to perform a first automatic annotation on the manually transcribed corpus. At the same time a fully manual annotation process is performed on a subset of the original corpus, called the GOLD corpus. An iterative process is then applied, consisting in manually correcting errors found in the automatic annotations, retraining the linguistic models of the NLP tools on this corrected corpus, then checking the quality of the adapted models on the fully manual annotations of the GOLD corpus. This process iterates until a certain error rate is reached. This paper describes this process, the main issues arising when adapting NLP tools to process speech transcriptions, and presents the first evaluations performed with these new adapted tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,598
inproceedings
hong-etal-2012-korean
{K}orean Children`s Spoken {E}nglish Corpus and an Analysis of its Pronunciation Variability
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1398/
Hong, Hyejin and Kim, Sunhee and Chung, Minhwa
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2362--2365
This paper introduces a corpus of Korean-accented English speech produced by children (the Korean Children`s Spoken English Corpus: the KC-SEC), which is constructed by Seoul National University. The KC-SEC was developed in support of research and development of CALL systems for Korean learners of English, especially for elementary school learners. It consists of read-speech produced by 96 Korean learners aged from 9 to 12. Overall corpus size is 11,937 sentences, which amount to about 16 hours of speech. Furthermore, a statistical analysis of pronunciation variability appearing in the corpus is performed in order to investigate the characteristics of the Korean children`s spoken English. The realized phonemes (hypothesis) are extracted through time-based phoneme alignment, and are compared to the targeted phonemes (reference). The results of the analysis show that: i) the pronunciation variations found frequently in Korean children`s speech are devoicing and changing of articulation place or/and manner; and ii) they largely correspond to those of general Korean learners' speech presented in previous studies, despite some differences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,599
inproceedings
bechet-etal-2012-decoda
{DECODA}: a call-centre human-human spoken conversation corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1399/
Bechet, Frederic and Maza, Benjamin and Bigouroux, Nicolas and Bazillon, Thierry and El-B{\`e}ze, Marc and De Mori, Renato and Arbillot, Eric
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1343--1347
The goal of the DECODA project is to reduce the development cost of Speech Analytics systems by reducing the need for manual annotation. This project aims to propose robust speech data mining tools in the framework of call-center monitoring and evaluation, by means of weakly supervised methods. The applicative framework of the project is the call-center of the RATP (Paris public transport authority). This project tackles two very important open issues in the development of speech mining methods from spontaneous speech recorded in call-centers: robustness (how to extract relevant information from very noisy and spontaneous speech messages) and weak supervision (how to reduce the annotation effort needed to train and adapt recognition and classification models). This paper describes the DECODA corpus collected at the RATP during the project. We present the different annotation levels performed on the corpus, the methods used to obtain them, as well as some evaluation of the quality of the annotations produced.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,600
inproceedings
scherrer-cartoni-2012-trilingual
The Trilingual {ALLEGRA} Corpus: Presentation and Possible Use for Lexicon Induction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1400/
Scherrer, Yves and Cartoni, Bruno
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2890--2896
In this paper, we present a trilingual parallel corpus for German, Italian and Romansh, a Swiss minority language spoken in the canton of Grisons. The corpus called ALLEGRA contains press releases automatically gathered from the website of the cantonal administration of Grisons. Texts have been preprocessed and aligned with a current state-of-the-art sentence aligner. The corpus is one of the first of its kind, and can be of great interest, particularly for the creation of natural language processing resources and tools for Romansh. We illustrate the use of such a trilingual resource for automatic induction of bilingual lexicons, which is a real challenge for under-represented languages. We induce a bilingual lexicon for German-Romansh by phrase alignment and evaluate the resulting entries with the help of a reference lexicon. We then show that the use of the third language of the corpus {\textemdash} Italian {\textemdash} as a pivot language can improve the precision of the induced lexicon, without loss in terms of quality of the extracted pairs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,601
inproceedings
costantini-etal-2012-intelligibility
Intelligibility assessment in forensic applications
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1401/
Costantini, Giovanni and Paoloni, Andrea and Todisco, Massimiliano
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4113--4116
In the context of forensic phonetics the transcription of intercepted signals is particularly important. However, these signals are often degraded and the transcript may not reflect what was actually pronounced. In the absence of the original signal, the only way to see the level of accuracy that can be obtained in the transcription of poor recordings is to develop an objective methodology for intelligibility measurements. This study has been carried out on a corpus specially built to simulate the real conditions of forensic signals. With reference to this corpus a measurement system of intelligibility based on STI (Speech Transmission Index) has been evaluated so as to assess its performance. The result of the experiment shows a high correlation between objective measurements and subjective evaluations. Therefore it is recommended to use the proposed methodology in order to establish whether a given intercepted signal can be transcribed with sufficient reliability.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,602
inproceedings
pardelli-etal-2012-medical
From medical language processing to {B}io{NLP} domain
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1402/
Pardelli, Gabriella and Sassi, Manuela and Goggi, Sara and Biagioni, Stefania
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2049--2055
This paper presents the results of terminological work on a reference corpus in the domain of Biomedicine. In particular, the research analyses the use of certain terms in Biomedicine in order to verify how they change over time, with the aim of retrieving from the net the very essence of documentation. The terminological sample contains words used in BioNLP and biomedicine and identifies which terms are passing from scientific publications to the daily press and which are rather reserved to scientific production. The final scope of this work is to determine how scientific dissemination to an ever larger part of the society enables a public of common citizens to approach communication on biomedical research and development; and its main source is a reference corpus made up of three main repositories from which information related to BioNLP and Biomedicine is extracted. The paper is divided in three sections: 1) an introduction dedicated to data extracted from scientific documentation; 2) the second section devoted to methodology and data description; 3) the third part containing a statistical representation of terms extracted from the archive: indexes and concordances allow to reflect on the use of certain terms in this field and give possible keys for having access to the extraction of knowledge in the digital era.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,603
inproceedings
akiba-etal-2012-designing
Designing an Evaluation Framework for Spoken Term Detection and Spoken Document Retrieval at the {NTCIR}-9 {S}poken{D}oc Task
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1403/
Akiba, Tomoyosi and Nishizaki, Hiromitsu and Aikawa, Kiyoaki and Kawahara, Tatsuya and Matsui, Tomoko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3527--3534
We describe the evaluation framework for spoken document retrieval for the IR for the Spoken Documents Task, conducted in the ninth NTCIR Workshop. The two parts of this task were a spoken term detection (STD) subtask and an ad hoc spoken document retrieval subtask (SDR). Both subtasks target search terms, passages and documents included in academic and simulated lectures of the Corpus of Spontaneous Japanese. Seven teams participated in the STD subtask and five in the SDR subtask. The results obtained through the evaluation in the workshop are discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,604
inproceedings
moreno-sandoval-etal-2012-spontaneous
Spontaneous Speech Corpora for language learners of {S}panish, {C}hinese and {J}apanese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1404/
Moreno-Sandoval, Antonio and Llanos, Leonardo Campillos and Dong, Yang and Takamori, Emi and Guirao, Jos{\'e} M. and Gozalo, Paula and Kimura, Chieko and Matsui, Kengo and Garrote-Salazar, Marta
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2695--2701
This paper presents a method for designing, compiling and annotating corpora intended for language learners. In particular, we focus on spoken corpora to be used as complementary material in the classroom as well as in examinations. We describe the three corpora (Spanish, Chinese and Japanese) compiled by the Laboratorio de Ling{\"u}{\'i}stica Inform{\'a}tica at the Autonomous University of Madrid (LLI-UAM). A web-based concordance tool has been used to search for examples in the corpus and to provide the text along with the corresponding audio. Teaching materials based on the corpus, consisting of the texts, the audio files and exercises on them, are currently under development.
inproceedings
rousseau-etal-2012-ted
{TED}-{LIUM}: an Automatic Speech Recognition dedicated corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1405/
Rousseau, Anthony and Del{\'e}glise, Paul and Est{\`e}ve, Yannick
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
125--129
This paper presents the corpus developed by the LIUM for Automatic Speech Recognition (ASR), based on the TED Talks. This corpus was built during the IWSLT 2011 Evaluation Campaign, and is composed of 118 hours of speech with its accompanying automatically aligned transcripts. We describe the content of the corpus, how the data was collected and processed, how it will be publicly available and how we built an ASR system using this data leading to a WER score of 17.4 {\%}. The official results we obtained at the IWSLT 2011 evaluation campaign are also discussed.
inproceedings
petasis-2012-sync3
The {SYNC}3 Collaborative Annotation Tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1406/
Petasis, Georgios
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
363--370
The huge amount of information available on the Web creates the need for effective information extraction systems that are able to produce metadata satisfying users' information needs. The development of such systems, in the majority of cases, depends on the availability of an appropriately annotated corpus in order to learn or evaluate extraction models. The production of such corpora can be significantly facilitated by annotation tools that provide user-friendly facilities and enable annotators to annotate documents according to a predefined annotation schema. However, the construction of annotation tools that operate in a distributed environment is a challenging task: the majority of these tools are implemented as Web applications, having to cope with the capabilities offered by browsers. This paper describes the SYNC3 collaborative annotation tool, which implements an alternative architecture: it remains a desktop application, fully exploiting the advantages of desktop applications, but provides collaborative annotation through the use of a centralised server for storing both the documents and their metadata, and instant messaging protocols for communicating events among all annotators. The annotation tool is implemented as a component of the Ellogon language engineering platform, exploiting its extensive annotation engine, its cross-platform abilities and its linguistic processing components, if such a need arises. Finally, the SYNC3 annotation tool is distributed with an open source license, as part of the Ellogon platform.
inproceedings
mapelli-etal-2012-elra
{ELRA} in the heart of a cooperative {HLT} world
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1407/
Mapelli, Val{\'e}rie and Arranz, Victoria and Carr{\'e}, Matthieu and Mazo, H{\'e}l{\`e}ne and Mostefa, Djamel and Choukri, Khalid
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
55--59
This paper aims at giving an overview of ELRA’s recent activities. The first part elaborates on ELRA’s means of boosting the sharing of Language Resources (LRs) within the HLT community through its catalogues and the LRE-Map initiative, as well as its work towards the integration of its LRs within the META-SHARE open infrastructure. The second part shows how ELRA helps in the development and evaluation of HLT, in particular through its numerous participations in collaborative projects for the production of resources and of platforms that facilitate their production and exploitation. A third part focuses on ELRA’s work for clearing IPR issues in an HLT-oriented context, one of its latest initiatives being its involvement in a Fair Research Act proposal to promote easy access to LRs for the widest community. Finally, the last part elaborates on recent actions for disseminating information and promoting cooperation in the field, e.g. the Language Library launched at LREC 2012 and the creation of an International Standard LR Number, a unique LR identifier enabling the accurate identification of LRs. Among the other messages ELRA will be conveying to attendees are the announcement of a set of freely available resources, the establishment of an LR and Evaluation forum, etc.
inproceedings
lambert-etal-2012-automatic
Automatic Translation of Scientific Documents in the {HAL} Archive
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1408/
Lambert, Patrik and Schwenk, Holger and Blain, Fr{\'e}d{\'e}ric
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3933--3936
This paper describes the development of a statistical machine translation system between French and English for scientific papers. This system will be closely integrated into the French HAL open archive, a collection of more than 100,000 scientific papers. We describe the creation of in-domain parallel and monolingual corpora, the development of a domain-specific translation system with the created resources, and its adaptation using monolingual resources only. These techniques allowed us to improve a generic system by more than 10 BLEU points.
inproceedings
sluban-etal-2012-irregularity
Irregularity Detection in Categorized Document Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1409/
Sluban, Borut and Pollak, Senja and Coesemans, Roel and Lavra{\v{c}}, Nada
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1598--1603
The paper presents an approach to extracting irregularities in document corpora, where the documents originate from different sources and the analyst's interest is to find documents which are atypical for the given source. The main contribution of the paper is a voting-based approach to irregularity detection and its evaluation on a collection of newspaper articles from two sources: Western (UK and US) and local (Kenyan) media. Evaluation by a domain expert shows that the method is very effective in uncovering interesting irregularities in categorized document corpora.
inproceedings
giraudel-etal-2012-repere
The {REPERE} Corpus : a multimodal corpus for person recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1410/
Giraudel, Aude and Carr{\'e}, Matthieu and Mapelli, Val{\'e}rie and Kahn, Juliette and Galibert, Olivier and Quintard, Ludovic
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1102--1107
The REPERE Challenge aims to support research on people recognition in multimodal conditions. To assess technology progress, annual evaluation campaigns will be organized from 2012 to 2014. In this context, the REPERE corpus, a corpus of French videos with multimodal annotation, has been developed. This paper presents the datasets collected for the dry run test that took place at the beginning of 2012. Specific annotation tools and guidelines are described. So far, 6 hours of data have been collected and annotated. The last section presents analyses of the annotation distribution and of the interaction between modalities in the corpus.
inproceedings
gesmundo-samardzic-2012-lemmatising
Lemmatising {S}erbian as Category Tagging with Bidirectional Sequence Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1411/
Gesmundo, Andrea and Samard{\v{z}}i{\'c}, Tanja
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2103--2106
We present a novel tool for morphological analysis of Serbian, a low-resource language with rich morphology. Our tool produces lemmatisation and morphological analysis with accuracy considerably higher than that of the existing alternative tools: an 83.6{\%} relative error reduction on lemmatisation and an 8.1{\%} relative error reduction on morphological analysis. The system is trained on a small manually annotated corpus with an approach based on Bidirectional Sequence Classification and Guided Learning techniques, which have recently been adapted with success to a broad set of NLP tagging tasks. In the system presented in this paper, this general approach to tagging is applied to the lemmatisation task for the first time thanks to our novel formulation of lemmatisation as a category tagging task. We show that learning lemmatisation rules from an annotated corpus and integrating context information into the process of morphological analysis provides state-of-the-art performance despite the lack of resources. The proposed system can be used via a web GUI that deploys its best-scoring configuration.
inproceedings
proisl-uhrig-2012-efficient
Efficient Dependency Graph Matching with the {IMS} Open Corpus Workbench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1412/
Proisl, Thomas and Uhrig, Peter
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2750--2756
State-of-the-art dependency representations such as the Stanford Typed Dependencies may represent the grammatical relations in a sentence as directed, possibly cyclic graphs. Querying a syntactically annotated corpus for grammatical structures that are represented as graphs requires graph matching, which is a non-trivial task. In this paper, we present an algorithm for graph matching that is tailored to the properties of large, syntactically annotated corpora. The implementation of the algorithm is built on top of the popular IMS Open Corpus Workbench, allowing corpus linguists to re-use existing infrastructure. An evaluation of the resulting software, CWB-treebank, shows that its performance in real world applications, such as a web query interface, compares favourably to implementations that rely on a relational database or a dedicated graph database while at the same time offering a greater expressive power for queries. An intuitive graphical interface for building the query graphs is available via the Treebank.info project.
inproceedings
tavarez-etal-2012-strategies
Strategies to Improve a Speaker Diarisation Tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1413/
Tavarez, David and Navas, Eva and Erro, Daniel and Saratxaga, Ibon
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4117--4121
This paper describes the different strategies used to improve the results obtained by an off-line speaker diarisation tool with the Albayzin 2010 diarisation database. The errors made by the system have been analyzed and different strategies have been proposed to reduce each kind of error. Very short segments incorrectly labelled and different appearances of one speaker labelled with different identifiers are the most common errors. A post-processing module that refines the segmentation by retraining the GMM models of the speakers involved has been built to cope with these errors. This post-processing module has been tuned with the training dataset and improves the result of the diarisation system by 16.4{\%} in the test dataset.
inproceedings
fujii-etal-2012-effects
Effects of Document Clustering in Modeling {W}ikipedia-style Term Descriptions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1414/
Fujii, Atsushi and Fujii, Yuya and Tokunaga, Takenobu
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2543--2546
Reflecting the rapid growth of science, technology, and culture, it has become common practice to consult tools on the World Wide Web for various terms. Existing search engines provide an enormous volume of information, but retrieved information is not organized. Hand-compiled encyclopedias provide organized information, but the quantity of information is limited. In this paper, aiming to integrate the advantages of both tools, we propose a method to organize a search result based on multiple viewpoints as in Wikipedia. Because viewpoints required for explanation are different depending on the type of a term, such as animal and disease, we model articles in Wikipedia to extract a viewpoint structure for each term type. To identify a set of term types, we independently use manual annotation and automatic document clustering for Wikipedia articles. We also propose an effective feature for clustering of Wikipedia articles. We experimentally show that the document clustering reduces the cost for the manual annotation while maintaining the accuracy for modeling Wikipedia articles.
inproceedings
ballesteros-nivre-2012-maltoptimizer-system
{M}alt{O}ptimizer: A System for {M}alt{P}arser Optimization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1415/
Ballesteros, Miguel and Nivre, Joakim
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2757--2763
Freely available statistical parsers often require careful optimization to produce state-of-the-art results, which can be a non-trivial task especially for application developers who are not interested in parsing research for its own sake. We present MaltOptimizer, a freely available tool developed to facilitate parser optimization using the open-source system MaltParser, a data-driven parser-generator that can be used to train dependency parsers given treebank data. MaltParser offers a wide range of parameters for optimization, including nine different parsing algorithms, two different machine learning libraries (each with a number of different learners), and an expressive specification language that can be used to define arbitrarily rich feature models. MaltOptimizer is an interactive system that first performs an analysis of the training set in order to select a suitable starting point for optimization and then guides the user through the optimization of parsing algorithm, feature model, and learning algorithm. Empirical evaluation on data from the CoNLL 2006 and 2007 shared tasks on dependency parsing shows that MaltOptimizer consistently improves over the baseline of default settings and sometimes even surpasses the result of manual optimization.
inproceedings
conkie-etal-2012-building
Building Text-To-Speech Voices in the Cloud
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1416/
Conkie, Alistair and Okken, Thomas and Kim, Yeon-Jun and Di Fabbrizio, Giuseppe
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3317--3321
The AT{\&}T VoiceBuilder provides a new tool to researchers and practitioners who want to have their voices synthesized by a high-quality commercial-grade text-to-speech system without the need to install, configure, or manage speech processing software and equipment. It is implemented as a web service on the AT{\&}T Speech Mashup Portal. The system records and validates users' utterances, processes them to build a synthetic voice and provides a web service API to make the voice available to real-time applications through a scalable cloud-based processing platform. All the procedures are automated to avoid human intervention. We present experimental comparisons of voices built using the system.
inproceedings
stymne-ahrenberg-2012-practice
On the practice of error analysis for machine translation evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1417/
Stymne, Sara and Ahrenberg, Lars
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1785--1790
Error analysis is a means to assess machine translation output in qualitative terms, which can be used as a basis for the generation of error profiles for different systems. As for other subjective approaches to evaluation it runs the risk of low inter-annotator agreement, but very often in papers applying error analysis to MT, this aspect is not even discussed. In this paper, we report results from a comparative evaluation of two systems where agreement initially was low, and discuss the different ways we used to improve it. We compared the effects of using more or less fine-grained taxonomies, and the possibility to restrict analysis to short sentences only. We report results on inter-annotator agreement before and after measures were taken, on error categories that are most likely to be confused, and on the possibility to establish error profiles also in the absence of a high inter-annotator agreement.
inproceedings
berovic-etal-2012-croatian
{C}roatian Dependency Treebank: Recent Development and Initial Experiments
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1418/
Berovi{\'c}, Da{\v{s}}a and Agi{\'c}, {\v{Z}}eljko and Tadi{\'c}, Marko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1902--1906
We present the current state of development of the Croatian Dependency Treebank {\textemdash} with special emphasis on adapting the Prague Dependency Treebank formalism to Croatian language specifics {\textemdash} and illustrate its possible applications in an experiment with dependency parsing using MaltParser. The treebank currently contains approximately 2870 sentences, of which 2699 sentences and 66930 tokens were used in this experiment. Three linear-time projective algorithms implemented by the MaltParser system {\textemdash} Nivre eager, Nivre standard and stack projective {\textemdash} running on default settings were used in the experiment. The highest-performing system, implementing the Nivre eager algorithm, scored LAS 71.31, UAS 80.93, LA 83.87 within our experimental setup. The results obtained serve as an illustration of the treebank's usefulness in natural language processing research and as a baseline for further research in dependency parsing of Croatian.
inproceedings
kluewer-etal-2012-evaluation
Evaluation of the {K}om{P}arse Conversational Non-Player Characters in a Commercial Virtual World
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1419/
Kluewer, Tina and Xu, Feiyu and Adolphs, Peter and Uszkoreit, Hans
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3535--3542
The paper describes the evaluation of the KomParse system. KomParse is a dialogue system embedded in a 3-D massively multiplayer online game, allowing conversations between non-player characters (NPCs) and game users. In a field test with game users, the system was evaluated with respect to acceptability and usability of the overall system as well as task completion, dialogue control and efficiency of three conversational tasks. Furthermore, subjective feedback was collected for evaluating the individual communication components of the system, such as natural language understanding. The results are very satisfying and promising. In general, both the usability and acceptability tests show that the tested NPC is useful and well accepted by the users. Even if the NPC does not always understand users well and sometimes says unexpected things, it can still provide appropriate responses that help users solve their problems or entertain them.
inproceedings
villegas-etal-2012-using
Using Language Resources in Humanities research
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1420/
Villegas, Marta and Bel, Nuria and Gonzalo, Carlos and Moreno, Amparo and Simelio, Nuria
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3284--3288
In this paper we present two real cases, in the fields of newspaper discourse analysis and communication research, which demonstrate the impact of Language Resources (LRs) and NLP in the humanities. We describe our collaboration with (i) the Feminario research group from the UAB, which has been investigating androcentric practices in the Spanish general press since the 80s and whose research suggests that the Spanish general press has undergone a dehumanization process that excludes women and men, and (ii) the “Municipals'11 online” project, which investigates the Spanish local election campaign in the blogosphere. We show how NLP tools and LRs make possible so-called ‘e-Humanities research’, as they provide the Humanities with tools to perform intensive and automatic text analyses. Language technologies have evolved considerably and are mature enough to provide useful tools to researchers dealing with large amounts of textual data. The language resources developed within the field of NLP have proven useful for other disciplines that are unaware of their existence and would nevertheless greatly benefit from them, as they provide (i) exhaustiveness, to guarantee that data coverage is wide and representative enough, and (ii) reliable and significant results, to guarantee that the reported results are statistically significant.
inproceedings
osenova-etal-2012-treebank
A Treebank-driven Creation of an {O}nto{V}alence Verb lexicon for {B}ulgarian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1421/
Osenova, Petya and Simov, Kiril and Laskova, Laska and Kancheva, Stanislava
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2636--2640
The paper presents a treebank-driven approach to the construction of a Bulgarian valence lexicon with ontological restrictions over the inner participants of the event. First, the underlying ideas behind the Bulgarian Ontology-based lexicon are outlined. Then, the extraction and manipulation of the valence frames is discussed with respect to the BulTreeBank annotation scheme and DOLCE ontology. Also, the most frequent types of syntactic frames are specified as well as the most frequent types of ontological restrictions over the verb arguments. The envisaged application of such a lexicon would be: in assigning ontological labels to syntactically parsed corpora, and expanding the lexicon and lexical information in the Bulgarian Resource Grammar.
inproceedings
rubino-etal-2012-integrating
Integrating {NLP} Tools in a Distributed Environment: A Case Study Chaining a Tagger with a Dependency Parser
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1422/
Rubino, Francesco and Frontini, Francesca and Quochi, Valeria
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2125--2131
The present paper tackles the issue of PoS tag conversion within the framework of a distributed web service platform for the automatic creation of language resources. PoS tagging is now considered a ``solved problem''; yet, because of differences between tagsets, interchange among the various PoS taggers available is still hampered. In this paper we describe the implementation of a PoS-tagged-corpus converter, which is needed for chaining together in a workflow the Freeling PoS tagger for Italian and the DESR dependency parser, given that these two tools have been developed independently. The conversion problems experienced during the implementation, related to the properties of the different tagsets and of tagset conversion in general, are discussed together with the heuristics implemented in the attempt to solve them. Finally, the converter is evaluated by assessing the impact of conversion on the performance of the dependency parser. From this we learn that in most cases parsing errors are due to actual tagging errors, and not to conversion itself. Besides, information on accuracy loss is an important feature in a distributed environment of (NLP) services, where users need to decide which services best suit their needs.
inproceedings
yang-etal-2012-spell
Spell Checking for {C}hinese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1423/
Yang, Shaohua and Zhao, Hai and Wang, Xiaolin and Lu, Bao-liang
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
730--736
This paper presents some novel results on Chinese spell checking. In this paper, a concise algorithm based on minimized-path segmentation is proposed to reduce the cost and suit the needs of current Chinese input systems. The proposed algorithm is actually derived from a simple assumption that spelling errors often make the number of segments larger. The experimental results are quite positive and implicitly verify the effectiveness of the proposed assumption. Finally, all approaches work together to output a result much better than the baseline with 12{\%} performance improvement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,624
inproceedings
islam-mehler-2012-customization
Customization of the {E}uroparl Corpus for Translation Studies
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1424/
Islam, Zahurul and Mehler, Alexander
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2505--2510
Currently, the area of translation studies lacks corpora by which translation scholars can validate their theoretical claims, for example, regarding the scope of the characteristics of the translation relation. In this paper, we describe a customized resource in the area of translation studies that mainly addresses research on the properties of the translation relation. Our experimental results show that the Type-Token-Ratio (TTR) is not a universally valid indicator of the simplification of translation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,625
inproceedings
strapparava-etal-2012-parallel
A Parallel Corpus of Music and Lyrics Annotated with Emotions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1425/
Strapparava, Carlo and Mihalcea, Rada and Battocchi, Alberto
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2343--2346
In this paper, we introduce a novel parallel corpus of music and lyrics, annotated with emotions at line level. We first describe the corpus, consisting of 100 popular songs, each of them including a music component, provided in the MIDI format, as well as a lyrics component, made available as raw text. We then describe our work on enhancing this corpus with emotion annotations using crowdsourcing. We also present some initial experiments on emotion classification using the music and the lyrics representations of the songs, which lead to encouraging results, thus demonstrating the promise of using joint music-lyric models for song processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,626
inproceedings
bianchi-etal-2012-creation
Creation of a bottom-up corpus-based ontology for {I}talian Linguistics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1426/
Bianchi, Elisa and Tavosanis, Mirko and Giovannetti, Emiliano
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2641--2647
This paper describes the steps of construction of a shallow lexical ontology of Italian Linguistics, set to be used by a meta-search engine for query refinement. The ontology was constructed with the software Prot{\'e}g{\'e} 4.0.2 and is in OWL format; its construction has been carried out following the steps described in the well-known Ontology Learning From Text (OLFT) layer cake. The starting point was the automatic term extraction from a corpus of web documents concerning the domain of interest (304,000 words); as regards corpus construction, we describe the main criteria of the web documents selection and its critical points, concerning the definition of user profile and of degrees of specialisation. We describe then the process of term validation and construction of a glossary of terms of Italian Linguistics; afterwards, we outline the identification of synonymic chains and the main criteria of ontology design: top classes of ontology are Concept (containing taxonomy of concepts) and Terms (containing terms of the glossary as instances), while concepts are linked through part-whole and involved-role relation, both borrowed from Wordnet. Finally, we show some examples of the application of the ontology for query refinement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,627
inproceedings
nordhoff-hammarstrom-2012-glottolog
Glottolog/Langdoc:Increasing the visibility of grey literature for low-density languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1427/
Nordhoff, Sebastian and Hammarstr{\"o}m, Harald
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3289--3294
Language resources can be divided into structural resources treating phonology, morphosyntax, semantics etc. and resources treating the social, demographic, ethnic, political context. A third type are meta-resources, like bibliographies, which provide access to the resources of the first two kinds. This poster will present the Glottolog/Langdoc project, a comprehensive bibliography providing web access to 180k bibliographical records for (mainly) low-visibility resources from low-density languages. The resources are annotated for macro-area, content language, and document type and are available in XHTML and RDF.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,628
inproceedings
dayrell-etal-2012-rhetorical
Rhetorical Move Detection in {E}nglish Abstracts: Multi-label Sentence Classifiers and their Annotated Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1428/
Dayrell, Carmen and Candido Jr., Arnaldo and Lima, Gabriel and Machado Jr., Danilo and Copestake, Ann and Feltrim, Val{\'e}ria and Tagnin, Stella and Aluisio, Sandra
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1604--1609
The relevance of automatically identifying rhetorical moves in scientific texts has been widely acknowledged in the literature. This study focuses on abstracts of standard research papers written in English and aims to tackle a fundamental limitation of current machine-learning classifiers: they are mono-labeled, that is, a sentence can only be assigned one single label. However, such approach does not adequately reflect actual language use since a move can be realized by a clause, a sentence, or even several sentences. Here, we present MAZEA (Multi-label Argumentative Zoning for English Abstracts), a multi-label classifier which automatically identifies rhetorical moves in abstracts but allows for a given sentence to be assigned as many labels as appropriate. We have resorted to various other NLP tools and used two large training corpora: (i) one corpus consists of 645 abstracts from physical sciences and engineering (PE) and (ii) the other corpus is made up of 690 from life and health sciences (LH). This paper presents our preliminary results and also discusses the various challenges involved in multi-label tagging and works towards satisfactory solutions. In addition, we also make our two training corpora publicly available so that they may serve as benchmark for this new task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,629
inproceedings
aleksandrov-strapparava-2012-ngramquery
{N}gram{Q}uery - Smart Information Extraction from {G}oogle N-gram using External Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1429/
Aleksandrov, Martin and Strapparava, Carlo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
563--568
This paper describes the implementation of a generalized query language on the Google Ngram database. This language allows for very expressive queries that exploit semantic similarity acquired both from corpora (e.g. LSA) and from WordNet, and phonetic similarity available from the CMU Pronouncing Dictionary. It contains a large number of new operators, which, combined in a proper query, can help users extract n-grams having similarly close syntactic and semantic relational properties. We also characterize the operators with respect to their corpus affiliation and their functionality. The query syntax is then given in terms of Backus-Naur rules, followed by a few interesting examples of how the tool can be used. We also describe the command-line arguments the user could input, comparing them with the ones for retrieving n-grams through the interface of the Google Ngram database. Finally we discuss possible improvements on the extraction process and some relevant query completeness issues.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,630
inproceedings
weiss-ahrenberg-2012-error
Error profiling for evaluation of machine-translated text: a {P}olish-{E}nglish case study
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1430/
Weiss, Sandra and Ahrenberg, Lars
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1764--1770
We present a study of Polish-English machine translation, where the impact of various types of errors on cohesion and comprehensibility of the translations were investigated. The following phenomena are in focus: (i) The most common errors produced by current state-of-the-art MT systems for Polish-English MT. (ii) The effect of different types of errors on text cohesion. (iii) The effect of different types of errors on readers' understanding of the translation. We found that errors of incorrect and missing translations are the most common for current systems, while the category of non-translated words had the most negative impact on comprehension. All three of these categories contributed to the breaking of cohesive chains. The correlation between number of errors found in a translation and number of wrong answers in the comprehension tests was low. Another result was that non-native speakers of English performed at least as good as native speakers on the comprehension tests.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,631
inproceedings
lis-2012-polish
{P}olish Multimodal Corpus {---} a collection of referential gestures
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1431/
Lis, Magdalena
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1108--1113
In face to face interaction, people refer to objects and events not only by means of speech but also by means of gesture. The present paper describes building a corpus of referential gestures. The aim is to investigate gestural reference by incorporating insights from semantic ontologies and by employing a more holistic view on referential gestures. The paper's focus is on presenting the data collection procedure and discussing the corpus' design; additionally, the first insights from constructing the annotation scheme are described.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,632
inproceedings
braffort-boutora-2012-degels1
{DEGELS}1: A comparable corpus of {F}rench {S}ign {L}anguage and co-speech gestures
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1432/
Braffort, Annelies and Boutora, Le{\"i}la
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2426--2429
In this paper, we describe DEGELS1, a comparable corpus of French Sign Language and co-speech gestures that has been created to serve as a testbed corpus for the DEGELS workshops. These workshop series were initiated in France for researchers studying French Sign Language and co-speech gestures in French, with the aim of comparing methodologies for corpus annotation. An extract was used for the first event DEGELS2011 dedicated to the annotation of pointing, and the same extract will be used for DEGELS2012, dedicated to segmentation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,633
inproceedings
gonzalez-etal-2012-semi
Semi-Automatic Sign Language Corpora Annotation using Lexical Representations of Signs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1433/
Gonzalez, Matilde and Filhol, Michael and Collet, Christophe
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2430--2434
Nowadays much research focuses on the automatic recognition of sign language. High recognition rates are achieved using large amounts of training data. This data is generally collected by manually annotating SL video corpora. However, this is time consuming and the results depend on the annotators' knowledge. In this work we intend to assist annotation in terms of glosses, which consists in writing down the meaning sign for sign, thanks to automatic video processing techniques. In this case using learning data is not suitable, since as a first step the corpus would need to be manually annotated. Moreover, the context dependency of signs and the co-articulation effect in continuous SL make the collection of learning data very difficult. Here we present a novel approach which uses lexical representations of signs to overcome these problems and image processing techniques to match sign performances to sign representations. Signs are described using Zeebede (ZBD), a descriptor of signs that accounts for the high variability of signs. A ZBD database is used to store signs and can be queried on several characteristics. From a video corpus, sequence features are extracted using a robust body-part tracking approach and a semi-automatic sign segmentation algorithm. Evaluation has shown the performance and limitations of the proposed approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,634
inproceedings
iliev-genov-2012-expanding
Expanding Parallel Resources for Medium-Density Languages for Free
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1434/
Iliev, Georgi and Genov, Angel
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3937--3943
We discuss a previously proposed method for augmenting parallel corpora of limited size for the purposes of machine translation through monolingual paraphrasing of the source language. We develop a three-stage shallow paraphrasing procedure to be applied to the Swedish-Bulgarian language pair for which limited parallel resources exist. The source language exhibits specifics not typical of high-density languages already studied in a similar setting. Paraphrases of a highly productive type of compound nouns in Swedish are generated by a corpus-based technique. Certain Swedish noun-phrase types are paraphrased using basic heuristics. Further we introduce noun-phrase morphological variations for better wordform coverage. We evaluate the performance of a phrase-based statistical machine translation system trained on a baseline parallel corpus and on three stages of artificial enlargement of the source-language training data. Paraphrasing is shown to have no effect on performance for the Swedish-English translation task. We show a small, yet consistent, increase in the BLEU score of Swedish-Bulgarian translations of larger token spans on the first enlargement stage. A small improvement in the overall BLEU score of Swedish-Bulgarian translation is achieved on the second enlargement stage. We find that both improvements justify further research into the method for the Swedish-Bulgarian translation task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,635
inproceedings
vasiljevs-etal-2012-creation
Creation of an Open Shared Language Resource Repository in the {N}ordic and {B}altic Countries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1435/
Vasi{\c{l}}jevs, Andrejs and Forsberg, Markus and Gornostay, Tatiana and Hansen, Dorte Haltrup and J{\'o}hannsd{\'o}ttir, Krist{\'i}n and Lyse, Gunn and Lind{\'e}n, Krister and Offersgaard, Lene and Olsen, Sussi and Pedersen, Bolette and R{\"o}gnvaldsson, Eir{\'i}kur and Skadi{\c{n}}a, Inguna and De Smedt, Koenraad and Oksanen, Ville and Rozis, Roberts
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1076--1083
The META-NORD project has contributed to an open infrastructure for language resources (data and tools) under the META-NET umbrella. This paper presents the key objectives of META-NORD and reports on the results achieved in the first year of the project. META-NORD has mapped and described the national language technology landscape in the Nordic and Baltic countries in terms of language use, language technology and resources, main actors in the academy, industry, government and society; identified and collected the first batch of language resources in the Nordic and Baltic countries; documented, processed, linked, and upgraded the identified language resources to agreed standards and guidelines. The three horizontal multilingual actions in META-NORD are overviewed in this paper: linking and validating Nordic and Baltic wordnets, the harmonisation of multilingual Nordic and Baltic treebanks, and consolidating multilingual terminology resources across European countries. This paper also touches upon intellectual property rights for the sharing of language resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,636
inproceedings
gojun-etal-2012-adapting
Adapting and evaluating a generic term extraction tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1436/
Gojun, Anita and Heid, Ulrich and Wei{\ss}bach, Bernd and Loth, Carola and Mingers, Insa
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
651--656
We present techniques for monolingual term candidate extraction which are being developed in the EU project TTC. We designed an application for German and English data that serves as a first evaluation of the methods for terminology extraction used in the project. The application situation highlighted the need for tools to handle lemmatization errors and to remove incomplete word sequences from multi-word term candidate lists, as well as the fact that the provision of German citation forms requires more morphological knowledge than TTC's slim approach can provide. We show a detailed evaluation of our extraction results and discuss the method for the evaluation of terminology extraction systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,637
inproceedings
reynaert-etal-2012-beyond
Beyond {S}o{N}a{R}: towards the facilitation of large corpus building efforts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1437/
Reynaert, Martin and Schuurman, Ineke and Hoste, V{\'e}ronique and Oostdijk, Nelleke and van Gompel, Maarten
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2897--2904
In this paper we report on the experiences gained in the recent construction of the SoNaR corpus, a 500 MW reference corpus of contemporary, written Dutch. It shows what can realistically be done within the confines of a project setting where there are limitations to the duration in time as well as to the budget, employing current state-of-the-art tools, standards and best practices. By doing so we aim to pass on insights that may be beneficial for anyone considering undertaking an effort towards building a large, varied yet balanced corpus for use by the wider research community. Various issues are discussed that come into play while compiling a large corpus, including approaches to acquiring texts, the arrangement of IPR, the choice of text formats, and steps to be taken in the preprocessing of data from widely different origins. We describe FoLiA, a new XML format geared at rich linguistic annotations. We also explain the rationale behind the investment in the high-quality semi-automatic enrichment of a relatively small (1 MW) subset with very rich syntactic and semantic annotations. Finally, we present some ideas about future developments and the direction corpus development may take, such as setting up an integrated work flow between web services and the potential role for ISOcat. We list tips for potential corpus builders, tricks they may want to try and further recommendations regarding technical developments future corpus builders may wish to hope for.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,638
inproceedings
lefevre-etal-2012-leveraging
Leveraging study of robustness and portability of spoken language understanding systems across languages and domains: the {PORTMEDIA} corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1438/
Lef{\`e}vre, Fabrice and Mostefa, Djamel and Besacier, Laurent and Est{\`e}ve, Yannick and Quignard, Matthieu and Camelin, Nathalie and Favre, Benoit and Jabaian, Bassam and Rojas-Barahona, Lina M.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1436--1442
The PORTMEDIA project is intended to develop new corpora for the evaluation of spoken language understanding systems. The newly collected data are in the field of human-machine dialogue systems for tourist information in French in line with the MEDIA corpus. Transcriptions and semantic annotations, obtained by low-cost procedures, are provided to allow a thorough evaluation of the systems' capabilities in terms of robustness and portability across languages and domains. A new test set with some adaptation data is prepared for each case: in Italian as an example of a new language, for ticket reservation as an example of a new domain. Finally the work is complemented by the proposition of a new high level semantic annotation scheme well-suited to dialogue data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,639
inproceedings
agarwal-etal-2012-gui
A {GUI} to Detect and Correct Errors in {H}indi Dependency Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1439/
Agarwal, Rahul and Ambati, Bharat Ram and Singh, Anil Kumar
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1907--1911
A treebank is an important resource for developing many NLP based tools. Errors in the treebank may lead to errors in the tools that use it. It is essential to ensure the quality of a treebank before it can be deployed for other purposes. Automatic (or semi-automatic) detection of errors in the treebank can reduce the manual work required to find and remove errors. Usually, the errors found automatically are manually corrected by the annotators. There is not much work reported so far on error correction tools which help the annotators in correcting errors efficiently. In this paper, we present such an error correction tool that is an extension of the error detection method described earlier (Ambati et al., 2010; Ambati et al., 2011; Agarwal et al., 2012).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,640
inproceedings
atserias-etal-2012-spell
Spell Checking in {S}panish: The Case of Diacritic Accents
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1440/
Atserias, Jordi and Fuentes, Maria and Nazar, Rogelio and Renau, Irene
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
737--742
This article presents the problem of diacritic restoration (or diacritization) in the context of spell-checking, with the focus on an orthographically rich language such as Spanish. We argue that despite the large volume of work published on the topic of diacritization, currently available spell-checking tools have still not found a proper solution to the problem in those cases where both forms of a word are listed in the checker's dictionary. This is the case, for instance, when a word form exists with and without diacritics, such as continuo ‘continuous' and continu{\'o} ‘he/she/it continued', or when different diacritics make other word distinctions, as in contin{\'u}o ‘I continue'. We propose a very simple solution based on a word bigram model derived from correctly typed Spanish texts and evaluate the ability of this model to restore diacritics in artificial as well as real errors. The case of diacritics is only meant to be an example of the possible applications for this idea, yet we believe that the same method could be applied to other kinds of orthographic or even grammatical errors. Moreover, given that no explicit linguistic knowledge is required, the proposed model can be used with other languages provided that a large normative corpus is available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,641
inproceedings
hahn-etal-2012-iterative
Iterative Refinement and Quality Checking of Annotation Guidelines {---} How to Deal Effectively with Semantically Sloppy Named Entity Types, such as Pathological Phenomena
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1441/
Hahn, Udo and Beisswanger, Elena and Buyko, Ekaterina and Faessler, Erik and Traum{\"u}ller, Jenny and Schr{\"o}der, Susann and Hornbostel, Kerstin
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3881--3885
We here discuss a methodology for dealing with the annotation of semantically hard to delineate, i.e., sloppy, named entity types. To illustrate sloppiness of entities, we treat an example from the medical domain, namely pathological phenomena. Based on our experience with iterative guideline refinement we propose to carefully characterize the thematic scope of the annotation by positive and negative coding lists and allow for alternative, short vs. long mention span annotations. Short spans account for canonical entity mentions (e.g., standardized disease names), while long spans cover descriptive text snippets which contain entity-specific elaborations (e.g., anatomical locations, observational details, etc.). Using this stratified approach, evidence for increasing annotation performance is provided by kappa-based inter-annotator agreement measurements over several, iterative annotation rounds using continuously refined guidelines. The latter reflects the increasing understanding of the sloppy entity class both from the perspective of guideline writers and users (annotators). Given our data, we have gathered evidence that we can deal with sloppiness in a controlled manner and expect inter-annotator agreement values around 80{\%} for PathoJen, the pathological phenomena corpus currently under development in our lab.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,642
inproceedings
augustinus-etal-2012-example
Example-Based Treebank Querying
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1442/
Augustinus, Liesbeth and Vandeghinste, Vincent and Van Eynde, Frank
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3161--3167
The recent construction of large linguistic treebanks for spoken and written Dutch (e.g. CGN, LASSY, Alpino) has created new and exciting opportunities for the empirical investigation of Dutch syntax and semantics. However, the exploitation of those treebanks requires knowledge of specific data structures and query languages such as XPath. Linguists who are unfamiliar with formal languages are often reluctant towards learning such a language. In order to make treebank querying more attractive for non-technical users we developed GrETEL (Greedy Extraction of Trees for Empirical Linguistics), a query engine in which linguists can use natural language examples as a starting point for searching the Lassy treebank without knowledge about tree representations nor formal query languages. By allowing linguists to search for similar constructions as the example they provide, we hope to bridge the gap between traditional and computational linguistics. Two case studies are conducted to provide a concrete demonstration of the tool. The architecture of the tool is optimised for searching the LASSY treebank, but the approach can be adapted to other treebank lay-outs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,643
inproceedings
grissom-ii-miyao-2012-annotating
Annotating Factive Verbs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1443/
Grissom II, Alvin and Miyao, Yusuke
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4068--4072
We have created a scheme for annotating corpora designed to capture relevant aspects of factivity in verb-complement constructions. Factivity constructions are a well-known linguistic phenomenon that embed presuppositions about the state of the world into a clause. These embedded presuppositions provide implicit information about facts assumed to be true in the world, and are thus potentially valuable in areas of research such as textual entailment. We attempt to address both clear-cut cases of factivity and non-factivity, as well as account for the fluidity and ambiguous nature of some realizations of this construction. Our extensible scheme is designed to account for distinctions between claims, performatives, atypical uses of factivity, and the authority of the one making the utterance. We introduce a simple XML-based syntax for the annotation of factive verbs and clauses, in order to capture this information. We also provide an analysis of the issues which led to these annotative decisions, in the hope that these analyses will be beneficial to those dealing with factivity in a practical context.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,644
inproceedings
dickinson-ledbetter-2012-annotating
Annotating Errors in a {H}ungarian Learner Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1444/
Dickinson, Markus and Ledbetter, Scott
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1659--1664
We are developing and annotating a learner corpus of Hungarian, composed of student journals from three different proficiency levels written at Indiana University. Our annotation marks learner errors that are of different linguistic categories, including phonology, morphology, and syntax, but defining the annotation for an agglutinative language presents several issues. First, we must adapt an analysis that is centered on the morpheme rather than the word. Second, and more importantly, we see a need to distinguish errors from secondary corrections. We argue that although certain learner errors require a series of corrections to reach a target form, these secondary corrections, conditioned on those that come before, are our own adjustments that link the learner`s productions to the target form and are not representative of the learner`s internal grammar. In this paper, we report the annotation scheme and the principles that guide it, as well as examples illustrating its functionality and directions for expansion.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,645
inproceedings
besancon-etal-2012-evaluation
Evaluation of a Complex Information Extraction Application in Specific Domain
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1445/
Besan{\c{c}}on, Romaric and Ferret, Olivier and Jean-Louis, Ludovic
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2056--2063
Operational intelligence applications in specific domains are developed using numerous natural language processing technologies and tools. A challenge for this integration is to take into account the limitations of each of these technologies in the global evaluation of the application. We present in this article a complex intelligence application for the gathering of information from the Web about recent seismic events. We present the different components needed for the development of such a system, including Information Extraction, Filtering and Clustering, and the technologies behind each component. We also propose an independent evaluation of each component and an insight into their influence on the overall performance of the system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,646
inproceedings
bott-etal-2012-text
Text Simplification Tools for {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1446/
Bott, Stefan and Saggion, Horacio and Mille, Simon
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1665--1671
In this paper we describe the development of a text simplification system for Spanish. Text simplification is the adaptation of a text to the special needs of certain groups of readers, such as language learners, people with cognitive difficulties and elderly people, among others. There is a clear need for simplified texts, but manual production and adaptation of existing texts is labour intensive and costly. Automatic simplification is a field which attracts growing attention in Natural Language Processing, but, to the best of our knowledge, there are no simplification tools for Spanish. We present a prototype for automatic simplification, which shows that the most important structural simplification operations can be successfully treated with an approach based on rules which can potentially be improved by statistical methods. For the development of this prototype we carried out a corpus study which aims at identifying the operations a text simplification system needs to carry out in order to produce an output similar to what human editors produce when they simplify texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,647
inproceedings
comelles-etal-2012-verta
{VERT}a: Linguistic features in {MT} evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1447/
Comelles, Elisabet and Atserias, Jordi and Arranz, Victoria and Castell{\'o}n, Irene
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3944--3950
In the last decades, a wide range of automatic metrics that use linguistic knowledge has been developed. Some of them are based on lexical information, such as METEOR; others rely on the use of syntax, either using constituent or dependency analysis; and others use semantic information, such as Named Entities and semantic roles. All these metrics work at a specific linguistic level, but some researchers have tried to combine linguistic information, either by combining several metrics following a machine-learning approach or focusing on the combination of a wide variety of metrics in a simple and straightforward way. However, little research has been conducted on how to combine linguistic features from a linguistic point of view. In this paper we present VERTa, a metric which aims at using and combining a wide variety of linguistic features at lexical, morphological, syntactic and semantic level. We provide a description of the metric and report some preliminary experiments which will help us to discuss the use and combination of certain linguistic features in order to improve the metric performance
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,648
inproceedings
voutilainen-etal-2012-specifying
Specifying Treebanks, Outsourcing Parsebanks: {F}inn{T}ree{B}ank 3
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1448/
Voutilainen, Atro and Muhonen, Kristiina and Purtonen, Tanja and Lind{\'e}n, Krister
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1927--1931
Corpus-based treebank annotation is known to result in incomplete coverage of mid- and low-frequency linguistic constructions: the linguistic representation and corpus annotation quality are sometimes suboptimal. Large descriptive grammars cover also many mid- and low-frequency constructions. We argue for use of large descriptive grammars and their sample sentences as a basis for specifying higher-coverage grammatical representations. We present a sample case from an ongoing project (FIN-CLARIN FinnTreeBank) where a grammatical representation is documented as an annotator`s manual alongside manual annotation of sample sentences extracted from a large descriptive grammar of Finnish. We outline the linguistic representation (morphology and dependency syntax) for Finnish, and show how the resulting `Grammar Definition Corpus' and the documentation is used as a task specification for an external subcontractor for building a parser engine for use in morphological and dependency syntactic analysis of large volumes of Finnish for parsebanking purposes. The resulting corpus, FinnTreeBank 3, is due for release in June 2012, and will contain tens of millions of words from publicly available corpora of Finnish with automatic morphological and dependency syntactic analysis, for use in research on corpus linguistics and language engineering.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,649
inproceedings
cutugno-etal-2012-w
{W}-{P}h{AMT}: A web tool for phonetic multilevel timeline visualization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1449/
Cutugno, Francesco and Leano, Vincenza Anna and Origlia, Antonio
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4127--4131
This paper presents a web platform with its own graphic environment to visualize and filter multilevel phonetic annotations. The tool accepts as input Annotation Graph XML and Praat TextGrids files and converts these files into a specific XML format. XML output is used to browse data by means of a web tool using a visualization metaphor, namely a timeline. A timeline is a graphical representation of a period of time, on which relevant events are marked. Events are usually distributed over many layers in a geometrical metaphor represented by segments and points spatially distributed with reference to a temporal axis. The tool shows all the annotations included in the uploaded dataset, allowing the listening of the entire file or of its parts. Filtering is allowed on annotation labels by means of string pattern matching. The web service includes cloud services to share data with other users. The tool is available at \url{http://w-phamt.fisica.unina.it}
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,650
inproceedings
calzolari-etal-2012-lre
The {LRE} Map. Harmonising Community Descriptions of Resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1450/
Calzolari, Nicoletta and Del Gratta, Riccardo and Francopoulo, Gil and Mariani, Joseph and Rubino, Francesco and Russo, Irene and Soria, Claudia
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1084--1089
Accurate and reliable documentation of Language Resources is an undisputable need: documentation is the gateway to discovery of Language Resources, a necessary step towards promoting the data economy. Language resources that are not documented virtually do not exist: for this reason every initiative able to collect and harmonise metadata about resources represents a valuable opportunity for the NLP community. In this paper we describe the LRE Map, reporting statistics on resources associated with LREC2012 papers and providing comparisons with LREC2010 data. The LRE Map, jointly launched by FLaReNet and ELRA in conjunction with the LREC 2010 Conference, is an instrument for enhancing availability of information about resources, either new or already existing ones. It wants to reinforce and facilitate the use of standards in the community. The LRE Map web interface provides the possibility of searching according to a fixed set of metadata and to view the details of extracted resources. The LRE Map is continuing to collect bottom-up input about resources from authors of other conferences through standard submission process. This will help broadening the notion of “language resources” and attract to the field neighboring disciplines that so far have been only marginally involved by the standard notion of language resources.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,651
inproceedings
forascu-tufis-2012-romanian
{R}omanian {T}ime{B}ank: An Annotated Parallel Corpus for Temporal Information
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1451/
For{\u{a}}scu, Corina and Tufi{\c{s}}, Dan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3762--3766
The paper describes the main steps for the construction, annotation and validation of the Romanian version of the TimeBank corpus. Starting from the English TimeBank corpus {\textemdash} the reference annotated corpus in the temporal domain, we have translated all the 183 English news texts into Romanian and mapped the English annotations onto Romanian, with a success rate of 96.53{\%}. Based on ISO-Time - the emerging standard for representing temporal information, which includes many of the previous annotations schemes -, we have evaluated the automatic transfer onto Romanian and, when necessary, corrected the Romanian annotations so that in the end we obtained a 99.18{\%} transfer rate for the TimeML annotations. In very few cases, due to language peculiarities, some original annotations could not be transferred. For the portability of the temporal annotation standard to Romanian, we suggested some additions for the ISO-Time standard, concerning especially the EVENT tag, based on linguistic evidence, the Romanian grammar, and also on the localisations of TimeML to other Romance languages. Future improvements to the Ro-TimeBank will take into consideration all temporal expressions, signals and events in texts, even those with a not very clear temporal anchoring.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,652
inproceedings
negri-etal-2012-chinese
{C}hinese Whispers: Cooperative Paraphrase Acquisition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1452/
Negri, Matteo and Mehdad, Yashar and Marchetti, Alessandro and Giampiccolo, Danilo and Bentivogli, Luisa
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2659--2665
We present a framework for the acquisition of sentential paraphrases based on crowdsourcing. The proposed method maximizes the lexical divergence between an original sentence s and its valid paraphrases by running a sequence of paraphrasing jobs carried out by a crowd of non-expert workers. Instead of collecting direct paraphrases of s, at each step of the sequence workers manipulate semantically equivalent reformulations produced in the previous round. We applied this method to paraphrase English sentences extracted from Wikipedia. Our results show that, keeping at each round n the most promising paraphrases (i.e. those most lexically dissimilar from those acquired at round n-1), the monotonic increase of divergence allows us to collect good-quality paraphrases in a cost-effective manner.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,653
inproceedings
johannessen-etal-2012-nordic
The {N}ordic Dialect Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1453/
Johannessen, Janne Bondi and Priestley, Joel and Hagen, Kristin and N{\o}klestad, Anders and Lynum, Andr{\'e}
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3387--3391
In this paper, we describe the Nordic Dialect Corpus, which has recently been completed. The corpus has a variety of features that combined makes it an advanced tool for language researchers. These features include: Linguistic contents (dialects from five closely related languages), annotation (tagging and two types of transcription), search interface (advanced possibilities for combining a large array of search criteria and results presentation in an intuitive and simple interface), many search variables (linguistics-based, informant-based, time-based), multimedia display (linking of sound and video to transcriptions), display of results in maps, display of informant details (number of words and other information on informants), advanced results handling (concordances, collocations, counts and statistics shown in a variety of graphical modes, plus further processing). Finally, and importantly, the corpus is freely available for research on the web. We give examples of both various kinds of searches, of displays of results and of results handling.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,654
inproceedings
read-etal-2012-wesearch
The {W}e{S}earch Corpus, Treebank, and Treecache {--} A Comprehensive Sample of User-Generated Content
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1454/
Read, Jonathon and Flickinger, Dan and Dridan, Rebecca and Oepen, Stephan and {\O}vrelid, Lilja
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1829--1835
We present the WeSearch Data Collection (WDC){\textemdash}a freely redistributable, partly annotated, comprehensive sample of User-Generated Content. The WDC contains data extracted from a range of genres of varying formality (user forums, product review sites, blogs and Wikipedia) and covers two different domains (NLP and Linux). In this article, we describe the data selection and extraction process, with a focus on the extraction of linguistic content from different sources. We present the format of syntacto-semantic annotations found in this resource and present initial parsing results for these data, as well as some reflections following a first round of treebanking.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,655
inproceedings
cvrcek-etal-2012-legal
Legal electronic dictionary for {C}zech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1455/
Cvr{\v{c}}ek, Franti{\v{s}}ek and Pala, Karel and Rychl{\'y}, Pavel
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
283--287
In the paper the results of the project of Czech Legal Electronic dictionary (PES) are presented. During the 4 year project the large legal terminological dictionary of Czech was created in the form of the electronic lexical database enriched with a hierarchical ontology of legal terms. It contains approx. 10,000 entries {\textemdash} legal terms together with their ontological relations and hypertext references. In the second part of the project the web interface based on the platform DEBII has been designed and implemented that allows users to browse and search effectively the database. At the same time the Czech Dictionary of Legal Terms will be generated from the database and later printed as a book. Inter-annotator`s agreement in manual selection of legal terms was high {\textemdash} approx. 95 {\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,656
inproceedings
kaspersson-etal-2012-also
This also affects the context - Errors in extraction based summaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1456/
Kaspersson, Thomas and Smith, Christian and Danielsson, Henrik and J{\"o}nsson, Arne
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
173--178
Although previous studies have shown that errors occur in texts summarized by extraction based summarizers, no study has investigated how common different types of errors are and how that changes with degree of summarization. We have conducted studies of errors in extraction based single document summaries using 30 texts, summarized to 5 different degrees and tagged for errors by human judges. The results show that the most common errors are absent cohesion or context and various types of broken or missing anaphoric references. The amount of errors is dependent on the degree of summarization where some error types have a linear relation to the degree of summarization and others have U-shaped or cut-off linear relations. These results show that the degree of summarization has to be taken into account to minimize the amount of errors by extraction based summarizers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,657
inproceedings
soria-etal-2012-flarenet
The {FL}a{R}e{N}et Strategic Language Resource Agenda
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1457/
Soria, Claudia and Bel, N{\'u}ria and Choukri, Khalid and Mariani, Joseph and Monachini, Monica and Odijk, Jan and Piperidis, Stelios and Quochi, Valeria and Calzolari, Nicoletta
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1379--1386
The FLaReNet Strategic Agenda highlights the most pressing needs for the sector of Language Resources and Technologies and presents a set of recommendations for its development and progress in Europe, as issued from a three-year consultation of the FLaReNet European project. The FLaReNet recommendations are organised around nine dimensions: a) documentation b) interoperability c) availability, sharing and distribution d) coverage, quality and adequacy e) sustainability f) recognition g) development h) infrastructure and i) international cooperation. As such, they cover a broad range of topics and activities, spanning over production and use of language resources, licensing, maintenance and preservation issues, infrastructures for language resources, resource identification and sharing, evaluation and validation, interoperability and policy issues. The intended recipients belong to a large set of players and stakeholders in Language Resources and Technology, ranging from individuals to research and education institutions, to policy-makers, funding agencies, SMEs and large companies, service and media providers. The main goal of these recommendations is to serve as an instrument to support stakeholders in planning for and addressing the urgencies of the Language Resources and Technologies of the future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,658
inproceedings
ruppenhofer-rehbein-2012-yes
Yes we can!? Annotating {E}nglish modal verbs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1458/
Ruppenhofer, Josef and Rehbein, Ines
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1538--1545
This paper presents an annotation scheme for English modal verbs together with sense-annotated data from the news domain. We describe our annotation scheme and discuss problematic cases for modality annotation based on the inter-annotator agreement during the annotation. Furthermore, we present experiments on automatic sense tagging, showing that our annotations do provide a valuable training resource for NLP systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,659
inproceedings
suarez-etal-2012-building
Building a Multimodal Laughter Database for Emotion Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1459/
Suarez, Merlin Teodosia and Cu, Jocelynn and Maria, Madelene Sta.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2347--2350
Laughter is a significant paralinguistic cue that is largely ignored in multimodal affect analysis. In this work, we investigate how a multimodal laughter corpus can be constructed and annotated both with discrete and dimensional labels of emotions for acted and spontaneous laughter. Professional actors enacted emotions to produce acted clips, while spontaneous laughter was collected from volunteers. Experts annotated acted laughter clips, while volunteers who possess an acceptable empathic quotient score annotated spontaneous laughter clips. The data was pre-processed to remove noise from the environment, and then manually segmented starting from the onset of the expression until its offset. Our findings indicate that laughter carries distinct emotions, and that emotion in laughter is best recognized using audio information rather than facial information. This may be explained by emotion regulation, i.e. laughter is used to suppress or regulate certain emotions. Furthermore, contextual information plays a crucial role in understanding the kind of laughter and emotion in the enactment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,660
inproceedings
frommer-etal-2012-towards
Towards Emotion and Affect Detection in the Multimodal {LAST} {MINUTE} Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1460/
Frommer, J{\"org and Michaelis, Bernd and R{\"osner, Dietmar and Wendemuth, Andreas and Friesen, Rafael and Haase, Matthias and Kunze, Manuela and Andrich, Rico and Lange, Julia and Panning, Axel and Siegert, Ingo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3064--3069
The LAST MINUTE corpus comprises multimodal recordings (e.g. video, audio, transcripts) from WOZ interactions in a mundane planning task (R{\"o}sner et al., 2011). It is one of the largest corpora with naturalistic data currently available. In this paper we report about first results from attempts to automatically and manually analyze the different modes with respect to emotions and affects exhibited by the subjects. We describe and discuss difficulties encountered due to the strong contrast between the naturalistic recordings and traditional databases with acted emotions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,661
inproceedings
zseder-etal-2012-rapid
Rapid creation of large-scale corpora and frequency dictionaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1461/
Zs{\'e}der, Attila and Recski, G{\'a}bor and Varga, D{\'a}niel and Kornai, Andr{\'a}s
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1462--1465
We describe, and make public, large-scale language resources and the toolchain used in their creation, for fifteen medium density European languages: Catalan, Czech, Croatian, Danish, Dutch, Finnish, Lithuanian, Norwegian, Polish, Portuguese, Romanian, Serbian, Slovak, Spanish, and Swedish. To make the process uniform across languages, we selected tools that are either language-independent or easily customizable for each language, and reimplemented all stages that were taking too long. To achieve processing times that are insignificant compared to the time data collection (crawling) takes, we reimplemented the standard sentence- and word-level tokenizers and created new boilerplate and near-duplicate detection algorithms. Preliminary experiments with non-European languages indicate that our methods are now applicable not just to our sample, but the entire population of digitally viable languages, with the main limiting factor being the availability of high quality stemmers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,662
inproceedings
hazem-morin-2012-adaptive
Adaptive Dictionary for Bilingual Lexicon Extraction from Comparable Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1462/
Hazem, Amir and Morin, Emmanuel
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
288--292
One of the main resources used for the task of bilingual lexicon extraction from comparable corpora is the bilingual dictionary, which is considered a bridge between two languages. However, no particular attention has been given to this lexicon beyond its coverage and whether it is drawn from the general language, the specialised one, or a mix of both. In this paper, we want to highlight the idea that a better consideration of the bilingual dictionary, by studying its entries and filtering out the non-useful ones, leads to better lexicon extraction and thus higher precision. The experiments are conducted on a medical domain corpus: the French-English specialised `breast cancer' corpus of 1 million words. We show that the empirical results obtained with our filtering process improve the standard approach traditionally dedicated to this task and are promising for future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,663
inproceedings
song-etal-2012-linguistic
Linguistic Resources for Handwriting Recognition and Translation Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1463/
Song, Zhiyi and Ismael, Safa and Grimes, Stephen and Doermann, David and Strassel, Stephanie
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3951--3955
We describe efforts to create corpora to support development and evaluation of handwriting recognition and translation technology. LDC has developed a stable pipeline and infrastructure for collecting and annotating handwriting linguistic resources to support the evaluation of MADCAT and OpenHaRT. We collect and annotate handwritten samples of pre-processed Arabic and Chinese data that has already been translated into English and is used in the GALE program. To date, LDC has recruited more than 600 scribes and collected, annotated and released more than 225,000 handwriting images. Most linguistic resources created for these programs will be made available to the larger research community by publication in LDC`s catalog. The phase 1 MADCAT corpus is now available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,664
inproceedings
origlia-alfano-2012-prosomarker
{P}rosomarker: a prosodic analysis tool based on optimal pitch stylization and automatic syllabification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1464/
Origlia, Antonio and Alfano, Iolanda
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
997--1002
Prosodic research in recent years has been supported by a number of automatic analysis tools aimed at simplifying the work required to study intonation. The need to analyze large amounts of data and to inspect phenomena that are often ambiguous and difficult to model makes the prosodic research area an ideal application field for computer-based processing. One of the main challenges in this field is to model the complex relations occurring between the segmental level, mainly in terms of syllable nuclei and boundaries, and the supra-segmental level, mainly in terms of tonal movements. The goal of our contribution is to provide a tool for automatic annotation of prosodic data, the Prosomarker, designed to give a visual representation of both segmental and suprasegmental events. The representation is intended to be as generic as possible to let researchers analyze specific phenomena without being limited by assumptions introduced by the annotation itself. A perceptual account of the pitch curve is provided along with an automatic segmentation of the speech signal into syllable-like segments, and the tool can be used both for data exploration, in semi-automatic mode, and to process large sets of data, in automatic mode.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,665
inproceedings
strik-etal-2012-disco
The {DISCO} {ASR}-based {CALL} system: practicing {L}2 oral skills and beyond
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1465/
Strik, Helmer and Colpaert, Jozef and van Doremalen, Joost and Cucchiarini, Catia
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2702--2707
In this paper we describe the research that was carried out and the resources that were developed within the DISCO (Development and Integration of Speech technology into COurseware for language learning) project. This project aimed at developing an ASR-based CALL system that automatically detects pronunciation and grammar errors in Dutch L2 speaking and generates appropriate, detailed feedback on the errors detected. We briefly introduce the DISCO system and present its design, architecture and speech recognition modules. We then describe a first evaluation of the complete DISCO system and present some results. The resources generated through DISCO are subsequently described together with possible ways of efficiently generating additional resources in the future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,666
inproceedings
sirin-etal-2012-metu
{METU} {T}urkish Discourse Bank Browser
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1466/
{\c{S}}irin, Utku and {\c{C}}ak{\i}c{\i}, Ruket and Zeyrek, Deniz
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2808--2812
In this paper, the METU Turkish Discourse Bank Browser, a tool developed for browsing the annotated discourse relations in the Middle East Technical University (METU) Turkish Discourse Bank (TDB) project, is presented. The tool provides both a clear interface for browsing the annotated corpus and a wide range of search options for analyzing the annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,667