Dataset fields:

entry_type: string, 4 distinct values
citation_key: string, length 10 to 110
title: string, length 6 to 276
editor: string, 723 distinct values
month: string, 69 distinct values
year: date string, 1963-01-01 to 2022-01-01
address: string, 202 distinct values
publisher: string, 41 distinct values
url: string, length 34 to 62
author: string, length 6 to 2.07k
booktitle: string, 861 distinct values
pages: string, length 1 to 12
abstract: string, length 302 to 2.4k
journal: string, 5 distinct values
volume: string, 24 distinct values
doi: string, length 20 to 39
n: string, 3 distinct values
wer: string, 1 distinct value
uas: null
language: string, 3 distinct values
isbn: string, 34 distinct values
recall: null
number: string, 8 distinct values
a: null
b: null
c: null
k: null
f1: string, 4 distinct values
r: string, 2 distinct values
mci: string, 1 distinct value
p: string, 2 distinct values
sd: string, 1 distinct value
female: string, 0 distinct values
m: string, 0 distinct values
food: string, 1 distinct value
f: string, 1 distinct value
note: string, 20 distinct values
__index_level_0__: int64, 22k to 106k
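Loaded with the Hugging Face `datasets` library, each field above becomes a key in a plain Python dict per row. A minimal sketch for inspecting the split follows; the dataset identifier is a placeholder, since the actual repository name is not given here.

```python
# Minimal sketch, assuming the Hugging Face `datasets` library is installed.
# "your-org/acl-bibtex" is a placeholder id, not the real repository name.
from datasets import load_dataset

ds = load_dataset("your-org/acl-bibtex", split="train")

print(ds.num_rows)        # number of rows in the split
print(ds.column_names)    # entry_type, citation_key, title, ..., __index_level_0__

row = ds[0]               # one row as a plain dict
print(row["citation_key"], row["year"], row["url"])

# Many of the trailing columns (journal, doi, wer, uas, recall, f1, ...) are null
# for the conference entries shown below, so filter on the fields you need.
lrec_rows = ds.filter(lambda r: r["entry_type"] == "inproceedings")
```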
Example rows:

entry_type: inproceedings
citation_key: banski-etal-2012-new
title: The New {IDS} Corpus Analysis Platform: Challenges and Prospects
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1467/
author: Ba{\'n}ski, Piotr and Fischer, Peter M. and Frick, Elena and Ketzan, Erik and Kupietz, Marc and Schnober, Carsten and Schonefeld, Oliver and Witt, Andreas
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2905--2911
abstract: The present article describes the first stage of the KorAP project, launched recently at the Institut f{\"u}r Deutsche Sprache (IDS) in Mannheim, Germany. The aim of this project is to develop an innovative corpus analysis platform to tackle the increasing demands of modern linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse primary data and annotations in the petabyte range, while at the same time allowing an undistorted view of the primary linguistic data, and thus fully satisfying the demands of a scientific tool. An additional important aim of the project is to make corpus data as openly accessible as possible in light of unavoidable legal restrictions, for instance through support for distributed virtual corpora, user-defined annotations and adaptable user interfaces, as well as interfaces and sandboxes for user-supplied analysis applications. We discuss our motivation for undertaking this endeavour and the challenges that face it. Next, we outline our software implementation plan and describe development to date.
all remaining fields (journal through note): null
__index_level_0__: 73,668
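Each row is essentially one BibTeX record: entry_type and citation_key form the entry header, and the populated columns are the BibTeX fields. Below is a small sketch of that mapping, assuming rows arrive as plain dicts; the helper name and the abbreviated example are illustrative only.

```python
# Sketch: rebuild a BibTeX entry from one row (a plain dict); null fields are skipped.
BIB_FIELDS = ["title", "author", "editor", "booktitle", "month", "year",
              "address", "publisher", "pages", "url", "abstract"]

def row_to_bibtex(row: dict) -> str:
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in BIB_FIELDS:
        value = row.get(field)
        if value:                       # skip None / empty values
            lines.append(f"    {field} = {{{value}}},")
    lines.append("}")
    return "\n".join(lines)

# Abbreviated example based on the first row above:
print(row_to_bibtex({
    "entry_type": "inproceedings",
    "citation_key": "banski-etal-2012-new",
    "title": "The New {IDS} Corpus Analysis Platform: Challenges and Prospects",
    "year": "2012",
    "pages": "2905--2911",
    "url": "https://aclanthology.org/L12-1467/",
}))
```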

entry_type: inproceedings
citation_key: quarteroni-etal-2012-evaluating
title: Evaluating Multi-focus Natural Language Queries over Data Services
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1468/
author: Quarteroni, Silvia and Guerrisi, Vincenzo and Torre, Pietro La
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2547--2552
abstract:
Natural language interfaces to data services will be a key technology to guarantee access to huge data repositories in an effortless way. This involves solving the complex problem of recognizing a relevant service or service composition given an ambiguous, potentially ungrammatical natural language question. As a first step toward this goal, we study methods for identifying the salient terms (or foci) in natural language questions, classifying the latter according to a taxonomy of services and extracting additional relevant information in order to route them to suitable data services. While current approaches deal with single-focus (and therefore single-domain) questions, we investigate multi-focus questions in the aim of supporting conjunctive queries over the data services they refer to. Since such complex queries have seldom been studied in the literature, we have collected an ad-hoc dataset, SeCo-600, containing 600 multi-domain queries annotated with a number of linguistic and pragmatic features. Our experiments with the dataset have allowed us to reach very high accuracy in different phases of query analysis, especially when adopting machine learning methods.
all remaining fields (journal through note): null
__index_level_0__: 73,669

entry_type: inproceedings
citation_key: pazienza-etal-2012-application
title: Application of a Semantic Search Algorithm to Semi-Automatic {GUI} Generation
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1469/
author: Pazienza, Maria Teresa and Scarpato, Noemi and Stellato, Armando
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 3631--3638
abstract:
The Semantic Search research field aims to query metadata and to identify relevant subgraphs. While in traditional search engines queries are composed by lists of keywords connected through boolean operators, Semantic Search instead, requires the submission of semantic queries that are structured as a graph of concepts, entities and relations. Submission of this graph is however not trivial as while a list of keywords of interest can be provided by any user, the formulation of semantic queries is not easy as well. One of the main challenges of RDF Browsers lies in the implementation of interfaces that allow the common user to submit semantic queries by hiding their complexity. Furthermore a good semantic search algorithm is not enough to fullfil user needs, it is worthwhile to implement visualization methods which can support users in intuitively understanding why and how the results were retrieved. In this paper we present a novel solution to query RDF datasets and to browse the results of the queries in an appealing manner.
all remaining fields (journal through note): null
__index_level_0__: 73,670

entry_type: inproceedings
citation_key: karan-etal-2012-evaluation
title: Evaluation of Classification Algorithms and Features for Collocation Extraction in {C}roatian
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1470/
author: Karan, Vanja Mladen and {\v{S}}najder, Jan and Ba{\v{s}}i{\'c}, Bojana Dalbelo
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 657--662
abstract:
Collocations can be defined as words that occur together significantly more often than it would be expected by chance. Many natural language processing applications such as natural language generation, word sense disambiguation and machine translation can benefit from having access to information about collocated words. We approach collocation extraction as a classification problem where the task is to classify a given n-gram as either a collocation (positive) or a non-collocation (negative). Among the features used are word frequencies, classical association measures (Dice, PMI, chi2), and POS tags. In addition, semantic word relatedness modeled by latent semantic analysis is also included. We apply wrapper feature subset selection to determine the best set of features. Performance of various classification algorithms is tested. Experiments are conducted on a manually annotated set of bigrams and trigrams sampled from a Croatian newspaper corpus. Best results obtained are 79.8 F1 measure for bigrams and 67.5 F1 measure for trigrams. The best classifier for bigrams was SVM, while for trigrams the decision tree gave the best performance. Features which contributed the most to overall performance were PMI, semantic relatedness, and POS information.
all remaining fields (journal through note): null
__index_level_0__: 73,671
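The abstract above mentions the classical association measures Dice and PMI. As a brief illustration of what those scores compute (toy counts, not the paper's implementation):

```python
# Toy illustration of two association measures named in the abstract above.
import math

N = 1_000_000           # total bigram count in a corpus (toy value)
f_xy = 150              # frequency of the candidate bigram (x, y)
f_x, f_y = 2_000, 900   # marginal frequencies of x and y

dice = 2 * f_xy / (f_x + f_y)                          # Dice coefficient
pmi = math.log2((f_xy / N) / ((f_x / N) * (f_y / N)))  # pointwise mutual information

print(f"Dice = {dice:.4f}, PMI = {pmi:.2f}")
```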

entry_type: inproceedings
citation_key: frick-etal-2012-evaluating
title: Evaluating Query Languages for a Corpus Processing System
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1471/
author: Frick, Elena and Schnober, Carsten and Ba{\'n}ski, Piotr
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2286--2294
abstract: This paper documents a pilot study conducted as part of the development of a new corpus processing system at the Institut f{\"u}r Deutsche Sprache in Mannheim and in the context of the ISO TC37 SC4/WG6 activity on the suggested work item proposal “Corpus Query Lingua Franca”. We describe the first phase of our research: the initial formulation of functionality criteria for query language evaluation and the results of the application of these criteria to three representatives of corpus query languages, namely COSMAS II, Poliqarp, and ANNIS QL. In contrast to previous works on query language evaluation that compare a range of existing query languages against a small number of queries, our approach analyses only three query languages against criteria derived from a suite of 300 use cases that cover diverse aspects of linguistic research.
all remaining fields (journal through note): null
__index_level_0__: 73,672

entry_type: inproceedings
citation_key: shi-etal-2012-two
title: Two Phase Evaluation for Selecting Machine Translation Services
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1472/
author: Shi, Chunqi and Lin, Donghui and Shimada, Masahiko and Ishida, Toru
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1771--1778
abstract:
An increased number of machine translation services are now available. Unfortunately, none of them can provide adequate translation quality for all input sources. This forces the user to select from among the services according to his needs. However, it is tedious and time consuming to perform this manual selection. Our solution, proposed here, is an automatic mechanism that can select the most appropriate machine translation service. Although evaluation methods are available, such as BLEU, NIST, WER, etc., their evaluation results are not unanimous regardless of the translation sources. We proposed a two-phase architecture for selecting translation services. The first phase uses a data-driven classification to allow the most appropriate evaluation method to be selected according to each translation source. The second phase selects the most appropriate machine translation result by the selected evaluation method. We describe the architecture, detail the algorithm, and construct a prototype. Tests show that the proposal yields better translation quality than employing just one machine translation service.
all remaining fields (journal through note): null
__index_level_0__: 73,673

entry_type: inproceedings
citation_key: su-babych-2012-development
title: Development and Application of a Cross-language Document Comparability Metric
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1473/
author: Su, Fangzhong and Babych, Bogdan
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 3956--3962
abstract:
In this paper we present a metric that measures comparability of documents across different languages. The metric is developed within the FP7 ICT ACCURAT project, as a tool for aligning comparable corpora on the document level; further these aligned comparable documents are used for phrase alignment and extraction of translation equivalents, with the aim to extend phrase tables of statistical MT systems without the need to use parallel texts. The metric uses several features, such as lexical information, document structure, keywords and named entities, which are combined in an ensemble manner. We present the results by measuring the reliability and effectiveness of the metric, and demonstrate its application and the impact for the task of parallel phrase extraction from comparable corpora.
all remaining fields (journal through note): null
__index_level_0__: 73,674

entry_type: inproceedings
citation_key: weitz-schafer-2012-graphical
title: A Graphical Citation Browser for the {ACL} {A}nthology
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1474/
author: Weitz, Benjamin and Sch{\"a}fer, Ulrich
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1718--1722
abstract: Navigation in large scholarly paper collections is tedious and not well supported in most scientific digital libraries. We describe a novel browser-based graphical tool implemented using HTML5 Canvas. It displays citation information extracted from the paper text to support useful navigation. The tool is implemented using a client/server architecture. A citation graph of the digital library is built in the memory of the server. On the client side, edges of the displayed citation (sub)graph surrounding a document are labeled with keywords signifying the kind of citation made from one document to another. These keywords were extracted using NLP tools such as tokenizer, sentence boundary detection and part-of-speech tagging applied to the text extracted from the original PDF papers (currently 22,500). By clicking on an edge, the user can inspect the corresponding citation sentence in context, in most cases even highlighted in the original PDF layout. The system is publicly accessible as part of the ACL Anthology Searchbench.
all remaining fields (journal through note): null
__index_level_0__: 73,675

entry_type: inproceedings
citation_key: wattam-etal-2012-document
title: Document Attrition in Web Corpora: an Exploration
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1475/
author: Wattam, Stephen and Rayson, Paul and Berridge, Damon
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1486--1489
abstract: Increases in the use of web data for corpus-building, coupled with the use of specialist, single-use corpora, make for an increasing reliance on language that changes quickly, affecting the long-term validity of studies based on these methods. This 'drift' through time affects both users of open-source corpora and those attempting to interpret the results of studies based on web data. The attrition of documents online, also called link rot or document half-life, has been studied many times for the purposes of optimising search engine web crawlers, producing robust and reliable archival systems, and ensuring the integrity of distributed information stores; however, the effect that attrition has upon corpora of varying construction remains largely unknown. This paper presents a preliminary investigation into the differences in attrition rate between corpora selected using different corpus construction methods. It represents the first step in a larger longitudinal analysis, and as such presents URI-based content clues, chosen to relate to studies from other areas. The ultimate goal of this larger study is to produce a detailed enumeration of the primary biases online, and identify sampling strategies which control and minimise unwanted effects of document attrition.
all remaining fields (journal through note): null
__index_level_0__: 73,676

entry_type: inproceedings
citation_key: belz-gatt-2012-repository
title: A Repository of Data and Evaluation Resources for Natural Language Generation
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1476/
author: Belz, Anja and Gatt, Albert
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 4027--4032
abstract:
Starting in 2007, the field of natural language generation (NLG) has organised shared-task evaluation events every year, under the Generation Challenges umbrella. In the course of these shared tasks, a wealth of data has been created, along with associated task definitions and evaluation regimes. In other contexts too, sharable NLG data is now being created. In this paper, we describe the online repository that we have created as a one-stop resource for obtaining NLG task materials, both from Generation Challenges tasks and from other sources, where the set of materials provided for each task consists of (i) task definition, (ii) input and output data, (iii) evaluation software, (iv) documentation, and (v) publications reporting previous results.
all remaining fields (journal through note): null
__index_level_0__: 73,677

entry_type: inproceedings
citation_key: shi-etal-2012-service
title: Service Composition Scenarios for Task-Oriented Translation
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1477/
author: Shi, Chunqi and Lin, Donghui and Ishida, Toru
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2951--2958
abstract:
Due to instant availability and low cost, machine translation is becoming popular. Machine translation mediated communication plays a more and more important role in international collaboration. However, machine translators cannot guarantee high quality translation. In a multilingual communication task, many in-domain resources, for example domain dictionaries, are needed to promote translation quality. This raises the problem of how to help communication task designers provide higher quality translation systems, systems that can take advantage of various in-domain resources. The Language Grid, a service-oriented collective intelligent platform, allows in-domain resources to be wrapped into language services. For task-oriented translation, we propose service composition scenarios for the composition of different language services, where various in-domain resources are utilized effectively. We design the architecture, provide a script language as the interface for the task designer, which is easy for describing the composition scenario, and make a case study of a Japanese-English campus orientation task. Based on the case study, we analyze the increase in translation quality possible and the usage of in-domain resources. The results demonstrate a clear improvement in translation accuracy when the in-domain resources are used.
all remaining fields (journal through note): null
__index_level_0__: 73,678

entry_type: inproceedings
citation_key: kano-2012-towards
title: Towards automation in using multi-modal language resources: compatibility and interoperability for multi-modal features in {K}achako
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1478/
author: Kano, Yoshinobu
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1098--1101
abstract:
Use of language resources including annotated corpora and tools is not easy for users, as it requires expert knowledge to determine which resources are compatible and interoperable. Sometimes it requires programming skill in addition to the expert knowledge to make the resources compatible and interoperable when the resources are not created so. If a platform system could provide automation features for using language resources, users do not have to waste their time as the above issues are not necessarily essential for the users' goals. While our system, Kachako, provides such automation features for single-modal resources, multi-modal resources are more difficult to combine automatically. In this paper, we discuss designs of multi-modal resource compatibility and interoperability from such an automation point of view in order for the Kachako system to provide automation features of multi-modal resources. Our discussion is based on the UIMA framework, and focuses on resource metadata description optimized for ideal automation features while harmonizing with the UIMA framework using other standards as well.
all remaining fields (journal through note): null
__index_level_0__: 73,679

entry_type: inproceedings
citation_key: mondary-etal-2012-quaero
title: The Quaero Evaluation Initiative on Term Extraction
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1479/
author: Mondary, Thibault and Nazarenko, Adeline and Zargayouna, Ha{\"i}fa and Barreaux, Sabine
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 663--669
abstract:
The Quaero program has organized a set of evaluations for terminology extraction systems in 2010 and 2011. Three objectives were targeted in this initiative: the first one was to evaluate the behavior and scalability of term extractors regarding the size of corpora, the second goal was to assess progress between different versions of the same systems, the last one was to measure the influence of corpus type. The protocol used during this initiative was a comparative analysis of 32 runs against a gold standard. Scores were computed using metrics that take into account gradual relevance. Systems produced by Quaero partners and publicly available systems were evaluated on pharmacology corpora composed of European Patents or abstracts of scientific articles, all in English. The gold standard was an unstructured version of the pharmacology thesaurus used by INIST-CNRS for indexing purposes. Most systems scaled with large corpora, contrasted differences were observed between different versions of the same systems and with better results on scientific articles than on patents. During the ongoing adjudication phase domain experts are enriching the thesaurus with terms found by several systems.
all remaining fields (journal through note): null
__index_level_0__: 73,680

entry_type: inproceedings
citation_key: russo-etal-2012-italian
title: {I}talian and {S}panish Null Subjects. A Case Study Evaluation in an {MT} Perspective.
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1480/
author: Russo, Lorenza and Lo{\'a}iciga, Sharid and Gulati, Asheesh
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1779--1784
abstract:
Thanks to their rich morphology, Italian and Spanish allow pro-drop pronouns, i.e., non lexically-realized subject pronouns. Here we distinguish between two different types of null subjects: personal pro-drop and impersonal pro-drop. We evaluate the translation of these two categories into French, a non pro-drop language, using Its-2, a transfer-based system developed at our laboratory; and Moses, a statistical system. Three different corpora are used: two subsets of the Europarl corpus and a third corpus built using newspaper articles. Null subjects turn out to be quantitatively important in all three corpora, but their distribution varies depending on the language and the text genre though. From a MT perspective, translation results are determined by the type of pro-drop and the pair of languages involved. Impersonal pro-drop is harder to translate than personal pro-drop, especially for the translation from Italian into French, and a significant portion of incorrect translations consists of missing pronouns.
all remaining fields (journal through note): null
__index_level_0__: 73,681

entry_type: inproceedings
citation_key: steinberger-etal-2012-dgt
title: {DGT}-{TM}: A freely available Translation Memory in 22 languages
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1481/
author: Steinberger, Ralf and Eisele, Andreas and Klocek, Szymon and Pilos, Spyridon and Schl{\"u}ter, Patrick
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 454--459
abstract:
The European Commission`s (EC) Directorate General for Translation, together with the EC`s Joint Research Centre, is making available a large translation memory (TM; i.e. sentences and their professionally produced translations) covering twenty-two official European Union (EU) languages and their 231 language pairs. Such a resource is typically used by translation professionals in combination with TM software to improve speed and consistency of their translations. However, this resource has also many uses for translation studies and for language technology applications, including Statistical Machine Translation (SMT), terminology extraction, Named Entity Recognition (NER), multilingual classification and clustering, and many more. In this reference paper for DGT-TM, we introduce this new resource, provide statistics regarding its size, and explain how it was produced and how to use it.
all remaining fields (journal through note): null
__index_level_0__: 73,682

entry_type: inproceedings
citation_key: elfardy-diab-2012-simplified
title: Simplified guidelines for the creation of Large Scale Dialectal {A}rabic Annotations
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1482/
author: Elfardy, Heba and Diab, Mona
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 371--378
abstract: The Arabic language is a collection of dialectal variants along with the standard form, Modern Standard Arabic (MSA). MSA is used in official settings while the dialectal variants (DA) correspond to the native tongue of the Arabic speakers. Arabic speakers typically code switch between DA and MSA, which is reflected extensively in written online social media. Automatic processing of such Arabic genres is very difficult for automated NLP tools since the linguistic difference between MSA and DA is quite profound. However, no annotated resources exist for marking the regions of such switches in the utterance. In this paper, we present a simplified set of guidelines for detecting code switching in Arabic on the word/token level. We use these guidelines in annotating a corpus that is rich in DA with frequent code switching to MSA. We present both a quantitative and qualitative analysis of the annotations.
all remaining fields (journal through note): null
__index_level_0__: 73,683

entry_type: inproceedings
citation_key: federmann-etal-2012-meta
title: {META}-{SHARE} v2: An Open Network of Repositories for Language Resources including Data and Tools
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1483/
author: Federmann, Christian and Giannopoulou, Ioanna and Girardi, Christian and Hamon, Olivier and Mavroeidis, Dimitris and Minutoli, Salvatore and Schr{\"o}der, Marc
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 3300--3303
abstract:
We describe META-SHARE which aims at providing an open, distributed, secure, and interoperable infrastructure for the exchange of language resources, including both data and tools. The application has been designed and is developed as part of the T4ME Network of Excellence. We explain the underlying motivation for such a distributed repository for metadata storage and give a detailed overview on the META-SHARE application and its various components. This includes a discussion of the technical architecture of the system as well as a description of the component-based metadata schema format which has been developed in parallel. Development of the META-SHARE infrastructure adopts state-of-the-art technology and follows an open-source approach, allowing the general community to participate in the development process. The META-SHARE software package including full source code has been released to the public in March 2012. We look forward to present an up-to-date version of the META-SHARE software at the conference.
all remaining fields (journal through note): null
__index_level_0__: 73,684

entry_type: inproceedings
citation_key: weller-heid-2012-analyzing
title: Analyzing and Aligning {G}erman compound nouns
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1484/
author: Weller, Marion and Heid, Ulrich
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2395--2400
abstract:
In this paper, we present and evaluate an approach for the compositional alignment of compound nouns using comparable corpora from technical domains. The task of term alignment consists in relating a source language term to its translation in a list of target language terms with the help of a bilingual dictionary. Compound splitting allows to transform a compound into a sequence of components which can be translated separately and then related to multi-word target language terms. We present and evaluate a method for compound splitting, and compare two strategies for term alignment (bag-of-word vs. pattern-based). The simple word-based approach leads to a considerable amount of erroneous alignments, whereas the pattern-based approach reaches a decent precision. We also assess the reasons for alignment failures: in the comparable corpora used for our experiments, a substantial number of terms has no translation in the target language data; furthermore, the non-isomorphic structures of source and target language terms cause alignment failures in many cases.
all remaining fields (journal through note): null
__index_level_0__: 73,685

entry_type: inproceedings
citation_key: nawaz-etal-2012-identification
title: Identification of Manner in Bio-Events
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1485/
author: Nawaz, Raheel and Thompson, Paul and Ananiadou, Sophia
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 3505--3510
abstract:
Due to the rapid growth in the volume of biomedical literature, there is an increasing requirement for high-performance semantic search systems, which allow biologists to perform precise searches for events of interest. Such systems are usually trained on corpora of documents that contain manually annotated events. Until recently, these corpora, and hence the event extraction systems trained on them, focussed almost exclusively on the identification and classification of event arguments, without taking into account how the textual context of the events could affect their interpretation. Previously, we designed an annotation scheme to enrich events with several aspects (or dimensions) of interpretation, which we term meta-knowledge, and applied this scheme to the entire GENIA corpus. In this paper, we report on our experiments to automate the assignment of one of these meta-knowledge dimensions, i.e. Manner, to recognised events. Manner is concerned with the rate, strength intensity or level of the event. We distinguish three different values of manner, i.e., High, Low and Neutral. To our knowledge, our work represents the first attempt to classify the manner of events. Using a combination of lexical, syntactic and semantic features, our system achieves an overall accuracy of 99.4{\%}.
all remaining fields (journal through note): null
__index_level_0__: 73,686

entry_type: inproceedings
citation_key: eberle-etal-2012-tool
title: A Tool/Database Interface for Multi-Level Analyses
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1486/
author: Eberle, Kurt and Eckart, Kerstin and Heid, Ulrich and Haselbach, Boris
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2912--2916
abstract:
Depending on the nature of a linguistic theory, empirical investigations of its soundness may focus on corpus studies related to lexical, syntactic, semantic or other phenomena. Especially work in research networks usually comprises analyses of different levels of description, where each one must be as reliable as possible when the same sentences and texts are investigated under very different perspectives. This paper describes an infrastructure that interfaces an analysis tool for multi-level annotation with a generic relational database. It supports three dimensions of analysis-handling and thereby builds an integrated environment for quality assurance in corpus based linguistic analysis: a vertical dimension relating analysis components in a pipeline, a horizontal dimension taking alternative results of the same analysis level into account and a temporal dimension to follow up cases where analyses for the same input have been produced with different versions of a tool. As an example we give a detailed description of a typical workflow for the vertical dimension.
all remaining fields (journal through note): null
__index_level_0__: 73,687

entry_type: inproceedings
citation_key: declerck-etal-2012-accessing
title: Accessing and standardizing {W}iktionary lexical entries for the translation of labels in Cultural Heritage taxonomies
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1487/
author: Declerck, Thierry and M{\"o}rth, Karlheinz and Lendvai, Piroska
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2511--2514
abstract:
We describe the usefulness of Wiktionary, the freely available web-based lexical resource, in providing multilingual extensions to catalogues that serve content-based indexing of folktales and related narratives. We develop conversion tools between Wiktionary and TEI, using ISO standards (LMF, MAF), to make such resources available to both the Digital Humanities community and the Language Resources community. The converted data can be queried via a web interface, while the tools of the workflow are to be released with an open source license. We report on the actual state and functionality of our tools and analyse some shortcomings of Wiktionary, as well as potential domains of application.
all remaining fields (journal through note): null
__index_level_0__: 73,688

entry_type: inproceedings
citation_key: odriozola-etal-2012-using
title: Using an {ASR} database to design a pronunciation evaluation system in {B}asque
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1488/
author: Odriozola, Igor and Navas, Eva and Hernaez, Inma and Sainz, I{\~n}aki and Saratxaga, Ibon and S{\'a}nchez, Jon and Erro, Daniel
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 4122--4126
abstract: This paper presents a method to build CAPT systems for under-resourced languages, such as Basque, using a general purpose ASR speech database. More precisely, the proposed method consists in automatically determining the threshold of GOP (Goodness Of Pronunciation) scores, which are used as pronunciation scores at the phone level. Two score distributions have been obtained for each phoneme corresponding to its correct and incorrect pronunciations. The distribution of the scores for erroneous pronunciation has been calculated by inserting controlled errors in the dictionary, so that each changed phoneme has been randomly replaced by a phoneme from the same group. These groups have been obtained by means of a phonetic clustering performed using regression trees. After obtaining both distributions, the EER (Equal Error Rate) of each distribution pair has been calculated and used as a decision threshold for each phoneme. The results show that this method is useful when there is no database specifically designed for CAPT systems, although it is not as accurate as those specifically designed for this purpose.
all remaining fields (journal through note): null
__index_level_0__: 73,689
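The abstract above selects a per-phone decision threshold at the Equal Error Rate (EER) between the GOP score distributions of correct and incorrect pronunciations. The sketch below illustrates that EER computation on synthetic scores; it is an illustration under toy assumptions, not the authors' code.

```python
# Illustration: find the EER threshold between two synthetic GOP score distributions.
import numpy as np

rng = np.random.default_rng(0)
correct = rng.normal(loc=-1.0, scale=0.8, size=5000)    # toy scores, correct pronunciations
incorrect = rng.normal(loc=-4.0, scale=1.2, size=5000)  # toy scores, incorrect pronunciations

thresholds = np.linspace(-8.0, 2.0, 1000)
far = np.array([(incorrect >= t).mean() for t in thresholds])  # false accept rate
frr = np.array([(correct < t).mean() for t in thresholds])     # false reject rate

eer_idx = int(np.argmin(np.abs(far - frr)))
print(f"threshold ~ {thresholds[eer_idx]:.2f}, EER ~ {far[eer_idx]:.3f}")
```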

entry_type: inproceedings
citation_key: bongelli-etal-2012-corpus
title: A Corpus of Scientific Biomedical Texts Spanning over 168 Years Annotated for Uncertainty
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1489/
author: Bongelli, Ramona and Canestrari, Carla and Riccioni, Ilaria and Zuczkowski, Andrzej and Buldorini, Cinzia and Pietrobon, Ricardo and Lavelli, Alberto and Magnini, Bernardo
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2009--2014
abstract:
Uncertainty language permeates biomedical research and is fundamental for the computer interpretation of unstructured text. And yet, a coherent, cognitive-based theory to interpret Uncertainty language and guide Natural Language Processing is, to our knowledge, non-existing. The aim of our project was therefore to detect and annotate Uncertainty markers {\textemdash} which play a significant role in building knowledge or beliefs in readers' minds {\textemdash} in a biomedical research corpus. Our corpus includes 80 manually annotated articles from the British Medical Journal randomly sampled from a 168-year period. Uncertainty markers have been classified according to a theoretical framework based on a combined linguistic and cognitive theory. The corpus was manually annotated according to such principles. We performed preliminary experiments to assess the manually annotated corpus and establish a baseline for the automatic detection of Uncertainty markers. The results of the experiments show that most of the Uncertainty markers can be recognized with good accuracy.
all remaining fields (journal through note): null
__index_level_0__: 73,690

entry_type: inproceedings
citation_key: mostefa-etal-2012-new
title: New language resources for the {P}ashto language
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1490/
author: Mostefa, Djamel and Choukri, Khalid and Brunessaux, Sylvie and Boudahmane, Karim
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2917--2922
abstract:
This paper reports on the development of new language resources for the Pashto language, a very low-resource language spoken in Afghanistan and Pakistan. In the scope of a multilingual data collection project, three large corpora are collected for Pashto. Firstly a monolingual text corpus of 100 million words is produced. Secondly a 100 hours speech database is recorded and manually transcribed. Finally a bilingual Pashto-French parallel corpus of around 2 million is produced by translating Pashto texts into French. These resources will be used to develop Human Language Technology systems for Pashto with a special focus on Machine Translation.
all remaining fields (journal through note): null
__index_level_0__: 73,691

entry_type: inproceedings
citation_key: liu-etal-2012-extending
title: Extending the {MPC} corpus to {C}hinese and {U}rdu - A Multiparty Multi-Lingual Chat Corpus for Modeling Social Phenomena in Language
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1491/
author: Liu, Ting and Shaikh, Samira and Strzalkowski, Tomek and Broadwell, Aaron and Stromer-Galley, Jennifer and Taylor, Sarah and Boz, Umit and Ren, Xiaoai and Wu, Jingsi
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2868--2873
abstract:
In this paper, we report our efforts in building a multi-lingual multi-party online chat corpus in order to develop a firm understanding in a set of social constructs such as agenda control, influence, and leadership as well as to computationally model such constructs in online interactions. These automated models will help capture the dialogue dynamics that are essential for developing, among others, realistic human-machine dialogue systems, including autonomous virtual chat agents. In this paper, we first introduce our experiment design and data collection method in Chinese and Urdu, and then report on the current stage of our data collection. We annotated the collected corpus on four levels: communication links, dialogue acts, local topics, and meso-topics. Results from the analyses of annotated data on different languages indicate some interesting phenomena, which are reported in this paper.
all remaining fields (journal through note): null
__index_level_0__: 73,692

entry_type: inproceedings
citation_key: kafkas-etal-2012-calbc
title: {CALBC}: Releasing the Final Corpora
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1492/
author: Kafkas, {\c{S}}enay and Lewin, Ian and Milward, David and van Mulligen, Erik and Kors, Jan and Hahn, Udo and Rebholz-Schuhmann, Dietrich
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2923--2926
abstract:
A number of gold standard corpora for named entity recognition are available to the public. However, the existing gold standard corpora are limited in size and semantic entity types. These usually lead to implementation of trained solutions (1) for a limited number of semantic entity types and (2) lacking in generalization capability. In order to overcome these problems, the CALBC project has aimed to automatically generate large scale corpora annotated with multiple semantic entity types in a community-wide manner based on the consensus of different named entity solutions. The generated corpus is called the silver standard corpus since the corpus generation process does not involve any manual curation. In this publication, we announce the release of the final CALBC corpora which include the silver standard corpus in different versions and several gold standard corpora for the further usage of the biomedical text mining community. The gold standard corpora are utilised to benchmark the methods used in the silver standard corpora generation process and released in a shared format. All the corpora are released in a shared format and accessible at www.calbc.eu.
all remaining fields (journal through note): null
__index_level_0__: 73,693

entry_type: inproceedings
citation_key: adell-etal-2012-buceador
title: {BUCEADOR}, a multi-language search engine for digital libraries
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1493/
author: Adell, Jordi and Bonafonte, Antonio and Cardenal, Antonio and Costa-Juss{\`a}, Marta R. and Fonollosa, Jos{\'e} A. R. and Moreno, Asunci{\'o}n and Navas, Eva and Banga, Eduardo R.
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 1705--1709
abstract:
This paper presents a web-based multimedia search engine built within the Buceador (www.buceador.org) research project. A proof-of-concept tool has been implemented which is able to retrieve information from a digital library made of multimedia documents in the 4 official languages in Spain (Spanish, Basque, Catalan and Galician). The retrieved documents are presented in the user language after translation and dubbing (the four previous languages + English). The paper presents the tool functionality, the architecture, the digital library and provide some information about the technology involved in the fields of automatic speech recognition, statistical machine translation, text-to-speech synthesis and information retrieval. Each technology has been adapted to the purposes of the presented tool as well as to interact with the rest of the technologies involved.
all remaining fields (journal through note): null
__index_level_0__: 73,694

entry_type: inproceedings
citation_key: savkov-etal-2012-linguistic
title: Linguistic Analysis Processing Line for {B}ulgarian
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1494/
author: Savkov, Aleksandar and Laskova, Laska and Kancheva, Stanislava and Osenova, Petya and Simov, Kiril
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2959--2964
abstract:
This paper presents a linguistic processing pipeline for Bulgarian including morphological analysis, lemmatization and syntactic analysis of Bulgarian texts. The morphological analysis is performed by three modules {\textemdash} two statistical-based and one rule-based. The combination of these modules achieves the best result for morphological tagging of Bulgarian over a rich tagset (680 tags). The lemmatization is based on rules, generated from a large morphological lexicon of Bulgarian. The syntactic analysis is implemented via MaltParser. The two statistical morphological taggers and MaltParser are trained on datasets constructed within BulTreeBank project. The processing pipeline includes also a sentence splitter and a tokenizer. All tools in the pipeline are packed in modules that can also perform separately. The whole pipeline is designed to be able to serve as a back-end of a web service oriented interface, but it also supports the user tasks with a command-line interface. The processing pipeline is compatible with the Text Corpus Format, which allows it to delegate the management of the components to the WebLicht platform.
all remaining fields (journal through note): null
__index_level_0__: 73,695

entry_type: inproceedings
citation_key: hana-hladka-2012-getting
title: Getting more data {--} Schoolkids as annotators
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1495/
author: Hana, Jirka and Hladk{\'a}, Barbora
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 4049--4054
abstract:
We present a new way to get more morphologically and syntactically annotated data. We have developed an annotation editor tailored to school children to involve them in text annotation. Using this editor, they practice morphology and dependency-based syntax in the same way as they normally do at (Czech) schools, without any special training. Their annotation is then automatically transformed into the target annotation schema. The editor is designed to be language independent, however the subsequent transformation is driven by the annotation framework we are heading for. In our case, the object language is Czech and the target annotation scheme corresponds to the Prague Dependency Treebank annotation framework.
all remaining fields (journal through note): null
__index_level_0__: 73,696

entry_type: inproceedings
citation_key: abrate-bacciu-2012-visualizing
title: Visualizing word senses in {W}ord{N}et Atlas
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1496/
author: Abrate, Matteo and Bacciu, Clara
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2648--2652
abstract:
This demo presents the second prototype of WordNet Atlas, a web application that gives users the ability to navigate and visualize the 146,312 word senses of the nouns contained within the Princeton WordNet. Two complementary, interlinked visualizations are provided: an hypertextual dictionary to represent detailed information about a word sense, such as lemma, definition and depictions, and a zoomable map representing the taxonomy of noun synsets in a circular layout. The application could help users unfamiliar with WordNet to get oriented in the large amount of data it contains.
all remaining fields (journal through note): null
__index_level_0__: 73,697

entry_type: inproceedings
citation_key: schafer-bildhauer-2012-building
title: Building Large Corpora from the Web Using a New Efficient Tool Chain
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1497/
author: Sch{\"a}fer, Roland and Bildhauer, Felix
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 486--493
abstract: Over the last decade, methods of web corpus construction and the evaluation of web corpora have been actively researched. Prominently, the WaCky initiative has provided both theoretical results and a set of web corpora for selected European languages. We present a software toolkit for web corpus construction and a set of significantly larger corpora (up to over 9 billion tokens) built using this software. First, we discuss how the data should be collected to ensure that it is not biased towards certain hosts. Then, we describe our software toolkit which performs basic cleanups as well as boilerplate removal, simple connected text detection as well as shingling to remove duplicates from the corpora. We finally report evaluation results of the corpora built so far, for example w.r.t. the amount of duplication contained and the text type/genre distribution. Where applicable, we compare our corpora to the WaCky corpora, since it is inappropriate, in our view, to compare web corpora to traditional or balanced corpora. While we use some methods applied by the WaCky initiative, we can show that we have introduced incremental improvements.
all remaining fields (journal through note): null
__index_level_0__: 73,698

entry_type: inproceedings
citation_key: afantenos-etal-2012-empirical
title: An empirical resource for discovering cognitive principles of discourse organisation: the {ANNODIS} corpus
editor: Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
month: may
year: 2012
address: Istanbul, Turkey
publisher: European Language Resources Association (ELRA)
url: https://aclanthology.org/L12-1498/
author: Afantenos, Stergos and Asher, Nicholas and Benamara, Farah and Bras, Myriam and Fabre, C{\'e}cile and Ho-dac, Mai and Draoulec, Anne Le and Muller, Philippe and P{\'e}ry-Woodley, Marie-Paule and Pr{\'e}vot, Laurent and Rebeyrolles, Josette and Tanguy, Ludovic and Vergez-Couret, Marianne and Vieu, Laure
booktitle: Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
pages: 2727--2734
abstract:
This paper describes the ANNODIS resource, a discourse-level annotated corpus for French. The corpus combines two perspectives on discourse: a bottom-up approach and a top-down approach. The bottom-up view incrementally builds a structure from elementary discourse units, while the top-down view focuses on the selective annotation of multi-level discourse structures. The corpus is composed of texts that are diversified with respect to genre, length and type of discursive organisation. The methodology followed here involves an iterative design of annotation guidelines in order to reach satisfactory inter-annotator agreement levels. This allows us to raise a few issues relevant for the comparison of such complex objects as discourse structures. The corpus also serves as a source of empirical evidence for discourse theories. We present here two first analyses taking advantage of this new annotated corpus --one that tested hypotheses on constraints governing discourse structure, and another that studied the variations in composition and signalling of multi-level discourse structures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,699
inproceedings
zahra-carson-berndsen-2012-english
{E}nglish to {I}ndonesian Transliteration to Support {E}nglish Pronunciation Practice
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1499/
Zahra, Amalia and Carson-Berndsen, Julie
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4132--4135
The work presented in this paper explores the use of Indonesian transliteration to support English pronunciation practice. It is mainly aimed for Indonesian speakers who have no or minimum English language skills. The approach implemented combines a rule-based and a statistical method. The rules of English-Phone-to-Indonesian-Grapheme mapping are implemented with a Finite State Transducer (FST), followed by a statistical method which is a grapheme-based trigram language model. The Indonesian transliteration generated was used as a means to support the learners where their speech were then recorded. The speech recordings have been evaluated by 19 participants: 8 English native and 11 non-native speakers. The results show that the transliteration positively contributes to the improvement of their English pronunciation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,700
inproceedings
gabor-etal-2012-boosting
Boosting the Coverage of a Semantic Lexicon by Automatically Extracted Event Nominalizations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1500/
G{\'a}bor, Kata and Apidianaki, Marianna and Sagot, Beno{\^i}t and Villemonte de La Clergerie, {\'E}ric
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1466--1473
In this article, we present a distributional analysis method for extracting nominalization relations from monolingual corpora. The acquisition method makes use of distributional and morphological information to select nominalization candidates. We explain how the learning is performed on a dependency annotated corpus and describe the nominalization results. Furthermore, we show how these results served to enrich an existing lexical resource, the WOLF (Wordnet Libre du Franc{\^A}¸ais). We present the techniques that we developed in order to integrate the new information into WOLF, based on both its structure and content. Finally, we evaluate the validity of the automatically obtained information and the correctness of its integration into the semantic resource. The method proved to be useful for boosting the coverage of WOLF and presents the advantage of filling verbal synsets, which are particularly difficult to handle due to the high level of verbal polysemy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,701
inproceedings
choukri-etal-2012-using
Using the International Standard Language Resource Number: Practical and Technical Aspects
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1501/
Choukri, Khalid and Arranz, Victoria and Hamon, Olivier and Park, Jungyeul
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
50--54
This paper describes the International Standard Language Resource Number (ISLRN), a new identification schema for Language Resources where a Language Resource is provided with a unique and universal name using a standardized nomenclature. This will ensure that Language Resources be identified, accessed and disseminated in a unique manner, thus allowing them to be recognized with proper references in all activities concerning Human Language Technologies as well as in all documents and scientific papers. This would allow, for instance, the formal identification of potentially repeated resources across different repositories, the formal referencing of language resources and their correct use when different versions are processed by tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,702
inproceedings
jaja-etal-2012-assessing
Assessing Divergence Measures for Automated Document Routing in an Adaptive {MT} System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1502/
Jaja, Claire and Briesch, Douglas and Laoudi, Jamal and Voss, Clare
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3963--3970
Custom machine translation (MT) engines systematically outperform general-domain MT engines when translating within the relevant custom domain. This paper investigates the use of the Jensen-Shannon divergence measure for automatically routing new documents within a translation system with multiple MT engines to the appropriate custom MT engine in order to obtain the best translation. Three distinct domains are compared, and the impact of the language, size, and preprocessing of the documents on the Jensen-Shannon score is addressed. Six test datasets are then compared to the three known-domain corpora to predict which of the three custom MT engines they would be routed to at runtime given their Jensen-Shannon scores. The results are promising for incorporating this divergence measure into a translation workflow.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,703
inproceedings
forster-etal-2012-rwth
{RWTH}-{PHOENIX}-Weather: A Large Vocabulary Sign Language Recognition and Translation Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1503/
Forster, Jens and Schmidt, Christoph and Hoyoux, Thomas and Koller, Oscar and Zelle, Uwe and Piater, Justus and Ney, Hermann
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3785--3789
This paper introduces the RWTH-PHOENIX-Weather corpus, a video-based, large vocabulary corpus of German Sign Language suitable for statistical sign language recognition and translation. In contrastto most available sign language data collections, the RWTH-PHOENIX-Weather corpus has not been recorded for linguistic research but for the use in statistical pattern recognition. The corpus contains weather forecasts recorded from German public TV which are manually annotated using glosses distinguishing sign variants, and time boundaries have been marked on the sentence and the gloss level. Further, the spoken German weather forecast has been transcribed in a semi-automatic fashion using a state-of-the-art automatic speech recognition system. Moreover, an additional translation of the glosses into spoken German has been created to capture allowable translation variability. In addition to the corpus, experimental baseline results for hand and head tracking, statistical sign language recognition and translation are presented.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,704
inproceedings
cattoni-etal-2012-knowledgestore
The {K}nowledge{S}tore: an Entity-Based Storage System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1504/
Cattoni, Roldano and Corcoglioniti, Francesco and Girardi, Christian and Magnini, Bernardo and Serafini, Luciano and Zanoli, Roberto
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3639--3646
This paper describes the KnowledgeStore, a large-scale infrastructure for the combined storage and interlinking of multimedia resources and ontological knowledge. Information in the KnowledgeStore is organized around entities, such as persons, organizations and locations. The system allows (i) to import background knowledge about entities, in form of annotated RDF triples; (ii) to associate resources to entities by automatically recognizing, coreferring and linking mentions of named entities; and (iii) to derive new entities based on knowledge extracted from mentions. The KnowledgeStore builds on state of art technologies for language processing, including document tagging, named entity extraction and cross-document coreference. Its design provides for a tight integration of linguistic and semantic features, and eases the further processing of information by explicitly representing the contexts where knowledge and mentions are valid or relevant. We describe the system and report about the creation of a large-scale KnowledgeStore instance for storing and integrating multimedia contents and background knowledge relevant to the Italian Trentino region.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,705
inproceedings
choukri-arranz-2012-analytical
An Analytical Model of Language Resource Sustainability
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1505/
Choukri, Khalid and Arranz, Victoria
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1395--1402
This paper elaborates on a sustainability model for Language Resources, both at a descriptive and analytical level. The first part, devoted to the descriptive model, elaborates on the definition of this concept both from a general point of view and from the Human Language Technology and Language Resources perspective. The paper also intends to list an exhaustive number of factors that have an impact on this sustainability. These factors will be clustered into Pillars so as ease understanding as well as the prediction of LR sustainability itself. Rather than simply identifying a set of LRs that have been in use for a while and that one can consider as sustainable, the paper aims at first clarifying and (re)defining the concept of sustainability by also connecting it to other domains. Then it also presents a detailed decomposition of all dimensions of Language Resource features that can contribute and/or have an impact on such sustainability. Such analysis will also help anticipate and forecast sustainability for a LR before taking any decisions concerning design and production.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,706
inproceedings
fokkens-etal-2012-climb
{CLIMB} grammars: three projects using metagrammar engineering
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1506/
Fokkens, Antske and Avgustinova, Tania and Zhang, Yi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1672--1679
This paper introduces the CLIMB (Comparative Libraries of Implementations with Matrix Basis) methodology and grammars. The basic idea behind CLIMB is to use code generation as a general methodology for grammar development in order to create a more systematic approach to grammar development. The particular method used in this paper is closely related to the LinGO Grammar Matrix. Like the Grammar Matrix, resulting grammars are HPSG grammars that can map bidirectionally between strings and MRS representations. The main purpose of this paper is to provide insight into the process of using CLIMB for grammar development. In addition, we describe three projects that make use of this methodology or have concrete plans to adapt CLIMB in the future: CLIMB for Germanic languages, CLIMB for Slavic languages and CLIMB to combine two grammars of Mandarin Chinese. We present the first results that indicate feasibility and development time improvements for creating a medium to large coverage precision grammar.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,707
inproceedings
kim-etal-2012-annotated
Annotated Bibliographical Reference Corpora in Digital Humanities
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1507/
Kim, Young-Min and Bellot, Patrice and Faath, Elodie and Dacos, Marin
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
494--501
In this paper, we present new bibliographical reference corpora in digital humanities (DH) that have been developed under a research project, Robust and Language Independent Machine Learning Approaches for Automatic Annotation of Bibliographical References in DH Books supported by Google Digital Humanities Research Awards. The main target is the bibliographical references in the articles of Revues.org site, an oldest French online journal platform in DH field. Since the final object is to provide automatic links between related references and articles, the automatic recognition of reference fields like author and title is essential. These fields are therefore manually annotated using a set of carefully defined tags. After providing a full description of three corpora, which are separately constructed according to the difficulty level of annotation, we briefly introduce our experimental results on the first two corpora. A popular machine learning technique, Conditional Random Field (CRF) is used to build a model, which automatically annotates the fields of new references. In the experiments, we first establish a standard for defining features and labels adapted to our DH reference data. Then we show our new methodology against less structured references gives a meaningful result.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,708
inproceedings
tahon-etal-2012-corpus
Corpus of Children Voices for Mid-level Markers and Affect Bursts Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1508/
Tahon, Marie and Delaborde, Agnes and Devillers, Laurence
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2366--2369
This article presents a corpus featuring children playing games in interaction with the humanoid robot Nao: children have to express emotions in the course of a storytelling by the robot. This corpus was collected to design an affective interactive system driven by an interactional and emotional representation of the user. We evaluate here some mid-level markers used in our system: reaction time, speech duration and intensity level. We also question the presence of affect bursts, which are quite numerous in our corpus, probably because of the young age of the children and the absence of predefined lexical content.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,709
inproceedings
gahbiche-braham-etal-2012-joint
Joint Segmentation and {POS} Tagging for {A}rabic Using a {CRF}-based Classifier
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1509/
Gahbiche-Braham, Souhir and Bonneau-Maynard, H{\'e}l{\`e}ne and Lavergne, Thomas and Yvon, Fran{\c{c}}ois
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2107--2113
Arabic is a morphologically rich language, and Arabic texts abound of complex word forms built by concatenation of multiple subparts, corresponding for instance to prepositions, articles, roots prefixes, or suffixes. The development of Arabic Natural Language Processing applications, such as Machine Translation (MT) tools, thus requires some kind of morphological analysis. In this paper, we compare various strategies for performing such preprocessing, using generic machine learning techniques. The resulting tool is compared with two open domain alternatives in the context of a statistical MT task and is shown to be faster than its competitors, with no significant difference in MT quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,710
inproceedings
biber-breiteneder-2012-fivehundredmillionandone
Fivehundredmillionandone Tokens. Loading the {AAC} Container with Text Resources for Text Studies.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1510/
Biber, Hanno and Breiteneder, Evelyn
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1067--1070
The ''''''``AAC - Austrian Academy Corpus'''''''' is a diachronic German language digital text corpus of more than 500 million tokens. The text corpus has collected several thousands of texts representing a wide range of different text types. The primary research aim is to develop text language resources for the study of texts. For corpus linguistics and corpus based language research large text corpora need to be structured in a systematic way. For this structural purpose the AAC is making use of the notion of container. By container in the context of corpus research we understand a flexible system of pragmatic representation, manipulation, modification and structured storage of annotated items of text. The issue of representing a large corpus in formats that offer only limited space is paradigmatic for the general task of representing a language by just a small collection of text or a small sample of the language. Methods based upon structural normalization and standardization have to be developed in order to provide useful instruments for text studies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,711
inproceedings
nakagawa-den-2012-annotation
Annotation of anaphoric relations and topic continuity in {J}apanese conversation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1511/
Nakagawa, Natsuko and Den, Yasuharu
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
179--186
This paper proposes a basic scheme for annotating anaphoric relations in Japanese conversations. More specifically, we propose methods of (i) dividing discourse segments into meaningful units, (ii) identifying zero pronouns and other overt anaphors, (iii) classifying zero pronouns, and (iv) identifying anaphoric relations. We discuss various kinds of problems involved in the annotation mainly caused by on-line processing of discourse and/or interactions between the participants. These problems do not arise in annotating written languages. This paper also proposes a method to compute topic continuity based on anaphoric relations. The topic continuity involves the information status of the noun in question (given, accessible, and new) and persistence (whether the noun is mentioned multiple times or not). We show that the topic continuity correlates with short-utterance units, which are determined prosodically through the previous annotations; nouns of high topic continuity tend to be prosodically separated from the predicates. This result indicates the validity of our annotations of anaphoric relations and topic continuity and the usefulness for further studies on discourse and interaction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,712
inproceedings
hayashi-narawa-2012-classifying
Classifying Standard Linguistic Processing Functionalities based on Fundamental Data Operation Types
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1512/
Hayashi, Yoshihiko and Narawa, Chiharu
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1169--1173
iIt is often argued that a set of standard linguistic processing functionalities should be identified,with each of them given a formal specification. We would benefit from the formal specifications; for example, the semi-automated composition of a complex language processing workflow could be enabled in due time. This paper extracts a standard set of linguistic processing functionalities and tries to classify them formally. To do this, we first investigated prominent types of language Web services/linguistic processors by surveying a Web-based language service infrastructure and published NLP toolkits. We next induced a set of standard linguistic processing functionalities by carefully investigating each of the linguistic processor types. The standard linguistic processing functionalities was then characterized by the input/output data types, as well as the required data operation types, which were also derived from the investigation. As a result, we came up with an ontological depiction that classifies linguistic processors and linguistic processing functionalities with respect to the fundamental data operation types. We argue that such an ontological depiction can explicitly describe the functional aspects of a linguistic processing functionality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,713
inproceedings
szekely-etal-2012-evaluating
Evaluating expressive speech synthesis from audiobook corpora for conversational phrases
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1513/
Sz{\'e}kely, {\'E}va and Cabral, Joao Paulo and Abou-Zleikha, Mohamed and Cahill, Peter and Carson-Berndsen, Julie
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3335--3339
Audiobooks are a rich resource of large quantities of natural sounding, highly expressive speech. In our previous research we have shown that it is possible to detect different expressive voice styles represented in a particular audiobook, using unsupervised clustering to group the speech corpus of the audiobook into smaller subsets representing the detected voice styles. These subsets of corpora of different voice styles reflect the various ways a speaker uses their voice to express involvement and affect, or imitate characters. This study is an evaluation of the detection of voice styles in an audiobook in the application of expressive speech synthesis. A further aim of this study is to investigate the usability of audiobooks as a language resource for expressive speech synthesis of utterances of conversational speech. Two evaluations have been carried out to assess the effect of the genre transfer: transmitting expressive speech from read aloud literature to conversational phrases with the application of speech synthesis. The first evaluation revealed that listeners have different voice style preferences for a particular conversational phrase. The second evaluation showed that it is possible for users of speech synthesis systems to learn the characteristics of a voice style well enough to make reliable predictions about what a certain utterance will sound like when synthesised using that voice style.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,714
inproceedings
garrido-etal-2012-i3media
The {I}3{MEDIA} speech database: a trilingual annotated corpus for the analysis and synthesis of emotional speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1514/
Garrido, Juan Mar{\'i}a and Laplaza, Yesika and Marquina, Montse and Pearman, Andrea and Escalada, Jos{\'e} Gregorio and Rodr{\'i}guez, Miguel {\'A}ngel and Armenta, Ana
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1197--1202
In this article the I3Media corpus is presented, a trilingual (Catalan, English, Spanish) speech database of neutral and emotional material collected for analysis and synthesis purposes. The corpus is actually made up of six different subsets of material: a neutral subcorpus, containing emotionless utterances; a ‘dialog' subcorpus, containing typical call center utterances; an ‘emotional' corpus, a set of sentences representative of pure emotional states; a ‘football' subcorpus, including utterances imitating a football broadcasting situation; a ‘SMS' subcorpus, including readings of SMS texts; and a ‘paralinguistic elements' corpus, including recordings of interjections and paralinguistic sounds uttered in isolation. The corpus was read by professional speakers (male, in the case of Spanish and Catalan; female, in the case of the English corpus), carefully selected to meet criteria of language competence, voice quality and acting conditions. It is the result of a collaboration between the Speech Technology Group at Telef{\'o}nica Investigaci{\'o}n y Desarrollo (TID) and the Speech and Language Group at Barcelona Media Centre d`Innovaci{\'o} (BM), as part of the I3Media project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,715
inproceedings
elson-2012-dramabank
{D}rama{B}ank: Annotating Agency in Narrative Discourse
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1515/
Elson, David
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2813--2819
We describe the Story Intention Graph, a set of discourse relations designed to represent aspects of narrative. Compared to prior models, ours is a novel synthesis of the notions of goal, plan, intention, outcome, affect and time that is amenable to corpus annotation. We describe a collection project, DramaBank, which includes encodings of texts ranging from small fables to epic poetry and contemporary nonfiction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,716
inproceedings
bosca-etal-2012-linguagrid
{L}inguagrid: a network of Linguistic and Semantic Services for the {I}talian Language.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1516/
Bosca, Alessio and Dini, Luca and Kouylekov, Milen and Trevisan, Marco
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3304--3307
In order to handle the increasing amount of textual information today available on the web and exploit the knowledge latent in this mass of unstructured data, a wide variety of linguistic knowledge and resources (Language Identification, Morphological Analysis, Entity Extraction, etc.). is crucial. In the last decade LRaas (Language Resource as a Service) emerged as a novel paradigm for publishing and sharing these heterogeneous software resources over the Web. In this paper we present an overview of Linguagrid, a recent initiative that implements an open network of linguistic and semantic Web Services for the Italian language, as well as a new approach for enabling customizable corpus-based linguistic services on Linguagrid LRaaS infrastructure. A corpus ingestion service in fact allows users to upload corpora of documents and to generate classification/clustering models tailored to their needs by means of standard machine learning techniques applied to the textual contents and metadata from the corpora. The models so generated can then be accessed through proper Web Services and exploited to process and classify new textual contents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,717
inproceedings
taslimipoor-etal-2012-using
Using Noun Similarity to Adapt an Acceptability Measure for {P}ersian Light Verb Constructions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1517/
Taslimipoor, Shiva and Fazly, Afsaneh and Hamzeh, Ali
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
670--673
Light verb constructions (LVCs), such as take a walk and make a decision, are a common subclass of multiword expressions (MWEs), whose distinct syntactic and semantic properties call for a special treatment within a computational system. In particular, LVCs are formed semi-productively: often a semantically-general verb (such as take) combines with a number of semantically-similar nouns to form semantically-related LVCs, as in make a decision/choice/commitment. Nonetheless, there are restrictions as to which verbs combine with which class of nouns. A proper computational account of LVCs is even more important for languages such as Persian, in which most verbs are of the form of LVCs. Recently, there has been some work on the automatic identification of MWEs (including LVCs) in resource-rich languages, such as English and Dutch. We adapt such existing techniques for the automatic identification of LVCs in Persian, an under-resourced language. Specifically, we extend an existing statistical measure of the acceptability of English LVCs (Fazly et al., 2007) to make explicit use of semantic classes of noun, and show that such classes are in particular useful for determining the LVC acceptability of new combinations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,718
inproceedings
arranz-hamon-2012-way
On the Way to a Legal Sharing of Web Applications in {NLP}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1518/
Arranz, Victoria and Hamon, Olivier
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2965--2970
For some years now, web services have been employed in Natural Language Processing (NLP) for a number of uses and within a number of sub-areas. Web services allow users to gain access to distant applications without having the need to install them on their local machines. A large paradigm of advantages can be obtained from a practical and development point of view. However, the legal aspects behind this sharing should not be neglected and should be openly discussed so as to understand the implications behind such data exchanges and tool uses. In the framework of PANACEA, this paper highlights the different points involved and describes the work done in order to handle all the legal aspects behind those points.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,719
inproceedings
steinberger-etal-2012-jrc
{JRC} Eurovoc Indexer {JEX} - A freely available multi-label categorisation tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1519/
Steinberger, Ralf and Ebrahim, Mohamed and Turchi, Marco
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
798--805
EuroVoc (2012) is a highly multilingual thesaurus consisting of over 6,700 hierarchically organised subject domains used by European Institutions and many authorities in Member States of the European Union (EU) for the classification and retrieval of official documents. JEX is JRC-developed multi-label classification software that learns from manually labelled data to automatically assign EuroVoc descriptors to new documents in a profile-based category-ranking task. The JEX release consists of trained classifiers for 22 official EU languages, of parallel training data in the same languages, of an interface that allows viewing and amending the assignment results, and of a module that allows users to re-train the tool on their own document collections. JEX allows advanced users to change the document representation so as to possibly improve the categorisation result through linguistic pre-processing. JEX can be used as a tool for interactive EuroVoc descriptor assignment to increase speed and consistency of the human categorisation process, or it can be used fully automatically. The output of JEX is a language-independent EuroVoc feature vector lending itself also as input to various other Language Technology tasks, including cross-lingual clustering and classification, cross-lingual plagiarism detection, sentence selection and ranking, and more.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,720
inproceedings
doukhan-etal-2012-designing
Designing {F}rench Tale Corpora for Entertaining Text To Speech Synthesis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1520/
Doukhan, David and Rosset, Sophie and Rilliard, Albert and d{'}Alessandro, Christophe and Adda-Decker, Martine
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1003--1010
Text and speech corpora for training a tale telling robot have been designed, recorded and annotated. The aim of these corpora is to study expressive storytelling behaviour, and to help in designing expressive prosodic and co-verbal variations for the artificial storyteller). A set of 89 children tales in French serves as a basis for this work. The tales annotation principles and scheme are described, together with the corpus description in terms of coverage and inter-annotator agreement. Automatic analysis of a new tale with the help of this corpus and machine learning is discussed. Metrics for evaluation of automatic annotation methods are discussed. A speech corpus of about 1 hour, with 12 tales has been recorded and aligned and annotated. This corpus is used for predicting expressive prosody in children tales, above the level of the sentence.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,721
inproceedings
aristar-dry-etal-2012-rendering
{\textquotedblleft}Rendering Endangered Lexicons Interoperable through Standards Harmonization{\textquotedblright}: the {RELISH} project
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1521/
Aristar-Dry, Helen and Drude, Sebastian and Windhouwer, Menzo and Gippert, Jost and Nevskaya, Irina
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
766--770
The RELISH project promotes language-oriented research by addressing a two-pronged problem: (1) the lack of harmonization between digital standards for lexical information in Europe and America, and (2) the lack of interoperability among existing lexicons of endangered languages, in particular those created with the Shoebox/Toolbox lexicon building software. The cooperation partners in the RELISH project are the University of Frankfurt (FRA), the Max Planck Institute for Psycholinguistics (MPI Nijmegen), and Eastern Michigan University, the host of the Linguist List (ILIT). The project aims at harmonizing key European and American digital standards whose divergence has hitherto impeded international collaboration on language technology for resource creation and analysis, as well as web services for archive access. Focusing on several lexicons of endangered languages, the project will establish a unified way of referencing lexicon structure and linguistic concepts, and develop a procedure for migrating these heterogeneous lexicons to a standards-compliant format. Once developed, the procedure will be generalizable to the large store of lexical resources involved in the LEGO and DoBeS projects.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,722
inproceedings
martins-2012-le
Le Petit Prince in {UNL}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1522/
Martins, Ronaldo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3201--3204
The present paper addresses the process and the results of the interpretation of the integral text of “Le Petit Prince” (Little Prince), the famous novel by Antoine de Saint-Exup{\'e}ry, from French into UNL. The original text comprised 1,684 interpretation units (15,513 words), which were sorted according to their similarity, from the shortest to the longest ones, and which were then projected into a UNL graph structure, composed of semantic directed binary relations linking nodes associated to the synsets of the corresponding original lexical items. The whole UNL-ization process was carried-out manually and the results have been used as the main resource in a natural language generation project involving already 27 languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,723
inproceedings
varga-etal-2012-unsupervised
Unsupervised document zone identification using probabilistic graphical models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1523/
Varga, Andrea and Preo{\c{t}}iuc-Pietro, Daniel and Ciravegna, Fabio
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1610--1617
Document zone identification aims to automatically classify sequences of text-spans (e.g. sentences) within a document into predefined zone categories. Current approaches to document zone identification mostly rely on supervised machine learning methods, which require a large amount of annotated data, which is often difficult and expensive to obtain. In order to overcome this bottleneck, we propose graphical models based on the popular Latent Dirichlet Allocation (LDA) model. The first model, which we call zoneLDA aims to cluster the sentences into zone classes using only unlabelled data. We also study an extension of zoneLDA called zoneLDAb, which makes distinction between common words and non-common words within the different zone types. We present results on two different domains: the scientific domain and the technical domain. For the latter one we propose a new document zone classification schema, which has been annotated over a collection of 689 documents, achieving a Kappa score of 85{\%}. Overall our experiments show promising results for both of the domains, outperforming the baseline model. Furthermore, on the technical domain the performance of the models are comparable to the supervised approach using the same feature sets. We thus believe that graphical models are a promising avenue of research for automatic document zoning.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,724
inproceedings
fuentes-etal-2012-summarizing
Summarizing a multimodal set of documents in a Smart Room
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1524/
Fuentes, Maria and Rodr{\'i}guez, Horacio and Turmo, Jordi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2553--2558
This article reports an intrinsic automatic summarization evaluation in the scientific lecture domain. The lecture takes place in a Smart Room that has access to different types of documents produced from different media. An evaluation framework is presented to analyze the performance of systems producing summaries answering a user need. Several ROUGE metrics are used and a manual content responsiveness evaluation was carried out in order to analyze the performance of the evaluated approaches. Various multilingual summarization approaches are analyzed showing that the use of different types of documents outperforms the use of transcripts. In fact, not using any part of the spontaneous speech transcription in the summary improves the performance of automatic summaries. Moreover, the use of semantic information represented in the different textual documents coming from different media helps to improve summary quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,725
inproceedings
de-melo-etal-2012-empirical
Empirical Comparisons of {MASC} Word Sense Annotations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1525/
de Melo, Gerard and Baker, Collin F. and Ide, Nancy and Passonneau, Rebecca J. and Fellbaum, Christiane
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3036--3043
We analyze how different conceptions of lexical semantics affect sense annotations and how multiple sense inventories can be compared empirically, based on annotated text. Our study focuses on the MASC project, where data has been annotated using WordNet sense identifiers on the one hand, and FrameNet lexical units on the other. This allows us to compare the sense inventories of these lexical resources empirically rather than just theoretically, based on their glosses, leading to new insights. In particular, we compute contingency matrices and develop a novel measure, the Expected Jaccard Index, that quantifies the agreement between annotations of the same data based on two different resources even when they have different sets of categories.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,726
inproceedings
strassel-etal-2012-creating
Creating {HAVIC}: Heterogeneous Audio Visual {I}nternet Collection
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1526/
Strassel, Stephanie and Morris, Amanda and Fiscus, Jonathan and Caruso, Christopher and Lee, Haejoong and Over, Paul and Fiumara, James and Shaw, Barbara and Antonishek, Brian and Michel, Martial
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2573--2577
Linguistic Data Consortium and the National Institute of Standards and Technology are collaborating to create a large, heterogeneous annotated multimodal corpus to support research in multimodal event detection and related technologies. The HAVIC (Heterogeneous Audio Visual Internet Collection) Corpus will ultimately consist of several thousands of hours of unconstrained user-generated multimedia content. HAVIC has been designed with an eye toward providing increased challenges for both acoustic and video processing technologies, focusing on multi-dimensional variation inherent in user-generated multimedia content. To date the HAVIC corpus has been used to support the NIST 2010 and 2011 TRECVID Multimedia Event Detection (MED) Evaluations. Portions of the corpus are expected to be released in LDC`s catalog in the coming year, with the remaining segments being published over time after their use in the ongoing MED evaluations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,727
inproceedings
bouamor-etal-2012-identifying
Identifying bilingual Multi-Word Expressions for Statistical Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1527/
Bouamor, Dhouha and Semmar, Nasredine and Zweigenbaum, Pierre
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
674--679
MultiWord Expressions (MWEs) repesent a key issue for numerous applications in Natural Language Processing (NLP) especially for Machine Translation (MT). In this paper, we describe a strategy for detecting translation pairs of MWEs in a French-English parallel corpus. In addition we introduce three methods aiming to integrate extracted bilingual MWE S in M OSES, a phrase based Statistical Machine Translation (SMT) system. We experimentally show that these textual units can improve translation quality.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,728
inproceedings
redeker-etal-2012-multi
Multi-Layer Discourse Annotation of a {D}utch Text Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1528/
Redeker, Gisela and Berzl{\'a}novich, Ildik{\'o} and van der Vliet, Nynke and Bouma, Gosse and Egg, Markus
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2820--2825
We have compiled a corpus of 80 Dutch texts from expository and persuasive genres, which we annotated for rhetorical and genre-specific discourse structure, and lexical cohesion with the goal of creating a gold standard for further research. The annota{\^A}{\textlnot}tions are based on a segmentation of the text in elementary discourse units that takes into account cues from syntax and punctuation. During the labor-intensive discourse-structure annotation (RST analysis), we took great care to thoroughly reconcile the initial analyses. That process and the availability of two independent initial analyses for each text allows us to analyze our disagreements and to assess the confusability of RST relations, and thereby improve the annotation guidelines and gather evidence for the classification of these relations into larger groups. We are using this resource for corpus-based studies of discourse relations, discourse markers, cohesion, and genre differences, e.g., the question of how discourse structure and lexical cohesion interact for different genres in the overall organization of texts. We are also exploring automatic text segmentation and semi-automatic discourse annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,729
inproceedings
rapp-etal-2012-identifying
Identifying Word Translations from Comparable Documents Without a Seed Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1529/
Rapp, Reinhard and Sharoff, Serge and Babych, Bogdan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
460--466
The extraction of dictionaries from parallel text corpora is an established technique. However, as parallel corpora are a scarce resource, in recent years the extraction of dictionaries using comparable corpora has obtained increasing attention. In order to find a mapping between languages, almost all approaches suggested in the literature rely on a seed lexicon. The work described here achieves competitive results without requiring such a seed lexicon. Instead it presupposes mappings between comparable documents in different languages. For some common types of textual resources (e.g. encyclopedias or newspaper texts) such mappings are either readily available or can be established relatively easily. The current work is based on Wikipedias where the mappings between languages are determined by the authors of the articles. We describe a neural-network inspired algorithm which first characterizes each Wikipedia article by a number of keywords, and then considers the identification of word translations as a variant of word alignment in a noisy environment. We present results and evaluations for eight language pairs involving Germanic, Romanic, and Slavic languages as well as Chinese.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,730
inproceedings
drude-etal-2012-language
The Language Archive {---} a new hub for language resources
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1530/
Drude, Sebastian and Broeder, Daan and Trilsbeek, Paul and Wittenburg, Peter
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3264--3267
This contribution presents “The Language Archive” (TLA), a new unit at the MPI for Psycholinguistics, discussing the current developments in management of scientific data, considering the need for new data research infrastructures. Although several initiatives worldwide in the realm of language resources aim at the integration, preservation and mobilization of research data, the state of such scientific data is still often problematic. Data are often not well organized and archived and not described by metadata {\textemdash} even unique data such as field-work observational data on endangered languages is still mostly on perishable carriers. New data centres are needed that provide trusted, quality-reviewed, persistent services and suitable tools and that take legal and ethical issues seriously. The CLARIN initiative has established criteria for suitable centres. TLA is in a good position to be one of such centres. It is based on three essential pillars: (1) A data archive; (2) management, access and annotation tools; (3) archiving and software expertise for collaborative projects. The archive hosts mostly observational data on small languages worldwide and language acquisition data, but also data resulting from experiments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,731
inproceedings
khademian-etal-2012-holistic
A Holistic Approach to Bilingual Sentence Fragment Extraction from Comparable Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1531/
Khademian, Mahdi and Taghipour, Kaveh and Mansour, Saab and Khadivi, Shahram
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4073--4079
Achieving accurate translation, especially in multiple domain documents with statistical machine translation systems, requires more and more bilingual texts and this need becomes more critical when training such systems for language pairs with scarce training data. In the recent years, there have been some researches on new sources of parallel texts that are documents which are not necessarily parallel but are comparable. Since these methods search for possible translation equivalences in a greedy manner, they are unable to consider all possible parallel texts in comparable documents. This paper investigates a different approach for this need by considering relationships between all words of two comparable documents, which works fairly well even in the worst case of comparability. We represent each document pair in a matrix and then transform it to a new space to find parallel fragments. Evaluations show that the system is successful in extraction of useful fragment pairs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,732
inproceedings
loukachevitch-2012-automatic
Automatic Term Recognition Needs Multiple Evidence
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1532/
Loukachevitch, Natalia
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2401--2407
In this paper we argue that automatic term extraction is an inherently multifactor process and that term extraction models need to be based on multiple features, including the specific type of terminological resource under development. We proposed to use three types of features for the extraction of two-word terms and showed that all these types of features are useful for term extraction. The set of features includes new features such as features extracted from an existing domain-specific thesaurus and features based on Internet search results. We studied the set of features for term extraction in two different domains and showed that the combination of several types of features considerably enhances the quality of the term extraction procedure. We found that for developing term extraction models in a specific domain, it is important to take into account some properties of the domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,733
inproceedings
kolachina-etal-2012-evaluation
Evaluation of Discourse Relation Annotation in the {H}indi Discourse Relation Bank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1533/
Kolachina, Sudheer and Prasad, Rashmi and Sharma, Dipti Misra and Joshi, Aravind
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
823--828
We describe our experiments on evaluating recently proposed modifications to the discourse relation annotation scheme of the Penn Discourse Treebank (PDTB), in the context of annotating discourse relations in Hindi Discourse Relation Bank (HDRB). While the proposed modifications were driven by the desire to introduce greater conceptual clarity in the PDTB scheme and to facilitate better annotation quality, our findings indicate that overall, some of the changes render the annotation task much more difficult for the annotators, as also reflected in lower inter-annotator agreement for the relevant sub-tasks. Our study emphasizes the importance of best practices in annotation task design and guidelines, given that a major goal of an annotation effort should be to achieve maximally high agreement between annotators. Based on our study, we suggest modifications to the current version of the HDRB, to be incorporated in our future annotation work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,734
inproceedings
kermes-2012-methodology
A methodology for the extraction of information about the usage of formulaic expressions in scientific texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1534/
Kermes, Hannah
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2064--2068
In this paper, we present a methodology for the extraction of formulaic expressions which goes beyond the mere extraction of candidate patterns. Using a pipeline, we are able to extract information about the usage of formulaic expressions automatically from text corpora. According to Biber and Barbieri (2007), formulaic expressions are “important building blocks of discourse in spoken and written registers”. The automatic extraction procedure can help to investigate the usage and function of these recurrent patterns in different registers and domains. Formulaic expressions are commonplace not only in everyday language but also in scientific writing. Patterns such as `in this paper', `the number of', `on the basis of' are often used by scientists to convey research interests, the theoretical basis of their studies, results of experiments, scientific findings as well as conclusions, and serve as discourse organizers. For Hyland (2008) they help to “shape meanings in specific context and contribute to our sense of coherence in a text”. We are interested in: (i) which and what types of formulaic expressions are used in scientific texts; (ii) the distribution of formulaic expressions across different scientific disciplines; and (iii) where formulaic expressions occur within a text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,735
inproceedings
temnikova-etal-2012-clcm
{CLCM} - A Linguistic Resource for Effective Simplification of Instructions in the Crisis Management Domain and its Evaluations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1535/
Temnikova, Irina and Orasan, Constantin and Mitkov, Ruslan
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3007--3014
Due to the increasing number of emergency situations, which can have substantial consequences, both financial and fatal, the Crisis Management (CM) domain is developing at an exponential speed. The efficient management of emergency situations relies on clear communication between all of the participants in a crisis situation. For these reasons, the Text Complexity (TC) of the CM domain was investigated, and the investigation showed that CM domain texts exhibit high TC levels. This article presents a new linguistic resource in the form of Controlled Language (CL) guidelines for manual text simplification in the CM domain, which aims to address the high TC of the CM domain and produce clear messages to be used in crisis situations. The effectiveness of the resource has been tested via evaluation from several different perspectives important for the domain. The overall results show that the CLCM simplification has a positive impact on TC, reading comprehension, manual translation and machine translation. Additionally, an investigation of the cognitive difficulty in applying manual simplification operations led to interesting discoveries. This article provides details of the evaluation methods, the conducted experiments, their results and indications about future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,736
inproceedings
ghayoomi-2012-grammar
From Grammar Rule Extraction to Treebanking: A Bootstrapping Approach
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1536/
Ghayoomi, Masood
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1912--1919
Most reliable language resources are developed via human supervision. Developing supervised annotated data is hard and tedious, and very time-consuming when done entirely manually; as a result, various types of annotated data, including treebanks, are not available for many languages. Considering that a portion of the language is regular, we can define regular expressions as grammar rules to recognize the strings which match them, and thus reduce the human effort needed to annotate further unseen data. In this paper, we propose an incremental bootstrapping approach via extracting grammar rules when no treebank is available in the first step. Since Persian suffers from a lack of available data sources, we have applied our method to develop a treebank for this language. Our experiment shows that this approach significantly decreases the amount of manual effort in the annotation process while enlarging the treebank.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,737
inproceedings
suzuki-etal-2012-detecting
Detecting {J}apanese Compound Functional Expressions using Canonical/Derivational Relation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1537/
Suzuki, Takafumi and Abe, Yusuke and Toyota, Itsuki and Utsuro, Takehito and Matsuyoshi, Suguru and Tsuchiya, Masatoshi
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
null
The Japanese language has various types of functional expressions. In order to organize Japanese functional expressions with various surface forms, a lexicon of Japanese functional expressions with hierarchical organization was compiled. This paper proposes how to design a framework for identifying more than 16,000 functional expressions in Japanese texts by utilizing the hierarchical organization of the lexicon. In our framework, the more than 16,000 functional expressions are roughly divided into canonical / derived functional expressions. Each derived functional expression is intended to be identified by referring to the most similar occurrence of its canonical expression. Contextual occurrence information of the much smaller number of canonical expressions is expanded to the whole set of derived expressions and utilized when identifying those derived expressions. We also empirically show that the proposed method can correctly identify more than 80{\%} of the functional / content usages with fewer than 38,000 training instances of manually identified canonical expressions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,738
inproceedings
elahimanesh-etal-2012-improving
Improving K-Nearest Neighbor Efficacy for {F}arsi Text Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1538/
Elahimanesh, Mohammad Hossein and Minaei, Behrouz and Malekinezhad, Hossein
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1618--1621
One of the common processes in the field of text mining is text classification. Because of the complex nature of the Farsi language, with words consisting of separate parts and combined verbs, most text classification systems are not applicable to Farsi texts. K-Nearest Neighbors (KNN) is one of the most widely used methods for text classification and presents good performance in experiments on different datasets. A method to improve the classification performance of KNN is proposed in this paper. The effects of removing or maintaining stop words and of applying N-grams with different lengths are also studied. For this study, a portion of a standard Farsi corpus called Hamshahri1 and articles from some archived newspapers are used. As the results indicate, classification efficiency improves by applying this approach, especially when the eight-gram indexing method and stop-word removal are applied. Using N-grams longer than 3 characters presented very encouraging results for Farsi text classification. The results of classification using our method are compared with the results obtained in the mentioned related works.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,739
inproceedings
gibbon-2012-ulex
{UL}ex: new data models and a mobile environment for corpus enrichment.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1539/
Gibbon, Dafydd
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3392--3398
The Ubiquitous Lexicon concept (ULex) has two sides. In the first kind of ubiquity, ULex combines prelexical corpus-based lexicon extraction and formatting techniques from speech technology and corpus linguistics for both language documentation and basic speech technology (e.g. speech synthesis), and proposes new XML models for the basic datatypes concerned, in order to enable standardisation and data interchange in these areas. The prelexical data types range from basic wordlists through diphone tables to concordance and interlinear glossing structures. While several proposals for standardising XML models of lexicon types are available, these more basic pre-lexical data types, which are important in lexical acquisition, have received little attention. In the second area of ubiquity, ULex is implemented in a novel mobile environment to enable collaborative cross-platform use via a web application, either on the internet or, via a local hotspot, on an intranet. It runs not only on standard PC types but also on tablet computers and smartphones and is thereby also rendered truly ubiquitous in a geographical sense.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,740
inproceedings
ogiso-etal-2012-unidic
{U}ni{D}ic for Early Middle {J}apanese: a Dictionary for Morphological Analysis of Classical {J}apanese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1540/
Ogiso, Toshinobu and Komachi, Mamoru and Den, Yasuharu and Matsumoto, Yuji
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
911--915
In order to construct an annotated diachronic corpus of Japanese, we propose to create a new dictionary for morphological analysis of Early Middle Japanese (Classical Japanese) based on UniDic, a dictionary for Contemporary Japanese. Differences between Early Middle Japanese and Contemporary Japanese, which prevent a na{\"i}ve adaptation of UniDic to Early Middle Japanese, are found at the levels of lexicon, morphology, grammar, orthography and pronunciation. In order to overcome these problems, we extended dictionary entries and created a training corpus of Early Middle Japanese to adapt UniDic for Contemporary Japanese to Early Middle Japanese. Experimental results show that the proposed UniDic-EMJ, a new dictionary for Early Middle Japanese, achieves accuracy (97{\%}) as high as needed for linguistic research on lexicon and grammar in Japanese classical text analysis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,741
inproceedings
shoaib-etal-2012-platform
A platform-independent user-friendly dictionary from {I}talian to {LIS}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1541/
Shoaib, Umar and Ahmad, Nadeem and Prinetto, Paolo and Tiotto, Gabriele
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2435--2438
The lack of a written representation for Italian Sign Language (LIS) makes it difficult to perform tasks like looking up a new word in a dictionary. Most paper dictionaries show LIS signs in drawings or pictures. It is not a simple proposition to understand the meaning of a sign from paper dictionaries unless one already knows the meaning. This paper presents the LIS dictionary, which provides the facility to translate Italian text into sign language. LIS signs are shown as video animations performed by a virtual character. The LIS dictionary provides integration with the MultiWordNet database. The integration with MultiWordNet allows a rich extension with the meanings and senses of the words existing in MultiWordNet. The dictionary allows users to acquire information about lemmas, synonyms and synsets in the Sign Language (SL). The application is platform independent and can be used on any operating system. The results for input lemmas are displayed in groups of grammatical categories.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,742
inproceedings
laparra-etal-2012-mapping
Mapping {W}ord{N}et to the {K}yoto ontology
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1542/
Laparra, Egoitz and Rigau, German and Vossen, Piek
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2584--2589
This paper describes the connection of WordNet to a generic ontology based on DOLCE. We developed a complete set of heuristics for mapping all WordNet nouns, verbs and adjectives to the ontology. Moreover, the mapping also allows predicates to be represented in a uniform and interoperable way, regardless of the way they are expressed in the text and in which language. Together with the ontology, the WordNet mappings provide an extremely rich and powerful basis for semantic processing of text in any domain. In particular, the mapping has been used in a knowledge-rich event-mining system developed for the Asian-European project KYOTO.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,743
inproceedings
broda-etal-2012-tools
Tools for pl{W}ord{N}et Development. Presentation and Perspectives
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1543/
Broda, Bartosz and Maziarz, Marek and Piasecki, Maciej
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3647--3652
Building a wordnet is a serious undertaking. Fortunately, Language Technology (LT) can improve the process of wordnet construction both in terms of quality and cost. In this paper we present the LT tools used during the construction of plWordNet and their influence on the lexicographer's workflow. LT is employed in plWordNet development at every possible step: from data gathering through data analysis to data presentation. Every decision still requires input from the lexicographer, but the quality of the supporting tools is an important factor. Thus a limited evaluation of the usefulness of the employed tools is carried out on the basis of questionnaires.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,744
inproceedings
eskevich-etal-2012-creating
Creating a Data Collection for Evaluating Rich Speech Retrieval
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1544/
Eskevich, Maria and Jones, Gareth J.F. and Larson, Martha and Ordelman, Roeland
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1736--1743
We describe the development of a test collection for the investigation of speech retrieval beyond the identification of relevant content. This collection focuses on satisfying user information needs for queries associated with specific types of speech acts. The collection is based on an archive of Internet video from the video sharing platform blip.tv, and was provided by the MediaEval benchmarking initiative. A crowdsourcing approach was used to identify segments in the video data which contain speech acts, to create a description of the video containing the act and to generate search queries designed to re-find this speech act. We describe and reflect on our experiences with crowdsourcing this test collection using the Amazon Mechanical Turk platform. We highlight the challenges of constructing this dataset, including the selection of the data source, the design of the crowdsourcing task and the specification of queries and relevant items.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,745
inproceedings
chiarcos-2012-ontologies
Ontologies of Linguistic Annotation: Survey and perspectives
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1545/
Chiarcos, Christian
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
303--310
This paper announces the release of the Ontologies of Linguistic Annotation (OLiA). The OLiA ontologies represent a repository of annotation terminology for various linguistic phenomena on a great bandwidth of languages. This paper summarizes the results of five years of research and describes recent developments and directions for further research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,746
inproceedings
chiarcos-etal-2012-open
The Open Linguistics Working Group
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1546/
Chiarcos, Christian and Hellmann, Sebastian and Nordhoff, Sebastian and Moran, Steven and Littauer, Richard and Eckle-Kohler, Judith and Gurevych, Iryna and Hartmann, Silvana and Matuschek, Michael and Meyer, Christian M.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3603--3610
This paper describes the Open Linguistics Working Group (OWLG) of the Open Knowledge Foundation (OKFN). The OWLG is an initiative concerned with linguistic data, formed by scholars from diverse fields, including linguistics, NLP, and information science. The primary goal of the working group is to promote the idea of open linguistic resources, to develop means for their representation and to encourage the exchange of ideas across different disciplines. This paper summarizes the progress of the working group, the goals that have been identified, the problems that we are going to address, and recent activities and ongoing developments. Here, we put particular emphasis on the development of a Linked Open Data (sub-)cloud of linguistic resources that is currently being pursued by several OWLG members.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,747
inproceedings
manshadi-etal-2012-annotation
An Annotation Scheme for Quantifier Scope Disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1547/
Manshadi, Mehdi and Allen, James and Swift, Mary
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1546--1553
Annotating natural language sentences with quantifier scoping has proved to be very hard. In order to overcome the challenge, previous work on building scope-annotated corpora has focused on sentences with two explicitly quantified noun phrases (NPs). Furthermore, it does not address the annotation of scopal operators or complex NPs such as plurals and definites. We present the first annotation scheme for quantifier scope disambiguation where there is no restriction on the type or the number of scope-bearing elements in the sentence. We discuss some of the most prominent complex scope phenomena encountered in annotating the corpus, such as plurality and type-token distinction, and present mechanisms to handle those phenomena.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,748
inproceedings
chiarcos-2012-generic
A generic formalism to represent linguistic corpora in {RDF} and {OWL}/{DL}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1548/
Chiarcos, Christian
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3205--3212
This paper describes POWLA, a generic formalism to represent linguistic corpora by means of RDF and OWL/DL. Unlike earlier approaches in this direction, POWLA is not tied to a specific selection of annotation layers, but rather, it is designed to support any kind of text-oriented annotation. POWLA inherits its generic character from the underlying data model PAULA (Dipper, 2005; Chiarcos et al., 2009) that is based on early sketches of the ISO TC37/SC4 Linguistic Annotation Framework (Ide and Romary, 2004). As opposed to existing standoff XML linearizations for such generic data models, it uses RDF as representation formalism and OWL/DL for validation. The paper discusses advantages of this approach, in particular with respect to interoperability and queriability, which are illustrated for the MASC corpus, an open multi-layer corpus of American English (Ide et al., 2008).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,749
inproceedings
ahtaridis-etal-2012-ldc
{LDC} Language Resource Database: Building a Bibliographic Database
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1549/
Ahtaridis, Eleftheria and Cieri, Christopher and DiPersio, Denise
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1723--1728
The Linguistic Data Consortium (LDC) creates and provides language resources (LRs), including data, tools and specifications. In order to assess the impact of these LRs and to support both LR users and authors, LDC is collecting metadata about, and URLs for, research papers that introduce, describe, critique, extend or rely upon LDC LRs. Current collection efforts focus on papers published in journals and conference proceedings that are available online. To date, nearly 300 LRs, or over half of those LDC distributes, have been searched for extensively, and almost 8,000 research papers about these LRs have been documented. This paper discusses the issues with collecting references and includes a preliminary analysis of the results. The remaining goals of the project are also outlined.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,750
inproceedings
giannoulis-potamianos-2012-hierarchical
A hierarchical approach with feature selection for emotion recognition from speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1550/
Giannoulis, Panagiotis and Potamianos, Gerasimos
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1203--1206
We examine speaker-independent emotion classification from speech, reporting experiments on the Berlin database across six basic emotions. Our approach is novel in a number of ways: First, it is hierarchical, motivated by our belief that the most suitable feature set for classification is different for each pair of emotions. Further, it uses a large number of feature sets of different types, such as prosodic, spectral, glottal-flow-based, and AM-FM ones. Finally, it employs a two-stage feature selection strategy to achieve discriminative dimensionality reduction. The approach results in a classification rate of 85{\%}, comparable to the state of the art on this dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,751
inproceedings
singh-2012-concise
A Concise Query Language with Search and Transform Operations for Corpora with Multiple Levels of Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1551/
Singh, Anil Kumar
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1490--1497
The usefulness of annotated corpora is greatly increased if there is an associated tool that allows various kinds of operations to be performed in a simple way. Different kinds of annotation frameworks and many query languages for them have been proposed, including some that deal with multiple layers of annotation. We present here an easy-to-learn query language for a particular kind of annotation framework based on ‘threaded trees', which are somewhere between the complete order of a tree and the anarchy of a graph. Through `typed' threads, they can allow multiple levels of annotation in the same document. Our language has a simple, intuitive and concise syntax and high expressive power. It not only allows searching for complicated patterns with short queries but also supports data manipulation and the specification of arbitrary return values. Many of the commonly used tasks that otherwise require writing programs can be performed with one or more queries. We compare the language with some others and try to evaluate it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,752
inproceedings
ramanathan-visweswariah-2012-study
A Study of Word-Classing for {MT} Reordering
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1552/
Ramanathan, Ananthakrishnan and Visweswariah, Karthik
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3971--3976
MT systems typically use parsers to help reorder constituents. However most languages do not have adequate treebank data to learn good parsers, and such training data is extremely time-consuming to annotate. Our earlier work has shown that a reordering model learned from word-alignments using POS tags as features can improve MT performance (Visweswariah et al., 2011). In this paper, we investigate the effect of word-classing on reordering performance using this model. We show that unsupervised word clusters perform somewhat worse but still reasonably well, compared to a part-of-speech (POS) tagger built with a small amount of annotated data; while a richer tag set including case and gender-number-person further improves reordering performance by around 1.2 monolingual BLEU points. While annotating this richer tagset is more complicated than annotating the base tagset, it is much easier than annotating treebank data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,753
inproceedings
kotze-etal-2012-large
Large aligned treebanks for syntax-based machine translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1553/
Kotz{\'e}, Gideon and Vandeghinste, Vincent and Martens, Scott and Tiedemann, J{\"o}rg
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
467--473
We present a collection of parallel treebanks that have been automatically aligned on both the terminal and the nonterminal constituent level for use in syntax-based machine translation. We describe how they were constructed and applied to a syntax- and example-based machine translation system called Parse and Corpus-Based Machine Translation (PaCo-MT). For the language pair Dutch to English, we present evaluation scores of both the nonterminal constituent alignments and the MT system itself, and in the latter case, compare them with those of Moses, a current state-of-the-art statistical MT system, when trained on the same data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,754
inproceedings
skadina-etal-2012-collecting
Collecting and Using Comparable Corpora for Statistical Machine Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1554/
Skadi{\c{n}}a, Inguna and Aker, Ahmet and Mastropavlos, Nikos and Su, Fangzhong and Tufis, Dan and Verlic, Mateja and Vasi{\c{l}}jevs, Andrejs and Babych, Bogdan and Clough, Paul and Gaizauskas, Robert and Glaros, Nikos and Paramita, Monica Lestari and Pinnis, M{\={a}}rcis
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
438--445
Lack of sufficient parallel data for many languages and domains is currently one of the major obstacles to further advancement of automated translation. The ACCURAT project is addressing this issue by researching methods for improving machine translation systems through the use of comparable corpora. In this paper we present tools and techniques developed in the ACCURAT project that allow additional data needed for statistical machine translation to be extracted from comparable corpora. We present methods and tools for the acquisition of comparable corpora from the Web and other sources, for the evaluation of the comparability of collected corpora, for multi-level alignment of comparable corpora and for the extraction of lexical and terminological data for machine translation. Finally, we present initial evaluation results on the utility of the collected corpora in domain-adapted machine translation and real-life applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,755
inproceedings
piasecki-etal-2012-recognition
Recognition of {P}olish Derivational Relations Based on Supervised Learning Scheme
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1555/
Piasecki, Maciej and Ramocki, Radoslaw and Maziarz, Marek
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
916--922
The paper presents the construction of Derywator -- a language tool for the recognition of Polish derivational relations. It was built on the basis of machine learning, following a bootstrapping approach: a limited set of derivational pairs described manually by linguists in plWordNet is used to train Derivator. The tool is intended to be applied in the semi-automated expansion of plWordNet with new instances of derivational relations. The training process is based on the construction of two transducers working in opposite directions: one for prefixes and one for suffixes. Internal stem alternations are recognised, recorded in the form of mapping sequences and stored together with the transducers. Raw results produced by Derivator next undergo corpus-based and morphological filtering. A set of derivational relations defined in plWordNet is presented. Results of tests for different derivational relations are discussed. The problem of the necessary corpus-based semantic filtering is analysed. The presented tool depends to a very small extent on hand-crafted knowledge for a particular language: only a table of possible alternations and morphological filtering rules must be exchanged, and this should not take longer than a couple of working days.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,756
inproceedings
moraes-lima-2012-combining
Combining Formal Concept Analysis and semantic information for building ontological structures from texts : an exploratory study
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1556/
Moraes, S{\'i}lvia and Lima, Vera
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
3653--3660
This work studies conceptual structures based on the Formal Concept Analysis method. We build these structures based on lexico-semantic information extracted from texts, among which we highlight the semantic roles. In our research, we propose ways to include semantic roles in concepts produced by this formal method. We analyze the contribution of semantic roles and verb classes in the composition of these concepts through structural measures. In these studies, we use the Penn Treebank Sample and SemLink 1.1 corpora, both in English.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,757
inproceedings
shima-mitamura-2012-diversifiable
Diversifiable Bootstrapping for Acquiring High-Coverage Paraphrase Resource
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1557/
Shima, Hideki and Mitamura, Teruko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2666--2673
Recognizing similar or close meaning across different surface forms is a common challenge in various Natural Language Processing and Information Access applications. However, we identified multiple limitations in existing resources that can be used for solving the vocabulary mismatch problem. To this end, we propose the Diversifiable Bootstrapping algorithm, which can learn paraphrase patterns with high lexical coverage. The algorithm works in a lightly-supervised iterative fashion, where instance and pattern acquisition are interleaved, each using information provided by the other. By tweaking a parameter in the algorithm, the resulting patterns can be diversified to a specific degree that one can control.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,758
inproceedings
kestemont-etal-2012-netlog
The Netlog Corpus. A Resource for the Study of {F}lemish {D}utch {I}nternet Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1558/
Kestemont, Mike and Peersman, Claudia and De Decker, Benny and De Pauw, Guy and Luyckx, Kim and Morante, Roser and Vaassen, Frederik and van de Loo, Janneke and Daelemans, Walter
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1569--1572
Although in recent years numerous forms of Internet communication {\textemdash} such as e-mail, blogs, chat rooms and social network environments {\textemdash} have emerged, balanced corpora of Internet speech with trustworthy meta-information (e.g. age and gender) or linguistic annotations are still limited. In this paper we present a large corpus of Flemish Dutch chat posts that were collected from the Belgian online social network Netlog. For all of these posts we also acquired the users' profile information, making this corpus a unique resource for computational and sociolinguistic research. However, for analyzing such a corpus on a large scale, NLP tools are required for e.g. automatic POS tagging or lemmatization. Because many NLP tools fail to correctly analyze the surface forms of chat language usage, we propose to normalize this ‘anomalous' input into a format suitable for existing NLP solutions for standard Dutch. Additionally, we have annotated a substantial part of the corpus (i.e. the Chatty subset) to provide a gold standard for the evaluation of future approaches to automatic (Flemish) chat language normalization.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,759
inproceedings
keskes-etal-2012-clause
Clause-based Discourse Segmentation of {A}rabic Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1559/
Keskes, Iskandar and Benamara, Farah and Belguith, Lamia Hadrich
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2826--2832
This paper describes a rule-based approach to segmenting Arabic texts into clauses. Our method relies on an extensive analysis of a large set of lexical cues as well as punctuation marks. Our analysis was carried out on two different corpus genres: news articles and elementary school textbooks. We propose a three-step segmentation algorithm: first using only punctuation marks, then relying only on lexical cues, and finally using both typology and lexical cues. The results were compared with manual segmentations produced by experts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,760
inproceedings
matsubayashi-etal-2012-building
Building {J}apanese Predicate-argument Structure Corpus using Lexical Conceptual Structure
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1560/
Matsubayashi, Yuichiroh and Miyao, Yusuke and Aizawa, Akiko
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1554--1558
This paper introduces our study on creating a Japanese corpus that is annotated using semantically-motivated predicate-argument structures. We propose an annotation framework based on Lexical Conceptual Structure (LCS), where the semantic roles of arguments are represented through a semantic structure decomposed into several primitive predicates. As a first stage of the project, we extended Jackendoff's LCS theory to increase generality of expression and coverage for verbs frequently appearing in the corpus, and successfully created LCS structures for 60 frequent Japanese predicates in the Kyoto University Text Corpus (KTC). In this paper, we report our framework for creating the corpus and the current status of creating an LCS dictionary for Japanese predicates.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,761
inproceedings
elahi-monachesi-2012-examination
An Examination of Cross-Cultural Similarities and Differences from Social Media Data with respect to Language Use
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1561/
Elahi, Mohammad Fazleh and Monachesi, Paola
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
4080--4086
We present a methodology for analyzing cross-cultural similarities and differences using language as a medium, love as domain, social media as a data source and `Terms' and `Topics' as cultural features. We discuss the techniques necessary for the creation of the social data corpus from which emotion terms have been extracted using NLP techniques. Topics of love discussion were then extracted from the corpus by means of Latent Dirichlet Allocation (LDA). Finally, on the basis of these features, a cross-cultural comparison was carried out. For the purpose of cross-cultural analysis, the experimental focus was on comparing data from a culture from the East (India) with a culture from the West (United States of America). Similarities and differences between these cultures have been analyzed with respect to the usage of emotions, their intensities and the topics used during love discussion in social media.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,762
inproceedings
uryupina-poesio-2012-domain
Domain-specific vs. Uniform Modeling for Coreference Resolution
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1562/
Uryupina, Olga and Poesio, Massimo
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
187--191
Several corpora annotated for coreference have been made available in the past decade. These resources differ with respect to their size and underlying structure: the number of domains and their similarity. Our study compares domain-specific models, learned from small heterogeneous subsets of the investigated corpora, against uniform models that utilize all the available data. We show that for knowledge-poor baseline systems, domain-specific and uniform modeling yield the same results. Systems relying on large amounts of linguistic knowledge, however, exhibit differences in their performance: with all the designed features in use, domain-specific models suffer from over-fitting, whereas with pre-selected feature sets they tend to outperform union models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,763
inproceedings
balahur-hermida-2012-extending
Extending the {E}moti{N}et Knowledge Base to Improve the Automatic Detection of Implicitly Expressed Emotions from Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1563/
Balahur, Alexandra and Hermida, Jes{\'u}s M.
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1207--1214
Sentiment analysis is one of the recent, highly dynamic fields in Natural Language Processing. Although much research has been performed in this area, most existing approaches are based on word-level analysis of texts and are mostly able to detect only explicit expressions of sentiment. However, in many cases, emotions are not expressed by using words with an affective meaning (e.g. happy), but by describing real-life situations, which readers (based on their commonsense knowledge) detect as being related to a specific emotion. Given the challenges of detecting emotions from contexts in which no lexical clue is present, in this article we present a comparative analysis between the performance of well-established methods for emotion detection (supervised and lexical knowledge-based) and a method we extend, which is based on commonsense knowledge stored in the EmotiNet knowledge base. Our extensive comparative evaluations show that, in the context of this task, the approach based on EmotiNet is the most appropriate.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,764
inproceedings
tolone-etal-2012-extending
Extending the adverbial coverage of a {F}rench morphological lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1564/
Tolone, Elsa and Voyatzi, Stavroula and Martineau, Claude and Constant, Matthieu
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
2856--2862
We present an extension of the adverbial entries of the French morphological lexicon DELA (Dictionnaires Electroniques du LADL / LADL electronic dictionaries). Adverbs were extracted from LGLex, a NLP-oriented syntactic resource for French, which in its turn contains all adverbs extracted from the Lexicon-Grammar tables of both simple adverbs ending in -ment (i.e., '-ly') and compound adverbs. This work exploits fine-grained linguistic information provided in existing resources. The resulting resource is reviewed in order to delete duplicates and is freely available under the LGPL-LR license.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,765
inproceedings
cristea-etal-2012-reconstructing
Reconstructing the Diachronic Morphology of {R}omanian from Dictionary Citations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1565/
Cristea, Dan and Simionescu, Radu and Haja, Gabriela
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
923--927
This work represents a first step in the direction of reconstructing a diachronic morphology for Romanian. The main resource used in this task is the digital version of the Romanian Language Dictionary (eDTLR). This resource offers various usage examples for its entries: citations extracted from popular Romanian texts, which often present diachronic and inflected forms of the word they are provided for. The concept of “word deformation” is introduced and classified into several categories. The research conducted aims at detecting one type of such deformations occurring in the citations {\textemdash} changes only in the stem of the current word, without migration to another paradigm. An algorithm is presented which automatically infers old stem forms. It uses a paradigmatic data model of current Romanian morphology. Having the inferred roots and the paradigms that they are part of, old inflected forms of the words can be deduced. Moreover, by considering the years in which the citations were published, the inferred old word forms can be placed in certain periods of time, creating a great resource for research on the evolution of the Romanian language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,766
inproceedings
pinnis-2012-latvian
{L}atvian and {L}ithuanian Named Entity Recognition with {T}ilde{NER}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2012
Istanbul, Turkey
European Language Resources Association (ELRA)
https://aclanthology.org/L12-1566/
Pinnis, M{\={a}}rcis
Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12)
1258--1265
In this paper the author presents TildeNER {\textemdash} an open-source, freely available named entity recognition toolkit and the first multi-class named entity recognition system for the Latvian and Lithuanian languages. The system is built upon a supervised conditional random field classifier and features heuristic and statistical refinement methods that improve supervised classification, thus boosting the overall system's performance. The toolkit provides means for named entity recognition model bootstrapping, named entity tagging of plaintext documents and of pre-processed (morpho-syntactically tagged) tab-separated documents, and evaluation on test data. The paper presents the design of the system, describes the most important data formats and briefly discusses extension possibilities to different languages. It also gives an evaluation on human-annotated gold standard test corpora for the Latvian and Lithuanian languages as well as a comparative performance analysis against a state-of-the-art English named entity recognition system using parallel and strongly comparable corpora. The author gives an analysis of the annotation process for the Latvian and Lithuanian named entity tagged corpora and of the created named entity annotated corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
73,767