Dataset schema (column: dtype, value summary):

entry_type: stringclasses (4 values)
citation_key: stringlengths (10 to 110)
title: stringlengths (6 to 276)
editor: stringclasses (723 values)
month: stringclasses (69 values)
year: stringdate (1963-01-01 to 2022-01-01)
address: stringclasses (202 values)
publisher: stringclasses (41 values)
url: stringlengths (34 to 62)
author: stringlengths (6 to 2.07k)
booktitle: stringclasses (861 values)
pages: stringlengths (1 to 12)
abstract: stringlengths (302 to 2.4k)
journal: stringclasses (5 values)
volume: stringclasses (24 values)
doi: stringlengths (20 to 39)
n: stringclasses (3 values)
wer: stringclasses (1 value)
uas: null
language: stringclasses (3 values)
isbn: stringclasses (34 values)
recall: null
number: stringclasses (8 values)
a: null
b: null
c: null
k: null
f1: stringclasses (4 values)
r: stringclasses (2 values)
mci: stringclasses (1 value)
p: stringclasses (2 values)
sd: stringclasses (1 value)
female: stringclasses (0 values)
m: stringclasses (0 values)
food: stringclasses (1 value)
f: stringclasses (1 value)
note: stringclasses (20 values)
__index_level_0__: int64 (22k to 106k)
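Rows under this schema are sparse: for ordinary bibliographic entries the metric-style columns (wer, uas, f1, and so on) are null, so a quick way to see what a row actually carries is to drop its null-valued fields. A minimal pure-Python sketch; the abbreviated example row is modeled on the records below:

```python
# A dataset row keyed by the schema's column names; metric columns
# (wer, uas, f1, ...) are null for ordinary bibliographic entries.
row = {
    "entry_type": "inproceedings",
    "citation_key": "novak-2014-new",
    "year": "2014",
    "wer": None,
    "uas": None,
    "f1": None,
}

def populated(row):
    """Return only the fields that actually hold a value."""
    return {k: v for k, v in row.items() if v is not None}

print(populated(row))
# {'entry_type': 'inproceedings', 'citation_key': 'novak-2014-new', 'year': '2014'}
```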
@inproceedings{novak-2014-new,
  title     = {A New Form of Humor {---} Mapping Constraint-Based Computational Morphologies to a Finite-State Representation},
  author    = {Nov{\'a}k, Attila},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1207/},
  pages     = {1068--1073},
  abstract  = {MorphoLogic's Humor morphological analyzer engine has been used for the development of several high-quality computational morphologies, among them ones for complex agglutinative languages. However, Humor's closed source licensing scheme has been an obstacle to making these resources widely available. Moreover, there are other limitations of the rule-based Humor engine: lack of support for morphological guessing and for the integration of frequency information or other weighting of the models. These problems were solved by converting the databases to a finite-state representation that allows for morphological guessing and the addition of weights. Moreover, it has open-source implementations.},
}
% __index_level_0__: 67,243
@inproceedings{karppa-etal-2014-slmotion,
  title     = {{SLM}otion - An extensible sign language oriented video analysis tool},
  author    = {Karppa, Matti and Viitaniemi, Ville and Luzardo, Marcos and Laaksonen, Jorma and Jantunen, Tommi},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1208/},
  pages     = {1886--1891},
  abstract  = {We present a software toolkit called SLMotion which provides a framework for automatic and semiautomatic analysis, feature extraction and annotation of individual sign language videos, and which can easily be adapted to batch processing of entire sign language corpora. The program follows a modular design, and exposes a Numpy-compatible Python application programming interface that makes it easy and convenient to extend its functionality through scripting. The program includes support for exporting the annotations in ELAN format. The program is released as free software, and is available for GNU/Linux and MacOS platforms.},
}
% __index_level_0__: 67,244
@inproceedings{chu-etal-2014-constructing,
  title     = {Constructing a {C}hinese{---}{J}apanese Parallel Corpus from {W}ikipedia},
  author    = {Chu, Chenhui and Nakazawa, Toshiaki and Kurohashi, Sadao},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1209/},
  pages     = {642--647},
  abstract  = {Parallel corpora are crucial for statistical machine translation (SMT). However, they are quite scarce for most language pairs, such as Chinese{\textemdash}Japanese. As comparable corpora are far more available, many studies have been conducted to automatically construct parallel corpora from comparable corpora. This paper presents a robust parallel sentence extraction system for constructing a Chinese{\textemdash}Japanese parallel corpus from Wikipedia. The system is inspired by previous studies that mainly consist of a parallel sentence candidate filter and a binary classifier for parallel sentence identification. We improve the system by using the common Chinese characters for filtering and two novel feature sets for classification. Experiments show that our system performs significantly better than the previous studies for both accuracy in parallel sentence extraction and SMT performance. Using the system, we construct a Chinese{\textemdash}Japanese parallel corpus with more than 126k highly accurate parallel sentences from Wikipedia. The constructed parallel corpus is freely available at \url{http://orchid.kuee.kyoto-u.ac.jp/chu/resource/wiki_zh_ja.tgz}.},
}
% __index_level_0__: 67,245
@inproceedings{carl-etal-2014-cft13,
  title     = {{CFT}13: A resource for research into the post-editing process},
  author    = {Carl, Michael and Garc{\'i}a, Mercedes Mart{\'i}nez and Mesa-Lao, Bartolom{\'e}},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1210/},
  pages     = {1757--1764},
  abstract  = {This paper describes the most recent dataset that has been added to the CRITT Translation Process Research Database (TPR-DB). Under the name CFT13, this new study contains user activity data (UAD) in the form of key-logging and eye-tracking collected during the second CasMaCat field trial in June 2013. The CFT13 is a publicly available resource featuring a number of simple and compound process and product units suited to investigate human-computer interaction while post-editing machine translation outputs.},
}
% __index_level_0__: 67,246
@inproceedings{zeevaert-2014-morkum,
  title     = {M{\"o}rkum Nj{\'a}lu. An annotated corpus to analyse and explain grammatical divergences between 14th-century manuscripts of Nj{\'a}l's saga.},
  author    = {Zeevaert, Ludger},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1211/},
  pages     = {981--987},
  abstract  = {The work of the research project “Variance of Nj{\'a}ls saga” at the {\'A}rni Magn{\'u}sson Institute for Icelandic Studies in Reykjav{\'i}k relies mainly on an annotated XML-corpus of manuscripts of Brennu-Nj{\'a}ls saga or ‘The Story of Burnt Nj{\'a}l’, an Icelandic prose narrative from the end of the 13th century. One part of the project is devoted to linguistic variation in the earliest transmission of the text in parchment manuscripts and fragments from the 14th century. The article gives a short overview of the design of the corpus, which has to serve quite different purposes, from palaeographic and stemmatological to literary research. It focuses on features important for the analysis of certain linguistic variables and on the challenge of implementing them in a corpus consisting of close transcriptions of medieval manuscripts, and it gives examples of the corpus's use for linguistic research within the project, which mainly consists of the analysis of different grammatical/syntactic constructions often referred to in connection with stylistic research (narrative inversion, historical present tense, indirect-speech constructions).},
}
% __index_level_0__: 67,247
@inproceedings{goldman-etal-2014-crowdsourcing,
  title     = {A Crowdsourcing Smartphone Application for {S}wiss {G}erman: Putting Language Documentation in the Hands of the Users},
  author    = {Goldman, Jean-Philippe and Leeman, Adrian and Kolly, Marie-Jos{\'e} and Hove, Ingrid and Almajai, Ibrahim and Dellwo, Volker and Moran, Steven},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1212/},
  pages     = {3444--3447},
  abstract  = {This contribution describes an ongoing project, a smartphone application called Voice {\"A}pp, which is a follow-up of a previous application called Dial{\"a}kt {\"A}pp. The main purpose of both apps is to identify the user's Swiss German dialect on the basis of the dialectal variations of 15 words. The result is returned as one or more geographical points on a map. In Dial{\"a}kt {\"A}pp, launched in 2013, the user provides his or her own pronunciation through buttons, while the Voice {\"A}pp, currently in development, asks users to pronounce the word and uses speech recognition techniques to identify the variants and localize the user. This second app is more challenging from a technical point of view but nevertheless recovers the nature of dialect variation of spoken language. Besides, the Voice {\"A}pp takes its users on a journey in which they explore the individuality of their own voices, answering questions such as: How high is my voice? How fast do I speak? Do I speak faster than users in the neighbouring city?},
}
% __index_level_0__: 67,248
@inproceedings{bethard-etal-2014-cleartk,
  title     = {{C}lear{TK} 2.0: Design Patterns for Machine Learning in {UIMA}},
  author    = {Bethard, Steven and Ogren, Philip and Becker, Lee},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1213/},
  pages     = {3289--3293},
  abstract  = {ClearTK adds machine learning functionality to the UIMA framework, providing wrappers to popular machine learning libraries, a rich feature extraction library that works across different classifiers, and utilities for applying and evaluating machine learning models. Since its inception in 2008, ClearTK has evolved in response to feedback from developers and the community. This evolution has followed a number of important design principles including: conceptually simple annotator interfaces, readable pipeline descriptions, minimal collection readers, type system agnostic code, modules organized for ease of import, and assisting user comprehension of the complex UIMA framework.},
}
% __index_level_0__: 67,249
@inproceedings{zribi-etal-2014-conventional,
  title     = {A Conventional Orthography for {T}unisian {A}rabic},
  author    = {Zribi, In{\`e}s and Boujelbane, Rahma and Masmoudi, Abir and Ellouze, Mariem and Belguith, Lamia and Habash, Nizar},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1214/},
  pages     = {2355--2361},
  abstract  = {Tunisian Arabic is a dialect of the Arabic language spoken in Tunisia. Tunisian Arabic is an under-resourced language. It has neither a standard orthography nor large collections of written text and dictionaries. Actually, there is no strict separation between Modern Standard Arabic, the official language of the government, media and education, and Tunisian Arabic; the two exist on a continuum dominated by mixed forms. In this paper, we present a conventional orthography for Tunisian Arabic, following a previous effort on developing a conventional orthography for Dialectal Arabic (or CODA) demonstrated for Egyptian Arabic. We explain the design principles of CODA and provide a detailed description of its guidelines as applied to Tunisian Arabic.},
}
% __index_level_0__: 67,250
@inproceedings{mayer-cysouw-2014-creating,
  title     = {Creating a massively parallel {B}ible corpus},
  author    = {Mayer, Thomas and Cysouw, Michael},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1215/},
  pages     = {3158--3163},
  abstract  = {We present our ongoing effort to create a massively parallel Bible corpus. While an ever-increasing number of Bible translations is available in electronic form on the internet, there is no large-scale parallel Bible corpus that allows language researchers to easily get access to the texts and their parallel structure for a large variety of different languages. We report on the current status of the corpus, with over 900 translations in more than 830 language varieties. All translations are tokenized (e.g., separating punctuation marks) and Unicode normalized. Mainly due to copyright restrictions only portions of the texts are made publicly available. However, we provide co-occurrence information for each translation in a (sparse) matrix format. All word forms in the translation are given together with their frequency and the verses in which they occur.},
}
% __index_level_0__: 67,251
@inproceedings{rapp-2014-corpus,
  title     = {Corpus-Based Computation of Reverse Associations},
  author    = {Rapp, Reinhard},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1216/},
  pages     = {1380--1386},
  abstract  = {According to psychological learning theory an important principle governing language acquisition is co-occurrence. For example, when we perceive language, our brain seems to unconsciously analyze and store the co-occurrence patterns of the words. And during language production, these co-occurrence patterns are reproduced. The applicability of this principle is particularly obvious in the case of word associations. There is evidence that the associative responses people typically come up with upon presentation of a stimulus word are often words which frequently co-occur with it. It is thus possible to predict a response by looking at co-occurrence data. The work presented here is along these lines. However, it differs from most previous work in that it investigates the direction from the response to the stimulus rather than vice-versa, and that it also deals with the case when several responses are known. Our results indicate that it is possible to predict a stimulus word from its responses, and that it helps if several responses are given.},
}
% __index_level_0__: 67,252
@inproceedings{marrafa-etal-2014-lextec,
  title     = {{L}ex{T}ec {---} a rich language resource for technical domains in {P}ortuguese},
  author    = {Marrafa, Palmira and Amaro, Raquel and Mendes, Sara},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1217/},
  pages     = {1044--1050},
  abstract  = {The growing amount of available information and the importance given to the access to technical information enhance the potential role of NLP applications in enabling users to deal with information for a variety of knowledge domains. In this process, language resources are crucial. This paper presents Lextec, a rich computational language resource for technical vocabulary in Portuguese. Encoding a representative set of terms for ten different technical domains, this concept-based relational language resource combines a wide range of linguistic information by integrating each entry in a domain-specific wordnet and associating it with a precise definition for each lexicalization in the technical domain at stake, illustrative texts and information for translation into English.},
}
% __index_level_0__: 67,253
@inproceedings{arias-etal-2014-boosting,
  title     = {Boosting the creation of a treebank},
  author    = {Arias, Blanca and Bel, N{\'u}ria and Lorente, Merc{\`e} and Marim{\'o}n, Montserrat and Mil{\`a}, Alba and Vivaldi, Jorge and Padr{\'o}, Muntsa and Fomicheva, Marina and Larrea, Imanol},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1218/},
  pages     = {775--781},
  abstract  = {In this paper we present the results of an ongoing experiment of bootstrapping a Treebank for Catalan by using a Dependency Parser trained with Spanish sentences. In order to save time and cost, our approach was to profit from the typological similarities between Catalan and Spanish to create a first Catalan data set quickly by automatically: (i) annotating with a de-lexicalized Spanish parser, (ii) manually correcting the parses, and (iii) using the Catalan corrected sentences to train a Catalan parser. The results showed that about 1,000 parsed sentences are required to train a Catalan parser; these were produced in 4 months with 2 annotators.},
}
% __index_level_0__: 67,254
@inproceedings{sanders-etal-2014-dutch,
  title     = {The {D}utch {LESLLA} Corpus},
  author    = {Sanders, Eric and van de Craats, Ineke and de Lint, Vanja},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1219/},
  pages     = {2715--2718},
  abstract  = {This paper describes the Dutch LESLLA data and its curation. LESLLA stands for Low-Educated Second Language and Literacy Acquisition. The data was collected for research in this field and would have disappeared had it not been saved. Within the CLARIN project Data Curation Service, the data was turned into a spoken language resource and made available to other researchers.},
}
% __index_level_0__: 67,255
@inproceedings{jelinek-2014-improvements,
  title     = {Improvements to Dependency Parsing Using Automatic Simplification of Data},
  author    = {Jel{\'i}nek, Tom{\'a}{\v{s}}},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1220/},
  pages     = {73--77},
  abstract  = {In dependency parsing, much effort is devoted to the development of new methods of language modeling and better feature settings. Less attention is paid to actual linguistic data and how appropriate they are for automatic parsing: linguistic data can be too complex for a given parser, morphological tags may not reflect well syntactic properties of words, a detailed, complex annotation scheme may be ill suited for automatic parsing. In this paper, I present a study of this problem on the following case: automatic dependency parsing using the data of the Prague Dependency Treebank with two dependency parsers: MSTParser and MaltParser. I show that by means of small, reversible simplifications of the text and of the annotation, a considerable improvement of parsing accuracy can be achieved. In order to facilitate the task of language modeling performed by the parser, I reduce variability of lemmas and forms in the text. I modify the system of morphological annotation to adapt it better for parsing. Finally, the dependency annotation scheme is also partially modified. All such modifications are automatic and fully reversible: after the parsing is done, the original data and structures are automatically restored. With MaltParser, I achieve an 8.3{\%} error rate reduction.},
}
% __index_level_0__: 67,256
@inproceedings{mihaila-ananiadou-2014-meta,
  title     = {The Meta-knowledge of Causality in Biomedical Scientific Discourse},
  author    = {Mih{\u{a}}il{\u{a}}, Claudiu and Ananiadou, Sophia},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1221/},
  pages     = {1984--1991},
  abstract  = {Causality lies at the heart of biomedical knowledge, being involved in diagnosis, pathology or systems biology. Thus, automatic causality recognition can greatly reduce the human workload by suggesting possible causal connections and aiding in the curation of pathway models. For this, we rely on corpora that are annotated with classified, structured representations of important facts and findings contained within text. However, it is impossible to correctly interpret these annotations without additional information, e.g., classification of an event as fact, hypothesis, experimental result or analysis of results, confidence of authors about the validity of their analyses etc. In this study, we analyse and automatically detect this type of information, collectively termed meta-knowledge (MK), in the context of existing discourse causality annotations. Our effort proves the feasibility of identifying such pieces of information, without which the understanding of causal relations is limited.},
}
% __index_level_0__: 67,257
@inproceedings{maier-etal-2014-discosuite,
  title     = {Discosuite - A parser test suite for {G}erman discontinuous structures},
  author    = {Maier, Wolfgang and Kaeshammer, Miriam and Baumann, Peter and K{\"u}bler, Sandra},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1222/},
  pages     = {2905--2912},
  abstract  = {Parser evaluation traditionally relies on evaluation metrics which deliver a single aggregate score over all sentences in the parser output, such as PARSEVAL. However, for the evaluation of parser performance concerning a particular phenomenon, a test suite of sentences is needed in which this phenomenon has been identified. In recent years, the parsing of discontinuous structures has received a rising interest. Therefore, in this paper, we present a test suite for testing the performance of dependency and constituency parsers on non-projective dependencies and discontinuous constituents for German. The test suite is based on the newly released TIGER treebank version 2.2. It provides a unique possibility of benchmarking parsers on non-local syntactic relationships in German, for constituents and dependencies. We include a linguistic analysis of the phenomena that cause discontinuity in the TIGER annotation, thereby closing gaps in previous literature. The linguistic phenomena we investigate include extraposition, a placeholder/repeated element construction, topicalization, scrambling, local movement, parentheticals, and fronting of pronouns.},
}
% __index_level_0__: 67,258
@inproceedings{barbieri-saggion-2014-modelling-irony,
  title     = {Modelling Irony in {T}witter: Feature Analysis and Evaluation},
  author    = {Barbieri, Francesco and Saggion, Horacio},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1223/},
  pages     = {4258--4264},
  abstract  = {Irony, a creative use of language, has received scarce attention from the computational linguistics research point of view. We propose an automatic system capable of detecting irony with good accuracy in the social network Twitter. Twitter allows users to post short messages (140 characters) which usually do not follow the expected rules of grammar: users tend to truncate words and use particular punctuation. For these reasons, automatic detection of irony in Twitter is not trivial and requires specific linguistic tools. We propose in this paper a new set of experiments to assess the relevance of the features included in our model. Our model does not include words or sequences of words as features, aiming instead to detect inner characteristics of irony.},
}
% __index_level_0__: 67,259
@inproceedings{titze-etal-2014-dbpedia,
  title     = {{DB}pedia Domains: augmenting {DB}pedia with domain information},
  author    = {Titze, Gregor and Bryl, Volha and Zirn, C{\"a}cilia and Ponzetto, Simone Paolo},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1224/},
  pages     = {1438--1442},
  abstract  = {We present an approach for augmenting DBpedia, a very large ontology lying at the heart of the Linked Open Data (LOD) cloud, with domain information. Our approach uses the thematic labels provided for DBpedia entities by Wikipedia categories, and groups them based on a kernel based k-means clustering algorithm. Experiments on gold-standard data show that our approach provides a first solution to the automatic annotation of DBpedia entities with domain labels, thus providing the largest LOD domain-annotated ontology to date.},
}
% __index_level_0__: 67,260
@inproceedings{chollet-etal-2014-mining,
  title     = {Mining a multimodal corpus for non-verbal behavior sequences conveying attitudes},
  author    = {Chollet, Mathieu and Ochs, Magalie and Pelachaud, Catherine},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1225/},
  pages     = {3417--3424},
  abstract  = {Interpersonal attitudes are expressed by non-verbal behaviors on a variety of different modalities. The perception of these behaviors is influenced by how they are sequenced with other behaviors from the same person and behaviors from other interactants. In this paper, we present a method for extracting and generating sequences of non-verbal signals expressing interpersonal attitudes. These sequences are used as part of a framework for non-verbal expression with Embodied Conversational Agents that considers different features of non-verbal behavior: global behavior tendencies, interpersonal reactions, sequencing of non-verbal signals, and communicative intentions. Our method uses a sequence mining technique on an annotated multimodal corpus to extract sequences characteristic of different attitudes. New sequences of non-verbal signals are generated using a probabilistic model, and evaluated using the previously mined sequences.},
}
% __index_level_0__: 67,261
@inproceedings{grouin-2014-biomedical,
  title     = {Biomedical entity extraction using machine-learning based approaches},
  author    = {Grouin, Cyril},
  editor    = {Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios},
  booktitle = {Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)},
  month     = may,
  year      = {2014},
  address   = {Reykjavik, Iceland},
  publisher = {European Language Resources Association (ELRA)},
  url       = {https://aclanthology.org/L14-1226/},
  pages     = {2518--2523},
  abstract  = {In this paper, we present the experiments we made to process entities from the biomedical domain. Depending on the task to process, we used two distinct supervised machine-learning techniques: Conditional Random Fields to perform both named entity identification and classification, and Maximum Entropy to classify given entities. Machine-learning approaches outperformed knowledge-based techniques on categories where sufficient annotated data was available. We showed that the use of external features (unsupervised clusters, information from ontology and taxonomy) improved the results significantly.},
}
% __index_level_0__: 67,262
inproceedings
stein-2014-parsing
Parsing Heterogeneous Corpora with a Rich Dependency Grammar
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1227/
Stein, Achim
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
2879--2886
Grammar models conceived for parsing purposes are often poorer than models that are motivated linguistically. We present a grammar model which is linguistically satisfactory and based on the principles of traditional dependency grammar. We show how a state-of-the-art dependency parser (mate tools) performs with this model, trained on the Syntactic Reference Corpus of Medieval French (SRCMF), a manually annotated corpus of medieval (Old French) texts. We focus on the problems caused by small and heterogeneous training sets typical for corpora of older periods. The result is the first publicly available dependency parser for Old French. On a 90/10 training/evaluation split of eleven OF texts (206000 words), we obtained a UAS of 89.68{\%} and a LAS of 82.62{\%}. Three experiments showed how heterogeneity, typical of medieval corpora, affects the parsing results: (a) a {\textquoteleft}one-on-one' cross evaluation for individual texts, (b) a {\textquoteleft}leave-one-out' cross evaluation, and (c) a prose/verse cross evaluation.
inproceedings
bick-2014-ml
{ML}-Optimization of Ported Constraint Grammars
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1228/
Bick, Eckhard
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
4483--4487
In this paper, we describe how a Constraint Grammar with linguist-written rules can be optimized and ported to another language using a Machine Learning technique. The effects of rule movements, sorting, grammar-sectioning and systematic rule modifications are discussed and quantitatively evaluated. Statistical information is used to provide a baseline and to enhance the core of manual rules. The best-performing parameter combinations achieved part-of-speech F-scores of over 92 for a grammar ported from English to Danish, a considerable advance over both the statistical baseline (85.7), and the raw ported grammar (86.1). When the same technique was applied to an existing native Danish CG, error reduction was 10{\%} (F=96.94).
inproceedings
shaikh-etal-2014-multi
A Multi-Cultural Repository of Automatically Discovered Linguistic and Conceptual Metaphors
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1229/
Shaikh, Samira and Strzalkowski, Tomek and Liu, Ting and Broadwell, George Aaron and Yamrom, Boris and Taylor, Sarah and Feldman, Laurie and Cho, Kit and Boz, Umit and Cases, Ignacio and Peshkova, Yuliya and Lin, Ching-Sheng
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
2495--2500
In this article, we present details about our ongoing work towards building a repository of Linguistic and Conceptual Metaphors. This resource is being developed as part of our research effort into the large-scale detection of metaphors from unrestricted text. We have stored a large amount of automatically extracted metaphors in American English, Mexican Spanish, Russian and Iranian Farsi in a relational database, along with pertinent metadata associated with these metaphors. A substantial subset of the contents of our repository has been systematically validated via rigorous social science experiments. Using information stored in the repository, we are able to posit certain claims in a cross-cultural context about how peoples in these cultures (America, Mexico, Russia and Iran) view particular concepts related to Governance and Economic Inequality through the use of metaphor. Researchers in the field can use this resource as a reference of typical metaphors used across these cultures. In addition, it can be used to recognize metaphors of the same form or pattern, in other domains of research.
inproceedings
salaberri-etal-2014-first
First approach toward Semantic Role Labeling for {B}asque
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1230/
Salaberri, Haritz and Arregi, Olatz and Zapirain, Be{\~n}at
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
1387--1393
In this paper, we present the first Semantic Role Labeling system developed for Basque. The system is implemented using machine learning techniques and trained with the Reference Corpus for the Processing of Basque (EPEC). In our experiments the classifier that offers the best results is based on Support Vector Machines. Our system achieves 84.30 F1 score in identifying the PropBank semantic role for a given constituent and 82.90 F1 score in identifying the VerbNet role. Our study establishes a baseline for Basque SRL. Although there are no directly comparable systems for English we can state that the results we have achieved are quite good. In addition, we have performed a Leave-One-Out feature selection procedure in order to establish which features are the worthiest regarding argument classification. This will help smooth the way for future stages of Basque SRL and will help draw some of the guidelines of our research.
inproceedings
sepulveda-torres-etal-2014-generating
Generating a Lexicon of Errors in {P}ortuguese to Support an Error Identification System for {S}panish Native Learners
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1231/
Sep{\'u}lveda Torres, Lianet and Duran, Magali Sanches and Alu{\'i}sio, Sandra
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
3952--3957
Portuguese is a less resourced language in what concerns foreign language learning. Aiming to inform a module of a system designed to support scientific written production of Spanish native speakers learning Portuguese, we developed an approach to automatically generate a lexicon of wrong words, reproducing language transfer errors made by such foreign learners. Each item of the artificially generated lexicon contains, besides the wrong word, the respective Spanish and Portuguese correct words. The wrong word is used to identify the interlanguage error and the correct Spanish and Portuguese forms are used to generate the suggestions. Keeping control of the correct word forms, we can provide correction or, at least, useful suggestions for the learners. We propose to combine two automatic procedures to obtain the error correction: i) a similarity measure and ii) a translation algorithm based on aligned parallel corpus. The similarity-based method achieved a precision of 52{\%}, whereas the alignment-based method achieved a precision of 90{\%}. In this paper we focus only on interlanguage errors involving suffixes that have different forms in both languages. The approach, however, is very promising to tackle other types of errors, such as gender errors.
inproceedings
zhang-etal-2014-xlid
x{L}i{D}-Lexica: Cross-lingual Linked Data Lexica
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1232/
Zhang, Lei and F{\"arber, Michael and Rettinger, Achim
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
2101--2105
In this paper, we introduce our cross-lingual linked data lexica, called xLiD-Lexica, which are constructed by exploiting the multilingual Wikipedia and linked data resources from Linked Open Data (LOD). We provide the cross-lingual groundings of linked data resources from LOD as RDF data, which can be easily integrated into the LOD data sources. In addition, we build a SPARQL endpoint over our xLiD-Lexica to allow users to easily access them using SPARQL query language. Multilingual and cross-lingual information access can be facilitated by the availability of such lexica, e.g., allowing for an easy mapping of natural language expressions in different languages to linked data resources from LOD. Many tasks in natural language processing, such as natural language generation, cross-lingual entity linking, text annotation and question answering, can benefit from our xLiD-Lexica.
inproceedings
luo-etal-2014-study
A Study on Expert Sourcing Enterprise Question Collection and Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1233/
Luo, Yuan and Boucher, Thomas and Oral, Tolga and Osofsky, David and Weber, Sara
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
181--188
Large enterprises, such as IBM, accumulate petabytes of free-text data within their organizations. To mine this big data, a critical ability is to enable meaningful question answering beyond keywords search. In this paper, we present a study on the characteristics and classification of IBM sales questions. The characteristics are analyzed both semantically and syntactically, from where a question classification guideline evolves. We adopted an enterprise level expert sourcing approach to gather questions, annotate questions based on the guideline and manage the quality of annotations via enhanced inter-annotator agreement analysis. We developed a question feature extraction system and experimented with rule-based, statistical and hybrid question classifiers. We share our annotated corpus of questions and report our experimental results. Statistical classifiers separately based on n-grams and hand-crafted rule features give reasonable macro-f1 scores at 61.7{\%} and 63.1{\%} respectively. Rule based classifier gives a macro-f1 at 77.1{\%}. The hybrid classifier with n-gram and rule features using a second guess model further improves the macro-f1 to 83.9{\%}.
inproceedings
li-etal-2014-annotating
Annotating Relation Mentions in Tabloid Press
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1234/
Li, Hong and Krause, Sebastian and Xu, Feiyu and Uszkoreit, Hans and Hummel, Robert and Mironova, Veselina
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
3253--3257
This paper presents a new resource for the training and evaluation needed by relation extraction experiments. The corpus consists of annotations of mentions for three semantic relations: marriage, parent{\textemdash}child, siblings, selected from the domain of biographic facts about persons and their social relationships. The corpus contains more than one hundred news articles from Tabloid Press. In the current corpus, we only consider the relation mentions occurring in the individual sentences. We provide multi-level annotations which specify the marked facts from relation, argument, entity, down to the token level, thus allowing for detailed analysis of linguistic phenomena and their interactions. A generic markup tool Recon developed at the DFKI LT lab has been utilised for the annotation task. The corpus has been annotated by two human experts, supported by additional conflict resolution conducted by a third expert. As shown in the evaluation, the annotation is of high quality as proved by the stated inter-annotator agreements both on sentence level and on relation-mention level. The current corpus is already in active use in our research for evaluation of the relation extraction performance of our automatically learned extraction patterns.
inproceedings
gavankar-etal-2014-efficient
Efficient Reuse of Structured and Unstructured Resources for Ontology Population
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1235/
Gavankar, Chetana and Kulkarni, Ashish and Ramakrishnan, Ganesh
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
3654--3660
We study the problem of ontology population for a domain ontology and present solutions based on semi-automatic techniques. A domain ontology for an organization, often consists of classes whose instances are either specific to, or independent of the organization. E.g. in an academic domain ontology, classes like Professor, Department could be organization (university) specific, while Conference, Programming languages are organization independent. This distinction allows us to leverage data sources both{\textemdash}within the organization and those in the Internet {\textemdash} to extract entities and populate an ontology. We propose techniques that build on those for open domain IE. Together with user input, we show through comprehensive evaluation, how these semi-automatic techniques achieve high precision. We experimented with the academic domain and built an ontology comprising of over 220 classes. Intranet documents from five universities formed our organization specific corpora and we used open domain knowledge bases like Wikipedia, Linked Open Data, and web pages from the Internet as the organization independent data sources. The populated ontology that we built for one of the universities comprised of over 75,000 instances. We adhere to the semantic web standards and tools and make the resources available in the OWL format. These could be useful for applications such as information extraction, text annotation, and information retrieval.
inproceedings
koprivova-etal-2014-mapping
Mapping Diatopic and Diachronic Variation in Spoken {C}zech: The {ORTOFON} and {DIALEKT} Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1236/
Kop{\v{r}}ivov{\'a}, Marie and Gol{\'a}{\v{n}}ov{\'a}, Hana and Klime{\v{s}}ov{\'a}, Petra and Luke{\v{s}}, David
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
376--382
ORTOFON and DIALEKT are two corpora of spoken Czech (recordings + transcripts) which are currently being built at the Institute of the Czech National Corpus. The first one (ORTOFON) continues the tradition of the CNC's ORAL series of spoken corpora by focusing on collecting recordings of unscripted informal spoken interactions ({\textquotedblleft}prototypically spoken texts{\textquotedblright}), but also provides new features, most notably an annotation scheme with multiple tiers per speaker, including orthographic and phonetic transcripts and allowing for a more precise treatment of overlapping speech. Rich speaker- and situation-related metadata are also collected for possible use as factors in sociolinguistic analyses. One of the stated goals is to make the data in the corpus balanced with respect to a subset of these. The second project, DIALEKT, consists in annotating (in a way partially compatible with the ORTOFON corpus) and providing electronic access to historical (1960s{--}80s) dialect recordings, mainly of a monological nature, from all over the Czech Republic. The goal is to integrate both corpora into one map-based browsing interface, allowing an intuitive and informative spatial visualization of query results or dialect feature maps, confrontation with isoglosses previously established through the effort of dialectologists etc.
inproceedings
schone-etal-2014-corpus
Corpus and Evaluation of Handwriting Recognition of Historical Genealogical Records
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1237/
Schone, Patrick and Nielson, Heath and Ward, Mark
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
153--159
Over the last few decades, significant strides have been made in handwriting recognition (HR), which is the automatic transcription of handwritten documents. HR often focuses on modern handwritten material, but in the electronic age, the volume of handwritten material is rapidly declining. However, we believe HR is on the verge of having major application to historical record collections. In recent years, archives and genealogical organizations have conducted huge campaigns to transcribe valuable historical record content with such transcription being largely done through human-intensive labor. HR has the potential of revolutionizing these transcription endeavors. To test the hypothesis that this technology is close to applicability, and to provide a testbed for reducing any accuracy gaps, we have developed an evaluation paradigm for historical record handwriting recognition. We created a huge test corpus consisting of four historical data collections of four differing genres and three languages. In this paper, we provide the details of these extensive resources which we intend to release to the research community for further study. Since several research organizations have already participated in this evaluation, we also show initial results and comparisons to human levels of performance.
inproceedings
pilan-volodina-2014-reusing
Reusing {S}wedish {F}rame{N}et for training semantic roles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1238/
Pil{\'a}n, Ildik{\'o} and Volodina, Elena
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
1359--1363
In this article we present the first experiences of reusing the Swedish FrameNet (SweFN) as a resource for training semantic roles. We give an account of the procedure we used to adapt SweFN to the needs of students of Linguistics in the form of an automatically generated exercise. During this adaptation, the mapping of the fine-grained distinction of roles from SweFN into learner-friendlier coarse-grained roles presented a major challenge. Besides discussing the details of this mapping, we describe the resulting multiple-choice exercise and its graphical user interface. The exercise was made available through L{\"a}rka, an online platform for students of Linguistics and learners of Swedish as a second language. We outline also aspects underlying the selection of the incorrect answer options which include semantic as well as frequency-based criteria. Finally, we present our own observations and initial user feedback about the applicability of such a resource in the pedagogical domain. Students' answers indicated an overall positive experience, the majority found the exercise useful for learning semantic roles.
inproceedings
berkling-etal-2014-database
A Database of Freely Written Texts of {G}erman School Students for the Purpose of Automatic Spelling Error Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1239/
Berkling, Kay and Fay, Johanna and Ghayoomi, Masood and Hein, Katrin and Lavalley, R{\'e}mi and Linhuber, Ludwig and St{\"u}ker, Sebastian
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
1212--1217
The spelling competence of school students is best measured on freely written texts, instead of pre-determined, dictated texts. Since the analysis of the error categories in these kinds of texts is very labor intensive and costly, we are working on an automatic system to perform this task. The modules of the system are derived from techniques from the area of natural language processing, and are learning systems that need large amounts of training data. To obtain the data necessary for training and evaluating the resulting system, we conducted data collection of freely written, German texts by school children. 1,730 students from grade 1 through 8 participated in this data collection. The data was transcribed electronically and annotated with their corrected version. This resulted in a total of 14,563 sentences that can now be used for research regarding spelling diagnostics. Additional meta-data was collected regarding writers' language biography, teaching methodology, age, gender, and school year. In order to do a detailed manual annotation of the categories of the spelling errors committed by the students we developed a tool specifically tailored to the task.
inproceedings
haenig-etal-2014-pace
{PACE} Corpus: a multilingual corpus of Polarity-annotated textual data from the domains Automotive and {CE}llphone
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1240/
Haenig, Christian and Niekler, Andreas and Wuensch, Carsten
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
2219--2224
In this paper, we describe a publicly available multilingual evaluation corpus for phrase-level Sentiment Analysis that can be used to evaluate real world applications in an industrial context. This corpus contains data from English and German Internet forums (1000 posts each) focusing on the automotive domain. The major topic of the corpus is connecting and using cellphones to/in cars. The presented corpus contains different types of annotations: objects (e.g. my car, my new cellphone), features (e.g. address book, sound quality) and phrase-level polarities (e.g. the best possible automobile, big problem). Each of the posts has been annotated by at least four different annotators {\textemdash} these annotations are retained in their original form. The reliability of the annotations is evaluated by inter-annotator agreement scores. Besides the corpus data and format, we provide comprehensive corpus statistics. This corpus is one of the first lexical resources focusing on real world applications that analyze the voice of the customer which is crucial for various industrial use cases.
inproceedings
vincze-etal-2014-szeged
{S}zeged Corpus 2.5: Morphological Modifications in a Manually {POS}-tagged {H}ungarian Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1241/
Vincze, Veronika and Varga, Viktor and Simk{\'o}, Katalin Ilona and Zsibrita, J{\'a}nos and Nagy, {\'A}goston and Farkas, Rich{\'a}rd and Csirik, J{\'a}nos
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
1074--1078
The Szeged Corpus is the largest manually annotated database containing the possible morphological analyses and lemmas for each word form. In this work, we present its latest version, Szeged Corpus 2.5, in which the new harmonized morphological coding system of Hungarian has been employed and, on the other hand, the majority of misspelled words have been corrected and tagged with the proper morphological code. New morphological codes are introduced for participles, causative / modal / frequentative verbs, adverbial pronouns and punctuation marks, moreover, the distinction between common and proper nouns is eliminated. We also report some statistical data on the frequency of the new morphological codes. The new version of the corpus made it possible to train magyarlanc, a data-driven POS-tagger of Hungarian on a dataset with the new harmonized codes. According to the results, magyarlanc is able to achieve a state-of-the-art accuracy score on the 2.5 version as well.
inproceedings
menard-barriere-2014-linked
Linked Open Data and Web Corpus Data for noun compound bracketing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1242/
M{\'e}nard, Pierre Andr{\'e} and Barri{\`e}re, Caroline
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
702--709
This research provides a comparison of a linked open data resource (DBpedia) and web corpus data resources (Google Web Ngrams and Google Books Ngrams) for noun compound bracketing. Large corpus statistical analysis has often been used for noun compound bracketing, and our goal is to introduce a linked open data (LOD) resource for such task. We show its particularities and its performance on the task. Results obtained on resources tested individually are promising, showing a potential for DBpedia to be included in future hybrid systems.
inproceedings
freitas-etal-2014-multimodal
Multimodal Corpora for Silent Speech Interaction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1243/
Freitas, Jo{\~a}o and Teixeira, Ant{\'o}nio and Dias, Miguel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
4507--4511
A Silent Speech Interface (SSI) allows for speech communication to take place in the absence of an acoustic signal. This type of interface is an alternative to conventional Automatic Speech Recognition which is not adequate for users with some speech impairments or in the presence of environmental noise. The work presented here produces the conditions to explore and analyze complex combinations of input modalities applicable in SSI research. By exploring non-invasive and promising modalities, we have selected the following sensing technologies used in human-computer interaction: Video and Depth input, Ultrasonic Doppler sensing and Surface Electromyography. This paper describes a novel data collection methodology where these independent streams of information are synchronously acquired with the aim of supporting research and development of a multimodal SSI. The reported recordings were divided into two rounds: a first one where the acquired data was silently uttered and a second round where speakers pronounced the scripted prompts in an audible and normal tone. In the first round of recordings, a total of 53.94 minutes were captured where 30.25{\%} was estimated to be silent speech. In the second round of recordings, a total of 30.45 minutes were obtained and 30.05{\%} of the recordings were audible speech.
inproceedings
izumi-etal-2014-constructing
Constructing a Corpus of {J}apanese Predicate Phrases for Synonym/Antonym Relations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1244/
Izumi, Tomoko and Shibata, Tomohide and Asano, Hisako and Matsuo, Yoshihiro and Kurohashi, Sadao
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}'14)
1394--1400
We construct a large corpus of Japanese predicate phrases for synonym-antonym relations. The corpus consists of 7,278 pairs of predicates such as “receive-permission (ACC)” vs. “obtain-permission (ACC)”, in which each predicate pair is accompanied by a noun phrase and case information. The relations are categorized as synonyms, entailment, antonyms, or unrelated. Antonyms are further categorized into three different classes depending on their aspect of oppositeness. Using the data as a training corpus, we conduct the supervised binary classification of synonymous predicates based on linguistically-motivated features. Combining features that are characteristic of synonymous predicates with those that are characteristic of antonymous predicates, we succeed in automatically identifying synonymous predicates at the high F-score of 0.92, a 0.4 improvement over the baseline method of using the Japanese WordNet. The results of an experiment confirm that the quality of the corpus is high enough to achieve automatic classification. To the best of our knowledge, this is the first and the largest publicly available corpus of Japanese predicate phrases for synonym-antonym relations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,280
inproceedings
rennes-jonsson-2014-impact
The Impact of Cohesion Errors in Extraction Based Summaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1245/
Rennes, Evelina and J{\"o}nsson, Arne
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1575--1582
We present results from an eye tracking study of automatic text summarization. Automatic text summarization is a growing field due to the modern world`s Internet based society, but to automatically create perfect summaries is challenging. One problem is that extraction based summaries often have cohesion errors. By the usage of an eye tracking camera, we have studied the nature of four different types of cohesion errors occurring in extraction based summaries. A total of 23 participants read and rated four different texts and marked the most difficult areas of each text. Statistical analysis of the data revealed that absent cohesion or context and broken anaphoric reference (pronouns) caused some disturbance in reading, but that the impact is restricted to the effort to read rather than the comprehension of the text. However, erroneous anaphoric references (pronouns) were not always detected by the participants which poses a problem for automatic text summarizers. The study also revealed other potential disturbing factors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,281
inproceedings
zhou-etal-2014-cuhk
The {CUHK} Discourse {T}ree{B}ank for {C}hinese: Annotating Explicit Discourse Connectives for the {C}hinese {T}ree{B}ank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1246/
Zhou, Lanjun and Li, Binyang and Wei, Zhongyu and Wong, Kam-Fai
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
942--949
The lack of open discourse corpus for Chinese brings limitations for many natural language processing tasks. In this work, we present the first open discourse treebank for Chinese, namely, the Discourse Treebank for Chinese (DTBC). At the current stage, we annotated explicit intra-sentence discourse connectives, their corresponding arguments and senses for all 890 documents of the Chinese Treebank 5. We started by analysing the characteristics of discourse annotation for Chinese, adapted the annotation scheme of Penn Discourse Treebank 2 (PDTB2) to Chinese language while maintaining the compatibility as far as possible. We made adjustments to 3 essential aspects according to the previous study of Chinese linguistics. They are sense hierarchy, argument scope and semantics of arguments. Agreement study showed that our annotation scheme could achieve highly reliable results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,282
inproceedings
sadamitsu-etal-2014-extraction
Extraction of Daily Changing Words for Question Answering
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1247/
Sadamitsu, Kugatsu and Higashinaka, Ryuichiro and Matsuo, Yoshihiro
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2608--2612
This paper proposes a method for extracting Daily Changing Words (DCWs), words that indicate which questions are real-time dependent. Our approach is based on two types of template matching using time and named entity slots from large size corpora and adding simple filtering methods from news corpora. Extracted DCWs are utilized for detecting and sorting real-time dependent questions. Experiments confirm that our DCW method achieves higher accuracy in detecting real-time dependent questions than existing word classes and a simple supervised machine learning approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,283
inproceedings
dandapat-groves-2014-mtwatch
{MTW}atch: A Tool for the Analysis of Noisy Parallel Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1248/
Dandapat, Sandipan and Groves, Declan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
41--45
State-of-the-art statistical machine translation (SMT) techniques require good quality parallel data to build a translation model. The availability of large parallel corpora has rapidly increased over the past decade. However, these newly developed parallel data often contain significant noise. In this paper, we describe our approach for classifying good quality parallel sentence pairs from noisy parallel data. We use 10 different features within a Support Vector Machine (SVM)-based model for our classification task. We report a reasonably good classification accuracy and its positive effect on overall MT accuracy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,284
inproceedings
riedl-etal-2014-distributed
Distributed Distributional Similarities of {G}oogle {B}ooks Over the Centuries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1249/
Riedl, Martin and Steuer, Richard and Biemann, Chris
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1401--1405
This paper introduces a distributional thesaurus and sense clusters computed on the complete Google Syntactic N-grams, which is extracted from Google Books, a very large corpus of digitized books published between 1520 and 2008. We show that a thesaurus computed on such a large text basis leads to much better results than using smaller corpora like Wikipedia. We also provide distributional thesauri for equal-sized time slices of the corpus. While distributional thesauri can be used as lexical resources in NLP tasks, comparing word similarities over time can unveil sense change of terms across different decades or centuries, and can serve as a resource for diachronic lexicography. Thesauri and clusters are available for download.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,285
inproceedings
urooj-etal-2014-cle
The {CLE} {U}rdu {POS} Tagset
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1250/
Urooj, Saba and Hussain, Sarmad and Mustafa, Asad and Parveen, Rahila and Adeeba, Farah and Ahmed Khan, Tafseer and Butt, Miriam and Hautli, Annette
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2920--2925
The paper presents a design schema and details of a new Urdu POS tagset. This tagset is designed due to challenges encountered in working with existing tagsets for Urdu. It uses tags that judiciously incorporate information about special morpho-syntactic categories found in Urdu. With respect to the overall naming schema and the basic divisions, the tagset draws on the Penn Treebank and a Common Tagset for Indian Languages. The resulting CLE Urdu POS Tagset consists of 12 major categories with subdivisions, resulting in 32 tags. The tagset has been used to tag 100k words of the CLE Urdu Digest Corpus, giving a tagging accuracy of 96.8{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,286
inproceedings
benikova-etal-2014-nosta
{N}o{S}ta-{D} Named Entity Annotation for {G}erman: Guidelines and Dataset
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1251/
Benikova, Darina and Biemann, Chris and Reznicek, Marc
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2524--2531
We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,287
inproceedings
liu-etal-2014-phone
Phone Boundary Annotation in Conversational Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1252/
Liu, Yi-Fen and Tseng, Shu-Chuan and Jang, J.-S. Roger
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
848--853
Phone-aligned spoken corpora are indispensable language resources for quantitative linguistic analyses and automatic speech systems. However, producing this type of data resources is not an easy task due to high costs of time and man power as well as difficulties of applying valid annotation criteria and achieving reliable inter-labeler’s consistency. Among different types of spoken corpora, conversational speech that is often filled with extreme reduction and varying pronunciation variants is particularly challenging. By adopting a combined verification procedure, we obtained reasonably good annotation results. Preliminary phone boundaries that were automatically generated by a phone aligner were provided to human labelers for verifying. Instead of making use of the visualization of acoustic cues, the labelers should solely rely on their perceptual judgments to locate a position that best separates two adjacent phones. Impressionistic judgments in cases of reduction and segment deletion were helpful and necessary, as they balanced subtle nuance caused by differences in perception.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,288
inproceedings
bono-etal-2014-colloquial
A Colloquial Corpus of {J}apanese {S}ign {L}anguage: Linguistic Resources for Observing Sign Language Conversations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1253/
Bono, Mayumi and Kikuchi, Kouhei and Cibulka, Paul and Osugi, Yutaka
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1898--1904
We began building a corpus of Japanese Sign Language (JSL) in April 2011. The purpose of this project was to increase awareness of sign language as a distinctive language in Japan. This corpus is beneficial not only to linguistic research but also to hearing-impaired and deaf individuals, as it helps them to recognize and respect their linguistic differences and communication styles. This is the first large-scale JSL corpus developed for both academic and public use. We collected data in three ways: interviews (for introductory purposes only), dialogues, and lexical elicitation. In this paper, we focus particularly on data collected during a dialogue to discuss the application of conversation analysis (CA) to signed dialogues and signed conversations. Our annotation scheme was designed not only to elucidate theoretical issues related to grammar and linguistics but also to clarify pragmatic and interactional phenomena related to the use of JSL.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,289
inproceedings
przepiorkowski-etal-2014-walenty
{W}alenty: Towards a comprehensive valence dictionary of {P}olish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1254/
Przepi{\'o}rkowski, Adam and Hajnicz, El{\.z}bieta and Patejuk, Agnieszka and Woli{\'n}ski, Marcin and Skwarski, Filip and {\'S}widzi{\'n}ski, Marek
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2785--2792
This paper presents Walenty, a comprehensive valence dictionary of Polish, with a number of novel features, as compared to other such dictionaries. The notion of argument is based on the coordination test and takes into consideration the possibility of diverse morphosyntactic realisations. Some aspects of the internal structure of phraseological (idiomatic) arguments are handled explicitly. While the current version of the dictionary concentrates on syntax, it already contains some semantic features, including semantically defined arguments, such as locative, temporal or manner, as well as control and raising, and work on extending it with semantic roles and selectional preferences is in progress. Although Walenty is still being intensively developed, it is already by far the largest Polish valence dictionary, with around 8600 verbal lemmata and almost 39 000 valence schemata. The dictionary is publicly available on the Creative Commons BY SA licence and may be downloaded from \url{http://zil.ipipan.waw.pl/Walenty}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,290
inproceedings
a-r-2014-crowd
Can the Crowd be Controlled?: A Case Study on Crowd Sourcing and Automatic Validation of Completed Tasks based on User Modeling
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1255/
A.R, Balamurali
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
189--195
Annotation is an essential step in the development cycle of many Natural Language Processing (NLP) systems. Lately, crowd-sourcing has been employed to facilitate large scale annotation at a reduced cost. Unfortunately, verifying the quality of the submitted annotations is a daunting task. Existing approaches address this problem either through sampling or redundancy. However, these approaches do have a cost associated with them. Based on the observation that a crowd-sourcing worker returns to do a task that he has done previously, a novel framework for automatic validation of crowd-sourced tasks is proposed in this paper. A case study based on sentiment analysis is presented to elucidate the framework and its feasibility. The result suggests that validation of the crowd-sourced task can be automated to a certain extent.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,291
inproceedings
bogel-etal-2014-computational
Computational Narratology: Extracting Tense Clusters from Narrative Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1256/
B{\"o}gel, Thomas and Str{\"o}tgen, Jannik and Gertz, Michael
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
950--955
Computational Narratology is an emerging field within the Digital Humanities. In this paper, we tackle the problem of extracting temporal information as a basis for event extraction and ordering, as well as further investigations of complex phenomena in narrative texts. While most existing systems focus on news texts and extract explicit temporal information exclusively, we show that this approach is not feasible for narratives. Based on tense information of verbs, we define temporal clusters as an annotation task and validate the annotation schema by showing that the task can be performed with high inter-annotator agreement. To alleviate and reduce the manual annotation effort, we propose a rule-based approach to robustly extract temporal clusters using a multi-layered and dynamic NLP pipeline that combines off-the-shelf components in a heuristic setting. Comparing our results against human judgements, our system is capable of predicting the tense of verbs and sentences with very high reliability: for the most prevalent tense in our corpus, more than 95{\%} of all verbs are annotated correctly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,292
inproceedings
pinnis-etal-2014-designing
Designing the {L}atvian Speech Recognition Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1257/
Pinnis, M{\={a}}rcis and Auzi{\c{n}}a, Ilze and Goba, K{\={a}}rlis
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1547--1553
In this paper the authors present the first Latvian speech corpus designed specifically for speech recognition purposes. The paper outlines the decisions made in the corpus designing process through analysis of related work on speech corpora creation for different languages. The authors also provide guidelines that were used for the creation of the Latvian speech recognition corpus. The corpus creation guidelines are fairly general so that they can be re-used by other researchers when working on speech recognition corpora for different languages. The corpus consists of two parts {\textemdash} an orthographically annotated corpus containing 100 hours of orthographically transcribed audio data and a phonetically annotated corpus containing 4 hours of phonetically transcribed audio data. Metadata files in XML format provide additional details about the speakers, noise levels, speech styles, etc. The speech recognition corpus is phonetically balanced and phonetically rich, and the paper also describes the methodology by which the phonetic balancedness has been assessed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,293
inproceedings
vondricka-2014-aligning
Aligning parallel texts with {I}nter{T}ext
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1258/
Vond{\v{r}}i{\v{c}}ka, Pavel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1875--1879
InterText is a flexible manager and editor for alignment of parallel texts aimed both at individual and collaborative creation of parallel corpora of any size or translational memories. It is available in two versions: as a multi-user server application with a web-based interface and as a native desktop application for personal use. Both versions are able to cooperate with each other. InterText can process plain text or custom XML documents, deploy existing automatic aligners and provide a comfortable interface for manual post-alignment correction of both the alignment and the text contents and segmentation of the documents. One language version may be aligned with several other versions (using stand-off alignment) and the application ensures consistency between them. The server version supports different user levels and privileges and it can also track changes made to the texts for easier supervision. It also allows for batch import, alignment and export and can be connected to other tools and scripts for better integration in a more complex project workflow.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,294
inproceedings
chaimongkol-etal-2014-corpus
Corpus for Coreference Resolution on Scientific Papers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1259/
Chaimongkol, Panot and Aizawa, Akiko and Tateisi, Yuka
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3187--3190
The ever-growing number of published scientific papers prompts the need for automatic knowledge extraction to help scientists keep up with the state-of-the-art in their respective fields. To construct a good knowledge extraction system, annotated corpora in the scientific domain are required to train machine learning models. As described in this paper, we have constructed an annotated corpus for coreference resolution in multiple scientific domains, based on an existing corpus. We have modified the annotation scheme from Message Understanding Conference to better suit scientific texts. Then we applied that to the corpus. The annotated corpus is then compared with corpora in general domains in terms of distribution of resolution classes and performance of the Stanford Dcoref coreference resolver. Through these comparisons, we have demonstrated quantitatively that our manually annotated corpus differs from a general-domain corpus, which suggests deep differences between general-domain texts and scientific texts and which shows that different approaches can be made to tackle coreference resolution for general texts and scientific texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,295
inproceedings
falk-etal-2014-non
From Non Word to New Word: Automatically Identifying Neologisms in {F}rench Newspapers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1260/
Falk, Ingrid and Bernhard, Delphine and G{\'e}rard, Christophe
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4337--4344
In this paper we present a statistical machine learning approach to formal neologism detection going some way beyond the use of exclusion lists. We explore the impact of three groups of features: form related, morpho-lexical and thematic features. The latter type of features has not yet been used in this kind of application and represents a way to access the semantic context of new words. The results suggest that form related features are helpful at the overall classification task, while morpho-lexical and thematic features better single out true neologisms.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,296
inproceedings
underwood-etal-2014-evaluating
Evaluating the effects of interactivity in a post-editing workbench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1261/
Underwood, Nancy and Mesa-Lao, Bartolom{\'e} and Mart{\'i}nez, Mercedes Garc{\'i}a and Carl, Michael and Alabau, Vicent and Gonz{\'a}lez-Rubio, Jes{\'u}s and Leiva, Luis A. and Sanchis-Trilles, Germ{\'a}n and Ort{\'i}z-Mart{\'i}nez, Daniel and Casacuberta, Francisco
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
553--559
This paper describes the field trial and subsequent evaluation of a post-editing workbench which is currently under development in the EU-funded CasMaCat project. Based on user evaluations of the initial prototype of the workbench, this second prototype of the workbench includes a number of interactive features designed to improve productivity and user satisfaction. Using CasMaCat`s own facilities for logging keystrokes and eye tracking, data were collected from nine post-editors in a professional setting. These data were then used to investigate the effects of the interactive features on productivity, quality, user satisfaction and cognitive load as reflected in the post-editors’ gaze activity. These quantitative results are combined with the qualitative results derived from user questionnaires and interviews conducted with all the participants.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,297
inproceedings
schmidt-2014-research
The Research and Teaching Corpus of Spoken {G}erman {---} {FOLK}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1263/
Schmidt, Thomas
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
383--387
FOLK is the {\textquotedblleft}Forschungs- und Lehrkorpus Gesprochenes Deutsch{\textquotedblright} (eng.: research and teaching corpus of spoken German). The project has set itself the aim of building a corpus of German conversations which a) covers a broad range of interaction types in private, institutional and public settings, b) is sufficiently large and diverse and of sufficient quality to support different qualitative and quantitative research approaches, c) is transcribed, annotated and made accessible according to current technological standards, and d) is available to the scientific community on a sound legal basis and without unnecessary restrictions of usage. This paper gives an overview of the corpus design, the strategies for acquisition of a diverse range of interaction data, and the corpus construction workflow from recording via transcription and annotation to dissemination.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,299
inproceedings
degaetano-ortlieb-etal-2014-data
Data Mining with Shallow vs. Linguistic Features to Study Diversification of Scientific Registers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1264/
Degaetano-Ortlieb, Stefania and Fankhauser, Peter and Kermes, Hannah and Lapshinova-Koltunski, Ekaterina and Ordan, Noam and Teich, Elke
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1327--1334
We present a methodology to analyze the linguistic evolution of scientific registers with data mining techniques, comparing the insights gained from shallow vs. linguistic features. The focus is on selected scientific disciplines at the boundaries to computer science (computational linguistics, bioinformatics, digital construction, microelectronics). The data basis is the English Scientific Text Corpus (SCITEX) which covers a time range of roughly thirty years (1970/80s to early 2000s) (Degaetano-Ortlieb et al., 2013; Teich and Fankhauser, 2010). In particular, we investigate the diversification of scientific registers over time. Our theoretical basis is Systemic Functional Linguistics (SFL) and its specific incarnation of register theory (Halliday and Hasan, 1985). In terms of methods, we combine corpus-based methods of feature extraction and data mining techniques.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,300
inproceedings
saif-etal-2014-stopwords
On Stopwords, Filtering and Data Sparsity for Sentiment Analysis of {T}witter
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1265/
Saif, Hassan and Fernandez, Miriam and He, Yulan and Alani, Harith
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
810--817
Sentiment classification over Twitter is usually affected by the noisy nature (abbreviations, irregular forms) of tweets data. A popular procedure to reduce the noise of textual data is to remove stopwords by using pre-compiled stopword lists or more sophisticated methods for dynamic stopword identification. However, the effectiveness of removing stopwords in the context of Twitter sentiment classification has been debated in the last few years. In this paper we investigate whether removing stopwords helps or hampers the effectiveness of Twitter sentiment classification methods. To this end, we apply six different stopword identification methods to Twitter data from six different datasets and observe how removing stopwords affects two well-known supervised sentiment classification methods. We assess the impact of removing stopwords by observing fluctuations on the level of data sparsity, the size of the classifier`s feature space and its classification performance. Our results show that using pre-compiled lists of stopwords negatively impacts the performance of Twitter sentiment classification approaches. On the other hand, the dynamic generation of stopword lists, by removing those infrequent terms appearing only once in the corpus, appears to be the optimal method to maintaining a high classification performance while reducing the data sparsity and shrinking the feature space.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,301
inproceedings
hnatkova-etal-2014-syn
The {SYN}-series corpora of written {C}zech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1267/
Hn{\'a}tkov{\'a}, Milena and K{\v{r}}en, Michal and Proch{\'a}zka, Pavel and Skoumalov{\'a}, Hana
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
160--164
The paper overviews the SYN series of synchronic corpora of written Czech compiled within the framework of the Czech National Corpus project. It describes their design and processing with a focus on the annotation, i.e. lemmatization and morphological tagging. The paper also introduces SYN2013PUB, a new 935-million-word newspaper corpus of Czech published in 2013 as the most recent addition to the SYN series before the planned revision of its architecture. SYN2013PUB can be seen as a completion of the series in terms of titles and publication dates of major Czech newspapers that are now covered by complete volumes in comparable proportions. All SYN-series corpora can be characterized as traditional, with emphasis on cleared copyright issues, well-defined composition, reliable metadata and high-quality data processing; their overall size currently exceeds 2.2 billion running words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,303
inproceedings
guillou-etal-2014-parcor
{P}ar{C}or 1.0: A Parallel Pronoun-Coreference Corpus to Support Statistical {MT}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1268/
Guillou, Liane and Hardmeier, Christian and Smith, Aaron and Tiedemann, J{\"o}rg and Webber, Bonnie
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3191--3198
We present ParCor, a parallel corpus of texts in which pronoun coreference {\textemdash} reduced coreference in which pronouns are used as referring expressions {\textemdash} has been annotated. The corpus is intended to be used both as a resource from which to learn systematic differences in pronoun use between languages and ultimately for developing and testing informed Statistical Machine Translation systems aimed at addressing the problem of pronoun coreference in translation. At present, the corpus consists of a collection of parallel English-German documents from two different text genres: TED Talks (transcribed planned speech), and EU Bookshop publications (written text). All documents in the corpus have been manually annotated with respect to the type and location of each pronoun and, where relevant, its antecedent. We provide details of the texts that we selected, the guidelines and tools used to support annotation and some corpus statistics. The texts in the corpus have already been translated into many languages, and we plan to expand the corpus into these other languages, as well as other genres, in the future.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,304
inproceedings
list-prokic-2014-benchmark
A Benchmark Database of Phonetic Alignments in Historical Linguistics and Dialectology
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1269/
List, Johann-Mattis and Proki{\'c}, Jelena
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
288--294
In the last two decades, alignment analyses have become an important technique in quantitative historical linguistics and dialectology. Phonetic alignment plays a crucial role in the identification of regular sound correspondences and deeper genealogical relations between and within languages and language families. Surprisingly, up to today, there are no easily accessible benchmark data sets for phonetic alignment analyses. Here we present a publicly available database of manually edited phonetic alignments which can serve as a platform for testing and improving the performance of automatic alignment algorithms. The database consists of a great variety of alignments drawn from a large number of different sources. The data is arranged in such a way that typical problems encountered in phonetic alignment analyses (metathesis, diversity of phonetic sequences) are represented and can be directly tested.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,305
inproceedings
tannier-2014-extracting
Extracting News Web Page Creation Time with {DCTF}inder
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1270/
Tannier, Xavier
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2037--2042
Web pages do not offer reliable metadata concerning their creation date and time. However, getting the document creation time is a necessary step for allowing to apply temporal normalization systems to web pages. In this paper, we present DCTFinder, a system that parses a web page and extracts from its content the title and the creation date of this web page. DCTFinder combines heuristic title detection, supervised learning with Conditional Random Fields (CRFs) for document date extraction, and rule-based creation time recognition. Using such a system allows further deep and efficient temporal analysis of web pages. Evaluation on three corpora of English and French web pages indicates that the tool can extract document creation times with reasonably high accuracy (between 87 and 92{\%}). DCTFinder is made freely available on \url{http://sourceforge.net/projects/dctfinder/}, as well as all resources (vocabulary and annotated documents) built for training and evaluating the system in English and French, and the English trained model itself.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,306
inproceedings
kucera-stluka-2014-corpus
Corpus of 19th-century {C}zech Texts: Problems and Solutions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1271/
Ku{\v{c}}era, Karel and Stluka, Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
165--168
Although the Czech language of the 19th century represents the roots of modern Czech and many features of the 20th- and 21st-century language cannot be properly understood without this historical background, 19th-century Czech has not been thoroughly and consistently researched so far. The long-term project of a corpus of 19th-century Czech printed texts, currently in its third year, is intended to stimulate the research as well as to provide a firm material basis for it. The reason why, in our opinion, the project is worth mentioning is that it is faced with an unusual concentration of problems following mostly from the fact that the 19th century was arguably the most tumultuous period in the history of Czech, as well as from the fact that Czech is a highly inflectional language with a long history of sound changes, orthography reforms and rather discontinuous development of its vocabulary. The paper will briefly characterize the general background of the problems and present the reasoning behind the solutions that have been implemented in the ongoing project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,307
inproceedings
velcin-etal-2014-investigating
Investigating the Image of Entities in Social Media: Dataset Design and First Results
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1272/
Velcin, Julien and Kim, Young-Min and Brun, Caroline and Dormagen, Jean-Yves and SanJuan, Eric and Khouas, Leila and Peradotto, Anne and Bonnevay, Stephane and Roux, Claude and Boyadjian, Julien and Molina, Alejandro and Neihouser, Marie
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
818--822
The objective of this paper is to describe the design of a dataset that deals with the image (i.e., representation, web reputation) of various entities populating the Internet: politicians, celebrities, companies, brands etc. Our main contribution is to build and provide an original annotated French dataset. This dataset consists of 11527 manually annotated tweets expressing the opinion on specific facets (e.g., ethic, communication, economic project) describing two French politicians over time. We believe that other researchers might benefit from this experience, since designing and implementing such a dataset has proven quite an interesting challenge. This design comprises different processes such as data selection, formal definition and instantiation of an image. We have set up a full open-source annotation platform. In addition to the dataset design, we present the first results that we obtained by applying clustering methods to the annotated dataset in order to extract the entity images.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,308
inproceedings
solberg-etal-2014-norwegian
The {N}orwegian Dependency Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1273/
Solberg, Per Erik and Skj{\ae}rholt, Arne and {\O}vrelid, Lilja and Hagen, Kristin and Johannessen, Janne Bondi
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
789--795
The Norwegian Dependency Treebank is a new syntactic treebank for Norwegian Bokm{\aa}l and Nynorsk with manual syntactic and morphological annotation, developed at the National Library of Norway in collaboration with the University of Oslo. It is the first publicly available treebank for Norwegian. This paper presents the core principles behind the syntactic annotation and how these principles were employed in certain specific cases. We then present the selection of texts and distribution between genres, as well as the annotation process and an evaluation of the inter-annotator agreement. Finally, we present the first results of data-driven dependency parsing of Norwegian, contrasting four state-of-the-art dependency parsers trained on the treebank. The consistency and the parsability of this treebank is shown to be comparable to other large treebank initiatives.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,309
inproceedings
stuhrenberg-2014-extending
Extending standoff annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1274/
St{\"u}hrenberg, Maik
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
169--174
Textual information is sometimes accompanied by additional encodings (such as visuals). These multimodal documents may be interesting objects of investigation for linguistics. Another class of complex documents are pre-annotated documents. Classic XML inline annotation often fails for both document classes because of overlapping markup. However, standoff annotation, that is, the separation of primary data and markup, is a valuable and common mechanism to annotate multiple hierarchies and/or read-only primary data. We demonstrate an extended version of the XStandoff meta markup language, which allows the definition of segments in spatial and pre-annotated primary data. Together with the ability to import already established (linguistic) serialization formats as annotation levels and layers in an XStandoff instance, we are able to annotate a variety of primary data files, including text, audio, still and moving images. Application scenarios that may benefit from using XStandoff are the analysis of multimodal documents such as instruction manuals, or sports match analysis, or the less destructive cleaning of web pages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,310
inproceedings
laki-orosz-2014-efficient
An efficient language independent toolkit for complete morphological disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1275/
Laki, L{\'a}szl{\'o} and Orosz, Gy{\"o}rgy
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1625--1630
In this paper a Moses SMT toolkit-based language-independent complete morphological annotation tool is presented called HuLaPos2. Our system performs PoS tagging and lemmatization simultaneously. Amongst others, the algorithm used is able to handle phrases instead of unigrams, and can perform the tagging in a not strictly left-to-right order. With utilizing these gains, our system outperforms the HMM-based ones. In order to handle the unknown words, a suffix-tree based guesser was integrated into HuLaPos2. To demonstrate the performance of our system it was compared with several systems in different languages and PoS tag sets. In general, it can be concluded that the quality of HuLaPos2 is comparable with the state-of-the-art systems, and in the case of PoS tagging it outperformed many available systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,311
inproceedings
spyns-van-veenendaal-2014-decade
A decade of {HLT} Agency activities in the Low Countries: from resource maintenance ({BLARK}) to service offerings ({BLAISE})
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1276/
Spyns, Peter and van Veenendaal, Remco
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2158--2165
In this paper we report on the Flemish-Dutch Agency for Human Language Technologies (HLT Agency or TST-Centrale in Dutch) in the Low Countries. We present its activities in its first decade of existence. The main goal of the HLT Agency is to ensure the sustainability of linguistic resources for Dutch. 10 years after its inception, the HLT Agency faces new challenges and opportunities. An important contextual factor is the rise of the infrastructure networks and proliferation of resource centres. We summarise some lessons learnt and we propose as future work to define and build for Dutch (which by extension can apply to any national language) a set of Basic LAnguage Infrastructure SErvices (BLAISE). As a conclusion, we state that the HLT Agency, also by its peculiar institutional status, has fulfilled and still is fulfilling an important role in maintaining Dutch as a digitally fully fledged functional language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,312
inproceedings
cho-etal-2014-corpus
A Corpus of Spontaneous Speech in Lectures: The {KIT} Lecture Corpus for Spoken Language Processing and Translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1277/
Cho, Eunah and F{\"u}nfer, Sarah and St{\"u}ker, Sebastian and Waibel, Alex
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1554--1559
With the increasing number of applications handling spontaneous speech, the need to process spoken language becomes stronger. Speech disfluency is one of the most challenging tasks to deal with in automatic speech processing. As most applications are trained with well-formed, written texts, many issues arise when processing spontaneous speech due to its distinctive characteristics. Therefore, more data with annotated speech disfluencies will help the adaptation of natural language processing applications, such as machine translation systems. In order to support this, we have annotated speech disfluencies in German lectures at KIT. In this paper we describe how we annotated the disfluencies in the data and provide detailed statistics on the size of the corpus and the speakers. Moreover, machine translation performance on a source text including disfluencies is compared to the results of the translation of a source text without different sorts of disfluencies or no disfluencies at all.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,313
inproceedings
vanhainen-salvi-2014-free
Free Acoustic and Language Models for Large Vocabulary Continuous Speech Recognition in {S}wedish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1278/
Vanhainen, Niklas and Salvi, Giampiero
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
388--392
This paper presents results for large vocabulary continuous speech recognition (LVCSR) in Swedish. We trained acoustic models on the public domain NST Swedish corpus and made them freely available to the community. The training procedure corresponds to the reference recogniser (RefRec) developed for the SpeechDat databases during the COST249 action. We describe the modifications we made to the procedure in order to train on the NST database, and the language models we created based on the N-gram data available at the Norwegian Language Council. Our tests include medium vocabulary isolated word recognition and LVCSR. Because no previous results are available for LVCSR in Swedish, we use as baseline the performance of the SpeechDat models on the same tasks. We also compare our best results to the ones obtained in similar conditions on resource rich languages such as American English. We tested the acoustic models with HTK and Julius and plan to make them available in CMU Sphinx format as well in the near future. We believe that the free availability of these resources will boost research in speech and language technology in Swedish, even in research groups that do not have resources to develop ASR systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,314
inproceedings
erten-etal-2014-turkish
{T}urkish Resources for Visual Word Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1279/
Erten, Beg{\"u}m and Bozsahin, Cem and Zeyrek, Deniz
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2106--2110
We report two tools to conduct psycholinguistic experiments on Turkish words. KelimetriK allows experimenters to choose words based on desired orthographic scores of word frequency, bigram and trigram frequency, ON, OLD20, ATL and subset/superset similarity. The Turkish version of Wuggy generates pseudowords from one or more template words using an efficient method. The syllabified versions of the words are used as the input, which are decomposed into their sub-syllabic components. The bigram frequency chains are constructed by the entire words' onset, nucleus and coda patterns. Lexical statistics of stems and their syllabification are compiled by us from the BOUN corpus of 490 million words. Use of these tools in some experiments is shown.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,315
inproceedings
refaee-rieser-2014-arabic
An {A}rabic {T}witter Corpus for Subjectivity and Sentiment Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1280/
Refaee, Eshrag and Rieser, Verena
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2268--2273
We present a newly collected data set of 8,868 gold-standard annotated Arabic feeds. The corpus is manually labelled for subjectivity and sentiment analysis (SSA) ($\kappa$ = 0.816). In addition, the corpus is annotated with a variety of motivated feature-sets that have previously shown positive impact on performance. The paper highlights issues posed by Twitter as a genre, such as mixture of language varieties and topic-shifts. Our next step is to extend the current corpus, using online semi-supervised learning. A first sub-corpus will be released via the ELRA repository as part of this submission.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,316
inproceedings
moneglia-etal-2014-imagact
The {IMAGACT} Visual Ontology. An Extendable Multilingual Infrastructure for the representation of lexical encoding of Action
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1281/
Moneglia, Massimo and Brown, Susan and Frontini, Francesca and Gagliardi, Gloria and Khan, Fahad and Monachini, Monica and Panunzi, Alessandro
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3425--3432
Action verbs have many meanings, covering actions in different ontological types. Moreover, each language categorizes action in its own way. One verb can refer to many different actions and one action can be identified by more than one verb. The range of variations within and across languages is largely unknown, causing trouble for natural language processing tasks. IMAGACT is a corpus-based ontology of action concepts, derived from English and Italian spontaneous speech corpora, which makes use of the universal language of images to identify the different action types extended by verbs referring to action in English, Italian, Chinese and Spanish. This paper presents the infrastructure and the various linguistic information the user can derive from it. IMAGACT makes explicit the variation of meaning of action verbs within one language and allows comparisons of verb variations within and across languages. Because the action concepts are represented with videos, extension into new languages beyond those presently implemented in IMAGACT is done using competence-based judgments by mother-tongue informants without intense lexicographic work involving underdetermined semantic description
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,317
inproceedings
benjamin-2014-collaboration
Collaboration in the Production of a Massively Multilingual Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1282/
Benjamin, Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
211--215
This paper discusses the multiple approaches to collaboration that the Kamusi Project is employing in the creation of a massively multilingual lexical resource. The project’s data structure enables the inclusion of large amounts of rich data within each sense-specific entry, with transitive concept-based links across languages. Data collection involves mining existing data sets, language experts using an online editing system, crowdsourcing, and games with a purpose. The paper discusses the benefits and drawbacks of each of these elements, and the steps the project is taking to account for those. Special attention is paid to guiding crowd members with targeted questions that produce results in a specific format. Collaboration is seen as an essential method for generating large amounts of linguistic data, as well as for validating the data so it can be considered trustworthy.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,318
inproceedings
salmon-vallet-2014-effortless
An Effortless Way To Create Large-Scale Datasets For Famous Speakers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1283/
Salmon, Fran{\c{c}}ois and Vallet, F{\'e}licien
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
348--352
The creation of large-scale multimedia datasets has become a scientific matter in itself. Indeed, the fully-manual annotation of hundreds or thousands of hours of video and/or audio turns out to be practically infeasible. In this paper, we propose an extremely handy approach to automatically construct a database of famous speakers from TV broadcast news material. We then run a user experiment with a correctly designed tool that demonstrates that very reliable results can be obtained with this method. In particular, a thorough error analysis demonstrates the value of the approach and provides hints for the improvement of the quality of the dataset.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,319
inproceedings
ludusan-etal-2014-bridging
Bridging the gap between speech technology and natural language processing: an evaluation toolbox for term discovery systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1284/
Ludusan, Bogdan and Versteegh, Maarten and Jansen, Aren and Gravier, Guillaume and Cao, Xuan-Nga and Johnson, Mark and Dupoux, Emmanuel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
560--567
The unsupervised discovery of linguistic terms from either continuous phoneme transcriptions or from raw speech has seen an increasing interest in the past years both from a theoretical and a practical standpoint. Yet, there exists no commonly accepted evaluation method for the systems performing term discovery. Here, we propose such an evaluation toolbox, drawing ideas from both speech technology and natural language processing. We first transform the speech-based output into a symbolic representation and compute five types of evaluation metrics on this representation: the quality of acoustic matching, the quality of the clusters found, and the quality of the alignment with real words (type, token, and boundary scores). We tested our approach on two term discovery systems taking speech as input, and one using symbolic input. The latter was run using both the gold transcription and a transcription obtained from an automatic speech recognizer, in order to simulate the case when only imperfect symbolic information is available. The results obtained are analysed through the use of the proposed evaluation metrics and the implications of these metrics are discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,320
inproceedings
rosner-etal-2014-modeling
Modeling and evaluating dialog success in the {LAST} {MINUTE} corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1285/
R{\"o}sner, Dietmar and Friesen, Rafael and G{\"u}nther, Stephan and Andrich, Rico
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
259--265
The LAST MINUTE corpus comprises records and transcripts of naturalistic problem solving dialogs between N = 130 subjects and a companion system simulated in a Wizard of Oz experiment. Our goal is to detect dialog situations where subjects might break up the dialog with the system which might happen when the subject is unsuccessful. We present a dialog act based representation of the dialog courses in the problem solving phase of the experiment and propose and evaluate measures for dialog success or failure derived from this representation. This dialog act representation refines our previous coarse measure as it enables the correct classification of many dialog sequences that were ambiguous before. The dialog act representation is useful for the identification of different subject groups and the exploration of interesting dialog courses in the corpus. We find young females to be most successful in the challenging last part of the problem solving phase and young subjects to have the initiative in the dialog more often than the elderly.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,321
inproceedings
sidorov-etal-2014-comparison
Comparison of Gender- and Speaker-adaptive Emotion Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1286/
Sidorov, Maxim and Ultes, Stefan and Schmitt, Alexander
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3476--3480
Deriving the emotion of a human speaker is a hard task, especially if only the audio stream is taken into account. While state-of-the-art approaches already provide good results, adaptive methods have been proposed in order to further improve the recognition accuracy. A recent approach is to add characteristics of the speaker, e.g., the gender of the speaker. In this contribution, we argue that adding information unique to each speaker, i.e., by using speaker identification techniques, improves emotion recognition simply by adding this information to the feature vector of the statistical classification algorithm. Moreover, we compare this approach to emotion recognition using only the speaker gender, a non-unique speaker attribute. We justify this by performing adaptive emotion recognition using both gender and speaker information on four different corpora of different languages containing acted and non-acted speech. The final results show that adding speaker information significantly outperforms both adding gender information and solely using a generic speaker-independent approach.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,322
inproceedings
alsop-nesi-2014-pragmatic
The pragmatic annotation of a corpus of academic lectures
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1287/
Alsop, Si{\^a}n and Nesi, Hilary
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1560--1563
This paper will describe a process of ‘pragmatic annotation’ (cf. Simpson-Vlach and Leicher 2006) which systematically identifies pragmatic meaning in spoken text. The annotation of stretches of text that perform particular pragmatic functions allows conclusions to be drawn across data sets at a different level than that of the individual lexical item, or structural content. The annotation of linguistic features, which cannot be identified by purely objective means, is distinguished here from structural mark-up of speaker identity, turns, pauses etc. The features annotated are ‘explaining’, ‘housekeeping’, ‘humour’, ‘storytelling’ and ‘summarising’. Twenty-two subcategories are attributed to these elements. Data is from the Engineering Lecture Corpus (ELC), which includes 76 English-medium engineering lectures from the UK, New Zealand and Malaysia. The annotation allows us to compare differences in the use of these discourse features across cultural subcorpora. Results show that cultural context does impact on the linguistic realisation of commonly occurring discourse features in engineering lectures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,323
inproceedings
hansen-etal-2014-using
Using {TEI}, {CMDI} and {ISO}cat in {CLARIN}-{DK}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1288/
Hansen, Dorte Haltrup and Offersgaard, Lene and Olsen, Sussi
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
613--618
This paper presents the challenges and issues encountered in the conversion of TEI header metadata into the CMDI format. The work is carried out in the Danish research infrastructure, CLARIN-DK, in order to enable the exchange of language resources nationally as well as internationally, in particular with other partners of CLARIN ERIC. The paper describes the task of converting an existing TEI specification applied to all the text resources deposited in DK-CLARIN. During the task we have tried to reuse and share CMDI profiles and components in the CLARIN Component Registry, as well as linking the CMDI components and elements to the relevant data categories in the ISOcat Data Category Registry. The conversion of the existing metadata into the CMDI format turned out not to be a trivial task and the experience and insights gained from this work have resulted in a proposal for a work flow for future use. We also present a core TEI header metadata set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,324
inproceedings
campano-etal-2014-comparative
Comparative analysis of verbal alignment in human-human and human-agent interactions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1289/
Campano, Sabrina and Durand, Jessica and Clavel, Chlo{\'e}
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4415--4422
Engagement is an important feature in human-human and human-agent interaction. In this paper, we investigate lexical alignment as a cue of engagement, relying on two different corpora: CID and SEMAINE. Our final goal is to build a virtual conversational character that could use alignment strategies to maintain user`s engagement. To do so, we investigate two alignment processes: shared vocabulary and other-repetitions. A quantitative and qualitative approach is proposed to characterize these aspects in human-human (CID) and human-operator (SEMAINE) interactions. Our results show that these processes are observable in both corpora, indicating a stable pattern that can be further modelled in conversational agents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,325
inproceedings
petic-gifu-2014-transliteration
Transliteration and alignment of parallel texts from {C}yrillic to {L}atin
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1290/
Petic, Mircea and G{\^i}fu, Daniela
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1819--1823
This article describes a methodology for recovering and preserving old Romanian texts and problems related to their recognition. Our focus is to create a gold corpus for the Romanian language (the novella Sania), for both alphabets used in Transnistria {\textemdash} Cyrillic and Latin. The resource is available for similar researches. This technology is based on transliteration and semiautomatic alignment of parallel texts at the level of letter/lexeme/multiword. We have analysed every text segment present in this corpus and discovered other conventions of writing at the level of transliteration, academic norms and editorial interventions. These conventions allowed us to elaborate and implement some new heuristics that make the automatic transliteration process correct. Sometimes words in Latin script are modified in Cyrillic script for semantic reasons (for instance, the editor`s interpretation). Semantic transliteration is seen as a good practice in introducing multiwords from Cyrillic to Latin. Not only does it preserve how a multiword sounds in the source script, but it also enables the translator to modify the original text (here, choosing the most common sense of an expression). Such a technology could be of interest to lexicographers, but also to specialists in computational linguistics to improve the actual transliteration standards.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,326
inproceedings
dima-etal-2014-tell
How to Tell a Schneemann from a Milchmann: An Annotation Scheme for Compound-Internal Relations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1291/
Dima, Corina and Henrich, Verena and Hinrichs, Erhard and Hoppermann, Christina
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1194--1201
This paper presents a language-independent annotation scheme for the semantic relations that link the constituents of noun-noun compounds, such as Schneemann {\textquoteleft}snow man' or Milchmann {\textquoteleft}milk man'. The annotation scheme is hybrid in the sense that it assigns each compound a two-place label consisting of a semantic property and a prepositional paraphrase. The resulting inventory combines the insights of previous annotation schemes that rely exclusively on either semantic properties or prepositions, thus avoiding the known weaknesses that result from using only one of the two label types. The proposed annotation scheme has been used to annotate a set of 5112 German noun-noun compounds. A release of the dataset is currently being prepared and will be made available via the CLARIN Center T{\"u}bingen. In addition to the presentation of the hybrid annotation scheme, the paper also reports on an inter-annotator agreement study that has resulted in a substantial agreement among annotators.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,327
inproceedings
bautista-saggion-2014-numerical
Can Numerical Expressions Be Simpler? Implementation and Demostration of a Numerical Simplification System for {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1292/
Bautista, Susana and Saggion, Horacio
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
956--962
Information in newspapers is often shown in the form of numerical expressions which present comprehension problems for many people, including people with disabilities, illiteracy or lack of access to advanced technology. The purpose of this paper is to motivate, describe, and demonstrate a rule-based lexical component that simplifies numerical expressions in Spanish texts. We propose an approach that makes news articles more accessible to certain readers by rewriting difficult numerical expressions in a simpler way. We will showcase the numerical simplification system with a live demo based on the execution of our components over different texts, considering both successful and unsuccessful simplification cases.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,328
inproceedings
racz-etal-2014-4fx
4{FX}: Light Verb Constructions in a Multilingual Parallel Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1293/
R{\'a}cz, Anita and Nagy T., Istv{\'a}n and Vincze, Veronika
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
710--715
In this paper, we describe 4FX, a quadrilingual (English-Spanish-German-Hungarian) parallel corpus annotated for light verb constructions. We present the annotation process, and report statistical data on the frequency of LVCs in each language. We also offer inter-annotator agreement rates and we highlight some interesting facts and tendencies on the basis of comparing multilingual data from the four corpora. According to the frequency of LVC categories and the calculated Kendall’s coefficient for the four corpora, we found that English and Spanish are very similar to each other, Hungarian is also similar to both, but German differs from all these three. The qualitative and quantitative data analysis might prove useful in theoretical linguistic research for all the four languages. Moreover, the corpus will be an excellent testbed for the development and evaluation of machine learning based methods aiming at extracting or identifying light verb constructions in these four languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,329
inproceedings
kliche-etal-2014-eidentity
The e{I}dentity Text Exploration Workbench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1294/
Kliche, Fritz and Blessing, Andr{\'e} and Heid, Ulrich and Sonntag, Jonathan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
691--697
We work on tools to explore text contents and metadata of newspaper articles as provided by news archives. Our tool components are being integrated into an {\textquotedblleft}Exploration Workbench{\textquotedblright} for Digital Humanities researchers. Next to the conversion of different data formats and character encodings, a prominent feature of our design is its {\textquotedblleft}Wizard{\textquotedblright} function for corpus building: Researchers import raw data and define patterns to extract text contents and metadata. The Workbench also comprises different tools for data cleaning. These include filtering of off-topic articles, duplicates and near-duplicates, corrupted and empty articles. We currently work on ca. 860.000 newspaper articles from different media archives, provided in different data formats. We index the data with state-of-the-art systems to allow for large scale information retrieval. We extract metadata on publishing dates, author names, newspaper sections, etc., and split articles into segments such as headlines, subtitles, paragraphs, etc. After cleaning the data and compiling a thematically homogeneous corpus, the sample can be used for quantitative analyses which are not affected by noise. Users can retrieve sets of articles on different topics, issues or otherwise defined research questions ({\textquotedblleft}subcorpora{\textquotedblright}) and investigate quantitatively their media attention on the timeline ({\textquotedblleft}Issue Cycles{\textquotedblright}).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,330
inproceedings
fourati-pelachaud-2014-emilya
{E}milya: Emotional body expression in daily actions database
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1295/
Fourati, Nesrine and Pelachaud, Catherine
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3486--3493
The studies of bodily expression of emotion have so far mostly focused on body movement patterns associated with emotional expression. Recently, there has been increasing interest in the expression of emotion in daily actions, also called non-emblematic movements (such as walking or knocking at the door). Previous studies were based on databases limited to a small range of movement tasks or emotional states. In this paper, we describe our new database of emotional body expression in daily actions, where 11 actors express 8 emotions in 7 actions. We use motion capture technology to record body movements, but we also recorded synchronized audio-visual data to enlarge the use of the database for different research purposes. We also investigate the matching between the expressed emotions and the perceived ones through a perceptive study. The first results of this study are discussed in this paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,331
inproceedings
darwish-etal-2014-using
Using Stem-Templates to Improve {A}rabic {POS} and Gender/Number Tagging
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1296/
Darwish, Kareem and Abdelali, Ahmed and Mubarak, Hamdy
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2926--2931
This paper presents an end-to-end automatic processing system for Arabic. The system performs: correction of common spelling errors pertaining to different forms of alef, ta marbouta and ha, and alef maqsoura and ya; context sensitive word segmentation into underlying clitics, POS tagging, and gender and number tagging of nouns and adjectives. We introduce the use of stem templates as a feature to improve POS tagging by 0.5{\%} and to help ascertain the gender and number of nouns and adjectives. For gender and number tagging, we report accuracies that are significantly higher on previously unseen words compared to a state-of-the-art system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,332
inproceedings
he-etal-2014-construction
Construction of Diachronic Ontologies from People`s Daily of Fifty Years
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1297/
He, Shaoda and Zou, Xiaojun and Xiao, Liumingjing and Hu, Junfeng
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3258--3263
This paper presents an Ontology Learning From Text (OLFT) method that follows the well-known OLFT layer cake framework. Based on distributional similarity, the proposed method generates multi-level ontologies from comparatively small corpora with the aid of the HITS algorithm. Currently, this method covers term extraction, synonym recognition, concept discovery and hierarchical concept clustering. Among them, both concept discovery and hierarchical concept clustering are aided by the HITS authority, which is obtained from the HITS algorithm in an iterative recommendation scheme. With this method, a set of diachronic ontologies is constructed for each year based on People`s Daily corpora spanning fifty years (i.e., from 1947 to 1996). Preliminary experiments show that our algorithm outperforms the Google RNN-based and K-means based algorithms in both concept discovery and hierarchical concept clustering.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,333
inproceedings
chevelu-etal-2014-roots
{ROOTS}: a toolkit for easy, fast and consistent processing of large sequential annotated data collections
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1298/
Chevelu, Jonathan and Lecorv{\'e}, Gw{\'e}nol{\'e} and Lolive, Damien
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
619--626
The development of new methods for given speech and natural language processing tasks usually consists in annotating large corpora of data before applying machine learning techniques to train models or to extract information. Beyond scientific aspects, creating and managing such annotated data sets is a recurrent problem. While using human annotators is obviously expensive in time and money, relying on automatic annotation processes is not a simple solution either. Typically, the high diversity of annotation tools and of data formats, as well as the lack of efficient middleware to interface them all together, makes such processes very complex and painful to design. To circumvent this problem, this paper presents ROOTS, a freshly released open source toolkit (\url{http://roots-toolkit.gforge.inria.fr}) for easy, fast and consistent management of heterogeneously annotated data. ROOTS is designed to efficiently handle massive complex sequential data and to allow quick and light prototyping, as this is often required for research purposes. To illustrate these properties, three sample applications are presented in the field of speech and language processing, though ROOTS can more generally be easily extended to other application domains.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,334
inproceedings
jansche-2014-computer
Computer-Aided Quality Assurance of an {I}celandic Pronunciation Dictionary
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1299/
Jansche, Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2111--2114
We propose a model-driven method for ensuring the quality of pronunciation dictionaries. The key ingredient is computing an alignment between letter strings and phoneme strings, a standard technique in pronunciation modeling. The novel aspect of our method is the use of informative, parametric alignment models which are refined iteratively as they are tested against the data. We discuss the use of alignment failures as a signal for detecting and correcting problematic dictionary entries. We illustrate this method using an existing pronunciation dictionary for Icelandic. Our method is completely general and has been applied in the construction of pronunciation dictionaries for commercially deployed speech recognition systems in several languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,335
inproceedings
el-maarouf-etal-2014-disambiguating
Disambiguating Verbs by Collocation: Corpus Lexicography meets Natural Language Processing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1300/
El Maarouf, Ismail and Bradbury, Jane and Baisa, V{\'i}t and Hanks, Patrick
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1001--1006
This paper reports the results of Natural Language Processing (NLP) experiments in semantic parsing, based on a new semantic resource, the Pattern Dictionary of English Verbs (PDEV) (Hanks, 2013). This work is set in the DVC (Disambiguating Verbs by Collocation) project, a Corpus Lexicography project aimed at expanding PDEV to a large scale. This project springs from a long-term collaboration of lexicographers with computer scientists which has given rise to the design and maintenance of specific, adapted, and user-friendly editing and exploration tools. Particular attention is drawn to the use of NLP deep semantic methods to help in data processing. Possible contributions of NLP include pattern disambiguation, the focus of this article. The present article explains how PDEV differs from other lexical resources and describes its structure in detail. It also presents new classification experiments on a subset of 25 verbs. The SVM model obtained a micro-average F1 score of 0.81.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,336
inproceedings
vincze-etal-2014-automatic
Automatic Error Detection concerning the Definite and Indefinite Conjugation in the {H}un{L}earner Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1301/
Vincze, Veronika and Zsibrita, J{\'a}nos and Durst, P{\'e}ter and Szab{\'o}, Martina Katalin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3958--3962
In this paper we present the results of automatic error detection, concerning the definite and indefinite conjugation in the extended version of the HunLearner corpus, the learners’ corpus of the Hungarian language. We present the most typical structures that trigger definite or indefinite conjugation in Hungarian and we also discuss the most frequent types of errors made by language learners in the corpus texts. We also illustrate the error types with sentences taken from the corpus. Our results highlight grammatical structures that might pose problems for learners of Hungarian, which can be fruitfully applied in the teaching and practicing of such constructions from the language teacher’s or learners’ point of view. On the other hand, these results may be exploited in extending the functionalities of a grammar checker, concerning the definiteness of the verb. Our automatic system was able to achieve perfect recall, i.e. it could find all the mismatches between the type of the object and the conjugation of the verb, which is promising for future studies in this area.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,337
inproceedings
sidorov-etal-2014-speech
Speech-Based Emotion Recognition: Feature Selection by Self-Adaptive Multi-Criteria Genetic Algorithm
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1302/
Sidorov, Maxim and Brester, Christina and Minker, Wolfgang and Semenkin, Eugene
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3481--3485
Automated emotion recognition has a number of applications in Interactive Voice Response systems, call centers, etc. While employing existing feature sets and methods for automated emotion recognition has already achieved reasonable results, there is still a lot to do for improvement. Meanwhile, an optimal feature set, which should be used to represent speech signals for performing speech-based emotion recognition techniques, is still an open question. In our research, we tried to figure out the most essential features with a self-adaptive multi-objective genetic algorithm as a feature selection technique and a probabilistic neural network as a classifier. The proposed approach was evaluated using a number of multi-language databases (English, German), which were represented by 37- and 384-dimensional feature sets. According to the obtained results, the developed technique allows the emotion recognition performance to be increased by up to a 26.08{\%} relative improvement in accuracy. Moreover, emotion recognition performance scores for all applied databases are improved.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,338
inproceedings
hofler-sugisaki-2014-constructing
Constructing and exploiting an automatically annotated resource of legislative texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1303/
H{\"o}fler, Stefan and Sugisaki, Kyoko
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
175--180
In this paper, we report on the construction of a resource of Swiss legislative texts that is automatically annotated with structural, morphosyntactic and content-related information, and we discuss the exploitation of this resource for the purposes of legislative drafting, legal linguistics and translation and for the evaluation of legislation. Our resource is based on the classified compilation of Swiss federal legislation. All texts contained in the classified compilation exist in German, French and Italian, some of them are also available in Romansh and English. Our resource is currently being exploited (a) as a testing environment for developing methods of automated style checking for legislative drafts, (b) as the basis of a statistical multilingual word concordance, and (c) for the empirical evaluation of legislation. The paper describes the domain- and language-specific procedures that we have implemented to provide the automatic annotations needed for these applications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,339
inproceedings
schneider-2014-genitivdb
{G}enitiv{DB} {---} a Corpus-Generated Database for {G}erman Genitive Classification
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1304/
Schneider, Roman
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
988--994
We present a novel NLP resource for the explanation of linguistic phenomena, built and evaluated exploring very large annotated language corpora. For the compilation, we use the German Reference Corpus (DeReKo) with more than 5 billion word forms, which is the largest linguistic resource worldwide for the study of contemporary written German. The result is a comprehensive database of German genitive formations, enriched with a broad range of intra- and extralinguistic metadata. It can be used for the notoriously controversial classification and prediction of genitive endings (short endings, long endings, zero-marker). We also evaluate the main factors influencing the use of specific endings. To get a general idea about a factor’s influences and its side effects, we calculate chi-square-tests and visualize the residuals with an association plot. The results are evaluated against a gold standard by implementing tree-based machine learning algorithms. For the statistical analysis, we applied the supervised LMT Logistic Model Trees algorithm, using the WEKA software. We intend to use this gold standard to evaluate GenitivDB, as well as to explore methodologies for a predictive genitive model.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,340
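The chi-square-test-with-residuals analysis described in the GenitivDB abstract above can be sketched in a few lines of stdlib Python. The contingency table below (stem type vs. genitive ending) is invented for illustration; the real GenitivDB counts are not reproduced here.

```python
import math

# HYPOTHETICAL contingency table: rows = stem type, cols = genitive ending.
rows = ["monosyllabic", "polysyllabic"]
cols = ["-es", "-s", "zero"]
observed = [
    [120, 40, 10],   # invented counts, for illustration only
    [30, 150, 20],
]

def chi_square_residuals(obs):
    """Return (chi2 statistic, table of Pearson residuals (obs-exp)/sqrt(exp)).

    Positive residuals mark cells that occur more often than independence
    predicts -- the quantity an association plot visualizes."""
    row_tot = [sum(r) for r in obs]
    col_tot = [sum(c) for c in zip(*obs)]
    n = sum(row_tot)
    chi2 = 0.0
    resid = []
    for i, row in enumerate(obs):
        line = []
        for j, o in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n
            line.append((o - exp) / math.sqrt(exp))
            chi2 += (o - exp) ** 2 / exp
        resid.append(line)
    return chi2, resid

chi2, resid = chi_square_residuals(observed)
```

In this toy table, the positive residual for (monosyllabic, -es) would signal an association between short stems and the long ending; the paper computes the same statistics per factor over the corpus-derived database.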
@inproceedings{sindlerova-etal-2014-resources,
    title = "Resources in Conflict: A Bilingual Valency Lexicon vs. a Bilingual Treebank vs. a Linguistic Theory",
    author = "{\v{S}}indlerov{\'a}, Jana and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka and Fucikova, Eva",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)",
    month = may,
    year = "2014",
    address = "Reykjavik, Iceland",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L14-1305/",
    pages = "2490--2494",
    abstract = "In this paper, we would like to exemplify how a syntactically annotated bilingual treebank can help us in exploring and revising a developed linguistic theory. On the material of the Prague Czech-English Dependency Treebank we observe sentences in which an Addressee argument in one language is linked translationally to a Patient argument in the other one, and make generalizations about the theoretical grounds of the argument non-correspondences and their relation to the valency theory beyond the annotation practice. Exploring verbs of three semantic classes (Judgement verbs, Teaching verbs and Attempt Suasion verbs) we claim that the Functional Generative Description argument labelling is highly dependent on the morphosyntactic realization of the individual participants, which then results in valency frame differences. Nevertheless, most of the differences can be overcome without substantial changes to the linguistic theory itself.",
}
@inproceedings{sauri-etal-2014-newsome,
    title = "The {N}ew{S}o{M}e Corpus: A Unifying Opinion Annotation Framework across Genres and in Multiple Languages",
    author = "Saur{\'i}, Roser and Domingo, Judith and Badia, Toni",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)",
    month = may,
    year = "2014",
    address = "Reykjavik, Iceland",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L14-1306/",
    pages = "2229--2236",
    abstract = "We present the NewSoMe (News and Social Media) Corpus, a set of subcorpora with annotations on opinion expressions across genres (news reports, blogs, product reviews and tweets) and covering multiple languages (English, Spanish, Catalan and Portuguese). NewSoMe is the result of an effort to increase the opinion corpus resources available in languages other than English, and to build a unifying annotation framework for analyzing opinion in different genres, including controlled text, such as news reports, as well as different types of user generated content (UGC). Given the broad design of the resource, most of the annotation effort was carried out via crowdsourcing platforms: Amazon Mechanical Turk and CrowdFlower. This created an excellent opportunity to research the feasibility of crowdsourcing methods for annotating large amounts of text in different languages.",
}
@inproceedings{xue-zhang-2014-buy,
    title = "Buy one get one free: Distant annotation of {C}hinese tense, event type and modality",
    author = "Xue, Nianwen and Zhang, Yuchen",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)",
    month = may,
    year = "2014",
    address = "Reykjavik, Iceland",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L14-1307/",
    pages = "1412--1416",
    abstract = "We describe a {\textquotedblleft}distant annotation{\textquotedblright} method where we mark up the semantic tense, event type, and modality of Chinese events via a word-aligned parallel corpus. We first map Chinese verbs to their English counterparts via word alignment, and then annotate the resulting English text spans with coarse-grained categories for semantic tense, event type, and modality that we believe apply to both English and Chinese. Because English has richer morpho-syntactic indicators for semantic tense, event type and modality than Chinese, our intuition is that this distant annotation approach will yield more consistent annotation than if we annotate the Chinese side directly. We report experimental results that show stable annotation agreement statistics and that event type and modality have significant influence on tense prediction. We also report the size of the annotated corpus that we have obtained, and how different domains impact annotation consistency.",
}
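The core of the "distant annotation" scheme described in the Xue and Zhang abstract above is a projection of labels through word alignments: tags assigned on the English side are carried over to the aligned Chinese tokens. A minimal sketch, with an invented sentence pair, alignment, and tag set:

```python
def project_labels(alignment, en_labels):
    """Project annotations through a word alignment.

    alignment: list of (en_index, zh_index) token-alignment pairs
    en_labels: {en_index: tag} assigned on the English side
    Returns {zh_index: tag} for the aligned Chinese tokens."""
    zh_labels = {}
    for en_i, zh_i in alignment:
        if en_i in en_labels:
            zh_labels[zh_i] = en_labels[en_i]
    return zh_labels

# Hypothetical example: English "He bought a book" aligned to
# Chinese "他 买 了 一 本 书"; only the verb carries a tense tag.
alignment = [(0, 0), (1, 1), (2, 3), (3, 5)]
en_labels = {1: "past/realized"}          # "bought", tagged on the English side
projected = project_labels(alignment, en_labels)
print(projected)                          # {1: 'past/realized'} -> tag lands on 买
```

The tag names and the one-to-one projection are simplifications for illustration; the paper's actual category inventory and alignment handling are richer.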
@inproceedings{takahashi-inoue-2014-multimodal,
    title = "Multimodal dialogue segmentation with gesture post-processing",
    author = "Takahashi, Kodai and Inoue, Masashi",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)",
    month = may,
    year = "2014",
    address = "Reykjavik, Iceland",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L14-1308/",
    pages = "3433--3437",
    abstract = "We investigate an automatic dialogue segmentation method using both verbal and non-verbal modalities. Dialogue contents are used for the initial segmentation of dialogue; then, gesture occurrences are used to remove the incorrect segment boundaries. A unique characteristic of our method is to use verbal and non-verbal information separately. We use a three-party dialogue that is rich in gesture as data. The transcription of the dialogue is segmented into topics without prior training by using the TextTiling and U00 algorithms. Some candidates for segment boundaries, where the topic in fact continues, are irrelevant. Those boundaries can be found and removed by locating gestures that stretch over the boundary candidates. This filtering improves the segmentation accuracy of text-only segmentation.",
}
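The gesture post-processing step in the Takahashi and Inoue abstract above reduces to a simple interval test: a boundary candidate proposed by text-only segmentation is discarded when some gesture interval stretches across it, since the ongoing gesture suggests the topic continues. A minimal sketch with invented timestamps:

```python
def filter_boundaries(candidates, gestures):
    """Drop boundary candidates that fall strictly inside a gesture interval.

    candidates: boundary times proposed by text-only segmentation
    gestures: list of (start, end) gesture intervals
    Returns the surviving boundaries, in order."""
    return [b for b in candidates
            if not any(start < b < end for start, end in gestures)]

# Hypothetical: text-only segmentation proposes boundaries at t = 10, 25, 40 s;
# a gesture spans 22-28 s, so the t = 25 candidate is removed.
kept = filter_boundaries([10, 25, 40], [(22, 28)])
print(kept)  # [10, 40]
```

The strict inequality means a gesture ending exactly at a boundary does not suppress it; whether that edge case should count as "stretching over" the boundary is an assumption here, not something the abstract specifies.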