Dataset schema (field name, dtype, value range or number of distinct values):

entry_type           stringclasses   4 values
citation_key         stringlengths   10–110
title                stringlengths   6–276
editor               stringclasses   723 values
month                stringclasses   69 values
year                 stringdate      1963-01-01 to 2022-01-01
address              stringclasses   202 values
publisher            stringclasses   41 values
url                  stringlengths   34–62
author               stringlengths   6–2.07k
booktitle            stringclasses   861 values
pages                stringlengths   1–12
abstract             stringlengths   302–2.4k
journal              stringclasses   5 values
volume               stringclasses   24 values
doi                  stringlengths   20–39
n                    stringclasses   3 values
wer                  stringclasses   1 value
uas                  null
language             stringclasses   3 values
isbn                 stringclasses   34 values
recall               null
number               stringclasses   8 values
a                    null
b                    null
c                    null
k                    null
f1                   stringclasses   4 values
r                    stringclasses   2 values
mci                  stringclasses   1 value
p                    stringclasses   2 values
sd                   stringclasses   1 value
female               stringclasses   0 values
m                    stringclasses   0 values
food                 stringclasses   1 value
f                    stringclasses   1 value
note                 stringclasses   20 values
__index_level_0__    int64           22k–106k
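The records below can be loaded and filtered programmatically. A minimal sketch using the Hugging Face `datasets` library; the dataset identifier is a placeholder (the actual repository name is not given here), and the `year` comparison assumes the column is stored as a plain string, as the rows below suggest.

```python
from datasets import load_dataset

# Hypothetical dataset ID; substitute the actual repository name.
ds = load_dataset("someuser/acl-anthology-bib", split="train")

# Keep only the LREC 2014 proceedings entries, mirroring the records below.
lrec14 = ds.filter(
    lambda row: row["entry_type"] == "inproceedings"
    and row["year"] == "2014"
    and row["booktitle"] is not None
    and "LREC" in row["booktitle"]
)
print(len(lrec14), lrec14[0]["citation_key"], lrec14[0]["title"])
```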
inproceedings
bittar-etal-2014-dangerous
The Dangerous Myth of the Star System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1309/
Bittar, Andr{\'e} and Dini, Luca and Maurel, Sigrid and Ruhlmann, Mathieu
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2237--2241
In recent years we have observed two parallel trends in computational linguistics research and e-commerce development. On the research side, there has been an increasing interest in algorithms and approaches that are able to capture the polarity of opinions expressed by users on products, institutions and services. On the other hand, almost all big e-commerce and aggregator sites by now provide users with the possibility of writing comments and expressing their appreciation with a numeric score (usually represented as a number of stars). This generates the impression that the work carried out in the research community is made partially useless (at least for economic exploitation) by an evolution in web practices. In this paper we describe an experiment on a large corpus which shows that the score judgments provided by users often conflict with the text of the opinion, to such a point that a rule-based opinion mining system can be demonstrated to perform better than the users themselves in ranking their opinions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,345
inproceedings
li-etal-2014-comparison
Comparison of the Impact of Word Segmentation on Name Tagging for {C}hinese and {J}apanese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1310/
Li, Haibo and Hagiwara, Masato and Li, Qi and Ji, Heng
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2532--2536
Word Segmentation is usually considered an essential step for many Chinese and Japanese Natural Language Processing tasks, such as name tagging. This paper presents several new observations and analyses of the impact of word segmentation on name tagging: (1) due to the limitations of current state-of-the-art Chinese word segmentation performance, a character-based name tagger can outperform its word-based counterparts for Chinese but not for Japanese; (2) it is crucial to keep segmentation settings (e.g. definitions, specifications, methods) consistent between training and testing for name tagging; (3) as long as (2) is ensured, the performance of word segmentation does not have an appreciable impact on Chinese and Japanese name tagging.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,346
inproceedings
mititelu-etal-2014-corola
{C}o{R}o{L}a {---} The Reference Corpus of Contemporary {R}omanian Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1311/
Mititelu, Verginica Barbu and Irimia, Elena and Tufiș, Dan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1235--1239
We present the project of creating CoRoLa, a reference corpus of contemporary Romanian (from 1945 onwards). In the international context, the project finds its place among the initiatives of gathering huge collections of texts, of pre-processing and annotating them at several levels, and also of documenting them with metadata (CMDI). Our project is a joint effort of two institutes of the Romanian Academy. We foresee a corpus of more than 500 million word forms, covering all functional styles of the language. Although the vast majority of texts will be in written form, we target about 300 hours of oral texts, too, obligatorily with associated transcripts. Most of the texts will be from books, while the rest will be harvested from newspapers, booklets, technical reports, etc. The pre-processing includes cleaning the data and harmonising the diacritics, sentence splitting and tokenization. Annotation will be done at a morphological level in a first stage, followed by lemmatization, with the possibility of adding syntactic, semantic and discourse annotation in a later stage. A core of CoRoLa is described in the article. The target users of our corpus will be researchers in linguistics and language processing, teachers of Romanian, and students.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,347
inproceedings
kiss-etal-2014-building
Building a reference lexicon for countability in {E}nglish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1312/
Kiss, Tibor and Pelletier, Francis Jeffry and Stadtfeld, Tobias
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
995--1000
The present paper describes the construction of a resource to determine the lexical preference class of a large number of English noun-senses ($\approx$ 14,000) with respect to the distinction between mass and count interpretations. In constructing the lexicon, we have employed a questionnaire-based approach drawing on existing resources such as the Open ANC (\url{http://www.anc.org}) and WordNet (CITATION). The questionnaire requires annotators to answer six questions about a noun-sense pair. Depending on the answers, a given noun-sense pair can be assigned to fine-grained noun classes, spanning the area between count and mass. The reference lexicon contains almost 14,000 noun-sense pairs. An initial data set of 1,000 has been annotated together by four native speakers, while the remaining 12,800 noun-sense pairs have been annotated in parallel by two annotators each. We can confirm the general feasibility of the approach by reporting satisfactory values between 0.694 and 0.755 in inter-annotator agreement using Krippendorff’s $\alpha$.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,348
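The countability-lexicon entry above reports inter-annotator agreement between 0.694 and 0.755 in Krippendorff's alpha. For illustration, a minimal sketch of alpha for nominal data; the function and variable names are our own, and a vetted implementation (e.g., the `krippendorff` package on PyPI) should be preferred in practice.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: one list of nominal labels per item (one label per annotator).
    Items with fewer than two labels are not pairable and are skipped."""
    coincidences = Counter()
    for labels in units:
        m = len(labels)
        if m < 2:
            continue
        for a, b in permutations(labels, 2):   # ordered pairs within the item
            coincidences[(a, b)] += 1.0 / (m - 1)
    n = sum(coincidences.values())             # total pairable values
    totals = Counter()                         # per-label marginals
    for (a, _), w in coincidences.items():
        totals[a] += w
    observed = sum(w for (a, b), w in coincidences.items() if a != b) / n
    expected = sum(totals[a] * totals[b]
                   for a in totals for b in totals if a != b) / (n * (n - 1))
    return 1.0 if expected == 0 else 1.0 - observed / expected

# Two annotators labelling five noun senses as count or mass:
items = [["count", "count"], ["mass", "mass"], ["count", "mass"],
         ["mass", "mass"], ["count", "count"]]
print(krippendorff_alpha_nominal(items))  # ~0.64
```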
inproceedings
de-chalendar-2014-lima
The {LIMA} Multilingual Analyzer Made Free: {FLOSS} Resources Adaptation and Correction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1313/
de Chalendar, Ga{\"e}l
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2932--2937
At CEA LIST, we have decided to release our multilingual analyzer LIMA as Free software. As we did not own all the language resources it used, we had to select and adapt free ones in order to attain results equivalent to those obtained with our previous resources. For English and French, we found and adapted a full-form dictionary and an annotated corpus for learning part-of-speech tagging models.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,349
inproceedings
marelli-etal-2014-sick
A {SICK} cure for the evaluation of compositional distributional semantic models
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1314/
Marelli, Marco and Menini, Stefano and Baroni, Marco and Bentivogli, Luisa and Bernardi, Raffaella and Zamparelli, Roberto
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
216--223
Shared and internationally recognized benchmarks are fundamental for the development of any computational system. We aim to help the research community working on compositional distributional semantic models (CDSMs) by providing SICK (Sentences Involving Compositional Knowledge), a large English benchmark tailored for them. SICK consists of about 10,000 English sentence pairs that include many examples of the lexical, syntactic and semantic phenomena that CDSMs are expected to account for, but do not require dealing with other aspects of existing sentential data sets (idiomatic multiword expressions, named entities, telegraphic language) that are not within the scope of CDSMs. By means of crowdsourcing techniques, each pair was annotated for two crucial semantic tasks: relatedness in meaning (with a 5-point rating scale as gold score) and entailment relation between the two elements (with three possible gold labels: entailment, contradiction, and neutral). The SICK data set was used in SemEval-2014 Task 1, and it is freely available for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,350
inproceedings
rahayudi-etal-2014-twente
Twente Debate Corpus {---} A Multimodal Corpus for Head Movement Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1315/
Rahayudi, Bayu and Poppe, Ronald and Heylen, Dirk
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4184--4188
This paper introduces a multimodal discussion corpus for the study of head movement and turn-taking patterns in debates. Given that participants either acted alone or in a pair, cooperation and competition and their nonverbal correlates can be analyzed. In addition to the video and audio of the recordings, the corpus contains automatically estimated head movements, and manual annotations of who is speaking and who is looking where. The corpus consists of over 2 hours of debates, in 6 groups with 18 participants in total. We describe the recording setup and present initial analyses of the recorded data. We found that the person who acted as single debater speaks more and also receives more attention than the other debaters, even when corrected for speaking time. We also found that a single debater was more likely to speak after a team debater. Future work will be aimed at further analysis of the relation between speaking and looking patterns, the outcome of the debate and the perceived dominance of the debaters.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,351
inproceedings
hamalainen-etal-2014-easr
The {EASR} Corpora of {E}uropean {P}ortuguese, {F}rench, {H}ungarian and {P}olish Elderly Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1316/
H{\"am{\"al{\"ainen, Annika and Avelar, Jairo and Rodrigues, Silvia and Dias, Miguel Sales and Kolesi{\'nski, Artur and Fegy{\'o, Tibor and N{\'emeth, G{\'eza and Csob{\'anka, Petra and Lan, Karine and Hewson, David
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1458--1464
Currently available speech recognisers do not usually work well with elderly speech. This is because several characteristics of speech (e.g. fundamental frequency, jitter, shimmer and harmonic noise ratio) change with age and because the acoustic models used by speech recognisers are typically trained with speech collected from younger adults only. To develop speech-driven applications capable of successfully recognising elderly speech, this type of speech data is needed for training acoustic models from scratch or for adapting acoustic models trained with younger adults’ speech. However, the availability of suitable elderly speech corpora is still very limited. This paper describes an ongoing project to design, collect, transcribe and annotate large elderly speech corpora for four European languages: Portuguese, French, Hungarian and Polish. The Portuguese, French and Polish corpora contain read speech only, whereas the Hungarian corpus also contains spontaneous command and control type of speech. Depending on the language in question, the corpora contain 76 to 205 hours of speech collected from 328 to 986 speakers aged 60 and over. The final corpora will come with manually verified orthographic transcriptions, as well as annotations for filled pauses, noises and damaged words.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,352
inproceedings
abrate-etal-2014-sharing
Sharing Cultural Heritage: the Clavius on the Web Project
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1317/
Abrate, Matteo and Del Grosso, Angelo Mario and Giovannetti, Emiliano and Duca, Angelica Lo and Luzzi, Damiana and Mancini, Lorenzo and Marchetti, Andrea and Pedretti, Irene and Piccini, Silvia
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
627--634
In the last few years the amount of manuscripts digitized and made available on the Web has been constantly increasing. However, there is still a considerable lack of results concerning both the explicit representation of their content and the tools developed to make it available. The objective of the Clavius on the Web project is to develop a Web platform exposing a selection of Christophorus Clavius letters along with three different levels of analysis: linguistic, lexical and semantic. The multilayered annotation of the corpus involves an XML-TEI encoding followed by a tokenization step where each token is uniquely identified through a CTS urn notation and then associated with a part-of-speech and a lemma. The text is lexically and semantically annotated on the basis of a lexicon and a domain ontology, the former structuring the most relevant terms occurring in the text and the latter representing the domain entities of interest (e.g. people, places, etc.). Moreover, each entity is connected to linked and non-linked resources, including DBpedia and VIAF. Finally, the results of the three layers of analysis are gathered and shown through interactive visualization and storytelling techniques. A demo version of the integrated architecture was developed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,353
inproceedings
liu-etal-2014-3d
3{D} Face Tracking and Multi-Scale, Spatio-temporal Analysis of Linguistically Significant Facial Expressions and Head Positions in {ASL}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1318/
Liu, Bo and Liu, Jingjing and Yu, Xiang and Metaxas, Dimitris and Neidle, Carol
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4512--4518
Essential grammatical information is conveyed in signed languages by clusters of events involving facial expressions and movements of the head and upper body. This poses a significant challenge for computer-based sign language recognition. Here, we present new methods for the recognition of nonmanual grammatical markers in American Sign Language (ASL) based on: (1) new 3D tracking methods for the estimation of 3D head pose and facial expressions to determine the relevant low-level features; (2) methods for higher-level analysis of component events (raised/lowered eyebrows, periodic head nods and head shakes) used in grammatical markings{\textemdash}with differentiation of temporal phases (onset, core, offset, where appropriate), analysis of their characteristic properties, and extraction of corresponding features; (3) a 2-level learning framework to combine low- and high-level features of differing spatio-temporal scales. This new approach achieves significantly better tracking and recognition results than our previous methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,354
inproceedings
geer-keane-2014-exploring
Exploring factors that contribute to successful fingerspelling comprehension
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1319/
Geer, Leah and Keane, Jonathan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1905--1910
Using a novel approach, we examine which cues in a fingerspelling stream, namely holds or transitions, allow for more successful comprehension by students learning American Sign Language (ASL). Sixteen university-level ASL students participated in this study. They were shown video clips of a native signer fingerspelling common English words. Clips were modified in the following ways: all were slowed down to half speed, one-third of the clips were modified to black out the transition portion of the fingerspelling stream, and one-third modified to have holds blacked out. The remaining third of clips were free of blacked-out portions, which we used to establish a baseline of comprehension. Research by Wilcox (1992), among others, suggested that transitions provide richer information, and thus items with the holds blacked out should be easier to comprehend than items with the transitions blacked out. This was not found to be the case here. Students achieved higher comprehension scores when hold information was provided. Data from this project can be used to design training tools to help students become more proficient at fingerspelling comprehension, a skill with which most students struggle.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,355
inproceedings
niraula-etal-2014-dare
The {DARE} Corpus: A Resource for Anaphora Resolution in Dialogue Based Intelligent Tutoring Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1320/
Niraula, Nobal and Rus, Vasile and Banjade, Rajendra and Stefanescu, Dan and Baggett, William and Morgan, Brent
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3199--3203
We describe the DARE corpus, an annotated data set focusing on pronoun resolution in tutorial dialogue. Although data sets for general purpose anaphora resolution exist, they are not suitable for dialogue based Intelligent Tutoring Systems. To the best of our knowledge, no data set is currently available for pronoun resolution in dialogue based intelligent tutoring systems. The described DARE corpus consists of 1,000 annotated pronoun instances collected from conversations between high-school students and the intelligent tutoring system DeepTutor. The data set is publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,356
inproceedings
forcada-2014-annotation
On the annotation of {TMX} translation memories for advanced leveraging in computer-aided translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1321/
Forcada, Mikel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4374--4378
The term advanced leveraging refers to extensions beyond the current usage of translation memory (TM) in computer-aided translation (CAT). One of these extensions is the ability to identify and use matches on the sub-segment level {\textemdash} for instance, using sub-sentential elements when segments are sentences{\textemdash} to help the translator when a reasonable fuzzy-matched proposal is not available; some such functionalities have started to become available in commercial CAT tools. Resources such as statistical word aligners, external machine translation systems, glossaries and term bases could be used to identify and annotate segment-level translation units at the sub-segment level, but there is currently no single, agreed standard supporting the interchange of sub-segmental annotation of translation memories to create a richer translation resource. This paper discusses the capabilities and limitations of some current standards, envisages possible alternatives, and ends with a tentative proposal which slightly abuses (repurposes) the usage of existing elements in the TMX standard.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,357
inproceedings
hennig-etal-2014-ans
The {D}-{ANS} corpus: the {D}ublin-Autonomous Nervous System corpus of biosignal and multimodal recordings of conversational speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1322/
Hennig, Shannon and Chellali, Ryad and Campbell, Nick
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3438--3443
Biosignals, such as electrodermal activity (EDA) and heart rate, are increasingly being considered as potential data sources to provide information about the temporal fluctuations in affective experience during human interaction. This paper describes an English-speaking, multiple session corpus of small groups of people engaged in informal, unscripted conversation while wearing wireless, wrist-based EDA sensors. Additionally, one participant per recording session wore a heart rate monitor. This corpus was collected in order to observe potential interactions between various social and communicative phenomena and the temporal dynamics of the recorded biosignals. Here we describe the communicative context, technical set-up, synchronization process, and challenges in collecting and utilizing such data. We describe the segmentation and annotations to date, including laughter annotations, and how the research community can access and collaborate on this corpus now and in the future. We believe this corpus is particularly relevant to researchers interested in unscripted social conversation as well as to researchers with a specific interest in observing the dynamics of biosignals during informal social conversation rich with examples of laughter, conversational turn-taking, and non-task-based interaction.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,358
inproceedings
moro-etal-2014-annotating
Annotating the {MASC} Corpus with {B}abel{N}et
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1323/
Moro, Andrea and Navigli, Roberto and Tucci, Francesco Maria and Passonneau, Rebecca J.
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4214--4219
In this paper we tackle the problem of automatically annotating, with both word senses and named entities, the MASC 3.0 corpus, a large English corpus covering a wide range of genres of written and spoken text. We use BabelNet 2.0, a multilingual semantic network which integrates both lexicographic and encyclopedic knowledge, as our sense/entity inventory, together with its semantic structure, to perform the aforementioned annotation task. Word sense annotated corpora have been around for more than twenty years, helping the development of Word Sense Disambiguation algorithms by providing both training and testing grounds. More recently Entity Linking has followed the same path, with the creation of huge resources containing annotated named entities. However, to date, there has been no resource that contains both kinds of annotation. In this paper we present an automatic approach for performing this annotation, together with its output on the MASC corpus. We use this corpus because its goal of integrating different types of annotations aligns exactly with ours. Our overall aim is to stimulate research on the joint exploitation and disambiguation of word senses and named entities. Finally, we estimate the quality of our annotations using both manually-tagged named entities and word senses, obtaining an accuracy of roughly 70{\%} for both named entity and word sense annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,359
inproceedings
bastings-simaan-2014-fragments
All Fragments Count in Parser Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1324/
Bastings, Jasmijn and Sima{'}an, Khalil
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
78--82
PARSEVAL, the default paradigm for evaluating constituency parsers, calculates parsing success (Precision/Recall) as a function of the number of matching labeled brackets across the test set. Nodes in constituency trees, however, are connected together to reflect important linguistic relations such as predicate-argument and direct-dominance relations between categories. In this paper, we present FREVAL, a generalization of PARSEVAL, where the precision and recall are calculated not only for individual brackets, but also for co-occurring, connected brackets (i.e. fragments). FREVAL fragments precision (FLP) and recall (FLR) interpolate the match across the whole spectrum of fragment sizes ranging from those consisting of individual nodes (labeled brackets) to those consisting of full parse trees. We provide evidence that FREVAL is informative for inspecting relative parser performance by comparing a range of existing parsers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,360
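FREVAL, described above, generalizes PARSEVAL from individual labeled brackets to connected fragments. As a baseline point of reference, a minimal sketch of plain PARSEVAL labeled precision/recall over (label, start, end) brackets; the fragment-level interpolation that FREVAL adds is not implemented here.

```python
from collections import Counter

def parseval(gold_brackets, pred_brackets):
    """Labeled bracket precision/recall/F1; brackets are (label, start, end)
    tuples, and Counters treat duplicate brackets as a multiset."""
    gold, pred = Counter(gold_brackets), Counter(pred_brackets)
    matched = sum((gold & pred).values())       # multiset intersection
    precision = matched / max(sum(pred.values()), 1)
    recall = matched / max(sum(gold.values()), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)]
pred = [("S", 0, 5), ("NP", 0, 2), ("NP", 2, 5)]
print(parseval(gold, pred))  # 2 of 3 brackets match: (2/3, 2/3, 2/3)
```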
inproceedings
garrido-etal-2014-texafon
{T}ex{AF}on 2.0: A text processing tool for the generation of expressive speech in {TTS} applications
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1325/
Garrido, Juan Mar{\'i}a and Laplaza, Yesika and Kolz, Benjamin and Cornudella, Miquel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3494--3500
This paper presents TexAfon 2.0, an improved version of the text processing tool TexAFon, specially oriented to the generation of synthetic speech with expressive content. TexAFon is a text processing module in Catalan and Spanish for TTS systems, which performs all the typical tasks needed for the generation of synthetic speech from text: sentence detection, pre-processing, phonetic transcription, syllabication, prosodic segmentation and stress prediction. These improvements include a new normalisation module for the standardisation on chat text in Spanish, a module for the detection of the expressed emotions in the input text, and a module for the automatic detection of the intended speech acts, which are briefly described in the paper. The results of the evaluations carried out for each module are also presented.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,361
inproceedings
seraji-etal-2014-persian
A {P}ersian Treebank with {S}tanford Typed Dependencies
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1326/
Seraji, Mojgan and Jahani, Carina and Megyesi, Be{\'a}ta and Nivre, Joakim
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
796--801
We present the Uppsala Persian Dependency Treebank (UPDT) with a syntactic annotation scheme based on Stanford Typed Dependencies. The treebank consists of 6,000 sentences and 151,671 tokens with an average sentence length of 25 words. The data is from different genres, including newspaper articles and fiction, as well as technical descriptions and texts about culture and art, taken from the open source Uppsala Persian Corpus (UPC). The syntactic annotation scheme is extended for Persian to include all syntactic relations that could not be covered by the primary scheme developed for English. In addition, we present open source tools for automatic analysis of Persian containing a text normalizer, a sentence segmenter and tokenizer, a part-of-speech tagger, and a parser. The treebank and the parser have been developed simultaneously in a bootstrapping procedure. The result of a parsing experiment shows an overall labeled attachment score of 82.05{\%} and an unlabeled attachment score of 85.29{\%}. The treebank is freely available as an open source resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,362
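The UPDT entry above reports a labeled attachment score (LAS) of 82.05% and an unlabeled attachment score (UAS) of 85.29%. A minimal sketch of how these dependency-parsing metrics are conventionally computed from per-token (head, relation) pairs; the data layout is our own assumption.

```python
def attachment_scores(gold, pred):
    """gold, pred: one (head_index, relation) pair per token.
    UAS counts correct heads; LAS also requires the correct relation."""
    assert len(gold) == len(pred) and gold
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
    las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    return uas, las

gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (1, "obj")]  # wrong head on token 3
print(attachment_scores(gold, pred))  # (2/3, 2/3)
```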
inproceedings
baranes-sagot-2014-language
A Language-independent Approach to Extracting Derivational Relations from an Inflectional Lexicon
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1327/
Baranes, Marion and Sagot, Beno{\^i}t
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2793--2799
In this paper, we describe and evaluate an unsupervised method for acquiring pairs of lexical entries belonging to the same morphological family, i.e., derivationally related words, starting from a purely inflectional lexicon. Our approach relies on transformation rules that relate lexical entries to one another, and which are automatically extracted from the inflected lexicon based on surface form analogies and on part-of-speech information. It is generic enough to be applied to any language with a mainly concatenative derivational morphology. Results were obtained and evaluated on English, French, German and Spanish. Precision results are satisfying, and our French results compare favorably with another resource, although its construction relied on manually developed lexicographic information whereas our approach only requires an inflectional lexicon.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,363
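The method above extracts transformation rules from surface-form analogies over an inflectional lexicon. A toy sketch of one ingredient, under our own simplifying assumptions: deriving suffix-rewrite rules from pairs of putatively related word forms by stripping their longest common prefix, then counting rule frequencies. This is an illustration only, not the paper's algorithm.

```python
from collections import Counter
from os.path import commonprefix  # works on strings as character sequences

def suffix_rule(word_a, word_b):
    """E.g. ("drinkable", "drink") -> ("able", "")."""
    stem = commonprefix([word_a, word_b])
    return word_a[len(stem):], word_b[len(stem):]

pairs = [("drinkable", "drink"), ("readable", "read"),
         ("national", "nation"), ("personal", "person")]
rules = Counter(suffix_rule(a, b) for a, b in pairs)
print(rules.most_common())  # frequent rules hint at productive affixes
```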
inproceedings
kucuk-etal-2014-named
Named Entity Recognition on {T}urkish Tweets
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1328/
K{\"u{\c{c{\"uk, Dilek and Jacquet, Guillaume and Steinberger, Ralf
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
450--454
Various recent studies show that the performance of named entity recognition (NER) systems developed for well-formed text types drops significantly when applied to tweets. The only existing study for the highly inflected agglutinative language Turkish reports a drop in F-Measure from 91{\%} to 19{\%} when ported from news articles to tweets. In this study, we present a new named entity-annotated tweet corpus and a detailed analysis of the various tweet-specific linguistic phenomena. We perform comparative NER experiments with a rule-based multilingual NER system adapted to Turkish on three corpora: a news corpus, our new tweet corpus, and another tweet corpus. Based on the analysis and the experimentation results, we suggest system features required to improve NER results for social media like Twitter.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,364
inproceedings
lacheret-etal-2014-rhapsodie
{R}hapsodie: a Prosodic-Syntactic Treebank for Spoken {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1329/
Lacheret, Anne and Kahane, Sylvain and Beliao, Julie and Dister, Anne and Gerdes, Kim and Goldman, Jean-Philippe and Obin, Nicolas and Pietrandrea, Paola and Tchobanov, Atanas
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
295--301
The main objective of the Rhapsodie project (ANR Rhapsodie 07 Corp-030-01) was to define rich, explicit, and reproducible schemes for the annotation of prosody and syntax in different genres ({\ensuremath{\pm}} spontaneous, {\ensuremath{\pm}} planned, face-to-face interviews vs. broadcast, etc.), in order to study the prosody/syntax/discourse interface in spoken French, and their roles in the segmentation of speech into discourse units (Lacheret, Kahane, {\&} Pietrandrea forthcoming). We here describe the deliverable, a syntactic and prosodic treebank of spoken French, composed of 57 short samples of spoken French (5 minutes long on average, amounting to 3 hours of speech and 33000 words), orthographically and phonetically transcribed. The transcriptions and the annotations are all aligned on the speech signal: phonemes, syllables, words, speakers, overlaps. This resource is freely available at www.projet-rhapsodie.fr. The sound samples (wav/mp3), the acoustic analysis (original F0 curve manually corrected and automatic stylized F0, pitch format), the orthographic transcriptions (txt), the microsyntactic annotations (tabular format), the macrosyntactic annotations (txt, tabular format), the prosodic annotations (xml, textgrid, tabular format), and the metadata (xml and html) can be freely downloaded under the terms of the Creative Commons licence Attribution - Noncommercial - Share Alike 3.0 France. The metadata are encoded in the IMDI-CMFI format and can be parsed on line.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,365
inproceedings
marimon-etal-2014-iula
The {IULA} {S}panish {LSP} Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1330/
Marimon, Montserrat and Bel, N{\'u}ria and Fisas, Beatriz and Arias, Blanca and V{\'a}zquez, Silvia and Vivaldi, Jorge and Morell, Carlos and Lorente, Merc{\`e}
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
782--788
This paper presents the IULA Spanish LSP Treebank, a dependency treebank of over 41,000 sentences of different domains (Law, Economy, Computing Science, Environment, and Medicine), developed in the framework of the European project METANET4U. Dependency annotations in the treebank were automatically derived from manually selected parses produced by an HPSG-grammar by a deterministic conversion algorithm that used the identifiers of grammar rules to identify the heads, the dependents, and some dependency types that were directly transferred onto the dependency structure (e.g., subject, specifier, and modifier), and the identifiers of the lexical entries to identify the argument-related dependency functions (e.g. direct object, indirect object, and oblique complement). The treebank is accessible with a browser that provides concordance-based search functions and delivers the results in two formats: (i) a column-based format, in the style of CoNLL-2006 shared task, and (ii) a dependency graph, where dependency relations are noted by an oriented arrow which goes from the dependent node to the head node. The IULA Spanish LSP Treebank is the first technical corpus of Spanish annotated at surface syntactic level following the dependency grammar theory. The treebank has been made publicly and freely available from the META-SHARE platform with a Creative Commons CC-by licence.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,366
inproceedings
goryainova-etal-2014-morpho
Morpho-Syntactic Study of Errors from Speech Recognition System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1331/
Goryainova, Maria and Grouin, Cyril and Rosset, Sophie and Vasilescu, Ioana
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3045--3049
The study provides an original perspective on speech transcription errors by focusing on the morpho-syntactic features of the erroneous chunks and of the surrounding left and right contexts. The typology concerns the forms, the lemmas and the POS involved in erroneous chunks, and in the surrounding contexts. Comparisons with error-free contexts are also provided. The study is conducted on French. Morpho-syntactic analysis underlines that three main classes are particularly represented in the erroneous chunks: (i) grammatical words (to, of, the), (ii) auxiliary verbs (has, is), and (iii) modal verbs (should, must). Such items are widely encountered in the ASR outputs as frequent candidates for transcription errors. The analysis of the context points out that some left 3-gram contexts (e.g., repetitions, that is, disfluencies, bracketing formulas such as {\textquotedblleft}c’est{\textquotedblright}, etc.) may be better predictors than others. Finally, the surface analysis, conducted through a Levenshtein distance analysis, highlighted that the most common distance is 2 characters and mainly involves differences between inflected forms of a unique item.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,367
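The surface analysis above is based on Levenshtein distance between erroneous and reference forms. A standard dynamic-programming sketch with unit costs, for readers who want to reproduce that kind of measurement:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions and
    substitutions turning string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

# Inflected forms of one lemma often differ by only a couple of characters:
print(levenshtein("chante", "chantez"))   # 1
print(levenshtein("chanter", "chantais")) # 3
```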
inproceedings
xue-etal-2014-interlingua
Not an Interlingua, But Close: Comparison of {E}nglish {AMR}s to {C}hinese and {C}zech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1332/
Xue, Nianwen and Bojar, Ond{\v{r}}ej and Haji{\v{c}}, Jan and Palmer, Martha and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka and Zhang, Xiuhong
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1765--1772
Abstract Meaning Representations (AMRs) are rooted, directional and labeled graphs that abstract away from morpho-syntactic idiosyncrasies such as word category (verbs and nouns), word order, and function words (determiners, some prepositions). Because these syntactic idiosyncrasies account for many of the cross-lingual differences, it would be interesting to see if this representation can serve, e.g., as a useful, minimally divergent transfer layer in machine translation. To answer this question, we have translated 100 English sentences that have existing AMRs into Chinese and Czech to create AMRs for them. A cross-linguistic comparison of English to Chinese and Czech AMRs reveals both cases where the AMRs for the language pairs align well structurally and cases of linguistic divergence. We found that the level of compatibility of AMR between English and Chinese is higher than between English and Czech. We believe this kind of comparison is beneficial to further refining the annotation standards for each of the three languages and will lead to more compatible annotation guidelines between the languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,368
inproceedings
klassen-etal-2014-annotating
Annotating Clinical Events in Text Snippets for Phenotype Detection
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1334/
Klassen, Prescott and Xia, Fei and Vanderwende, Lucy and Yetisgen, Meliha
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2753--2757
Early detection and treatment of diseases that onset after a patient is admitted to a hospital, such as pneumonia, is critical to improving and reducing costs in healthcare. NLP systems that analyze the narrative data embedded in clinical artifacts such as x-ray reports can help support early detection. In this paper, we consider the importance of identifying the change of state for events - in particular, clinical events that measure and compare the multiple states of a patient’s health across time. We propose a schema for event annotation comprised of five fields and create preliminary annotation guidelines for annotators to apply the schema. We then train annotators, measure their performance, and finalize our guidelines. With the complete guidelines, we then annotate a corpus of snippets extracted from chest x-ray reports in order to integrate the annotations as a new source of features for classification tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,370
inproceedings
ruiz-etal-2014-phoneme
Phoneme Similarity Matrices to Improve Long Audio Alignment for Automatic Subtitling
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1335/
Ruiz, Pablo and {\'A}lvarez, Aitor and Arzelus, Haritz
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
437--442
Long audio alignment systems for Spanish and English are presented, within an automatic subtitling application. Language-specific phone decoders automatically recognize audio contents at phoneme level. At the same time, language-dependent grapheme-to-phoneme modules perform a transcription of the script for the audio. A dynamic programming algorithm (Hirschberg’s algorithm) finds matches between the phonemes automatically recognized by the phone decoder and the phonemes in the script’s transcription. Alignment accuracy is evaluated when scoring alignment operations with a baseline binary matrix, and when scoring alignment operations with several continuous-score matrices, based on phoneme similarity as assessed through comparing multivalued phonological features. Alignment accuracy results are reported at phoneme, word and subtitle level. Alignment accuracy when using the continuous scoring matrices based on phonological similarity was clearly higher than when using the baseline binary matrix.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,371
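The long-audio-alignment entry above scores phoneme alignments with a similarity matrix inside a dynamic-programming aligner. A minimal quadratic-space sketch of that recurrence with a pluggable scoring function; the paper uses Hirschberg's linear-space variant of the same recurrence, and the similarity values below are invented for illustration.

```python
def global_alignment_score(ref, hyp, score, gap=-1.0):
    """Needleman-Wunsch-style global alignment score of two phoneme
    sequences, with a pluggable similarity function `score(p, q)` and a
    `gap` penalty for insertions/deletions. Hirschberg's algorithm
    computes the same optimum in linear space."""
    prev = [j * gap for j in range(len(hyp) + 1)]
    for i, p in enumerate(ref, 1):
        curr = [i * gap]
        for j, q in enumerate(hyp, 1):
            curr.append(max(prev[j - 1] + score(p, q),  # match / substitute
                            prev[j] + gap,              # gap in hyp
                            curr[j - 1] + gap))         # gap in ref
        prev = curr
    return prev[-1]

# Invented similarity values: 1 for identical phonemes, 0.5 for pairs that
# share most phonological features, 0 otherwise.
SIMILAR = {frozenset({"b", "p"}), frozenset({"d", "t"})}
def toy_score(p, q):
    return 1.0 if p == q else (0.5 if frozenset({p, q}) in SIMILAR else 0.0)

print(global_alignment_score(list("bad"), list("pat"), toy_score))  # 2.0
```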
inproceedings
chatzimina-etal-2014-use
Use of unsupervised word classes for entity recognition: Application to the detection of disorders in clinical reports
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1336/
Chatzimina, Maria Evangelia and Grouin, Cyril and Zweigenbaum, Pierre
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3264--3271
Unsupervised word classes induced from unannotated text corpora are increasingly used to help tasks addressed by supervised classification, such as standard named entity detection. This paper studies the contribution of unsupervised word classes to a medical entity detection task with two specific objectives: How do unsupervised word classes compare to available knowledge-based semantic classes? Does syntactic information help produce unsupervised word classes with better properties? We design and test two syntax-based methods to produce word classes: one applies the Brown clustering algorithm to syntactic dependencies, the other collects latent categories created by a PCFG-LA parser. When added to non-semantic features, knowledge-based semantic classes gain 7.28 points of F-measure. In the same context, basic unsupervised word classes gain 4.16pt, reaching 60{\%} of the contribution of knowledge-based semantic classes and outperforming Wikipedia, and adding PCFG-LA unsupervised word classes gain one more point at 5.11pt, reaching 70{\%}. Unsupervised word classes could therefore provide a useful semantic back-off in domains where no knowledge-based semantic classes are available. The combination of both knowledge-based and basic unsupervised classes gains 8.33pt. Therefore, unsupervised classes are still useful even when rich knowledge-based classes exist.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,372
inproceedings
hajicova-2014-three
Three dimensions of the so-called {\textquotedblleft}interoperability{\textquotedblright} of annotation schemes
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1337/
Haji{\v{c}}ov{\'a}, Eva
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4559--4564
“Interoperability” of annotation schemes is one of the key words in the discussions about annotation of corpora. In the present contribution, we propose to look at the so-called interoperability from (at least) three angles, namely (i) as a relation (and possible interaction or cooperation) of different annotation schemes for different layers or phenomena of a single language, (ii) the possibility to annotate different languages by a single (modified or not) annotation scheme, and (iii) the relation between different annotation schemes for a single language, or for a single phenomenon or layer of the same language. The pros and cons of each of these aspects are discussed as well as their contribution to linguistic studies and natural language processing. It is stressed that a communication and collaboration between different annotation schemes requires an explicit specification and consistency of each of the schemes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,373
inproceedings
kaeshammer-westburg-2014-complex
On Complex Word Alignment Configurations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1338/
Kaeshammer, Miriam and Westburg, Anika
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1773--1780
Resources of manual word alignments contain configurations that are beyond the alignment capacity of current translation models, hence the term complex alignment configuration. They have been the matter of some debate in the machine translation community, as they call for more powerful translation models that come with further complications. In this work we investigate instances of complex alignment configurations in data sets of four different language pairs to shed more light on the nature and cause of those configurations. For the English-German alignments from Pad{\'o} and Lapata (2006), for instance, we find that only a small fraction of the complex configurations are due to real annotation errors. While a third of the complex configurations in this data set could be simplified when annotating according to a different style guide, the remaining ones are phenomena that one would like to be able to generate during translation. Those instances are mainly caused by the different word order of English and German. Our findings thus motivate further research in the area of translation beyond phrase-based and context-free translation modeling.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,374
inproceedings
kokkinakis-etal-2014-hfst
{HFST}-{S}we{NER} {---} A New {NER} Resource for {S}wedish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1339/
Kokkinakis, Dimitrios and Niemi, Jyrki and Hardwick, Sam and Lind{\'e}n, Krister and Borin, Lars
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2537--2543
Named entity recognition (NER) is a knowledge-intensive information extraction task that is used for recognizing textual mentions of entities that belong to a predefined set of categories, such as locations, organizations and time expressions. NER is a challenging, difficult, yet essential preprocessing technology for many natural language processing applications, and particularly crucial for language understanding. NER has been actively explored in academia and in industry especially during the last years due to the advent of social media data. This paper describes the conversion, modeling and adaptation of a Swedish NER system from a hybrid environment, with integrated functionality from various processing components, to the Helsinki Finite-State Transducer Technology (HFST) platform. This new HFST-based NER (HFST-SweNER) is a full-fledged open source implementation that supports a variety of generic named entity types and consists of multiple, reusable resource layers, e.g., various n-gram-based named entity lists (gazetteers).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,375
inproceedings
saad-etal-2014-building
Building and Modelling Multilingual Subjective Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1340/
Saad, Motaz and Langlois, David and Sma{\"i}li, Kamel
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3086--3091
Building multilingual opinionated models requires multilingual corpora annotated with opinion labels. Unfortunately, such corpora are rare. We consider opinions in this work as subjective or objective. In this paper, we introduce an annotation method that can be reliably transferred across topic domains and across languages. The method starts by building a classifier that annotates sentences with a subjective/objective label using training data from the {\textquotedblleft}movie reviews{\textquotedblright} domain, which is in the English language. The annotation can be transferred to another language by classifying the English sentences in parallel corpora and transferring the same annotation to the same sentences of the other language. We also shed light on the link between opinion mining and statistical language modelling, and how such corpora are useful for domain-specific language modelling. We show that the distinction between subjective and objective sentences tends to be stable across domains and languages. Our experiments show that language models trained on an objective (respectively subjective) corpus lead to better perplexities on objective (respectively subjective) test sets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,376
inproceedings
schuppler-etal-2014-grass
{GRASS}: the Graz corpus of Read And Spontaneous Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1341/
Schuppler, Barbara and Hagmueller, Martin and Morales-Cordovilla, Juan A. and Pessentheiner, Hannes
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1465--1470
This paper provides a description of the preparation, the speakers, the recordings, and the creation of the orthographic transcriptions of the first large-scale speech database for Austrian German. It contains approximately 1900 minutes of (read and spontaneous) speech produced by 38 speakers. The corpus consists of three components. First, the Conversation Speech (CS) component contains free conversations of one hour in length between friends, colleagues, couples, or family members. Second, the Commands Component (CC) contains commands and keywords which were either read or elicited by pictures. Third, the Read Speech (RS) component contains phonetically balanced sentences and digits. The speech of all components has been recorded at super-wideband quality in a soundproof recording studio with head-mounted microphones, large-diaphragm microphones, a laryngograph, and with a video camera. The orthographic transcriptions, which have been created and subsequently corrected manually, contain approximately 290 000 word tokens from 15 000 different word types.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,377
inproceedings
windhouwer-schuurman-2014-linguistic
Linguistic resources and cats: how to use {ISO}cat, {REL}cat and {SCHEMA}cat
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1342/
Windhouwer, Menzo and Schuurman, Ineke
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3806--3810
Within the European CLARIN infrastructure, ISOcat is used to enable both humans and computer programs to find specific resources even when they use different terminology or data structures. In order to do so, it should be clear which concepts are used in these resources, both at the level of metadata for the resource as well as its content, and what is meant by them. The concepts can be specified in ISOcat. SCHEMAcat enables us to relate the concepts used by a resource, while RELcat enables us to type these relationships and to add relationships beyond resource boundaries. In this way, these three registries together allow us (and the programs) to find what we are looking for.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,378
inproceedings
borin-etal-2014-bring
Bring vs. {MTR}oget: Evaluating automatic thesaurus translation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1343/
Borin, Lars and Allwood, Jens and de Melo, Gerard
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2115--2121
Evaluation of automatic language-independent methods for language technology resource creation is difficult, and confounded by a largely unknown quantity, viz. to what extent typological differences among languages are significant for results achieved for one language or language pair to be applicable across languages generally. In the work presented here, as a simplifying assumption, language-independence is taken as axiomatic within certain specified bounds. We evaluate the automatic translation of Roget`s {\textquotedblleft}Thesaurus{\textquotedblright} from English into Swedish using an independently compiled Roget-style Swedish thesaurus, S.C. Bring`s {\textquotedblleft}Swedish vocabulary arranged into conceptual classes{\textquotedblright} (1930). Our expectation is that this explicit evaluation of one of the thesauruses created in the MTRoget project will provide a good estimate of the quality of the other thesauruses created using similar methods.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,379
inproceedings
lopez-otero-etal-2014-introducing
Introducing a Framework for the Evaluation of Music Detection Tools
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1344/
Lopez-Otero, Paula and Docio-Fernandez, Laura and Garcia-Mateo, Carmen
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
568--572
The huge amount of multimedia information available nowadays makes its manual processing prohibitive, requiring tools for the automatic labelling of these contents. This paper describes a framework for assessing a music detection tool; this framework consists of a database, composed of several hours of radio recordings that include different types of radio programmes, and a set of evaluation measures for evaluating the performance of a music detection tool in detail. A tool for automatically detecting music in audio streams, with application to music information retrieval tasks, is presented as well. The aim of this tool is to discard the audio excerpts that do not contain music in order to avoid their unnecessary processing. This tool applies fingerprinting to different acoustic features extracted from the audio signal in order to remove perceptual irrelevancies, and a support vector machine is trained to classify these fingerprints into the classes music and no-music. The validity of this tool is assessed in the proposed evaluation framework.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,380
inproceedings
miller-gurevych-2014-wordnet
{W}ord{N}et{---}{W}ikipedia{---}{W}iktionary: Construction of a Three-way Alignment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1345/
Miller, Tristan and Gurevych, Iryna
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2094--2100
The coverage and quality of conceptual information contained in lexical semantic resources is crucial for many tasks in natural language processing. Automatic alignment of complementary resources is one way of improving this coverage and quality; however, past attempts have always been between pairs of specific resources. In this paper we establish some set-theoretic conventions for describing concepts and their alignments, and use them to describe a method for automatically constructing n-way alignments from arbitrary pairwise alignments. We apply this technique to the production of a three-way alignment from previously published WordNet-Wikipedia and WordNet-Wiktionary alignments. We then present a quantitative and informal qualitative analysis of the aligned resource. The three-way alignment was found to have greater coverage, an enriched sense representation, and coarser sense granularity than both the original resources and their pairwise alignments, though this came at the cost of accuracy. An evaluation of the induced word sense clusters in a word sense disambiguation task showed that they were no better than random clusters of equivalent granularity. However, use of the alignments to enrich a sense inventory with additional sense glosses did significantly improve the performance of a baseline knowledge-based WSD algorithm.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,381
inproceedings
grisot-meyer-2014-cross
Cross-linguistic annotation of narrativity for {E}nglish/{F}rench verb tense disambiguation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1346/
Grisot, Cristina and Meyer, Thomas
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
963--966
This paper presents manual and automatic annotation experiments for a pragmatic verb tense feature (narrativity) in English/French parallel corpora. The feature is considered to play an important role for translating the English Simple Past tense into French, where three different tenses are available. Whether the French Pass{\'e} Compos{\'e}, Pass{\'e} Simple or Imparfait should be used is highly dependent on a longer-range context, in which either narrative events ordered in time or mere non-narrative states of affairs in the past are described. This longer-range context is usually not available to current machine translation (MT) systems, which are trained on parallel corpora. Annotating narrativity prior to translation is therefore likely to help current MT systems. Our experiments show that narrativity can be reliably identified with kappa values of up to 0.91 in manual annotation and with F1 scores of up to 0.72 in automatic annotation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,382
inproceedings
avramidis-etal-2014-taraxu
The tara{X{\"U}} corpus of human-annotated machine translations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1347/
Avramidis, Eleftherios and Burchardt, Aljoscha and Hunsicker, Sabine and Popovi{\'c}, Maja and Tscherwinka, Cindy and Vilar, David and Uszkoreit, Hans
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2679--2682
Human translators are the key to evaluating machine translation (MT) quality and also to addressing the so far unanswered question of when and how to use MT in professional translation workflows. This paper describes the corpus developed as a result of a detailed, large-scale human evaluation consisting of three tightly connected tasks: ranking, error classification and post-editing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,383
inproceedings
el-haj-etal-2014-detecting
Detecting Document Structure in a Very Large Corpus of {UK} Financial Reports
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1348/
El-Haj, Mahmoud and Rayson, Paul and Young, Steve and Walker, Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1335--1338
In this paper we present the evaluation of our automatic methods for detecting and extracting document structure in annual financial reports. The work presented is part of the Corporate Financial Information Environment (CFIE) project, in which we are using Natural Language Processing (NLP) techniques to study the causes and consequences of corporate disclosure and financial reporting outcomes. We aim to uncover the determinants of financial reporting quality and the factors that influence the quality of information disclosed to investors beyond the financial statements. The CFIE consists of the supply of information by firms to investors, and the mediating influences of information intermediaries on the timing, relevance and reliability of information available to investors. It is important to compare and contrast specific elements or sections of each annual financial report across our entire corpus rather than working at the full document level. We show that the values of some metrics, e.g. readability, will vary across sections, thus improving on previous research based on full texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,384
inproceedings
stefanescu-etal-2014-latent
Latent Semantic Analysis Models on {W}ikipedia and {TASA}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1349/
Ștef{\u{a}}nescu, Dan and Banjade, Rajendra and Rus, Vasile
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1417--1422
This paper introduces a collection of freely available Latent Semantic Analysis models built on the entire English Wikipedia and the TASA corpus. The models differ not only on their source, Wikipedia versus TASA, but also on the linguistic items they focus on: all words, content-words, nouns-verbs, and main concepts. Generating such models from large datasets (e.g. Wikipedia), that can provide a large coverage for the actual vocabulary in use, is computationally challenging, which is the reason why large LSA models are rarely available. Our experiments show that for the task of word-to-word similarity, the scores assigned by these models are strongly correlated with human judgment, outperforming many other frequently used measures, and comparable to the state of the art.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,385
inproceedings
rehm-etal-2014-strategic
The Strategic Impact of {META}-{NET} on the Regional, National and International Level
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1350/
Rehm, Georg and Uszkoreit, Hans and Ananiadou, Sophia and Bel, N{\'u}ria and Bielevi{\v{c}}ien{\.{e}}, Audron{\.{e}} and Borin, Lars and Branco, Ant{\'o}nio and Budin, Gerhard and Calzolari, Nicoletta and Daelemans, Walter and Garab{\'i}k, Radovan and Grobelnik, Marko and Garc{\'i}a-Mateo, Carmen and van Genabith, Josef and Haji{\v{c}}, Jan and Hern{\'a}ez, Inma and Judge, John and Koeva, Svetla and Krek, Simon and Krstev, Cvetana and Lind{\'e}n, Krister and Magnini, Bernardo and Mariani, Joseph and McNaught, John and Melero, Maite and Monachini, Monica and Moreno, Asunci{\'o}n and Odijk, Jan and Ogrodniczuk, Maciej and P{\k{e}}zik, Piotr and Piperidis, Stelios and Przepi{\'o}rkowski, Adam and R{\"o}gnvaldsson, Eir{\'i}kur and Rosner, Michael and Pedersen, Bolette and Skadi{\c{n}}a, Inguna and De Smedt, Koenraad and Tadi{\'c}, Marko and Thompson, Paul and Tufi{\c{s}}, Dan and V{\'a}radi, Tam{\'a}s and Vasi{\c{l}}jevs, Andrejs and Vider, Kadri and Zabarskaite, Jolanta
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1517--1524
This article provides an overview of the dissemination work carried out in META-NET from 2010 until early 2014; we describe its impact on the regional, national and international level, mainly with regard to politics and the situation of funding for LT topics. This paper documents the initiative’s work throughout Europe in order to boost progress and innovation in our field.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,386
inproceedings
akbik-michael-2014-weltmodell
The Weltmodell: A Data-Driven Commonsense Knowledge Base
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1351/
Akbik, Alan and Michael, Thilo
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3272--3276
We present the Weltmodell, a commonsense knowledge base that was automatically generated from aggregated dependency parse fragments gathered from over 3.5 million English language books. We leverage the magnitude and diversity of this dataset to arrive at close to ten million distinct N-ary commonsense facts using techniques from open-domain Information Extraction (IE). Furthermore, we compute a range of measures of association and distributional similarity on this data. We present the results of our efforts using a browsable web demonstrator and publicly release all generated data for use and discussion by the research community. In this paper, we give an overview of our knowledge acquisition method and representation model, and present our web demonstrator.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,387
inproceedings
schiel-kisler-2014-german
{G}erman Alcohol Language Corpus - the Question of Dialect
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1352/
Schiel, Florian and Kisler, Thomas
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
353--356
Speech uttered under the influence of alcohol is known to deviate from the speech of the same person when sober. This is an important feature in forensic investigations and could also be used to detect intoxication in the automotive environment. Aside from acoustic-phonetic features and speech content, which have already been studied by others, in this contribution we address the question whether speakers use dialectal variation or dialect words more frequently when intoxicated than when sober. We analyzed 300,000 recorded word tokens in read and spontaneous speech uttered by 162 female and male speakers within the German Alcohol Language Corpus. We found that, contrary to our expectations, the frequency of dialectal forms decreases significantly when speakers are under the influence. We explain this effect with a compensatory over-shoot mechanism: speakers are aware of their intoxication and that they are being monitored. In forensic analysis of speech this {\textquoteleft}awareness factor{\textquoteright} must be taken into account.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,388
inproceedings
de-smedt-etal-2014-clara
{CLARA}: A New Generation of Researchers in Common Language Resources and Their Applications
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1353/
De Smedt, Koenraad and Hinrichs, Erhard and Meurers, Detmar and Skadi{\c{n}}a, Inguna and Pedersen, Bolette and Navarretta, Costanza and Bel, N{\'u}ria and Lind{\'e}n, Krister and Lopatkov{\'a}, Mark{\'e}ta and Haji{\v{c}}, Jan and Andersen, Gisle and Lenkiewicz, Przemyslaw
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2166--2174
CLARA (Common Language Resources and Their Applications) is a Marie Curie Initial Training Network which ran from 2009 until 2014 with the aim of providing researcher training in crucial areas related to language resources and infrastructure. The scope of the project was broad and included infrastructure design, lexical semantic modeling, domain modeling, multimedia and multimodal communication, applications, and parsing technologies and grammar models. An international consortium of 9 partners and 12 associate partners employed researchers in 19 new positions and organized a training program consisting of 10 thematic courses and summer/winter schools. The project has resulted in new theoretical insights as well as new resources and tools. Most importantly, the project has trained a new generation of researchers who can perform advanced research and development in language resources and technologies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,389
inproceedings
hartmann-etal-2014-large
A Large Corpus of Product Reviews in {P}ortuguese: Tackling Out-Of-Vocabulary Words
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1354/
Hartmann, Nathan and Avan{\c{c}}o, Lucas and Balage, Pedro and Duran, Magali and das Gra{\c{c}}as Volpe Nunes, Maria and Pardo, Thiago and Alu{\'i}sio, Sandra
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3865--3871
Web 2.0 has enabled a communication boom never imagined before. With the widespread use of computational and mobile devices, anyone, in practically any language, may post comments on the web. As such, formal language is not necessarily used. In fact, in these communicative situations, language is marked by the absence of more complex syntactic structures and the presence of internet slang, with missing diacritics, repetitions of vowels, and the use of chat-speak style abbreviations, emoticons and colloquial expressions. Such language use poses severe new challenges for Natural Language Processing (NLP) tools and applications, which, so far, have focused on well-written texts. In this work, we report the construction of a large web corpus of product reviews in Brazilian Portuguese and the analysis of its lexical phenomena, which support the development of a lexical normalization tool that will, in future work, enable the use of standard NLP products for web opinion mining and summarization purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,390
inproceedings
kunchukuttan-etal-2014-shata
Shata-Anuvadak: Tackling Multiway Translation of {I}ndian Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1355/
Kunchukuttan, Anoop and Mishra, Abhijit and Chatterjee, Rajen and Shah, Ritesh and Bhattacharyya, Pushpak
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1781--1787
We present a compendium of 110 Statistical Machine Translation systems built from parallel corpora of 11 Indian languages belonging to both Indo-Aryan and Dravidian families. We analyze the relationship between translation accuracy and the language families involved. We feel that insights obtained from this analysis will provide guidelines for creating machine translation systems of specific Indian language pairs. We build phrase based systems and some extensions. Across multiple languages, we show improvements on the baseline phrase based systems using these extensions: (1) source side reordering for English-Indian language translation, and (2) transliteration of untranslated words for Indian language-Indian language translation. These enhancements harness shared characteristics of Indian languages. To stimulate similar innovation widely in the NLP community, we have made the trained models for these language pairs publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,391
inproceedings
hinrichs-krauwer-2014-clarin
The {CLARIN} Research Infrastructure: Resources and Tools for e{H}umanities Scholars
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1356/
Hinrichs, Erhard and Krauwer, Steven
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1525--1531
CLARIN is the short name for the Common Language Resources and Technology Infrastructure, which aims at providing easy and sustainable access for scholars in the humanities and social sciences to digital language data and advanced tools to discover, explore, exploit, annotate, analyse or combine them, independent of where they are located. CLARIN is in the process of building a networked federation of European data repositories, service centers and centers of expertise, with single sign-on access for all members of the academic community in all participating countries. Tools and data from different centers will be interoperable so that data collections can be combined and tools from different sources can be chained to perform complex operations to support researchers in their work. Interoperability of language resources and tools in the federation of CLARIN Centers is ensured by adherence to TEI and ISO standards for text encoding, by the use of persistent identifiers, and by the observance of common protocols. The purpose of the present paper is to give an overview of language resources, tools, and services that CLARIN presently offers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,392
inproceedings
ai-etal-2014-sprinter
{S}printer: Language Technologies for Interactive and Multimedia Language Learning
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1357/
Ai, Renlong and Charfuelan, Marcela and Kasper, Walter and Kl{\"u}wer, Tina and Uszkoreit, Hans and Xu, Feiyu and Gasber, Sandra and Gienandt, Philip
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2733--2738
Modern language learning courses are no longer exclusively based on books or face-to-face lectures. More and more lessons make use of multimedia and personalized learning methods. Many of these are based on e-learning solutions. Learning via the Internet provides 24/7 services that require sizeable human resources. Therefore we witness a growing economic pressure to employ computer-assisted methods for improving language learning in quality, efficiency and scalability. In this paper, we will address three applications of language technologies for language learning: 1) Methods and strategies for pronunciation training in second language learning, e.g., multimodal feedback via visualization of sound features, speech verification and prosody transplantation; 2) Dialogue-based language learning games; 3) Application of parsing and generation technologies to the automatic generation of paraphrases for the semi-automatic production of learning material.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,393
inproceedings
mairidan-etal-2014-bilingual
Bilingual Dictionary Induction as an Optimization Problem
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1358/
Mairidan, Wushouer and Ishida, Toru and Lin, Donghui and Hirayama, Katsutoshi
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2122--2129
Bilingual dictionaries are vital in many areas of natural language processing, but such resources are rarely available for lower-density language pairs, especially for those that are closely related. Pivot-based induction consists of using a third language to bridge a language pair. As an approach to creating new dictionaries, it can generate wrong translations due to polysemy and ambiguous words. In this paper we propose a constraint approach to pivot-based dictionary induction for the case of two closely related languages. In order to take the word senses into account, we use an approach based on semantic distances, in which possibly missing translations are considered, and each instance of induction is encoded as an optimization problem to generate a new dictionary. Evaluations show that the proposal achieves 83.7{\%} accuracy and approximately 70.5{\%} recall, thus outperforming the baseline pivot-based method.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,394
inproceedings
macwhinney-fromm-2014-two
Two Approaches to Metaphor Detection
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1359/
MacWhinney, Brian and Fromm, Davida
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2501--2506
Methods for automatic detection and interpretation of metaphors have focused on analysis and utilization of the ways in which metaphors violate selectional preferences (Martin, 2006). Detection and interpretation processes that rely on this method can achieve wide coverage and may be able to detect some novel metaphors. However, they are prone to high false alarm rates, often arising from imprecision in parsing and supporting ontological and lexical resources. An alternative approach to metaphor detection emphasizes the fact that many metaphors become conventionalized collocations, while still preserving their active metaphorical status. Given a large enough corpus for a given language, it is possible to use tools like SketchEngine (Kilgariff, Rychly, Smrz, {\&} Tugwell, 2004) to locate these high frequency metaphors for a given target domain. In this paper, we examine the application of these two approaches and discuss their relative strengths and weaknesses for metaphors in the target domain of economic inequality in English, Spanish, Farsi, and Russian.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,395
inproceedings
mori-etal-2014-japanese
A {J}apanese Word Dependency Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1360/
Mori, Shinsuke and Ogura, Hideki and Sasada, Tetsuro
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
753--758
In this paper, we present a corpus annotated with dependency relationships in Japanese. It contains about 30 thousand sentences in various domains. Six domains in the Balanced Corpus of Contemporary Written Japanese have part-of-speech and pronunciation annotation as well. Dictionary example sentences have pronunciation annotation and cover basic vocabulary in Japanese, with English sentence equivalents. Economic newspaper articles also have pronunciation annotation, and their topics are similar to those of the Penn Treebank. Invention disclosures do not have other annotation, but they have a clear application, machine translation. The unit of our corpus is the word, as in corpora of other languages, in contrast to existing Japanese corpora, whose unit is the phrase, called bunsetsu. Each sentence is manually segmented into words. We first present the specification of our corpus. Then we give a detailed explanation of our standard of word dependency. We also report some preliminary results of an MST-based dependency parser on our corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,396
inproceedings
fromreide-etal-2014-crowdsourcing
Crowdsourcing and annotating {NER} for {T}witter {\#}drift
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1361/
Fromreide, Hege and Hovy, Dirk and S{\o}gaard, Anders
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2544--2547
We present two new NER datasets for Twitter; a manually annotated set of 1,467 tweets (kappa=0.942) and a set of 2,975 expert-corrected, crowdsourced NER annotated tweets from the dataset described in Finin et al. (2010). In our experiments with these datasets, we observe two important points: (a) language drift on Twitter is significant, and while off-the-shelf systems have been reported to perform well on in-sample data, they often perform poorly on new samples of tweets, (b) state-of-the-art performance across various datasets can be obtained from crowdsourced annotations, making it more feasible to {\textquotedblleft}catch up{\textquotedblright} with language drift.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,397
inproceedings
krause-etal-2014-language
Language Resources and Annotation Tools for Cross-Sentence Relation Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1362/
Krause, Sebastian and Li, Hong and Xu, Feiyu and Uszkoreit, Hans and Hummel, Robert and Spielhagen, Luise
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4320--4325
In this paper, we present a novel combination of two types of language resources dedicated to the detection of relevant relations (RE) such as events or facts across sentence boundaries. One of the two resources is the sar-graph, which aggregates, for each target relation, tens of thousands of linguistic patterns of semantically associated relations that signal instances of the target relation (Uszkoreit and Xu, 2013). These have been learned from the Web by intra-sentence pattern extraction (Krause et al., 2012) and, after semantic filtering and enrichment, have been automatically combined into a single graph. The other resource is cockrACE, a specially annotated corpus for the training and evaluation of cross-sentence RE. By employing our powerful annotation tool Recon, annotators mark selected entities and relations (including events), coreference relations among these entities and events, and also terms that are semantically related to the relevant relations and events. This paper describes how the two resources are created and how they complement each other.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,398
inproceedings
couillault-etal-2014-evaluating
Evaluating corpora documentation with regards to the Ethics and Big Data Charter
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1363/
Couillault, Alain and Fort, Kar{\"e}n and Adda, Gilles and de Mazancourt, Hugues
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4225--4229
The authors have written the Ethics and Big Data Charter in collaboration with various agencies, private bodies and associations. This Charter aims at describing any large or complex resource, in particular language resources, from a legal and ethical viewpoint, and at ensuring the transparency of the process of creating and distributing such resources. In this article, we propose an analysis of the documentation coverage of the most frequently mentioned language resources with regard to the Charter, in order to show the benefit it offers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,399
inproceedings
aker-etal-2014-bootstrapping
Bootstrapping Term Extractors for Multiple Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1364/
Aker, Ahmet and Paramita, Monica and Barker, Emma and Gaizauskas, Robert
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
483--489
Terminology extraction resources are needed for a wide range of human language technology applications, including knowledge management, information extraction, semantic search, cross-language information retrieval and automatic and assisted translation. We create a low-cost method for creating terminology extraction resources for 21 non-English EU languages. Using parallel corpora and a projection method, we create a general POS tagger for these languages. We also investigate the use of EuroVoc terms and a Wikipedia corpus to automatically create a term grammar for each language. Our results show that these automatically generated resources can assist the term extraction process with performance similar to that of manually generated resources. All resources resulting from this experiment are freely available for download.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,400
inproceedings
lefever-etal-2014-evaluation
Evaluation of Automatic Hypernym Extraction from Technical Corpora in {E}nglish and {D}utch
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1365/
Lefever, Els and Van de Kauter, Marjan and Hoste, V{\'e}ronique
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
490--497
In this research, we evaluate different approaches for the automatic extraction of hypernym relations from English and Dutch technical text. The detected hypernym relations should enable us to semantically structure automatically obtained term lists from domain- and user-specific data. We investigated three different hypernymy extraction approaches for Dutch and English: a lexico-syntactic pattern-based approach, a distributional model and a morpho-syntactic method. To test the performance of the different approaches on domain-specific data, we collected and manually annotated English and Dutch data from two technical domains, viz. the dredging and financial domains. The experimental results show that especially the morpho-syntactic approach obtains good results for automatic hypernym extraction from technical and domain-specific texts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,401
inproceedings
broda-etal-2014-measuring
Measuring Readability of {P}olish Texts: Baseline Experiments
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1366/
Broda, Bartosz and Nito{\'n}, Bart{\l}omiej and Gruszczy{\'n}ski, W{\l}odzimierz and Ogrodniczuk, Maciej
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
573--580
Measuring the readability of a text is the first sensible step towards its simplification. In this paper we present an overview of the most common approaches to the automatic measurement of readability. Of the described approaches, we implemented and evaluated the Gunning FOG index and the Flesch-based Pisarek method. We also present two other approaches. The first one is based on measuring the distributional lexical similarity of a target text and comparing it to reference texts. In the second one, we propose a novel method for the automation of the Taylor test {\textemdash} which, in its base form, requires performing a large number of surveys. The automation of the Taylor test is performed using a technique called statistical language modelling. We have developed a free on-line web-based system and constructed plugins for the most common text editors, namely Microsoft Word and OpenOffice.org. The inner workings of the system are described in detail. Finally, extensive evaluations are performed for Polish {\textemdash} a Slavic, highly inflected language. We show that Pisarek’s method is highly correlated with the Gunning FOG index, even if different in form, and that both the similarity-based approach and the automated Taylor test achieve high accuracy. The merits of using either of them are discussed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,402
inproceedings
winkelmann-raess-2014-introducing
Introducing a web application for labeling, visualizing speech and correcting derived speech signals
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1367/
Winkelmann, Raphael and Raess, Georg
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4129--4133
The advent of HTML5 has sparked a great increase in interest in the web as a development platform for a variety of different research applications. Due to its ability to easily deploy software to remote clients and the recent development of standardized browser APIs, we argue that the browser has become a good platform to develop a speech labeling tool for. This paper introduces a preliminary version of an open-source client-side web application for labeling speech data, visualizing speech and segmentation information and manually correcting derived speech signals such as formant trajectories. The user interface has been designed to be as user-friendly as possible in order to make the sometimes tedious task of transcribing as easy and efficient as possible. The future integration into the next iteration of the EMU speech database management system and its general architecture will also be outlined, as the work presented here is only one of several components contributing to the future system.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,403
inproceedings
rebout-langlais-2014-iterative
An Iterative Approach for Mining Parallel Sentences in a Comparable Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1368/
Rebout, Lise and Langlais, Phillippe
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
648--655
We describe an approach for mining parallel sentences in a collection of documents in two languages. While several approaches have been proposed for doing so, our proposal differs in several respects. First, we use a document-level classifier in order to focus on potentially fruitful document pairs, an understudied approach. We show that mining fewer, but more parallel, documents can lead to better gains in machine translation. Second, we compare different strategies for post-processing the output of a classifier trained to recognize parallel sentences. Last, we report a simple bootstrapping experiment which shows that promising sentence pairs extracted in a first stage can help to mine new sentence pairs in a second stage. We applied our approach to the English-French Wikipedia. Gains of a statistical machine translation (SMT) engine are analyzed across different test sets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,404
inproceedings
elmahdy-etal-2014-development
Development of a {TV} Broadcasts Speech Recognition System for Qatari {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1369/
Elmahdy, Mohamed and Hasegawa-Johnson, Mark and Mustafawi, Eiman
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3057--3061
A major problem with dialectal Arabic speech recognition is due to the sparsity of speech resources. In this paper, a transfer learning framework is proposed to jointly use a large amount of Modern Standard Arabic (MSA) data and little amount of dialectal Arabic data to improve acoustic and language modeling. The Qatari Arabic (QA) dialect has been chosen as a typical example for an under-resourced Arabic dialect. A wide-band speech corpus has been collected and transcribed from several Qatari TV series and talk-show programs. A large vocabulary speech recognition baseline system was built using the QA corpus. The proposed MSA-based transfer learning technique was performed by applying orthographic normalization, phone mapping, data pooling, acoustic model adaptation, and system combination. The proposed approach can achieve more than 28{\%} relative reduction in WER.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,405
inproceedings
zaghouani-dukes-2014-crowdsourcing
Can Crowdsourcing be used for Effective Annotation of {A}rabic?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1370/
Zaghouani, Wajdi and Dukes, Kais
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
224--228
Crowdsourcing has recently been used by many natural language processing groups as an alternative to traditional, costly annotation. In this paper, we explore the use of Amazon Mechanical Turk (AMT) in order to assess the feasibility of using AMT workers (also known as Turkers) to perform linguistic annotation of Arabic. We used a gold-standard data set taken from the Quran corpus project, annotated with part-of-speech and morphological information. An Arabic language qualification test was used to filter out potentially non-qualified participants. Two experiments were performed: a part-of-speech tagging task, in which the annotators were asked to choose the correct word category from a multiple-choice list, and a case-ending identification task. The results obtained so far show that annotating Arabic grammatical case is harder than POS tagging, and that crowdsourcing for Arabic linguistic annotation requiring expert annotators may not be as effective as other crowdsourcing experiments requiring less expertise and fewer qualifications.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,406
inproceedings
koiso-etal-2014-design
Design and development of an {RDB} version of the Corpus of Spontaneous {J}apanese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1371/
Koiso, Hanae and Den, Yasuharu and Nishikawa, Ken{'}ya and Maekawa, Kikuo
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1471--1476
In this paper, we describe the design and development of a new version of the Corpus of Spontaneous Japanese (CSJ), a large-scale spoken corpus released in 2004. CSJ contains various annotations that are represented in XML format (CSJ-XML). CSJ-XML, however, is very complicated and suffers from some problems. To overcome these problems, we developed and released, in 2013, a relational database version of CSJ (CSJ-RDB). CSJ-RDB is based on an extension of the segment- and link-based annotation scheme, which we adapted to handle multi-channel and multi-modal streams. Because this scheme adopts a stand-off framework, CSJ-RDB can represent three hierarchical structures at the same time: inter-pausal-unit-top, clause-top, and intonational-phrase-top. CSJ-RDB consists of five different types of tables: segment, unaligned-segment, link, relation, and meta-information tables. The database was automatically constructed from annotation files extracted from CSJ-XML by using general-purpose corpus construction tools. CSJ-RDB enables us to easily and efficiently conduct the complex searches required for corpus-based studies of spoken language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,407
inproceedings
elmahdy-etal-2014-automatic
Automatic Long Audio Alignment and Confidence Scoring for Conversational {A}rabic Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1372/
Elmahdy, Mohamed and Hasegawa-Johnson, Mark and Mustafawi, Eiman
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3062--3066
In this paper, a framework for long audio alignment of conversational Arabic speech is proposed. Accurate alignments help in many speech processing tasks such as audio indexing, speech recognizer acoustic model (AM) training, audio summarization and retrieval, etc. We have collected more than 1,400 hours of conversational Arabic together with the corresponding human-generated, non-aligned transcriptions. Automatic audio segmentation is performed using a split-and-merge approach. A biased language model (LM) is trained using the corresponding text after a pre-processing stage. Because of the dominance of non-standard Arabic in conversational speech, a graphemic pronunciation model (PM) is utilized. The proposed alignment approach is performed in two passes. Firstly, a generic standard Arabic AM is used along with the biased LM and the graphemic PM in a fast speech recognition pass. In a second pass, a more restricted LM is generated for each audio segment, and unsupervised acoustic model adaptation is applied. The recognizer output is aligned with the processed transcriptions using the Levenshtein algorithm. The proposed approach resulted in an initial alignment accuracy of 97.8-99.0{\%} depending on the amount of disfluencies. A confidence scoring metric is proposed to accept or reject the aligner output. Using confidence scores, it was possible to reject the majority of mis-aligned segments, resulting in an alignment accuracy of 99.0-99.8{\%} depending on the speech domain and the amount of disfluencies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,408
inproceedings
goldhahn-quasthoff-2014-vocabulary
Vocabulary-Based Language Similarity using Web Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1373/
Goldhahn, Dirk and Quasthoff, Uwe
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3294--3299
This paper will focus on the evaluation of automatic methods for quantifying language similarity. This is achieved by ascribing language similarity to the similarity of text corpora. This corpus similarity will first be determined by the resemblance of the vocabularies of the languages. To this end, words or parts of words, such as letter n-grams, are examined. Extensions like transliteration of the text data will ensure the independence of the methods from text characteristics such as the writing system used. Further analyses will show to what extent knowledge about the distribution of words in parallel text can be used in the context of language similarity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,409
inproceedings
vossen-etal-2014-newsreader
{N}ews{R}eader: recording history from daily news streams
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1374/
Vossen, Piek and Rigau, German and Serafini, Luciano and Stouten, Pim and Irving, Francis and Van Hage, Willem
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2000--2007
The European project NewsReader develops technology to process daily news streams in 4 languages, extracting what happened, when, where and who was involved. NewsReader does not just read a single newspaper but massive amounts of news coming from thousands of sources. It compares the results across sources to complement information and determine where they disagree. Furthermore, it merges news of today with previous news, creating a long-term history rather than separate events. The result is stored in a KnowledgeStore, that cumulates information over time, producing an extremely large knowledge graph that is visualized using new techniques to provide more comprehensive access. We present the first version of the system and the results of processing first batches of data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,410
inproceedings
coltekin-2014-set
A set of open source tools for {T}urkish natural language processing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1375/
{\c{C{\"oltekin, {\c{Ca{\u{gr{\i
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1079--1086
This paper introduces a set of freely available, open-source tools for Turkish that are built around TRmorph, a morphological analyzer introduced earlier in Coltekin (2010). The article first provides an update on the analyzer, which includes a complete rewrite using a different finite-state description language and tool set as well as major tagset changes to comply better with the state-of-the-art computational processing of Turkish and the user requests received so far. Besides these major changes to the analyzer, this paper introduces tools for morphological segmentation, stemming and lemmatization, guessing unknown words, grapheme-to-phoneme conversion, hyphenation and morphological disambiguation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,411
inproceedings
hagemeijer-etal-2014-gulf
The {G}ulf of {G}uinea Creole Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1376/
Hagemeijer, Tjerk and G{\'e}n{\'e}reux, Michel and Hendrickx, Iris and Mendes, Am{\'a}lia and Tiny, Abigail and Zamora, Armando
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
523--529
We present the process of building linguistic corpora of the Portuguese-related Gulf of Guinea creoles, a cluster of four historically related languages: Santome, Angolar, Principense and Fa d’Amb{\^o}. We faced the typical difficulties of languages lacking an official status, such as lack of standard spelling, language variation, lack of basic language instruments, and small data sets, which comprise data from the late 19th century to the present. In order to tackle these problems, the compiled written and transcribed spoken data collected during field work trips were adapted to a normalized spelling that was applied to the four languages. For the corpus compilation we followed corpus linguistics standards. We recorded meta data for each file and added morphosyntactic information based on a part-of-speech tag set that was designed to deal with the specificities of these languages. The corpora of three of the four creoles are already available and searchable via an online web interface.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,412
inproceedings
viitaniemi-etal-2014-pot
{S}-pot - a benchmark in spotting signs within continuous signing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1377/
Viitaniemi, Ville and Jantunen, Tommi and Savolainen, Leena and Karppa, Matti and Laaksonen, Jorma
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1892--1897
In this paper we present S-pot, a benchmark setting for evaluating the performance of automatic spotting of signs in continuous sign language videos. The benchmark includes 5539 video files of Finnish Sign Language, ground truth sign spotting results, a tool for assessing the spottings against the ground truth, and a repository for storing information on the results. In addition, we make our sign detection system and the results obtained with it publicly available as a baseline for comparison and further development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,413
inproceedings
ghayoomi-kuhn-2014-converting
Converting an {HPSG}-based Treebank into its Parallel Dependency-based Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1378/
Ghayoomi, Masood and Kuhn, Jonas
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
802--809
A treebank is an important language resource for supervised statistical parsers. The parser induces the grammatical properties of a language from this language resource and uses the model to parse unseen data automatically. Since developing such a resource is very time-consuming and tedious, one can take advantage of already extant resources by adapting them to a particular application. This reduces the amount of human effort required to develop a new language resource. In this paper, we introduce an algorithm to convert an HPSG-based treebank into its parallel dependency-based treebank. With this converter, we can automatically create a new language resource from an existing treebank developed based on a grammar formalism. Our proposed algorithm is able to create both projective and non-projective dependency trees.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,414
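The conversion algorithm in the abstract above turns a phrase-structure treebank into dependencies. A minimal sketch of the core idea follows: percolate each phrase's head daughter down to a lexical head and attach the heads of non-head daughters to it. The tiny tree and the head-daughter encoding are illustrative assumptions, not the paper's treebank format.

```python
# Converting a headed constituency tree into dependency arcs. The sketch
# assumes each phrase node records which child is its head daughter, as
# HPSG analyses do; the example tree is a toy assumption.

class Node:
    def __init__(self, label, children=None, head_index=0, word=None):
        self.label, self.word = label, word
        self.children = children or []
        self.head_index = head_index     # which child is the head daughter

def lexical_head(node):
    """Percolate down head daughters until a lexical item is reached."""
    while node.children:
        node = node.children[node.head_index]
    return node

def to_dependencies(node, deps):
    if not node.children:
        return
    head = lexical_head(node)
    for i, child in enumerate(node.children):
        if i != node.head_index:
            deps.append((lexical_head(child).word, head.word))
        to_dependencies(child, deps)

# "the cat sleeps": S has head daughter VP; NP has head daughter N.
tree = Node("S", [
    Node("NP", [Node("D", word="the"), Node("N", word="cat")], head_index=1),
    Node("VP", [Node("V", word="sleeps")]),
], head_index=1)

deps = []
to_dependencies(tree, deps)
print(deps)   # [('cat', 'sleeps'), ('the', 'cat')]
```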
inproceedings
alegria-etal-2014-tweetnorm
{T}weet{N}orm{\_}es: an annotated corpus for {S}panish microtext normalization
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1379/
Alegria, I{\~n}aki and Aranberri, Nora and Comas, Pere and Fresno, V{\'i}ctor and Gamallo, Pablo and Padr{\'o}, Lluis and San Vicente, I{\~n}aki and Turmo, Jordi and Zubiaga, Arkaitz
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2274--2278
In this paper we introduce TweetNorm{\_}es, an annotated corpus of tweets in Spanish language, which we make publicly available under the terms of the CC-BY license. This corpus is intended for development and testing of microtext normalization systems. It was created for Tweet-Norm, a tweet normalization workshop and shared task, and is the result of a joint annotation effort from different research groups. In this paper we describe the methodology defined to build the corpus as well as the guidelines followed in the annotation process. We also present a brief overview of the Tweet-Norm shared task, as the first evaluation environment where the corpus was used.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,415
inproceedings
hajnicz-2014-procedure
The Procedure of Lexico-Semantic Annotation of Sk{\l}adnica Treebank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1380/
Hajnicz, El{\.z}bieta
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2290--2297
In this paper, the procedure of lexico-semantic annotation of Sk{\l}adnica Treebank using Polish WordNet is presented. Other semantically annotated corpora, in particular treebanks, are outlined first. Resources involved in annotation as well as a tool called Semantikon used for it are described. The main part of the paper is the analysis of the applied procedure. It consists of the basic and correction phases. During the basic phase, all nouns, verbs and adjectives are annotated with wordnet senses. The annotation is performed independently by two linguists. During the correction phase, conflicts are resolved by the linguist supervising the process. Multi-word units obtain special tags, synonyms and hypernyms are used for senses absent in Polish WordNet. Additionally, each sentence receives its general assessment. Finally, some statistics of the results of annotation are given, including inter-annotator agreement. The final resource is represented in XML files preserving the structure of Sk{\l}adnica.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,416
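The procedure above has two linguists annotate independently and reports inter-annotator agreement. A minimal sketch of a standard chance-corrected agreement measure (Cohen's kappa) follows; the two label sequences are toy assumptions standing in for the sense annotations.

```python
# Cohen's kappa for a two-annotator setup: observed agreement corrected by
# the agreement expected from each annotator's label distribution.
from collections import Counter

def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["sense1", "sense2", "sense1", "sense1", "sense3", "sense2"]
ann2 = ["sense1", "sense2", "sense2", "sense1", "sense3", "sense1"]
print(f"kappa = {cohens_kappa(ann1, ann2):.3f}")
```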
inproceedings
pajzs-etal-2014-media
Media monitoring and information extraction for the highly inflected agglutinative language {H}ungarian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1381/
Pajzs, J{\'u}lia and Steinberger, Ralf and Ehrmann, Maud and Ebrahim, Mohamed and Della Rocca, Leonida and Bucci, Stefano and Simon, Eszter and V{\'a}radi, Tam{\'a}s
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2049--2056
The Europe Media Monitor (EMM) is a fully-automatic system that analyses written online news by gathering articles in over 70 languages and by applying text analysis software for currently 21 languages, without using linguistic tools such as parsers, part-of-speech taggers or morphological analysers. In this paper, we describe the effort of adding to EMM Hungarian text mining tools for news gathering; document categorisation; named entity recognition and classification for persons, organisations and locations; name lemmatisation; quotation recognition; and cross-lingual linking of related news clusters. The major challenge of dealing with the Hungarian language is its high degree of inflection and agglutination. We present several experiments where we apply linguistically light-weight methods to deal with inflection and we propose a method to overcome the challenges. We also present detailed frequency lists of Hungarian person and location name suffixes, as found in real-life news texts. This empirical data can be used to draw further conclusions and to improve existing Named Entity Recognition software. Within EMM, the solutions described here will also be applied to other morphologically complex languages such as those of the Slavic language family. The media monitoring and analysis system EMM is freely accessible online via the web page \url{http://emm.newsbrief.eu/overview.html}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,417
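The paper above tackles Hungarian inflection with linguistically light-weight methods and suffix frequency lists for name lemmatisation. A minimal sketch of suffix stripping against a known-name list follows; the small suffix set is an illustrative assumption (the paper derives real frequency lists from news text).

```python
# Light-weight name lemmatisation by case-suffix stripping for an
# agglutinative language. The suffix list is a toy subset of Hungarian
# case endings, assumed here for illustration only.

# Longest-match-first list of a few Hungarian case endings.
SUFFIXES = sorted(["nak", "nek", "ban", "ben", "val", "vel", "hoz", "hez",
                   "tól", "től", "ba", "be", "ra", "re", "on", "en", "ön",
                   "t"],
                  key=len, reverse=True)

def lemmatise_name(token, known_names):
    """Strip one case suffix if the remainder is a known name form."""
    if token in known_names:
        return token
    for suf in SUFFIXES:
        if token.endswith(suf) and token[: -len(suf)] in known_names:
            return token[: -len(suf)]
    return token   # fall back to the surface form

names = {"Budapest", "Orbán"}
for tok in ["Budapesten", "Budapestre", "Orbánnak", "Orbán"]:
    print(tok, "->", lemmatise_name(tok, names))
```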
inproceedings
moriceau-tannier-2014-french
{F}rench Resources for Extraction and Normalization of Temporal Expressions with {H}eidel{T}ime
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1382/
Moriceau, V{\'e}ronique and Tannier, Xavier
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3239--3243
In this paper, we describe the development of French resources for the extraction and normalization of temporal expressions with HeidelTime, an open-source, multilingual, cross-domain temporal tagger. HeidelTime extracts temporal expressions from documents and normalizes them according to the TIMEX3 annotation standard. Several types of temporal expressions are extracted: dates, times, durations and temporal sets. French resources have been evaluated in two different ways: on the French TimeBank corpus, a corpus of newspaper articles in French annotated according to the ISO-TimeML standard, and on a user application for automatic building of event timelines. Results on the French TimeBank are quite satisfying as they are comparable to those obtained by HeidelTime in English and Spanish on newswire articles. Concerning the user application, we used two temporal taggers for the preprocessing of the corpus in order to compare their performance; the results show that our application performs better on French documents with HeidelTime. The French resources and evaluation scripts are publicly available with HeidelTime.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,418
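HeidelTime, described above, is rule-based: patterns extract temporal expressions and normalize them to TIMEX3 values. The sketch below illustrates the idea on one simple French date pattern with a regex; this is not HeidelTime's actual rule format, and the month table and example sentence are assumptions.

```python
# Extraction and TIMEX3-style normalization of a simple French full-date
# pattern. Illustrative regex sketch only.
import re

MONTHS = {"janvier": 1, "février": 2, "mars": 3, "avril": 4, "mai": 5,
          "juin": 6, "juillet": 7, "août": 8, "septembre": 9,
          "octobre": 10, "novembre": 11, "décembre": 12}

PATTERN = re.compile(
    r"\b(\d{1,2})(?:er)?\s+(" + "|".join(MONTHS) + r")\s+(\d{4})\b",
    re.IGNORECASE)

def normalize(text):
    """Yield (surface form, TIMEX3 value) pairs for full dates in text."""
    for m in PATTERN.finditer(text):
        day = int(m.group(1))
        month = MONTHS[m.group(2).lower()]
        year = int(m.group(3))
        yield m.group(0), f"{year:04d}-{month:02d}-{day:02d}"

sentence = "La réunion a eu lieu le 1er juillet 2013 à Paris."
for surface, value in normalize(sentence):
    print(f'<TIMEX3 type="DATE" value="{value}">{surface}</TIMEX3>')
```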
inproceedings
truyens-van-eecke-2014-legal
Legal aspects of text mining
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1383/
Truyens, Maarten and Van Eecke, Patrick
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2182--2186
Unlike data mining, text mining has received only limited attention in legal circles. Nevertheless, interesting legal stumbling blocks exist, both with respect to the data collection and data sharing phases, due to the strict rules of copyright and database law. Conflicts are particularly likely when content is extracted from commercial databases, and when texts that have a minimal level of creativity are stored in a permanent way. In all circumstances, even with non-commercial research, license agreements and website terms of use can impose further restrictions. Accordingly, only for some delineated areas (very old texts for which copyright expired, legal statutes, texts in the public domain) strong legal certainty can be obtained without case-by-case assessments. As a result, while prior permission is certainly not required in all cases, many researchers tend to err on the side of caution, and seek permission from publishers, institutions and individual authors before including texts in their corpora, although this process can be difficult and very time-consuming. In the United States, the legal assessment is very different, due to the open-ended nature and flexibility offered by the {\textquotedblleft}fair use{\textquotedblright} doctrine.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,419
inproceedings
ivanova-van-noord-2014-treelet
Treelet Probabilities for {HPSG} Parsing and Error Correction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1384/
Ivanova, Angelina and van Noord, Gertjan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2887--2892
Most state-of-the-art parsers are designed to produce an analysis for any input, despite errors. However, small grammatical mistakes in a sentence often cause the parser to fail to build a correct syntactic tree. Applications that can identify and correct mistakes during parsing are particularly interesting for processing user-generated noisy content. Such systems could potentially take advantage of the linguistic depth of broad-coverage precision grammars. In order to choose the best correction for an utterance, probabilities of parse trees of different sentences should be comparable, which is not supported by the discriminative methods underlying parsing software for processing deep grammars. In the present work we assess the treelet model for determining generative probabilities for HPSG parsing with error correction. In the first experiment the treelet model is applied to the parse selection task and shows higher exact match accuracy than the baseline and PCFG. In the second experiment it is tested for the ability to score the parse tree of the correct sentence higher than the constituency tree of the original version of the sentence containing a grammatical error.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,420
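The key property of the generative model above is that a tree's score is a product of local-tree probabilities, so scores are comparable across different sentences. The sketch below estimates a simple PCFG-flavoured local-tree model from counts as a stand-in; the paper's treelet model conditions on richer context, and the toy treebank is an assumption.

```python
# Generative local-tree scoring: P(tree) = product over internal nodes of
# P(children labels | parent label), estimated from a (toy) treebank.
import math
from collections import Counter

treebank = [  # each tree: (label, children); strings are leaves
    ("S", [("NP", ["she"]), ("VP", [("V", ["runs"])])]),
    ("S", [("NP", ["he"]), ("VP", [("V", ["sleeps"])])]),
    ("S", [("NP", [("D", ["the"]), ("N", ["dog"])]), ("VP", [("V", ["barks"])])]),
]

rule_counts, parent_counts = Counter(), Counter()

def collect(node):
    label, children = node
    kids = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    rule_counts[(label, kids)] += 1
    parent_counts[label] += 1
    for c in children:
        if isinstance(c, tuple):
            collect(c)

for t in treebank:
    collect(t)

def log_prob(node):
    label, children = node
    kids = tuple(c[0] if isinstance(c, tuple) else c for c in children)
    p = rule_counts[(label, kids)] / parent_counts[label]
    return math.log(p) + sum(log_prob(c) for c in children
                             if isinstance(c, tuple))

tree = ("S", [("NP", ["he"]), ("VP", [("V", ["barks"])])])
print(f"log P(tree) = {log_prob(tree):.3f}")
```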
inproceedings
masmoudi-etal-2014-corpus
A Corpus and Phonetic Dictionary for {T}unisian {A}rabic Speech Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1385/
Masmoudi, Abir and Khmekhem, Mariem Ellouze and Est{\`e}ve, Yannick and Belguith, Lamia Hadrich and Habash, Nizar
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
306--310
In this paper we describe an effort to create a corpus and phonetic dictionary for Tunisian Arabic Automatic Speech Recognition (ASR). The corpus, named TARIC (Tunisian Arabic Railway Interaction Corpus) has a collection of audio recordings and transcriptions from dialogues in the Tunisian Railway Transport Network. The phonetic (or pronunciation) dictionary is an important ASR component that serves as an intermediary between acoustic models and language models in ASR systems. The method proposed in this paper to automatically generate a phonetic dictionary is rule-based. For that reason, we define a set of pronunciation rules and a lexicon of exceptions. To determine the performance of our phonetic rules, we chose to evaluate our pronunciation dictionary on two types of corpora. The word error rate of word grapheme-to-phoneme mapping is around 9{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,421
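The dictionary generation above combines pronunciation rules with an exceptions lexicon. The sketch below shows the general shape of such a rule-based grapheme-to-phoneme converter; the romanized input, rule table and exception entries are toy assumptions, not the paper's actual Tunisian Arabic rules.

```python
# Rule-based grapheme-to-phoneme conversion with an exceptions lexicon.
# Exceptions are looked up first; otherwise ordered longest-match rules
# rewrite graphemes to phones.

EXCEPTIONS = {"allah": "? a l l a h"}          # irregular forms

RULES = [                  # ordered, multi-character rules first
    ("sh", "S"), ("th", "T"), ("aa", "a:"),
    ("a", "a"), ("b", "b"), ("l", "l"), ("h", "h"),
    ("k", "k"), ("t", "t"), ("i", "i"), ("r", "r"), ("s", "s"),
]

def g2p(word):
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    phones, i = [], 0
    while i < len(word):
        for graph, phone in RULES:
            if word.startswith(graph, i):
                phones.append(phone)
                i += len(graph)
                break
        else:                       # unknown grapheme: keep it, flag upstream
            phones.append(word[i])
            i += 1
    return " ".join(phones)

for w in ["kitaab", "shari", "allah"]:
    print(w, "->", g2p(w))
```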
inproceedings
lhomme-etal-2014-discovering
Discovering frames in specialized domains
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1386/
L{'}Homme, Marie-Claude and Robichaud, Beno{\^i}t and R{\"u}ggeberg, Carlos Subirats
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1364--1371
This paper proposes a method for discovering semantic frames (Fillmore, 1982, 1985; Fillmore et al., 2003) in specialized domains. It is assumed that frames are especially relevant for capturing the lexical structure in specialized domains and that they complement structures such as ontologies that appear better suited to represent specific relationships between entities. The method we devised is based on existing lexical entries recorded in a specialized database related to the field of the environment (erode, impact, melt, recycling, warming). The frames and the data encoded in FrameNet are used as a reference. Selected information was extracted automatically from the database on the environment (and, when possible, compared to FrameNet), and presented to a linguist who analyzed this information to discover potential frames. Several different frames were discovered with this method. About half of them correspond to frames already described in FrameNet; some new frames were also defined and part of these might be specific to the field of the environment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,422
inproceedings
levin-etal-2014-resources
Resources for the Detection of Conventionalized Metaphors in Four Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1387/
Levin, Lori and Mitamura, Teruko and MacWhinney, Brian and Fromm, Davida and Carbonell, Jaime and Feely, Weston and Frederking, Robert and Gershman, Anatole and Ramirez, Carlos
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
498--501
This paper describes a suite of tools for extracting conventionalized metaphors in English, Spanish, Farsi, and Russian. The method depends on three significant resources for each language: a corpus of conventionalized metaphors, a table of conventionalized conceptual metaphors (CCM table), and a set of extraction rules. Conventionalized metaphors are things like {\textquotedblleft}escape from poverty{\textquotedblright} and {\textquotedblleft}burden of taxation{\textquotedblright}. For each metaphor, the CCM table contains the metaphorical source domain word (such as {\textquotedblleft}escape{\textquotedblright}) the target domain word (such as {\textquotedblleft}poverty{\textquotedblright}) and the grammatical construction in which they can be found. The extraction rules operate on the output of a dependency parser and identify the grammatical configurations (such as a verb with a prepositional phrase complement) that are likely to contain conventional metaphors. We present results on detection rates for conventional metaphors and analysis of the similarity and differences of source domains for conventional metaphors in the four languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,423
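The extraction method above checks grammatical configurations from a dependency parse against a CCM table of (source word, target word, construction) entries. A minimal sketch of that lookup follows; the miniature CCM table, the hand-written dependency triples and the relation-to-construction mapping are illustrative assumptions.

```python
# Matching (lemmatized) dependency triples against a table of
# conventionalized conceptual metaphors (CCM).

# (source word, target word, grammatical construction)
CCM_TABLE = {
    ("escape", "poverty", "verb+pp"),
    ("burden", "taxation", "noun+of+noun"),
}

# Dependency triples as (head, relation, dependent); pre-parsed by hand here.
sentences = {
    "They hoped to escape from poverty.":
        [("escape", "prep_from", "poverty")],
    "The burden of taxation grew.":
        [("burden", "prep_of", "taxation")],
    "They escaped from the city.":
        [("escape", "prep_from", "city")],
}

REL_TO_CONSTRUCTION = {"prep_from": "verb+pp", "prep_of": "noun+of+noun"}

for sent, triples in sentences.items():
    for head, rel, dep in triples:
        construction = REL_TO_CONSTRUCTION.get(rel)
        if (head, dep, construction) in CCM_TABLE:
            print(f"metaphor: '{head} ... {dep}' ({construction}) in: {sent}")
```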
inproceedings
odijk-2014-clarin
{CLARIN}-{NL}: Major results
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1388/
Odijk, Jan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2187--2193
In this paper I provide a high level overview of the major results of CLARIN-NL so far. I will show that CLARIN-NL is starting to provide the data, facilities and services in the CLARIN infrastructure to carry out humanities research supported by large amounts of data and tools. These services have easy interfaces and easy search options (no technical background needed). Still some training is required, to understand both the possibilities and the limitations of the data and the tools. Actual use of the facilities leads to suggestions for improvements and to suggestions for new functionality. All researchers are therefore invited to start using the elements in the CLARIN infrastructure offered by CLARIN-NL. Though I will show that a lot has been achieved in the CLARIN-NL project, I will also provide a long list of functionality and interoperability cases that have not been dealt with in CLARIN-NL and must remain for future work.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,424
inproceedings
oliveira-etal-2014-exploiting
Exploiting {P}ortuguese Lexical Knowledge Bases for Answering Open Domain Cloze Questions Automatically
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1389/
Oliveira, Hugo Gon{\c{c}}alo and Coelho, In{\^e}s and Gomes, Paulo
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4202--4209
We present the task of answering cloze questions automatically and how it can be tackled by exploiting lexical knowledge bases (LKBs). This task was performed in what can be seen as an indirect evaluation of Portuguese LKBs. We introduce the LKBs used and the algorithms applied, and then report on the obtained results and draw some conclusions: LKBs are definitely useful resources for this challenging task, and exploiting them, especially with PageRank-based algorithms, clearly improves the baselines. Moreover, larger LKBs, created automatically and not sense-aware, led to the best results, as opposed to handcrafted LKBs structured around synsets.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,425
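The best-performing algorithms above are PageRank-based: the LKB is treated as a graph and candidate answers are ranked by personalized PageRank seeded on the question's context words. A minimal power-iteration sketch follows; the toy graph, context and candidates are assumptions.

```python
# Ranking cloze-question candidates with personalized PageRank over a
# lexical knowledge base graph.

def personalized_pagerank(graph, seeds, damping=0.85, iters=50):
    nodes = list(graph)
    rank = {v: 1.0 / len(nodes) for v in nodes}
    teleport = {v: (1.0 / len(seeds) if v in seeds else 0.0) for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) * teleport[v] for v in nodes}
        for v in nodes:
            out = graph[v]
            if out:
                share = damping * rank[v] / len(out)
                for u in out:
                    new[u] += share
        rank = new
    return rank

# Undirected toy LKB encoded as adjacency sets.
edges = [("dog", "animal"), ("cat", "animal"), ("dog", "bark"),
         ("bark", "sound"), ("piano", "instrument"), ("instrument", "sound")]
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

context = {"bark", "animal"}        # words around the cloze gap
candidates = ["dog", "piano", "cat"]
rank = personalized_pagerank(graph, context)
print(sorted(candidates, key=lambda c: -rank[c]))   # expect 'dog' first
```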
inproceedings
tateisi-etal-2014-annotation
Annotation of Computer Science Papers for Semantic Relation Extraction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1390/
Tateisi, Yuka and Shidahara, Yo and Miyao, Yusuke and Aizawa, Akiko
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1423--1429
We designed a new annotation scheme for formalising relation structures in research papers, through the investigation of computer science papers. The annotation scheme is based on the hypothesis that identifying the role of entities and events that are described in a paper is useful for intelligent information retrieval in academic literature, and the role can be determined by the relationship between the author and the described entities or events, and relationships among them. Using the scheme, we have annotated research abstracts from the IPSJ Journal published in Japanese by the Information Processing Society of Japan. On the basis of the annotated corpus, we have developed a prototype information extraction system which has the facility to classify sentences according to the relationship between entities mentioned, to help find the role of the entity in which the searcher is interested.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,426
inproceedings
ho-etal-2014-identifying
Identifying Idioms in {C}hinese Translations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1391/
Ho, Wan Yu and Kng, Christine and Wang, Shan and Bond, Francis
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
716--721
Optimally, a translated text should preserve information while maintaining the writing style of the original. When this is not possible, as is often the case with figurative speech, a common practice is to simplify and make explicit the implications. However, in our investigations of translations from English to another language, English-to-Chinese texts were often found to include idiomatic expressions (usually in the form of Chengyu 成语) where there were originally no idiomatic, metaphorical, or even figurative expressions. We have created an initial small lexicon of Chengyu, with which we can find all occurrences of Chengyu in a given corpus, and will continue to expand the database. By examining the rates and patterns of occurrence across four genres in the NTU Multilingual Corpus, a resource may be created to aid machine translation or, going further, predict Chinese translational trends in any given genre.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,427
inproceedings
etchegoyhen-etal-2014-machine
Machine Translation for Subtitling: A Large-Scale Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1392/
Etchegoyhen, Thierry and Bywood, Lindsay and Fishel, Mark and Georgakopoulou, Panayota and Jiang, Jie and van Loenhout, Gerard and del Pozo, Arantza and Mau{\v{c}}ec, Mirjam Sepesy and Turner, Anja and Volk, Martin
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
46--53
This article describes a large-scale evaluation of the use of Statistical Machine Translation for professional subtitling. The work was carried out within the FP7 EU-funded project SUMAT and involved two rounds of evaluation: a quality evaluation and a measure of productivity gain/loss. We present the SMT systems built for the project and the corpora they were trained on, which combine professionally created and crowd-sourced data. Evaluation goals, methodology and results are presented for the eleven translation pairs that were evaluated by professional subtitlers. Overall, a majority of the machine translated subtitles received good quality ratings. The results were also positive in terms of productivity, with a global gain approaching 40{\%}. We also evaluated the impact of applying quality estimation and filtering of poor MT output, which resulted in higher productivity gains for filtered files as opposed to fully machine-translated files. Finally, we present and discuss feedback from the subtitlers who participated in the evaluation, a key aspect for any eventual adoption of machine translation technology in professional subtitling.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,428
inproceedings
jezek-etal-2014-pas
{T}-{PAS}; A resource of Typed Predicate Argument Structures for linguistic analysis and semantic processing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1393/
Jezek, Elisabetta and Magnini, Bernardo and Feltracco, Anna and Bianchini, Alessia and Popescu, Octavian
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
890--895
The goal of this paper is to introduce T-PAS, a resource of typed predicate argument structures for Italian, acquired from corpora by manual clustering of distributional information about Italian verbs, to be used for linguistic analysis and semantic processing tasks. T-PAS is the first resource for Italian in which semantic selection properties and sense-in-context distinctions of verbs are characterized fully on empirical ground. In the paper, we first describe the process of pattern acquisition and corpus annotation (section 2) and its ongoing evaluation (section 3). We then demonstrate the benefits of pattern tagging for NLP purposes (section 4), and discuss current effort to improve the annotation of the corpus (section 5). We conclude by reporting on ongoing experiments using semiautomatic techniques for extending coverage (section 6).
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,429
inproceedings
warburton-2014-narrowing
Narrowing the Gap Between Termbases and Corpora in Commercial Environments
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1394/
Warburton, Kara
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
722--727
Terminological resources offer potential to support applications beyond translation, such as controlled authoring and indexing, which are increasingly of interest to commercial enterprises. The ad-hoc semasiological approach adopted by commercial terminographers diverges considerably from methodologies prescribed by conventional theory. The notion of termhood in such production-oriented environments is driven by pragmatic criteria such as frequency and repurposability of the terminological unit. A high degree of correspondence between the commercial corpus and the termbase is desired. Research carried out at the City University of Hong Kong using four IT companies as case studies revealed a large gap between corpora and termbases. Problems in selecting terms and in encoding them properly in termbases account for a significant portion of this gap. A rigorous corpus-based approach to term selection would significantly reduce this gap and improve the effectiveness of commercial termbases. In particular, single-word terms (keywords) identified by comparison to a reference corpus offer great potential for identifying important multi-word terms in this context. We conclude that terminography for production purposes should be more corpus-based than is currently the norm.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,430
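The corpus-based term selection advocated above identifies single-word terms (keywords) by comparison to a reference corpus. A common way to do this is Dunning's log-likelihood (G2) keyness score, sketched below; the toy frequency counts and corpus sizes are assumptions.

```python
# Domain keyword extraction: rank words by Dunning's log-likelihood (G2)
# of their domain-corpus frequency against a reference corpus.
import math

def log_likelihood(a, b, total_a, total_b):
    """G2 for a word seen a times in the domain corpus, b in the reference."""
    e1 = total_a * (a + b) / (total_a + total_b)   # expected counts
    e2 = total_b * (a + b) / (total_a + total_b)
    g2 = 0.0
    if a:
        g2 += 2 * a * math.log(a / e1)
    if b:
        g2 += 2 * b * math.log(b / e2)
    return g2

domain = {"server": 120, "cloud": 95, "the": 4800, "latency": 60}
reference = {"server": 40, "cloud": 30, "the": 59000, "latency": 8}
total_domain, total_reference = 100_000, 1_000_000

for w in sorted(domain, key=lambda w: -log_likelihood(
        domain[w], reference.get(w, 0), total_domain, total_reference)):
    g2 = log_likelihood(domain[w], reference.get(w, 0),
                        total_domain, total_reference)
    print(w, round(g2, 1))
```

Content words such as "server" outrank high-frequency function words like "the", which is what makes the comparison useful for termhood.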
inproceedings
mukherjee-joshi-2014-author
Author-Specific Sentiment Aggregation for Polarity Prediction of Reviews
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1395/
Mukherjee, Subhabrata and Joshi, Sachindra
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
3092--3099
In this work, we propose an author-specific sentiment aggregation model for polarity prediction of reviews using an ontology. We propose an approach to construct a Phrase Annotated Author Specific Sentiment Ontology Tree (PASOT), where the facet nodes are annotated with opinion phrases of the author, used to describe the facets, as well as the author`s preference for the facets. We show that an author-specific aggregation of sentiment over an ontology fares better than a flat classification model, which does not take the domain-specific facet importance or author-specific facet preference into account. We compare our approach to supervised classification using Support Vector Machines, as well as other baselines from previous works, where we achieve an accuracy improvement of 7.55{\%} over the SVM baseline. Furthermore, we also show the effectiveness of our approach in capturing thwarting in reviews, achieving an accuracy improvement of 11.53{\%} over the SVM baseline.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,431
inproceedings
jacquet-etal-2014-clustering
Clustering of Multi-Word Named Entity variants: Multilingual Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1396/
Jacquet, Guillaume and Ehrmann, Maud and Steinberger, Ralf
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2548--2553
Multi-word entities, such as organisation names, are frequently written in many different ways. We have previously automatically identified over one million acronym pairs in 22 languages, consisting of their short form (e.g. EC) and their corresponding long forms (e.g. European Commission, European Union Commission). In order to automatically group such long form variants as belonging to the same entity, we cluster them, using bottom-up hierarchical clustering and pair-wise string similarity metrics. In this paper, we address the issue of how to evaluate the named entity variant clusters automatically, with minimal human annotation effort. We present experiments that make use of Wikipedia redirection tables and we show that this method produces good results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,432
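The clustering step above groups long-form name variants bottom-up using pair-wise string similarity. A minimal single-linkage sketch follows, using the standard library's difflib ratio as the metric; the variant list and the merge threshold are assumptions, and the actual paper evaluates several similarity metrics.

```python
# Bottom-up (agglomerative) clustering of multi-word name variants with a
# pair-wise string similarity and single linkage.
from difflib import SequenceMatcher

def sim(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def cluster(names, threshold=0.7):
    clusters = [[n] for n in names]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: merge if any cross-cluster pair is similar
                if any(sim(a, b) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i] += clusters.pop(j)
                    merged = True
                    break
            if merged:
                break
    return clusters

variants = ["European Commission", "European Union Commission",
            "Commission of the European Union",
            "World Health Organization", "World Health Organisation"]
for c in cluster(variants):
    print(c)
```

Note that pure surface similarity misses some reorderings ("Commission of the European Union"), which is one reason the paper evaluates clusters against Wikipedia redirection tables.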
inproceedings
sproat-etal-2014-database
A Database for Measuring Linguistic Information Content
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1397/
Sproat, Richard and Cartoni, Bruno and Choe, HyunJeong and Huynh, David and Ha, Linne and Rajakumar, Ravindran and Wenzel-Grondie, Evelyn
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
967--974
Which languages convey the most information in a given amount of space? This is a question often asked of linguists, especially by engineers who often have some information theoretic measure of “information” in mind, but rarely define exactly how they would measure that information. The question is, in fact remarkably hard to answer, and many linguists consider it unanswerable. But it is a question that seems as if it ought to have an answer. If one had a database of close translations between a set of typologically diverse languages, with detailed marking of morphosyntactic and morphosemantic features, one could hope to quantify the differences between how these different languages convey information. Since no appropriate database exists we decided to construct one. The purpose of this paper is to present our work on the database, along with some preliminary results. We plan to release the dataset once complete.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,433
inproceedings
asheghi-etal-2014-designing
Designing and Evaluating a Reliable Corpus of Web Genres via Crowd-Sourcing
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1398/
Asheghi, Noushin Rezapour and Sharoff, Serge and Markert, Katja
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1339--1346
Research in Natural Language Processing often relies on a large collection of manually annotated documents. However, currently there is no reliable genre-annotated corpus of web pages to be employed in Automatic Genre Identification (AGI). In AGI, documents are classified based on their genres rather than their topics or subjects. The major shortcoming of available web genre collections is their relatively low inter-coder agreement. Reliability of annotated data is an essential factor for reliability of the research result. In this paper, we present the first web genre corpus which is reliably annotated. We developed precise and consistent annotation guidelines which consist of well-defined and well-recognized categories. For annotating the corpus, we used crowd-sourcing which is a novel approach in genre annotation. We computed the overall as well as the individual categories' chance-corrected inter-annotator agreement. The results show that the corpus has been annotated reliably.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,434
inproceedings
alonso-romeo-2014-crowdsourcing
Crowdsourcing as a preprocessing for complex semantic annotation tasks
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1399/
Alonso, H{\'e}ctor Mart{\'i}nez and Romeo, Lauren
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
229--234
This article outlines a methodology that uses crowdsourcing to reduce the workload of experts for complex semantic tasks. We split turker-annotated datasets into a high-agreement block, which is not modified, and a low-agreement block, which is re-annotated by experts. The resulting annotations have higher observed agreement. We identify different biases in the annotation for both turkers and experts.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,435
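The methodology above splits turker-annotated items into a high-agreement block, kept as-is, and a low-agreement block routed to experts. A minimal sketch follows, measuring per-item agreement as the share of the majority label; the measure, threshold and toy votes are illustrative assumptions.

```python
# Split crowd-annotated items by per-item agreement: keep high-agreement
# items, route low-agreement items to expert re-annotation.
from collections import Counter

def split_by_agreement(items, threshold=0.8):
    high, low = [], []
    for item_id, votes in items.items():
        label, top = Counter(votes).most_common(1)[0]
        agreement = top / len(votes)        # share of the majority label
        target = high if agreement >= threshold else low
        target.append((item_id, label, agreement))
    return high, low

items = {
    "ex1": ["literal", "literal", "literal", "literal", "metaphoric"],
    "ex2": ["literal", "metaphoric", "metaphoric", "literal", "literal"],
    "ex3": ["metaphoric"] * 5,
}
high, low = split_by_agreement(items)
print("keep turker labels:", high)
print("send to experts:   ", low)
```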
inproceedings
turchi-negri-2014-automatic
Automatic Annotation of Machine Translation Datasets with Binary Quality Judgements
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1400/
Turchi, Marco and Negri, Matteo
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1788--1792
The automatic estimation of machine translation (MT) output quality is an active research area due to its many potential applications (e.g. aiding human translation and post-editing, re-ranking MT hypotheses, MT system combination). Current approaches to the task rely on supervised learning methods for which high-quality labelled data is fundamental. In this framework, quality estimation (QE) has been mainly addressed as a regression problem where models trained on (source, target) sentence pairs annotated with continuous scores (in the [0-1] interval) are used to assign quality scores (in the same interval) to unseen data. Such a definition of the problem assumes that continuous scores are informative and easily interpretable by different users. These assumptions, however, conflict with the subjectivity inherent to human translation and evaluation. On one side, the subjectivity of human judgements adds noise and biases to annotations based on scaled values. This problem reduces the usability of the resulting datasets, especially in application scenarios where a sharp distinction between “good” and “bad” translations is needed. On the other side, continuous scores are not always sufficient to decide whether a translation is actually acceptable or not. To overcome these issues, we present an automatic method for the annotation of (source, target) pairs with binary judgements that reflect an empirical and easily interpretable notion of quality. The method is applied to annotate with binary judgements three QE datasets for different language combinations. The three datasets are combined in a single resource, called BinQE, which can be freely downloaded from \url{http://hlt.fbk.eu/technologies/binqe}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,436
inproceedings
apidianaki-etal-2014-semantic
Semantic Clustering of Pivot Paraphrases
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1401/
Apidianaki, Marianna and Verzeni, Emilia and McCarthy, Diana
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4270--4275
Paraphrases extracted from parallel corpora by the pivot method (Bannard and Callison-Burch, 2005) constitute a valuable resource for multilingual NLP applications. In this study, we analyse the semantics of unigram pivot paraphrases and use a graph-based sense induction approach to unveil hidden sense distinctions in the paraphrase sets. The comparison of the acquired senses to gold data from the Lexical Substitution shared task (McCarthy and Navigli, 2007) demonstrates that sense distinctions exist in the paraphrase sets and highlights the need for a disambiguation step in applications using this resource.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,437
inproceedings
hovy-etal-2014-pos
When {POS} data sets don`t add up: Combatting sample bias
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1402/
Hovy, Dirk and Plank, Barbara and S{\o}gaard, Anders
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4472--4475
Several works in Natural Language Processing have recently looked into part-of-speech annotation of Twitter data and typically used their own data sets. Since conventions on Twitter change rapidly, models often show sample bias. Training on a combination of the existing data sets should help overcome this bias and produce more robust models than any trained on the individual corpora. Unfortunately, combining the existing corpora proves difficult: many of the corpora use proprietary tag sets that have little or no overlap. Even when mapped to a common tag set, the different corpora systematically differ in their treatment of various tags and tokens. This includes both pre-processing decisions, as well as default labels for frequent tokens, thus exhibiting data bias and label bias, respectively. Only if we address these biases can we combine the existing data sets to also overcome sample bias. We present a systematic study of several Twitter POS data sets, the problems of label and data bias, discuss their effects on model performance, and show how to overcome them to learn models that perform well on various test sets, achieving relative error reduction of up to 21{\%}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,438
inproceedings
shardlow-2014-open
Out in the Open: Finding and Categorising Errors in the Lexical Simplification Pipeline
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1403/
Shardlow, Matthew
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1583--1590
Lexical simplification is the task of automatically reducing the complexity of a text by identifying difficult words and replacing them with simpler alternatives. Whilst this is a valuable application of natural language generation, rudimentary lexical simplification systems suffer from a high error rate which often results in nonsensical, non-simple text. This paper seeks to characterise and quantify the errors which occur in a typical baseline lexical simplification system. We expose 6 distinct categories of error and propose a classification scheme for these. We also quantify these errors for a moderate size corpus, showing the magnitude of each error type. We find that for 183 identified simplification instances, only 19 (10.38{\%}) result in a valid simplification, with the rest causing errors of varying gravity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,439
inproceedings
finlayson-etal-2014-n2
The N2 corpus: A semantically annotated collection of Islamist extremist stories
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1404/
Finlayson, Mark and Halverson, Jeffry and Corman, Steven
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
896--902
We describe the N2 (Narrative Networks) Corpus, a new language resource. The corpus is unique in three important ways. First, every text in the corpus is a story, which is in contrast to other language resources that may contain stories or story-like texts, but are not specifically curated to contain only stories. Second, the unifying theme of the corpus is material relevant to Islamist Extremists, having been produced by or often referenced by them. Third, every text in the corpus has been annotated for 14 layers of syntax and semantics, including: referring expressions and co-reference; events, time expressions, and temporal relationships; semantic roles; and word senses. In cases where analyzers were not available to do high-quality automatic annotations, layers were manually double-annotated and adjudicated by trained annotators. The corpus comprises 100 texts and 42,480 words. Most of the texts were originally in Arabic but all are provided in English translation. We explain the motivation for constructing the corpus, the process for selecting the texts, the detailed contents of the corpus itself, the rationale behind the choice of annotation layers, and the annotation procedure.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,440
inproceedings
remus-ziegelmayer-2014-learning
Learning from Domain Complexity
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1405/
Remus, Robert and Ziegelmayer, Dominique
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2021--2028
Sentiment analysis is genre and domain dependent, i.e. the same method performs differently when applied to text that originates from different genres and domains. Intuitively, this is due to different language use in different genres and domains. We measure such differences in a sentiment analysis gold standard dataset that contains texts from 1 genre and 10 domains. Differences in language use are quantified using certain language statistics, viz. domain complexity measures. We investigate 4 domain complexity measures: percentage of rare words, word richness, relative entropy and corpus homogeneity. We relate domain complexity measurements to performance of a standard machine learning-based classifier and find strong correlations. We show that we can accurately estimate its performance based on domain complexity using linear regression models fitted using robust loss functions. Moreover, we illustrate how domain complexity may guide us in model selection, viz. in deciding what word n-gram order to employ in a discriminative model and whether to employ aggressive or conservative word n-gram feature selection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,441
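Three of the domain complexity measures investigated above can be computed directly from token counts. The sketch below derives the percentage of rare words, word richness (type/token ratio) and a smoothed relative entropy against a background corpus; the toy texts, rarity cutoff and add-one smoothing are assumptions about implementation details the abstract leaves open.

```python
# Domain complexity measures: percentage of rare word types, word richness
# (type/token ratio), and relative entropy of the domain distribution from
# a background distribution.
import math
from collections import Counter

def measures(tokens, background, rare_cutoff=1):
    counts = Counter(tokens)
    n = len(tokens)
    # fraction of word types occurring at most rare_cutoff times
    rare = sum(1 for c in counts.values() if c <= rare_cutoff) / len(counts)
    richness = len(counts) / n                      # type/token ratio
    # KL divergence from the background, add-one smoothed so every
    # domain word has background mass.
    bg = Counter(background)
    bg_n = len(background) + len(counts)
    kl = sum((c / n) * math.log((c / n) / ((bg[w] + 1) / bg_n))
             for w, c in counts.items())
    return rare, richness, kl

domain = "the firmware flashes the bootloader then the kernel boots".split()
background = "the cat sat on the mat and the dog sat too".split()
rare, richness, kl = measures(domain, background)
print(f"rare={rare:.2f} richness={richness:.2f} relative entropy={kl:.2f}")
```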
inproceedings
abbasi-etal-2014-benchmarking
Benchmarking {T}witter Sentiment Analysis Tools
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1406/
Abbasi, Ahmed and Hassan, Ammar and Dhar, Milan
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
823--829
Twitter has become one of the quintessential social media platforms for user-generated content. Researchers and industry practitioners are increasingly interested in Twitter sentiments. Consequently, an array of commercial and freely available Twitter sentiment analysis tools have emerged, though it remains unclear how well these tools really work. This study presents the findings of a detailed benchmark analysis of Twitter sentiment analysis tools, incorporating 20 tools applied to 5 different test beds. In addition to presenting detailed performance evaluation results, a thorough error analysis is used to highlight the most prevalent challenges facing Twitter sentiment analysis tools. The results have important implications for various stakeholder groups, including social media analytics researchers, NLP developers, and industry managers and practitioners using social media sentiments as input for decision-making.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,442
inproceedings
fauth-etal-2014-designing
Designing a Bilingual Speech Corpus for {F}rench and {G}erman Language Learners: a Two-Step Process
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1407/
Fauth, Camille and Bonneau, Anne and Zimmerer, Frank and Trouvain, Juergen and Andreeva, Bistra and Colotte, Vincent and Fohr, Dominique and Jouvet, Denis and J{\"u}gler, Jeanin and Laprie, Yves and Mella, Odile and M{\"o}bius, Bernd
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
1477--1482
We present the design of a corpus of native and non-native speech for the language pair French-German, with a special emphasis on phonetic and prosodic aspects. To our knowledge there is no suitable corpus, in terms of size and coverage, currently available for the target language pair. To select the target L1-L2 interference phenomena we prepare a small preliminary corpus (corpus1), which is analyzed for coverage and cross-checked jointly by French and German experts. Based on this analysis, target phenomena on the phonetic and phonological level are selected on the basis of the expected degree of deviation from the native performance and the frequency of occurrence. 14 speakers performed both L2 (either French or German) and L1 material (either German or French). This allowed us to test recording durations, the recording material, and the performance of our automatic alignment software. Then, we built corpus2, taking into account what we learned from corpus1. The aims are the same, but we adapted the speech material to avoid overly long recording sessions. 100 speakers will be recorded. The corpus (corpus1 and corpus2) will be prepared as a searchable database, available for the scientific community after completion of the project.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,443
inproceedings
strapparava-etal-2014-creative
Creative language explorations through a high-expressivity {N}-grams query language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1408/
Strapparava, Carlo and Gatti, Lorenzo and Guerini, Marco and Stock, Oliviero
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
4326--4330
In computational linguistics a combination of syntagmatic and paradigmatic features is often exploited. While the former aspects are typically managed by information present in large n-gram databases, domain and ontological aspects are more properly modeled by lexical ontologies such as WordNet and semantic similarity spaces. This interconnection is even stricter when we are dealing with creative language phenomena, such as metaphors, prototypical properties, puns generation, hyperbolae and other rhetorical phenomena. This paper describes a way to focus on and accomplish some of these tasks by exploiting NgramQuery, a generalized query language on the Google N-gram database. The expressiveness of this query language is boosted by plugging in semantic similarity acquired both from corpora (e.g. LSA) and from WordNet, also integrating operators for phonetics and sentiment analysis. The paper reports a number of examples of usage in some creative language tasks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,444
inproceedings
rapp-2014-using-word
Using Word Familiarities and Word Associations to Measure Corpus Representativeness
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2014
Reykjavik, Iceland
European Language Resources Association (ELRA)
https://aclanthology.org/L14-1409/
Rapp, Reinhard
Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14)
2029--2036
The definition of corpus representativeness used here assumes that a representative corpus should reflect as well as possible the average language use a native speaker encounters in everyday life over a longer period of time. As it is not practical to observe people`s language input over years, we suggest to utilize two types of experimental data capturing two forms of human intuitions: Word familiarity norms and word association norms. If it is true that human language acquisition is corpus-based, such data should reflect people`s perceived language input. Assuming so, we compute a representativeness score for a corpus by extracting word frequency and word association statistics from it and by comparing these statistics to the human data. The higher the similarity, the more representative the corpus should be for the language environments of the test persons. We present results for five different corpora and for truncated versions thereof. The results confirm the expectation that corpus size and corpus balance are crucial aspects for corpus representativeness.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
67,445
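The representativeness score above compares word statistics extracted from a corpus against human norms: the higher the similarity, the more representative the corpus. One natural instantiation is a rank correlation between corpus frequencies and familiarity ratings, sketched below; the norms, corpus and the choice of Spearman's rho (no-ties formula) are illustrative assumptions, not necessarily the paper's exact comparison.

```python
# Scoring corpus representativeness as the Spearman rank correlation
# between corpus word frequencies and human word familiarity norms.
from collections import Counter

def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))   # no-ties formula

familiarity = {"house": 6.8, "dog": 6.9, "quark": 2.1,
               "sonnet": 3.4, "tree": 6.5}
corpus = ("the dog ran to the house near the tree "
          "a quark is small the dog saw the house").split()
freq = Counter(corpus)
words = list(familiarity)
score = spearman([familiarity[w] for w in words],
                 [freq[w] for w in words])
print(f"representativeness (Spearman rho) = {score:.2f}")
```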