entry_type
stringclasses
4 values
citation_key
stringlengths
10
110
title
stringlengths
6
276
editor
stringclasses
723 values
month
stringclasses
69 values
year
stringdate
1963-01-01 00:00:00
2022-01-01 00:00:00
address
stringclasses
202 values
publisher
stringclasses
41 values
url
stringlengths
34
62
author
stringlengths
6
2.07k
booktitle
stringclasses
861 values
pages
stringlengths
1
12
abstract
stringlengths
302
2.4k
journal
stringclasses
5 values
volume
stringclasses
24 values
doi
stringlengths
20
39
n
stringclasses
3 values
wer
stringclasses
1 value
uas
null
language
stringclasses
3 values
isbn
stringclasses
34 values
recall
null
number
stringclasses
8 values
a
null
b
null
c
null
k
null
f1
stringclasses
4 values
r
stringclasses
2 values
mci
stringclasses
1 value
p
stringclasses
2 values
sd
stringclasses
1 value
female
stringclasses
0 values
m
stringclasses
0 values
food
stringclasses
1 value
f
stringclasses
1 value
note
stringclasses
20 values
__index_level_0__
int64
22k
106k
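The listing above is the dataset viewer's column summary: each column name is followed by its inferred type and value statistics. The metric columns (wer, uas, recall, f1, ...) are sparsely populated and null for most bibliographic rows. A minimal sketch of inspecting such a dataset with the `datasets` library, assuming it is hosted on the Hub; the ID "your-org/anthology-bib" is a hypothetical placeholder:

```python
from datasets import load_dataset

# Hypothetical Hub ID -- substitute the actual dataset path.
ds = load_dataset("your-org/anthology-bib", split="train")

# Columns mirror the schema above: BibTeX fields plus sparse
# metric columns that are null for most rows.
print(ds.features)

# Keep only the rows that report an f1 value.
with_f1 = ds.filter(lambda row: row["f1"] is not None)
print(len(with_f1), "rows carry an f1 annotation")
```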
inproceedings
yates-etal-2016-effects
Effects of Sampling on {T}witter Trend Detection
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1479/
Yates, Andrew and Kolcz, Alek and Goharian, Nazli and Frieder, Ophir
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
2998--3005
Much research has focused on detecting trends on Twitter, including health-related trends such as mentions of Influenza-like illnesses or their symptoms. The majority of this research has been conducted using Twitter's public feed, which includes only about 1{\%} of all public tweets. It is unclear if, when, and how using Twitter's 1{\%} feed has affected the evaluation of trend detection methods. In this work we use a larger feed to investigate the effects of sampling on Twitter trend detection. We focus on using health-related trends to estimate the prevalence of Influenza-like illnesses based on tweets. We use ground truth obtained from the CDC and Google Flu Trends to explore how the prevalence estimates degrade when moving from a 100{\%} to a 1{\%} sample. We find that using the 1{\%} sample is unlikely to substantially harm ILI estimates made at the national level, but can cause poor performance when estimates are made at the city level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,789
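Each record in the dump flattens one BibTeX entry into the columns above, with null placeholders for unused fields, followed by the __index_level_0__ value. A minimal sketch of rendering one such row back into a BibTeX entry; `row_to_bibtex` is an illustrative helper, not part of the dataset:

```python
def row_to_bibtex(row: dict) -> str:
    """Render one flattened row as a BibTeX entry, dropping
    null-valued columns and the synthetic pandas index."""
    skip = {"entry_type", "citation_key", "__index_level_0__"}
    fields = {k: v for k, v in row.items() if v is not None and k not in skip}
    body = ",\n".join(f"  {k} = {{{v}}}" for k, v in fields.items())
    return f"@{row['entry_type']}{{{row['citation_key']},\n{body}\n}}"

# Applied to the first record above, this yields an @inproceedings
# entry for yates-etal-2016-effects with its title, editor, month,
# year, address, publisher, url, author, booktitle, pages and abstract.
```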
inproceedings
foucault-courtin-2016-automatic
Automatic Classification of Tweets for Analyzing Communication Behavior of Museums
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1480/
Foucault, Nicolas and Courtin, Antoine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3006--3013
In this paper, we present a study on tweet classification which aims to define the communication behavior of the 103 French museums that participated in 2014 in the Twitter operation MuseumWeek. The tweets were automatically classified into four communication categories: sharing experience, promoting participation, interacting with the community, and promoting-informing about the institution. Our classification is multi-class. It combines Support Vector Machines and Naive Bayes methods and is supported by a selection of eighteen subtypes of features of four different kinds: metadata information, punctuation marks, tweet-specific and lexical features. It was tested against a corpus of 1,095 tweets manually annotated by two experts in Natural Language Processing and Information Communication and twelve Community Managers of French museums. We obtained a state-of-the-art F1-score of 72{\%} by 10-fold cross-validation. This result is very encouraging since it is even better than some state-of-the-art results found in the tweet classification literature.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,790
inproceedings
bekavac-snajder-2016-graph
Graph-Based Induction of Word Senses in {C}roatian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1481/
Bekavac, Marko and {\v{S}}najder, Jan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3014--3018
Word sense induction (WSI) seeks to induce senses of words from unannotated corpora. In this paper, we address the WSI task for the Croatian language. We adopt the word clustering approach based on co-occurrence graphs, in which senses are taken to correspond to strongly inter-connected components of co-occurring words. We experiment with a number of graph construction techniques and clustering algorithms, and evaluate the sense inventories both as a clustering problem and extrinsically on a word sense disambiguation (WSD) task. In the cluster-based evaluation, the Chinese Whispers algorithm outperformed Markov Clustering, yielding a normalized mutual information score of 64.3. In contrast, in the WSD evaluation Markov Clustering performed better, yielding an accuracy of about 75{\%}. We are making available two induced sense inventories of the 10,000 most frequent Croatian words: one coarse-grained and one fine-grained inventory, both obtained using the Markov Clustering algorithm.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,791
inproceedings
johansson-etal-2016-multi
A Multi-domain Corpus of {S}wedish Word Sense Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1482/
Johansson, Richard and Adesam, Yvonne and Bouma, Gerlof and Hedberg, Karin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3019--3022
We describe the word sense annotation layer in \textit{Eukalyptus}, a freely available five-domain corpus of contemporary Swedish with several annotation layers. The annotation uses the SALDO lexicon to define the sense inventory, and allows word sense annotation of compound segments and multiword units. We give an overview of the new annotation tool developed for this project, and finally present an analysis of the inter-annotator agreement between two annotators.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,792
inproceedings
otegi-etal-2016-qtleap
{QTL}eap {WSD}/{NED} Corpora: Semantic Annotation of Parallel Corpora in Six Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1483/
Otegi, Arantxa and Aranberri, Nora and Branco, Antonio and Haji{\v{c}}, Jan and Popel, Martin and Simov, Kiril and Agirre, Eneko and Osenova, Petya and Pereira, Rita and Silva, Jo{\~a}o and Neale, Steven
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3023--3030
This work presents parallel corpora automatically annotated with several NLP tools, including lemma and part-of-speech tagging, named-entity recognition and classification, named-entity disambiguation, word-sense disambiguation, and coreference. The corpora comprise both the well-known Europarl corpus and a domain-specific question-answer troubleshooting corpus in the IT domain. English is common to all parallel corpora, with translations in five languages, namely, Basque, Bulgarian, Czech, Portuguese and Spanish. We describe the annotated corpora and the tools used for annotation, as well as annotation statistics for each language. These new resources are freely available and will help research on semantic processing for machine translation and cross-lingual transfer.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,793
inproceedings
mujdricza-maydt-etal-2016-combining
Combining Semantic Annotation of Word Sense {\&} Semantic Roles: A Novel Annotation Scheme for {V}erb{N}et Roles on {G}erman Language Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1484/
M{\'u}jdricza-Maydt, {\'E}va and Hartmann, Silvana and Gurevych, Iryna and Frank, Anette
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3031--3038
We present a VerbNet-based annotation scheme for semantic roles that we explore in an annotation study on German language data that combines word sense and semantic role annotation. We reannotate a substantial portion of the SALSA corpus with GermaNet senses and a revised scheme of VerbNet roles. We provide a detailed evaluation of the interaction between sense and role annotation. The resulting corpus will allow us to compare VerbNet role annotation for German to FrameNet and PropBank annotation by mapping to existing role annotations on the SALSA corpus. We publish the annotated corpus and detailed guidelines for the new role annotation scheme.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,794
inproceedings
bhingardive-etal-2016-synset
Synset Ranking of {H}indi {W}ord{N}et
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1485/
Bhingardive, Sudha and Shukla, Rajita and Saraswati, Jaya and Kashyap, Laxmi and Singh, Dhirendra and Bhattacharyya, Pushpak
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3039--3043
Word Sense Disambiguation (WSD) is one of the open problems in the area of natural language processing. Various supervised, unsupervised and knowledge-based approaches have been proposed for automatically determining the sense of a word in a particular context. It has been observed that such approaches often find it difficult to beat the WordNet First Sense (WFS) baseline, which assigns the sense irrespective of context. In this paper, we present our work on creating the WFS baseline for the Hindi language by manually ranking the synsets of Hindi WordNet. A ranking tool was developed in which human experts can see the frequency of word senses in the sense-tagged corpora; the experts were asked to rank the senses of a word using this information together with their own intuition. The accuracy of the WFS baseline is tested on several standard datasets. The F-score is found to be 60{\%}, 65{\%} and 55{\%} on the Health, Tourism and News datasets respectively. The created rankings can also be used in other NLP applications such as Machine Translation, Information Retrieval, Text Summarization, etc.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,795
inproceedings
kutuzov-kuzmenko-2016-neural
Neural Embedding Language Models in Semantic Clustering of Web Search Results
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1486/
Kutuzov, Andrey and Kuzmenko, Elizaveta
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3044--3048
In this paper, a new approach to semantic clustering of the results of ambiguous search queries is presented. We propose using distributed vector representations of words trained with the help of prediction-based neural embedding models to detect senses of search queries and to cluster the search engine results page according to these senses. The words from titles and snippets together with semantic relationships between them form a graph, which is further partitioned into components related to different query senses. This approach to search engine results clustering is evaluated against a new manually annotated evaluation data set of Russian search queries. We show that in the task of semantically clustering search results, prediction-based models slightly but consistently outperform traditional count-based ones when trained on the same corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,796
inproceedings
alvarez-etal-2016-impact
Impact of Automatic Segmentation on the Quality, Productivity and Self-reported Post-editing Effort of Intralingual Subtitles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1487/
{\'A}lvarez, Aitor and Balenciaga, Marina and del Pozo, Arantza and Arzelus, Haritz and Matamala, Anna and Mart{\'i}nez-Hinarejos, Carlos-D.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3049--3053
This paper describes the evaluation methodology followed to measure the impact of using a machine learning algorithm to automatically segment intralingual subtitles. The segmentation quality, productivity and self-reported post-editing effort achieved with this approach are shown to improve on those obtained by the character-counting technique currently most widely employed for automatic subtitle segmentation. The corpus used to train and test the proposed automated segmentation method is also described and shared with the community, in order to foster further research in this area.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,797
inproceedings
elliott-kleppe-2016-1
1 Million Captioned {D}utch Newspaper Images
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1488/
Elliott, Desmond and Kleppe, Martijn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3054--3058
Images naturally appear alongside text in a wide variety of media, such as books, magazines, newspapers, and in online articles. This type of multi-modal data offers an interesting basis for vision and language research but most existing datasets use crowdsourced text, which removes the images from their original context. In this paper, we introduce the KBK-1M dataset of 1.6 million images in their original context, with co-occurring texts found in Dutch newspapers from 1922 to 1994. The images are digitally scanned photographs, cartoons, sketches, and weather forecasts; the text is generated from OCR scanned blocks. The dataset is suitable for experiments in automatic image captioning, image{\textemdash}article matching, object recognition, and data-to-text generation for weather forecasting. It can also be used by humanities scholars to analyse photographic style changes and the representation of people and societal issues, and to explore photograph reuse via image-similarity-based search.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,798
inproceedings
wang-gaizauskas-2016-cross
Cross-validating Image Description Datasets and Evaluation Metrics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1489/
Wang, Josiah and Gaizauskas, Robert
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3059--3066
The task of automatically generating sentential descriptions of image content has become increasingly popular in recent years, resulting in the development of large-scale image description datasets and the proposal of various metrics for evaluating image description generation systems. However, not much work has been done to analyse and understand both datasets and the metrics. In this paper, we propose using a leave-one-out cross validation (LOOCV) process as a means to analyse multiply annotated, human-authored image description datasets and the various evaluation metrics, i.e. evaluating one image description against other human-authored descriptions of the same image. Such an evaluation process affords various insights into the image description datasets and evaluation metrics, such as the variations of image descriptions within and across datasets and also what the metrics capture. We compute and analyse (i) human upper-bound performance; (ii) ranked correlation between metric pairs across datasets; (iii) lower-bound performance by comparing a set of descriptions describing one image to another sentence not describing that image. Interesting observations are made about the evaluation metrics and image description datasets, and we conclude that such cross-validation methods are extremely useful for assessing and gaining insights into image description datasets and evaluation metrics for image descriptions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,799
inproceedings
yanovich-etal-2016-detection
Detection of Major {ASL} Sign Types in Continuous Signing For {ASL} Recognition
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1490/
Yanovich, Polina and Neidle, Carol and Metaxas, Dimitris
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3067--3073
In American Sign Language (ASL) as well as other signed languages, different classes of signs (e.g., lexical signs, fingerspelled signs, and classifier constructions) have different internal structural properties. Continuous sign recognition accuracy can be improved through use of distinct recognition strategies, as well as different training datasets, for each class of signs. For these strategies to be applied, continuous signing video needs to be segmented into parts corresponding to particular classes of signs. In this paper we present a multiple instance learning-based segmentation system that accurately labels 91.27{\%} of the video frames of 500 continuous utterances (including 7 different subjects) from the publicly accessible NCSLGR corpus (Neidle and Vogler, 2012). The system uses novel feature descriptors derived from both motion and shape statistics of the regions of high local motion. The system does not require a hand tracker.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,800
inproceedings
paetzold-specia-2016-benchmarking
Benchmarking Lexical Simplification Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1491/
Paetzold, Gustavo and Specia, Lucia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3074--3080
Lexical Simplification is the task of replacing complex words in a text with simpler alternatives. A variety of strategies have been devised for this challenge, yet there has been little effort to compare their performance. In this contribution, we present a benchmarking of several Lexical Simplification systems. By combining resources created in previous work with automatic spelling and inflection correction techniques, we introduce BenchLS: a new evaluation dataset for the task. Using BenchLS, we evaluate the performance of solutions for various steps in the typical Lexical Simplification pipeline, both individually and jointly. This is the first time Lexical Simplification systems have been compared in such a fashion on the same data, and the findings contribute to the field by revealing several interesting properties of the systems evaluated.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,801
inproceedings
fisas-etal-2016-multi
A Multi-Layered Annotated Corpus of Scientific Papers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1492/
Fisas, Beatriz and Ronzano, Francesco and Saggion, Horacio
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3081--3088
Scientific literature records the research process with a standardized structure and provides the clues to track the progress in a scientific field. Understanding its internal structure and content is of paramount importance for natural language processing (NLP) technologies. To meet this requirement, we have developed a multi-layered annotated corpus of scientific papers in the domain of Computer Graphics. Sentences are annotated with respect to their role in the argumentative structure of the discourse. The purpose of each citation is specified. Special features of the scientific discourse such as advantages and disadvantages are identified. In addition, a grade is allocated to each sentence according to its relevance for being included in a summary. To the best of our knowledge, this complex, multi-layered collection of annotations and metadata characterizing a set of research papers had never been grouped together before in one corpus and therefore constitutes a newer, richer resource with respect to those currently available in the field.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,802
inproceedings
mehdad-etal-2016-extractive
Extractive Summarization under Strict Length Constraints
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1493/
Mehdad, Yashar and Stent, Amanda and Thadani, Kapil and Radev, Dragomir and Billawala, Youssef and Buchner, Karolina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3089--3093
In this paper we report a comparison of various techniques for single-document extractive summarization under strict length budgets, which is a common commercial use case (e.g. summarization of news articles by news aggregators). We show that, evaluated using ROUGE, numerous algorithms from the literature fail to beat a simple lead-based baseline for this task. However, a supervised approach with lightweight and efficient features improves over the lead-based baseline. Additional human evaluation demonstrates that the supervised approach also performs competitively with a commercial system that uses more sophisticated features.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,803
inproceedings
barker-etal-2016-whats
What's the Issue Here?: Task-based Evaluation of Reader Comment Summarization Systems
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1494/
Barker, Emma and Paramita, Monica and Funk, Adam and Kurtic, Emina and Aker, Ahmet and Foster, Jonathan and Hepple, Mark and Gaizauskas, Robert
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3094--3101
Automatic summarization of reader comments in on-line news is an extremely challenging task and a capability for which there is a clear need. Work to date has focussed on producing extractive summaries using well-known techniques imported from other areas of language processing. But are extractive summaries of comments what users really want? Do they support users in performing the sorts of tasks they are likely to want to perform with reader comments? In this paper we address these questions by doing three things. First, we offer a specification of one possible summary type for reader comment, based on an analysis of reader comment in terms of issues and viewpoints. Second, we define a task-based evaluation framework for reader comment summarization that allows summarization systems to be assessed in terms of how well they support users in a time-limited task of identifying issues and characterising opinion on issues in comments. Third, we describe a pilot evaluation in which we used the task-based evaluation framework to evaluate a prototype reader comment clustering and summarization system, demonstrating the viability of the evaluation framework and illustrating the sorts of insight such an evaluation affords.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,804
inproceedings
nouri-yangarber-2016-novel
A Novel Evaluation Method for Morphological Segmentation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1495/
Nouri, Javad and Yangarber, Roman
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3102--3109
Unsupervised learning of morphological segmentation of words in a language, based only on a large corpus of words, is a challenging task. Evaluation of the learned segmentations is a challenge in itself, due to the inherent ambiguity of the segmentation task. There is no objective way to posit a unique {\textquotedblleft}correct{\textquotedblright} segmentation for a set of data. Two models may arrive at different ways of segmenting the data, which may nonetheless both be valid. Several evaluation methods have been proposed to date, but they do not insist on consistency of the evaluated model. We introduce a new evaluation methodology, which enforces correctness of segmentation boundaries while also assuring consistency of segmentation decisions across the corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,805
inproceedings
hazem-daille-2016-bilingual
Bilingual Lexicon Extraction at the Morpheme Level Using Distributional Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1496/
Hazem, Amir and Daille, B{\'e}atrice
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3110--3115
Bilingual lexicon extraction from comparable corpora is usually based on distributional methods when dealing with single word terms (SWT). These methods often treat SWT as single tokens without considering their compositional property. However, many SWT are compositional (composed of roots and affixes) and this information, if taken into account, can be very useful for matching translational pairs, especially for infrequent terms where distributional methods often fail. For instance, the English compound \textit{xenograft}, which is composed of the root \textit{xeno} and the lexeme \textit{graft}, can be translated into French compositionally by aligning each of its elements (\textit{xeno} with \textit{x{\'e}no} and \textit{graft} with \textit{greffe}), resulting in the translation \textit{x{\'e}nogreffe}. In this paper, we experiment with several distributional models at the morpheme level that we apply to perform compositional translation of a subset of French and English compounds. We show promising results using distributional analysis at the root and affix levels. We also show that the adapted approach significantly improves bilingual lexicon extraction from comparable corpora compared to the approach at the word level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,806
inproceedings
sylak-glassman-etal-2016-remote
Remote Elicitation of Inflectional Paradigms to Seed Morphological Analysis in Low-Resource Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1497/
Sylak-Glassman, John and Kirov, Christo and Yarowsky, David
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3116--3120
Structured, complete inflectional paradigm data exists for very few of the world's languages, but is crucial to training morphological analysis tools. We present methods inspired by linguistic fieldwork for gathering inflectional paradigm data in a machine-readable, interoperable format from remotely-located speakers of any language. Informants are tasked with completing language-specific paradigm elicitation templates. Templates are constructed by linguists using grammatical reference materials to ensure completeness. Each cell in a template is associated with contextual prompts designed to help informants with varying levels of linguistic expertise (from professional translators to untrained native speakers) provide the desired inflected form. To facilitate downstream use in interoperable NLP/HLT applications, each cell is also associated with a language-independent machine-readable set of morphological tags from the UniMorph Schema. This data is useful for seeding morphological analysis and generation software, particularly when the data is representative of the range of surface morphological variation in the language. At present, we have obtained 792 lemmas and 25,056 inflected forms from 15 languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,807
inproceedings
kirov-etal-2016-large
Very-large Scale Parsing and Normalization of {W}iktionary Morphological Paradigms
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1498/
Kirov, Christo and Sylak-Glassman, John and Que, Roger and Yarowsky, David
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3121--3126
Wiktionary is a large-scale resource for cross-lingual lexical information with great potential utility for machine translation (MT) and many other NLP tasks, especially automatic morphological analysis and generation. However, it is designed primarily for human viewing rather than machine readability, and presents numerous challenges for generalized parsing and extraction due to a lack of standardized formatting and grammatical descriptor definitions. This paper describes a large-scale effort to automatically extract and standardize the data in Wiktionary and make it available for use by the NLP research community. The methodological innovations include a multidimensional table parsing algorithm, a cross-lexeme, token-frequency-based method of separating inflectional form data from grammatical descriptors, the normalization of grammatical descriptors to a unified annotation scheme that accounts for cross-linguistic diversity, and a verification and correction process that exploits within-language, cross-lexeme table format consistency to minimize human effort. The effort described here resulted in the extraction of a uniquely large normalized resource of nearly 1,000,000 inflectional paradigms across 350 languages. Evaluation shows that even though the data is extracted using a language-independent approach, it is comparable in quantity and quality to data extracted using hand-tuned, language-specific approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,808
inproceedings
sun-etal-2016-appdialogue
{A}pp{D}ialogue: Multi-App Dialogues for Intelligent Assistants
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1499/
Sun, Ming and Chen, Yun-Nung and Hua, Zhenhao and Tamres-Rudnicky, Yulian and Dash, Arnab and Rudnicky, Alexander
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3127--3132
Users will interact with an individual app on smart devices (e.g., phone, TV, car) to fulfill a specific goal (e.g. find a photographer), but users may also pursue more complex tasks that span multiple domains and apps (e.g. plan a wedding ceremony). Planning and executing such multi-app tasks is typically managed by users themselves, given the global context awareness required. To investigate how users arrange domains/apps to fulfill complex tasks in their daily life, we conducted a user study with 14 participants to collect such data from their Android smart phones. This document 1) summarizes the techniques used in the data collection and 2) provides a brief statistical description of the data. This data guides future directions for researchers in the fields of conversational agents and personal assistants. This data is available at \url{http://AppDialogue.com}.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,809
inproceedings
petukhova-etal-2016-modelling
Modelling Multi-issue Bargaining Dialogues: Data Collection, Annotation Design and Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1500/
Petukhova, Volha and Stevens, Christopher and de Weerd, Harmen and Taatgen, Niels and Cnossen, Fokie and Malchanau, Andrei
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3133--3140
The paper describes experimental dialogue data collection activities, as well as the creation of a semantically annotated corpus, undertaken within the EU-funded METALOGUE project (www.metalogue.eu). The project aims to develop a dialogue system with flexible dialogue management to enable the system's adaptive, reactive, interactive and proactive dialogue behavior in setting goals, choosing appropriate strategies and monitoring numerous parallel interpretation and management processes. To achieve these goals, a negotiation (or, more precisely, multi-issue bargaining) scenario has been considered as the specific setting and application domain. The dialogue corpus forms the basis for the design of task and interaction models of participants' negotiation behavior, and subsequently for the development of a dialogue system capable of replacing one of the negotiators. The METALOGUE corpus will be released to the community for research purposes.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,810
inproceedings
konovalov-etal-2016-negochat
The Negochat Corpus of Human-agent Negotiation Dialogues
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1501/
Konovalov, Vasily and Artstein, Ron and Melamud, Oren and Dagan, Ido
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3141--3145
Annotated in-domain corpora are crucial to the successful development of dialogue systems of automated agents, and in particular for developing natural language understanding (NLU) components of such systems. Unfortunately, such important resources are scarce. In this work, we introduce an annotated natural language human-agent dialogue corpus in the negotiation domain. The corpus was collected using Amazon Mechanical Turk following the {\textquoteleft}Wizard-Of-Oz' approach, where a {\textquoteleft}wizard' human translates the participants' natural language utterances in real time into a semantic language. Once dialogue collection was completed, utterances were annotated with intent labels by two independent annotators, achieving high inter-annotator agreement. Our initial experiments with an SVM classifier show that automatically inferring such labels from the utterances is far from trivial. We make our corpus publicly available to serve as an aid in the development of dialogue systems for negotiation agents, and suggest that analogous corpora can be created following our methodology and using our available source code. To the best of our knowledge this is the first publicly available negotiation dialogue corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,811
inproceedings
higashinaka-etal-2016-dialogue
The dialogue breakdown detection challenge: Task description, datasets, and evaluation metrics
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1502/
Higashinaka, Ryuichiro and Funakoshi, Kotaro and Kobayashi, Yuka and Inaba, Michimasa
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3146--3150
Dialogue breakdown detection is a promising technique in dialogue systems. To promote the research and development of such a technique, we organized a dialogue breakdown detection challenge where the task is to detect a system's inappropriate utterances that lead to dialogue breakdowns in chat. This paper describes the design, datasets, and evaluation metrics for the challenge as well as the methods and results of the submitted runs of the participants.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,812
inproceedings
bunt-etal-2016-dialogbank
The {D}ialog{B}ank
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1503/
Bunt, Harry and Petukhova, Volha and Malchanau, Andrei and Wijnhoven, Kars and Fang, Alex
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3151--3158
This paper presents the DialogBank, a new language resource consisting of dialogues with gold standard annotations according to the ISO 24617-2 standard. Some of these dialogues have been taken from existing corpora and have been re-annotated according to the ISO standard; others have been annotated directly according to the standard. The ISO 24617-2 annotations have been designed according to the ISO principles for semantic annotation, as formulated in ISO 24617-6. The DialogBank makes use of three alternative representation formats, which are shown to be interoperable.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,813
inproceedings
liu-etal-2016-coordinating
Coordinating Communication in the Wild: The Artwalk Dialogue Corpus of Pedestrian Navigation and Mobile Referential Communication
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1504/
Liu, Kris and Fox Tree, Jean and Walker, Marilyn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3159--3166
The Artwalk Corpus is a collection of 48 mobile phone conversations between 24 pairs of friends and 24 pairs of strangers performing a novel, naturalistically-situated referential communication task. This task produced dialogues which, on average, are just under 40 minutes. The task requires the identification of public art while walking around and navigating pedestrian routes in the downtown area of Santa Cruz, California. The task involves a Director on the UCSC campus with access to maps providing verbal instructions to a Follower executing the task. The task provides a setting for real-world situated dialogic language and is designed to: (1) elicit entrainment and coordination of referring expressions between the dialogue participants, (2) examine the effect of friendship on dialogue strategies, and (3) examine how the need to complete the task while negotiating myriad, unanticipated events in the real world {\textemdash} such as avoiding cars and other pedestrians {\textemdash} affects linguistic coordination and other dialogue behaviors. Previous work on entrainment and coordinating communication has primarily focused on similar tasks in laboratory settings where there are no interruptions and no need to navigate from one point to another in a complex space. The corpus provides a general resource for studies on how coordinated task-oriented dialogue changes when we move outside the laboratory and into the world. It can also be used for studies of entrainment in dialogue, and the form and style of pedestrian instruction dialogues, as well as the effect of friendship on dialogic behaviors.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,814
inproceedings
llanos-etal-2016-managing
Managing Linguistic and Terminological Variation in a Medical Dialogue System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1505/
Llanos, Leonardo Campillos and Bouamor, Dhouha and Zweigenbaum, Pierre and Rosset, Sophie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3167--3173
We introduce a dialogue task between a virtual patient and a doctor where the dialogue system, playing the patient part in a simulated consultation, must reconcile a specialized level, to understand what the doctor says, and a lay level, to output realistic patient-language utterances. This increases the challenges in the analysis and generation phases of the dialogue. This paper proposes methods to manage linguistic and terminological variation in that situation and illustrates how they help produce realistic dialogues. Our system makes use of lexical resources for processing synonyms, inflectional and derivational variants, or pronoun/verb agreement. In addition, specialized knowledge is used for processing medical roots and affixes, ontological relations and concept mapping, and for generating lay variants of terms according to the patient's non-expert discourse. We also report the results of a first evaluation carried out by 11 users interacting with the system. We evaluated the non-contextual analysis module, which supports the Spoken Language Understanding step. The annotation of task domain entities obtained 91.8{\%} Precision, 82.5{\%} Recall, 86.9{\%} F-measure, a 19.0{\%} Slot Error Rate, and a 32.9{\%} Sentence Error Rate.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,815
inproceedings
gokcen-etal-2016-corpus
A Corpus of Word-Aligned Asked and Anticipated Questions in a Virtual Patient Dialogue System
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1506/
Gokcen, Ajda and Jaffe, Evan and Erdmann, Johnsey and White, Michael and Danforth, Douglas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3174--3179
We present a corpus of virtual patient dialogues to which we have added manually annotated gold standard word alignments. Since each question asked by a medical student in the dialogues is mapped to a canonical, anticipated version of the question, the corpus implicitly defines a large set of paraphrase (and non-paraphrase) pairs. We also present a novel process for selecting the most useful data to annotate with word alignments and for ensuring consistent paraphrase status decisions. In support of this process, we have enhanced the earlier Edinburgh alignment tool (Cohn et al., 2008) and revised and extended the Edinburgh guidelines, in particular adding guidance intended to ensure that the word alignments are consistent with the overall paraphrase status decision. The finished corpus and the enhanced alignment tool are made freely available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,816
inproceedings
prevot-etal-2016-cup
A {CUP} of {C}o{F}ee: A large Collection of feedback Utterances Provided with communicative function annotations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1507/
Pr{\'e}vot, Laurent and Gorisch, Jan and Bertrand, Roxane
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3180--3185
There have been several previous attempts to annotate communicative functions of verbal feedback utterances in English. Here, we suggest an annotation scheme for verbal and non-verbal feedback utterances in French including the categories base, attitude, previous and visual. The data comprises conversations, maptasks and negotiations from which we extracted ca. 13,000 candidate feedback utterances and gestures. 12 students were recruited for the annotation campaign of ca. 9,500 instances. Each instance was annotated by between 2 and 7 raters. The evaluation of the annotation agreement resulted in an average best-pair kappa of 0.6. While the base category, with the values acknowledgement, evaluation, answer and elicit, achieves good agreement, this is not the case for the other main categories. The data sets, which also include automatic extractions of lexical, positional and acoustic features, are freely available and will further be used in machine learning classification experiments to analyse the form-function relationship of feedback.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,817
inproceedings
sanders-etal-2016-palabras
{P}alabras: Crowdsourcing Transcriptions of {L}2 Speech
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1508/
Sanders, Eric and Burgos, Pepi and Cucchiarini, Catia and van Hout, Roeland
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3186--3191
We developed a web application for crowdsourcing transcriptions of Dutch words spoken by Spanish L2 learners. In this paper we discuss the design of the application and the influence of metadata and various forms of feedback. Useful data were obtained from 159 participants, with an average of over 20 transcriptions per item, which seems a satisfactory result for this type of research. Informing participants about how many items they still had to complete, rather than how many they had already completed, turned out to be an incentive to do more items. Assigning participants a score for their performance made it more attractive for them to carry out the transcription task, but this seemed to influence their performance. We discuss possible advantages and disadvantages in connection with the aim of the research and consider possible lessons for designing future experiments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,818
inproceedings
megyesi-etal-2016-uppsala
The {U}ppsala Corpus of Student Writings: Corpus Creation, Annotation, and Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1509/
Megyesi, Be{\'a}ta and N{\"a}sman, Jesper and Palm{\'e}r, Anne
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3192--3199
The Uppsala Corpus of Student Writings consists of Swedish texts produced as part of a national test of students ranging in age from nine (in year three of primary school) to nineteen (the last year of upper secondary school) who are studying either Swedish or Swedish as a second language. National tests have been collected since 1996. The corpus currently consists of 2,500 texts containing over 1.5 million tokens. Parts of the texts have been annotated on several linguistic levels using existing state-of-the-art natural language processing tools. In order to make the corpus easy to interpret for scholars in the humanities, we chose the CoNLL format instead of an XML-based representation. Since spelling and grammatical errors are common in student writings, the texts are automatically corrected while keeping the original tokens in the corpus. Each token is annotated with part-of-speech and morphological features as well as syntactic structure. The main purpose of the corpus is to facilitate the systematic and quantitative empirical study of the writings of various student groups based on gender, geographic area, age, grade awarded or a combination of these, synchronically or diachronically. The intention is for this to be a monitor corpus, currently under development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,819
inproceedings
berkling-2016-corpus
Corpus for Children's Writing with Enhanced Output for Specific Spelling Patterns (2nd and 3rd Grade)
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1510/
Berkling, Kay
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3200--3206
This paper describes the collection of the H1 Corpus of weekly writing by children in 2nd and 3rd grades, aged 7-11, over the course of 3 months. The texts were collected within the normal classroom setting by the teacher. Texts of children whose parents signed permission to donate the texts to science were collected and transcribed. The corpus consists of the elicitation techniques, an overview of the data collected and the transcriptions of the texts both with and without spelling errors, aligned on a word-by-word basis, as well as the scanned-in texts. The corpus is available for research via the Linguistic Data Consortium (LDC). Researchers are strongly encouraged to make additional annotations and improvements and return it to the public domain via LDC.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,820
inproceedings
mendes-etal-2016-cople2
The {COPLE}2 corpus: a learner corpus for {P}ortuguese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1511/
Mendes, Am{\'a}lia and Antunes, Sandra and Janssen, Maarten and Gon{\c{c}}alves, Anabela
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3207--3214
We present the COPLE2 corpus, a learner corpus of Portuguese that includes written and spoken texts produced by learners of Portuguese as a second or foreign language. The corpus currently includes a total of 182,474 tokens and 978 texts, classified according to the CEFR scales. The original handwritten productions are transcribed in TEI-compliant XML format and keep a record of all the original information, such as reformulations, insertions and corrections made by the teacher, while the recordings are transcribed and aligned with EXMARaLDA. The TEITOK environment enables different views of the same document (XML, student version, corrected version), a CQP-based search interface, the POS, lemmatization and normalization of the tokens, and will soon be used for error annotation in stand-off format. The corpus has already been a source of data for phonological, lexical and syntactic interlanguage studies and will be used for a data-informed selection of language features for each proficiency level.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,821
inproceedings
wottawa-adda-decker-2016-french
{F}rench Learners Audio Corpus of {G}erman Speech ({FLACGS})
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1512/
Wottawa, Jane and Adda-Decker, Martine
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3215--3219
The French Learners Audio Corpus of German Speech (FLACGS) was created to compare German speech production of German native speakers (GG) and French learners of German (FG) across three speech production tasks of increasing production complexity: repetition, reading and picture description. 40 speakers, 20 GG and 20 FG, performed each of the three tasks, which in total leads to approximately 7h of speech. The corpus was manually transcribed and automatically aligned. Analyses that can be performed on this type of corpus include, for instance, segmental differences in the speech production of L2 learners compared to native speakers. We chose the realization of the velar nasal consonant engma. In spoken French, engma does not appear in a VCV context, which leads to production difficulties in FG. With increasing speech production complexity (reading and picture description), engma is realized as engma + plosive by FG in over 50{\%} of the cases. The results of a two-way ANOVA with unequal sample sizes on the durations of the different realizations of engma indicate that duration is a reliable factor to distinguish between engma and engma + plosive in FG productions compared to the engma productions in GG in a VCV context. The FLACGS corpus thus allows the study of L2 production and perception.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,822
inproceedings
stefanec-etal-2016-croatian
{C}roatian Error-Annotated Corpus of Non-Professional Written Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1513/
{\v{S}}tefanec, Vanja and Ljube{\v{s}}i{\'c}, Nikola and Kraljevi{\'c}, Jelena Kuva{\v{c}}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3220--3226
In this paper, the authors present the Croatian corpus of non-professional written language. Consisting of two subcorpora, i.e. the clinical subcorpus of written texts produced by speakers with various types of language disorders and the healthy speakers subcorpus, and annotated on several linguistic levels, it offers an opportunity for different lines of research. The authors present the corpus structure, describe the sampling methodology, explain the levels of annotation, and give some very basic statistics. On the basis of data from the corpus, existing language technologies for Croatian are adapted in order to be implemented in a platform facilitating text production for speakers with language disorders. In this respect, several analyses of the corpus data and a basic evaluation of the developed technologies are presented.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,823
inproceedings
hubert-etal-2016-training
Training {\&} Quality Assessment of an Optical Character Recognition Model for {N}orthern {H}aida
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1514/
Hubert, Isabell and Arppe, Antti and Lachler, Jordan and Santos, Eddie A.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3227--3234
We are presenting our work on the creation of the first optical character recognition (OCR) model for Northern Haida, also known as Masset or Xaad Kil, a nearly extinct First Nations language spoken in the Haida Gwaii archipelago in British Columbia, Canada. We address the challenges of training an OCR model for a language with an extensive, non-standard Latin character set as follows: (1) We have compared various training approaches and present the results of practical analyses to maximize recognition accuracy and minimize manual labor. An approach using just one or two pages of Source Images directly performed better than the Image Generation approach, and better than models based on three or more pages. Analyses also suggest that a character`s frequency is directly correlated with its recognition accuracy. (2) We present an overview of currently available OCR accuracy analysis tools. (3) We have ported the once de facto standard OCR accuracy tools to be able to cope with Unicode input. Our work adds to a growing body of research on OCR for particularly challenging character sets, and contributes to creating the largest electronic corpus for this severely endangered language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,824
inproceedings
gibbon-2016-legacy
Legacy language atlas data mining: mapping Kru languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1515/
Gibbon, Dafydd
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3235--3242
An online tool based on dialectometric methods, DistGraph, is applied to a group of Kru languages of C{\^o}te d`Ivoire, Liberia and Burkina Faso. The inputs to this resource consist of tables of languages x linguistic features (e.g. phonological, lexical or grammatical), and statistical and graphical outputs are generated which show similarities and differences between the languages in terms of the features as virtual distances. In the present contribution, attention is focussed on the consonant systems of the languages, a traditional starting point for language comparison. The data are harvested from a legacy language data resource based on fieldwork in the 1970s and 1980s, a language atlas of the Kru languages. The method on which the online tool is based extends beyond documentation of individual languages to the documentation of language groups, and supports difference-based prioritisation in education programmes, decisions on language policy and documentation and conservation funding, as well as research on language typology and heritage documentation of history and migration.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,825
inproceedings
ohya-2016-data
Data Formats and Management Strategies from the Perspective of Language Resource Producers {\textemdash} Personal Diachronic and Social Synchronic Data Sharing {\textemdash}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1516/
Ohya, Kazushi
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3243--3248
This is a report of findings from ongoing language documentation research based on three consecutive projects from 2008 to 2016. In light of this research, we propose that (1) we should stand on the side of language resource producers to enhance research on language processing, supporting personal data management in addition to social data sharing; (2) this support leads to adopting simple data formats instead of the multi-link-path data models proposed as international standards up to the present; and (3) we should set up a framework for total language resource study that includes not only pivotal data formats such as standard formats, but also the surroundings of data formation, in order to capture a wider range of language activities, e.g. annotation, hesitant language formation, and reference-referent relations. A study of this framework is expected to be a foundation for rebuilding man-machine interface studies in which we seek to observe the generative processes of informational symbols in order to establish a high-affinity interface with regard to documentation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,826
inproceedings
van-den-heuvel-etal-2016-curation
Curation of {D}utch Regional Dictionaries
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1517/
van den Heuvel, Henk and Sanders, Eric and van der Sijs, Nicoline
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3249--3255
This paper describes the process of semi-automatically converting dictionaries from paper to structured text (database) and the integration of these into the CLARIN infrastructure in order to make the dictionaries accessible and retrievable for the research community. The case study at hand is that of the curation of 42 fascicles of the Dictionaries of the Brabantic and Limburgian dialects, and 6 fascicles of the Dictionary of dialects in Gelderland.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,827
inproceedings
soria-etal-2016-fostering
Fostering digital representation of {EU} regional and minority languages: the Digital Language Diversity Project
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1518/
Soria, Claudia and Russo, Irene and Quochi, Valeria and Hicks, Davyth and Gurrutxaga, Antton and Sarhimaa, Anneli and Tuomisto, Matti
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3256--3260
Poor digital representation of minority languages further prevents their usability on digital media and devices. The Digital Language Diversity Project, a three-year project funded under the Erasmus+ programme, aims at addressing the problem of low digital representation of EU regional and minority languages by giving their speakers the intellectual and practical skills to create, share, and reuse online digital content. Availability of digital content and technical support to use it are essential prerequisites for the development of language-based digital applications, which in turn can boost digital usage of these languages. In this paper we introduce the project, its aims, objectives and current activities for sustaining the digital usability of minority languages through adult education.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,828
inproceedings
prys-etal-2016-cysill
Cysill Ar-lein: A Corpus of Written Contemporary {W}elsh Compiled from an On-line Spelling and Grammar Checker
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1519/
Prys, Delyth and Prys, Gruffudd and Jones, Dewi Bryn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3261--3264
This paper describes the use of a free, on-line language spelling and grammar checking aid as a vehicle for the collection of a significant (31 million words and rising) corpus of text for academic research in the context of less resourced languages where such data in sufficient quantities are often unavailable. It describes two versions of the corpus: the texts as submitted, prior to the correction process, and the texts following the user`s incorporation of any suggested changes. An overview of the corpus' contents is given and an analysis of use including usage statistics is also provided. Issues surrounding privacy and the anonymization of data are explored as is the data`s potential use for linguistic analysis, lexical research and language modelling. The method used for gathering this corpus is believed to be unique, and is a valuable addition to corpus studies in a minority language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,829
inproceedings
wieling-etal-2016-alt
{ALT} Explored: Integrating an Online Dialectometric Tool and an Online Dialect Atlas
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1520/
Wieling, Martijn and Sassolini, Eva and Cucurullo, Sebastiana and Montemagni, Simonetta
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3265--3272
In this paper, we illustrate the integration of an online dialectometric tool, Gabmap, together with an online dialect atlas, the Atlante Lessicale Toscano (ALT-Web). By using a newly created url-based interface to Gabmap, ALT-Web is able to take advantage of the sophisticated dialect visualization and exploration options incorporated in Gabmap. For example, distribution maps showing the distribution in the Tuscan dialect area of a specific dialectal form (selected via the ALT-Web website) are easily obtainable. Furthermore, the complete ALT-Web dataset as well as subsets of the data (selected via the ALT-Web website) can be automatically uploaded and explored in Gabmap. By combining these two online applications, macro- and micro-analyses of dialectal data (respectively offered by Gabmap and ALT-Web) are effectively and dynamically combined.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,830
inproceedings
strassel-tracey-2016-lorelei
{LORELEI} Language Packs: Data, Tools, and Resources for Technology Development in Low Resource Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1521/
Strassel, Stephanie and Tracey, Jennifer
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3273--3280
In this paper, we describe the textual linguistic resources in nearly 3 dozen languages being produced by Linguistic Data Consortium for DARPA`s LORELEI (Low Resource Languages for Emergent Incidents) Program. The goal of LORELEI is to improve the performance of human language technologies for low-resource languages and enable rapid re-training of such technologies for new languages, with a focus on the use case of deployment of resources in sudden emergencies such as natural disasters. Representative languages have been selected to provide broad typological coverage for training, and surprise incident languages for testing will be selected over the course of the program. Our approach treats the full set of language packs as a coherent whole, maintaining LORELEI-wide specifications, tagsets, and guidelines, while allowing for adaptation to the specific needs created by each language. Each representative language corpus, therefore, both stands on its own as a resource for the specific language and forms part of a large multilingual resource for broader cross-language technology development.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,831
inproceedings
nordhoff-etal-2016-alaskan
The Alaskan Athabascan Grammar Database
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1523/
Nordhoff, Sebastian and Tuttle, Siri and Lovick, Olga
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3286--3290
This paper describes a repository of example sentences in three endangered Athabascan languages: Koyukon, Upper Tanana, Lower Tanana. The repository allows researchers or language teachers to browse the example sentence corpus to either investigate the languages or to prepare teaching materials. The originally heterogeneous text collection was imported into a SOLR store via the POIO bridge. This paper describes the requirements, implementation, advantages and drawbacks of this approach and discusses the potential to apply it for other languages of the Athabascan family or beyond.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,833
inproceedings
nasution-etal-2016-constraint
Constraint-Based Bilingual Lexicon Induction for Closely Related Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1524/
Nasution, Arbi Haza and Murakami, Yohei and Ishida, Toru
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3291--3298
The lack or absence of parallel and comparable corpora makes bilingual lexicon extraction a difficult task for low-resource languages. The pivot language and cognate recognition approaches have proven useful for inducing bilingual lexicons for such languages. We analyze the features of closely related languages and define a semantic constraint assumption. Based on this assumption, we propose a constraint-based bilingual lexicon induction method for closely related languages that extends the constraints and translation pair candidates of the recent pivot language approach. We further define three constraint sets based on language characteristics. In this paper, two controlled experiments are conducted. The former involves four closely related language pairs with different degrees of language pair similarity, and the latter focuses on sense connectivity between non-pivot words and pivot words. We evaluate our results with F-measure. The results indicate that our method works better on voluminous input dictionaries and high-similarity languages. Finally, we introduce a strategy to use the proper constraint sets for different goals and language characteristics.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,834
inproceedings
otrusina-smrz-2016-wtf
{WTF}-{LOD} - A New Resource for Large-Scale {NER} Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1525/
Otrusina, Lubomir and Smrz, Pavel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3299--3302
This paper introduces the Web TextFull linkage to Linked Open Data (WTF-LOD) dataset, intended for large-scale evaluation of named entity recognition (NER) systems. First, we present the process of collecting data from the largest publicly available textual corpora, including Wikipedia dumps, monthly runs of the CommonCrawl, and ClueWeb09/12. We discuss similarities and differences with related initiatives such as WikiLinks and WikiReverse. Our work primarily focuses on links from {\textquotedblleft}textfull{\textquotedblright} documents (links surrounded by text that provides a useful context for entity linking), de-duplication of the data, and advanced cleaning procedures. The presented statistics demonstrate that the collected data forms one of the largest available resources of its kind. They also prove the suitability of the result for complex NER evaluation campaigns, including an analysis of the most ambiguous name mentions appearing in the data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,835
inproceedings
bleicken-etal-2016-using
Using a Language Technology Infrastructure for {G}erman in order to Anonymize {G}erman {S}ign {L}anguage Corpus Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1526/
Bleicken, Julian and Hanke, Thomas and Salden, Uta and Wagner, Sven
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3303--3306
For publishing sign language corpus data on the web, anonymization is crucial even if it is impossible to hide the visual appearance of the signers: In a small community, even vague references to third persons may be enough to identify those persons. In the case of the DGS Korpus (German Sign Language corpus) project, we want to publish data as a contribution to the cultural heritage of the sign language community while annotation of the data is still ongoing. This poses the question how well anonymization can be achieved given that no full linguistic analysis of the data is available. Basically, we combine analysis of all data that we have, including named entity recognition on translations into German. For this, we use the WebLicht language technology infrastructure. We report on the reliability of these methods in this special context and also illustrate how the anonymization of the video data is technically achieved in order to minimally disturb the viewer.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,836
inproceedings
dojchinovski-etal-2016-crowdsourced
Crowdsourced Corpus with Entity Salience Annotations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1527/
Dojchinovski, Milan and Reddy, Dinesh and Kliegr, Tom{\'a}{\v{s}} and Vitvar, Tom{\'a}{\v{s}} and Sack, Harald
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3307--3311
In this paper, we present a crowdsourced dataset which adds entity salience (importance) annotations to the Reuters-128 dataset, which is a subset of Reuters-21578. The dataset is distributed under a free license and published in the NLP Interchange Format, which fosters interoperability and re-use. We show the potential of the dataset on the task of learning an entity salience classifier and report on the results from several experiments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,837
inproceedings
oramas-etal-2016-elmd
{ELMD}: An Automatically Generated Entity Linking Gold Standard Dataset in the Music Domain
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1528/
Oramas, Sergio and Anke, Luis Espinosa and Sordo, Mohamed and Saggion, Horacio and Serra, Xavier
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3312--3317
In this paper we present a gold standard dataset for Entity Linking (EL) in the Music Domain. It contains thousands of musical named entities such as Artist, Song or Record Label, which have been automatically annotated on a set of artist biographies coming from the Music website and social network Last.fm. The annotation process relies on the analysis of the hyperlinks present in the source texts and on a voting-based algorithm for EL, which considers, for each entity mention in text, the degree of agreement across three state-of-the-art EL systems. Manual evaluation shows that EL Precision is at least 94{\%}, and due to its tunable nature, it is possible to derive annotations favouring higher Precision or Recall, at will. We make the annotated dataset available along with evaluation data and the code.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,838
inproceedings
littell-etal-2016-bridge
Bridge-Language Capitalization Inference in {W}estern {I}ranian: {S}orani, {K}urmanji, Zazaki, and {T}ajik
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1529/
Littell, Patrick and Mortensen, David R. and Goyal, Kartik and Dyer, Chris and Levin, Lori
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3318--3324
In Sorani Kurdish, one of the most useful orthographic features in named-entity recognition {--} capitalization {--} is absent, as the language`s Perso-Arabic script does not make a distinction between uppercase and lowercase letters. We describe a system for deriving an inferred capitalization value from closely related languages by phonological similarity, and illustrate the system using several related Western Iranian languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,839
inproceedings
kilicoglu-etal-2016-annotating
Annotating Named Entities in Consumer Health Questions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1530/
Kilicoglu, Halil and Abacha, Asma Ben and Mrabet, Yassine and Roberts, Kirk and Rodriguez, Laritza and Shooshan, Sonya and Demner-Fushman, Dina
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3325--3332
We describe a corpus of consumer health questions annotated with named entities. The corpus consists of 1548 de-identified questions about diseases and drugs, written in English. We defined 15 broad categories of biomedical named entities for annotation. A pilot annotation phase in which a small portion of the corpus was double-annotated by four annotators was followed by a main phase in which double annotation was carried out by six annotators, and a reconciliation phase in which all annotations were reconciled by an expert. We conducted the annotation in two modes, manual and assisted, to assess the effect of automatic pre-annotation and calculated inter-annotator agreement. We obtained moderate inter-annotator agreement; assisted annotation yielded slightly better agreement and fewer missed annotations than manual annotation. Due to the complex nature of biomedical entities, we paid particular attention to nested entities, for which we obtained slightly lower inter-annotator agreement, confirming that annotating nested entities is somewhat more challenging. To our knowledge, the corpus is the first of its kind for consumer health text and is publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,840
inproceedings
brasoveanu-etal-2016-regional
A Regional News Corpora for Contextualized Entity Discovery and Linking
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1531/
Bra{\c{s}}oveanu, Adrian and Nixon, Lyndon J.B. and Weichselbraun, Albert and Scharl, Arno
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3333--3338
This paper presents a German corpus for Named Entity Linking (NEL) and Knowledge Base Population (KBP) tasks. We describe the annotation guideline, the annotation process, NIL clustering techniques and conversion to popular NEL formats such as NIF and TAC that have been used to construct this corpus based on news transcripts from the German regional broadcaster RBB (Rundfunk Berlin Brandenburg). Since creating such language resources requires significant effort, the paper also discusses how to derive additional evaluation resources for tasks like named entity contextualization or ontology enrichment by exploiting the links between named entities from the annotated corpus. The paper concludes with an evaluation that shows how several well-known NEL tools perform on the corpus, a discussion of the evaluation results, and with suggestions on how to keep evaluation corpora and datasets up to date.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,841
inproceedings
brummer-etal-2016-dbpedia
{DB}pedia Abstracts: A Large-Scale, Open, Multilingual {NLP} Training Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1532/
Br{\"ummer, Martin and Dojchinovski, Milan and Hellmann, Sebastian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3339--3343
The ever increasing importance of machine learning in Natural Language Processing is accompanied by an equally increasing need for large-scale training and evaluation corpora. Due to its size, openness and relative quality, Wikipedia has already been a source of such data, but on a limited scale. This paper introduces the DBpedia Abstract Corpus, a large-scale, open corpus of annotated Wikipedia texts in six languages, featuring over 11 million texts and over 97 million entity links. The properties of the Wikipedia texts are described, as well as the corpus creation process, its format and interesting use cases, like Named Entity Linking training and evaluation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,842
inproceedings
eiselen-2016-government
Government Domain Named Entity Recognition for {S}outh {A}frican Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1533/
Eiselen, Roald
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3344--3348
This paper describes the named entity language resources developed as part of a development project for the South African languages. The development efforts focused on creating protocols and annotated data sets with at least 15,000 annotated named entity tokens for ten of the official South African languages. The description of the protocols and annotated data sets provide an overview of the problems encountered during the annotation of the data sets. Based on these annotated data sets, CRF named entity recognition systems are developed that leverage existing linguistic resources. The newly created named entity recognisers are evaluated, with F-scores of between 0.64 and 0.77, and error analysis is performed to identify possible avenues for improving the quality of the systems.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,843
inproceedings
ehrmann-etal-2016-named
Named Entity Resources - Overview and Outlook
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1534/
Ehrmann, Maud and Nouvel, Damien and Rosset, Sophie
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3349--3356
Recognition of real-world entities is crucial for most NLP applications. Since its introduction some twenty years ago, named entity processing has undergone a significant evolution with, among others, the definition of new tasks (e.g. entity linking) and the emergence of new types of data (e.g. speech transcriptions, micro-blogging). These pose certainly new challenges which affect not only methods and algorithms but especially linguistic resources. Where do we stand with respect to named entity resources? This paper aims at providing a systematic overview of named entity resources, accounting for qualities such as multilingualism, dynamicity and interoperability, and to identify shortfalls in order to guide future developments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,844
inproceedings
garcia-2016-incorporating
Incorporating Lexico-semantic Heuristics into Coreference Resolution Sieves for Named Entity Recognition at Document-level
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1535/
Garcia, Marcos
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3357--3361
This paper explores the incorporation of lexico-semantic heuristics into a deterministic Coreference Resolution (CR) system for classifying named entities at document level. The highest-precision sieves of a CR tool are enriched both with a set of heuristics for merging named entities labeled with different classes and with some constraints that avoid the incorrect merging of similar mentions. Several tests show that this strategy improves both NER labeling and CR. The CR tool can be applied in combination with any system for named entity recognition using the CoNLL format, and brings benefits to text analytics tasks such as Information Extraction. Experiments were carried out in Spanish, using three different NER tools.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,845
inproceedings
sulea-etal-2016-using
Using Word Embeddings to Translate Named Entities
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1536/
{\c{S}}ulea, Octavia-Maria and Nisioi, Sergiu and Dinu, Liviu P.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3362--3366
In this paper we investigate the usefulness of neural word embeddings in the process of translating Named Entities (NEs) from a resource-rich language to a language low on resources relevant to the task at hand, introducing a novel, yet simple way of obtaining bilingual word vectors. We are inspired by observations in (Mikolov et al., 2013b), which show that training their word vector model on comparable corpora yields comparable vector space representations of those corpora, reducing the problem of translating words to that of finding a rotation matrix, and by results in (Zou et al., 2013), which showed that bilingual word embeddings can improve Chinese Named Entity Recognition (NER) and English-to-Chinese phrase translation. We use the sentence-aligned English-French EuroParl corpora and show that word embeddings extracted from a merged corpus (the corpus resulting from the merger of the two aligned corpora) can be used for NE translation. We extrapolate that word embeddings trained on merged parallel corpora are useful in Named Entity Recognition and Translation tasks for resource-poor languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,846
inproceedings
eichler-etal-2016-teg
{TEG}-{REP}: A corpus of Textual Entailment Graphs based on Relation Extraction Patterns
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1537/
Eichler, Kathrin and Xu, Feiyu and Uszkoreit, Hans and Hennig, Leonhard and Krause, Sebastian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3367--3372
The task of relation extraction is to recognize and extract relations between entities or concepts in texts. Dependency parse trees have become a popular source for discovering extraction patterns, which encode the grammatical relations among the phrases that jointly express relation instances. State-of-the-art weakly supervised approaches to relation extraction typically extract thousands of unique patterns only potentially expressing the target relation. Among these patterns, some are semantically equivalent, but differ in their morphological, lexical-semantic or syntactic form. Some express a relation that entails the target relation. We propose a new approach to structuring extraction patterns by utilizing entailment graphs, hierarchical structures representing entailment relations, and present a novel resource of gold-standard entailment graphs based on a set of patterns automatically acquired using distant supervision. We describe the methodology used for creating the dataset and present statistics of the resource as well as an analysis of inference types underlying the entailment decisions.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,847
inproceedings
fawei-etal-2016-passing
Passing a {USA} National Bar Exam: a First Corpus for Experimentation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1538/
Fawei, Biralatei and Wyner, Adam and Pan, Jeff
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3373--3378
Bar exams provide a key watershed by which legal professionals demonstrate their knowledge of the law and its application. Passing the bar entitles one to practice the law in a given jurisdiction. The bar provides an excellent benchmark for the performance of legal information systems, since passing the bar would arguably signal that a system has acquired key aspects of legal reasoning on a par with a human lawyer. The paper provides a corpus and experimental results with material derived from a real bar exam, treating the problem as a form of textual entailment from the question to an answer. The providers of the bar exam material set the Gold Standard, which is the answer key. The experiments were carried out using the Excitement Open Platform for textual entailment {\textquoteleft}out of the box'. The results and evaluation show that the tool can identify wrong answers (non-entailment) with a high F1 score, but it performs poorly in identifying the correct answer (entailment). The results provide a baseline performance measure against which to evaluate future improvements. The reasons for the poor performance are examined, and proposals are made to augment the tool in the future. The corpus facilitates experimentation by other researchers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,848
inproceedings
vo-popescu-2016-corpora
Corpora for Learning the Mutual Relationship between Semantic Relatedness and Textual Entailment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1539/
Vo, Ngoc Phuoc An and Popescu, Octavian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3379--3386
In this paper we present the creation of corpora annotated with both semantic relatedness (SR) scores and textual entailment (TE) judgments. In building this corpus we aimed at discovering whether there is any relationship between these two tasks, for the mutual benefit of resolving one of them by relying on the insights gained from the other. We considered corpora already annotated with TE judgments and proceeded to the manual annotation with SR scores. The RTE 1-4 corpora used in the PASCAL competition fit our need. The annotators worked independently of each other and did not have access to the TE judgments during annotation. The intuition that the two annotations are correlated received major support from this experiment, and this finding led to a system that uses this information to revise the initial estimates of SR scores. As semantic relatedness is one of the most general and difficult tasks in natural language processing, we expect that future systems will combine different sources of information in order to solve it. Our work suggests that textual entailment plays a quantifiable role in addressing it.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,849
inproceedings
ferrugento-etal-2016-topic
Can Topic Modelling benefit from Word Sense Information?
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1540/
Ferrugento, Adriana and Oliveira, Hugo Gon{\c{c}}alo and Alves, Ana and Rodrigues, Filipe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3387--3393
This paper proposes a new topic model that exploits word sense information in order to discover less redundant and more informative topics. Word sense information is obtained from WordNet and the discovered topics are groups of synsets, instead of mere surface words. A key feature is that all the known senses of a word are considered, with their probabilities. Alternative configurations of the model are described and compared to each other and to LDA, the most popular topic model. However, the obtained results suggest that there are no benefits of enriching LDA with word sense information.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,850
inproceedings
shrestha-etal-2016-age
Age and Gender Prediction on Health Forum Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1541/
Shrestha, Prasha and Rey-Villamizar, Nicolas and Sadeque, Farig and Pedersen, Ted and Bethard, Steven and Solorio, Thamar
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3394--3401
Health support forums have become a rich source of data that can be used to improve health care outcomes. A user profile, including information such as age and gender, can support targeted analysis of forum data. But users might not always disclose their age and gender. It is desirable then to be able to automatically extract this information from users' content. However, to the best of our knowledge there is no such resource for author profiling of health forum data. Here we present a large corpus, with close to 85,000 users, for profiling and also outline our approach and benchmark results to automatically detect a user`s age and gender from their forum posts. We use a mix of features from a user`s text as well as forum specific features to obtain accuracy well above the baseline, thus showing that both our dataset and our method are useful and valid.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,851
inproceedings
nisioi-2016-comparing
Comparing Speech and Text Classification on {ICNALE}
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1542/
Nisioi, Sergiu
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3402--3406
In this paper we explore and compare a speech and text classification approach on a corpus of native and non-native English speakers. We experiment on a subset of the International Corpus Network of Asian Learners of English containing the recorded speeches and the equivalent text transcriptions. Our results suggest a high correlation between the spoken and written classification results, showing that native accent is highly correlated with grammatical structures found in text.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,852
inproceedings
arsevska-etal-2016-monitoring
Monitoring Disease Outbreak Events on the Web Using Text-mining Approach and Domain Expert Knowledge
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1543/
Arsevska, Elena and Roche, Mathieu and Falala, Sylvain and Lancelot, Renaud and Chavernac, David and Hendrikx, Pascal and Dufour, Barbara
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3407--3411
Timeliness and precision in the detection of infectious animal disease outbreaks from information published on the web are crucial for prevention against their spread. We propose a generic method to enrich and extend the use of different expressions as queries in order to improve the acquisition of relevant disease-related pages on the web. Our method combines a text mining approach to extract terms from corpora of relevant disease outbreak documents, and domain expert elicitation (Delphi method) to propose expressions and to select relevant combinations between terms obtained with text mining. In this paper we evaluated the performance as queries of a number of expressions obtained with text mining and validated by a domain expert, as well as expressions proposed by a panel of 21 domain experts. We used African swine fever as an infectious animal disease model. The expressions obtained with text mining outperformed as queries the expressions proposed by domain experts. However, domain experts proposed expressions not extracted automatically. Our method is simple to conduct and flexible to adapt to any other infectious animal disease, and even to the public health domain.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,853
inproceedings
wu-etal-2016-developing
On Developing Resources for Patient-level Information Retrieval
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1544/
Wu, Stephen and Timmons, Tamara and Yates, Amy and Wang, Meikun and Bedrick, Steven and Hersh, William and Liu, Hongfang
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3412--3416
Privacy concerns have often served as an insurmountable barrier for the production of research and resources in clinical information retrieval (IR). We believe that both clinical IR research innovation and legitimate privacy concerns can be served by the creation of intra-institutional, fully protected resources. In this paper, we provide some principles and tools for IR resource-building in the unique problem setting of patient-level IR, following the tradition of the Cranfield paradigm.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,854
inproceedings
klassen-etal-2016-annotating
Annotating and Detecting Medical Events in Clinical Notes
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1545/
Klassen, Prescott and Xia, Fei and Yetisgen, Meliha
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3417--3421
Early detection and treatment of diseases that onset after a patient is admitted to a hospital, such as pneumonia, is critical to improving outcomes and reducing costs in healthcare. Previous studies (Tepper et al., 2013) showed that change-of-state events in clinical notes could be important cues for phenotype detection. In this paper, we extend the annotation schema proposed in (Klassen et al., 2014) to mark change-of-state events, diagnosis events, coordination, and negation. After completing the annotation, we build NLP systems to automatically identify named entities and medical events, which yield f-scores of 94.7{\%} and 91.8{\%}, respectively.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,855
inproceedings
sitaram-black-2016-speech
Speech Synthesis of Code-Mixed Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1546/
Sitaram, Sunayana and Black, Alan W
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3422--3428
Most Text to Speech (TTS) systems today assume that the input text is in a single language and is written in the same language that the text needs to be synthesized in. However, in bilingual and multilingual communities, code mixing or code switching occurs in speech, in which speakers switch between languages in the same utterance. Due to the popularity of social media, we now see code-mixing even in text in these multilingual communities. TTS systems capable of synthesizing such text need to be able to handle text that is written in multiple languages and scripts. Code-mixed text poses many challenges to TTS systems, such as language identification, spelling normalization and pronunciation modeling. In this work, we describe a preliminary framework for synthesizing code-mixed text. We carry out experiments on synthesizing code-mixed Hindi and English text. We find that there is a significant user preference for TTS systems that can correctly identify and pronounce words in different languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,856
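The sitaram-black-2016-speech abstract above names language identification as one of the challenges code-mixed text poses to TTS systems. As a minimal sketch of one common approach (character n-gram profiles per language), and not the authors' system, the snippet below labels each word of a code-mixed sentence; the toy seed vocabularies and helper names are assumptions made for illustration.

```python
from collections import Counter

def char_ngrams(word, n=3):
    """Character n-grams of a word, with boundary padding."""
    padded = f"#{word.lower()}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def build_profile(words, n=3):
    """Aggregate character n-gram counts over a seed word list."""
    profile = Counter()
    for w in words:
        profile.update(char_ngrams(w, n))
    return profile

def label_word(word, profiles, n=3):
    """Assign the language whose profile overlaps most with the word."""
    scores = {lang: sum(profile[g] for g in char_ngrams(word, n))
              for lang, profile in profiles.items()}
    return max(scores, key=scores.get)

# Toy seed vocabularies, stand-ins for real monolingual corpora.
profiles = {
    "en": build_profile(["the", "phone", "call", "meeting", "tomorrow"]),
    "hi": build_profile(["kal", "milte", "hain", "baat", "karna"]),
}

print([(w, label_word(w, profiles)) for w in "kal meeting hain".split()])
```

A per-word decision like this is only a first step; a real system would smooth decisions over the sentence before handing words to language-specific pronunciation models.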
inproceedings
chiarain-chasaide-2016-chatbot
Chatbot Technology with Synthetic Voices in the Acquisition of an Endangered Language: Motivation, Development and Evaluation of a Platform for {I}rish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1547/
Chiar{\'a}in, Neasa N{\'i} and Chasaide, Ailbhe N{\'i}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3429--3435
This paper describes the development and evaluation of a chatbot platform designed for the teaching/learning of Irish. The chatbot uses synthetic voices developed for the dialects of Irish. Speech-enabled chatbot technology offers a potentially powerful tool for dealing with the challenges of teaching/learning an endangered language where learners have limited access to native speaker models of the language and limited exposure to the language in a truly communicative setting. The sociolinguistic context that motivates the present development is explained. The evaluation of the chatbot was carried out in 13 schools by 228 pupils and consisted of two parts. Firstly, learners' opinions of the overall chatbot platform as a learning environment were elicited. Secondly, learners evaluated the intelligibility, quality, and attractiveness of the synthetic voices used in this platform. Results were overwhelmingly positive towards both the learning platform and the synthetic voices, and indicate that the time may now be ripe for language learning applications which exploit speech and language technologies. It is further argued that these technologies have a particularly vital role to play in the maintenance of the endangered language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,857
inproceedings
campbell-2016-chatr
{CHATR} the Corpus; a 20-year-old archive of Concatenative Speech Synthesis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1548/
Campbell, Nick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3436--3439
This paper reports the preservation of an old speech synthesis website as a corpus. CHATR was a revolutionary technique developed in the mid-nineties for concatenative speech synthesis. The method has since become the standard for high quality speech output by computer, although much of the current research is devoted to parametric or hybrid methods that employ smaller amounts of data and can be more easily tuned to individual voices. The system was first reported in 1994 and the website was functional in 1996. The ATR labs where this system was invented no longer exist, but the website has been preserved as a corpus containing 1537 samples of synthesised speech from that period (118 MB in aiff format) in 211 pages under various finely interrelated themes. The corpus can be accessed from www.speech-data.jp as well as www.tcd-fastnet.com, where the original code and samples are now being maintained.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,858
inproceedings
holthaus-etal-2016-address
How to Address Smart Homes with a Social Robot? A Multi-modal Corpus of User Interactions with an Intelligent Environment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1549/
Holthaus, Patrick and Leichsenring, Christian and Bernotat, Jasmin and Richter, Viktor and Pohling, Marian and Carlmeyer, Birte and K{\"o}ster, Norman and zu Borgsen, Sebastian Meyer and Zorn, Ren{\'e} and Schiffhauer, Birte and Engelmann, Kai Frederic and Lier, Florian and Schulz, Simon and Cimiano, Philipp and Eyssel, Friederike and Hermann, Thomas and Kummert, Franz and Schlangen, David and Wachsmuth, Sven and Wagner, Petra and Wrede, Britta and Wrede, Sebastian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3440--3446
In order to explore intuitive verbal and non-verbal interfaces in smart environments we recorded user interactions with an intelligent apartment. Besides offering various interactive capabilities itself, the apartment is also inhabited by a social robot that is available as a humanoid interface. This paper presents a multi-modal corpus that contains goal-directed actions of naive users in attempts to solve a number of predefined tasks. Alongside audio and video recordings, our dataset consists of a large amount of temporally aligned sensory data and system behavior provided by the environment and its interactive components. Non-verbal system responses such as changes in light or display contents, as well as robot and apartment utterances and gestures, serve as a rich basis for later in-depth analysis. Manual annotations provide further information about metadata like the current course of study and user behavior including the incorporated modality, all literal utterances, language features, emotional expressions, foci of attention, and addressees.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,859
inproceedings
hu-etal-2016-corpus
A Corpus of Gesture-Annotated Dialogues for Monologue-to-Dialogue Generation from Personal Narratives
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1550/
Hu, Zhichao and Dick, Michelle and Chang, Chung-Ning and Bowden, Kevin and Neff, Michael and Fox Tree, Jean and Walker, Marilyn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3447--3454
Story-telling is a fundamental and prevalent aspect of human social behavior. In the wild, stories are told conversationally in social settings, often as a dialogue and with accompanying gestures and other nonverbal behavior. This paper presents a new corpus, the Story Dialogue with Gestures (SDG) corpus, consisting of 50 personal narratives regenerated as dialogues, complete with annotations of gesture placement and accompanying gesture forms. The corpus includes dialogues generated by human annotators, gesture annotations on the human generated dialogues, videos of story dialogues generated from this representation, video clips of each gesture used in the gesture annotations, and annotations of the original personal narratives with a deep representation of story called a Story Intention Graph. Our long term goal is the automatic generation of story co-tellings as animated dialogues from the Story Intention Graph. We expect this corpus to be a useful resource for researchers interested in natural language generation, intelligent virtual agents, generation of nonverbal behavior, and story and narrative representations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,860
inproceedings
fotinea-etal-2016-multimodal
Multimodal Resources for Human-Robot Communication Modelling
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1551/
Fotinea, Stavroula{--}Evita and Efthimiou, Eleni and Koutsombogera, Maria and Dimou, Athanasia-Lida and Goulas, Theodore and Vasilaki, Kyriaki
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3455--3460
This paper reports on work related to the modelling of Human-Robot Communication on the basis of multimodal and multisensory human behaviour analysis. A primary focus in this framework of analysis is the definition of semantics of human actions in interaction, their capture and their representation in terms of behavioural patterns that, in turn, feed a multimodal human-robot communication system. Semantic analysis encompasses both oral and sign languages, as well as both verbal and non-verbal communicative signals, to achieve an effective, natural interaction between elderly users with mild walking and cognitive impairments and an assistive robotic platform.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,861
inproceedings
tolins-etal-2016-verbal
A Verbal and Gestural Corpus of Story Retellings to an Expressive Embodied Virtual Character
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1552/
Tolins, Jackson and Liu, Kris and Neff, Michael and Walker, Marilyn and Fox Tree, Jean
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3461--3468
We present a corpus of 44 human-agent verbal and gestural story retellings designed to explore whether humans would gesturally entrain to an embodied intelligent virtual agent. We used a novel data collection method where an agent presented story components in installments, which the human would then retell to the agent. At the end of the installments, the human would then retell the embodied animated agent the story as a whole. This method was designed to allow us to observe whether changes in the agent`s gestural behavior would result in human gestural changes. The agent modified its gestures over the course of the story, by starting out the first installment with gestural behaviors designed to manifest extraversion, and slowly modifying gestures to express introversion over time, or the reverse. The corpus contains the verbal and gestural transcripts of the human story retellings. The gestures were coded for type, handedness, temporal structure, spatial extent, and the degree to which the participants' gestures match those produced by the agent. The corpus illustrates the variation in expressive behaviors produced by users interacting with embodied virtual characters, and the degree to which their gestures were influenced by the agent`s dynamic changes in personality-based expressive style.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,862
inproceedings
tolins-etal-2016-multimodal
A Multimodal Motion-Captured Corpus of Matched and Mismatched Extravert-Introvert Conversational Pairs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1553/
Tolins, Jackson and Liu, Kris and Wang, Yingying and Fox Tree, Jean E. and Walker, Marilyn and Neff, Michael
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3469--3476
This paper presents a new corpus, the Personality Dyads Corpus, consisting of multimodal data for three conversations between three personality-matched, two-person dyads (a total of 9 separate dialogues). Participants were selected from a larger sample to be 0.8 of a standard deviation above or below the mean on the Big-Five Personality extraversion scale, to produce an Extravert-Extravert dyad, an Introvert-Introvert dyad, and an Extravert-Introvert dyad. Each pair carried out conversations for three different tasks. The conversations were recorded using optical motion capture for the body and data gloves for the hands. Dyads' speech was transcribed and the gestural and postural behavior was annotated with ANVIL. The released corpus includes personality profiles, ANVIL files containing speech transcriptions and the gestural annotations, and BVH files containing body and hand motion in 3D.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,863
inproceedings
lanser-etal-2016-crowdsourcing
Crowdsourcing Ontology Lexicons
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1554/
Lanser, Bettina and Unger, Christina and Cimiano, Philipp
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3477--3484
In order to make the growing amount of conceptual knowledge available through ontologies and datasets accessible to humans, NLP applications need access to information on how this knowledge can be verbalized in natural language. One way to provide this kind of information are ontology lexicons, which apart from the actual verbalizations in a given target language can provide further, rich linguistic information about them. Compiling such lexicons manually is a very time-consuming task and requires expertise both in Semantic Web technologies and lexicon engineering, as well as a very good knowledge of the target language at hand. In this paper we present an alternative approach to generating ontology lexicons by means of crowdsourcing: We use CrowdFlower to generate a small Japanese ontology lexicon for ten exemplary ontology elements from the DBpedia ontology according to a two-stage workflow, the main underlying idea of which is to turn the task of generating lexicon entries into a translation task; the starting point of this translation task is a manually created English lexicon for DBpedia. Comparison of the results to a manually created Japanese lexicon shows that the presented workflow is a viable option if an English seed lexicon is already available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,864
inproceedings
modi-etal-2016-inscript
{I}n{S}cript: Narrative texts annotated with script information
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1555/
Modi, Ashutosh and Anikina, Tatjana and Ostermann, Simon and Pinkal, Manfred
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3485--3493
This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,865
inproceedings
wanzare-etal-2016-crowdsourced
A Crowdsourced Database of Event Sequence Descriptions for the Acquisition of High-quality Script Knowledge
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1556/
Wanzare, Lilian D. A. and Zarcone, Alessandra and Thater, Stefan and Pinkal, Manfred
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3494--3501
Scripts are standardized event sequences describing typical everyday activities, which play an important role in the computational modeling of cognitive abilities (in particular for natural language processing). We present a large-scale crowdsourced collection of explicit linguistic descriptions of script-specific event sequences (40 scenarios with 100 sequences each). The corpus is enriched with crowdsourced alignment annotation on a subset of the event descriptions, to be used in future work as seed data for automatic alignment of event descriptions (for example via clustering). The event descriptions to be aligned were chosen among those expected to have the strongest corrective effect on the clustering algorithm. The alignment annotation was evaluated against a gold standard of expert annotators. The resulting database of partially-aligned script-event descriptions provides a sound empirical basis for inducing high-quality script knowledge, as well as for any task involving alignment and paraphrase detection of events.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,866
inproceedings
caselli-etal-2016-temporal
Temporal Information Annotation: Crowd vs. Experts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1557/
Caselli, Tommaso and Sprugnoli, Rachele and Inel, Oana
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3502--3509
This paper describes two sets of crowdsourcing experiments on temporal information annotation conducted on two languages, i.e., English and Italian. The first experiment, launched on the CrowdFlower platform, was aimed at classifying temporal relations given target entities. The second one, relying on the CrowdTruth metric, consisted in two subtasks: one devoted to the recognition of events and temporal expressions and one to the detection and classification of temporal relations. The outcomes of the experiments suggest that crowdsourced annotations are valuable even for a complex task like Temporal Processing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,867
inproceedings
salvetti-etal-2016-tangled
A Tangled Web: The Faint Signals of Deception in Text - Boulder Lies and Truth Corpus ({BLT}-{C})
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1558/
Salvetti, Franco and Lowe, John B. and Martin, James H.
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3510--3517
We present an approach to creating corpora for use in detecting deception in text, including a discussion of the challenges peculiar to this task. Our approach is based on soliciting several types of reviews from writers and was implemented using Amazon Mechanical Turk. We describe the multi-dimensional corpus of reviews built using this approach, available free of charge from LDC as the Boulder Lies and Truth Corpus (BLT-C). Challenges for both corpus creation and deception detection include the fact that human performance on the task is typically at chance, that the signal is faint, that paid writers such as turkers are sometimes deceptive, and that deception is a complex human behavior; manifestations of deception depend on details of domain, intrinsic properties of the deceiver (such as education, linguistic competence, and the nature of the intention), and specifics of the deceptive act (e.g., lying vs. fabricating). To overcome the inherent lack of ground truth, we have developed a set of semi-automatic techniques to ensure corpus validity. We present some preliminary results on the task of deception detection which suggest that the BLT-C is an improvement in the quality of resources available for this task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,868
inproceedings
tiedemann-2016-finding
Finding Alternative Translations in a Large Corpus of Movie Subtitles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1559/
Tiedemann, J{\"org
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3518--3522
OpenSubtitles.org provides a large collection of user contributed subtitles in various languages for movies and TV programs. Subtitle translations are valuable resources for cross-lingual studies and machine translation research. A less explored feature of the collection is the inclusion of alternative translations, which can be very useful for training paraphrase systems or collecting multi-reference test suites for machine translation. However, differences in translation may also be due to misspellings, incomplete or corrupt data files, or wrongly aligned subtitles. This paper reports our efforts in recognising and classifying alternative subtitle translations with language independent techniques. We use time-based alignment with lexical re-synchronisation techniques and BLEU score filters and sort alternative translations into categories using edit distance metrics and heuristic rules. Our approach produces large numbers of sentence-aligned translation alternatives for over 50 languages provided via the OPUS corpus collection.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,869
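The tiedemann-2016-finding abstract above sorts alternative subtitle translations into categories using edit distance metrics and heuristic rules. Below is a minimal sketch of that idea in Python; the normalized-distance thresholds and category names are invented for illustration and are not the paper's actual rules.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def classify_pair(s1, s2):
    """Bucket two aligned alternative translations (illustrative thresholds)."""
    dist = levenshtein(s1, s2)
    norm = dist / max(len(s1), len(s2), 1)
    if dist == 0:
        return "identical"
    if norm < 0.1:
        return "minor-variant"          # e.g. spelling or punctuation edits
    if norm < 0.5:
        return "paraphrase-candidate"   # worth keeping as an alternative
    return "divergent"                  # possible misalignment or corruption

print(classify_pair("I saw him yesterday.", "I saw him yesterday!"))
print(classify_pair("I saw him yesterday.", "Yesterday I met him."))
```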
inproceedings
etchegoyhen-etal-2016-exploiting
Exploiting a Large Strongly Comparable Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1560/
Etchegoyhen, Thierry and Azpeitia, Andoni and P{\'e}rez, Naiara
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3523--3529
This article describes a large comparable corpus for Basque and Spanish and the methods employed to build a parallel resource from the original data. The EITB corpus, a strongly comparable corpus in the news domain, is to be shared with the research community, as an aid for the development and testing of methods in comparable corpora exploitation, and as a basis for the improvement of data-driven machine translation systems for this language pair. Competing approaches were explored for the alignment of comparable segments in the corpus, resulting in the design of a simple method which outperformed a state-of-the-art method on the corpus test sets. The method we present is highly portable, computationally efficient, and significantly reduces deployment work, a welcome result for the exploitation of comparable corpora.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,870
inproceedings
ziemski-etal-2016-united
The {U}nited {N}ations Parallel Corpus v1.0
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1561/
Ziemski, Micha{\l} and Junczys-Dowmunt, Marcin and Pouliquen, Bruno
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3530--3534
This paper describes the creation process and statistics of the official United Nations Parallel Corpus, the first parallel corpus composed from United Nations documents published by the original data creator. The parallel corpus presented consists of manually translated UN documents from the last 25 years (1990 to 2014) for the six official UN languages, Arabic, Chinese, English, French, Russian, and Spanish. The corpus is freely available for download under a liberal license. Apart from the pairwise aligned documents, a fully aligned subcorpus for the six official UN languages is distributed. We provide baseline BLEU scores of our Moses-based SMT systems trained with the full data of language pairs involving English and for all possible translation directions of the six-way subcorpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,871
inproceedings
bentivogli-etal-2016-wags
{WAGS}: A Beautiful {E}nglish-{I}talian Benchmark Supporting Word Alignment Evaluation on Rare Words
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1562/
Bentivogli, Luisa and Cettolo, Mauro and Farajian, M. Amin and Federico, Marcello
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3535--3542
This paper presents WAGS (Word Alignment Gold Standard), a novel benchmark which allows extensive evaluation of WA tools on out-of-vocabulary (OOV) and rare words. WAGS is a subset of the Common Test section of the Europarl English-Italian parallel corpus, and is specifically tailored to OOV and rare words. WAGS is composed of 6,715 sentence pairs containing 11,958 occurrences of OOV and rare words up to frequency 15 in the Europarl Training set (5,080 English words and 6,878 Italian words), representing almost 3{\%} of the whole text. Since WAGS is focused on OOV/rare words, manual alignments are provided for these words only, and not for the whole sentences. Two off-the-shelf word aligners have been evaluated on WAGS, and results have been compared to those obtained on an existing benchmark tailored to full text alignment. The results obtained confirm that WAGS is a valuable resource, which allows a statistically sound evaluation of WA systems' performance on OOV and rare words, as well as extensive data analyses. WAGS is publicly released under a Creative Commons Attribution license.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,872
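The bentivogli-etal-2016-wags abstract above targets OOV words and words with training-set frequency up to 15. Below is a minimal sketch, assuming already-tokenized data, of how such a target word list can be derived; the variable names and toy data are illustrative.

```python
from collections import Counter

def rare_and_oov(train_tokens, test_tokens, max_freq=15):
    """Split the test vocabulary into OOV words (never seen in training)
    and rare words (training frequency <= max_freq)."""
    freq = Counter(train_tokens)
    test_vocab = set(test_tokens)
    oov = {w for w in test_vocab if w not in freq}
    rare = {w for w in test_vocab if 0 < freq[w] <= max_freq}
    return oov, rare

train = "the cat sat on the mat".split()
test = "the zygote sat on a mat".split()
oov, rare = rare_and_oov(train, test, max_freq=15)
print("OOV:", sorted(oov))    # words absent from the training data
print("rare:", sorted(rare))  # low-frequency training words
```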
inproceedings
tamchyna-barancikova-2016-manual
Manual and Automatic Paraphrases for {MT} Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1563/
Tamchyna, Ale{\v{s}} and Baran{\v{c}}{\'i}kov{\'a}, Petra
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3543--3548
Paraphrasing of reference translations has been shown to improve the correlation with human judgements in automatic evaluation of machine translation (MT) outputs. In this work, we present a new dataset for evaluating English-Czech translation based on automatic paraphrases. We compare this dataset with an existing set of manually created paraphrases and find that even automatic paraphrases can improve MT evaluation. We also propose and evaluate several criteria for selecting suitable reference translations from a larger set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,873
inproceedings
augustinus-etal-2016-poly
Poly-{G}r{ETEL}: Cross-Lingual Example-based Querying of Syntactic Constructions
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1564/
Augustinus, Liesbeth and Vandeghinste, Vincent and Vanallemeersch, Tom
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3549--3554
We present Poly-GrETEL, an online tool which enables syntactic querying in parallel treebanks, based on the monolingual GrETEL environment. We provide online access to the Europarl parallel treebank for Dutch and English, allowing users to query the treebank using either an XPath expression or an example sentence in order to look for similar constructions. We provide automatic alignments between the nodes. By combining example-based query functionality with node alignments, we limit the need for users to be familiar with the query language and the structure of the trees in the source and target language, thus facilitating the use of parallel corpora for comparative linguistics and translation studies.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,874
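The augustinus-etal-2016-poly abstract above describes querying treebanks with XPath expressions. As a toy illustration of the querying idea only (this is not the GrETEL codebase), the snippet below runs a limited-XPath query over a hand-written dependency-style tree using the Python standard library; the `rel`/`pos` attribute names loosely imitate treebank annotation and are assumptions.

```python
import xml.etree.ElementTree as ET

# A hand-written toy tree; the annotation scheme is illustrative only.
TREE = """
<node rel="top" cat="smain">
  <node rel="su" pos="noun" word="treebanks"/>
  <node rel="hd" pos="verb" word="support"/>
  <node rel="obj1" cat="np">
    <node rel="mod" pos="adj" word="syntactic"/>
    <node rel="hd" pos="noun" word="queries"/>
  </node>
</node>
"""

root = ET.fromstring(TREE)

# ElementTree supports a useful XPath subset: find every noun node.
for node in root.findall(".//node[@pos='noun']"):
    print(node.get("rel"), node.get("word"))
```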
inproceedings
dyvik-etal-2016-norgrambank
{N}or{G}ram{B}ank: A {\textquoteleft}Deep' Treebank for {N}orwegian
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1565/
Dyvik, Helge and Meurer, Paul and Ros{\'e}n, Victoria and De Smedt, Koenraad and Haugereid, Petter and Losnegaard, Gyri Sm{\o}rdal and Lyse, Gunn Inger and Thunes, Martha
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3555--3562
We present NorGramBank, a treebank for Norwegian with highly detailed LFG analyses. It is one of many treebanks made available through the INESS treebanking infrastructure. NorGramBank was constructed as a parsebank, i.e. by automatically parsing a corpus, using the wide coverage grammar NorGram. One part consisting of 350,000 words has been manually disambiguated using computer-generated discriminants. A larger part of 50 M words has been stochastically disambiguated. The treebank is dynamic: by global reparsing at certain intervals it is kept compatible with the latest versions of the grammar and the lexicon, which are continually further developed in interaction with the annotators. A powerful query language, INESS Search, has been developed for search across formalisms in the INESS treebanks, including LFG c- and f-structures. Evaluation shows that the grammar provides about 85{\%} of randomly selected sentences with good analyses. Agreement among the annotators responsible for manual disambiguation is satisfactory, but also suggests desirable simplifications of the grammar.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,875
inproceedings
ribeyre-etal-2016-accurate
Accurate Deep Syntactic Parsing of Graphs: The Case of {F}rench
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1566/
Ribeyre, Corentin and Villemonte de la Clergerie, Eric and Seddah, Djam{\'e}
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3563--3568
Parsing predicate-argument structures in a deep syntax framework requires graphs to be predicted. Argument structures represent a higher level of abstraction than the syntactic ones and are thus more difficult to predict, even for parsing models that are highly accurate on surface syntax. In this paper we investigate deep syntax parsing, using a French data set (Ribeyre et al., 2014a). We demonstrate that the use of topologically different types of syntactic features, such as dependencies, tree fragments, spines or syntactic paths, brings a much needed context to the parser. Our higher-order parsing model, which thus gains up to 4 points, establishes the state of the art for parsing French deep syntactic structures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,876
inproceedings
hawwari-etal-2016-explicit
Explicit Fine grained Syntactic and Semantic Annotation of the Idafa Construction in {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1567/
Hawwari, Abdelati and Attia, Mohammed and Ghoneim, Mahmoud and Diab, Mona
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3569--3577
Idafa in traditional Arabic grammar is an umbrella construction that covers several phenomena including what is expressed in English as noun-noun compounds and Saxon and Norman genitives. Additionally, Idafa participates in some other constructions, such as quantifiers, quasi-prepositions, and adjectives. Identifying the various types of the Idafa construction (IC) is of importance to Natural Language Processing (NLP) applications. Noun-noun compounds exhibit special behavior in most languages, impacting their semantic interpretation. Hence, distinguishing them could have an impact on downstream NLP applications. The most comprehensive syntactic representation of the Arabic language is the LDC Arabic Treebank (ATB). In the ATB, ICs are not explicitly labeled and, furthermore, there is no distinction between ICs of noun-noun relations and other traditional ICs. Hence, we devise a detailed syntactic and semantic typification process of the IC phenomenon in Arabic. We target the ATB as a platform for this classification. We render the ATB annotated with explicit IC labels, with further semantic characterization that is useful for syntactic, semantic and cross-language processing. Our typification of IC comprises 3 main syntactic IC types: FIC, GIC, and TIC, which are further divided into 10 syntactic subclasses. The TIC group is further classified into semantic relations. We devise a method for automatic IC labeling and compare its yield against the CATiB treebank. Our evaluation shows that we achieve the same level of accuracy, but with the additional fine-grained classification into the various syntactic and semantic types.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,877
inproceedings
schumann-fischer-2016-compasses
Compasses, Magnets, Water Microscopes: Annotation of Terminology in a Diachronic Corpus of Scientific Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1568/
Schumann, Anne-Kathrin and Fischer, Stefan
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3578--3585
The specialised lexicon belongs to the most prominent attributes of specialised writing: Terms function as semantically dense encodings of specialised concepts, which, in the absence of terms, would require lengthy explanations and descriptions. In this paper, we argue that terms are the result of diachronic processes on both the semantic and the morpho-syntactic level. Very little is known about these processes. We therefore present a corpus annotation project aiming at revealing how terms are coined and how they evolve to fit their function as semantically and morpho-syntactically dense encodings of specialised knowledge. The scope of this paper is two-fold: Firstly, we outline our methodology for annotating terminology in a diachronic corpus of scientific publications. Moreover, we provide a detailed analysis of our annotation results and suggest methods for improving the accuracy of annotations in a setting as difficult as ours. Secondly, we present results of a pilot study based on the annotated terms. The results suggest that terms in older texts are linguistically relatively simple units that are hard to distinguish from the lexicon of general language. We believe that this supports our hypothesis that terminology undergoes diachronic processes of densification and specialisation.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,878
inproceedings
diewald-etal-2016-korap
{K}or{AP} Architecture {\textemdash} Diving in the Deep Sea of Corpus Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1569/
Diewald, Nils and Hanl, Michael and Margaretha, Eliza and Bingel, Joachim and Kupietz, Marc and Ba{\'n}ski, Piotr and Witt, Andreas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3586--3591
KorAP is a corpus search and analysis platform, developed at the Institute for the German Language (IDS). It supports very large corpora with multiple annotation layers, multiple query languages, and complex licensing scenarios. KorAP`s design aims to be scalable, flexible, and sustainable to serve the German Reference Corpus DeReKo for at least the next decade. To meet these requirements, we have adopted a highly modular microservice-based architecture. This paper outlines our approach: An architecture consisting of small components that are easy to extend, replace, and maintain. The components include a search backend, a user and corpus license management system, and a web-based user frontend. We also describe a general corpus query protocol used by all microservices for internal communications. KorAP is open source, licensed under BSD-2, and available on GitHub.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,879
inproceedings
grouin-2016-text
Text Segmentation of Digitized Clinical Texts
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1570/
Grouin, Cyril
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3592--3599
In this paper, we present experiments on recovering the original two-column page layout from layout-damaged digitized files. We designed several CRF-based approaches, either to identify the column separator or to classify each token of each line into the left or right column. We achieved our best results with a model trained on homogeneous corpora (only files composed of 2 columns) when classifying each token into the left or right column (overall F-measure of 0.968). Our experiments show it is possible to recover the original column layout of digitized documents with high-quality results.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,880
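The grouin-2016-text abstract above classifies each token of a line into the left or right column with CRFs. Below is a minimal sketch of that formulation, assuming the third-party sklearn-crfsuite package; the feature set and toy clinical-note lines are invented for illustration and do not reproduce the paper's features.

```python
# pip install sklearn-crfsuite   (assumed dependency, not the paper's tooling)
import sklearn_crfsuite

def token_features(line_tokens, i, line_width=80):
    """Features for token i of a line: horizontal position and gaps
    (an illustrative feature set, not the paper's)."""
    tok, start = line_tokens[i]
    prev_end = line_tokens[i - 1][1] + len(line_tokens[i - 1][0]) if i > 0 else 0
    return {
        "rel_pos": start / line_width,          # relative horizontal position
        "gap_before": float(start - prev_end),  # whitespace gap to the left
        "is_first": float(i == 0),
        "is_last": float(i == len(line_tokens) - 1),
    }

# Toy training data: (token, start offset) per line, with gold column labels.
lines = [
    [("Patient", 0), ("stable", 8), ("Plan:", 45), ("rest", 51)],
    [("No", 0), ("fever", 3), ("Follow", 45), ("up", 52)],
]
labels = [["L", "L", "R", "R"], ["L", "L", "R", "R"]]

X = [[token_features(line, i) for i in range(len(line))] for line in lines]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))  # should recover the left/right segmentation
```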
inproceedings
acar-etal-2016-turkish
A {T}urkish Database for Psycholinguistic Studies Based on Frequency, Age of Acquisition, and Imageability
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1571/
Acar, Elif Ahsen and Zeyrek, Deniz and Kurfal{\i}, Murathan and Boz{\c{s}}ahin, Cem
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3600--3606
This study primarily aims to build a Turkish psycholinguistic database including three variables: word frequency, age of acquisition (AoA), and imageability, where AoA and imageability information are limited to nouns. We used a corpus-based approach to obtain information about the AoA variable. We built two corpora: a child literature corpus (CLC) including 535 books written for 3-12 years old children, and a corpus of transcribed children`s speech (CSC) at ages 1;4-4;8. A comparison between the word frequencies of CLC and CSC gave positive correlation results, suggesting the usability of the CLC to extract AoA information. We assumed that frequent words of the CLC would correspond to early acquired words whereas frequent words of a corpus of adult language would correspond to late acquired words. To validate AoA results from our corpus-based approach, a rated AoA questionnaire was conducted on adults. Imageability values were collected via a different questionnaire conducted on adults. We conclude that it is possible to deduce AoA information for high frequency words with the corpus-based approach. The results about low frequency words were inconclusive, which is attributed to the fact that corpus-based AoA information is affected by the strong negative correlation between corpus frequency and rated AoA.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,881
inproceedings
remus-biemann-2016-domain
Domain-Specific Corpus Expansion with Focused Webcrawling
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1572/
Remus, Steffen and Biemann, Chris
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3607--3611
This work presents a straightforward method for extending or creating in-domain web corpora by focused webcrawling. The focused webcrawler uses statistical N-gram language models to estimate the relatedness of documents and weblinks and needs as input only N-grams or plain texts of a predefined domain and seed URLs as starting points. Two experiments demonstrate that our focused crawler is able to stay focused in domain and language. The first experiment shows that the crawler stays in a focused domain, the second experiment demonstrates that language models trained on focused crawls obtain better perplexity scores on in-domain corpora. We distribute the focused crawler as open source software.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,882
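The remus-biemann-2016-domain abstract above prioritizes crawl candidates with N-gram language models. Below is a minimal, offline sketch of the frontier-prioritization idea over an in-memory stand-in for the web, so it runs without network access; the bigram-overlap score is a crude substitute for proper language-model scoring, and all page contents are invented.

```python
import heapq

def bigrams(text):
    """Word bigrams of a lowercased text."""
    toks = text.lower().split()
    return [" ".join(toks[i:i + 2]) for i in range(len(toks) - 1)]

def relatedness(text, domain_model):
    """Fraction of the text's bigrams seen in the in-domain model
    (a crude stand-in for N-gram LM scoring)."""
    grams = bigrams(text)
    return sum(g in domain_model for g in grams) / max(len(grams), 1)

# Offline stand-in for the web: url -> (page text, outgoing links).
WEB = {
    "seed": ("machine translation of clinical text", ["a", "b"]),
    "a": ("clinical text mining and translation quality", ["c"]),
    "b": ("cake recipes and baking tips", []),
    "c": ("translation quality estimation for clinical text", []),
}
domain_model = set(bigrams("clinical text machine translation quality estimation"))

frontier = [(-1.0, "seed")]          # max-heap via negated scores
seen, crawled = {"seed"}, []
while frontier:
    _, url = heapq.heappop(frontier)
    text, links = WEB[url]
    score = relatedness(text, domain_model)
    if score > 0:                    # stay focused: drop off-domain pages
        crawled.append(url)
        for link in links:
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))  # inherit parent score
print(crawled)  # the off-domain page "b" is visited but never kept
```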
inproceedings
ljubesic-etal-2016-corpus
Corpus-Based Diacritic Restoration for {S}outh {S}lavic Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1573/
Ljube{\v{s}}i{\'c}, Nikola and Erjavec, Toma{\v{z}} and Fi{\v{s}}er, Darja
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3612--3616
In computer-mediated communication, users of Latin-based scripts often omit diacritics when writing. Such text is typically easily understandable to humans but very difficult for computational processing because many words become ambiguous or unknown. Letter-level approaches to diacritic restoration generalise better and do not require a lot of training data, but word-level approaches tend to yield better results. However, they typically rely on a lexicon, which is an expensive resource, not covering non-standard forms, and often not available for less-resourced languages. In this paper we present diacritic restoration models that are trained on easy-to-acquire corpora. We test three different types of corpora (Wikipedia, general web, Twitter) for three South Slavic languages (Croatian, Serbian and Slovene) and evaluate them on two types of text: standard (Wikipedia) and non-standard (Twitter). The proposed approach considerably outperforms charlifter, so far the only open source tool available for this task. We make the best performing systems freely available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,883
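The ljubesic-etal-2016-corpus abstract above trains word-level diacritic restoration from easy-to-acquire corpora. Below is a minimal sketch of the word-level core of such an approach: map each diacritic-stripped form to its most frequent diacritized variant in a training corpus. The tiny Croatian word sample is illustrative, and real systems (like the paper's) handle unseen forms far more carefully.

```python
import unicodedata
from collections import Counter, defaultdict

def strip_diacritics(word):
    """Remove combining marks: 'već' -> 'vec'."""
    decomposed = unicodedata.normalize("NFD", word)
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def train(corpus_tokens):
    """For each stripped form, count the diacritized variants observed."""
    table = defaultdict(Counter)
    for tok in corpus_tokens:
        table[strip_diacritics(tok)][tok] += 1
    return table

def restore(tokens, table):
    """Pick the most frequent diacritized variant per token,
    falling back to the input token for unseen forms."""
    return [table[t].most_common(1)[0][0] if t in table else t
            for t in tokens]

corpus = "već čovjek šuma već žena čovjek".split()  # toy training data
table = train(corpus)
print(restore("vec covjek suma".split(), table))
# -> ['već', 'čovjek', 'šuma']
```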
inproceedings
couto-vale-etal-2016-automatic
Automatic Recognition of Linguistic Replacements in Text Series Generated from Keystroke Logs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1574/
Couto-Vale, Daniel and Neumann, Stella and Niemietz, Paula
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3617--3623
This paper introduces a toolkit used for the purpose of detecting replacements of different grammatical and semantic structures in ongoing text production logged as a chronological series of computer interaction events (so-called keystroke logs). The specific case we use involves human translations where replacements can be indicative of translator behaviour that leads to specific features of translations that distinguish them from non-translated texts. The toolkit uses a novel CCG chart parser customised so as to recognise grammatical words independently of space and punctuation boundaries. On the basis of the linguistic analysis, structures in different versions of the target text are compared and classified as potential equivalents of the same source text segment by {\textquoteleft}equivalence judges'. In that way, replacements of grammatical and semantic structures can be detected. Beyond the specific task at hand the approach will also be useful for the analysis of other types of spaceless text such as Twitter hashtags and texts in agglutinative or spaceless languages like Finnish or Chinese.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,884
inproceedings
manishina-etal-2016-automatic
Automatic Corpus Extension for Data-driven Natural Language Generation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1575/
Manishina, Elena and Jabaian, Bassam and Huet, St{\'e}phane and Lef{\`e}vre, Fabrice
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3624--3631
As data-driven approaches started to make their way into the Natural Language Generation (NLG) domain, the need for automation of corpus building and extension became apparent. Corpus creation and extension in the data-driven NLG domain have traditionally involved manual paraphrasing performed either by a group of experts or through crowd-sourcing. Building the training corpora manually is a costly enterprise which requires a lot of time and human resources. We propose to automate the process of corpus extension by integrating automatically obtained synonyms and paraphrases. Our methodology allowed us to significantly increase the size of the training corpus and its level of variability (the number of distinct tokens and specific syntactic structures). Our extension solutions are fully automatic and require only some initial validation. The human evaluation results confirm that in many cases native speakers favor the outputs of the model built on the extended corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,885
inproceedings
htait-etal-2016-bilbo
Bilbo-Val: Automatic Identification of Bibliographical Zone in Papers
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1576/
Htait, Amal and Fournier, Sebastien and Bellot, Patrice
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3632--3636
In this paper, we present the automatic annotation of the bibliographical references zone in papers and articles in XML/TEI format. Our work proceeds in two phases: first, we use machine learning to classify paragraphs in papers as bibliographical or non-bibliographical, by means of a model initially created to differentiate between footnotes that do and do not contain bibliographical references. This capability builds on BILBO, an open-source software package for the automatic annotation of bibliographic references. We also suggest some methods to minimize the margin of error. Second, we propose an algorithm to find the largest list of bibliographical references in the article. The improvements applied to our model increase its efficiency, with an accuracy of 85.89. In testing, we achieve an average success rate of 72.23{\%} in detecting the bibliographical references zone.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,886
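The htait-etal-2016-bilbo abstract above first classifies paragraphs as bibliographical or not, and then locates the largest list of references. Below is a minimal sketch of one way to do the second step, assuming boolean per-paragraph classifier output; tolerating small gaps inside a run is an illustrative choice, not the paper's algorithm.

```python
def largest_bib_zone(flags, max_gap=1):
    """Return (start, end) indices of the longest run of paragraphs flagged
    as bibliographical, tolerating up to `max_gap` consecutive negatives
    inside the run (to absorb classifier noise)."""
    best = (-1, None, None)           # (run length, start, end)
    start, gap = None, 0
    for i, is_bib in enumerate(flags + [False] * (max_gap + 1)):
        if is_bib:
            start = i if start is None else start
            gap = 0
        elif start is not None:
            gap += 1
            if gap > max_gap:         # run has ended; record it
                end = i - gap         # last index inside the run
                if end - start > best[0]:
                    best = (end - start, start, end)
                start, gap = None, 0
    return best[1], best[2]

# Paragraph-level classifier output for a toy article.
flags = [False, True, False, False, True, True, False, True, True, True]
print(largest_bib_zone(flags))  # -> (4, 9): the trailing reference list
```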
inproceedings
zaghouani-etal-2016-guidelines
Guidelines and Framework for a Large Scale {A}rabic Diacritized Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1577/
Zaghouani, Wajdi and Bouamor, Houda and Hawwari, Abdelati and Diab, Mona and Obeid, Ossama and Ghoneim, Mahmoud and Alqahtani, Sawsan and Oflazer, Kemal
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3637--3643
This paper presents the annotation guidelines developed as part of an effort to create a large scale manually diacritized corpus for various Arabic text genres. The target size of the annotated corpus is 2 million words. We summarize the guidelines and describe issues encountered during the training of the annotators. We also discuss the challenges posed by the complexity of the Arabic language and how they are addressed. Finally, we present the diacritization annotation procedure and detail the quality of the resulting annotations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,887
inproceedings
temnikova-etal-2016-applying
Applying the Cognitive Machine Translation Evaluation Approach to {A}rabic
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1578/
Temnikova, Irina and Zaghouani, Wajdi and Vogel, Stephan and Habash, Nizar
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3644--3651
The goal of the cognitive machine translation (MT) evaluation approach is to build classifiers which assign post-editing effort scores to new texts. The approach helps estimate fair compensation for post-editors in the translation industry by evaluating the cognitive difficulty of post-editing MT output. The approach counts the number of errors classified in different categories on the basis of how much cognitive effort they require in order to be corrected. In this paper, we present the results of applying an existing cognitive evaluation approach to Modern Standard Arabic (MSA). We provide a comparison of the number of errors and categories of errors in three MSA texts of different MT quality (without any language-specific adaptation), as well as a comparison between MSA texts and texts from three Indo-European languages (Russian, Spanish, and Bulgarian), taken from a previous experiment. The results show how the error distributions change passing from the MSA texts of worse MT quality to MSA texts of better MT quality, as well as a similarity in distinguishing the texts of better MT quality for all four languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,888
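The temnikova-etal-2016-applying abstract above counts MT errors in categories defined by how much cognitive effort their correction requires. Below is a minimal sketch of such a weighted aggregation; the category names and weights are invented for illustration and do not reproduce the paper's classification.

```python
# Hypothetical effort weights per error category (illustrative only).
EFFORT_WEIGHTS = {
    "spelling": 1,        # mechanical fix, low cognitive load
    "word_form": 2,
    "word_order": 3,
    "mistranslation": 4,  # requires re-reading the source
    "omission": 5,        # requires recovering missing content
}

def effort_score(error_counts, weights=EFFORT_WEIGHTS):
    """Weighted sum of per-category error counts for one text."""
    return sum(weights.get(cat, 0) * n for cat, n in error_counts.items())

doc_errors = {"spelling": 3, "word_order": 2, "omission": 1}
print(effort_score(doc_errors))  # 3*1 + 2*3 + 1*5 = 14
```

Texts of worse MT quality would then show both higher counts and a shift toward the heavier categories, matching the distributional changes the abstract reports.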
inproceedings
scarton-specia-2016-reading
A Reading Comprehension Corpus for Machine Translation Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1579/
Scarton, Carolina and Specia, Lucia
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
3652--3658
Effectively assessing the output of Natural Language Processing tasks is a challenge for research in the area. In the case of Machine Translation (MT), automatic metrics are usually preferred over human evaluation, given time and budget constraints. However, traditional automatic metrics (such as BLEU) are not reliable for absolute quality assessment of documents, often producing similar scores for documents translated by the same MT system. For scenarios where absolute labels are necessary for building models, such as document-level Quality Estimation, these metrics cannot be fully trusted. In this paper, we introduce a corpus of reading comprehension tests based on machine translated documents, where we evaluate documents based on answers to questions by fluent speakers of the target language. We describe the process of creating such a resource, the experiment design and agreement between the test takers. Finally, we discuss ways to convert the reading comprehension test into document-level quality scores.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
60,889