Dataset schema (38 columns; string columns report either the number of distinct classes or the min/max value length):
- entry_type: string, 4 classes
- citation_key: string, length 10 to 110
- title: string, length 6 to 276
- editor: string, 723 classes
- month: string, 69 classes
- year: date, 1963-01-01 to 2022-01-01
- address: string, 202 classes
- publisher: string, 41 classes
- url: string, length 34 to 62
- author: string, length 6 to 2.07k
- booktitle: string, 861 classes
- pages: string, length 1 to 12
- abstract: string, length 302 to 2.4k
- journal: string, 5 classes
- volume: string, 24 classes
- doi: string, length 20 to 39
- n: string, 3 classes
- wer: string, 1 class
- uas: null
- language: string, 3 classes
- isbn: string, 34 classes
- recall: null
- number: string, 8 classes
- a: null
- b: null
- c: null
- k: null
- f1: string, 4 classes
- r: string, 2 classes
- mci: string, 1 class
- p: string, 2 classes
- sd: string, 1 class
- female: string, 0 classes
- m: string, 0 classes
- food: string, 1 class
- f: string, 1 class
- note: string, 20 classes
- __index_level_0__: int64, 22k to 106k
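The rows of this dataset are flat value lists in the column order listed above. A minimal sketch of pairing such a list with the column names (the `COLUMNS` constant mirrors the schema listing; the `row_to_dict` helper is illustrative, not part of the dataset):

```python
# Column order of the flattened dataset rows, taken from the schema listing.
COLUMNS = [
    "entry_type", "citation_key", "title", "editor", "month", "year",
    "address", "publisher", "url", "author", "booktitle", "pages",
    "abstract", "journal", "volume", "doi", "n", "wer", "uas", "language",
    "isbn", "recall", "number", "a", "b", "c", "k", "f1", "r", "mci", "p",
    "sd", "female", "m", "food", "f", "note", "__index_level_0__",
]

def row_to_dict(values):
    """Pair a flat list of 38 values with the column names, dropping nulls."""
    if len(values) != len(COLUMNS):
        raise ValueError(f"expected {len(COLUMNS)} values, got {len(values)}")
    return {c: v for c, v in zip(COLUMNS, values) if v is not None}
```

In the records below, most of the metric columns (journal through note) are null, so a row collapses to the dozen or so bibliographic fields plus the integer index.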
@inproceedings{habash-etal-2016-exploiting,
    title = "Exploiting {A}rabic Diacritization for High Quality Automatic Annotation",
    author = "Habash, Nizar and Shahrour, Anas and Al-Khalil, Muhamed",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1681/",
    pages = "4298--4304",
    abstract = "We present a novel technique for Arabic morphological annotation. The technique utilizes diacritization to produce morphological annotations of quality comparable to human annotators. Although Arabic text is generally written without diacritics, diacritization is already available for large corpora of Arabic text in several genres. Furthermore, diacritization can be generated at a low cost for new text as it does not require specialized training beyond what educated Arabic typists know. The basic approach is to enrich the input to a state-of-the-art Arabic morphological analyzer with word diacritics (full or partial) to enhance its performance. When applied to fully diacritized text, our approach produces annotations with an accuracy of over 97{\%} on lemma, part-of-speech, and tokenization combined.",
}
% __index_level_0__: 60,991
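A record in this shape maps straightforwardly back to BibTeX. A minimal serialization sketch, assuming a plain dict row with the schema's column names (the `to_bibtex` helper and its field order are illustrative, not part of the dataset):

```python
# Fields emitted in conventional BibTeX order; null/absent columns are skipped.
FIELD_ORDER = ["title", "author", "editor", "booktitle", "month", "year",
               "address", "publisher", "url", "pages", "abstract"]

def to_bibtex(row):
    """Render one dataset row (a dict of non-null columns) as a BibTeX entry."""
    lines = [f"@{row['entry_type']}{{{row['citation_key']},"]
    for field in FIELD_ORDER:
        value = row.get(field)
        if value:
            lines.append(f'    {field} = "{value}",')
    lines.append("}")
    return "\n".join(lines)

# Usage with a subset of the fields from the record above:
entry = to_bibtex({
    "entry_type": "inproceedings",
    "citation_key": "habash-etal-2016-exploiting",
    "title": "Exploiting {A}rabic Diacritization for High Quality Automatic Annotation",
    "author": "Habash, Nizar and Shahrour, Anas and Al-Khalil, Muhamed",
    "year": "2016",
    "pages": "4298--4304",
})
```

Note that `month = may` in real ACL Anthology exports is an unquoted BibTeX month macro; the sketch above quotes every field for simplicity.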
@inproceedings{kabashi-proisl-2016-proposal,
    title = "A Proposal for a Part-of-Speech Tagset for the {A}lbanian Language",
    author = "Kabashi, Besim and Proisl, Thomas",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1682/",
    pages = "4305--4310",
    abstract = "Part-of-speech tagging is a basic step in Natural Language Processing that is often essential. Labeling the word forms of a text with fine-grained word-class information adds new value to it and can be a prerequisite for downstream processes like a dependency parser. Corpus linguists and lexicographers also benefit greatly from the improved search options that are available with tagged data. The Albanian language has some properties that pose difficulties for the creation of a part-of-speech tagset. In this paper, we discuss those difficulties and present a proposal for a part-of-speech tagset that can adequately represent the underlying linguistic phenomena.",
}
% __index_level_0__: 60,992
@inproceedings{outahajala-rosso-2016-using,
    title = "Using a Small Lexicon with {CRF}s Confidence Measure to Improve {POS} Tagging Accuracy",
    author = "Outahajala, Mohamed and Rosso, Paolo",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1683/",
    pages = "4311--4315",
    abstract = "Like most of the languages which have only recently started being investigated for the Natural Language Processing (NLP) tasks, Amazigh lacks annotated corpora and tools and still suffers from the scarcity of linguistic tools and resources. The main aim of this paper is to present a new part-of-speech (POS) tagger based on a new Amazigh tag set (AMTS) composed of 28 tags. In line with our goal we have trained Conditional Random Fields (CRFs) to build a POS tagger for the Amazigh language. We have used the 10-fold technique to evaluate and validate our approach. The CRFs 10 folds average level is 87.95{\%} and the best fold level result is 91.18{\%}. In order to improve this result, we have gathered a set of about 8k words with their POS tags. The collected lexicon was used with CRFs confidence measure in order to have a more accurate POS-tagger. Hence, we have obtained a better performance of 93.82{\%}.",
}
% __index_level_0__: 60,993
@inproceedings{schulz-kuhn-2016-learning,
    title = "Learning from Within? Comparing {P}o{S} Tagging Approaches for Historical Text",
    author = "Schulz, Sarah and Kuhn, Jonas",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1684/",
    pages = "4316--4322",
    abstract = "In this paper, we investigate unsupervised and semi-supervised methods for part-of-speech (PoS) tagging in the context of historical German text. We locate our research in the context of Digital Humanities where the non-canonical nature of text causes issues facing an Natural Language Processing world in which tools are mainly trained on standard data. Data deviating from the norm requires tools adjusted to this data. We explore to which extend the availability of such training material and resources related to it influences the accuracy of PoS tagging. We investigate a variety of algorithms including neural nets, conditional random fields and self-learning techniques in order to find the best-fitted approach to tackle data sparsity. Although methods using resources from related languages outperform weakly supervised methods using just a few training examples, we can still reach a promising accuracy with methods abstaining additional resources.",
}
% __index_level_0__: 60,994
@inproceedings{da-costa-bond-2016-wow,
    title = "Wow! What a Useful Extension! Introducing Non-Referential Concepts to {W}ordnet",
    author = "Da Costa, Luis Morgado and Bond, Francis",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1685/",
    pages = "4323--4328",
    abstract = "In this paper we present the ongoing efforts to expand the depth and breath of the Open Multilingual Wordnet coverage by introducing two new classes of non-referential concepts to wordnet hierarchies: interjections and numeral classifiers. The lexical semantic hierarchy pioneered by Princeton Wordnet has traditionally restricted its coverage to referential and contentful classes of words: such as nouns, verbs, adjectives and adverbs. Previous efforts have been employed to enrich wordnet resources including, for example, the inclusion of pronouns, determiners and quantifiers within their hierarchies. Following similar efforts, and motivated by the ongoing semantic annotation of the NTU-Multilingual Corpus, we decided that the four traditional classes of words present in wordnets were too restrictive. Though non-referential, interjections and classifiers possess interesting semantics features that can be well captured by lexical resources like wordnets. In this paper, we will further motivate our decision to include non-referential concepts in wordnets and give an account of the current state of this expansion.",
}
% __index_level_0__: 60,995
@inproceedings{dhuliawala-etal-2016-slangnet,
    title = "{S}lang{N}et: A {W}ord{N}et like resource for {E}nglish Slang",
    author = "Dhuliawala, Shehzaad and Kanojia, Diptesh and Bhattacharyya, Pushpak",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1686/",
    pages = "4329--4332",
    abstract = "We present a WordNet like structured resource for slang words and neologisms on the internet. The dynamism of language is often an indication that current language technology tools trained on today's data, may not be able to process the language in the future. Our resource could be (1) used to augment the WordNet, (2) used in several Natural Language Processing (NLP) applications which make use of noisy data on the internet like Information Retrieval and Web Mining. Such a resource can also be used to distinguish slang word senses from conventional word senses. To stimulate similar innovations widely in the NLP community, we test the efficacy of our resource for detecting slang using standard bag of words Word Sense Disambiguation (WSD) algorithms (Lesk and Extended Lesk) for English data on the internet.",
}
% __index_level_0__: 60,996
@inproceedings{oliveira-santos-2016-discovering,
    title = "Discovering Fuzzy Synsets from the Redundancy in Different Lexical-Semantic Resources",
    author = "Oliveira, Hugo Gon{\c{c}}alo and Santos, F{\'a}bio",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1687/",
    pages = "4333--4340",
    abstract = "Although represented as such in wordnets, word senses are not discrete. To handle word senses as fuzzy objects, we exploit the graph structure of synonymy pairs acquired from different sources to discover synsets where words have different membership degrees that reflect confidence. Following this approach, a wide-coverage fuzzy thesaurus was discovered from a synonymy network compiled from seven Portuguese lexical-semantic resources. Based on a crowdsourcing evaluation, we can say that the quality of the obtained synsets is far from perfect but, as expected in a confidence measure, it increases significantly for higher cut-points on the membership and, at a certain point, reaches 100{\%} correction rate.",
}
% __index_level_0__: 60,997
@inproceedings{hayoun-elhadad-2016-hebrew,
    title = "The {H}ebrew {F}rame{N}et Project",
    author = "Hayoun, Avi and Elhadad, Michael",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1688/",
    pages = "4341--4347",
    abstract = "We present the Hebrew FrameNet project, describe the development and annotation processes and enumerate the challenges we faced along the way. We have developed semi-automatic tools to help speed the annotation and data collection process. The resource currently covers 167 frames, 3,000 lexical units and about 500 fully annotated sentences. We have started training and testing automatic SRL tools on the seed data.",
}
% __index_level_0__: 60,998
@inproceedings{neudecker-2016-open,
    title = "An Open Corpus for Named Entity Recognition in Historic Newspapers",
    author = "Neudecker, Clemens",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1689/",
    pages = "4348--4352",
    abstract = "The availability of openly available textual datasets ({\textquotedblleft}corpora{\textquotedblright}) with highly accurate manual annotations ({\textquotedblleft}gold standard{\textquotedblright}) of named entities (e.g. persons, locations, organizations, etc.) is crucial in the training and evaluation of named entity recognition systems. Currently there are only few such datasets available on the web, and even less for texts containing historical spelling variation. The production and subsequent release into the public domain of four such datasets with 100 pages each for the languages Dutch, French, German (including Austrian) as part of the Europeana Newspapers project is expected to contribute to the further development and improvement of named entity recognition systems with a focus on historical content. This paper describes how these datasets were produced, what challenges were encountered in their creation and informs about their final quality and availability.",
}
% __index_level_0__: 60,999
@inproceedings{daille-etal-2016-ambiguity,
    title = "Ambiguity Diagnosis for Terms in Digital Humanities",
    author = "Daille, B{\'e}atrice and Jacquey, Evelyne and Lejeune, Ga{\"e}l and Melo, Luis Felipe and Toussaint, Yannick",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1690/",
    pages = "4353--4359",
    abstract = "Among all researches dedicating to terminology and word sense disambiguation, little attention has been devoted to the ambiguity of term occurrences. If a lexical unit is indeed a term of the domain, it is not true, even in a specialised corpus, that all its occurrences are terminological. Some occurrences are terminological and other are not. Thus, a global decision at the corpus level about the terminological status of all occurrences of a lexical unit would then be erroneous. In this paper, we propose three original methods to characterise the ambiguity of term occurrences in the domain of social sciences for French. These methods differently model the context of the term occurrences: one is relying on text mining, the second is based on textometry, and the last one focuses on text genre properties. The experimental results show the potential of the proposed approaches and give an opportunity to discuss about their hybridisation.",
}
% __index_level_0__: 61,000
@inproceedings{navarro-etal-2016-metrical,
    title = "Metrical Annotation of a Large Corpus of {S}panish Sonnets: Representation, Scansion and Evaluation",
    author = "Navarro, Borja and Ribes Lafoz, Mar{\'i}a and S{\'a}nchez, Noelia",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1691/",
    pages = "4360--4364",
    abstract = "In order to analyze metrical and semantics aspects of poetry in Spanish with computational techniques, we have developed a large corpus annotated with metrical information. In this paper we will present and discuss the development of this corpus: the formal representation of metrical patterns, the semi-automatic annotation process based on a new automatic scansion system, the main annotation problems, and the evaluation, in which an inter-annotator agreement of 96{\%} has been obtained. The corpus is open and available.",
}
% __index_level_0__: 61,001
@inproceedings{haaf-2016-corpus,
    title = "Corpus Analysis based on Structural Phenomena in Texts: Exploiting {TEI} Encoding for Linguistic Research",
    author = "Haaf, Susanne",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1692/",
    pages = "4365--4372",
    abstract = "This paper poses the question, how linguistic corpus-based research may be enriched by the exploitation of conceptual text structures and layout as provided via TEI annotation. Examples for possible areas of research and usage scenarios are provided based on the German historical corpus of the Deutsches Textarchiv (DTA) project, which has been consistently tagged accordant to the TEI Guidelines, more specifically to the DTA {\guilsinglright}Base Format{\guilsinglleft} (DTABf). The paper shows that by including TEI-XML structuring in corpus-based analyses significances can be observed for different linguistic phenomena, as e.g. the development of conceptual text structures themselves, the syntactic embedding of terms in certain conceptual text structures, and phenomena of language change which become obvious via the layout of a text. The exemplary study carried out here shows some of the potential for the exploitation of TEI annotation for linguistic research, which might be kept in mind when making design decisions for new corpora as well when working with existing TEI corpora.",
}
% __index_level_0__: 61,002
@inproceedings{van-erp-etal-2016-evaluating,
    title = "Evaluating Entity Linking: An Analysis of Current Benchmark Datasets and a Roadmap for Doing a Better Job",
    author = "van Erp, Marieke and Mendes, Pablo and Paulheim, Heiko and Ilievski, Filip and Plu, Julien and Rizzo, Giuseppe and Waitelonis, Joerg",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1693/",
    pages = "4373--4379",
    abstract = "Entity linking has become a popular task in both natural language processing and semantic web communities. However, we find that the benchmark datasets for entity linking tasks do not accurately evaluate entity linking systems. In this paper, we aim to chart the strengths and weaknesses of current benchmark datasets and sketch a roadmap for the community to devise better benchmark datasets.",
}
% __index_level_0__: 61,003
@inproceedings{preotiuc-pietro-etal-2016-studying,
    title = "Studying the Temporal Dynamics of Word Co-occurrences: An Application to Event Detection",
    author = "Preo{\c{t}}iuc-Pietro, Daniel and Srijith, P. K. and Hepple, Mark and Cohn, Trevor",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1694/",
    pages = "4380--4387",
    abstract = "Streaming media provides a number of unique challenges for computational linguistics. This paper studies the temporal variation in word co-occurrence statistics, with application to event detection. We develop a spectral clustering approach to find groups of mutually informative terms occurring in discrete time frames. Experiments on large datasets of tweets show that these groups identify key real world events as they occur in time, despite no explicit supervision. The performance of our method rivals state-of-the-art methods for event detection on F-score, obtaining higher recall at the expense of precision.",
}
% __index_level_0__: 61,004
@inproceedings{mojica-de-la-vega-ng-2016-markov,
    title = "{M}arkov {L}ogic {N}etworks for Text Mining: A Qualitative and Empirical Comparison with Integer Linear Programming",
    author = "Mojica de la Vega, Luis Gerardo and Ng, Vincent",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1695/",
    pages = "4388--4395",
    abstract = "Joint inference approaches such as Integer Linear Programming (ILP) and Markov Logic Networks (MLNs) have recently been successfully applied to many natural language processing (NLP) tasks, often outperforming their pipeline counterparts. However, MLNs are arguably much less popular among NLP researchers than ILP. While NLP researchers who desire to employ these joint inference frameworks do not necessarily have to understand their theoretical underpinnings, it is imperative that they understand which of them should be applied under what circumstances. With the goal of helping NLP researchers better understand the relative strengths and weaknesses of MLNs and ILP; we will compare them along different dimensions of interest, such as expressiveness, ease of use, scalability, and performance. To our knowledge, this is the first systematic comparison of ILP and MLNs on an NLP task.",
}
% __index_level_0__: 61,005
@inproceedings{zaatari-etal-2016-arabic,
    title = "{A}rabic Corpora for Credibility Analysis",
    author = "Zaatari, Ayman Al and Ballouli, Rim El and ELbassouni, Shady and El-Hajj, Wassim and Hajj, Hazem and Shaban, Khaled and Habash, Nizar and Yahya, Emad",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1696/",
    pages = "4396--4401",
    abstract = "A significant portion of data generated on blogging and microblogging websites is non-credible as shown in many recent studies. To filter out such non-credible information, machine learning can be deployed to build automatic credibility classifiers. However, as in the case with most supervised machine learning approaches, a sufficiently large and accurate training data must be available. In this paper, we focus on building a public Arabic corpus of blogs and microblogs that can be used for credibility classification. We focus on Arabic due to the recent popularity of blogs and microblogs in the Arab World and due to the lack of any such public corpora in Arabic. We discuss our data acquisition approach and annotation process, provide rigid analysis on the annotated data and finally report some results on the effectiveness of our data for credibility classification.",
}
% __index_level_0__: 61,006
@inproceedings{kaplan-etal-2016-solving,
    title = "Solving the {AL} Chicken-and-Egg Corpus and Model Problem: Model-free Active Learning for Phenomena-driven Corpus Construction",
    author = "Kaplan, Dain and Rubens, Neil and Teufel, Simone and Tokunaga, Takenobu",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1697/",
    pages = "4402--4409",
    abstract = "Active learning (AL) is often used in corpus construction (CC) for selecting {\textquotedblleft}informative{\textquotedblright} documents for annotation. This is ideal for focusing annotation efforts when all documents cannot be annotated, but has the limitation that it is carried out in a closed-loop, selecting points that will improve an existing model. For phenomena-driven and exploratory CC, the lack of existing-models and specific task(s) for using it make traditional AL inapplicable. In this paper we propose a novel method for model-free AL utilising characteristics of phenomena for applying AL to select documents for annotation. The method can also supplement traditional closed-loop AL-based CC to extend the utility of the corpus created beyond a single task. We introduce our tool, MOVE, and show its potential with a real world case-study.",
}
% __index_level_0__: 61,007
@inproceedings{freitas-etal-2016-quemdisse,
    title = "{QUEMDISSE}? Reported speech in {P}ortuguese",
    author = "Freitas, Cl{\'a}udia and Freitas, Bianca and Santos, Diana",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1698/",
    pages = "4410--4416",
    abstract = "This paper presents some work on direct and indirect speech in Portuguese using corpus-based methods: we report on a study whose aim was to identify (i) Portuguese verbs used to introduce reported speech and (ii) syntactic patterns used to convey reported speech, in order to enhance the performance of a quotation extraction system, dubbed QUEMDISSE?. In addition, (iii) we present a Portuguese corpus annotated with reported speech, using the lexicon and rules provided by (i) and (ii), and discuss the process of their annotation and what was learned.",
}
% __index_level_0__: 61,008
@inproceedings{minard-etal-2016-meantime,
    title = "{MEANTIME}, the {N}ews{R}eader Multilingual Event and Time Corpus",
    author = "Minard, Anne-Lyse and Speranza, Manuela and Urizar, Ruben and Altuna, Bego{\~n}a and van Erp, Marieke and Schoen, Anneleen and van Son, Chantal",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1699/",
    pages = "4417--4422",
    abstract = "In this paper, we present the NewsReader MEANTIME corpus, a semantically annotated corpus of Wikinews articles. The corpus consists of 480 news articles, i.e. 120 English news articles and their translations in Spanish, Italian, and Dutch. MEANTIME contains annotations at different levels. The document-level annotation includes markables (e.g. entity mentions, event mentions, time expressions, and numerical expressions), relations between markables (modeling, for example, temporal information and semantic role labeling), and entity and event intra-document coreference. The corpus-level annotation includes entity and event cross-document coreference. Semantic annotation on the English section was performed manually; for the annotation in Italian, Spanish, and (partially) Dutch, a procedure was devised to automatically project the annotations on the English texts onto the translated texts, based on the manual alignment of the annotated elements; this enabled us not only to speed up the annotation process but also provided cross-lingual coreference. The English section of the corpus was extended with timeline annotations for the SemEval 2015 TimeLine shared task. The {\textquotedblleft}First CLIN Dutch Shared Task{\textquotedblright} at CLIN26 was based on the Dutch section, while the EVALITA 2016 FactA (Event Factuality Annotation) shared task, based on the Italian section, is currently being organized.",
}
% __index_level_0__: 61,009
@inproceedings{moran-2016-acqdiv,
    title = "The {ACQDIV} Database: {M}in(d)ing the Ambient Language",
    author = "Moran, Steven",
    editor = "Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1700/",
    pages = "4423--4429",
    abstract = "One of the most pressing questions in cognitive science remains unanswered: what cognitive mechanisms enable children to learn any of the world's 7000 or so languages? Much discovery has been made with regard to specific learning mechanisms in specific languages, however, given the remarkable diversity of language structures (Evans and Levinson 2009, Bickel 2014) the burning question remains: what are the underlying processes that make language acquisition possible, despite substantial cross-linguistic variation in phonology, morphology, syntax, etc.? To investigate these questions, a comprehensive cross-linguistic database of longitudinal child language acquisition corpora from maximally diverse languages has been built.",
}
% __index_level_0__: 61,010
inproceedings
danieli-etal-2016-summarizing
Summarizing Behaviours: An Experiment on the Annotation of Call-Centre Conversations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1701/
Danieli, Morena and R, Balamurali A and Stepanov, Evgeny and Favre, Benoit and Bechet, Frederic and Riccardi, Giuseppe
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4430--4433
Annotating and predicting behavioural aspects in conversations is becoming critical in the conversational analytics industry. In this paper we look into inter-annotator agreement of agent behaviour dimensions on two call center corpora. We find that the task can be annotated consistently over time, but that subjectivity issues impact the quality of the annotation. The reformulation of some of the annotated dimensions is suggested in order to improve agreement.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,011
inproceedings
koiso-etal-2016-survey
Survey of Conversational Behavior: Towards the Design of a Balanced Corpus of Everyday {J}apanese Conversation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1702/
Koiso, Hanae and Tsuchiya, Tomoyuki and Watanabe, Ryoko and Yokomori, Daisuke and Aizawa, Masao and Den, Yasuharu
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4434--4439
In 2016, we set about building a large-scale corpus of everyday Japanese conversation{\textemdash}a collection of conversations embedded in naturally occurring activities in daily life. We will collect more than 200 hours of recordings over six years, publishing the corpus in 2022. To construct such a huge corpus, we have conducted a pilot project, one of whose purposes is to establish a corpus design for collecting various kinds of everyday conversations in a balanced manner. For this purpose, we conducted a survey of everyday conversational behavior, with about 250 adults, in order to reveal how diverse our everyday conversational behavior is and to build an empirical foundation for corpus design. The questionnaire included when, where, how long, with whom, and in what kind of activity informants were engaged in conversations. We found that ordinary conversations show the following tendencies: i) they mainly consist of chats, business talks, and consultations; ii) in general, the number of participants is small and the duration of the conversation is short; iii) many conversations are conducted in private places such as homes, as well as in public places such as offices and schools; and iv) some questionnaire items are related to each other. This paper describes an overview of this survey study, and then discusses how to design a large-scale corpus of everyday Japanese conversation on this basis.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,012
inproceedings
stefanov-beskow-2016-multi
A Multi-party Multi-modal Dataset for Focus of Visual Attention in Human-human and Human-robot Interaction
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1703/
Stefanov, Kalin and Beskow, Jonas
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4440--4444
This paper describes a data collection setup and a newly recorded dataset. The main purpose of this dataset is to explore patterns in the focus of visual attention of humans under three different conditions - two humans involved in task-based interaction with a robot; the same two humans involved in task-based interaction where the robot is replaced by a third human, and a free three-party human interaction. The dataset contains two parts - 6 sessions with duration of approximately 3 hours and 9 sessions with duration of approximately 4.5 hours. Both parts of the dataset are rich in modalities and recorded data streams - they include the streams of three Kinect v2 devices (color, depth, infrared, body and face data), three high quality audio streams, three high resolution GoPro video streams, touch data for the task-based interactions and the system state of the robot. In addition, the second part of the dataset introduces the data streams from three Tobii Pro Glasses 2 eye trackers. The language of all interactions is English and all data streams are spatially and temporally aligned.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,013
inproceedings
abbott-etal-2016-internet
{I}nternet Argument Corpus 2.0: An {SQL} schema for Dialogic Social Media and the Corpora to go with it
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1704/
Abbott, Rob and Ecker, Brian and Anand, Pranav and Walker, Marilyn
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4445--4452
Large scale corpora have benefited many areas of research in natural language processing, but until recently, resources for dialogue have lagged behind. Now, with the emergence of large scale social media websites incorporating a threaded dialogue structure, content feedback, and self-annotation (such as stance labeling), there are valuable new corpora available to researchers. In previous work, we released the INTERNET ARGUMENT CORPUS, one of the first larger scale resources available for opinion sharing dialogue. We now release the INTERNET ARGUMENT CORPUS 2.0 (IAC 2.0) in the hope that others will find it as useful as we have. The IAC 2.0 provides more data than IAC 1.0 and organizes it using an extensible, repurposable SQL schema. The database structure in conjunction with the associated code facilitates querying from and combining multiple dialogically structured data sources. The IAC 2.0 schema provides support for forum posts, quotations, markup (bold, italic, etc), and various annotations, including Stanford CoreNLP annotations. We demonstrate the generalizability of the schema by providing code to import the ConVote corpus.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,014
inproceedings
gilmartin-campbell-2016-capturing
Capturing Chat: Annotation and Tools for Multiparty Casual Conversation.
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1705/
Gilmartin, Emer and Campbell, Nick
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4453--4457
Casual multiparty conversation is an understudied but very common genre of spoken interaction, whose analysis presents a number of challenges in terms of data scarcity and annotation. We describe the annotation process used on the d64 and DANS multimodal corpora of multiparty casual talk, which have been manually segmented, transcribed, annotated for laughter and disfluencies, and aligned using the Penn Aligner. We also describe a visualization tool, STAVE, developed during the annotation process, which allows long stretches of talk or indeed entire conversations to be viewed, aiding preliminary identification of features and patterns worthy of analysis. It is hoped that this tool will be of use to other researchers working in this field.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,015
inproceedings
kamocki-oregan-2016-privacy
Privacy Issues in Online Machine Translation Services - {E}uropean Perspective
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1706/
Kamocki, Pawel and O{'}Regan, Jim
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4458--4462
In order to develop its full potential, global communication needs linguistic support systems such as Machine Translation (MT). In the past decade, free online MT tools have become available to the general public, and the quality of their output is increasing. However, the use of such tools may entail various legal implications, especially as far as processing of personal data is concerned. This is even more evident if we take into account that their business model is largely based on providing translation in exchange for data, which can subsequently be used to improve the translation model, but also for commercial purposes. The purpose of this paper is to examine how free online MT tools fit in the European data protection framework, harmonised by the EU Data Protection Directive. The perspectives of both the user and the MT service provider are taken into account.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,016
inproceedings
chiarcos-etal-2016-lin
{L}in|gu|is|tik: Building the Linguist`s Pathway to Bibliographies, Libraries, Language Resources and Linked Open Data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1707/
Chiarcos, Christian and F{\"a}th, Christian and Renner-Westermann, Heike and Abromeit, Frank and Dimitrova, Vanya
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4463--4471
This paper introduces a novel research tool for the field of linguistics: The Lin|gu|is|tik web portal provides a virtual library which offers scientific information on every linguistic subject. It comprises selected internet sources and databases as well as catalogues for linguistic literature, and addresses an interdisciplinary audience. The virtual library is the most recent outcome of the Special Subject Collection Linguistics of the German Research Foundation (DFG), and also integrates the knowledge accumulated in the Bibliography of Linguistic Literature. In addition to the portal, we describe long-term goals and prospects with a special focus on ongoing efforts regarding an extension towards integrating language resources and Linguistic Linked Open Data.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,017
inproceedings
nguyen-etal-2016-towards
Towards a Language Service Infrastructure for Mobile Environments
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1708/
Nguyen, Ngoc and Lin, Donghui and Nakaguchi, Takao and Ishida, Toru
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4472--4478
Since mobile devices have feature-rich configurations and provide diverse functions, the use of mobile devices combined with the language resources of cloud environments is highly promising for achieving wide-ranging communication that goes beyond the current language barrier. However, there are mismatches between using resources of mobile devices and services in the cloud, such as different communication protocols and different input and output methods. In this paper, we propose a language service infrastructure for mobile environments to combine these services. The proposed language service infrastructure allows users to use and mashup existing language resources on both cloud environments and their mobile devices. Furthermore, it allows users to flexibly use services in the cloud or services on mobile devices in their composite service without implementing several different composite services that have the same functionality. A case study of a Mobile Shopping Translation System using both a service in the cloud (translation service) and services on mobile devices (Bluetooth low energy (BLE) service and text-to-speech service) is introduced.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,018
inproceedings
agosti-etal-2016-designing
Designing A Long Lasting Linguistic Project: The Case Study of {ASI}t
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1709/
Agosti, Maristella and Di Buccio, Emanuele and Di Nunzio, Giorgio Maria and Poletto, Cecilia and Rinke, Esther
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4479--4483
In this paper, we discuss the requirements that a long lasting linguistic database should have in order to meet the needs of the linguists together with the aim of durability and sharing of data. In particular, we discuss the generalizability of the Syntactic Atlas of Italy, a linguistic project that builds on a long standing tradition of collecting and analyzing linguistic corpora, on a more recent project that focuses on the synchronic and diachronic analysis of the syntax of Italian and Portuguese relative clauses. The results that are presented are in line with the FLaReNet Strategic Agenda that highlighted the most pressing needs for research areas, such as Natural Language Processing, and presented a set of recommendations for the development and progress of Language resources in Europe.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,019
inproceedings
cavar-etal-2016-global
Global Open Resources and Information for Language and Linguistic Analysis ({GORILLA})
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1710/
Cavar, Damir and Cavar, Malgorzata and Moe, Lwin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4484--4491
The infrastructure Global Open Resources and Information for Language and Linguistic Analysis (GORILLA) was created as a resource that provides a bridge between disciplines such as documentary, theoretical, and corpus linguistics, speech and language technologies, and digital language archiving services. GORILLA is designed as an interface between digital language archive services and language data producers. It addresses various problems of common digital language archive infrastructures. At the same time it serves the speech and language technology communities by providing a platform to create and share speech and language data from low-resourced and endangered languages. It hosts an initial collection of language models for speech and natural language processing (NLP), and technologies or software tools for corpus creation and annotation. GORILLA is designed to address the Transcription Bottleneck in language documentation, and, at the same time to provide solutions to the general Language Resource Bottleneck in speech and language technologies. It does so by facilitating the cooperation between documentary and theoretical linguistics, and speech and language technologies research and development, in particular for low-resourced and endangered languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,020
inproceedings
druskat-etal-2016-corpus
corpus-tools.org: An Interoperable Generic Software Tool Set for Multi-layer Linguistic Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1711/
Druskat, Stephan and Gast, Volker and Krause, Thomas and Zipser, Florian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4492--4499
This paper introduces an open source, interoperable generic software tool set catering for the entire workflow of creation, migration, annotation, query and analysis of multi-layer linguistic corpora. It consists of four components: Salt, a graph-based meta model and API for linguistic data, the common data model for the rest of the tool set; Pepper, a conversion tool and platform for linguistic data that can be used to convert many different linguistic formats into each other; Atomic, an extensible, platform-independent multi-layer desktop annotation software for linguistic corpora; ANNIS, a search and visualization architecture for multi-layer linguistic corpora with many different visualizations and a powerful native query language. The set was designed to solve the following issues in a multi-layer corpus workflow: Lossless data transition between tools through a common data model generic enough to allow for a potentially unlimited number of different types of annotation, conversion capabilities for different linguistic formats to cater for the processing of data from different sources and/or with existing annotations, a high level of extensibility to enhance the sustainability of the whole tool set, analysis capabilities encompassing corpus and annotation query alongside multi-faceted visualizations of all annotation layers.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,021
inproceedings
schafer-2016-commoncow
{C}ommon{COW}: Massively Huge Web Corpora from {C}ommon{C}rawl Data and a Method to Distribute them Freely under Restrictive {EU} Copyright Laws
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1712/
Sch{\"a}fer, Roland
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4500--4504
In this paper, I describe a method of creating massively huge web corpora from the CommonCrawl data sets and redistributing the resulting annotations in a stand-off format. Current EU (and especially German) copyright legislation categorically forbids the redistribution of downloaded material without express prior permission by the authors. Therefore, such stand-off annotations (or other derivates) are the only format in which European researchers (like myself) are allowed to re-distribute the respective corpora. In order to make the full corpora available to the public despite such restrictions, the stand-off format presented here allows anybody to locally reconstruct the full corpora with the least possible computational effort.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,022
inproceedings
katakis-etal-2016-clarin
{CLARIN}-{EL} Web-based Annotation Tool
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1713/
Katakis, Ioannis Manousos and Petasis, Georgios and Karkaletsis, Vangelis
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4505--4512
This paper presents a new Web-based annotation tool, the {\textquotedblleft}CLARIN-EL Web-based Annotation Tool{\textquotedblright}. Based on an existing annotation infrastructure offered by the {\textquotedblleft}Ellogon{\textquotedblright} language engineering platform, this new tool transfers a large part of Ellogon`s features and functionalities to a Web environment, by exploiting the capabilities of cloud computing. This new annotation tool is able to support a wide range of annotation tasks, through user provided annotation schemas in XML. The new annotation tool has already been employed in several annotation tasks, including the annotation of arguments, which is presented as a use case. The CLARIN-EL annotation tool is compared to existing solutions along several dimensions and features. Finally, future work includes the improvement of integration with the CLARIN-EL infrastructure, and the inclusion of features not currently supported, such as the annotation of aligned documents.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,023
inproceedings
kattenberg-etal-2016-two
Two Architectures for Parallel Processing of Huge Amounts of Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1714/
Kattenberg, Mathijs and Beloki, Zuhaitz and Soroa, Aitor and Artola, Xabier and Fokkens, Antske and Huygen, Paul and Verstoep, Kees
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4513--4519
This paper presents two alternative NLP architectures to analyze massive amounts of documents, using parallel processing. The two architectures focus on different processing scenarios, namely batch-processing and streaming processing. The batch-processing scenario aims at optimizing the overall throughput of the system, i.e., minimizing the overall time spent on processing all documents. The streaming architecture aims to minimize the time to process real-time incoming documents and is therefore especially suitable for live feeds. The paper presents experiments with both architectures, and reports the overall gain when they are used for batch as well as for streaming processing. All the software described in the paper is publicly available under free licenses.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,024
inproceedings
cassidy-2016-publishing
Publishing the Trove Newspaper Corpus
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1715/
Cassidy, Steve
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4520--4525
The Trove Newspaper Corpus is derived from the National Library of Australia`s digital archive of newspaper text. The corpus is a snapshot of the NLA collection taken in 2015 to be made available for language research as part of the Alveo Virtual Laboratory and contains 143 million articles dating from 1806 to 2007. This paper describes the work we have done to make this large corpus available as a research collection, facilitating access to individual documents and enabling large scale processing of the newspaper text in a cloud-based environment.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,025
inproceedings
edlund-gustafson-2016-hidden
Hidden Resources {\textemdash} Strategies to Acquire and Exploit Potential Spoken Language Resources in National Archives
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1717/
Edlund, Jens and Gustafson, Joakim
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4531--4534
In 2014, the Swedish government tasked a Swedish agency, The Swedish Post and Telecom Authority (PTS), with investigating how to best create and populate an infrastructure for spoken language resources (Ref N2014/2840/ITP). As a part of this work, the department of Speech, Music and Hearing at KTH Royal Institute of Technology have taken inventory of existing potential spoken language resources, mainly in Swedish national archives and other governmental or public institutions. In this position paper, key priorities, perspectives, and strategies that may be of general, rather than Swedish, interest are presented. We discuss broad types of potential spoken language resources available; to what extent these resources are free to use; and thirdly the main contribution: strategies to ensure the continuous acquisition of spoken language resources in a manner that facilitates speech and speech technology research.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,027
inproceedings
mapelli-etal-2016-elra
The {ELRA} License Wizard
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1718/
Mapelli, Val{\'e}rie and Popescu, Vladimir and Liu, Lin and Barrera, Meritxell Fern{\'a}ndez and Choukri, Khalid
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4535--4538
To allow an easy understanding of the various licenses that exist for the use of Language Resources (ELRA`s, META-SHARE`s, Creative Commons', etc.), ELRA has developed a License Wizard to help the right-holders share/distribute their resources under the appropriate license. It also aims to be exploited by users to better understand the legal obligations that apply in various licensing situations. The present paper elaborates on the License Wizard functionalities of this web configurator, which enables users to select a number of legal features and obtain the user license adapted to their selection, to define which user licenses they would like to select in order to distribute their Language Resources, and to integrate the user license terms into a Distribution Agreement that could be proposed to ELRA or META-SHARE for further distribution through the ELRA Catalogue of Language Resources. Thanks to a flexible back office, the structure of the legal feature selection can easily be reviewed to include other features that may be relevant for other licenses. Integrating contributions from other initiatives thus aims to be one of the obvious next steps, with a special focus on CLARIN and Linked Data experiences.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,028
inproceedings
grouas-etal-2016-review
Review on the Existing Language Resources for Languages of {F}rance
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1719/
Grouas, Thibault and Mapelli, Val{\'e}rie and Samier, Quentin
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4539--4542
With the support of the DGLFLF, ELDA conducted an inventory of existing language resources for the regional languages of France. The main aim of this inventory was to assess the exploitability of the identified resources within technologies. A total of 2,299 Language Resources were identified. As a second step, a deeper analysis of a set of three language groups (Breton, Occitan, overseas languages) was carried out along with a focus of their exploitability within three technologies: automatic translation, voice recognition/synthesis and spell checkers. The survey was followed by the organisation of the TLRF2015 Conference which aimed to present the state of the art in the field of the Technologies for Regional Languages of France. The next step will be to activate the network of specialists built up during the TLRF conference and to begin the organisation of a second TLRF conference. Meanwhile, the French Ministry of Culture continues its actions related to linguistic diversity and technology, in particular through a project with Wikimedia France related to contributions to Wikipedia in regional languages, the upcoming new version of the {\textquotedblleft}Corpus de la Parole{\textquotedblright} and the reinforcement of the DGLFLF`s Observatory of Linguistic Practices.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,029
inproceedings
cieri-etal-2016-selection
Selection Criteria for Low Resource Language Programs
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1720/
Cieri, Christopher and Maxwell, Mike and Strassel, Stephanie and Tracey, Jennifer
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4543--4549
This paper documents and describes the criteria used to select languages for study within programs that include low resource languages, whether given that label or another similar one. It focuses on five US common-task Human Language Technology research and development programs in which the authors have provided information or consulting related to the choice of language. The paper does not describe the actual selection process, which is the responsibility of program management and highly specific to a program`s individual goals and context. Instead it concentrates on the data and criteria that have been considered relevant previously, with the thought that future program managers and their consultants may adapt these and apply them with different prioritization to future programs.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,030
inproceedings
barrera-etal-2016-enhancing
Enhancing Cross-border {EU} {E}-commerce through Machine Translation: Needed Language Resources, Challenges and Opportunities
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1721/
Barrera, Meritxell Fern{\'a}ndez and Popescu, Vladimir and Toral, Antonio and Gaspari, Federico and Choukri, Khalid
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4550--4556
This paper discusses the role that statistical machine translation (SMT) can play in the development of cross-border EU e-commerce, by highlighting extant obstacles and identifying relevant technologies to overcome them. In this sense, it firstly proposes a typology of e-commerce static and dynamic textual genres and it identifies those that may be more successfully targeted by SMT. The specific challenges concerning the automatic translation of user-generated content are discussed in detail. Secondly, the paper highlights the risk of data sparsity inherent to e-commerce and it explores the state-of-the-art strategies to achieve domain adequacy via adaptation. Thirdly, it proposes a robust workflow for the development of SMT systems adapted to the e-commerce domain by relying on inexpensive methods. Given the scarcity of user-generated language corpora for most language pairs, the paper proposes to obtain monolingual target-language data to train language models and aligned parallel corpora to tune and evaluate MT systems by means of crowdsourcing.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,031
inproceedings
santus-etal-2016-nine
Nine Features in a Random Forest to Learn Taxonomical Semantic Relations
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1722/
Santus, Enrico and Lenci, Alessandro and Chiu, Tin-Shing and Lu, Qin and Huang, Chu-Ren
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4557--4564
ROOT9 is a supervised system for the classification of hypernyms, co-hyponyms and random words that is derived from the already introduced ROOT13 (Santus et al., 2016). It relies on a Random Forest algorithm and nine unsupervised corpus-based features. We evaluate it with a 10-fold cross validation on 9,600 pairs, equally distributed among the three classes and involving several Parts-Of-Speech (i.e. adjectives, nouns and verbs). When all the classes are present, ROOT9 achieves an F1 score of 90.7{\%}, against a baseline of 57.2{\%} (vector cosine). When the classification is binary, ROOT9 achieves the following results against the baseline: hypernyms-co-hyponyms 95.7{\%} vs. 69.8{\%}, hypernyms-random 91.8{\%} vs. 64.1{\%}, and co-hyponyms-random 97.8{\%} vs. 79.4{\%}. In order to compare the performance with the state-of-the-art, we have also evaluated ROOT9 on subsets of the Weeds et al. (2014) datasets, proving that it is in fact competitive. Finally, we investigated whether the system learns the semantic relation or simply learns the prototypical hypernyms, as claimed by Levy et al. (2015). The second possibility seems to be the most likely, even though ROOT9 can be trained on negative examples (i.e., switched hypernyms) to drastically reduce this bias.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,032
inproceedings
santus-etal-2016-nerd
What a Nerd! Beating Students and Vector Cosine in the {ESL} and {TOEFL} Datasets
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1723/
Santus, Enrico and Lenci, Alessandro and Chiu, Tin-Shing and Lu, Qin and Huang, Chu-Ren
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4565--4572
In this paper, we claim that Vector Cosine {\textemdash} which is generally considered one of the most efficient unsupervised measures for identifying word similarity in Vector Space Models {\textemdash} can be outperformed by a completely unsupervised measure that evaluates the extent of the intersection among the most associated contexts of two target words, weighting such intersection according to the rank of the shared contexts in the dependency ranked lists. This claim comes from the hypothesis that similar words do not simply occur in similar contexts, but they share a larger portion of their most relevant contexts compared to other related words. To prove it, we describe and evaluate APSyn, a variant of Average Precision that {\textemdash} independently of the adopted parameters {\textemdash} outperforms the Vector Cosine and the co-occurrence on the ESL and TOEFL test sets. In the best setting, APSyn reaches 0.73 accuracy on the ESL dataset and 0.70 accuracy in the TOEFL dataset, beating therefore the non-English US college applicants (whose average, as reported in the literature, is 64.50{\%}) and several state-of-the-art approaches.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,033
inproceedings
del-tredici-bel-2016-assessing
Assessing the Potential of Metaphoricity of verbs using corpus data
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1724/
Del Tredici, Marco and Bel, N{\'u}ria
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4573--4577
The paper investigates the relation between metaphoricity and distributional characteristics of verbs, introducing POM, a corpus-derived index that can be used to define the upper bound of metaphoricity of any expression in which a given verb occurs. The work moves from the observation that while some verbs can be used to create highly metaphoric expressions, others cannot. We conjecture that this fact is related to the number of contexts in which a verb occurs and to the frequency of each context. This intuition is modelled by introducing a method in which each context of a verb in a corpus is assigned a vector representation, and a clustering algorithm is employed to identify similar contexts. Eventually, the Standard Deviation of the relative frequency values of the clusters is computed and taken as the POM of the target verb. We tested POM in two experimental settings, obtaining accuracy values of 84{\%} and 92{\%}. Since we are convinced, along with (Shutoff, 2015), that metaphor detection systems should be concerned only with the identification of highly metaphoric expressions, we believe that POM could be profitably employed by these systems to a priori exclude expressions that, due to the verb they include, can only have low degrees of metaphoricity.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,034
inproceedings
lafourcade-ramadier-2016-semantic
Semantic Relation Extraction with Semantic Patterns Experiment on Radiology Reports
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1725/
Lafourcade, Mathieu and Ramadier, Lionel
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4578--4582
This work presents a practical system for indexing terms and relations from French radiology reports, called IMAIOS. In this paper, we present how semantic relations (causes, consequences, symptoms, locations, parts...) between medical terms can be extracted. For this purpose, we handcrafted some linguistic patterns from a subset of our radiology report corpora. As semantic patterns (de (of)) may be too general or ambiguous, semantic constraints have been added. For instance, in the sentence n{\'e}oplasie du sein (neoplasm of breast), the system, knowing neoplasm as a disease and breast as an anatomical location, identifies the relation as being a location: neoplasm r-lieu breast. An evaluation of the effect of semantic constraints is proposed.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,035
inproceedings
hongchao-etal-2016-evalution
{EVAL}ution-{MAN}: A {C}hinese Dataset for the Training and Evaluation of {DSM}s
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1726/
Hongchao, Liu and Neergaard, Karl and Santus, Enrico and Huang, Chu-Ren
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4583--4587
Distributional semantic models (DSMs) are currently being used in the measurement of word relatedness and word similarity. One shortcoming of DSMs is that they do not provide a principled way to discriminate different semantic relations. Several approaches have been adopted that rely on annotated data either in the training of the model or later in its evaluation. In this paper, we introduce a dataset for training and evaluating DSMs on semantic relation discrimination between words in Mandarin Chinese. The construction of the dataset followed EVALution 1.0, which is an English dataset for the training and evaluating of DSMs. The dataset contains 360 relation pairs, distributed in five different semantic relations, including antonymy, synonymy, hypernymy, meronymy and near-synonymy. All relation pairs were checked manually to estimate their quality. In the 360 word relation pairs, there are 373 relata. They were all extracted and subsequently manually tagged according to their semantic type. The frequency of the relata was calculated in a combined corpus of Sinica and Chinese Gigaword. To the best of our knowledge, EVALution-MAN is the first of its kind for Mandarin Chinese.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,036
inproceedings
anwar-sharma-2016-towards
Towards Building Semantic Role Labeler for {I}ndian Languages
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1727/
Anwar, Maaz and Sharma, Dipti
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4588--4595
We present a statistical system for identifying the semantic relationships or semantic roles for two major Indian Languages, Hindi and Urdu. Given an input sentence and a predicate/verb, the system first identifies the arguments pertaining to that verb and then classifies them into one of the semantic labels, which can be DOER, THEME, LOCATIVE, CAUSE, PURPOSE, etc. The system is based on 2 statistical classifiers trained on roughly 130,000 words for Urdu and 100,000 words for Hindi that were hand-annotated with semantic roles under the PropBank project for these two languages. Our system achieves an accuracy of 86{\%} in identifying the arguments of a verb for Hindi and 75{\%} for Urdu. At the subsequent task of classifying the constituents into their semantic roles, the Hindi system achieved 58{\%} precision and 42{\%} recall whereas the Urdu system performed better and achieved 83{\%} precision and 80{\%} recall. Our study also allowed us to compare the usefulness of different linguistic features and feature combinations in the semantic role labeling task. We also examine the use of statistical syntactic parsing as a feature in the role labeling task.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,037
inproceedings
samardzic-milicevic-2016-framework
A Framework for Automatic Acquisition of {C}roatian and {S}erbian Verb Aspect from Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1728/
Samard{\v{z}}i{\'c}, Tanja and Mili{\v{c}}evi{\'c}, Maja
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4596--4601
Verb aspect is a grammatical and lexical category that encodes temporal unfolding and duration of events described by verbs. It is a potentially interesting source of information for various computational tasks, but has so far not been studied in much depth from the perspective of automatic processing. Slavic languages are particularly interesting in this respect, as they encode aspect through complex and not entirely consistent lexical derivations involving prefixation and suffixation. Focusing on Croatian and Serbian, in this paper we propose a novel framework for automatic classification of their verb types into a number of fine-grained aspectual classes based on the observable morphology of verb forms. In addition, we provide a set of around 2000 verbs classified based on our framework. This set can be used for linguistic research as well as for testing automatic classification on a larger scale. With minor adjustments the approach is also applicable to other Slavic languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,038
inproceedings
lendvai-etal-2016-monolingual
Monolingual Social Media Datasets for Detecting Contradiction and Entailment
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1729/
Lendvai, Piroska and Augenstein, Isabelle and Bontcheva, Kalina and Declerck, Thierry
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4602--4605
Entailment recognition approaches are useful for application domains such as information extraction, question answering or summarisation, for which evidence from multiple sentences needs to be combined. We report on a new 3-way judgement Recognizing Textual Entailment (RTE) resource that originates in the Social Media domain, and explain our semi-automatic creation method for the special purpose of information verification, which draws on manually established rumourous claims reported during crisis events. From about 500 English tweets related to 70 unique claims we compile and evaluate 5.4k RTE pairs, and we continue automating the workflow to generate similar-sized datasets in other languages.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,039
inproceedings
pustejovsky-krishnaswamy-2016-voxml
{V}ox{ML}: A Visualization Modeling Language
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1730/
Pustejovsky, James and Krishnaswamy, Nikhil
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4606--4613
We present the specification for a modeling language, VoxML, which encodes semantic knowledge of real-world objects represented as three-dimensional models, and of events and attributes related to and enacted over these objects. VoxML is intended to overcome the limitations of existing 3D visual markup languages by allowing for the encoding of a broad range of semantic knowledge that can be exploited by a variety of systems and platforms, leading to multimodal simulations of real-world scenarios using conceptual objects that represent their semantic values.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,040
inproceedings
teraoka-2016-metonymy
Metonymy Analysis Using Associative Relations between Words
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1731/
Teraoka, Takehiro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4614--4620
Metonymy is a figure of speech in which one item`s name represents another item that usually has a close relation with the first one. Metonymic expressions need to be correctly detected and interpreted because sentences including such expressions have different meanings from literal ones; computer systems may output inappropriate results in natural language processing. In this paper, an associative approach for analyzing metonymic expressions is proposed. By using associative information and two conceptual distances between words in a sentence, a previous method is enhanced and a decision tree is trained to detect metonymic expressions. After detecting these expressions, they are interpreted as metonymic understanding words by using associative information. This method was evaluated by comparing it with two baseline methods based on previous studies on the Japanese language that used case frames and co-occurrence information. As a result, the proposed method exhibited significantly better accuracy (0.85) of determining words as metonymic or literal expressions than the baselines. It also exhibited better accuracy (0.74) of interpreting the detected metonymic expressions than the baselines.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,041
inproceedings
goodwin-harabagiu-2016-embedding
Embedding Open-domain Common-sense Knowledge from Text
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1732/
Goodwin, Travis and Harabagiu, Sanda
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4621--4628
Our ability to understand language often relies on common-sense knowledge {\textemdash} background information the speaker can assume is known by the reader. Similarly, our comprehension of the language used in complex domains relies on access to domain-specific knowledge. Capturing common-sense and domain-specific knowledge can be achieved by taking advantage of recent advances in open information extraction (IE) techniques and, more importantly, of knowledge embeddings, which are multi-dimensional representations of concepts and relations. Building a knowledge graph for representing common-sense knowledge in which concepts discerned from noun phrases are cast as vertices and lexicalized relations are cast as edges leads to learning the embeddings of common-sense knowledge accounting for semantic compositionality as well as implied knowledge. Common-sense knowledge is acquired from a vast collection of blogs and books as well as from WordNet. Similarly, medical knowledge is learned from two large sets of electronic health records. The evaluation results of these two forms of knowledge are promising: the same knowledge acquisition methodology based on learning knowledge embeddings works well both for common-sense knowledge and for medical knowledge. Interestingly, the common-sense knowledge that we have acquired was evaluated as being less neutral than the medical knowledge, as it often reflected the opinion of the knowledge utterer. In addition, the acquired medical knowledge was evaluated as more plausible than the common-sense knowledge, reflecting the complexity of acquiring common-sense knowledge due to the pragmatics and economicity of language.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,042
inproceedings
mencia-etal-2016-medical
Medical Concept Embeddings via Labeled Background Corpora
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1733/
Menc{\'i}a, Eneldo Loza and de Melo, Gerard and Nam, Jinseok
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4629--4636
In recent years, we have seen an increasing amount of interest in low-dimensional vector representations of words. Among other things, these facilitate computing word similarity and relatedness scores. The most well-known example of algorithms to produce representations of this sort are the word2vec approaches. In this paper, we investigate a new model to induce such vector spaces for medical concepts, based on a joint objective that exploits not only word co-occurrences but also manually labeled documents, as available from sources such as PubMed. Our extensive experimental analysis shows that our embeddings lead to significantly higher correlations with human similarity and relatedness assessments than previous work. Due to the simplicity and versatility of vector representations, these findings suggest that our resource can easily be used as a drop-in replacement to improve any systems relying on medical concept similarity measures.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,043
inproceedings
dumont-etal-2016-question
Question-Answering with Logic Specific to Video Games
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1734/
Dumont, Corentin and Tian, Ran and Inui, Kentaro
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4637--4643
We present a corpus and a knowledge database aiming at developing Question-Answering in a new context, the open world of a video game. We chose a popular game called {\textquoteleft}Minecraft{\textquoteright}, and created a QA corpus with a knowledge database related to this game and the ontology of a meaning representation that will be used to structure this database. We are interested in the logic rules specific to the game, which may not exist in the real world. The ultimate goal of this research is to build a QA system that can answer natural language questions from players by using inference on these game-specific logic rules. The QA corpus is partially composed of online quiz questions and partially composed of manually written variations of the most relevant ones. The knowledge database is extracted from several wiki-like websites about Minecraft. It is composed of unstructured data, such as text, that will be structured using the meaning representation we defined, and already structured data such as infoboxes. A preliminary examination of the data shows that players are asking creative questions about the game, and that the QA corpus can be used for clustering verbs and linking them to predefined actions in the game.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,044
inproceedings
kohn-etal-2016-mining
Mining the Spoken {W}ikipedia for Speech Data and Beyond
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1735/
K{\"o}hn, Arne and Stegen, Florian and Baumann, Timo
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4644--4647
We present a corpus of time-aligned spoken data of Wikipedia articles as well as the pipeline that allows us to generate such corpora for many languages. There are initiatives to create and sustain spoken Wikipedia versions in many languages and hence the data is freely available, grows over time, and can be used for automatic corpus creation. Our pipeline automatically downloads and aligns this data. The resulting German corpus currently totals 293h of audio, of which we align 71h in full sentences and another 86h of sentences with some missing words. The English corpus consists of 287h, for which we align 27h in full sentences and 157h with some missing words. Results are publicly available.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,045
inproceedings
herms-etal-2016-corpus
A Corpus of Read and Spontaneous {U}pper {S}axon {G}erman Speech for {ASR} Evaluation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1736/
Herms, Robert and Seelig, Laura and M{\"u}nch, Stefanie and Eibl, Maximilian
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4648--4651
In this paper we present a corpus named SXUCorpus which contains read and spontaneous speech of the Upper Saxon German dialect. The data has been collected from eight archives of local television stations located in the Free State of Saxony. The recordings include broadcasted topics of news, economy, weather, sport, and documentation from the years 1992 to 1996 and have been manually transcribed and labeled. In the paper, we report the methodology of collecting and processing analog audiovisual material, constructing the corpus and describe the properties of the data. In its current version, the corpus is available to the scientific community and is designed for automatic speech recognition (ASR) evaluation with a development set and a test set. We performed ASR experiments with the open-source framework sphinx-4 including a configuration for Standard German on the dataset. Additionally, we show the influence of acoustic model and language model adaptation by the utilization of the development set.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,046
inproceedings
meunier-etal-2016-typaloc
The {TYPALOC} Corpus: A Collection of Various Dysarthric Speech Recordings in Read and Spontaneous Styles
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1738/
Meunier, Christine and Fougeron, Cecile and Fredouille, Corinne and Bigi, Brigitte and Crevier-Buchman, Lise and Delais-Roussarie, Elisabeth and Georgeton, Laurianne and Ghio, Alain and Laaridh, Imed and Legou, Thierry and Pillot-Loiseau, Claire and Pouchoulin, Gilles
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4658--4665
This paper presents the TYPALOC corpus of French Dysarthric and Healthy speech and the rationale underlying its constitution. The objective is to compare phonetic variation in the speech of dysarthric vs. healthy speakers in different speech conditions (read and unprepared speech). More precisely, we aim to compare the extent, types and location of phonetic variation within these different populations and speech conditions. The TYPALOC corpus is constituted of a selection of 28 dysarthric patients (three different pathologies) and of 12 healthy control speakers recorded while reading the same text and in a more natural continuous speech condition. Each audio signal has been segmented into Inter-Pausal Units. Then, the corpus has been manually transcribed and automatically aligned. The alignment has been corrected by an expert phonetician. Moreover, the corpus benefits from an automatic syllabification and an Automatic Detection of Acoustic Phone-Based Anomalies. Finally, in order to interpret phonetic variations due to pathologies, a perceptual evaluation of each patient has been conducted. Quantitative data are provided at the end of the paper.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,048
inproceedings
yilmaz-etal-2016-longitudinal
A Longitudinal Bilingual {F}risian-{D}utch Radio Broadcast Database Designed for Code-Switching Research
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1739/
Yilmaz, Emre and Andringa, Maaike and Kingma, Sigrid and Dijkstra, Jelske and van der Kuip, Frits and Van de Velde, Hans and Kampstra, Frederik and Algra, Jouke and van den Heuvel, Henk and van Leeuwen, David
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4666--4669
We present a new speech database containing 18.5 hours of annotated radio broadcasts in the Frisian language. Frisian is mostly spoken in the province of Fryslan and is the second official language of the Netherlands. The recordings are collected from the archives of Omrop Fryslan, the regional public broadcaster of the province of Fryslan. The database covers almost a 50-year time span. Native speakers of Frisian are mostly bilingual and often code-switch in daily conversations due to the extensive influence of the Dutch language. Given the longitudinal and code-switching nature of the data, an appropriate annotation protocol has been designed, and the data is manually annotated with the orthographic transcription, speaker identities, dialect information, code-switching details and background noise/music information.
inproceedings
zgank-etal-2016-si
The {SI} {TED}x-{UM} speech database: a new {S}lovenian Spoken Language Resource
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1740/
{\v{Z}}gank, Andrej and Mau{\v{c}}ec, Mirjam Sepesy and Verdonik, Darinka
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4670--4673
This paper presents a new Slovenian spoken language resource built from TEDx Talks. The speech database contains 242 talks with a total duration of 54 hours. The annotation and transcription of the acquired spoken material were generated automatically, applying acoustic segmentation and automatic speech recognition. The development and evaluation subset was also manually transcribed using the guidelines specified for the Slovenian GOS corpus, and the manual transcriptions were used to evaluate the quality of the unsupervised transcriptions. The average word error rate for the SI TEDx-UM evaluation subset was 50.7{\%}, with an out-of-vocabulary rate of 24{\%} and a language model perplexity of 390. The unsupervised transcriptions contain 372k tokens, of which 32k were distinct.
inproceedings
iribe-etal-2016-speech
Speech Corpus Spoken by Young-old, Old-old and Oldest-old {J}apanese
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1741/
Iribe, Yurie and Kitaoka, Norihide and Segawa, Shuhei
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4674--4677
We have constructed a new speech data corpus from the utterances of 100 elderly Japanese people, in order to improve speech recognition accuracy for the speech of older people. Humanoid robots are being developed for use in elder care nursing homes. Interaction with such robots is expected to help maintain the cognitive abilities of nursing home residents, as well as providing them with companionship. For these robots to interact with elderly people through spoken dialogue, a high-performance speech recognition system for the speech of elderly people is needed. To develop such a system, we recorded speech uttered by 100 elderly Japanese, most of whom live in nursing homes, with an average age of 77.2. A seniors' speech corpus named S-JNAS was developed previously, but the average age of its participants was 67.6 years, whereas the target age for nursing home care is around 75, much higher than that of the S-JNAS samples. In this paper we compare our new corpus with an existing Japanese read speech corpus, JNAS, which consists of adult speech, and with the above-mentioned S-JNAS, the senior version of JNAS.
inproceedings
wagner-etal-2016-polish
{P}olish Rhythmic Database {\textemdash} New Resources for Speech Timing and Rhythm Analysis
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1742/
Wagner, Agnieszka and Klessa, Katarzyna and Bachan, Jolanta
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4678--4683
This paper reports on a new database {\textemdash} the Polish rhythmic database {\textemdash} and on tools developed with the aim of investigating timing phenomena and rhythmic structure in Polish, including topics such as, inter alia, the effect of speaking style and tempo on timing patterns, phonotactic and phrasal properties of speech rhythm, and the stability of rhythm metrics. So far, 19 native and 12 non-native speakers with different first languages have been recorded. The collected speech data (5 h 14 min.) represents five different speaking styles and five different tempi. For the needs of speech corpus management, annotation and analysis, a database was developed and integrated with Annotation Pro (Klessa et al., 2013; Klessa, 2016). Currently, the database is the only resource for Polish that allows for a systematic study of a broad range of phenomena related to speech timing and rhythm. The paper also introduces new tools and methods developed to facilitate the database annotation and analysis with respect to various timing and rhythm measures. Finally, the results of ongoing research and first experimental results using the new resources are reported, and future work is sketched.
inproceedings
viszlay-etal-2016-extension
An Extension of the {S}lovak Broadcast News Corpus based on Semi-Automatic Annotation
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1743/
Viszlay, Peter and Sta{\v{s}}, J{\'a}n and Koct{\'u}r, Tom{\'a}{\v{s}} and Lojka, Martin and Juh{\'a}r, Jozef
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4684--4687
In this paper, we introduce an extension of our previously released TUKE-BNews-SK corpus based on a semi-automatic annotation scheme. It first relies on the automatic transcription of the BN data performed by our Slovak large-vocabulary continuous speech recognition system. The generated hypotheses are then manually corrected and completed by trained human annotators. The corpus is composed of 25 hours of fully annotated spontaneous and prepared speech. In addition, we have acquired a further 900 hours of BN data, part of which we plan to annotate semi-automatically. We present a preliminary corpus evaluation that gives very promising results.
inproceedings
cavar-etal-2016-generating
Generating a {Y}iddish Speech Corpus, Forced Aligner and Basic {ASR} System for the {AHEYM} Project
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1744/
{\'C}avar, Malgorzata and {\'C}avar, Damir and Kerler, Dov-Ber and Quilitzsch, Anya
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4688--4693
To create automatic transcription and annotation tools for the AHEYM corpus of recorded interviews with Yiddish speakers in Eastern Europe, we develop initial Yiddish language resources that are used to adapt speech and language technologies. Our project aims at the development of resources and technologies that can make the entire AHEYM corpus and other Yiddish resources more accessible, not only to the community of Yiddish speakers or linguists with language expertise, but also to historians and experts from other disciplines and the general public. In this paper we describe the rationale behind our approach, the procedures and methods, and challenges that are not specific to the AHEYM corpus but apply to all documentary language data collected in the field. To the best of our knowledge, this is the first attempt to create a speech corpus and speech technologies for Yiddish. It is also the first attempt to develop speech and language technologies to transcribe and translate a large collection of Yiddish spoken language resources.
inproceedings
quispesaravia-etal-2016-coh
{C}oh-{M}etrix-{E}sp: A Complexity Analysis Tool for Documents Written in {S}panish
Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios
may
2016
Portoro{\v{z}}, Slovenia
European Language Resources Association (ELRA)
https://aclanthology.org/L16-1745/
Quispesaravia, Andre and Perez, Walter and Sobrevilla Cabezudo, Marco and Alva-Manchego, Fernando
Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16)
4694--4698
Text complexity analysis is a useful task in education. For example, it can help teachers select texts appropriate to their students' educational level. This task requires the analysis of several text features that are mostly assessed manually (e.g. syntactic complexity, vocabulary variety, etc.). In this paper, we present a tool for complexity analysis called Coh-Metrix-Esp. It is the Spanish version of Coh-Metrix and is able to calculate 45 readability indices. We analyse how these indices behave in a corpus of {\textquotedblleft}simple{\textquotedblright} and {\textquotedblleft}complex{\textquotedblright} documents, and also use them as features in a binary complexity classifier for texts in Spanish. After experiments with machine learning algorithms, we obtained an F-measure of 0.9 on a corpus containing tales for children and adults, and 0.82 on a corpus of texts written for students of Spanish as a foreign language.
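To make the feature-based setup concrete, the following toy Python sketch shows the first stage of such a pipeline: computing a handful of surface readability indices from a text. The three indices here are simplified stand-ins for the 45 Coh-Metrix-Esp indices, and `readability_features` is a hypothetical helper name introduced for illustration, not part of the actual tool.

```python
# Toy stand-ins for readability indices: the real Coh-Metrix-Esp tool
# computes 45 indices; these three only illustrate the feature-vector idea.
def readability_features(text):
    words = text.lower().split()
    n = len(words)
    return {
        "n_words": n,                                    # text length
        "avg_word_len": sum(len(w) for w in words) / n,  # lexical complexity proxy
        "type_token_ratio": len(set(words)) / n,         # vocabulary variety
    }
```

A classifier, as in the paper's binary simple/complex experiments, would then be trained on vectors of such indices.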
inproceedings
dyer-etal-2016-practical
Practical Neural Networks for {NLP}: From Theory to Code
Yang, Bishan and Hwa, Rebecca
nov
2016
Austin, Texas
Association for Computational Linguistics
https://aclanthology.org/D16-2001/
Dyer, Chris and Goldberg, Yoav and Neubig, Graham
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
This tutorial aims to bring NLP researchers up to speed with current techniques in deep learning and neural networks, and to show them how they can turn their ideas into practical implementations. We will start with simple classification models (logistic regression and multilayer perceptrons) and cover more advanced patterns that come up in NLP, such as recurrent networks for sequence tagging and prediction problems, structured networks (e.g., compositional architectures based on syntax trees), structured output spaces (sequences and trees), attention for sequence-to-sequence transduction, and feature induction for complex algorithm states. A particular emphasis will be on learning to represent complex objects as recursive compositions of simpler objects. This representation will characterize standard objects in NLP, such as the composition of characters and morphemes into words, and of words into sentences and documents. In addition, new opportunities such as learning to embed ``algorithm states'' like those used in transition-based parsing and other sequential structured prediction models (for which effective features may be difficult to engineer by hand) will be covered. Everything in the tutorial will be grounded in code {---} we will show how to program seemingly complex neural-net models using toolkits based on the computation-graph formalism. Computation graphs decompose complex computations into a DAG, with nodes representing inputs, target outputs, parameters, or (sub)differentiable functions (e.g., ``tanh'', ``matrix multiply'', and ``softmax''), and edges representing data dependencies. These graphs can be run ``forward'' to make predictions and compute errors (e.g., log loss, squared error) and then ``backward'' to compute derivatives with respect to model parameters. In particular we will cover the Python bindings of the CNN library. CNN has been designed from the ground up for NLP applications, dynamically structured NNs, rapid prototyping, and a transparent data and execution model.
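The forward/backward computation-graph idea described in this abstract can be illustrated with a minimal scalar sketch. This toy is not the CNN (DyNet) API; `Node`, `tanh`, `mul`, and `backward` are hypothetical names introduced here, assuming only the DAG semantics stated above (nodes hold values and local derivatives; a reverse topological sweep accumulates chain-rule products).

```python
# Minimal scalar computation-graph sketch (illustrative only, not the CNN API).
import math

class Node:
    def __init__(self, value, parents=(), local_grads=()):
        self.value = value              # forward result
        self.parents = parents          # input nodes (edges of the DAG)
        self.local_grads = local_grads  # d(self)/d(parent) for each parent
        self.grad = 0.0                 # accumulated d(loss)/d(self)

def tanh(x):
    v = math.tanh(x.value)
    return Node(v, (x,), (1.0 - v * v,))

def mul(x, y):
    return Node(x.value * y.value, (x, y), (y.value, x.value))

def backward(loss):
    # Reverse topological sweep, accumulating chain-rule products.
    loss.grad = 1.0
    order, seen = [], set()
    def visit(n):
        if id(n) not in seen:
            seen.add(id(n))
            for p in n.parents:
                visit(p)
            order.append(n)
    visit(loss)
    for n in reversed(order):
        for p, g in zip(n.parents, n.local_grads):
            p.grad += n.grad * g

# y = tanh(w * x); running backward fills w.grad with dy/dw = x * (1 - tanh(wx)^2)
w, x = Node(0.5), Node(2.0)
y = tanh(mul(w, x))
backward(y)
```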
inproceedings
chen-liu-2016-lifelong
Lifelong Machine Learning for Natural Language Processing
Yang, Bishan and Hwa, Rebecca
nov
2016
Austin, Texas
Association for Computational Linguistics
https://aclanthology.org/D16-2003/
Chen, Zhiyuan and Liu, Bing
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Machine learning (ML) has been successfully used as a prevalent approach to solving numerous NLP problems. However, the classic ML paradigm learns in isolation: given a dataset, an ML algorithm is executed on the dataset to produce a model without using any related or prior knowledge. Although this type of isolated learning is very useful, it also has serious limitations, as it does not accumulate knowledge learned in the past and use that knowledge to help future learning, which is the hallmark of human learning and human intelligence. Lifelong machine learning (LML) aims to achieve this capability. Specifically, it aims to design and develop computational learning systems and algorithms that learn as humans do, i.e., retaining the results learned in the past, abstracting knowledge from them, and using that knowledge to help future learning. In this tutorial, we will introduce existing research on LML and show that LML is very well suited to NLP tasks and has the potential to help NLP make major progress.
inproceedings
zhang-vo-2016-neural
Neural Networks for Sentiment Analysis
Yang, Bishan and Hwa, Rebecca
nov
2016
Austin, Texas
Association for Computational Linguistics
https://aclanthology.org/D16-2004/
Zhang, Yue and Vo, Duy Tin
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Sentiment analysis has been a major research topic in natural language processing (NLP). Traditionally, the problem has been attacked using discrete models and manually defined sparse features. Over the past few years, neural network models have received increased research effort in most sub-areas of sentiment analysis, giving highly promising results. A main reason is the capability of neural models to automatically learn dense features that capture subtle semantic information over words, sentences and documents, which is difficult to model using traditional discrete features based on words and n-gram patterns. This tutorial gives an introduction to neural network models for sentiment analysis, discussing the mathematics of word embeddings, sequence models and tree-structured models and their use in sentiment analysis on the word, sentence and document levels, as well as fine-grained sentiment analysis. The tutorial covers a range of neural network models (e.g. CNN, RNN, RecNN, LSTM) and their extensions, which are employed in four main subtasks of sentiment analysis: sentiment-oriented embeddings; sentence-level sentiment; document-level sentiment; and fine-grained sentiment. The content of the tutorial is divided into 3 sections of 1 hour each. We assume that the audience is familiar with linear algebra and basic neural network structures, and we introduce the mathematical details of the most typical models. First, we will introduce the sentiment analysis task, basic concepts related to neural network models for sentiment analysis, and detailed approaches to integrating sentiment information into embeddings. Sentence-level models will be described in the second section. Finally, we will discuss neural network models used for document-level and fine-grained sentiment.
inproceedings
banchs-2016-continuous
Continuous Vector Spaces for Cross-language {NLP} Applications
Yang, Bishan and Hwa, Rebecca
nov
2016
Austin, Texas
Association for Computational Linguistics
https://aclanthology.org/D16-2005/
Banchs, Rafael E.
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
The mathematical metaphor offered by the geometric concept of distance in vector spaces with respect to semantics and meaning has been proven to be useful in many monolingual natural language processing applications. There is also some recent and strong evidence that this paradigm can also be useful in the cross-language setting. In this tutorial, we present and discuss some of the most recent advances on exploiting the vector space model paradigm in specific cross-language natural language processing applications, along with a comprehensive review of the theoretical background behind them. First, the tutorial introduces some fundamental concepts of distributional semantics and vector space models. More specifically, the concepts of distributional hypothesis and term-document matrices are revised, followed by a brief discussion on linear and non-linear dimensionality reduction techniques and their implications to the parallel distributed approach to semantic cognition. Next, some classical examples of using vector space models in monolingual natural language processing applications are presented. Specific examples in the areas of information retrieval, related term identification and semantic compositionality are described. Then, the tutorial focuses its attention on the use of the vector space model paradigm in cross-language applications. To this end, some recent examples are presented and discussed in detail, addressing the specific problems of cross-language information retrieval, cross-language sentence matching, and machine translation. Some of the most recent developments in the area of Neural Machine Translation are also discussed. Finally, the tutorial concludes with a discussion about current and future research problems related to the use of vector space models in cross-language settings.
Future avenues for scientific research are described, with major emphasis on the extension from vector and matrix representations to tensors, as well as the problem of encoding word position information into the vector-based representations.
inproceedings
sun-feng-2016-methods
Methods and Theories for Large-scale Structured Prediction
Yang, Bishan and Hwa, Rebecca
nov
2016
Austin, Texas
Association for Computational Linguistics
https://aclanthology.org/D16-2006/
Sun, Xu and Feng, Yansong
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Many important NLP tasks are cast as structured prediction problems, which try to predict certain forms of structured output from the input. Examples of structured prediction include POS tagging, named entity recognition, PCFG parsing, dependency parsing, machine translation, and many others. When applying structured prediction to a specific NLP task, the following challenges arise: 1. Model selection: among various models/algorithms with different characteristics, which one should we choose for a specific NLP task? 2. Training: how can we train the model parameters effectively and efficiently? 3. Overfitting: to achieve good accuracy on test data, it is important to control overfitting to the training data. How can we control the overfitting risk in structured prediction? This tutorial will provide a clear overview of recent advances in structured prediction methods and theories, and address the above issues when applying structured prediction to NLP tasks. We will introduce large-margin methods (e.g., perceptrons, MIRA), graphical models (e.g., CRFs), and deep learning methods (e.g., RNN, LSTM), and show their respective advantages and disadvantages for NLP applications. For the training algorithms, we will introduce online/stochastic training methods, as well as parallel online/stochastic learning algorithms and theories to speed up training (e.g., the Hogwild algorithm). For controlling overfitting from the training data, we will introduce weight regularization methods, structure regularization, and implicit regularization methods.
inproceedings
bawden-crabbe-2016-boosting
Boosting for Efficient Model Selection for Syntactic Parsing
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1001/
Bawden, Rachel and Crabb{\'e}, Beno{\^i}t
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
1--11
We present an efficient model selection method using boosting for transition-based constituency parsing. It is designed for exploring a high-dimensional search space, defined by a large set of feature templates, as for example is typically the case when parsing morphologically rich languages. Our method removes the need to manually define heuristic constraints, which are often imposed in current state-of-the-art selection methods. Our experiments for French show that the method is more efficient and is also capable of producing compact, state-of-the-art models.
inproceedings
guo-etal-2016-universal
A Universal Framework for Inductive Transfer Parsing across Multi-typed Treebanks
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1002/
Guo, Jiang and Che, Wanxiang and Wang, Haifeng and Liu, Ting
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
12--22
Various treebanks have been released for dependency parsing. Although treebanks may belong to different languages or use different annotation schemes, they contain common syntactic knowledge that can potentially benefit one another. This paper presents a universal framework for transfer parsing across multi-typed treebanks with deep multi-task learning. We consider two kinds of treebanks as sources: the multilingual universal treebanks and monolingual heterogeneous treebanks. Knowledge across the source and target treebanks is effectively transferred through multi-level parameter sharing. Experiments on several benchmark datasets in various languages demonstrate that our approach can make effective use of arbitrary source treebanks to improve target parsing models.
inproceedings
pate-johnson-2016-grammar
Grammar induction from (lots of) words alone
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1003/
Pate, John K and Johnson, Mark
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
23--32
Grammar induction is the task of learning syntactic structure in a setting where that structure is hidden. Grammar induction from words alone is interesting because it is similar to the problem that a child learning a language faces. Previous work has typically assumed richer but cognitively implausible input, such as POS-tag-annotated data, which makes that work less relevant to human language acquisition. We show that grammar induction from words alone is in fact feasible when the model is provided with sufficient training data, and present two new streaming or mini-batch algorithms for PCFG inference that can learn from millions of words of training data. We compare the performance of these algorithms to a batch algorithm that learns from less data. The mini-batch algorithms outperform the batch algorithm, showing that cheap inference with more data is better than intensive inference with less data. Additionally, we show that the harmonic initialiser, which previous work identified as essential when learning from small POS-tag-annotated corpora (Klein and Manning, 2004), is not superior to a uniform initialisation.
inproceedings
ren-etal-2016-redundancy
A Redundancy-Aware Sentence Regression Framework for Extractive Summarization
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1004/
Ren, Pengjie and Wei, Furu and Chen, Zhumin and Ma, Jun and Zhou, Ming
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
33--43
Existing sentence regression methods for extractive summarization usually model sentence importance and redundancy in two separate processes. They first evaluate the importance f(s) of each sentence s and then select sentences to generate a summary based on both the importance scores and the redundancy among sentences. In this paper, we propose to model importance and redundancy simultaneously by directly evaluating the relative importance f(s|S) of a sentence s given a set of selected sentences S. Specifically, we present a new framework to conduct regression with respect to the relative gain of s given S, calculated by the ROUGE metric. Besides single-sentence features, additional features derived from sentence relations are incorporated. Experiments on the DUC 2001, 2002 and 2004 multi-document summarization datasets show that the proposed method outperforms state-of-the-art extractive summarization approaches.
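The difference between scoring sentences in isolation and scoring them conditioned on the current summary can be sketched as a greedy loop over f(s|S). In the sketch below, `toy_gain` is a hypothetical stand-in for the learned ROUGE-gain regressor described in the abstract, and the word budget is an illustrative assumption.

```python
# Greedy extractive selection driven by relative gain f(s|S), as opposed
# to scoring importance in isolation.
def greedy_summary(sentences, gain, budget):
    selected = []
    length = 0
    while True:
        best, best_score = None, 0.0
        for s in sentences:
            if s in selected or length + len(s.split()) > budget:
                continue
            score = gain(s, selected)   # relative importance f(s|S)
            if score > best_score:
                best, best_score = s, score
        if best is None:
            break
        selected.append(best)
        length += len(best.split())
    return selected

# Toy gain: overlap with a reference word set, discounting words already
# covered by the current summary (a stand-in for the learned regressor).
def toy_gain(s, selected):
    ref = {"summarization", "sentence", "regression"}
    covered = {w for t in selected for w in t.lower().split()}
    return len((set(s.lower().split()) & ref) - covered)
```

Because the gain is conditioned on the selected set, a second copy of an already-chosen sentence contributes nothing and is skipped, which is exactly the redundancy behavior the joint model is meant to capture.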
inproceedings
laokulrat-etal-2016-generating
Generating Video Description using Sequence-to-sequence Model with Temporal Attention
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1005/
Laokulrat, Natsuda and Phan, Sang and Nishida, Noriki and Shu, Raphael and Ehara, Yo and Okazaki, Naoaki and Miyao, Yusuke and Nakayama, Hideki
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
44--52
Automatic video description generation has recently been receiving attention after rapid advances in image caption generation. Automatically generating a description for a video is more challenging than for an image due to the temporal dynamics of its frames. Most prior work has relied on Recurrent Neural Networks (RNNs), and recently attention mechanisms have also been applied to make the model learn to focus on some frames of the video while generating each word of the describing sentence. In this paper, we focus on a sequence-to-sequence approach with a temporal attention mechanism. We analyze and compare the results from different attention model configurations. By applying the temporal attention mechanism to the system, we achieve a METEOR score of 0.310 on the Microsoft Video Description dataset, outperforming the previous state-of-the-art system.
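A temporal attention step of the kind described here can be sketched in a few lines: attention weights are a softmax over per-frame relevance scores, and the context vector is the weighted sum of frame features. The plain dot-product scoring below is an assumption for illustration; the paper's model learns its scoring parameters.

```python
# Sketch of temporal attention over video frames (dot-product scoring is
# a simplifying assumption; real models learn the scoring function).
import numpy as np

def temporal_attention(frame_feats, decoder_state):
    scores = frame_feats @ decoder_state     # one relevance score per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()                 # attention weights over time
    context = weights @ frame_feats          # weighted sum of frame features
    return context, weights
```

At each decoding step, the decoder state changes, so the weights shift toward different frames while generating each word.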
inproceedings
luo-etal-2016-improved
An Improved Phrase-based Approach to Annotating and Summarizing Student Course Responses
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1006/
Luo, Wencan and Liu, Fei and Litman, Diane
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
53--63
Teaching large classes remains a great challenge, primarily because it is difficult to attend to all the students' needs in a timely manner. Automatic text summarization systems can be leveraged to summarize student feedback submitted immediately after each lecture, but what makes a good summary of student responses remains to be discovered. In this work we explore a new methodology that effectively extracts summary phrases from the student responses. Each phrase is tagged with the number of students who raise the issue. The phrases are evaluated along two dimensions: with respect to text content, they should be informative and well-formed, as measured by the ROUGE metric; additionally, they should attend to the most pressing student needs, as measured by a newly proposed metric. This work is enabled by a phrase-based annotation and highlighting scheme, which is new to the summarization task. The phrase-based framework allows us to summarize the student responses into a set of bullet points and present them to the instructor promptly.
@inproceedings{mirza-tonelli-2016-catena,
    title = "{CATENA}: {CA}usal and {TE}mporal relation extraction from {NA}tural language texts",
    author = "Mirza, Paramita and Tonelli, Sara",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1007/",
    pages = "64--75",
    abstract = "We present CATENA, a sieve-based system to perform temporal and causal relation extraction and classification from English texts, exploiting the interaction between the temporal and the causal model. We evaluate the performance of each sieve, showing that the rule-based, the machine-learned and the reasoning components all contribute to achieving state-of-the-art performance on TempEval-3 and TimeBank-Dense data. Although causal relations are much sparser than temporal ones, the architecture and the selected features are mostly suitable to serve both tasks. The effects of the interaction between the temporal and the causal components, although limited, yield promising results and confirm the tight connection between the temporal and the causal dimension of texts.",
}
@inproceedings{iso-etal-2016-forecasting,
    title = "Forecasting Word Model: {T}witter-based Influenza Surveillance and Prediction",
    author = "Iso, Hayate and Wakamiya, Shoko and Aramaki, Eiji",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1008/",
    pages = "76--86",
    abstract = "Because of the increasing popularity of social media, much information has been shared on the internet, enabling social media users to understand various real world events. Particularly, social media-based infectious disease surveillance has attracted increasing attention. In this work, we specifically examine influenza: a common topic of communication on social media. The fundamental theory of this work is that several words, such as symptom words (fever, headache, etc.), appear in advance of flu epidemic occurrence. Consequently, past word occurrence can contribute to estimation of the number of current patients. To employ such forecasting words, one can first estimate the optimal time lag for each word based on their cross correlation. Then one can build a linear model consisting of word frequencies at different time points for nowcasting and for forecasting influenza epidemics. In experiments using 7.7 million tweets (August 2012 {--} January 2016), the proposed model achieved the best nowcasting performance to date (correlation ratio 0.93) and practically sufficient forecasting performance (correlation ratio 0.91 in 1-week future prediction, and correlation ratio 0.77 in 3-weeks future prediction). This report is the first of the relevant literature to describe a model enabling prediction of future epidemics using Twitter.",
}
@inproceedings{reimers-etal-2016-task,
    title = "Task-Oriented Intrinsic Evaluation of Semantic Textual Similarity",
    author = "Reimers, Nils and Beyer, Philip and Gurevych, Iryna",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1009/",
    pages = "87--96",
    abstract = "Semantic Textual Similarity (STS) is a foundational NLP task and can be used in a wide range of tasks. Hundreds of different STS systems exist to determine the STS of two texts; however, for an NLP system designer, it is hard to decide which system is the best one. To answer this question, an intrinsic evaluation of the STS systems is conducted by comparing the output of the system to human judgments on semantic similarity. The comparison is usually done using Pearson correlation. In this work, we show that relying on intrinsic evaluations with Pearson correlation can be misleading. In three common STS-based tasks we observed that the Pearson correlation was especially ill-suited to detect the best STS system for the task, and other evaluation measures were much better suited. We define how the validity of an intrinsic evaluation can be assessed and compare different intrinsic evaluation methods. Understanding of the properties of the targeted task is crucial, and we propose a framework for conducting the intrinsic evaluation which takes the properties of the targeted task into account.",
}
@inproceedings{arcan-etal-2016-expanding,
    title = "Expanding wordnets to new languages with multilingual sense disambiguation",
    author = "Arcan, Mihael and McCrae, John Philip and Buitelaar, Paul",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1010/",
    pages = "97--108",
    abstract = "Princeton WordNet is one of the most important resources for natural language processing, but is only available for English. While it has been translated using the expand approach to many other languages, this is an expensive manual process. Therefore, it would be beneficial to have a high-quality automatic translation approach that would support NLP techniques, which rely on WordNet in new languages. The translation of wordnets is fundamentally complex because of the need to translate all senses of a word including low frequency senses, which is very challenging for current machine translation approaches. For this reason we leverage existing translations of WordNet in other languages to identify contextual information for wordnet senses from a large set of generic parallel corpora. We evaluate our approach using 10 translated wordnets for European languages. Our experiment shows a significant improvement over translation without any contextual information. Furthermore, we evaluate how the choice of pivot languages affects performance of multilingual word sense disambiguation.",
}
@inproceedings{saha-etal-2016-correlational,
    title = "A Correlational Encoder Decoder Architecture for Pivot Based Sequence Generation",
    author = "Saha, Amrita and Khapra, Mitesh M. and Chandar, Sarath and Rajendran, Janarthanan and Cho, Kyunghyun",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1011/",
    pages = "109--118",
    abstract = "Interlingua based Machine Translation (MT) aims to encode multiple languages into a common linguistic representation and then decode sentences in multiple target languages from this representation. In this work we explore this idea in the context of neural encoder decoder architectures, albeit on a smaller scale and without MT as the end goal. Specifically, we consider the case of three languages or modalities X, Z and Y wherein we are interested in generating sequences in Y starting from information available in X. However, there is no parallel training data available between X and Y, but training data is available between X {\&} Z and Z {\&} Y (as is often the case in many real world applications). Z thus acts as a pivot/bridge. An obvious solution, which is perhaps less elegant but works very well in practice, is to train a two stage model which first converts from X to Z and then from Z to Y. Instead we explore an interlingua inspired solution which jointly learns to do the following: (i) encode X and Z to a common representation and (ii) decode Y from this common representation. We evaluate our model on two tasks: (i) bridge transliteration and (ii) bridge captioning. We report promising results in both these applications and believe that this is a step in the right direction towards truly interlingua inspired encoder decoder architectures.",
}
@inproceedings{aufrant-etal-2016-zero,
    title = "Zero-resource Dependency Parsing: Boosting Delexicalized Cross-lingual Transfer with Linguistic Knowledge",
    author = "Aufrant, Lauriane and Wisniewski, Guillaume and Yvon, Fran{\c{c}}ois",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1012/",
    pages = "119--130",
    abstract = "This paper studies cross-lingual transfer for dependency parsing, focusing on very low-resource settings where delexicalized transfer is the only fully automatic option. We show how to boost parsing performance by rewriting the source sentences so as to better match the linguistic regularities of the target language. We contrast a data-driven approach with an approach relying on linguistically motivated rules automatically extracted from the World Atlas of Language Structures. Our findings are backed up by experiments involving 40 languages. They show that both approaches greatly outperform the baseline, the knowledge-driven method yielding the best accuracies, with average improvements of +2.9 UAS, and up to +90 UAS (absolute) on some frequent PoS configurations.",
}
@inproceedings{bollmann-sogaard-2016-improving,
    title = "Improving historical spelling normalization with bi-directional {LSTM}s and multi-task learning",
    author = "Bollmann, Marcel and S{\o}gaard, Anders",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1013/",
    pages = "131--139",
    abstract = "Natural-language processing of historical documents is complicated by the abundance of variant spellings and lack of annotated data. A common approach is to normalize the spelling of historical words to modern forms. We explore the suitability of a deep neural network architecture for this task, particularly a deep bi-LSTM network applied on a character level. Our model compares well to previously established normalization algorithms when evaluated on a diverse set of texts from Early New High German. We show that multi-task learning with additional normalization data can improve our model's performance further.",
}
@inproceedings{ren-zhang-2016-deceptive,
    title = "Deceptive Opinion Spam Detection Using Neural Network",
    author = "Ren, Yafeng and Zhang, Yue",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1014/",
    pages = "140--150",
    abstract = "Deceptive opinion spam detection has attracted significant attention from both business and research communities. Existing approaches are based on manual discrete features, which can capture linguistic and psychological cues. However, such features fail to encode the semantic meaning of a document from the discourse perspective, which limits the performance. In this paper, we empirically explore a neural network model to learn document-level representation for detecting deceptive opinion spam. In particular, given a document, the model learns sentence representations with a convolutional neural network, which are combined using a gated recurrent neural network with attention mechanism to model discourse information and yield a document vector. Finally, the document representation is used directly as features to identify deceptive opinion spam. Experimental results on three domains (Hotel, Restaurant, and Doctor) show that our proposed method outperforms state-of-the-art methods.",
}
@inproceedings{li-etal-2016-integrating,
    title = "Integrating Topic Modeling with Word Embeddings by Mixtures of {vMF}s",
    author = "Li, Ximing and Chi, Jinjin and Li, Changchun and Ouyang, Jihong and Fu, Bo",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1015/",
    pages = "151--160",
    abstract = "Gaussian LDA integrates topic modeling with word embeddings by replacing discrete topic distribution over word types with multivariate Gaussian distribution on the embedding space. This can take semantic information of words into account. However, the Euclidean similarity used in Gaussian topics is not an optimal semantic measure for word embeddings. It is acknowledged that the cosine similarity better describes the semantic relatedness between word embeddings. To employ the cosine measure and capture complex topic structure, we use von Mises-Fisher (vMF) mixture models to represent topics, and then develop a novel mix-vMF topic model (MvTM). Using public pre-trained word embeddings, we evaluate MvTM on three real-world data sets. Experimental results show that our model can discover more coherent topics than the state-of-the-art baseline models, and achieve competitive classification performance.",
}
@inproceedings{takeda-komatani-2016-bayesian,
    title = "{B}ayesian Language Model based on Mixture of Segmental Contexts for Spontaneous Utterances with Unexpected Words",
    author = "Takeda, Ryu and Komatani, Kazunori",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1016/",
    pages = "161--170",
    abstract = "This paper describes a Bayesian language model for predicting spontaneous utterances. People sometimes say unexpected words, such as fillers or hesitations, that cause the misprediction of words in normal N-gram models. Our proposed model considers mixtures of possible segmental contexts, that is, a kind of context-word selection. It can reduce negative effects caused by unexpected words because it represents conditional occurrence probabilities of a word as weighted mixtures of possible segmental contexts. The tuning of mixture weights is the key issue in this approach as the segment patterns become numerous; thus, we resolve it by using a Bayesian model. The generative process is achieved by combining the stick-breaking process and the process used in the variable order Pitman-Yor language model. Experimental evaluations revealed that our model outperformed contiguous N-gram models in terms of perplexity for noisy text including hesitations.",
}
@inproceedings{ma-etal-2016-label,
    title = "Label Embedding for Zero-shot Fine-grained Named Entity Typing",
    author = "Ma, Yukun and Cambria, Erik and Gao, Sa",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1017/",
    pages = "171--180",
    abstract = "Named entity typing is the task of detecting the types of a named entity in context. For instance, given {\textquotedblleft}Eric is giving a presentation{\textquotedblright}, our goal is to infer that {\textquoteleft}Eric{\textquoteright} is a speaker or a presenter and a person. Existing approaches to named entity typing cannot work with a growing type set and fail to recognize entity mentions of unseen types. In this paper, we present a label embedding method that incorporates prototypical and hierarchical information to learn pre-trained label embeddings. In addition, we adapt a zero-shot learning framework that can predict both seen and previously unseen entity types. We perform evaluation on three benchmark datasets with two settings: 1) few-shot recognition, where all types are covered by the training set; and 2) zero-shot recognition, where fine-grained types are assumed absent from the training set. Results show that prior knowledge encoded using our label embedding methods can significantly boost the performance of classification for both cases.",
}
@inproceedings{shen-etal-2016-role,
    title = "The Role of Context in Neural Morphological Disambiguation",
    author = "Shen, Qinlan and Clothiaux, Daniel and Tagtow, Emily and Littell, Patrick and Dyer, Chris",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1018/",
    pages = "181--191",
    abstract = "Languages with rich morphology often introduce sparsity in language processing tasks. While morphological analyzers can reduce this sparsity by providing morpheme-level analyses for words, they will often introduce ambiguity by returning multiple analyses for the same surface form. The problem of disambiguating between these morphological parses is further complicated by the fact that the correct parse for a word depends not only on the surface form but also on other words in its context. In this paper, we present a language-agnostic approach to morphological disambiguation. We address the problem of using context in morphological disambiguation by presenting several LSTM-based neural architectures that encode long-range surface-level and analysis-level contextual dependencies. We applied our approach to Turkish, Russian, and Arabic to compare effectiveness across languages, matching state-of-the-art results in two of the three languages. Our results also demonstrate that while context plays a role in learning how to disambiguate, the type and amount of context needed varies between languages.",
}
@inproceedings{sun-2016-asynchronous,
    title = "Asynchronous Parallel Learning for Neural Networks and Structured Models with Dense Features",
    author = "Sun, Xu",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1019/",
    pages = "192--202",
    abstract = "Existing asynchronous parallel learning methods are only for the sparse feature models, and they face new challenges for the dense feature models like neural networks (e.g., LSTM, RNN). The problem for dense features is that asynchronous parallel learning brings gradient errors derived from overwrite actions. We show that gradient errors are very common and inevitable. Nevertheless, our theoretical analysis shows that the learning process with gradient errors can still be convergent towards the optimum of objective functions for many practical applications. Thus, we propose a simple method \textit{AsynGrad} for asynchronous parallel learning with gradient error. Based on various dense feature models (LSTM, dense-CRF) and various NLP tasks, experiments show that \textit{AsynGrad} achieves substantial improvement on training speed, without any loss on accuracy.",
}
@inproceedings{wu-etal-2016-empirical,
    title = "An Empirical Exploration of Skip Connections for Sequential Tagging",
    author = "Wu, Huijia and Zhang, Jiajun and Zong, Chengqing",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1020/",
    pages = "203--212",
    abstract = "In this paper, we empirically explore the effects of various kinds of skip connections in stacked bidirectional LSTMs for sequential tagging. We investigate three kinds of skip connections connecting to LSTM cells: (a) skip connections to the gates, (b) skip connections to the internal states and (c) skip connections to the cell outputs. We present comprehensive experiments showing that skip connections to cell outputs outperform the remaining two. Furthermore, we observe that using gated identity functions as skip mappings works well. Based on these novel skip connections, we successfully train deep stacked bidirectional LSTM models and obtain state-of-the-art results on CCG supertagging and comparable results on POS tagging.",
}
@inproceedings{wang-etal-2016-exploring,
    title = "Exploring Text Links for Coherent Multi-Document Summarization",
    author = "Wang, Xun and Nishino, Masaaki and Hirao, Tsutomu and Sudoh, Katsuhito and Nagata, Masaaki",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1021/",
    pages = "213--223",
    abstract = "Summarization aims to represent source documents by a shortened passage. Existing methods focus on the extraction of key information, but often neglect coherence. Hence the generated summaries suffer from a lack of readability. To address this problem, we have developed a graph-based method by exploring the links between text to produce coherent summaries. Our approach involves finding a sequence of sentences that best represent the key information in a coherent way. In contrast to the previous methods that focus only on salience, the proposed method addresses both coherence and informativeness based on textual linkages. We conduct experiments on the DUC2004 summarization task data set. A performance comparison reveals that the summaries generated by the proposed system achieve comparable results in terms of the ROUGE metric, and show improvements in readability by human evaluation.",
}
@inproceedings{mcmahan-stone-2016-syntactic,
    title = "Syntactic realization with data-driven neural tree grammars",
    author = "McMahan, Brian and Stone, Matthew",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1022/",
    pages = "224--235",
    abstract = "A key component in surface realization in natural language generation is to choose concrete syntactic relationships to express a target meaning. We develop a new method for syntactic choice based on learning a stochastic tree grammar in a neural architecture. This framework can exploit state-of-the-art methods for modeling word sequences and generalizing across vocabulary. We also induce embeddings to generalize over elementary tree structures and exploit a tree recurrence over the input structure to model long-distance influences between NLG choices. We evaluate the models on the task of linearizing unannotated dependency trees, documenting the contribution of our modeling techniques to improvements in both accuracy and run time.",
}
@inproceedings{li-etal-2016-abstractive,
    title = "Abstractive News Summarization based on Event Semantic Link Network",
    author = "Li, Wei and He, Lei and Zhuge, Hai",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1023/",
    pages = "236--246",
    abstract = "This paper studies the abstractive multi-document summarization for event-oriented news texts through event information extraction and abstract representation. Fine-grained event mentions and semantic relations between them are extracted to build a unified and connected event semantic link network, an abstract representation of source texts. A network reduction algorithm is proposed to summarize the most salient and coherent event information. New sentences with good linguistic quality are automatically generated and selected through sentences over-generation and greedy-selection processes. Experimental results on DUC 2006 and DUC 2007 datasets show that our system significantly outperforms the state-of-the-art extractive and abstractive baselines under both pyramid and ROUGE evaluation metrics.",
}
@inproceedings{peyrard-eckle-kohler-2016-general,
    title = "A General Optimization Framework for Multi-Document Summarization Using Genetic Algorithms and Swarm Intelligence",
    author = "Peyrard, Maxime and Eckle-Kohler, Judith",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1024/",
    pages = "247--257",
    abstract = "Extracting summaries via integer linear programming and submodularity are popular and successful techniques in extractive multi-document summarization. However, many interesting optimization objectives are neither submodular nor factorizable into an integer linear program. We address this issue and present a general optimization framework where any function of input documents and a system summary can be plugged in. Our framework includes two kinds of summarizers {--} one based on genetic algorithms, the other using a swarm intelligence approach. In our experimental evaluation, we investigate the optimization of two information-theoretic summary evaluation metrics and find that our framework yields competitive results compared to several strong summarization baselines. Our comparative analysis of the genetic and swarm summarizers reveals interesting complementary properties.",
}
@inproceedings{rojas-barahona-etal-2016-exploiting,
    title = "Exploiting Sentence and Context Representations in Deep Neural Models for Spoken Language Understanding",
    author = "Rojas-Barahona, Lina M. and Ga{\v{s}}i{\'c}, Milica and Mrk{\v{s}}i{\'c}, Nikola and Su, Pei-Hao and Ultes, Stefan and Wen, Tsung-Hsien and Young, Steve",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1025/",
    pages = "258--267",
    abstract = "This paper presents a deep learning architecture for the semantic decoder component of a Statistical Spoken Dialogue System. In a slot-filling dialogue, the semantic decoder predicts the dialogue act and a set of slot-value pairs from a set of n-best hypotheses returned by the Automatic Speech Recognition. Most current models for spoken language understanding assume (i) word-aligned semantic annotations as in sequence taggers and (ii) delexicalisation, or a mapping of input words to domain-specific concepts using heuristics that try to capture morphological variation but that do not scale to other domains nor to language variation (e.g., morphology, synonyms, paraphrasing). In this work the semantic decoder is trained using unaligned semantic annotations and it uses distributed semantic representation learning to overcome the limitations of explicit delexicalisation. The proposed architecture uses a convolutional neural network for the sentence representation and a long-short term memory network for the context representation. Results are presented for the publicly available DSTC2 corpus and an In-car corpus which is similar to DSTC2 but has a significantly higher word error rate (WER).",
}
@inproceedings{kohn-baumann-2016-predictive,
    title = "Predictive Incremental Parsing Helps Language Modeling",
    author = "K{\"o}hn, Arne and Baumann, Timo",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1026/",
    pages = "268--277",
    abstract = "Predictive incremental parsing produces syntactic representations of sentences as they are produced, e.g. by typing or speaking. In order to generate connected parses for such unfinished sentences, upcoming word types can be hypothesized and structurally integrated with already realized words. For example, the presence of a determiner as the last word of a sentence prefix may indicate that a noun will appear somewhere in the completion of that sentence, and the determiner can be attached to the predicted noun. We combine the forward-looking parser predictions with backward-looking N-gram histories and analyze in a set of experiments the impact on language models, i.e. stronger discriminative power but also higher data sparsity. Conditioning N-gram models, MaxEnt models or RNN-LMs on parser predictions yields perplexity reductions of about 6{\%}. Our method (a) retains online decoding capabilities and (b) incurs relatively little computational overhead, which sets it apart from previous approaches that use syntax for language modeling. Our method is particularly attractive for modular systems that make use of a syntax parser anyway, e.g. as part of an understanding pipeline where predictive parsing improves language modeling at no additional cost.",
}
@inproceedings{wang-etal-2016-neural,
    title = "A Neural Attention Model for Disfluency Detection",
    author = "Wang, Shaolei and Che, Wanxiang and Liu, Ting",
    editor = "Matsumoto, Yuji and Prasad, Rashmi",
    booktitle = "Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
    month = dec,
    year = "2016",
    address = "Osaka, Japan",
    publisher = "The COLING 2016 Organizing Committee",
    url = "https://aclanthology.org/C16-1027/",
    pages = "278--287",
    abstract = "In this paper, we study the problem of disfluency detection using the encoder-decoder framework. We treat disfluency detection as a sequence-to-sequence problem and propose a neural attention-based model which can efficiently model the long-range dependencies between words and make the resulting sentence more likely to be grammatically correct. Our model first encodes the source sentence with a bidirectional Long Short-Term Memory (BI-LSTM) network and then uses neural attention as a pointer to select an ordered subsequence of the input as the output. Experiments show that our model achieves the state-of-the-art f-score of 86.7{\%} on the commonly used English Switchboard test set. We also evaluate the performance of our model on in-house annotated Chinese data and achieve a significantly higher f-score compared to a CRF-based baseline.",
}
inproceedings
shen-etal-2016-consistent
Consistent Word Segmentation, Part-of-Speech Tagging and Dependency Labelling Annotation for {C}hinese Language
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1029/
Shen, Mo and Li, Wingmui and Choe, HyunJeong and Chu, Chenhui and Kawahara, Daisuke and Kurohashi, Sadao
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
298--308
In this paper, we propose a new annotation approach to Chinese word segmentation, part-of-speech (POS) tagging and dependency labelling that aims to overcome the two major issues in traditional morphology-based annotation: inconsistency and data sparsity. We re-annotate the Penn Chinese Treebank 5.0 (CTB5) and demonstrate the advantages of this approach compared to the original CTB5 annotation through word segmentation, POS tagging and machine translation experiments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,447
inproceedings
rei-etal-2016-attending
Attending to Characters in Neural Sequence Labeling Models
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1030/
Rei, Marek and Crichton, Gamal and Pyysalo, Sampo
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
309--318
Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from a word- or character-level component. We evaluated different architectures on a range of sequence labeling datasets, and character-level extensions were found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivered the best results even with a smaller number of trainable parameters.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,448
inproceedings
zhou-etal-2016-word
A Word Labeling Approach to {T}hai Sentence Boundary Detection and {POS} Tagging
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1031/
Zhou, Nina and Aw, AiTi and Lertcheva, Nattadaporn and Wang, Xuancong
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
319--327
Previous studies on Thai Sentence Boundary Detection (SBD) mostly treated it as a space disambiguation problem, classifying each space either as an indicator for a Sentence Boundary (SB) or a non-Sentence Boundary (nSB). In this paper, we propose a word labeling approach which treats space as a normal word and detects SB between any two words. This removes the restriction that an SB occur only at a space and makes our system more robust for modern Thai writing, where space is not consistently used to indicate SB. As syntactic information contributes to better SBD, we further propose a joint Part-Of-Speech (POS) tagging and SBD framework based on a Factorial Conditional Random Field (FCRF) model. We compare the performance of our proposed approach with reported methods on the ORCHID corpus. We also perform experiments with the FCRF model on the TaLAPi corpus. The results show that the word labeling approach performs better than previous space-based classification approaches, and the joint FCRF model outperforms the LCRF model in terms of SBD in all experiments.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,449
inproceedings
horsmann-zesch-2016-assigning
Assigning Fine-grained {P}o{S} Tags based on High-precision Coarse-grained Tagging
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1032/
Horsmann, Tobias and Zesch, Torsten
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
328--336
We propose a new approach to PoS tagging where, in a first step, we assign a coarse-grained tag corresponding to the main syntactic category. Based on this high-precision decision, in the second step we utilize specially trained fine-grained models with heavily reduced decision complexity. By analyzing the system under oracle conditions, we show that there is quite a large potential for significantly outperforming a competitive baseline. When we take error propagation from the coarse-grained tagging into account, our approach is on par with the state of the art. Our approach also allows tailoring the tagger towards recognizing single word classes which are of interest, e.g., for researchers searching for specific phenomena in large corpora. In a case study, we significantly outperform a standard model that also makes use of the same optimizations.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,450
inproceedings
more-tsarfaty-2016-data
Data-Driven Morphological Analysis and Disambiguation for Morphologically Rich Languages and {U}niversal {D}ependencies
Matsumoto, Yuji and Prasad, Rashmi
dec
2016
Osaka, Japan
The COLING 2016 Organizing Committee
https://aclanthology.org/C16-1033/
More, Amir and Tsarfaty, Reut
Proceedings of {COLING} 2016, the 26th International Conference on Computational Linguistics: Technical Papers
337--348
Parsing texts into universal dependencies (UD) in realistic scenarios requires infrastructure for the morphological analysis and disambiguation (MA{\&}D) of typologically different languages as a first tier. MA{\&}D is particularly challenging in morphologically rich languages (MRLs), where the ambiguous space-delimited tokens ought to be disambiguated with respect to their constituent morphemes, each morpheme carrying its own tag and a rich set of features. Here we present a novel, language-agnostic framework for MA{\&}D, based on a transition system with two variants {---} word-based and morpheme-based {---} and a dedicated transition to mitigate the biases of variable-length morpheme sequences. Our experiments on a Modern Hebrew case study show state-of-the-art results, and we show that the morpheme-based MD consistently outperforms our word-based variant. We further illustrate the utility and multilingual coverage of our framework by morphologically analyzing and disambiguating the large set of languages in the UD treebanks.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
61,451