entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | trips-2016-syntactic | Syntactic Analysis of Phrasal Compounds in Corpora: a Challenge for {NLP} Tools | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1174/ | Trips, Carola | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1092--1097 | The paper introduces a {\textquotedblleft}train once, use many{\textquotedblright} approach for the syntactic analysis of phrasal compounds (PC) of the type XP+N like {\textquotedblleft}Would you like to sit on my knee?{\textquotedblright} nonsense. PCs are a challenge for NLP tools since they require the identification of a syntactic phrase within a morphological complex. We propose a method which uses a state-of-the-art dependency parser not only to analyse sentences (the environment of PCs) but also to compound the non-head of PCs in a well-defined particular condition which is the analysis of the non-head spanning from the left boundary (mostly marked by a determiner) to the nominal head of the PC. This method contains the following steps: (a) the use of an English state-of-the-art dependency parser with data comprising sentences with PCs from the British National Corpus (BNC), (b) the detection of parsing errors of PCs, (c) the separate treatment of the non-head structure using the same model, and (d) the attachment of the non-head to the compound head. The evaluation of the method showed that the accuracy of 76{\%} could be improved by adding a step in the PC compounder module which specified user-defined contexts being sensitive to the part of speech of the non-head parts and by using TreeTagger, in line with our approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,484 |
inproceedings | khalifa-etal-2016-dalila | {DALILA}: The Dialectal {A}rabic Linguistic Learning Assistant | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1175/ | Khalifa, Salam and Bouamor, Houda and Habash, Nizar | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1098--1102 | Dialectal Arabic (DA) poses serious challenges for Natural Language Processing (NLP). The number and sophistication of tools and datasets in DA are very limited in comparison to Modern Standard Arabic (MSA) and other languages. MSA tools do not effectively model DA which makes the direct use of MSA NLP tools for handling dialects impractical. This is particularly a challenge for the creation of tools to support learning Arabic as a living language on the web, where authentic material can be found in both MSA and DA. In this paper, we present the Dialectal Arabic Linguistic Learning Assistant (DALILA), a Chrome extension that utilizes cutting-edge Arabic dialect NLP research to assist learners and non-native speakers in understanding text written in either MSA or DA. DALILA provides dialectal word analysis and English gloss corresponding to each word. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,485 |
inproceedings | steiner-2016-refurbishing | Refurbishing a Morphological Database for {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1176/ | Steiner, Petra | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1103--1108 | The CELEX database is one of the standard lexical resources for German. It yields a wealth of data especially for phonological and morphological applications. The morphological part comprises deep-structure morphological analyses of German. However, as it was developed in the Nineties, both encoding and spelling are outdated. About one fifth of over 50,000 datasets contain umlauts and signs such as {\ss}. Changes to a modern version cannot be obtained by simple substitution. In this paper, we shortly describe the original content and form of the orthographic and morphological database for German in CELEX. Then we present our work on modernizing the linguistic data. Lemmas and morphological analyses are transferred to a modern standard of encoding by first merging orthographic and morphological information of the lemmas and their entries and then performing a second substitution for the morphs within their morphological analyses. Changes to modern German spelling are performed by substitution rules according to orthographical standards. We show an example of the use of the data for the disambiguation of morphological structures. The discussion describes prospects of future work on this or similar lexicons. The Perl script is publicly available on our website. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,486 |
inproceedings | lopez-etal-2016-encoding | Encoding Adjective Scales for Fine-grained Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1177/ | Lopez, C{\'e}dric and Segond, Fr{\'e}d{\'e}rique and Fellbaum, Christiane | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1109--1113 | We propose an automatic approach towards determining the relative location of adjectives on a common scale based on their strength. We focus on adjectives expressing different degrees of goodness occurring in French product (perfumes) reviews. Using morphosyntactic patterns, we extract from the reviews short phrases consisting of a noun that encodes a particular aspect of the perfume and an adjective modifying that noun. We then associate each such n-gram with the corresponding product aspect and its related star rating. Next, based on the star scores, we generate adjective scales reflecting the relative strength of specific adjectives associated with a shared attribute of the product. An automatic ordering of the adjectives {\textquotedblleft}correct{\textquotedblright} (correct), {\textquotedblleft}sympa{\textquotedblright} (nice), {\textquotedblleft}bon{\textquotedblright} (good) and {\textquotedblleft}excellent{\textquotedblright} (excellent) according to their score in our resource is consistent with an intuitive scale based on human judgments. Our long-term objective is to generate different adjective scales in an empirical manner, which could allow the enrichment of lexical resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,487 |
inproceedings | sanger-etal-2016-scare | {SCARE} {\textemdash} The Sentiment Corpus of App Reviews with Fine-grained Annotations in {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1178/ | S{\"a}nger, Mario and Leser, Ulf and Kemmerer, Steffen and Adolphs, Peter and Klinger, Roman | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1114--1121 | The automatic analysis of texts containing opinions of users about, e.g., products or political views has gained attention within the last decades. However, previous work on the task of analyzing user reviews about mobile applications in app stores is limited. Publicly available corpora do not exist, such that a comparison of different methods and models is difficult. We fill this gap by contributing the Sentiment Corpus of App Reviews (SCARE), which contains fine-grained annotations of application aspects, subjective (evaluative) phrases and relations between both. This corpus consists of 1,760 annotated application reviews from the Google Play Store with 2,487 aspects and 3,959 subjective phrases. We describe the process and methodology how the corpus was created. The Fleiss Kappa between four annotators reveals an agreement of 0.72. We provide a strong baseline with a linear-chain conditional random field and word-embedding features with a performance of 0.62 for aspect detection and 0.63 for the extraction of subjective phrases. The corpus is available to the research community to support the development of sentiment analysis methods on mobile application reviews. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,488 |
inproceedings | apidianaki-etal-2016-datasets | Datasets for Aspect-Based Sentiment Analysis in {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1179/ | Apidianaki, Marianna and Tannier, Xavier and Richart, C{\'e}cile | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1122--1126 | Aspect Based Sentiment Analysis (ABSA) is the task of mining and summarizing opinions from text about specific entities and their aspects. This article describes two datasets for the development and testing of ABSA systems for French which comprise user reviews annotated with relevant entities, aspects and polarity values. The first dataset contains 457 restaurant reviews (2365 sentences) for training and testing ABSA systems, while the second contains 162 museum reviews (655 sentences) dedicated to out-of-domain evaluation. Both datasets were built as part of SemEval-2016 Task 5 {\textquotedblleft}Aspect-Based Sentiment Analysis{\textquotedblright} where seven different languages were represented, and are publicly available for research purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,489 |
inproceedings | shaikh-etal-2016-anew | {ANEW}+: Automatic Expansion and Validation of Affective Norms of Words Lexicons in Multiple Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1180/ | Shaikh, Samira and Cho, Kit and Strzalkowski, Tomek and Feldman, Laurie and Lien, John and Liu, Ting and Broadwell, George Aaron | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1127--1132 | In this article we describe our method of automatically expanding an existing lexicon of words with affective valence scores. The automatic expansion process was done in English. In addition, we describe our procedure for automatically creating lexicons in languages where such resources may not previously exist. The foreign languages we discuss in this paper are Spanish, Russian and Farsi. We also describe the procedures to systematically validate our newly created resources. The main contributions of this work are: 1) A general method for expansion and creation of lexicons with scores of words on psychological constructs such as valence, arousal or dominance; and 2) a procedure for ensuring validity of the newly constructed resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,490 |
inproceedings | sidarenka-2016-potts | {P}ot{TS}: The {P}otsdam {T}witter Sentiment Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1181/ | Sidarenka, Uladzimir | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1133--1141 | In this paper, we introduce a novel comprehensive dataset of 7,992 German tweets, which were manually annotated by two human experts with fine-grained opinion relations. A rich annotation scheme used for this corpus includes such sentiment-relevant elements as opinion spans, their respective sources and targets, emotionally laden terms with their possible contextual negations and modifiers. Various inter-annotator agreement studies, which were carried out at different stages of work on these data (at the initial training phase, upon an adjudication step, and after the final annotation run), reveal that labeling evaluative judgements in microblogs is an inherently difficult task even for professional coders. These difficulties, however, can be alleviated by letting the annotators revise each other`s decisions. Once rechecked, the experts can proceed with the annotation of further messages, staying at a fairly high level of agreement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,491 |
inproceedings | maynard-bontcheva-2016-challenges | Challenges of Evaluating Sentiment Analysis Tools on Social Media | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1182/ | Maynard, Diana and Bontcheva, Kalina | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1142--1148 | This paper discusses the challenges in carrying out fair comparative evaluations of sentiment analysis systems. Firstly, these are due to differences in corpus annotation guidelines and sentiment class distribution. Secondly, different systems often make different assumptions about how to interpret certain statements, e.g. tweets with URLs. In order to study the impact of these on evaluation results, this paper focuses on tweet sentiment analysis in particular. One existing and two newly created corpora are used, and the performance of four different sentiment analysis systems is reported; we make our annotated datasets and sentiment analysis applications publicly available. We see considerable variations in results across the different corpora, which calls into question the validity of many existing annotated datasets and evaluations, and we make some observations about both the systems and the datasets as a result. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,492 |
inproceedings | liew-etal-2016-emotweet | {E}mo{T}weet-28: A Fine-Grained Emotion Corpus for Sentiment Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1183/ | Liew, Jasy Suet Yan and Turtle, Howard R. and Liddy, Elizabeth D. | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1149--1156 | This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,493 |
inproceedings | kiritchenko-mohammad-2016-happy | Happy Accident: A Sentiment Composition Lexicon for Opposing Polarity Phrases | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1184/ | Kiritchenko, Svetlana and Mohammad, Saif | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1157--1164 | Sentiment composition is the determining of sentiment of a multi-word linguistic unit, such as a phrase or a sentence, based on its constituents. We focus on sentiment composition in phrases formed by at least one positive and at least one negative word {\textemdash} phrases like {\textquoteleft}happy accident' and {\textquoteleft}best winter break'. We refer to such phrases as opposing polarity phrases. We manually annotate a collection of opposing polarity phrases and their constituent single words with real-valued sentiment intensity scores using a method known as Best{\textemdash}Worst Scaling. We show that the obtained annotations are consistent. We explore the entries in the lexicon for linguistic regularities that govern sentiment composition in opposing polarity phrases. Finally, we list the current and possible future applications of the lexicon. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,494 |
inproceedings | balahur-tanev-2016-detecting | Detecting Implicit Expressions of Affect from Text using Semantic Knowledge on Common Concept Properties | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1185/ | Balahur, Alexandra and Tanev, Hristo | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1165--1170 | Emotions are an important part of the human experience. They are responsible for the adaptation and integration in the environment, offering, most of the time together with the cognitive system, the appropriate responses to stimuli in the environment. As such, they are an important component in decision-making processes. In today`s society, the avalanche of stimuli present in the environment (physical or virtual) makes people more prone to respond to stronger affective stimuli (i.e., those that are related to their basic needs and motivations {\textemdash} survival, food, shelter, etc.). In media reporting, this is translated in the use of arguments (factual data) that are known to trigger specific (strong, affective) behavioural reactions from the readers. This paper describes initial efforts to detect such arguments from text, based on the properties of concepts. The final system able to retrieve and label this type of data from the news in traditional and social platforms is intended to be integrated into the Europe Media Monitor family of applications to detect texts that trigger certain (especially negative) reactions from the public, with consequences on citizen safety and security. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,495 |
inproceedings | loukachevitch-levchik-2016-creating | Creating a General {R}ussian Sentiment Lexicon | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1186/ | Loukachevitch, Natalia and Levchik, Anatolii | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1171--1176 | The paper describes the new Russian sentiment lexicon - RuSentiLex. The lexicon was gathered from several sources: opinionated words from domain-oriented Russian sentiment vocabularies, slang and curse words extracted from Twitter, objective words with positive or negative connotations from a news collection. The words in the lexicon having different sentiment orientations in specific senses are linked to appropriate concepts of the thesaurus of Russian language RuThes. All lexicon entries are classified according to four sentiment categories and three sources of sentiment (opinion, emotion, or fact). The lexicon can serve as the first version for the construction of domain-specific sentiment lexicons or can be used for feature generation in machine-learning approaches. In this role, the RuSentiLex lexicon was utilized by the participants of the SentiRuEval-2016 Twitter reputation monitoring shared task and allowed them to achieve high results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,496 |
inproceedings | van-son-etal-2016-grasp | {GR}a{SP}: A Multilayered Annotation Scheme for Perspectives | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1187/ | van Son, Chantal and Caselli, Tommaso and Fokkens, Antske and Maks, Isa and Morante, Roser and Aroyo, Lora and Vossen, Piek | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1177--1184 | This paper presents a framework and methodology for the annotation of perspectives in text. In the last decade, different aspects of linguistic encoding of perspectives have been targeted as separated phenomena through different annotation initiatives. We propose an annotation scheme that integrates these different phenomena. We use a multilayered annotation approach, splitting the annotation of different aspects of perspectives into small subsequent subtasks in order to reduce the complexity of the task and to better monitor interactions between layers. Currently, we have included four layers of perspective annotation: events, attribution, factuality and opinion. The annotations are integrated in a formal model called GRaSP, which provides the means to represent instances (e.g. events, entities) and propositions in the (real or assumed) world in relation to their mentions in text. Then, the relation between the source and target of a perspective is characterized by means of perspective annotations. This enables us to place alternative perspectives on the same entity, event or proposition next to each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,497 |
inproceedings | khiari-etal-2016-integration | Integration of Lexical and Semantic Knowledge for Sentiment Analysis in {SMS} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1188/ | Khiari, Wejdene and Roche, Mathieu and Hafsia, Asma Bouhafs | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1185--1189 | With the explosive growth of online social media (forums, blogs, and social networks), exploitation of these new information sources has become essential. Our work is based on the sud4science project. The goal of this project is to perform multidisciplinary work on a corpus of authentic SMS, in French, collected in 2011 and anonymised (88milSMS corpus: \url{http://88milsms.huma-num.fr}). This paper highlights a new method to integrate opinion detection knowledge from an SMS corpus by combining lexical and semantic information. More precisely, our approach gives more weight to words with a sentiment (i.e. presence of words in a dedicated dictionary) for a classification task based on three classes: positive, negative, and neutral. The experiments were conducted on two corpora: an elongated SMS corpus (i.e. repetitions of characters in messages) and a non-elongated SMS corpus. We noted that non-elongated SMS were much better classified than elongated SMS. Overall, this study highlighted that the integration of semantic knowledge always improves classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,498 |
inproceedings | tamburini-2016-specialising | Specialising Paragraph Vectors for Text Polarity Detection | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1189/ | Tamburini, Fabio | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1190--1195 | This paper presents some experiments for specialising Paragraph Vectors, a new technique for creating text fragment (phrase, sentence, paragraph, text, ...) embedding vectors, for text polarity detection. The first extension regards the injection of polarity information extracted from a polarity lexicon into embeddings and the second extension aimed at inserting word order information into Paragraph Vectors. These two extensions, when training a logistic-regression classifier on the combined embeddings, were able to produce a relevant gain in performance when compared to the standard Paragraph Vector methods proposed by Le and Mikolov (2014). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,499 |
inproceedings | jadi-etal-2016-evaluating | Evaluating Lexical Similarity to build Sentiment Similarity | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1190/ | Jadi, Gr{\'e}goire and Claveau, Vincent and Daille, B{\'e}atrice and Monceaux, Laura | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1196--1201 | In this article, we propose to evaluate the lexical similarity information provided by word representations against several opinion resources using traditional Information Retrieval tools. Word representation have been used to build and to extend opinion resources such as lexicon, and ontology and their performance have been evaluated on sentiment analysis tasks. We question this method by measuring the correlation between the sentiment proximity provided by opinion resources and the semantic similarity provided by word representations using different correlation coefficients. We also compare the neighbors found in word representations and list of similar opinion words. Our results show that the proximity of words in state-of-the-art word representations is not very effective to build sentiment similarity. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,500 |
inproceedings | koper-etal-2016-visualisation | Visualisation and Exploration of High-Dimensional Distributional Features in Lexical Semantic Classification | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1191/ | K{\"o}per, Maximilian and Zai{\ss}, Melanie and Han, Qi and Koch, Steffen and Schulte im Walde, Sabine | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1202--1206 | Vector space models and distributional information are widely used in NLP. The models typically rely on complex, high-dimensional objects. We present an interactive visualisation tool to explore salient lexical-semantic features of high-dimensional word objects and word similarities. Most visualisation tools provide only one low-dimensional map of the underlying data, so they are not capable of retaining the local and the global structure. We overcome this limitation by providing an additional trust-view to obtain a more realistic picture of the actual object distances. Additional tool options include the reference to a gold standard classification, the reference to a cluster analysis as well as listing the most salient (common) features for a selected subset of the words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,501 |
inproceedings | maharjan-etal-2016-semaligner | {S}em{A}ligner: A Method and Tool for Aligning Chunks with Semantic Relation Types and Semantic Similarity Scores | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1192/ | Maharjan, Nabin and Banjade, Rajendra and Niraula, Nobal Bikram and Rus, Vasile | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1207--1211 | This paper introduces a rule-based method and software tool, called SemAligner, for aligning chunks across texts in a given pair of short English texts. The tool, based on the top performing method at the Interpretable Short Text Similarity shared task at SemEval 2015, where it was used with human annotated (gold) chunks, can now additionally process plain text-pairs using two powerful chunkers we developed, e.g. using Conditional Random Fields. Besides aligning chunks, the tool automatically assigns semantic relations to the aligned chunks (such as EQUI for equivalent and OPPO for opposite) and semantic similarity scores that measure the strength of the semantic relation between the aligned chunks. Experiments show that SemAligner performs competitively for system generated chunks and that these results are also comparable to results obtained on gold chunks. SemAligner has other capabilities such as handling various input formats and chunkers as well as extending lookup resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,502
inproceedings | falk-martin-2016-aspectual | Aspectual Flexibility Increases with Agentivity and {C}oncreteness: A Computational Classification Experiment on Polysemous Verbs | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1193/ | Falk, Ingrid and Martin, Fabienne | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1212--1220 | We present an experimental study making use of a machine learning approach to identify the factors that affect the aspectual value that characterizes verbs under each of their readings. The study is based on various morpho-syntactic and semantic features collected from a French lexical resource and on a gold standard aspectual classification of verb readings designed by an expert. Our results support the tested hypothesis, namely that agentivity and abstractness influence lexical aspect. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,503
inproceedings | cordeiro-etal-2016-mwetoolkit | mwetoolkit+sem: Integrating Word Embeddings in the mwetoolkit for Semantic {MWE} Processing | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1194/ | Cordeiro, Silvio and Ramisch, Carlos and Villavicencio, Aline | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1221--1225 | This paper presents mwetoolkit+sem: an extension of the mwetoolkit that estimates semantic compositionality scores for multiword expressions (MWEs) based on word embeddings. First, we describe our implementation of vector-space operations working on distributional vectors. The compositionality score is based on the cosine distance between the MWE vector and the composition of the vectors of its member words. Our generic system can handle several types of word embeddings and MWE lists, and may combine individual word representations using several composition techniques. We evaluate our implementation on a dataset of 1042 English noun compounds, comparing different configurations of the underlying word embeddings and word-composition models. We show that our vector-based scores model non-compositionality better than standard association measures such as log-likelihood. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,504 |
inproceedings | iosif-etal-2016-cognitively | Cognitively Motivated Distributional Representations of Meaning | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1195/ | Iosif, Elias and Georgiladakis, Spiros and Potamianos, Alexandros | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1226--1232 | Although meaning is at the core of human cognition, state-of-the-art distributional semantic models (DSMs) are often agnostic to the findings in the area of semantic cognition. In this work, we present a novel type of DSMs motivated by the dual-processing cognitive perspective that is triggered by lexico-semantic activations in the short-term human memory. The proposed model is shown to perform better than state-of-the-art models for computing semantic similarity between words. The fusion of different types of DSMs is also investigated achieving results that are comparable or better than the state-of-the-art. The used corpora along with a set of tools, as well as large repositories of vectorial word representations are made publicly available for four languages (English, German, Italian, and Greek). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,505 |
inproceedings | hayashi-luo-2016-extending | Extending Monolingual Semantic Textual Similarity Task to Multiple Cross-lingual Settings | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1196/ | Hayashi, Yoshihiko and Luo, Wentao | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1233--1239 | This paper describes our independent effort for extending the monolingual semantic textual similarity (STS) task setting to multiple cross-lingual settings involving English, Japanese, and Chinese. So far, we have adopted a {\textquotedblleft}monolingual similarity after translation{\textquotedblright} strategy to predict the semantic similarity between a pair of sentences in different languages. With this strategy, a monolingual similarity method is applied after having (one of) the target sentences translated into a pivot language. Therefore, this paper specifically details the required and developed resources to implement this framework, while presenting our current results for English-Japanese-Chinese cross-lingual STS tasks that may exemplify the validity of the framework. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,506 |
inproceedings | copestake-etal-2016-resources | Resources for building applications with Dependency {M}inimal {R}ecursion {S}emantics | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1197/ | Copestake, Ann and Emerson, Guy and Goodman, Michael Wayne and Horvat, Matic and Kuhnle, Alexander and Muszy{\'n}ska, Ewa | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1240--1247 | We describe resources aimed at increasing the usability of the semantic representations utilized within the DELPH-IN (Deep Linguistic Processing with HPSG) consortium. We concentrate in particular on the Dependency Minimal Recursion Semantics (DMRS) formalism, a graph-based representation designed for compositional semantic representation with deep grammars. Our main focus is on English, and specifically English Resource Semantics (ERS) as used in the English Resource Grammar. We first give an introduction to ERS and DMRS and a brief overview of some existing resources and then describe in detail a new repository which has been developed to simplify the use of ERS/DMRS. We explain a number of operations on DMRS graphs which our repository supports, with sketches of the algorithms, and illustrate how these operations can be exploited in application building. We believe that this work will aid researchers to exploit the rich and effective but complex DELPH-IN resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,507 |
inproceedings | kuo-chen-2016-subtask | Subtask Mining from Search Query Logs for How-Knowledge Acceleration | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1198/ | Kuo, Chung-Lun and Chen, Hsin-Hsi | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1248--1252 | How-knowledge is indispensable in daily life, but has relatively less quantity and poorer quality than what-knowledge in publicly available knowledge bases. This paper first extracts task-subtask pairs from wikiHow, then mines linguistic patterns from search query logs, and finally applies the mined patterns to extract subtasks to complete given how-to tasks. To evaluate the proposed methodology, we group tasks and the corresponding recommended subtasks into pairs, and evaluate the results automatically and manually. The automatic evaluation shows the accuracy of 0.4494. We also classify the mined patterns based on prepositions and find that the prepositions like {\textquotedblleft}on{\textquotedblright}, {\textquotedblleft}to{\textquotedblright}, and {\textquotedblleft}with{\textquotedblright} have the better performance. The results can be used to accelerate how-knowledge base construction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,508 |
inproceedings | ryzhova-etal-2016-typology | Typology of Adjectives Benchmark for Compositional Distributional Models | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1199/ | Ryzhova, Daria and Kyuseva, Maria and Paperno, Denis | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1253--1257 | In this paper we present a novel application of compositional distributional semantic models (CDSMs): prediction of lexical typology. The paper introduces the notion of typological closeness, which is a novel rigorous formalization of semantic similarity based on comparison of multilingual data. Starting from the Moscow Database of Qualitative Features for adjective typology, we create four datasets of typological closeness, on which we test a range of distributional semantic models. We show that, on the one hand, vector representations of phrases based on data from one language can be used to predict how words within the phrase translate into different languages, and, on the other hand, that typological data can serve as a semantic benchmark for distributional models. We find that compositional distributional models, especially parametric ones, perform way above non-compositional alternatives on the task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,509 |
inproceedings | bosc-etal-2016-dart | {DART}: a Dataset of Arguments and their Relations on {T}witter | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1200/ | Bosc, Tom and Cabrio, Elena and Villata, Serena | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1258--1263 | The problem of understanding the stream of messages exchanged on social media such as Facebook and Twitter is becoming a major challenge for automated systems. The tremendous amount of data exchanged on these platforms as well as the specific form of language adopted by social media users constitute a new challenging context for existing argument mining techniques. In this paper, we describe a resource of natural language arguments called DART (Dataset of Arguments and their Relations on Twitter) where the complete argument mining pipeline over Twitter messages is considered: (i) we identify which tweets can be considered as arguments and which cannot, and (ii) we identify what is the relation, i.e., support or attack, linking such tweets to each other. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,510 |
inproceedings | mota-etal-2016-port4nooj | {P}ort4{N}oo{J} v3.0: Integrated Linguistic Resources for {P}ortuguese {NLP} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1201/ | Mota, Cristina and Carvalho, Paula and Barreiro, Anabela | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1264--1269 | This paper introduces Port4NooJ v3.0, the latest version of the Portuguese module for NooJ, highlights its main features, and details its three main new components: (i) a lexicon-grammar based dictionary of 5,177 human intransitive adjectives, and a set of local grammars that use the distributional properties of those adjectives for paraphrasing (ii) a polarity dictionary with 9,031 entries for sentiment analysis, and (iii) a set of priority dictionaries and local grammars for named entity recognition. These new components were derived and/or adapted from publicly available resources. The Port4NooJ v3.0 resource is innovative in terms of the specificity of the linguistic knowledge it incorporates. The dictionary is bilingual Portuguese-English, and the semantico-syntactic information assigned to each entry validates the linguistic relation between the terms in both languages. These characteristics, which cannot be found in any other public resource for Portuguese, make it a valuable resource for translation and paraphrasing. The paper presents the current statistics and describes the different complementary and synergic components and integration efforts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,511 |
inproceedings | rozis-etal-2016-collecting | Collecting Language Resources for the {L}atvian e-Government Machine Translation Platform | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1202/ | Rozis, Roberts and Vasi{\c{l}}jevs, Andrejs and Skadi{\c{n}}{\v{s}}, Raivis | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1270--1276 | This paper describes corpora collection activity for building large machine translation systems for Latvian e-Government platform. We describe requirements for corpora, selection and assessment of data sources, collection of the public corpora and creation of new corpora from miscellaneous sources. Methodology, tools and assessment methods are also presented along with the results achieved, challenges faced and conclusions made. Several approaches to address the data scarceness are discussed. We summarize the volume of obtained corpora and provide quality metrics of MT systems trained on this data. Resulting MT systems for English-Latvian, Latvian English and Latvian Russian are integrated in the Latvian e-service portal and are freely available on website HUGO.LV. This paper can serve as a guidance for similar activities initiated in other countries, particularly in the context of European Language Resource Coordination action. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,512 |
inproceedings | brugman-etal-2016-nederlab | {N}ederlab: Towards a Single Portal and Research Environment for Diachronic {D}utch Text Corpora | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1203/ | Brugman, Hennie and Reynaert, Martin and van der Sijs, Nicoline and van Stipriaan, Ren{\'e} and Tjong Kim Sang, Erik and van den Bosch, Antal | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1277--1281 | The Nederlab project aims to bring together all digitized texts relevant to the Dutch national heritage, the history of the Dutch language and culture (circa 800 {--} present) in one user friendly and tool enriched open access web interface. This paper describes Nederlab halfway through the project period and discusses the collections incorporated, back-office processes, system back-end as well as the Nederlab Research Portal end-user web application. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,513 |
inproceedings | soler-wanner-2016-semi | A Semi-Supervised Approach for Gender Identification | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1204/ | Soler, Juan and Wanner, Leo | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1282--1287 | In most of the research studies on Author Profiling, large quantities of correctly labeled data are used to train the models. However, this does not reflect the reality in forensic scenarios: in practical linguistic forensic investigations, the resources that are available to profile the author of a text are usually scarce. To pay tribute to this fact, we implemented a Semi-Supervised Learning variant of the k nearest neighbors algorithm that uses small sets of labeled data and a larger amount of unlabeled data to classify the authors of texts by gender (man vs woman). We describe the enriched KNN algorithm and show that the use of unlabeled instances improves the accuracy of our gender identification model. We also present a feature set that facilitates the use of a very small number of instances, reaching accuracies higher than 70{\%} with only 113 instances to train the model. It is also shown that the algorithm also performs well using publicly available data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,514 |
inproceedings | korkontzelos-etal-2016-ensemble | Ensemble Classification of Grants using {LDA}-based Features | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1205/ | Korkontzelos, Yannis and Thomas, Beverley and Miwa, Makoto and Ananiadou, Sophia | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1288--1294 | Classifying research grants into useful categories is a vital task for a funding body to give structure to the portfolio for analysis, informing strategic planning and decision-making. Automating this classification process would save time and effort, providing the accuracy of the classifications is maintained. We employ five classification models to classify a set of BBSRC-funded research grants in 21 research topics based on unigrams, technical terms and Latent Dirichlet Allocation models. To boost precision, we investigate methods for combining their predictions into five aggregate classifiers. Evaluation confirmed that ensemble classification models lead to higher precision. It was observed that there is not a single best-performing aggregate method for all research topics. Instead, the best-performing method for a research topic depends on the number of positive training instances available for this topic. Subject matter experts considered the predictions of aggregate models to correct erroneous or incomplete manual assignments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,515 |
inproceedings | yang-etal-2016-edit | Edit Categories and Editor Role Identification in {W}ikipedia | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1206/ | Yang, Diyi and Halfaker, Aaron and Kraut, Robert and Hovy, Eduard | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1295--1299 | In this work, we introduced a corpus for categorizing edit types in Wikipedia. This fine-grained taxonomy of edit types enables us to differentiate editing actions and find editor roles in Wikipedia based on their low-level edit types. To do this, we first created an annotated corpus based on 1,996 edits obtained from 953 article revisions and built machine-learning models to automatically identify the edit categories associated with edits. Building on this automated measurement of edit types, we then applied a graphical model analogous to Latent Dirichlet Allocation to uncover the latent roles in editors' edit histories. Applying this technique revealed eight different roles editors play, such as Social Networker, Substantive Expert, etc. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,516 |
inproceedings | al-shargi-etal-2016-morphologically | Morphologically Annotated Corpora and Morphological Analyzers for {M}oroccan and Sanaani Yemeni {A}rabic | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1207/ | Al-Shargi, Faisal and Kaplan, Aidan and Eskander, Ramy and Habash, Nizar and Rambow, Owen | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1300--1306 | We present new language resources for Moroccan and Sanaani Yemeni Arabic. The resources include corpora for each dialect which have been morphologically annotated, and morphological analyzers for each dialect which are derived from these corpora. These are the first sets of resources for Moroccan and Yemeni Arabic. The resources will be made available to the public. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,517 |
inproceedings | zabokrtsky-etal-2016-merging | Merging Data Resources for Inflectional and Derivational Morphology in {C}zech | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1208/ | {\v{Z}}abokrtsk{\'y}, Zden{\v{e}}k and {\v{S}}ev{\v{c}}{\'i}kov{\'a}, Magda and Straka, Milan and Vidra, Jon{\'a}{\v{s}} and Limbursk{\'a}, Ad{\'e}la | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1307--1314 | The paper deals with merging two complementary resources of morphological data previously existing for Czech, namely the inflectional dictionary MorfFlex CZ and the recently developed lexical network DeriNet. The MorfFlex CZ dictionary has been used by a morphological analyzer capable of analyzing/generating several million Czech word forms according to the rules of Czech inflection. The DeriNet network contains several hundred thousand Czech lemmas interconnected with links corresponding to derivational relations (relations between base words and words derived from them). After summarizing basic characteristics of both resources, the process of merging is described, focusing on both rather technical aspects (growth of the data, measuring the quality of newly added derivational relations) and linguistic issues (treating lexical homonymy and vowel/consonant alternations). The resulting resource contains 970 thousand lemmas connected with 715 thousand derivational relations and is publicly available on the web under the CC-BY-NC-SA license. The data were incorporated in the MorphoDiTa library version 2.0 (which provides morphological analysis, generation, tagging and lemmatization for Czech) and can be browsed and searched by two web tools (DeriNet Viewer and DeriNet Search tool). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,518
inproceedings | novak-etal-2016-new | A New Integrated Open-source Morphological Analyzer for {H}ungarian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1209/ | Nov{\'a}k, Attila and Sikl{\'o}si, Borb{\'a}la and Oravecz, Csaba | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1315--1322 | The goal of a Hungarian research project has been to create an integrated Hungarian natural language processing framework. This infrastructure includes tools for analyzing Hungarian texts, integrated into a standardized environment. The morphological analyzer is one of the core components of the framework. The goal of this paper is to describe a fast and customizable morphological analyzer and its development framework, which synthesizes and further enriches the morphological knowledge implemented in previous tools existing for Hungarian. In addition, we present the method we applied to add semantic knowledge to the lexical database of the morphology. The method utilizes neural word embedding models and morphological and shallow syntactic knowledge. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,519 |
inproceedings | chodroff-etal-2016-new | New release of Mixer-6: Improved validity for phonetic study of speaker variation and identification | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1210/ | Chodroff, Eleanor and Maciejewski, Matthew and Trmal, Jan and Khudanpur, Sanjeev and Godfrey, John | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1323--1327 | The Mixer series of speech corpora were collected over several years, principally to support annual NIST evaluations of speaker recognition (SR) technologies. These evaluations focused on conversational speech over a variety of channels and recording conditions. One of the series, Mixer-6, added a new condition, read speech, to support basic scientific research on speaker characteristics, as well as technology evaluation. With read speech it is possible to make relatively precise measurements of phonetic events and features, which can be correlated with the performance of speaker recognition algorithms, or directly used in phonetic analysis of speaker variability. The read speech, as originally recorded, was adequate for large-scale evaluations (e.g., fixed-text speaker ID algorithms) but only marginally suitable for acoustic-phonetic studies. Numerous errors due largely to speaker behavior remained in the corpus, with no record of their locations or rate of occurrence. We undertook the effort to correct this situation with automatic methods supplemented by human listening and annotation. The present paper describes the tools and methods, resulting corrections, and some examples of the kinds of research studies enabled by these enhancements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,520
inproceedings | coutinho-etal-2016-assessing | Assessing the Prosody of Non-Native Speakers of {E}nglish: Measures and Feature Sets | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1211/ | Coutinho, Eduardo and H{\"o}nig, Florian and Zhang, Yue and Hantke, Simone and Batliner, Anton and N{\"o}th, Elmar and Schuller, Bj{\"o}rn | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1328--1332 | In this paper, we describe a new database with audio recordings of non-native (L2) speakers of English, and the perceptual evaluation experiment conducted with native English speakers for assessing the prosody of each recording. These annotations are then used to compute the gold standard using different methods, and a series of regression experiments is conducted to evaluate their impact on the performance of a regression model predicting the degree of naturalness of L2 speech. Further, we compare the relevance of different feature groups modelling prosody in general (without speech tempo), speech rate and pauses modelling speech tempo (fluency), voice quality, and a variety of spectral features. We also discuss the impact of various fusion strategies on performance. Overall, our results demonstrate that the prosody of non-native speakers of English as L2 can be reliably assessed using supra-segmental audio features; prosodic features seem to be the most important ones. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,521
inproceedings | trouvain-etal-2016-ifcasl | The {IFCASL} Corpus of {F}rench and {G}erman Non-native and Native Read Speech | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1212/ | Trouvain, Juergen and Bonneau, Anne and Colotte, Vincent and Fauth, Camille and Fohr, Dominique and Jouvet, Denis and J{\"u}gler, Jeanin and Laprie, Yves and Mella, Odile and M{\"o}bius, Bernd and Zimmerer, Frank | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1333--1338 | The IFCASL corpus is a French-German bilingual phonetic learner corpus designed, recorded and annotated in a project on individualized feedback in computer-assisted spoken language learning. The motivation for setting up this corpus was that there is no phonetically annotated and segmented corpus for this language pair of comparable size and coverage. In contrast to most learner corpora, the IFCASL corpus incorporates data for a language pair in both directions, i.e. in our case French learners of German, and German learners of French. In addition, the corpus is complemented by two sub-corpora of native speech by the same speakers. The corpus provides spoken data by about 100 speakers with comparable productions, annotated and segmented on the word and the phone level, with more than 50{\%} manually corrected data. The paper reports on inter-annotator agreement and the optimization of the acoustic models for forced speech-text alignment in exercises for computer-assisted pronunciation training. Example studies based on the corpus data with a phonetic focus include topics such as the realization of /h/ and glottal stop, final devoicing of obstruents, vowel quantity and quality, pitch range, and tempo. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,522
inproceedings | saint-dizier-2016-lelio | {LELIO}: An Auto-Adaptative System to Acquire Domain Lexical Knowledge in Technical Texts | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1213/ | Saint-Dizier, Patrick | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1339--1345 | In this paper, we investigate some language acquisition facets of an auto-adaptative system that can automatically acquire most of the relevant lexical knowledge and authoring practices for an application in a given domain. This is the LELIO project: producing customized LELIE solutions. Our goal, within the framework of LELIE (a system that tags language uses that do not follow the Constrained Natural Language principles), is to automate the long, costly and error prone lexical customization of LELIE to a given application domain. Technical texts being relatively restricted in terms of syntax and lexicon, results obtained show that this approach is feasible and relatively reliable. By auto-adaptative, we mean that the system learns from a sample of the application corpus the various lexical terms and uses crucial for LELIE to work properly (e.g. verb uses, fuzzy terms, business terms, stylistic patterns). A technical writer validation method is developed at each step of the acquisition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,523 |
inproceedings | murawaki-mori-2016-wikification | Wikification for Scriptio Continua | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1214/ | Murawaki, Yugo and Mori, Shinsuke | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1346--1351 | The fact that Japanese employs scriptio continua, or a writing system without spaces, complicates the first step of an NLP pipeline. Word segmentation is widely used in Japanese language processing, and lexical knowledge is crucial for reliable identification of words in text. Although external lexical resources like Wikipedia are potentially useful, segmentation mismatch prevents them from being straightforwardly incorporated into the word segmentation task. If we intentionally violate segmentation standards with the direct incorporation, quantitative evaluation will be no longer feasible. To address this problem, we propose to define a separate task that directly links given texts to an external resource, that is, wikification in the case of Wikipedia. By doing so, we can circumvent segmentation mismatch that may not necessarily be important for downstream applications. As the first step to realize the idea, we design the task of Japanese wikification and construct wikification corpora. We annotated subsets of the Balanced Corpus of Contemporary Written Japanese plus Twitter short messages. We also implement a simple wikifier and investigate its performance on these corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,524 |
inproceedings | niton-etal-2016-accessing | Accessing and Elaborating Walenty - a Valence Dictionary of {P}olish - via {I}nternet Browser | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1215/ | Nito{\'n}, Bart{\l}omiej and Bartosiak, Tomasz and Hajnicz, El{\.z}bieta | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1352--1359 | This article presents Walenty - a new valence dictionary of Polish predicates, concentrating on its creation process and access via Internet browser. The dictionary contains two layers, syntactic and semantic. The syntactic layer describes syntactic and morphosyntactic constraints predicates put on their dependants. The semantic layer shows how predicates and their arguments are involved in a situation described in an utterance. These two layers are connected, representing how semantic arguments can be realised on the surface. Walenty also contains a powerful phraseological (idiomatic) component. Walenty has been created and can be accessed remotely with a dedicated tool called Slowal. In this article, we focus on the most important functionalities of this system. First, we will depict how to access the dictionary and how the built-in filtering system (covering both syntactic and semantic phenomena) works. Later, we will describe the process of creating the dictionary with the Slowal tool, which both supports and controls the work of lexicographers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,525
inproceedings | santos-etal-2016-ceplexicon | {CEPLEX}icon {\textemdash} A Lexicon of Child {E}uropean {P}ortuguese | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1216/ | Santos, Ana L{\'u}cia and Freitas, Maria Jo{\~a}o and Cardoso, Aida | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1360--1364 | CEPLEXicon (version 1.1) is a child lexicon resulting from the automatic tagging of two child corpora: the corpus Santos (Santos, 2006; Santos et al. 2014) and the corpus Child {\textemdash} Adult Interaction (Freitas et al. 2012), which integrates information from the corpus Freitas (Freitas, 1997). This lexicon includes spontaneous speech produced by seven children (1;02.00 to 3;11.12) during approximately 86h of child-adult interaction. The automatic tagging comprised the lemmatization and morphosyntactic classification of the speech produced by the seven children included in the two child corpora; the lexicon contains information pertaining to lemmas and syntactic categories as well as absolute number of occurrences and frequencies in three age intervals: {\ensuremath{<}} 2 years; {\ensuremath{\geq}} 2 years and {\ensuremath{<}} 3 years; {\ensuremath{\geq}} 3 years. The information included in this lexicon and the format in which it is presented enables research in different areas and allows researchers to obtain measures of lexical growth. CEPLEXicon is available through the ELRA catalogue. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,526 |
inproceedings | grefenstette-2016-extracting | Extracting Weighted Language Lexicons from {W}ikipedia | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1217/ | Grefenstette, Gregory | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1365--1368 | Language models are used in applications as diverse as speech recognition, optical character recognition and information retrieval. They are used to predict word appearance, and to weight the importance of words in these applications. One basic element of language models is the list of words in a language. Another is the unigram frequency of each word. But this basic information is not available for most languages in the world. Since the multilingual Wikipedia project encourages the production of encyclopedic-like articles in many world languages, we can find there an ever-growing source of text from which to extract these two language modelling elements: word list and frequency. Here we present a simple technique for converting this Wikipedia text into lexicons of weighted unigrams for the more than 280 languages present currently present in Wikipedia. The lexicons produced, and the source code for producing them in a Linux-based system are here made available for free on the Web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,527 |
inproceedings | hathout-sajous-2016-wiktionnaires | Wiktionnaire`s Wikicode {GLAWI}fied: a Workable {F}rench Machine-Readable Dictionary | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1218/ | Hathout, Nabil and Sajous, Franck | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1369--1376 | GLAWI is a free, large-scale and versatile Machine-Readable Dictionary (MRD) that has been extracted from the French language edition of Wiktionary, called Wiktionnaire. In (Sajous and Hathout, 2015), we introduced GLAWI, gave the rationale behind the creation of this lexicographic resource and described the extraction process, focusing on the conversion and standardization of the heterogeneous data provided by this collaborative dictionary. In the current article, we describe the content of GLAWI and illustrate how it is structured. We also suggest various applications, ranging from linguistic studies and NLP applications to psycholinguistic experimentation. They all can take advantage of the diversity of the lexical knowledge available in GLAWI. Besides this diversity and extensive lexical coverage, GLAWI is also remarkable because it is the only free lexical resource of contemporary French that contains definitions. This unique material opens the way to the renewal of MRD-based methods, notably the automated extraction and acquisition of semantic relations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,528
inproceedings | hollink-etal-2016-corpus | A Corpus of Images and Text in Online News | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1219/ | Hollink, Laura and Bedjeti, Adriatik and van Harmelen, Martin and Elliott, Desmond | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1377--1382 | In recent years, several datasets have been released that include images and text, giving impulse to new methods that combine natural language processing and computer vision. However, there is a need for datasets of images in their natural textual context. The ION corpus contains 300K news articles published between August 2014 - 2015 in five online newspapers from two countries. The 1-year coverage over multiple publishers ensures a broad scope in terms of topics, image quality and editorial viewpoints. The corpus consists of JSON-LD files with the following data about each article: the original URL of the article on the news publisher`s website, the date of publication, the headline of the article, the URL of the image displayed with the article (if any), and the caption of that image. Neither the article text nor the images themselves are included in the corpus. Instead, the images are distributed as high-dimensional feature vectors extracted from a Convolutional Neural Network, anticipating their use in computer vision tasks. The article text is represented as a list of automatically generated entity and topic annotations in the form of Wikipedia/DBpedia pages. This facilitates the selection of subsets of the corpus for separate analysis or evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,529
inproceedings | camgoz-etal-2016-bosphorussign | {B}osphorus{S}ign: A {T}urkish {S}ign {L}anguage Recognition Corpus in Health and Finance Domains | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1220/ | Camg{\"o}z, Necati Cihan and K{\i}nd{\i}ro{\u{g}}lu, Ahmet Alp and Karab{\"u}kl{\"u}, Serpil and Kelepir, Meltem and {\"O}zsoy, Ay{\c{s}}e Sumru and Akarun, Lale | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1383--1388 | There are as many sign languages as there are deaf communities in the world. Linguists have been collecting corpora of different sign languages and annotating them extensively in order to study and understand their properties. On the other hand, the field of computer vision has approached the sign language recognition problem as a grand challenge and research efforts have intensified in the last 20 years. However, corpora collected for studying linguistic properties are often not suitable for sign language recognition as the statistical methods used in the field require large amounts of data. Recently, with the availability of inexpensive depth cameras, groups from the computer vision community have started collecting corpora with large number of repetitions for sign language recognition research. In this paper, we present the BosphorusSign Turkish Sign Language corpus, which consists of 855 sign and phrase samples from the health, finance and everyday life domains. The corpus is collected using the state-of-the-art Microsoft Kinect v2 depth sensor, and will be the first in this sign language research field. Furthermore, there will be annotations rendered by linguists so that the corpus will appeal both to the linguistic and sign language recognition research communities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,530
inproceedings | vacher-etal-2016-cirdo | The {CIRDO} Corpus: Comprehensive Audio/Video Database of Domestic Falls of Elderly People | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1221/ | Vacher, Michel and Bouakaz, Sa{\"i}da and Chaumon, Marc-Eric Bobillier and Aman, Fr{\'e}d{\'e}ric and Khan, R. A. and Bekkadja, Slima and Portet, Fran{\c{c}}ois and Guillou, Erwan and Rossato, Solange and Lecouteux, Benjamin | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1389--1396 | Ambient Assisted Living aims at enhancing the quality of life of older and disabled people at home thanks to Smart Homes. In particular, regarding elderly living alone at home, the detection of distress situations after a fall is very important to reassure this kind of population. However, many studies do not include tests in real settings, because data collection in this domain is very expensive and challenging and because of the few available data sets. The CIRDO corpus is a dataset recorded in realistic conditions in DOMUS, a fully equipped Smart Home with microphones and home automation sensors, in which participants performed scenarios including real falls on a carpet and calls for help. These scenarios were elaborated thanks to a field study involving elderly persons. Experiments related in a first part to distress detection in real-time using audio and speech analysis and in a second part to fall detection using video analysis are presented. Results show the difficulty of the task. The database can be used as a standardized database by researchers to evaluate and compare their systems for elderly persons` assistance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,531
inproceedings | shrestha-moens-2016-semi | Semi-automatically Alignment of Predicates between Speech and {O}nto{N}otes data | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1222/ | Shrestha, Niraj and Moens, Marie-Francine | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1397--1401 | Speech data currently receives a growing attention and is an important source of information. We still lack suitable corpora of transcribed speech annotated with semantic roles that can be used for semantic role labeling (SRL), which is not the case for written data. Semantic role labeling in speech data is a challenging and complex task due to the lack of sentence boundaries and the many transcription errors such as insertion, deletion and misspellings of words. In written data, SRL evaluation is performed at the sentence level, but in speech data sentence boundaries identification is still a bottleneck which makes evaluation more complex. In this work, we semi-automatically align the predicates found in transcribed speech obtained with an automatic speech recognizer (ASR) with the predicates found in the corresponding written documents of the OntoNotes corpus and manually align the semantic roles of these predicates thus obtaining annotated semantic frames in the speech data. This data can serve as gold standard alignments for future research in semantic role labeling of speech data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,532 |
inproceedings | del-carmen-cabeza-pereiro-etal-2016-corilse | {CORILSE}: a {S}panish {S}ign {L}anguage Repository for Linguistic Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1223/ | del Carmen Cabeza-Pereiro, Mar{\'i}a and Garcia-Miguel, Jos{\'e} M{\textordfeminine} and Mateo, Carmen Garc{\'i}a and Castro, Jos{\'e} Luis Alba | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1402--1407 | CORILSE is a computerized corpus of Spanish Sign Language (Lengua de Signos Espa{\~n}ola, LSE). It consists of a set of recordings from different discourse genres by Galician signers living in the city of Vigo. In this paper we describe its annotation system, developed on the basis of pre-existing ones (mostly the model of the Auslan corpus). This includes primary annotation of id-glosses for manual signs, annotation of the non-manual component, and secondary annotation of grammatical categories and relations, because this corpus is being built for grammatical analysis, in particular argument structures in LSE. Up to now, the annotation has mostly been done by hand, which is a slow and time-consuming task. The need to facilitate this process leads us to engage in the development of automatic or semi-automatic tools for manual and facial recognition. Finally, we also present the web repository that will make the corpus available to different types of users, and will allow its exploitation for research purposes and other applications (e.g. teaching of LSE or design of tasks for signed language assessment). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,533
inproceedings | schreitter-krenn-2016-ofai | The {OFAI} Multi-Modal Task Description Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1224/ | Schreitter, Stephanie and Krenn, Brigitte | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1408--1414 | The OFAI Multimodal Task Description Corpus (OFAI-MMTD Corpus) is a collection of dyadic teacher-learner (human-human and human-robot) interactions. The corpus is multimodal and tracks the communication signals exchanged between interlocutors in task-oriented scenarios including speech, gaze and gestures. The focus of interest lies on the communicative signals conveyed by the teacher and which objects are salient at which time. Data are collected from four different task description setups which involve spatial utterances, navigation instructions and more complex descriptions of joint tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,534 |
inproceedings | mori-etal-2016-japanese | A {J}apanese Chess Commentary Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1225/ | Mori, Shinsuke and Richardson, John and Ushiku, Atsushi and Sasada, Tetsuro and Kameko, Hirotaka and Tsuruoka, Yoshimasa | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1415--1420 | In recent years there has been a surge of interest in natural language processing related to the real world, such as symbol grounding, language generation, and nonlinguistic data search by natural language queries. In order to concentrate on language ambiguities, we propose to use a well-defined {\textquotedblleft}real world,{\textquotedblright} that is game states. We built a corpus consisting of pairs of sentences and a game state. The game we focus on is shogi (Japanese chess). We collected 742,286 commentary sentences in Japanese. They are spontaneously generated contrary to natural language annotations in many image datasets provided by human workers on Amazon Mechanical Turk. We defined domain specific named entities and we segmented 2,508 sentences into words manually and annotated each word with a named entity tag. We describe a detailed definition of named entities and show some statistics of our game commentary corpus. We also show the results of the experiments of word segmentation and named entity recognition. The accuracies are as high as those on general domain texts indicating that we are ready to tackle various new problems related to the real world. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,535
inproceedings | poignant-etal-2016-camomile | The {CAMOMILE} Collaborative Annotation Platform for Multi-modal, Multi-lingual and Multi-media Documents | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1226/ | Poignant, Johann and Budnik, Mateusz and Bredin, Herv{\'e} and Barras, Claude and Stefas, Mickael and Bruneau, Pierrick and Adda, Gilles and Besacier, Laurent and Ekenel, Hazim and Francopoulo, Gil and Hernando, Javier and Mariani, Joseph and Morros, Ramon and Qu{\'e}not, Georges and Rosset, Sophie and Tamisier, Thomas | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1421--1425 | In this paper, we describe the organization and the implementation of the CAMOMILE collaborative annotation framework for multimodal, multimedia, multilingual (3M) data. Given the versatile nature of the analysis which can be performed on 3M data, the structure of the server was kept intentionally simple in order to preserve its genericity, relying on standard Web technologies. Layers of annotations, defined as data associated to a media fragment from the corpus, are stored in a database and can be managed through standard interfaces with authentication. Interfaces tailored specifically to the needed task can then be developed in an agile way, relying on simple but reliable services for the management of the centralized annotations. We then present our implementation of an active learning scenario for person annotation in video, relying on the CAMOMILE server; during a dry run experiment, the manual annotation of 716 speech segments was thus propagated to 3504 labeled tracks. The code of the CAMOMILE framework is distributed in open source. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,536
inproceedings | luecking-etal-2016-finding | Finding Recurrent Features of Image Schema Gestures: the {FIGURE} corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1227/ | Luecking, Andy and Mehler, Alexander and Walther, D{\'e}sir{\'e}e and Mauri, Marcel and Kurf{\"u}rst, Dennis | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1426--1431 | The Frankfurt Image GestURE corpus (FIGURE) is introduced. The corpus data is collected in an experimental setting where 50 naive participants spontaneously produced gestures in response to five to six terms from a total of 27 stimulus terms. The stimulus terms have been compiled mainly from image schemata from psycholinguistics, since such schemata provide a panoply of abstract contents derived from natural language use. The gestures have been annotated for kinetic features. FIGURE aims at finding (sets of) stable kinetic feature configurations associated with the stimulus terms. Given such configurations, they can be used for designing HCI gestures that go beyond pre-defined gesture vocabularies or touchpad gestures. It is found, for instance, that movement trajectories are far more informative than handshapes, speaking against purely handshape-based HCI vocabularies. Furthermore, the mean temporal duration of hand and arm movements associated vary with the stimulus terms, indicating a dynamic dimension not covered by vocabulary-based approaches. Descriptive results are presented and related to findings from gesture studies and natural language dialogue. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,537
inproceedings | engelmann-etal-2016-interaction | An Interaction-Centric Dataset for Learning Automation Rules in Smart Homes | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1228/ | Engelmann, Kai Frederic and Holthaus, Patrick and Wrede, Britta and Wrede, Sebastian | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1432--1437 | The term smart home refers to a living environment that by its connected sensors and actuators is capable of providing intelligent and contextualised support to its user. This may result in automated behaviors that blend into the user`s daily life. However, currently most smart homes do not provide such intelligent support. A first step towards such intelligent capabilities lies in learning automation rules by observing the user`s behavior. We present a new type of corpus for learning such rules from user behavior as observed from the events in a smart home`s sensor and actuator network. The data contains information about intended tasks by the users and synchronized events from this network. It is derived from interactions of 59 users with the smart home in order to solve five tasks. The corpus contains recordings of more than 40 different types of data streams and has been segmented and pre-processed to increase signal quality. Overall, the data shows a high noise level on specific data types that can be filtered out by a simple smoothing approach. The resulting data provides insights into event patterns resulting from task specific user behavior and thus constitutes a basis for machine learning approaches to learn automation rules. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,538 |
inproceedings | becker-etal-2016-web | A Web Tool for Building Parallel Corpora of Spoken and Sign Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1229/ | Becker, Alex and Kepler, Fabio and Candeias, Sara | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1438--1445 | In this paper we describe our work in building an online tool for manually annotating texts in any spoken language with SignWriting in any sign language. The existence of such a tool will allow the creation of parallel corpora between spoken and sign languages that can be used to bootstrap the creation of efficient tools for the Deaf community. As an example, a parallel corpus between English and American Sign Language could be used for training Machine Learning models for automatic translation between the two languages. Clearly, this kind of tool must be designed in a way that it eases the task of human annotators, not only by being easy to use, but also by giving smart suggestions as the annotation progresses, in order to save time and effort. By building a collaborative, online, easy to use annotation tool for building parallel corpora between spoken and sign languages we aim at helping the development of proper resources for sign languages that can then be used in state-of-the-art models currently used in tools for spoken languages. There are several issues and difficulties in creating this kind of resource, and our presented tool already deals with some of them, like adequate text representation of a sign and many to many alignments between words and signs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,539 |
inproceedings | muzaffar-etal-2016-issues | Issues and Challenges in Annotating {U}rdu Action Verbs on the {IMAGACT}4{ALL} Platform | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1230/ | Muzaffar, Sharmin and Behera, Pitambar and Jha, Girish | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1446--1451 | In South-Asian languages such as Hindi and Urdu, action verbs having compound constructions and serial verb constructions pose serious problems for natural language processing and other linguistic tasks. Urdu is an Indo-Aryan language spoken by 51,500,000 speakers in India. Action verbs that occur spontaneously in day-to-day communication are highly ambiguous in nature semantically and as a consequence cause disambiguation issues that are relevant and applicable to Language Technologies (LT) like Machine Translation (MT) and Natural Language Processing (NLP). IMAGACT4ALL is an ontology-driven web-based platform developed by the University of Florence for storing action verbs and their inter-relations. This group is currently collaborating with Jawaharlal Nehru University (JNU) in India to connect Indian languages on this platform. Action verbs are frequently used in both written and spoken discourses and refer to various meanings because of their polysemic nature. The IMAGACT4ALL platform stores each 3d animation image, each one of them referring to a variety of possible ontological types, which in turn makes the annotation task for the annotator quite challenging with regard to selecting verb argument structure having a range of probability distribution. The authors, in this paper, discuss the issues and challenges such as complex predicates (compound and conjunct verbs), ambiguously animated video illustrations, semantic discrepancies, and the factors of verb-selection preferences that have produced significant problems in annotating Urdu verbs on the IMAGACT ontology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,540 |
inproceedings | xiao-etal-2016-domain | Domain Ontology Learning Enhanced by Optimized Relation Instance in {DB}pedia | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1231/ | Xiao, Liumingjing and Ruan, Chong and Yang, An and Zhang, Junhao and Hu, Junfeng | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1452--1456 | Ontologies are powerful in supporting semantic-based applications and intelligent systems. However, ontology learning is challenging due to the bottleneck of handcrafting structured knowledge sources and training data. To address this difficulty, many researchers turn to ontology enrichment and population using external knowledge sources such as DBpedia. In this paper, we propose a method using DBpedia in a different manner. We utilize relation instances in DBpedia to supervise the ontology learning procedure from unstructured text, rather than populate the ontology structure as a post-processing step. We construct three language resources in areas of computer science: enriched Wikipedia concept tree, domain ontology, and gold standard from NSFC taxonomy. Experiments show that the result of ontology learning from corpus of computer science can be improved via the relation instances extracted from DBpedia in the same field. Furthermore, making a distinction between the relation instances and applying a proper weighting scheme in the learning procedure lead to even better results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,541 |
inproceedings | johannessen-etal-2016-constructing | Constructing a {N}orwegian Academic Wordlist | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1232/ | Johannessen, Janne M and Saidi, Arash and Hagen, Kristin | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1457--1462 | We present the development of a Norwegian Academic Wordlist (AKA list) for the Norwegian Bokm{\"a}l variety. To identify specific academic vocabulary we developed a 100-million-word academic corpus based on the University of Oslo archive of digital publications. Other corpora were used for testing and developing general word lists. We tried two different methods, those of Carlund et al. (2012) and Gardner {\&} Davies (2013), and compared them. The resulting list is presented on a web site, where the words can be inspected in different ways, and freely downloaded. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,542 |
inproceedings | segers-etal-2016-event | The Event and Implied Situation Ontology ({ESO}): Application and Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1233/ | Segers, Roxane and Rospocher, Marco and Vossen, Piek and Laparra, Egoitz and Rigau, German and Minard, Anne-Lyse | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1463--1470 | This paper presents the Event and Implied Situation Ontology (ESO), a manually constructed resource which formalizes the pre and post situations of events and the roles of the entities affected by an event. The ontology is built on top of existing resources such as WordNet, SUMO and FrameNet. The ontology is injected into the Predicate Matrix, a resource that integrates predicate and role information from, amongst others, FrameNet, VerbNet, PropBank, NomBank and WordNet. We illustrate how these resources are used on large document collections to detect information that otherwise would have remained implicit. The ontology is evaluated on two aspects: first, recall and precision based on a manually annotated corpus and, secondly, the quality of the knowledge inferred by the situation assertions in the ontology. Evaluation results on the quality of the system show that 50{\%} of the events typed and enriched with ESO assertions are correct. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,543 |
inproceedings | sukhareva-chiarcos-2016-combining | Combining Ontologies and Neural Networks for Analyzing Historical Language Varieties. A Case Study in {M}iddle {L}ow {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1234/ | Sukhareva, Maria and Chiarcos, Christian | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1471--1480 | In this paper, we describe experiments on the morphosyntactic annotation of historical language varieties for the example of Middle Low German (MLG), the official language of the German Hanse during the Middle Ages and a dominant language around the Baltic Sea at the time. To the best of our knowledge, this is the first experiment in automatically producing morphosyntactic annotations for Middle Low German, and accordingly, no part-of-speech (POS) tagset is currently agreed upon. In our experiment, we illustrate how ontology-based specifications of projected annotations can be employed to circumvent this issue: Instead of training and evaluating against a given tagset, we decompose it into independent features which are predicted independently by a neural network. Using consistency constraints (axioms) from an ontology, then, the predicted feature probabilities are decoded into a sound ontological representation. Using these representations, we can finally bootstrap a POS tagset capturing only morphosyntactic features which could be reliably predicted. In this way, our approach is capable of optimizing precision and recall of morphosyntactic annotations simultaneously with bootstrapping a tagset, rather than performing iterative cycles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,544 |
inproceedings | girard-rivier-etal-2016-ecological | Ecological Gestures for {HRI}: the {GEE} Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1235/ | Girard-Rivier, Maxence and Magnani, Romain and Auberg{\'e}, V{\'e}ronique and Sasa, Yuko and Tsvetanova, Liliya and Aman, Fr{\'e}d{\'e}ric and Bayol, Clarisse | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1481--1484 | As part of a human-robot interaction project, we are interested in gestural modality as one of many ways to communicate, in order to develop a relevant gesture recognition system associated with a smart home butler robot. Our methodology is based on an IQ game-like Wizard of Oz experiment to collect spontaneous and implicitly produced gestures in an ecological context. During the experiment, the subject has to use non-verbal cues (i.e. gestures) to interact with a robot that is the referee. The subject is unaware that their gestures will be the focus of our study. In the second part of the experiment, we asked the subjects to reproduce the gestures they had produced in the experiment; those are the explicit gestures. The implicit gestures are compared with explicitly produced ones to determine a relevant ontology. This preliminary qualitative analysis will be the base to build a big data corpus in order to optimize acceptance of the gesture dictionary in coherence with the {\textquotedblleft}socio-affective glue{\textquotedblright} dynamics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,545 |
inproceedings | nazar-renau-2016-taxonomy | A Taxonomy of {S}panish Nouns, a Statistical Algorithm to Generate it and its Implementation in Open Source Code | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1236/ | Nazar, Rogelio and Renau, Irene | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1485--1492 | In this paper we describe our work in progress in the automatic development of a taxonomy of Spanish nouns, we offer the Perl implementation we have so far, and we discuss the different problems that still need to be addressed. We designed a statistically-based taxonomy induction algorithm consisting of a combination of different strategies not involving explicit linguistic knowledge. Being all quantitative, the strategies we present are however of different nature. Some of them are based on the computation of distributional similarity coefficients which identify pairs of sibling words or co-hyponyms, while others are based on asymmetric co-occurrence and identify pairs of parent-child words or hypernym-hyponym relations. A decision making process is then applied to combine the results of the previous steps, and finally connect lexical units to a basic structure containing the most general categories of the language. We evaluate the quality of the taxonomy both manually and also using Spanish Wordnet as a gold-standard. We estimate an average of 89.07{\%} precision and 25.49{\%} recall considering only the results which the algorithm presents with high degree of certainty, or 77.86{\%} precision and 33.72{\%} recall considering all results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,546 |
inproceedings | westpfahl-schmidt-2016-folk | {FOLK}-Gold {\textemdash} A Gold Standard for Part-of-Speech-Tagging of Spoken {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1237/ | Westpfahl, Swantje and Schmidt, Thomas | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1493--1499 | In this paper, we present a GOLD standard of part-of-speech tagged transcripts of spoken German. The GOLD standard data consists of four annotation layers {\textemdash} transcription (modified orthography), normalization (standard orthography), lemmatization and POS tags {\textemdash} all of which have undergone careful manual quality control. It comes with guidelines for the manual POS annotation of transcripts of German spoken data and an extended version of the STTS (Stuttgart T{\"u}bingen Tagset) which accounts for phenomena typically found in spontaneous spoken German. The GOLD standard was developed on the basis of the Research and Teaching Corpus of Spoken German, FOLK, and is, to our knowledge, the first such dataset based on a wide variety of spontaneous and authentic interaction types. It can be used as a basis for further development of language technology and corpus linguistic applications for German spoken language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,547 |
inproceedings | albogamy-ramsay-2016-fast | Fast and Robust {POS} tagger for {A}rabic Tweets Using Agreement-based Bootstrapping | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1238/ | Albogamy, Fahad and Ramsay, Allan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1500--1506 | Part-of-Speech (POS) tagging is a key step in many NLP algorithms. However, tweets are difficult to POS tag because they are short, are not always written maintaining formal grammar and proper spelling, and abbreviations are often used to overcome their restricted lengths. Arabic tweets also show a further range of linguistic phenomena such as usage of different dialects, romanised Arabic and borrowing foreign words. In this paper, we present an evaluation and a detailed error analysis of state-of-the-art POS taggers for Arabic when applied to Arabic tweets. On the basis of this analysis, we combine normalisation and external knowledge to handle the domain noisiness and exploit bootstrapping to construct extra training data in order to improve POS tagging for Arabic tweets. Our results show significant improvements over the performance of a number of well-known taggers for Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,548 |
inproceedings | eger-etal-2016-lemmatization | Lemmatization and Morphological Tagging in {G}erman and {L}atin: A Comparison and a Survey of the State-of-the-art | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1239/ | Eger, Steffen and Gleim, R{\"u}diger and Mehler, Alexander | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1507--1513 | This paper relates to the challenge of morphological tagging and lemmatization in morphologically rich languages by example of German and Latin. We focus on the question what a practitioner can expect when using state-of-the-art solutions out of the box. Moreover, we contrast these with old(er) methods and implementations for POS tagging. We examine to what degree recent efforts in tagger development are reflected by improved accuracies {\textemdash} and at what cost, in terms of training and processing time. We also conduct in-domain vs. out-domain evaluation. Out-domain evaluations are particularly insightful because the distribution of the data which is being tagged by a user will typically differ from the distribution on which the tagger has been trained. Furthermore, two lemmatization techniques are evaluated. Finally, we compare pipeline tagging vs. a tagging approach that acknowledges dependencies between inflectional categories. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,549 |
inproceedings | vor-der-bruck-mehler-2016-tlt | {TLT}-{CRF}: A Lexicon-supported Morphological Tagger for {L}atin Based on Conditional Random Fields | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1240/ | vor der Br{\"u}ck, Tim and Mehler, Alexander | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1514--1519 | We present a morphological tagger for Latin, called TTLab Latin Tagger based on Conditional Random Fields (TLT-CRF) which uses a large Latin lexicon. Beyond Part of Speech (PoS), TLT-CRF tags eight inflectional categories of verbs, adjectives or nouns. It utilizes a statistical model based on CRFs together with a rule interpreter that addresses scenarios of sparse training data. We present results of evaluating TLT-CRF to answer the question what can be learnt following the paradigm of 1st order CRFs in conjunction with a large lexical resource and a rule interpreter. Furthermore, we investigate the contingency of representational features and targeted parts of speech to learn about selective features. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,550 |
inproceedings | aufrant-etal-2016-cross-lingual | Cross-lingual and Supervised Models for Morphosyntactic Annotation: a Comparison on {R}omanian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1241/ | Aufrant, Lauriane and Wisniewski, Guillaume and Yvon, Fran{\c{c}}ois | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1520--1526 | Because of the small size of Romanian corpora, the performance of a PoS tagger or a dependency parser trained with the standard supervised methods falls far short of the performance achieved in most languages. That is why we apply state-of-the-art methods for cross-lingual transfer on Romanian tagging and parsing, from English and several Romance languages. We compare the performance with monolingual systems trained with sets of different sizes and establish that training on a few sentences in the target language yields better results than transferring from large datasets in other languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,551 |
inproceedings | ljubesic-erjavec-2016-corpus | Corpus vs. Lexicon Supervision in Morphosyntactic Tagging: the Case of {S}lovene | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1242/ | Ljube{\v{s}}i{\'c}, Nikola and Erjavec, Toma{\v{z}} | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1527--1531 | In this paper we present a tagger developed for inflectionally rich languages for which both a training corpus and a lexicon are available. We do not constrain the tagger by the lexicon entries, allowing both for lexicon incompleteness and noisiness. By using the lexicon indirectly through features we allow for known and unknown words to be tagged in the same manner. We test our tagger on Slovene data, obtaining a 25{\%} error reduction of the best previous results both on known and unknown words. Given that Slovene is, in comparison to some other Slavic languages, a well-resourced language, we perform experiments on the impact of token (corpus) vs. type (lexicon) supervision, obtaining useful insights in how to balance the effort of extending resources to yield better tagging results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,552 |
inproceedings | nguyen-etal-2016-challenges | Challenges and Solutions for Consistent Annotation of {V}ietnamese Treebank | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1243/ | Nguyen, Quy and Miyao, Yusuke and Le, Ha and Nguyen, Ngan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1532--1539 | Treebanks are important resources for researchers in natural language processing, speech recognition, theoretical linguistics, etc. To strengthen the automatic processing of the Vietnamese language, a Vietnamese treebank has been built. However, the quality of this treebank is not satisfactory and is a possible source for the low performance of Vietnamese language processing. We have been building a new treebank for Vietnamese with about 40,000 sentences annotated with three layers: word segmentation, part-of-speech tagging, and bracketing. In this paper, we describe several challenges of Vietnamese language and how we solve them in developing annotation guidelines. We also present our methods to improve the quality of the annotation guidelines and ensure annotation accuracy and consistency. Experiment results show that inter-annotator agreement ratios and accuracy are higher than 90{\%} which is satisfactory. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,553 |
inproceedings | suzuki-etal-2016-correcting | Correcting Errors in a Treebank Based on Tree Mining | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1244/ | Suzuki, Kanta and Kato, Yoshihide and Matsubara, Shigeki | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1540--1545 | This paper provides a new method to correct annotation errors in a treebank. The previous error correction method constructs a pseudo parallel corpus where incorrect partial parse trees are paired with correct ones, and extracts error correction rules from the parallel corpus. By applying these rules to a treebank, the method corrects errors. However, this method does not achieve wide coverage of error correction. To achieve wide coverage, our method adopts a different approach. In our method, we consider that an infrequent pattern which can be transformed to a frequent one is an annotation error pattern. Based on a tree mining technique, our method seeks such infrequent tree patterns, and constructs error correction rules each of which consists of an infrequent pattern and a corresponding frequent pattern. We conducted an experiment using the Penn Treebank. We obtained 1,987 rules which are not constructed by the previous method, and the rules achieved good precision. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,554 |
inproceedings | blache-etal-2016-4couv | 4{C}ouv: A New Treebank for {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1245/ | Blache, Philippe and de Montcheuil, Gr{\'e}goire and Pr{\'e}vot, Laurent and Rauzy, St{\'e}phane | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1546--1551 | The question of the type of text used as primary data in treebanks is of certain importance. First, it has an influence at the discourse level: an article is not organized in the same way as a novel or a technical document. Moreover, it also has consequences in terms of semantic interpretation: some types of texts can be easier to interpret than others. We present in this paper a new type of treebank which has the particularity of answering the specific needs of experimental linguistics. It is made of short texts (book backcovers) that present a strong coherence in their organization and can be rapidly interpreted. This type of text is adapted to short reading sessions, making it easy to acquire physiological data (e.g. eye movement, electroencephalography). Such a resource offers reliable data when looking for correlations between computational models and human language processing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,555 |
inproceedings | de-carvalho-etal-2016-cintil | {CINTIL} {D}ependency{B}ank {PREMIUM} - A Corpus of Grammatical Dependencies for {P}ortuguese | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1246/ | de Carvalho, Rita and Querido, Andreia and Campos, Marisa and Pereira, Rita Valadas and Silva, Jo{\~a}o and Branco, Ant{\'o}nio | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1552--1557 | This paper presents a new linguistic resource for the study and computational processing of Portuguese. CINTIL DependencyBank PREMIUM is a corpus of Portuguese news text, accurately manually annotated with a wide range of linguistic information (morpho-syntax, named entities, syntactic function and semantic roles), making it an invaluable resource, especially for the development and evaluation of data-driven natural language processing tools. The corpus is under active development, reaching 4,000 sentences in its current version. The paper also reports on the training and evaluation of a dependency parser over this corpus. CINTIL DependencyBank PREMIUM is freely available for research purposes through META-SHARE. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,556
inproceedings | muischnek-etal-2016-estonian | {E}stonian Dependency Treebank: from Constraint Grammar tagset to {U}niversal {D}ependencies | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1247/ | Muischnek, Kadri and M{\"u}{\"u}risep, Kaili and Puolakainen, Tiina | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1558--1565 | This paper presents the first version of the Estonian Universal Dependencies Treebank, which has been semi-automatically acquired from the Estonian Dependency Treebank and comprises ca 400,000 words (ca 30,000 sentences) representing the genres of fiction, newspapers and scientific writing. The article analyses the differences between the two annotation schemes and the conversion procedure to the Universal Dependencies format. The conversion has been conducted by manually created Constraint Grammar transfer rules. As the rules can consider unbounded context and include lexical information as well as both flat and tree-structure features at the same time, the method has proved to be reliable and flexible enough to handle most of the transformations. The automatic conversion procedure achieved LAS 95.2{\%}, UAS 96.3{\%} and LA 98.4{\%}. If punctuation marks were excluded from the calculations, we observed LAS 96.4{\%}, UAS 97.7{\%} and LA 98.2{\%}. Still, the refinement of the guidelines and methodology is needed in order to re-annotate some syntactic phenomena, e.g. inter-clausal relations. Although automatic rules usually make quite a good guess even in obscure conditions, some relations should be checked and annotated manually after the main conversion. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,557
inproceedings | dobrovoljc-nivre-2016-universal | The {U}niversal {D}ependencies Treebank of Spoken {S}lovenian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1248/ | Dobrovoljc, Kaja and Nivre, Joakim | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1566--1573 | This paper presents the construction of an open-source dependency treebank of spoken Slovenian, the first syntactically annotated collection of spontaneous speech in Slovenian. The treebank has been manually annotated using the Universal Dependencies annotation scheme, a one-layer syntactic annotation scheme with a high degree of cross-modality, cross-framework and cross-language interoperability. In this original application of the scheme to spoken language transcripts, we address a wide spectrum of syntactic particularities in speech, either by extending the scope of application of existing universal labels or by proposing new speech-specific extensions. The initial analysis of the resulting treebank and its comparison with the written Slovenian UD treebank confirms significant syntactic differences between the two language modalities, with spoken data consisting of shorter and more elliptic sentences, fewer and simpler nominal phrases, and more relations marking disfluencies, interaction, deixis and modality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,558
inproceedings | thu-etal-2016-introducing | Introducing the {A}sian Language Treebank ({ALT}) | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1249/ | Thu, Ye Kyaw and Pa, Win Pa and Utiyama, Masao and Finch, Andrew and Sumita, Eiichiro | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1574--1578 | This paper introduces the ALT project initiated by the Advanced Speech Translation Research and Development Promotion Center (ASTREC), NICT, Kyoto, Japan. The aim of this project is to accelerate NLP research for Asian languages such as Indonesian, Japanese, Khmer, Laos, Malay, Myanmar, Philippine, Thai and Vietnamese. The original resource for this project was English articles that were randomly selected from Wikinews. The project has so far created a corpus for Myanmar and will extend in scope to include other languages in the near future. A 20000-sentence corpus of Myanmar that has been manually translated from an English corpus has been word segmented, word aligned, part-of-speech tagged and constituency parsed by human annotators. In this paper, we present the implementation steps for creating the treebank in detail, including a description of the ALT web-based treebanking tool. Moreover, we report statistics on the annotation quality of the Myanmar treebank created so far. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,559 |
inproceedings | ovrelid-hohle-2016-universal | {U}niversal {D}ependencies for {N}orwegian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1250/ | {\O}vrelid, Lilja and Hohle, Petter | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1579--1585 | This article describes the conversion of the Norwegian Dependency Treebank to the Universal Dependencies scheme. This paper details the mapping of PoS tags, morphological features and dependency relations and provides a description of the structural changes made to NDT analyses in order to make it compliant with the UD guidelines. We further present PoS tagging and dependency parsing experiments which report first results for the processing of the converted treebank. The full converted treebank was made available with the 1.2 release of the UD treebanks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,560 |
inproceedings | rehm-etal-2016-fostering | Fostering the Next Generation of {E}uropean Language Technology: Recent Developments {\textemdash} Emerging Initiatives {\textemdash} Challenges and Opportunities | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1251/ | Rehm, Georg and Haji{\v{c}}, Jan and van Genabith, Josef and Vasiljevs, Andrejs | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1586--1592 | META-NET is a European network of excellence, founded in 2010, that consists of 60 research centres in 34 European countries. One of the key visions and goals of META-NET is a truly multilingual Europe, which is substantially supported and realised through language technologies. In this article we provide an overview of recent developments around the multilingual Europe topic, we also describe recent and upcoming events as well as recent and upcoming strategy papers. Furthermore, we provide overviews of two new emerging initiatives, the CEF.AT and ELRC activity on the one hand and the Cracking the Language Barrier federation on the other. The paper closes with several suggested next steps in order to address the current challenges and to open up new opportunities. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,561 |
inproceedings | fort-couillault-2016-yes | Yes, We Care! Results of the Ethics and Natural Language Processing Surveys | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1252/ | Fort, Kar{\"e}n and Couillault, Alain | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1593--1600 | We present here the context and results of two surveys (a French one and an international one) concerning Ethics and NLP, which we designed and conducted between June and September 2015. These surveys follow other actions related to raising concern for ethics in our community, including a Journ{\'e}e d'{\'e}tudes, a workshop and the Ethics and Big Data Charter. The concern for ethics proves to be quite similar in both surveys, despite a few differences which we present and discuss. The surveys also suggest there is a growing awareness in the field concerning ethical issues, which translates into a willingness to get involved in ethics-related actions, to debate about the topic and to see ethics be included in major conference themes. We finally discuss the limits of the surveys and the means of action we consider for the future. The raw data from the two surveys are freely available online. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,562
inproceedings | lewis-etal-2016-open | Open Data Vocabularies for Assigning Usage Rights to Data Resources from Translation Projects | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1253/ | Lewis, David and Fatema, Kaniz and Maldonado, Alfredo and Walshe, Brian and Calvo, Arturo | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1601--1609 | An assessment of the intellectual property requirements for data used in machine-aided translation is provided based on a recent EC-funded legal review. This is compared against the capabilities offered by current linked open data standards from the W3C for publishing and sharing translation memories from translation projects, and proposals for adequately addressing the intellectual property needs of stakeholders in translation projects using open data vocabularies are suggested. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,563 |
inproceedings | mapelli-etal-2016-language | Language Resource Citation: the {ISLRN} Dissemination and Further Developments | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1254/ | Mapelli, Val{\'e}rie and Popescu, Vladimir and Liu, Lin and Choukri, Khalid | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1610--1613 | This article presents the latest dissemination activities and technical developments that were carried out for the International Standard Language Resource Number (ISLRN) service. It also recalls the main principle and submission process for providers to obtain their 13-digit ISLRN identifier. Up to March 2016, 2100 Language Resources were allocated an ISLRN number, not only ELRA`s and LDC`s catalogued Language Resources, but also the ones from other important organisations like the Joint Research Centre (JRC) and the Resource Management Agency (RMA), who expressed their strong support for this initiative. In the research field, not only assigning a unique identification number is important, but also referring to a Language Resource as an object \textit{per se} (like publications) has now become an obvious requirement. The ISLRN could also become an important parameter to be considered to compute a Language Resource Impact Factor (LRIF) in order to recognize the merits of the producers of Language Resources. Integrating the ISLRN number into a LR-oriented bibliographical reference is thus part of the objective. The idea is to make use of a BibTeX entry that would take into account Language Resource items, including the ISLRN. As the ISLRN is a requested field within the LREC 2016 submission process, we expect that several other LRs will be allocated an ISLRN number by the conference date. With this expansion, the ISLRN aims to become a widely used LR citation instrument in works referring to LRs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,564
inproceedings | dipersio-cieri-2016-trends | Trends in {HLT} Research: A Survey of {LDC}`s Data Scholarship Program | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1255/ | DiPersio, Denise and Cieri, Christopher | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1614--1618 | Since its inception in 2010, the Linguistic Data Consortium`s data scholarship program has awarded no-cost grants of data to 64 recipients from 26 countries. A survey of the twelve cycles to date {\textemdash} two awards each in the Fall and Spring semesters from Fall 2010 through Spring 2016 {\textemdash} yields an interesting view into graduate program research trends in human language technology and related fields and the particular data sets deemed important to support that research. The survey also reveals regions in which such activity appears to be on the rise, including in Arabic-speaking regions and portions of the Americas and Asia. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,565
inproceedings | bosco-etal-2016-tweeting | Tweeting and Being Ironic in the Debate about a Political Reform: the {F}rench Annotated Corpus {TW}itter-{M}ariage{P}our{T}ous | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1256/ | Bosco, Cristina and Lai, Mirko and Patti, Viviana and Virone, Daniela | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1619--1626 | The paper introduces a new annotated French data set for Sentiment Analysis, which is a currently missing resource. It focuses on the collection from Twitter of data related to the socio-political debate about the reform of the bill for wedding in France. The design of the annotation scheme is described, which extends a polarity label set by making available tags for marking target semantic areas and figurative language devices. The annotation process is presented and the disagreement discussed, in particular, in the perspective of figurative language use and in that of the semantic oriented annotation, which are open challenges for NLP systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,566 |
inproceedings | verhoeven-etal-2016-twisty | {T}wi{S}ty: A Multilingual {T}witter Stylometry Corpus for Gender and Personality Profiling | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1258/ | Verhoeven, Ben and Daelemans, Walter and Plank, Barbara | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1632--1637 | Personality profiling is the task of detecting personality traits of authors based on writing style. Several personality typologies exist; however, the Myers-Briggs Type Indicator (MBTI) is particularly popular in the non-scientific community, and many people use it to analyse their own personality and talk about the results online. Therefore, large amounts of self-assessed data on MBTI are readily available on social-media platforms such as Twitter. We present a novel corpus of tweets annotated with the MBTI personality type and gender of their author for six Western European languages (Dutch, German, French, Italian, Portuguese and Spanish). We outline the corpus creation and annotation, show statistics of the obtained data distributions and present first baselines on Myers-Briggs personality profiling and gender prediction for all six languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,568
inproceedings | imran-etal-2016-twitter | {T}witter as a Lifeline: Human-annotated {T}witter Corpora for {NLP} of Crisis-related Messages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1259/ | Imran, Muhammad and Mitra, Prasenjit and Castillo, Carlos | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1638--1643 | Microblogging platforms such as Twitter provide active communication channels during mass convergence and emergency events such as earthquakes and typhoons. During the sudden onset of a crisis situation, affected people post useful information on Twitter that can be used for situational awareness and other humanitarian disaster response efforts, if processed in a timely and effective manner. Processing social media information poses multiple challenges, such as parsing noisy, brief and informal messages, learning information categories from the incoming stream of messages, and classifying them into different classes, among others. One of the basic necessities of many of these tasks is the availability of data, in particular human-annotated data. In this paper, we present human-annotated Twitter corpora collected during 19 different crises that took place between 2013 and 2015. To demonstrate the utility of the annotations, we train machine learning classifiers. Moreover, we publish the largest word2vec word embeddings to date, trained on 52 million crisis-related tweets. To deal with the language issues of tweets, we present human-annotated normalized lexical resources for different lexical variations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,569
inproceedings | begum-etal-2016-functions | Functions of Code-Switching in Tweets: An Annotation Framework and Some Initial Experiments | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1260/ | Begum, Rafiya and Bali, Kalika and Choudhury, Monojit and Rudra, Koustav and Ganguly, Niloy | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1644--1650 | Code-Switching (CS) between two languages is extremely common in communities with societal multilingualism where speakers switch between two or more languages when interacting with each other. CS has been extensively studied in spoken language by linguists for several decades but with the popularity of social-media and less formal Computer Mediated Communication, we now see a big rise in the use of CS in the text form. This poses interesting challenges and a need for computational processing of such code-switched data. As with any Computational Linguistic analysis and Natural Language Processing tools and applications, we need annotated data for understanding, processing, and generation of code-switched language. In this study, we focus on CS between English and Hindi Tweets extracted from the Twitter stream of Hindi-English bilinguals. We present an annotation scheme for annotating the pragmatic functions of CS in Hindi-English (Hi-En) code-switched tweets based on a linguistic analysis and some initial experiments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,570 |
inproceedings | tanaka-etal-2016-universal | {U}niversal {D}ependencies for {J}apanese | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1261/ | Tanaka, Takaaki and Miyao, Yusuke and Asahara, Masayuki and Uematsu, Sumire and Kanayama, Hiroshi and Mori, Shinsuke and Matsumoto, Yuji | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1651--1658 | We present an attempt to port the international syntactic annotation scheme, Universal Dependencies, to the Japanese language in this paper. Since the Japanese syntactic structure is usually annotated on the basis of unique chunk-based dependencies, we first introduce word-based dependencies by using a word unit called the Short Unit Word, which usually corresponds to an entry in the lexicon UniDic. Porting is done by mapping the part-of-speech tagset in UniDic to the universal part-of-speech tagset, and converting a constituent-based treebank to a typed dependency tree. The conversion is not straightforward, and we discuss the problems that arose in the conversion and the current solutions. A treebank consisting of 10,000 sentences was built by converting the existent resources and currently released to the public. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,571 |
inproceedings | nivre-etal-2016-universal | {U}niversal {D}ependencies v1: A Multilingual Treebank Collection | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1262/ | Nivre, Joakim and de Marneffe, Marie-Catherine and Ginter, Filip and Goldberg, Yoav and Haji{\v{c}}, Jan and Manning, Christopher D. and McDonald, Ryan and Petrov, Slav and Pyysalo, Sampo and Silveira, Natalia and Tsarfaty, Reut and Zeman, Daniel | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1659--1666 | Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,572 |
inproceedings | kato-etal-2016-construction | Construction of an {E}nglish Dependency Corpus incorporating Compound Function Words | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1263/ | Kato, Akihiko and Shindo, Hiroyuki and Matsumoto, Yuji | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1667--1671 | The recognition of multiword expressions (MWEs) in a sentence is important for such linguistic analyses as syntactic and semantic parsing, because it is known that combining an MWE into a single token improves accuracy for various NLP tasks, such as dependency parsing and constituency parsing. However, MWEs are not annotated in Penn Treebank. Furthermore, when converting word-based dependency to MWE-aware dependency directly, one could combine nodes in an MWE into a single node. Nevertheless, this method often leads to the following problem: A node derived from an MWE could have multiple heads and the whole dependency structure including MWE might be cyclic. Therefore we converted a phrase structure to a dependency structure after establishing an MWE as a single subtree. This approach can avoid an occurrence of multiple heads and/or cycles. In this way, we constructed an English dependency corpus taking into account compound function words, which are one type of MWEs that serve as functional expressions. In addition, we report experimental results of dependency parsing using a constructed corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,573 |
inproceedings | simi-attardi-2016-adapting | Adapting the {TANL} tool suite to {U}niversal {D}ependencies | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1264/ | Simi, Maria and Attardi, Giuseppe | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1672--1678 | TANL is a suite of tools for text analytics based on the software architecture paradigm of data driven pipelines. The strategies for upgrading TANL to the use of Universal Dependencies range from a minimalistic approach consisting of introducing pre/post-processing steps into the native pipeline to revising the whole pipeline. We explore the issue in the context of the Italian Treebank, considering both the efforts involved, how to avoid losing linguistically relevant information and the loss of accuracy in the process. In particular we compare different strategies for parsing and discuss the implications of simplifying the pipeline when detailed part-of-speech and morphological annotations are not available, as it is the case for less resourceful languages. The experiments are relative to the Italian linguistic pipeline, but the use of different parsers in our evaluations and the avoidance of language specific tagging make the results general enough to be useful in helping the transition to UD for other languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,574 |
inproceedings | wong-lee-2016-dependency | A Dependency Treebank of the {C}hinese Buddhist Canon | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1265/ | Wong, Tak-sum and Lee, John | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1679--1683 | We present a dependency treebank of the Chinese Buddhist Canon, which contains 1,514 texts with about 50 million Chinese characters. The treebank was created by an automatic parser trained on a smaller treebank, containing four manually annotated sutras (Lee and Kong, 2014). We report results on word segmentation, part-of-speech tagging and dependency parsing, and discuss challenges posed by the processing of medieval Chinese. In a case study, we exploit the treebank to examine verbs frequently associated with Buddha, and to analyze usage patterns of quotative verbs in direct speech. Our results suggest that certain quotative verbs imply status differences between the speaker and the listener. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,575 |
inproceedings | lossio-ventura-etal-2016-automatic | Automatic Biomedical Term Polysemy Detection | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1266/ | Lossio-Ventura, Juan Antonio and Jonquet, Clement and Roche, Mathieu and Teisseire, Maguelonne | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1684--1688 | Polysemy is the capacity for a word to have multiple meanings. Polysemy detection is a first step for Word Sense Induction (WSI), which allows to find different meanings for a term. The polysemy detection is also important for information extraction (IE) systems. In addition, the polysemy detection is important for building/enriching terminologies and ontologies. In this paper, we present a novel approach to detect if a biomedical term is polysemic, with the long term goal of enriching biomedical ontologies. This approach is based on the extraction of new features. In this context we propose to extract features following two manners: (i) extracted directly from the text dataset, and (ii) from an induced graph. Our method obtains an Accuracy and F-Measure of 0.978. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,576 |
inproceedings | alagic-snajder-2016-cro36wsd | {C}ro36{WSD}: A Lexical Sample for {C}roatian Word Sense Disambiguation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1267/ | Alagi{\'c}, Domagoj and {\v{S}}najder, Jan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1689--1694 | We introduce Cro36WSD, a freely-available medium-sized lexical sample for Croatian word sense disambiguation (WSD).Cro36WSD comprises 36 words: 12 adjectives, 12 nouns, and 12 verbs, balanced across both frequency bands and polysemy levels. We adopt the multi-label annotation scheme in the hope of lessening the drawbacks of discrete sense inventories and obtaining more realistic annotations from human experts. Sense-annotated data is collected through multiple annotation rounds to ensure high-quality annotations: with a 115 person-hours effort we reached an inter-annotator agreement score of 0.877. We analyze the obtained data and perform a correlation analysis between several relevant variables, including word frequency, number of senses, sense distribution skewness, average annotation time, and the observed inter-annotator agreement (IAA). Using the obtained data, we compile multi- and single-labeled dataset variants using different label aggregation schemes. Finally, we evaluate three different baseline WSD models on both dataset variants and report on the insights gained. We make both dataset variants freely available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,577 |
inproceedings | postma-etal-2016-addressing | Addressing the {MFS} Bias in {WSD} systems | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1268/ | Postma, Marten and Izquierdo, Ruben and Agirre, Eneko and Rigau, German and Vossen, Piek | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1695--1700 | Word Sense Disambiguation (WSD) systems tend to have a strong bias towards assigning the Most Frequent Sense (MFS), which results in high performance on the MFS but in a very low performance on the less frequent senses. We addressed the MFS bias in WSD systems by combining the output from a WSD system with a set of mostly static features to create a MFS classifier to decide when to and not to choose the MFS. The output from this MFS classifier, which is based on the Random Forest algorithm, is then used to modify the output from the original WSD system. We applied our classifier to one of the state-of-the-art supervised WSD systems, i.e. IMS, and to one of the best state-of-the-art unsupervised WSD systems, i.e. UKB. Our main finding is that we are able to improve the system output in terms of choosing between the MFS and the less frequent senses. When we apply the MFS classifier to fine-grained WSD, we observe an improvement on the less frequent sense cases, whereas we maintain the overall recall. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,578 |
inproceedings | camacho-collados-etal-2016-large | A Large-Scale Multilingual Disambiguation of Glosses | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1269/ | Camacho-Collados, Jos{\'e} and Delli Bovi, Claudio and Raganato, Alessandro and Navigli, Roberto | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1701--1708 | Linking concepts and named entities to knowledge bases has become a crucial Natural Language Understanding task. In this respect, recent works have shown the key advantage of exploiting textual definitions in various Natural Language Processing applications. However, to date there are no reliable large-scale corpora of sense-annotated textual definitions available to the research community. In this paper we present a large-scale high-quality corpus of disambiguated glosses in multiple languages, comprising sense annotations of both concepts and named entities from a unified sense inventory. Our approach for the construction and disambiguation of the corpus builds upon the structure of a large multilingual semantic network and a state-of-the-art disambiguation system; first, we gather complementary information of equivalent definitions across different languages to provide context for disambiguation, and then we combine it with a semantic similarity-based refinement. As a result we obtain a multilingual corpus of textual definitions featuring over 38 million definitions in 263 languages, and we make it freely available at \url{http://lcl.uniroma1.it/disambiguated-glosses}. Experiments on Open Information Extraction and Sense Clustering show how two state-of-the-art approaches improve their performance by integrating our disambiguated corpus into their pipeline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,579 |
inproceedings | ecker-etal-2016-unsupervised | Unsupervised Ranked Cross-Lingual Lexical Substitution for Low-Resource Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1270/ | Ecker, Stefan and Horbach, Andrea and Thater, Stefan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1709--1717 | We propose an unsupervised system for a variant of cross-lingual lexical substitution (CLLS) to be used in a reading scenario in computer-assisted language learning (CALL), in which single-word translations provided by a dictionary are ranked according to their appropriateness in context. In contrast to most alternative systems, ours does not rely on either parallel corpora or machine translation systems, making it suitable for low-resource languages as the language to be learned. This is achieved by a graph-based scoring mechanism which can deal with ambiguous translations of context words provided by a dictionary. Due to this decoupling from the source language, we need monolingual corpus resources only for the target language, i.e. the language of the translation candidates. We evaluate our approach for the language pair Norwegian Nynorsk-English on an exploratory manually annotated gold standard and report promising results. When running our system on the original SemEval CLLS task, we rank 6th out of 18 (including 2 baselines and our 2 system variants) in the best evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,580 |
inproceedings | stede-mamprin-2016-information | Information structure in the {P}otsdam Commentary Corpus: Topics | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1271/ | Stede, Manfred and Mamprin, Sara | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1718--1723 | The Potsdam Commentary Corpus is a collection of 175 German newspaper commentaries annotated on a variety of different layers. This paper introduces a new layer that covers the linguistic notion of information-structural topic (not to be confused with {\textquoteleft}topic' as applied to documents in information retrieval). To our knowledge, this is the first larger topic-annotated resource for German (and one of the first for any language). We describe the annotation guidelines and the annotation process, and the results of an inter-annotator agreement study, which compare favourably to the related work. The annotated corpus is freely available for research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,581 |
inproceedings | read-etal-2016-corpus | A Corpus of Clinical Practice Guidelines Annotated with the Importance of Recommendations | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1272/ | Read, Jonathon and Velldal, Erik and Cavazza, Marc and Georg, Gersende | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1724--1731 | In this paper we present the Corpus of REcommendation STrength (CREST), a collection of HTML-formatted clinical guidelines annotated with the location of recommendations. Recommendations are labelled with an author-provided indicator of their strength of importance. As data was drawn from many disparate authors, we define a unified scheme of importance labels, and provide a mapping for each guideline. We demonstrate the utility of the corpus and its annotations in some initial measurements investigating the type of language constructions associated with strong and weak recommendations, and experiments into promising features for recommendation classification, both with respect to strong and weak labels, and to all labels of the unified scheme. An error analysis indicates that, while there is a strong relationship between lexical choices and strength labels, there can be substantial variance in the choices made by different authors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,582 |
inproceedings | isard-2016-methodius | The Methodius Corpus of Rhetorical Discourse Structures and Generated Texts | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1273/ | Isard, Amy | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1732--1736 | Using the Methodius Natural Language Generation (NLG) System, we have created a corpus which consists of a collection of generated texts which describe ancient Greek artefacts. Each text is linked to two representations created as part of the NLG process. The first is a content plan, which uses rhetorical relations to describe the high-level discourse structure of the text, and the second is a logical form describing the syntactic structure, which is sent to the OpenCCG surface realization module to produce the final text output. In recent work, White and Howcroft (2015) have used the SPaRKy restaurant corpus, which contains similar combination of texts and representations, for their research on the induction of rules for the combination of clauses. In the first instance this corpus will be used to test their algorithms on an additional domain, and extend their work to include the learning of referring expression generation rules. As far as we know, the SPaRKy restaurant corpus is the only existing corpus of this type, and we hope that the creation of this new corpus in a different domain will provide a useful resource to the Natural Language Generation community. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,583 |
inproceedings | duma-etal-2016-applying | Applying Core Scientific Concepts to Context-Based Citation Recommendation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1274/ | Duma, Daniel and Liakata, Maria and Clare, Amanda and Ravenscroft, James and Klein, Ewan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 1737--1742 | The task of recommending relevant scientific literature for a draft academic paper has recently received significant interest. In our effort to ease the discovery of scientific literature and augment scientific writing, we aim to improve the relevance of results based on a shallow semantic analysis of the source document and the potential documents to recommend. We investigate the utility of automatic argumentative and rhetorical annotation of documents for this purpose. Specifically, we integrate automatic Core Scientific Concepts (CoreSC) classification into a prototype context-based citation recommendation system and investigate its usefulness to the task. We frame citation recommendation as an information retrieval task and we use the categories of the annotation schemes to apply different weights to the similarity formula. Our results show interesting and consistent correlations between the type of citation and the type of sentence containing the relevant information. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,584 |