Schema note: 38 columns, all strings except __index_level_0__ (int64, ranging 22k–106k); title, author, pages and doi are nullable; year values span 1963–2022. The trailing columns from journal through note (journal, volume, doi, n, wer, uas, language, isbn, recall, number, a, b, c, k, f1, r, mci, p, sd, female, m, food, f, note) are null in every row shown below.

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | anwar-etal-2016-proposition | A {P}roposition {B}ank of {U}rdu | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1377/ | Anwar, Maaz and Bhat, Riyaz Ahmad and Sharma, Dipti and Vaidya, Ashwini and Palmer, Martha and Khan, Tafseer Ahmed | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2379--2386 | This paper describes our efforts for the development of a Proposition Bank for Urdu, an Indo-Aryan language. Our primary goal is the labeling of syntactic nodes in the existing Urdu dependency Treebank with specific argument labels. In essence, it involves annotation of predicate argument structures of both simple and complex predicates in the Treebank corpus. We describe the overall process of building the PropBank of Urdu. We discuss various statistics pertaining to the Urdu PropBank and the issues which the annotators encountered while developing the PropBank. We also discuss how these challenges were addressed to successfully expand the PropBank corpus. While reporting the Inter-annotator agreement between the two annotators, we show that the annotators share similar understanding of the annotation guidelines and of the linguistic phenomena present in the language. The present size of this Propbank is around 180,000 tokens which is double-propbanked by the two annotators for simple predicates. Another 100,000 tokens have been annotated for complex predicates of Urdu. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,687 |
inproceedings | kriz-etal-2016-czech | {C}zech Legal Text Treebank 1.0 | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1378/ | Kr{\'i}{\v{z}}, Vincent and Hladk{\'a}, Barbora and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2387--2392 | We introduce a new member of the family of Prague dependency treebanks. The Czech Legal Text Treebank 1.0 is a morphologically and syntactically annotated corpus of 1,128 sentences. The treebank contains texts from the legal domain, namely the documents from the Collection of Laws of the Czech Republic. Legal texts differ from other domains in several language phenomena influenced by rather high frequency of very long sentences. A manual annotation of such sentences presents a new challenge. We describe a strategy and tools for this task. The resulting treebank can be explored in various ways. It can be downloaded from the LINDAT/CLARIN repository and viewed locally using the TrEd editor or it can be accessed on-line using the KonText and TreeQuery tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,688 |
inproceedings | list-etal-2016-concepticon | {C}oncepticon: A Resource for the Linking of Concept Lists | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1379/ | List, Johann-Mattis and Cysouw, Michael and Forkel, Robert | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2393--2400 | We present an attempt to link the large amount of different concept lists which are used in the linguistic literature, ranging from Swadesh lists in historical linguistics to naming tests in clinical studies and psycholinguistics. This resource, our Concepticon, links 30 222 concept labels from 160 concept lists to 2495 concept sets. Each concept set is given a unique identifier, a unique label, and a human-readable definition. Concept sets are further structured by defining different relations between the concepts. The resource can be used for various purposes. Serving as a rich reference for new and existing databases in diachronic and synchronic linguistics, it allows researchers a quick access to studies on semantic change, cross-linguistic polysemies, and semantic associations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,689 |
inproceedings | falk-stein-2016-lvf | {LVF}-lemon {\textemdash} Towards a Linked Data Representation of {\textquotedblleft}Les Verbes fran{\c{c}}ais{\textquotedblright} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1380/ | Falk, Ingrid and Stein, Achim | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2401--2406 | In this study we elaborate a road map for the conversion of a traditional lexical syntactico-semantic resource for French into a linguistic linked open data (LLOD) model. Our approach uses current best-practices and the analyses of earlier similar undertakings (lemonUBY and PDEV-lemon) to tease out the most appropriate representation for our resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,690 |
inproceedings | galvan-etal-2016-riddle | Riddle Generation using Word Associations | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1381/ | Galv{\'a}n, Paloma and Francisco, Virginia and Herv{\'a}s, Raquel and M{\'e}ndez, Gonzalo | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2407--2412 | In knowledge bases where concepts have associated properties, there is a large amount of comparative information that is implicitly encoded in the values of the properties these concepts share. Although there have been previous approaches to generating riddles, none of them seem to take advantage of structured information stored in knowledge bases such as Thesaurus Rex, which organizes concepts according to the fine grained ad-hoc categories they are placed into by speakers in everyday language, along with associated properties or modifiers. Taking advantage of these shared properties, we have developed a riddle generator that creates riddles about concepts represented as common nouns. The base of these riddles are comparisons between the target concept and other entities that share some of its properties. In this paper, we describe the process we have followed to generate the riddles starting from the target concept and we show the results of the first evaluation we have carried out to test the quality of the resulting riddles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,691 |
inproceedings | rudnicka-etal-2016-challenges | Challenges of Adjective Mapping between pl{W}ord{N}et and {P}rinceton {W}ord{N}et | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1382/ | Rudnicka, Ewa and Witkowski, Wojciech and Podlaska, Katarzyna | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2413--2418 | The paper presents the strategy and results of mapping adjective synsets between plWordNet (the wordnet of Polish, cf. Piasecki et al. 2009, Maziarz et al. 2013) and Princeton WordNet (cf. Fellbaum 1998). The main challenge of this enterprise has been very different synset relation structures in the two networks: horizontal, dumbbell-model based in PWN and vertical, hyponymy-based in plWN. Moreover, the two wordnets display differences in the grouping of adjectives into semantic domains and in the size of the adjective category. To handle the above contrasts, a series of automatic prompt algorithms and a manual mapping procedure relying on corresponding synset and lexical unit relations as well as on inter-lingual relations between noun synsets were proposed in the pilot stage of mapping (Rudnicka et al. 2015). In the paper we discuss the final results of the mapping process as well as explain example mapping choices. Suggestions for further development of mapping are also given. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,692 |
inproceedings | gabryszak-etal-2016-relation | Relation- and Phrase-level Linking of {F}rame{N}et with Sar-graphs | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1383/ | Gabryszak, Aleksandra and Krause, Sebastian and Hennig, Leonhard and Xu, Feiyu and Uszkoreit, Hans | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2419--2424 | Recent research shows the importance of linking linguistic knowledge resources for the creation of large-scale linguistic data. We describe our approach for combining two English resources, FrameNet and sar-graphs, and illustrate the benefits of the linked data in a relation extraction setting. While FrameNet consists of schematic representations of situations, linked to lexemes and their valency patterns, sar-graphs are knowledge resources that connect semantic relations from factual knowledge graphs to the linguistic phrases used to express instances of these relations. We analyze the conceptual similarities and differences of both resources and propose to link sar-graphs and FrameNet on the levels of relations/frames as well as phrases. The former alignment involves a manual ontology mapping step, which allows us to extend sar-graphs with new phrase patterns from FrameNet. The phrase-level linking, on the other hand, is fully automatic. We investigate the quality of the automatically constructed links and identify two main classes of errors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,693 |
inproceedings | indig-etal-2016-mapping | Mapping Ontologies Using Ontologies: Cross-lingual Semantic Role Information Transfer | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1384/ | Indig, Bal{\'a}zs and Mih{\'a}ltz, M{\'a}rton and Simonyi, Andr{\'a}s | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2425--2430 | This paper presents the process of enriching the verb frame database of a Hungarian natural language parser to enable the assignment of semantic roles. We accomplished this by linking the parser`s verb frame database to existing linguistic resources such as VerbNet and WordNet, and automatically transferring back semantic knowledge. We developed OWL ontologies that map the various constraint description formalisms of the linked resources and employed a logical reasoning device to facilitate the linking procedure. We present results and discuss the challenges and pitfalls that arose from this undertaking. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,694 |
inproceedings | harige-buitelaar-2016-generating | Generating a Large-Scale Entity Linking Dictionary from {W}ikipedia Link Structure and Article Text | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1385/ | Harige, Ravindra and Buitelaar, Paul | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2431--2434 | Wikipedia has been increasingly used as a knowledge base for open-domain Named Entity Linking and Disambiguation. In this task, a dictionary with entity surface forms plays an important role in finding a set of candidate entities for the mentions in text. Existing dictionaries mostly rely on the Wikipedia link structure, like anchor texts, redirect links and disambiguation links. In this paper, we introduce a dictionary for Entity Linking that includes name variations extracted from Wikipedia article text, in addition to name variations derived from the Wikipedia link structure. With this approach, we show an increase in the coverage of entities and their mentions in the dictionary in comparison to other Wikipedia based dictionaries. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,695 |
inproceedings | mccrae-etal-2016-open | The Open Linguistics Working Group: Developing the Linguistic Linked Open Data Cloud | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1386/ | McCrae, John Philip and Chiarcos, Christian and Bond, Francis and Cimiano, Philipp and Declerck, Thierry and de Melo, Gerard and Gracia, Jorge and Hellmann, Sebastian and Klimek, Bettina and Moran, Steven and Osenova, Petya and Pareja-Lora, Antonio and Pool, Jonathan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2435--2441 | The Open Linguistics Working Group (OWLG) brings together researchers from various fields of linguistics, natural language processing, and information technology to present and discuss principles, case studies, and best practices for representing, publishing and linking linguistic data collections. A major outcome of our work is the Linguistic Linked Open Data (LLOD) cloud, an LOD (sub-)cloud of linguistic resources, which covers various linguistic databases, lexicons, corpora, terminologies, and metadata repositories. We present and summarize five years of progress on the development of the cloud and of advancements in open data in linguistics, and we describe recent community activities. The paper aims to serve as a guideline to orient and involve researchers with the community and/or Linguistic Linked Open Data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,696 |
inproceedings | lesnikova-etal-2016-cross | Cross-lingual {RDF} Thesauri Interlinking | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1387/ | Lesnikova, Tatiana and David, J{\'e}r{\^o}me and Euzenat, J{\'e}r{\^o}me | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2442--2449 | Various lexical resources are being published in RDF. To enhance the usability of these resources, identical resources in different data sets should be linked. If lexical resources are described in different natural languages, then techniques to deal with multilinguality are required for interlinking. In this paper, we evaluate machine translation for interlinking concepts, i.e., generic entities named with a common noun or term. In our previous work, the evaluated method has been applied on named entities. We conduct two experiments involving different thesauri in different languages. The first experiment involves concepts from the TheSoz multilingual thesaurus in three languages: English, French and German. The second experiment involves concepts from the EuroVoc and AGROVOC thesauri in English and Chinese respectively. Our results demonstrate that machine translation can be beneficial for cross-lingual thesauri interlinking independently of a dataset structure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,697 |
inproceedings | rehm-2016-language | The Language Resource Life Cycle: Towards a Generic Model for Creating, Maintaining, Using and Distributing Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1388/ | Rehm, Georg | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2450--2454 | Language Resources (LRs) are an essential ingredient of current approaches in Linguistics, Computational Linguistics, Language Technology and related fields. LRs are collections of spoken or written language data, typically annotated with linguistic analysis information. Different types of LRs exist, for example, corpora, ontologies, lexicons, collections of spoken language data (audio), or collections that also include video (multimedia, multimodal). Often, LRs are distributed with specific tools, documentation, manuals or research publications. The different phases that involve creating and distributing an LR can be conceptualised as a life cycle. While the idea of handling the LR production and maintenance process in terms of a life cycle has been brought up quite some time ago, a best practice model or common approach can still be considered a research gap. This article wants to help fill this gap by proposing an initial version of a generic Language Resource Life Cycle that can be used to inform, direct, control and evaluate LR research and development activities (including description, management, production, validation and evaluation workflows). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,698 |
inproceedings | harashima-etal-2016-large | A Large-scale Recipe and Meal Data Collection as Infrastructure for Food Research | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1389/ | Harashima, Jun and Ariga, Michiaki and Murata, Kenta and Ioki, Masayuki | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2455--2459 | Everyday meals are an important part of our daily lives and, currently, there are many Internet sites that help us plan these meals. Allied to the growth in the amount of food data such as recipes available on the Internet is an increase in the number of studies on these data, such as recipe analysis and recipe search. However, there are few publicly available resources for food research; those that do exist do not include a wide range of food data or any meal data (that is, likely combinations of recipes). In this study, we construct a large-scale recipe and meal data collection as the underlying infrastructure to promote food research. Our corpus consists of approximately 1.7 million recipes and 36000 meals in cookpad, one of the largest recipe sites in the world. We made the corpus available to researchers in February 2015 and as of February 2016, 82 research groups at 56 universities have made use of it to enhance their studies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,699 |
inproceedings | orasmaa-etal-2016-estnltk | {E}st{NLTK} - {NLP} Toolkit for {E}stonian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1390/ | Orasmaa, Siim and Petmanson, Timo and Tkachenko, Alexander and Laur, Sven and Kaalep, Heiki-Jaan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2460--2466 | Although there are many tools for natural language processing tasks in Estonian, these tools are very loosely interoperable, and it is not easy to build practical applications on top of them. In this paper, we introduce a new Python library for natural language processing in Estonian, which provides unified programming interface for various NLP components. The EstNLTK toolkit provides utilities for basic NLP tasks including tokenization, morphological analysis, lemmatisation and named entity recognition as well as offers more advanced features such as a clause segmentation, temporal expression extraction and normalization, verb chain detection, Estonian Wordnet integration and rule-based information extraction. Accompanied by a detailed API documentation and comprehensive tutorials, EstNLTK is suitable for a wide range of audience. We believe EstNLTK is mature enough to be used for developing NLP-backed systems both in industry and research. EstNLTK is freely available under the GNU GPL version 2+ license, which is standard for academic software. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,700 |
inproceedings | roux-2016-south | {S}outh {A}frican National Centre for Digital Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1391/ | Roux, Justus | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2467--2470 | This presentation introduces the imminent establishment of a new language resource infrastructure focusing on languages spoken in Southern Africa, with an eventual aim to become a hub for digital language resources within Sub-Saharan Africa. The Constitution of South Africa makes provision for 11 official languages all with equal status. The current language Resource Management Agency will be merged with the new Centre, which will have a wider focus than that of data acquisition, management and distribution. The Centre will entertain two main programs: Digitisation and Digital Humanities. The digitisation program will focus on the systematic digitisation of relevant text, speech and multi-modal data across the official languages. Relevancy will be determined by a Scientific Advisory Board. This will take place on a continuous basis through specified projects allocated to national members of the Centre, as well as through open-calls aimed at the academic as well as local communities. The digital resources will be managed and distributed through a dedicated web-based portal. The development of the Digital Humanities program will entail extensive academic support for projects implementing digital language based data. The Centre will function as an enabling research infrastructure primarily supported by national government and hosted by the North-West University. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,701 |
inproceedings | lyding-schone-2016-design | Design and Development of the {MERLIN} Learner Corpus Platform | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1392/ | Lyding, Verena and Sch{\"o}ne, Karin | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2471--2477 | In this paper, we report on the design and development of an online search platform for the MERLIN corpus of learner texts in Czech, German and Italian. It was created in the context of the MERLIN project, which aims at empirically illustrating features of the Common European Framework of Reference (CEFR) for evaluating language competences based on authentic learner text productions compiled into a learner corpus. Furthermore, the project aims at providing access to the corpus through a search interface adapted to the needs of multifaceted target groups involved with language learning and teaching. This article starts by providing a brief overview on the project ambition, the data resource and its intended target groups. Subsequently, the main focus of the article is on the design and development process of the platform, which is carried out in a user-centred fashion. The paper presents the user studies carried out to collect requirements, details the resulting decisions concerning the platform design and its implementation, and reports on the evaluation of the platform prototype and final adjustments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,702 |
inproceedings | windhouwer-etal-2016-flat | {FLAT}: Constructing a {CLARIN} Compatible Home for Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1393/ | Windhouwer, Menzo and Kemps-Snijders, Marc and Trilsbeek, Paul and Moreira, Andr{\'e} and van der Veen, Bas and Silva, Guilherme and von Reihn, Daniel | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2478--2483 | Language resources are valuable assets, both for institutions and researchers. To safeguard these resources requirements for repository systems and data management have been specified by various branch organizations, e.g., CLARIN and the Data Seal of Approval. This paper describes these and some additional ones posed by the authors' home institutions. And it shows how they are met by FLAT, to provide a new home for language resources. The basis of FLAT is formed by the Fedora Commons repository system. This repository system can meet many of the requirements out-of-the box, but still additional configuration and some development work is needed to meet the remaining ones, e.g., to add support for Handles and Component Metadata. This paper describes design decisions taken in the construction of FLAT`s system architecture via a mix-and-match strategy, with a preference for the reuse of existing solutions. FLAT is developed and used by the Meertens Institute and The Language Archive, but is also freely available for anyone in need of a CLARIN-compliant repository for their language resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,703 |
inproceedings | odijk-2016-clariah | {CLARIAH} in the {N}etherlands | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1394/ | Odijk, Jan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2484--2488 | I introduce CLARIAH in the Netherlands, which aims to contribute the Netherlands part of a Europe-wide humanities research infrastructure. I describe the digital turn in the humanities, the background and context of CLARIAH, both nationally and internationally, its relation to the CLARIN and DARIAH infrastructures, and the rationale for joining forces between CLARIN and DARIAH in the Netherlands. I also describe the first results of joining forces as achieved in the CLARIAH-SEED project, and the plans of the CLARIAH-CORE project, which is currently running | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,704 |
inproceedings | zinn-etal-2016-crosswalking | Crosswalking from {CMDI} to {D}ublin {C}ore and {MARC} 21 | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1395/ | Zinn, Claus and Trippel, Thorsten and Kaminski, Steve and Dima, Emanuel | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2489--2495 | The Component MetaData Infrastructure (CMDI) is a framework for the creation and usage of metadata formats to describe all kinds of resources in the CLARIN world. To better connect to the library world, and to allow librarians to enter metadata for linguistic resources into their catalogues, a crosswalk from CMDI-based formats to bibliographic standards is required. The general and rather fluid nature of CMDI, however, makes it hard to map arbitrary CMDI schemas to metadata standards such as Dublin Core (DC) or MARC 21, which have a mature, well-defined and fixed set of field descriptors. In this paper, we address the issue and propose crosswalks between CMDI-based profiles originating from the NaLiDa project and DC and MARC 21, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,705 |
inproceedings | dipersio-etal-2016-data | Data Management Plans and Data Centers | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1396/ | DiPersio, Denise and Cieri, Christopher and Jaquette, Daniel | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2496--2501 | Data management plans, data sharing plans and the like are now required by funders worldwide as part of research proposals. Concerned with promoting the notion of open scientific data, funders view such plans as the framework for satisfying the generally accepted requirements for data generated in funded research projects, among them that it be accessible, usable, standardized to the degree possible, secure and stable. This paper examines the origins of data management plans, their requirements and issues they raise for data centers and HLT resource development in general. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,706 |
inproceedings | hahn-etal-2016-uima | {UIMA}-Based {JC}o{R}e 2.0 Goes {G}it{H}ub and Maven Central {\textemdash} State-of-the-Art Software Resource Engineering and Distribution of {NLP} Pipelines | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1397/ | Hahn, Udo and Matthies, Franz and Faessler, Erik and Hellrich, Johannes | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2502--2509 | We introduce JCoRe 2.0, the relaunch of a UIMA-based open software repository for full-scale natural language processing originating from the Jena University Language {\&} Information Engineering (JULIE) Lab. In an attempt to put the new release of JCoRe on firm software engineering ground, we uploaded it to GitHub, a social coding platform, with an underlying source code versioning system and various means to support collaboration for software development and code modification management. In order to automate the builds of complex NLP pipelines and properly represent and track dependencies of the underlying Java code, we incorporated Maven as part of our software configuration management efforts. In the meantime, we have deployed our artifacts on Maven Central, as well. JCoRe 2.0 offers a broad range of text analytics functionality (mostly) for English-language scientific abstracts and full-text articles, especially from the life sciences domain. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,707 |
inproceedings | offersgaard-hansen-2016-facilitating | Facilitating Metadata Interoperability in {CLARIN}-{DK} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1398/ | Offersgaard, Lene and Hansen, Dorte Haltrup | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2510--2515 | The issue for CLARIN archives at the metadata level is to facilitate the user`s possibility to describe their data, even with their own standard, and at the same time make these metadata meaningful for a variety of users with a variety of resource types, and ensure that the metadata are useful for search across all resources both at the national and at the European level. We see that different people from different research communities fill in the metadata in different ways even though the metadata was defined and documented. This has impacted when the metadata are harvested and displayed in different environments. A loss of information is at stake. In this paper we view the challenges of ensuring metadata interoperability through examples of propagation of metadata values from the CLARIN-DK archive to the VLO. We see that the CLARIN Community in many ways support interoperability, but argue that agreeing upon standards, making clear definitions of the semantics of the metadata and their content is inevitable for the interoperability to work successfully. The key points are clear and freely available definitions, accessible documentation and easily usable facilities and guidelines for the metadata creators. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,708 |
inproceedings | tufis-etal-2016-ipr | The {IPR}-cleared Corpus of Contemporary Written and Spoken {R}omanian Language | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1399/ | Tufiș, Dan and Mititelu, Verginica Barbu and Irimia, Elena and Dumitrescu, Ștefan Daniel and Boroș, Tiberiu | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2516--2521 | The article describes the current status of a large national project, CoRoLa, aiming at building a reference corpus for the contemporary Romanian language. Unlike many other national corpora, CoRoLa contains only IPR-cleared texts and speech data, obtained from some of the country`s most representative publishing houses, broadcasting agencies, editorial offices, newspapers and popular bloggers. For the written component 500 million tokens are targeted and for the oral one 300 hours of recordings. The choice of texts is done according to their functional style, domain and subdomain, also with an eye to the international practice. A metadata file (following the CMDI model) is associated to each text file. Collected texts are cleaned and transformed in a format compatible with the tools for automatic processing (segmentation, tokenization, lemmatization, part-of-speech tagging). The paper also presents up-to-date statistics about the structure of the corpus almost two years before its official launching. The corpus will be freely available for searching. Users will be able to download the results of their searches and those original files when not against stipulations in the protocols we have with text providers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,709 |
inproceedings | kren-etal-2016-syn2015 | {SYN}2015: Representative Corpus of Contemporary Written {C}zech | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1400/ | K{\v{r}}en, Michal and Cvr{\v{c}}ek, V{\'a}clav and {\v{C}}apka, Tom{\'a}{\v{s}} and {\v{C}}erm{\'a}kov{\'a}, Anna and Hn{\'a}tkov{\'a}, Milena and Chlumsk{\'a}, Lucie and Jel{\'i}nek, Tom{\'a}{\v{s}} and Kov{\'a}{\v{r}}{\'i}kov{\'a}, Dominika and Petkevi{\v{c}}, Vladim{\'i}r and Proch{\'a}zka, Pavel and Skoumalov{\'a}, Hana and {\v{S}}krabal, Michal and Trune{\v{c}}ek, Petr and Vond{\v{r}}i{\v{c}}ka, Pavel and Zasina, Adrian Jan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2522--2528 | The paper concentrates on the design, composition and annotation of SYN2015, a new 100-million representative corpus of contemporary written Czech. SYN2015 is a sequel of the representative corpora of the SYN series that can be described as traditional (as opposed to the web-crawled corpora), featuring cleared copyright issues, well-defined composition, reliability of annotation and high-quality text processing. At the same time, SYN2015 is designed as a reflection of the variety of written Czech text production with necessary methodological and technological enhancements that include a detailed bibliographic annotation and text classification based on an updated scheme. The corpus has been produced using a completely rebuilt text processing toolchain called SynKorp. SYN2015 is lemmatized, morphologically and syntactically annotated with state-of-the-art tools. It has been published within the framework of the Czech National Corpus and it is available via the standard corpus query interface KonText at \url{http://kontext.korpus.cz} as well as a dataset in shuffled format. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,710 |
inproceedings | del-gratta-etal-2016-lrec | {LREC} as a Graph: People and Resources in a Network | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1401/ | Del Gratta, Riccardo and Frontini, Francesca and Monachini, Monica and Pardelli, Gabriella and Russo, Irene and Bartolini, Roberto and Khan, Fahad and Soria, Claudia and Calzolari, Nicoletta | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2529--2532 | This proposal describes a new way to visualise resources in the LREMap, a community-built repository of language resource descriptions and uses. The LREMap is represented as a force-directed graph, where resources, papers and authors are nodes. The analysis of the visual representation of the underlying graph is used to study how the community gathers around LRs and how LRs are used in research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,711 |
inproceedings | kamocki-etal-2016-public | The Public License Selector: Making Open Licensing Easier | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1402/ | Kamocki, Pawel and Stra{\v{n}}{\'a}k, Pavel and Sedl{\'a}k, Michal | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2533--2538 | Researchers in Natural Language Processing rely on availability of data and software, ideally under open licenses, but little is done to actively encourage it. In fact, the current Copyright framework grants exclusive rights to authors to copy their works, make them available to the public and make derivative works (such as annotated language corpora). Moreover, in the EU databases are protected against unauthorized extraction and re-utilization of their contents. Therefore, proper public licensing plays a crucial role in providing access to research data. A public license is a license that grants certain rights not to one particular user, but to the general public (everybody). Our article presents a tool that we developed and whose purpose is to assist the user in the licensing process. As software and data should be licensed under different licenses, the tool is composed of two separate parts: Data and Software. The underlying logic as well as elements of the graphic interface are presented below. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,712 |
inproceedings | vitkute-adzgauskiene-etal-2016-nlp | {NLP} Infrastructure for the {L}ithuanian Language | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1403/ | Vitkut{\.{e}}-Ad{\v{z}}gauskien{\.{e}}, Daiva and Utka, Andrius and Amilevi{\v{c}}ius, Darius and Krilavi{\v{c}}ius, Tomas | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2539--2542 | The Information System for Syntactic and Semantic Analysis of the Lithuanian language (lith. Lietuvi{\k{u}} kalbos sintaksin{\.{e}}s ir semantin{\.{e}}s analiz{\.{e}}s informacin{\.{e}} sistema, LKSSAIS) is the first infrastructure for the Lithuanian language combining Lithuanian language tools and resources for diverse linguistic research and applications tasks. It provides access to the basic as well as advanced natural language processing tools and resources, including tools for corpus creation and management, text preprocessing and annotation, ontology building, named entity recognition, morphosyntactic and semantic analysis, sentiment analysis, etc. It is an important platform for researchers and developers in the field of natural language technology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,713 |
inproceedings | krieg-holz-etal-2016-code | {C}od{E} Alltag: A {G}erman-Language {E}-Mail Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1404/ | Krieg-Holz, Ulrike and Schuschnig, Christian and Matthies, Franz and Redling, Benjamin and Hahn, Udo | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2543--2550 | We introduce CODE ALLTAG, a text corpus composed of German-language e-mails. It is divided into two partitions: the first of these portions, CODE ALLTAG{\_}XL, consists of a bulk-size collection drawn from an openly accessible e-mail archive (roughly 1.5M e-mails), whereas the second portion, CODE ALLTAG{\_}S+d, is much smaller in size (less than thousand e-mails), yet excels with demographic data from each author of an e-mail. CODE ALLTAG, thus, currently constitutes the largest E-Mail corpus ever built. In this paper, we describe, for both parts, the solicitation process for gathering e-mails, present descriptive statistical properties of the corpus, and, for CODE ALLTAG{\_}S+d, reveal a compilation of demographic features of the donors of e-mails. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,714 |
inproceedings | kulick-bies-2016-rapid | Rapid Development of Morphological Analyzers for Typologically Diverse Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1405/ | Kulick, Seth and Bies, Ann | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2551--2557 | The Low Resource Language research conducted under DARPA`s Broad Operational Language Translation (BOLT) program required the rapid creation of text corpora of typologically diverse languages (Turkish, Hausa, and Uzbek) which were annotated with morphological information, along with other types of annotation. Since the output of morphological analyzers is a significant aid to morphological annotation, we developed a morphological analyzer for each language in order to support the annotation task, and also as a deliverable by itself. Our framework for analyzer creation results in tables similar to those used in the successful SAMA analyzer for Arabic, but with a more abstract linguistic level, from which the tables are derived. A lexicon was developed from available resources for integration with the analyzer, and given the speed of development and uncertain coverage of the lexicon, we assumed that the analyzer would necessarily be lacking in some coverage for the project annotation. Our analyzer framework was therefore focused on rapid implementation of the key structures of the language, together with accepting {\textquotedblleft}wildcard{\textquotedblright} solutions as possible analyses for a word with an unknown stem, building upon our similar experiences with morphological annotation with Modern Standard Arabic and Egyptian Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,715 |
inproceedings | chakrabarty-etal-2016-neural | A Neural Lemmatizer for {B}engali | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1406/ | Chakrabarty, Abhisek and Chaturvedi, Akshay and Garain, Utpal | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2558--2561 | We propose a novel neural lemmatization model which is language independent and supervised in nature. To handle the words in a neural framework, word embedding technique is used to represent words as vectors. The proposed lemmatizer makes use of contextual information of the surface word to be lemmatized. Given a word along with its contextual neighbours as input, the model is designed to produce the lemma of the concerned word as output. We introduce a new network architecture that permits only dimension specific connections between the input and the output layer of the model. For the present work, Bengali is taken as the reference language. Two datasets are prepared for training and testing purpose consisting of 19,159 and 2,126 instances respectively. As Bengali is a resource scarce language, these datasets would be beneficial for the respective research community. Evaluation method shows that the neural lemmatizer achieves 69.57{\%} accuracy on the test dataset and outperforms the simple cosine similarity based baseline strategy by a margin of 1.37{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,716 |
inproceedings | tyers-etal-2016-finite | A Finite-state Morphological Analyser for Tuvan | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1407/ | Tyers, Francis and Bayyr-ool, Aziyana and Salchak, Aelita and Washington, Jonathan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2562--2567 | This paper describes the development of free/open-source finite-state morphological transducers for Tuvan, a Turkic language spoken in and around the Tuvan Republic in Russia. The finite-state toolkit used for the work is the Helsinki Finite-State Toolkit (HFST); we use the lexc formalism for modelling the morphotactics and twol formalism for modelling morphophonological alternations. We present a novel description of the morphological combinatorics of pseudo-derivational morphemes in Tuvan. An evaluation is presented which shows that the transducer has a reasonable coverage{\textemdash}around 93{\%}{\textemdash}on freely-available corpora of the language, and high precision{\textemdash}over 99{\%}{\textemdash}on a manually verified test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,717 |
inproceedings | spektors-etal-2016-tezaurs | {T}{\={e}}zaurs.lv: the Largest Open Lexical Database for {L}atvian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1408/ | Spektors, Andrejs and Auzina, Ilze and Dargis, Roberts and Gruzitis, Normunds and Paikens, Peteris and Pretkalnina, Lauma and Rituma, Laura and Saulite, Baiba | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2568--2571 | We describe an extensive and versatile lexical resource for Latvian, an under-resourced Indo-European language, which we call Tezaurs (Latvian for {\textquoteleft}thesaurus'). It comprises a large explanatory dictionary of more than 250,000 entries that are derived from more than 280 external sources. The dictionary is enriched with phonetic, morphological, semantic and other annotations, as well as augmented by various language processing tools allowing for the generation of inflectional forms and pronunciation, for on-the-fly selection of corpus examples, for suggesting synonyms, etc. Tezaurs is available as a public and widely used web application for end-users, as an open data set for the use in language technology (LT), and as an API {\textemdash} a set of web services for the integration into third-party applications. The ultimate goal of Tezaurs is to be the central computational lexicon for Latvian, bringing together all Latvian words and frequently used multi-word units and allowing for the integration of other LT resources and tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,718 |
inproceedings | motlani-etal-2016-finite | A Finite-State Morphological Analyser for {S}indhi | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1409/ | Motlani, Raveesh and Tyers, Francis and Sharma, Dipti | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2572--2577 | Morphological analysis is a fundamental task in natural-language processing, which is used in other NLP applications such as part-of-speech tagging, syntactic parsing, information retrieval, machine translation, etc. In this paper, we present our work on the development of a free/open-source finite-state morphological analyser for Sindhi. We have used Apertium`s lttoolbox as our finite-state toolkit to implement the transducer. The system is developed using a paradigm-based approach, wherein a paradigm defines all the word forms and their morphological features for a given stem (lemma). We have evaluated our system on the Sindhi Wikipedia corpus and achieved a reasonable coverage of 81{\%} and a precision of over 97{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,719
inproceedings | forsberg-hulden-2016-deriving | Deriving Morphological Analyzers from Example Inflections | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1410/ | Forsberg, Markus and Hulden, Mans | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2578--2583 | This paper presents a semi-automatic method to derive morphological analyzers from a limited number of example inflections suitable for languages with alphabetic writing systems. The system we present learns the inflectional behavior of morphological paradigms from examples and converts the learned paradigms into a finite-state transducer that is able to map inflected forms of previously unseen words into lemmas and corresponding morphosyntactic descriptions. We evaluate the system when provided with inflection tables for several languages collected from the Wiktionary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,720 |
inproceedings | smith-hulden-2016-morphological | Morphological Analysis of Sahidic {C}optic for Automatic Glossing | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1411/ | Smith, Daniel and Hulden, Mans | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2584--2588 | We report on the implementation of a morphological analyzer for the Sahidic dialect of Coptic, a now extinct Afro-Asiatic language. The system is developed in the finite-state paradigm. The main purpose of the project is to provide a method by which scholars and linguists can semi-automatically gloss extant texts written in Sahidic. Since a complete lexicon containing all attested forms in different manuscripts requires significant expertise in Coptic spanning almost 1,000 years, we have equipped the analyzer with a core lexicon and extended it with a {\textquotedblleft}guesser{\textquotedblright} ability to capture out-of-vocabulary items in any inflection. We also suggest an ASCII transliteration for the language. A brief evaluation is provided. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,721
inproceedings | wolinski-kieras-2016-line | The on-line version of Grammatical Dictionary of {P}olish | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1412/ | Woli{\'n}ski, Marcin and Kiera{\'s}, Witold | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2589--2594 | We present the new online edition of a dictionary of Polish inflection {\textemdash} the Grammatical Dictionary of Polish (\url{http://sgjp.pl}). The dictionary is interesting for several reasons: it is comprehensive (over 330,000 lexemes corresponding to almost 4,300,000 different textual words; 1116 handcrafted inflectional patterns), the inflection is presented in an explicit manner in the form of carefully designed tables, the user interface facilitates advanced queries by several features (lemmas, forms, applicable grammatical categories, types of inflection). Moreover, the data of the dictionary is used in morphological analysers, including our product Morfeusz (\url{http://sgjp.pl/morfeusz}). From the start, the dictionary was meant to be convenient for the human reader as well as ready for use in NLP applications. In the paper we briefly discuss both aspects of the resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,722
inproceedings | koper-schulte-im-walde-2016-automatically | Automatically Generated Affective Norms of Abstractness, Arousal, Imageability and Valence for 350 000 {G}erman Lemmas | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1413/ | K{\"o}per, Maximilian and Schulte im Walde, Sabine | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2595--2598 | This paper presents a collection of 350,000 German lemmatised words, rated on four psycholinguistic affective attributes. All ratings were obtained via a supervised learning algorithm that can automatically calculate a numerical rating of a word. We applied this algorithm to abstractness, arousal, imageability and valence. Comparison with human ratings reveals high correlation across all rating types. The full resource is publicly available at: \url{http://www.ims.uni-stuttgart.de/data/affective_norms/} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,723
inproceedings | passarotti-etal-2016-latin | {L}atin {V}allex. A Treebank-based Semantic Valency Lexicon for {L}atin | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1414/ | Passarotti, Marco and Saavedra, Berta Gonz{\'a}lez and Onambele, Christophe | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2599--2606 | Despite a centuries-long tradition in lexicography, Latin lacks state-of-the-art computational lexical resources. This situation is strictly related to the still quite limited amount of linguistically annotated textual data for Latin, which can help the building of new lexical resources by supporting them with empirical evidence. However, projects for creating new language resources for Latin have been launched over the last decade to fill this gap. In this paper, we present Latin Vallex, a valency lexicon for Latin built in mutual connection with the semantic and pragmatic annotation of two Latin treebanks featuring texts of different eras. On the one hand, such a connection between the empirical evidence provided by the treebanks and the lexicon allows each frame entry in the lexicon to be enhanced with its frequency in real data. On the other hand, each valency-capable word in the treebanks is linked to a frame entry in the lexicon. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,724
inproceedings | hayashi-2016-framework | A Framework for Cross-lingual/Node-wise Alignment of Lexical-Semantic Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1415/ | Hayashi, Yoshihiko | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2607--2613 | Given lexical-semantic resources in different languages, it is useful to establish cross-lingual correspondences, preferably with semantic relation labels, between the concept nodes in these resources. This paper presents a framework for enabling a cross-lingual/node-wise alignment of lexical-semantic resources, where cross-lingual correspondence candidates are first discovered and ranked, and then classified by a succeeding module. Indeed, we propose that a two-tier classifier configuration is feasible for the second module: the first classifier filters out possibly irrelevant correspondence candidates and the second classifier assigns a relatively fine-grained semantic relation label to each of the surviving candidates. The results of Japanese-to-English alignment experiments using EDR Electronic Dictionary and Princeton WordNet are described to exemplify the validity of the proposal. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,725 |
inproceedings | piao-etal-2016-lexical | Lexical Coverage Evaluation of Large-scale Multilingual Semantic Lexicons for Twelve Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1416/ | Piao, Scott and Rayson, Paul and Archer, Dawn and Bianchi, Francesca and Dayrell, Carmen and El-Haj, Mahmoud and Jim{\'e}nez, Ricardo-Mar{\'i}a and K{\v{r}}en, Michal and L{\"o}fberg, Laura and Nawab, Rao Muhammad Adeel and Shafi, Jawad and Teh, Phoey Lee and Mudraya, Olga | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2614--2619 | The last two decades have seen the development of various semantic lexical resources such as WordNet (Miller, 1995) and the USAS semantic lexicon (Rayson et al., 2004), which have played an important role in the areas of natural language processing and corpus-based studies. Recently, increasing efforts have been devoted to extending the semantic frameworks of existing lexical knowledge resources to cover more languages, such as EuroWordNet and Global WordNet. In this paper, we report on the construction of large-scale multilingual semantic lexicons for twelve languages, which employ the unified Lancaster semantic taxonomy and provide a multilingual lexical knowledge base for the automatic UCREL semantic annotation system (USAS). Our work contributes towards the goal of constructing larger-scale and higher-quality multilingual semantic lexical resources and developing corpus annotation tools based on them. Lexical coverage is an important factor concerning the quality of the lexicons and the performance of the corpus annotation tools, and in this experiment we focus on evaluating the lexical coverage achieved by the multilingual lexicons and semantic annotation tools based on them. Our evaluation shows that some semantic lexicons such as those for Finnish and Italian have achieved lexical coverage of over 90{\%} while others need further expansion. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,726
inproceedings | recski-2016-building | Building Concept Graphs from Monolingual Dictionary Entries | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1417/ | Recski, G{\'a}bor | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2620--2624 | We present the dict{\_}to{\_}4lang tool for processing entries of three monolingual dictionaries of English and mapping definitions to concept graphs following the 4lang principles of semantic representation introduced by (Kornai, 2010). 4lang representations are domain- and language-independent, and make use of only a very limited set of primitives to encode the meaning of all utterances. Our pipeline relies on the Stanford Dependency Parser for syntactic analysis; the dep{\_}to{\_}4lang module then builds directed graphs of concepts based on dependency relations between words in each definition. Several issues are handled by construction-specific rules that are applied to the output of dep{\_}to{\_}4lang. Manual evaluation suggests that ca. 75{\%} of graphs built from the Longman Dictionary are either entirely correct or contain only minor errors. dict{\_}to{\_}4lang is available under an MIT license as part of the 4lang library and has been used successfully in measuring Semantic Textual Similarity (Recski and {\'A}cs, 2015). An interactive demo of core 4lang functionalities is available at \url{http://4lang.hlt.bme.hu}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,727
inproceedings | hajnicz-etal-2016-semantic | Semantic Layer of the Valence Dictionary of {P}olish Walenty | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1418/ | Hajnicz, El{\.z}bieta and Andrzejczuk, Anna and Bartosiak, Tomasz | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2625--2632 | This article presents the semantic layer of Walenty{\textemdash}a new valence dictionary of Polish predicates, with a number of novel features, as compared to other such dictionaries. The dictionary contains two layers, syntactic and semantic. The syntactic layer describes syntactic and morphosyntactic constraints predicates put on their dependants. In particular, it includes a comprehensive and powerful phraseological component. The semantic layer shows how predicates and their arguments are involved in a described situation in an utterance. These two layers are connected, representing how semantic arguments can be realised on the surface. Each syntactic schema and each semantic frame are illustrated by at least one exemplary sentence attested in linguistic reality. The semantic layer consists of semantic frames represented as lists of pairs and connected with PlWordNet lexical units. Semantic roles have a two-level representation (basic roles are provided with an attribute) enabling representation of arguments in a flexible way. Selectional preferences are based on PlWordNet structure as well. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,728 |
inproceedings | busso-lenci-2016-italian | {I}talian {V}erb{N}et: A Construction-based Approach to {I}talian Verb Classification | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1419/ | Busso, Lucia and Lenci, Alessandro | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2633--2642 | This paper proposes a new method for Italian verb classification, and a preliminary example of resulting classes, inspired by Levin (1993) and VerbNet (Kipper-Schuler, 2005), yet partially independent of these resources; we achieved such a result by integrating Levin and VerbNet`s models of classification with other theoretic frameworks and resources. The classification is rooted in the constructionist framework (Goldberg, 1995; 2006) and is distribution-based. It is also semantically characterized by a link to FrameNet`s semantic frames to represent the event expressed by a class. However, the new Italian classes maintain the hierarchic {\textquotedblleft}tree{\textquotedblright} structure and monotonic nature of VerbNet`s classes, and, where possible, the original names (e.g.: Verbs of Killing, Verbs of Putting, etc.). We therefore propose here a taxonomy compatible with VerbNet but at the same time adapted to Italian syntax and semantics. It also addresses a number of problems intrinsic to the original classifications, such as the role of argument alternations, here regarded simply as epiphenomena, consistently with the constructionist approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,729
inproceedings | grabar-hamon-2016-large | A Large Rated Lexicon with {F}rench Medical Words | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1420/ | Grabar, Natalia and Hamon, Thierry | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2643--2648 | Patients are often exposed to medical terms, such as anosognosia, myelodysplastic, or hepatojejunostomy, that can be semantically complex and hardly understandable by non-experts in medicine. Hence, it is important to assess which words are potentially non-understandable and require further explanations. The purpose of our work is to build a specific lexicon in which the words are rated according to whether they are understandable or non-understandable. We propose to work with medical words in French such as those provided by an international medical terminology. The terms are segmented into single words and then each word is manually processed by three annotators. The objective is to assign each word to one of three categories: I can understand, I am not sure, I cannot understand. The annotators have no medical training, nor do they present specific medical problems. They are supposed to represent an average patient. The inter-annotator agreement is then computed. The content of the categories is analyzed. Possible applications in which this lexicon can be helpful are proposed and discussed. The rated lexicon is freely available for research purposes. It is accessible online at \url{http://natalia.grabar.perso.sfr.fr/rated-lexicon.html} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,730
inproceedings | panchenko-2016-best | Best of Both Worlds: Making Word Sense Embeddings Interpretable | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1421/ | Panchenko, Alexander | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2649--2655 | Word sense embeddings represent a word sense as a low-dimensional numeric vector. While this representation is potentially useful for NLP applications, its interpretability is inherently limited. We propose a simple technique that improves interpretability of sense vectors by mapping them to synsets of a lexical resource. Our experiments with AdaGram sense embeddings and BabelNet synsets show that it is possible to retrieve synsets that correspond to automatically learned sense vectors with Precision of 0.87, Recall of 0.42 and AUC of 0.78. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,731 |
inproceedings | zilio-etal-2016-verblexpor | {V}erb{L}ex{P}or: a lexical resource with semantic roles for {P}ortuguese | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1422/ | Zilio, Leonardo and Finatto, Maria Jos{\'e} Bocorny and Villavicencio, Aline | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2656--2661 | This paper presents a lexical resource developed for Portuguese. The resource contains sentences annotated with semantic roles. The sentences were extracted from two domains: Cardiology research papers and newspaper articles. Both corpora were analyzed with the PALAVRAS parser and subsequently processed with a subcategorization frames extractor, so that each sentence that contained at least one main verb was stored in a database together with its syntactic organization. The annotation was manually carried out by a linguist using an annotation interface. Both the annotated and non-annotated data were exported to an XML format, which is readily available for download. The reason behind exporting non-annotated data is that there is syntactic information collected from the parser annotation in the non-annotated data, and this could be useful for other researchers. The sentences from both corpora were annotated separately, so that it is possible to access sentences either from the Cardiology or from the newspaper corpus. The full resource presents more than seven thousand semantically annotated sentences, containing 192 different verbs and more than 15 thousand individual arguments and adjuncts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,732 |
inproceedings | lopez-de-lacalle-etal-2016-multilingual | A Multilingual Predicate Matrix | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1423/ | Lopez de Lacalle, Maddalen and Laparra, Egoitz and Aldabe, Itziar and Rigau, German | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2662--2668 | This paper presents the Predicate Matrix 1.3, a lexical resource resulting from the integration of multiple sources of predicate information including FrameNet, VerbNet, PropBank and WordNet. This new version of the Predicate Matrix has been extended to cover nominal predicates by adding mappings to NomBank. Similarly, we have integrated resources in Spanish, Catalan and Basque. As a result, the Predicate Matrix 1.3 provides a multilingual lexicon to allow interoperable semantic analysis in multiple languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,733 |
inproceedings | wilkinson-tim-2016-gold | A Gold Standard for Scalar Adjectives | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1424/ | Wilkinson, Bryan and Oates, Tim | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2669--2675 | We present a gold standard for evaluating scale membership and the order of scalar adjectives. In addition to evaluating existing methods of ordering adjectives, this knowledge will aid in studying the organization of adjectives in the lexicon. This resource is the result of two elicitation tasks conducted with informants from Amazon Mechanical Turk. The first task is notable for gathering open-ended lexical data from informants. The data is analyzed using Cultural Consensus Theory, a framework from anthropology, to not only determine scale membership but also the level of consensus among the informants (Romney et al., 1986). The second task gathers a culturally salient ordering of the words determined to be members. We use this method to produce 12 scales of adjectives for use in evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,734
inproceedings | sekulic-snajder-2016-verbcrocean | {V}erb{CRO}cean: A Repository of Fine-Grained Semantic Verb Relations for {C}roatian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1425/ | Sekuli{\'c}, Ivan and {\v{S}}najder, Jan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2676--2681 | In this paper we describe VerbCROcean, a broad-coverage repository of fine-grained semantic relations between Croatian verbs. Adopting the methodology of Chklovski and Pantel (2004) used for acquiring the English VerbOcean, we first acquire semantically related verb pairs from a web corpus hrWaC by relying on distributional similarity of subject-verb-object paths in the dependency trees. We then classify the semantic relations between each pair of verbs as similarity, intensity, antonymy, or happens-before, using a number of manually-constructed lexico-syntactic patterns. We evaluate the quality of the resulting resource on a manually annotated sample of 1000 semantic verb relations. The evaluation revealed that the predictions are most accurate for the similarity relation, and least accurate for the intensity relation. We make available two variants of VerbCROcean: a coverage-oriented version, containing about 36k verb pairs at a precision of 41{\%}, and a precision-oriented version containing about 5k verb pairs, at a precision of 56{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,735
inproceedings | simoes-etal-2016-enriching | Enriching a {P}ortuguese {W}ord{N}et using Synonyms from a Monolingual Dictionary | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1426/ | Sim{\~o}es, Alberto and G{\'o}mez Guinovart, Xavier and Almeida, Jos{\'e} Jo{\~a}o | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2682--2687 | In this article we present an exploratory approach to enrich a WordNet-like lexical ontology with the synonyms present in a standard monolingual Portuguese dictionary. The dictionary was converted from PDF into XML and senses were automatically identified and annotated. This allowed us to extract them, independently of definitions, and to create sets of synonyms (synsets). These synsets were then aligned with WordNet synsets, both in the same language (Portuguese) and projecting the Portuguese terms into English, Spanish and Galician. This process allowed both the addition of new term variants to existing synsets and the creation of new synsets for Portuguese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,736
inproceedings | batanovic-etal-2016-reliable | Reliable Baselines for Sentiment Analysis in Resource-Limited Languages: The {S}erbian Movie Review Dataset | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1427/ | Batanovi{\'c}, Vuk and Nikoli{\'c}, Bo{\v{s}}ko and Milosavljevi{\'c}, Milan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2688--2696 | Collecting data for sentiment analysis in resource-limited languages carries a significant risk of sample selection bias, since the small quantities of available data are most likely not representative of the whole population. Ignoring this bias leads to less robust machine learning classifiers and less reliable evaluation results. In this paper we present a dataset balancing algorithm that minimizes the sample selection bias by eliminating irrelevant systematic differences between the sentiment classes. We prove its superiority over the random sampling method and we use it to create the Serbian movie review dataset {\textemdash} SerbMR {\textemdash} the first balanced and topically uniform sentiment analysis dataset in Serbian. In addition, we propose an incremental way of finding the optimal combination of simple text processing options and machine learning features for sentiment classification. Several popular classifiers are used in conjunction with this evaluation approach in order to establish strong but reliable baselines for sentiment analysis in Serbian. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,737 |
inproceedings | wang-ku-2016-antusd | {ANTUSD}: A Large {C}hinese Sentiment Dictionary | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1428/ | Wang, Shih-Ming and Ku, Lun-Wei | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2697--2702 | This paper introduces the augmented NTU sentiment dictionary, abbreviated as ANTUSD, which is constructed by collecting sentiment statistics of words from several sentiment annotation projects. A total of 26,021 words were collected in ANTUSD. For each word, the CopeOpi numerical sentiment score and the numbers of positive, neutral, negative, non-opinionated, and not-a-word annotations are provided. Words and their sentiment information in ANTUSD have been linked to the Chinese ontology E-HowNet to provide rich semantic information. We demonstrate the usage of ANTUSD in polarity classification of words, and the results show that a superior F-score of 98.21 is achieved, which supports the usefulness of ANTUSD. ANTUSD can be freely obtained through application from the NLPSA lab, Academia Sinica: \url{http://academiasinicanlplab.github.io/} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,738
inproceedings | akhtar-etal-2016-aspect | Aspect based Sentiment Analysis in {H}indi: Resource Creation and Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1429/ | Akhtar, Md Shad and Ekbal, Asif and Bhattacharyya, Pushpak | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2703--2709 | Due to the phenomenal growth of online product reviews, sentiment analysis (SA) has gained huge attention, for example, by online service providers. A number of benchmark datasets for a wide range of domains have been made available for sentiment analysis, especially in resource-rich languages. In this paper we assess the challenges of SA in Hindi by providing a benchmark setup, where we create an annotated dataset of high quality, build machine learning models for sentiment analysis in order to show the effective usage of the dataset, and finally make the resource available to the community for further advancement of research. The dataset comprises Hindi product reviews crawled from various online sources. Each sentence of the review is annotated with the aspect term and its associated sentiment. As classification algorithms we use Conditional Random Field (CRF) and Support Vector Machine (SVM) for aspect term extraction and sentiment analysis, respectively. Evaluation results show an average F-measure of 41.07{\%} for aspect term extraction and accuracy of 54.05{\%} for sentiment classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,739
inproceedings | adouane-johansson-2016-gulf | {G}ulf {A}rabic Linguistic Resource Building for Sentiment Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1430/ | Adouane, Wafia and Johansson, Richard | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2710--2715 | This paper deals with building linguistic resources for Gulf Arabic, one of the Arabic varieties, for the sentiment analysis task using machine learning. To our knowledge, no previous work has been done on Gulf Arabic sentiment analysis despite the fact that it is present on different online platforms. Hence, the first challenge is the absence of annotated data and sentiment lexicons. To fill this gap, we created these two main linguistic resources. Then we conducted different experiments: use a Naive Bayes classifier without any lexicon; add a sentiment lexicon designed basically for MSA; use only the compiled Gulf Arabic sentiment lexicon and finally use both MSA and Gulf Arabic sentiment lexicons. The Gulf Arabic lexicon gives a good improvement of the classifier accuracy (90.54{\%}) over a baseline that does not use the lexicon (82.81{\%}), while the MSA lexicon causes the accuracy to drop to (76.83{\%}). Moreover, mixing MSA and Gulf Arabic lexicons causes the accuracy to drop to (84.94{\%}) compared to using only the Gulf Arabic lexicon. This indicates that it is useless to use MSA resources to deal with Gulf Arabic due to the considerable differences and conflicting structures between these two languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,740
inproceedings | noferesti-shamsfard-2016-using | Using Data Mining Techniques for Sentiment Shifter Identification | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1431/ | Noferesti, Samira and Shamsfard, Mehrnoush | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2716--2720 | Sentiment shifters, i.e., words and expressions that can affect text polarity, play an important role in opinion mining. However, the limited ability of current automated opinion mining systems to handle shifters represents a major challenge. The majority of existing approaches rely on a manual list of shifters; few attempts have been made to automatically identify shifters in text. Most of them just focus on negating shifters. This paper presents a novel and efficient semi-automatic method for identifying sentiment shifters in drug reviews, aiming at improving the overall accuracy of opinion mining systems. To this end, we use weighted association rule mining (WARM), a well-known data mining technique, for finding frequent dependency patterns representing sentiment shifters from a domain-specific corpus. These patterns, which include different kinds of shifter words such as shifter verbs and quantifiers, are able to handle both local and long-distance shifters. We also combine these patterns with a lexicon-based approach for the polarity classification task. Experiments on drug reviews demonstrate that extracted shifters can improve the precision of the lexicon-based approach for polarity classification by 9.25 percent. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,741
inproceedings | asher-etal-2016-discourse | Discourse Structure and Dialogue Acts in Multiparty Dialogue: the {STAC} Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1432/ | Asher, Nicholas and Hunter, Julie and Morey, Mathieu and Benamara, Farah and Afantenos, Stergos | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2721--2727 | This paper describes the STAC resource, a corpus of multi-party chats annotated for discourse structure in the style of SDRT (Asher and Lascarides, 2003; Lascarides and Asher, 2009). The main goal of the STAC project is to study the discourse structure of multi-party dialogues in order to understand the linguistic strategies adopted by interlocutors to achieve their conversational goals, especially when these goals are opposed. The STAC corpus is not only a rich source of data on strategic conversation, but also the first corpus that we are aware of that provides full discourse structures for multi-party dialogues. It has other remarkable features that make it an interesting resource for other topics: interleaved threads, creative language, and interactions between linguistic and extra-linguistic contexts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,742
inproceedings | dubuisson-duplessis-etal-2016-purely | Purely Corpus-based Automatic Conversation Authoring | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1433/ | Dubuisson Duplessis, Guillaume and Letard, Vincent and Ligozat, Anne-Laure and Rosset, Sophie | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2728--2735 | This paper presents an automatic corpus-based process to author an open-domain conversational strategy usable both in chatterbot systems and as a fallback strategy for out-of-domain human utterances. Our approach is implemented on a corpus of television drama subtitles. This system is used as a chatterbot system to collect a corpus of 41 open-domain textual dialogues with 27 human participants. The general capabilities of the system are studied through objective measures and subjective self-reports in terms of understandability, repetition and coherence of the system responses selected in reaction to human utterances. Subjective evaluations of the collected dialogues are presented with respect to amusement, engagement and enjoyability. The main factors influencing those dimensions in our chatterbot experiment are discussed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,743 |
inproceedings | inoue-ueno-2016-dialogue | Dialogue System Characterisation by Back-channelling Patterns Extracted from Dialogue Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1434/ | Inoue, Masashi and Ueno, Hiroshi | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2736--2740 | In this study, we describe the use of back-channelling patterns extracted from a dialogue corpus as a means of characterising text-based dialogue systems. Our goal was to provide system users with the feeling that they are interacting with distinct individuals rather than artificially created characters. An analysis of the corpus revealed that substantial differences exist among speakers regarding the usage patterns of back-channelling. The patterns consist of back-channelling frequency, types, and expressions. They were used for system characterisation. Implemented system characters were tested by asking users of the dialogue system to identify the source speakers in the corpus. Experimental results suggest the possibility of using back-channelling patterns alone to characterise the dialogue system in some cases, even among the same age and gender groups. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,744
inproceedings | pincus-traum-2016-towards | Towards Automatic Identification of Effective Clues for Team Word-Guessing Games | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1435/ | Pincus, Eli and Traum, David | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2741--2747 | Team word-guessing games where one player, the clue-giver, gives clues attempting to elicit a target-word from another player, the receiver, are a popular form of entertainment and are also used for educational purposes. Creating an engaging computational agent capable of emulating a talented human clue-giver in a timed word-guessing game depends on the ability to provide effective clues (clues able to elicit a correct guess from a human receiver). There are many available web resources and databases that can be mined for the raw material for clues for target-words; however, a large number of those clues are unlikely to be able to elicit a correct guess from a human guesser. In this paper, we propose a method for automatically filtering a clue corpus for effective clues for an arbitrary target-word from a larger set of potential clues, using machine learning on a set of features of the clues, including point-wise mutual information between a clue`s constituent words and a clue`s target-word. The results of the experiments significantly improve the average clue quality over previous approaches, and bring quality rates in-line with measures of human clue quality derived from a corpus of human-human interactions. The paper also introduces the data used to develop this method: audio recordings of people making guesses after having heard the clues being spoken by a synthesized voice. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,745
inproceedings | wang-etal-2016-automatic | Automatic Construction of Discourse Corpora for Dialogue Translation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1436/ | Wang, Longyue and Zhang, Xiaojun and Tu, Zhaopeng and Way, Andy and Liu, Qun | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2748--2754 | In this paper, a novel approach is proposed to automatically construct parallel discourse corpus for dialogue machine translation. Firstly, the parallel subtitle data and its corresponding monolingual movie script data are crawled and collected from the Internet. Then tags such as speaker and discourse boundary from the script data are projected to its subtitle data via an information retrieval approach in order to map monolingual discourse to bilingual texts. We not only evaluate the mapping results, but also integrate speaker information into the translation. Experiments show our proposed method can achieve 81.79{\%} and 98.64{\%} accuracy on speaker and dialogue boundary annotation, and speaker-based language model adaptation can obtain around 0.5 BLEU points improvement in translation quality. Finally, we publicly release around 100K parallel discourse data with manual speaker and dialogue boundary annotation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,746
inproceedings | fomicheva-bel-2016-using | Using Contextual Information for Machine Translation Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1437/ | Fomicheva, Marina and Bel, N{\'u}ria | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2755--2761 | Automatic evaluation of Machine Translation (MT) is typically approached by measuring similarity between the candidate MT and a human reference translation. An important limitation of existing evaluation systems is that they are unable to distinguish candidate-reference differences that arise due to acceptable linguistic variation from the differences induced by MT errors. In this paper we present a new metric, UPF-Cobalt, that addresses this issue by taking into consideration the syntactic contexts of candidate and reference words. The metric applies a penalty when the words are similar but the contexts in which they occur are not equivalent. In this way, Machine Translations (MTs) that are different from the human translation but still essentially correct are distinguished from those that share high number of words with the reference but alter the meaning of the sentence due to translation errors. The results show that the method proposed is indeed beneficial for automatic MT evaluation. We report experiments based on two different evaluation tasks with various types of manual quality assessment. The metric significantly outperforms state-of-the-art evaluation systems in varying evaluation settings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,747 |
inproceedings | rodrigues-etal-2016-bootstrapping | Bootstrapping a Hybrid {MT} System to a New Language Pair | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1438/ | Rodrigues, Jo{\~a}o Ant{\'o}nio and Rendeiro, Nuno and Querido, Andreia and {\v{S}}tajner, Sanja and Branco, Ant{\'o}nio | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2762--2765 | The usual concern when opting for a rule-based or a hybrid machine translation (MT) system is how much effort is required to adapt the system to a different language pair or a new domain. In this paper, we describe a way of adapting an existing hybrid MT system to a new language pair, and show that such a system can outperform a standard phrase-based statistical machine translation system with an average of 10 person-months of work. This is especially important in the case of domain-specific MT for which there is not enough parallel data for training a statistical machine translation system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,748
inproceedings | matsuzaki-etal-2016-translation | Translation Errors and Incomprehensibility: a Case Study using Machine-Translated Second Language Proficiency Tests | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1440/ | Matsuzaki, Takuya and Fujita, Akira and Todo, Naoya and Arai, Noriko H. | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2771--2776 | This paper reports on an experiment where 795 human participants answered questions taken from second language proficiency tests that were translated into their native language. The output of three machine translation systems and two different human translations were used as the test material. We classified the translation errors in the questions according to an error taxonomy and analyzed the participants' responses on the basis of the type and frequency of the translation errors. Through the analysis, we identified several types of errors that most deteriorated the accuracy of the participants' answers, their confidence in the answers, and their overall evaluation of the translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,750
inproceedings | neale-etal-2016-word | Word Sense-Aware Machine Translation: Including Senses as Contextual Features for Improved Translation Models | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1441/ | Neale, Steven and Gomes, Lu{\'i}s and Agirre, Eneko and de Lacalle, Oier Lopez and Branco, Ant{\'o}nio | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2777--2783 | Although it is commonly assumed that word sense disambiguation (WSD) should help to improve lexical choice and improve the quality of machine translation systems, how to successfully integrate word senses into such systems remains an unanswered question. Some successful approaches have involved reformulating either WSD or the word senses it produces, but work on using traditional word senses to improve machine translation has met with limited success. In this paper, we build upon previous work that experimented on including word senses as contextual features in maxent-based translation models. Training on a large, open-domain corpus (Europarl), we demonstrate that this approach yields significant improvements in machine translation from English to Portuguese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,751
inproceedings | cohen-etal-2016-supercat | {S}uper{CAT}: The (New and Improved) Corpus Analysis Toolkit | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1442/ | Cohen, K. Bretonnel and Baumgartner Jr., William A. and Temnikova, Irina | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2784--2788 | This paper reports on SuperCAT, a corpus analysis toolkit. It is a radical extension of SubCAT, the Sublanguage Corpus Analysis Toolkit, from sublanguage analysis to corpus analysis in general. The idea behind SuperCAT is that representative corpora have no tendency towards closure{\textemdash}that is, they tend towards infinity. In contrast, non-representative corpora have a tendency towards closure{\textemdash}roughly, finiteness. SuperCAT focuses on general techniques for the quantitative description of the characteristics of any corpus (or other language sample), particularly concerning the characteristics of lexical distributions. Additionally, SuperCAT features a complete re-engineering of the previous SubCAT architecture. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,752
inproceedings | roziewski-stokowiec-2016-languagecrawl | {L}anguage{C}rawl: A Generic Tool for Building Language Models Upon {C}ommon-{C}rawl | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1443/ | Roziewski, Szymon and Stokowiec, Wojciech | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2789--2793 | The web contains an immense amount of data: hundreds of billions of words are waiting to be extracted and used for language research. In this work we introduce our tool LanguageCrawl, which allows NLP researchers to easily construct a web-scale corpus from the Common Crawl Archive: a petabyte-scale, open repository of web crawl information. Three use-cases are presented: filtering Polish websites, building N-gram corpora and training a continuous skip-gram language model with hierarchical softmax. Each of them has been implemented within the LanguageCrawl toolkit, with the possibility of adjusting the target language and N-gram ranks. Special effort has been put into high computing efficiency by applying highly concurrent multitasking. We make our tool publicly available to enrich NLP resources. We strongly believe that our work will help to facilitate NLP research, especially in under-resourced languages, where the lack of appropriately sized corpora is a serious hindrance to applying data-intensive methods, such as deep neural networks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,753
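The skip-gram use-case above lends itself to a short illustration. Below is a minimal sketch, assuming gensim >= 4.0 and a pre-tokenized one-sentence-per-line text file; the file names are invented placeholders, and LanguageCrawl's own training interface is not shown in the abstract.

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# One pre-tokenized sentence per line, e.g. filtered Polish web text
# extracted from Common Crawl (placeholder path, not a real artifact).
sentences = LineSentence("polish_commoncrawl_tokenized.txt")

model = Word2Vec(
    sentences,
    sg=1,            # continuous skip-gram rather than CBOW
    hs=1,            # hierarchical softmax ...
    negative=0,      # ... instead of negative sampling
    vector_size=300,
    window=5,
    min_count=10,
    workers=8,
)
model.save("polish_skipgram.model")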
inproceedings | eckart-etal-2016-features | Features for Generic Corpus Querying | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1444/ | Eckart, Thomas and Kuras, Christoph and Quasthoff, Uwe | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2794--2798 | The availability of large corpora for more and more languages calls for generic querying and standard interfaces. This development is especially relevant in the context of integrated research environments like CLARIN or DARIAH. The paper focuses on several applications and implementation details on the basis of a unified corpus format, a unique POS tag set, and prepared data for word similarities. All described data and applications are already accessible, or will be in the near future, via well-documented RESTful Web services. The target group is all kinds of interested persons with varying levels of experience in programming or corpus query languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,754
inproceedings | baisa-etal-2016-european | {E}uropean {U}nion Language Resources in {S}ketch {E}ngine | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1445/ | Baisa, V{\'i}t and Michelfeit, Jan and Medve{\v{d}}, Marek and Jakub{\'i}{\v{c}}ek, Milo{\v{s}} | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2799--2803 | Several parallel corpora built from European Union language resources are presented here. They were processed by state-of-the-art tools and made available to researchers in the corpus manager Sketch Engine. A completely new resource is introduced: the EUR-Lex Corpus, one of the largest parallel corpora available at the moment, containing 840 million English tokens; its largest language pair, English-French, has more than 25 million aligned segments (paragraphs). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,755
inproceedings | banski-etal-2016-corpus | {C}orpus {Q}uery {L}ingua {F}ranca ({CQLF}) | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1446/ | Ba{\'n}ski, Piotr and Frick, Elena and Witt, Andreas | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2804--2809 | The present paper describes Corpus Query Lingua Franca (ISO CQLF), a specification designed at ISO Technical Committee 37 Subcommittee 4 {\textquotedblleft}Language resource management{\textquotedblright} for the purpose of facilitating the comparison of properties of corpus query languages. We overview the motivation for this endeavour and present its aims and its general architecture. CQLF is intended as a multi-part specification; here, we concentrate on the basic metamodel that provides a frame that the other parts fit in. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,756 |
inproceedings | kiss-etal-2016-sense | A sense-based lexicon of count and mass expressions: The Bochum {E}nglish Countability Lexicon | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1447/ | Kiss, Tibor and Pelletier, Francis Jeffry and Husic, Halima and Simunic, Roman Nino and Poppek, Johanna Marie | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2810--2814 | The present paper describes the current release of the Bochum English Countability Lexicon (BECL 2.1), a large empirical database consisting of lemmata from Open ANC (\url{http://www.anc.org}) with added senses from WordNet (Fellbaum 1998). BECL 2.1 contains {\ensuremath{\approx}} 11,800 annotated noun-sense pairs, divided into four major countability classes and 18 fine-grained subclasses. In the current version, BECL also provides information on nouns whose senses occur in more than one class, allowing a closer look at polysemy and homonymy with regard to countability. Further included are sets of similar senses using the Leacock and Chodorow (LCH) score for semantic similarity (Leacock {\&} Chodorow 1998), information on orthographic variation, on the completeness of all WordNet senses in the database, and an annotated representation of different types of proper names. The further development of BECL will investigate the different countability classes of proper names and the general relation between semantic similarity and countability as well as recurring syntactic patterns for noun-sense pairs. The BECL 2.1 database is also publicly available via \url{http://count-and-mass.org}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,757
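For readers unfamiliar with the LCH score mentioned above, the sketch below computes it over WordNet with NLTK; the two noun senses are illustrative examples, not entries taken from BECL.

from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

# LCH similarity is -log(shortest_path / (2 * taxonomy_depth)) and is
# defined only for two senses of the same part of speech.
water = wn.synset("water.n.01")
milk = wn.synset("milk.n.01")
print(water.lch_similarity(milk))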
inproceedings | kornai-etal-2016-detecting | Detecting Optional Arguments of Verbs | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1448/ | Kornai, Andr{\'a}s and Nemeskey, D{\'a}vid M{\'a}rk and Recski, G{\'a}bor | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2815--2818 | We propose a novel method for detecting optional arguments of Hungarian verbs using only positive data. We introduce a custom variant of collexeme analysis that explicitly models the noise in verb frames. Our method is, for the most part, unsupervised: we use the spectral clustering algorithm described in Brew and Schulte in Walde (2002) to build a noise model from a short, manually verified seed list of verbs. We experimented with both raw count- and context-based clusterings and found their performance almost identical. The code for our algorithm and the frame list are freely available at \url{http://hlt.bme.hu/en/resources/tade}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,758 |
inproceedings | kloppenburg-nissim-2016-leveraging | Leveraging Native Data to Correct Preposition Errors in Learners' {D}utch | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1449/ | Kloppenburg, Lennart and Nissim, Malvina | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2819--2824 | We address the task of automatically correcting preposition errors in learners' Dutch by modelling preposition usage in native language. Specifically, we build two models exploiting a large corpus of Dutch. The first is a binary model for detecting whether a preposition should be used at all in a given position or not. The second is a multiclass model for selecting the appropriate preposition in case one should be used. The models are tested on native as well as learners data. For the latter we exploit a crowdsourcing strategy to elicit native judgements. On native test data the models perform very well, showing that we can model preposition usage appropriately. However, the evaluation on learners' data shows that while detecting that a given preposition is wrong is doable reasonably well, detecting the absence of a preposition is a lot more difficult. Observing such results and the data we deal with, we envisage various ways of improving performance, and report them in the final section of this article. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,759 |
inproceedings | basile-sangati-2016-h | {D}({H})ante: A New Set of Tools for {XIII} Century {I}talian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1450/ | Basile, Angelo and Sangati, Federico | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2825--2828 | In this paper we describe 1) the process of converting a corpus of Dante Alighieri from a TEI XML format into a pseudo-CoNLL format; 2) how a POS tagger trained on modern Italian performs on Dante`s Italian; and 3) the performance of two different POS taggers trained on the given corpus. We are making our conversion scripts and models available to the community. The two taggers trained on the corpus perform reasonably well. The tool used for the conversion process might prove useful for bridging the gap between traditional digital humanities and modern NLP applications, since the TEI original format is not usually suitable for being processed with standard NLP tools. We believe our work will serve both communities: the DH community will be able to tag new documents, and the NLP world will have an easier way of converting existing documents to a standardized machine-readable format. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,760
inproceedings | celli-etal-2016-multilevel | Multilevel Annotation of Agreement and Disagreement in {I}talian News Blogs | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1451/ | Celli, Fabio and Riccardi, Giuseppe and Alam, Firoj | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2829--2832 | In this paper, we present a corpus of news blog conversations in Italian annotated with gold-standard agreement/disagreement relations (ADRs) at message and sentence levels. This is the first resource of this kind in Italian. The analysis of ADRs at the two levels showed that agreement annotated at message level is consistent and generally reflected at sentence level; moreover, the argumentation structure of disagreement is more complex than that of agreement. The manual error analysis revealed that this resource is useful not only for the analysis of argumentation, but also for the detection of irony/sarcasm in online debates. The corpus and annotation tool are available for research purposes on request. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,761
inproceedings | ozates-etal-2016-sentence | Sentence Similarity based on Dependency Tree Kernels for Multi-document Summarization | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1452/ | {\"O}zate{\c{s}}, {\c{S}}aziye Bet{\"u}l and {\"O}zg{\"u}r, Arzucan and Radev, Dragomir | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2833--2838 | We introduce an approach based on using the dependency grammar representations of sentences to compute sentence similarity for extractive multi-document summarization. We adapt and investigate the effects of two untyped dependency tree kernels, which were originally proposed for relation extraction, to the multi-document summarization problem. In addition, we propose a series of novel dependency grammar based kernels to better represent the syntactic and semantic similarities among the sentences. The proposed methods incorporate the type information of the dependency relations for sentence similarity calculation. To our knowledge, this is the first study that investigates using dependency tree based sentence similarity for multi-document summarization. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,762
inproceedings | loaiciga-gulordava-2016-discontinuous | Discontinuous Verb Phrases in Parsing and Machine Translation of {E}nglish and {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1453/ | Lo{\'a}iciga, Sharid and Gulordava, Kristina | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2839--2845 | In this paper, we focus on the verb-particle (V-Prt) split construction in English and German and its difficulty for parsing and Machine Translation (MT). For German, we use an existing test suite of V-Prt split constructions, while for English, we build a new and comparable test suite from raw data. These two data sets are then used to perform an analysis of errors in dependency parsing, word-level alignment and MT, which arise from the discontinuous order in V-Prt split constructions. In the automatic alignments of parallel corpora, most of the particles align to NULL. These mis-alignments and the inability of phrase-based MT system to recover discontinuous phrases result in low quality translations of V-Prt split constructions both in English and German. However, our results show that the V-Prt split phrases are correctly parsed in 90{\%} of cases, suggesting that syntactic-based MT should perform better on these constructions. We evaluate a syntactic-based MT system on German and compare its performance to the phrase-based system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,763 |
inproceedings | krisch-etal-2016-lexical | A Lexical Resource for the Identification of {\textquotedblleft}Weak Words{\textquotedblright} in {G}erman Specification Documents | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1454/ | Krisch, Jennifer and Dick, Melanie and Jauch, Ronny and Heid, Ulrich | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2846--2850 | We report on the creation of a lexical resource for the identification of potentially unspecific or imprecise constructions in German requirements documentation from the car manufacturing industry. In requirements engineering, such expressions are called {\textquotedblleft}weak words{\textquotedblright}: they are not sufficiently precise to ensure an unambiguous interpretation by the contractual partners, who, for the definition of their cooperation, typically rely on specification documents (Melchisedech, 2000); examples are dimension adjectives, such as kurz or lang ({\textquoteleft}short', {\textquoteleft}long'), which need to be modified by adverbials indicating the exact duration, size etc. Contrary to standard practice in requirements engineering, where the identification of such weak words is merely based on stopword lists, we identify weak uses in context, by querying annotated text. The queries are part of the resource, as they define the conditions under which a word use is weak. We evaluate the recognition of weak uses on our development corpus and on an unseen evaluation corpus, reaching stable F1-scores above 0.95. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,764
inproceedings | vetulani-etal-2016-recent | Recent Advances in Development of a Lexicon-Grammar of {P}olish: {P}ol{N}et 3.0 | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1455/ | Vetulani, Zygmunt and Vetulani, Gra{\.z}yna and Kochanowski, Bart{\l}omiej | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2851--2854 | The granularity of PolNet (Polish Wordnet) is the main theoretical issue discussed in the paper. We describe the latest extension of PolNet including valency information of simple verbs and noun-verb collocations using manual and machine-assisted methods. Valency is defined to include both semantic and syntactic selectional restrictions. We assume the valency structure of a verb to be an index of meaning. Consistently we consider it an attribute of a synset. Strict application of this principle results in fine granularity of the verb section of the wordnet. Considering valency as a distinctive feature of synsets was an essential step to transform the initial PolNet (first intended as a lexical ontology) into a lexicon-grammar. For the present refinement of PolNet we assume that the category of language register is a part of meaning. The totality of PolNet 2.0 synsets is being revised in order to split the PolNet 2.0 synsets that contain different register words into register-uniform sub-synsets. We completed this operation for synsets that were used as values of semantic roles. The operation augmented the number of considered synsets by 29{\%}. In the paper we report an extension of the class of collocation-based verb synsets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,765 |
inproceedings | mahlow-2016-c | {C}-{WEP}{\textemdash}{R}ich Annotated Collection of Writing Errors by Professionals | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1456/ | Mahlow, Cerstin | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2855--2861 | This paper presents C-WEP, the Collection of Writing Errors by Professional Writers of German. It currently consists of 245 sentences with grammatical errors. All sentences are taken from published texts. All authors are professional writers with high skill levels with respect to German, the genres, and the topics. The purpose of this collection is to provide seeds for more sophisticated writing support tools, as only a very small proportion of these errors can be detected by state-of-the-art checkers. C-WEP is annotated on various levels and freely available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,766
inproceedings | klyueva-stranak-2016-improving | Improving corpus search via parsing | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1457/ | Klyueva, Natalia and Stra{\v{n}}{\'a}k, Pavel | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2862--2866 | In this paper, we describe an addition to the corpus query system Kontext that enables to enhance the search using syntactic attributes in addition to the existing features, mainly lemmas and morphological categories. We present the enhancements of the corpus query system itself, the attributes we use to represent syntactic structures in data, and some examples of querying the syntactically annotated corpora, such as treebanks in various languages as well as an automatically parsed large corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,767 |
inproceedings | palogiannidi-etal-2016-affective | Affective Lexicon Creation for the {G}reek Language | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1458/ | Palogiannidi, Elisavet and Koutsakis, Polychronis and Iosif, Elias and Potamianos, Alexandros | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2867--2872 | Starting from the English affective lexicon ANEW (Bradley and Lang, 1999a), we have created the first Greek affective lexicon. It contains human ratings for the three continuous affective dimensions of valence, arousal and dominance for 1034 words. The Greek affective lexicon is compared with affective lexica in English, Spanish and Portuguese. The lexicon is automatically expanded by selecting a small number of manually annotated words to bootstrap the process of estimating affective ratings of unknown words. We experimented with the parameters of the semantic-affective model in order to investigate their impact on its performance, which reaches 85{\%} binary classification accuracy (positive vs. negative ratings). We share the Greek affective lexicon that consists of 1034 words and the automatically expanded Greek affective lexicon that contains 407K words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,768
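As a rough illustration of the bootstrapping idea described above (a similarity-weighted combination of seed ratings, not the authors' exact semantic-affective model), the valence of an unseen word could be estimated as follows; the similarity function is deliberately left abstract.

import numpy as np

def estimate_valence(word, seeds, seed_ratings, similarity):
    """Estimate the valence of `word` from manually rated seed words.

    seeds: list of seed words; seed_ratings: array of their valence
    ratings; similarity: callable (w1, w2) -> float in [0, 1].
    """
    sims = np.array([similarity(word, s) for s in seeds])
    if sims.sum() == 0.0:
        return 0.0  # no semantic evidence: back off to a neutral rating
    return float(np.dot(sims, seed_ratings) / sims.sum())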
inproceedings | szabo-etal-2016-hungarian | A {H}ungarian Sentiment Corpus Manually Annotated at Aspect Level | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1459/ | Szab{\'o}, Martina Katalin and Vincze, Veronika and Simk{\'o}, Katalin Ilona and Varga, Viktor and Hangya, Viktor | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2873--2878 | In this paper we present a Hungarian sentiment corpus manually annotated at the aspect level. Our corpus consists of Hungarian opinion texts written about different types of products. The main aim of creating the corpus was to produce an appropriate database providing possibilities for developing text mining software tools. The corpus is a unique Hungarian database: to the best of our knowledge, no digitized Hungarian sentiment corpus annotated at the level of fragments and targets has been created so far. In addition, many language elements of the corpus that are relevant from the point of view of sentiment analysis received distinct types of tags in the annotation. In this paper, on the one hand, we present the method of annotation and discuss the difficulties of the annotation process. On the other hand, we provide some quantitative and qualitative data on the corpus. We conclude with a description of the applicability of the corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,769
inproceedings | ruppenhofer-brandes-2016-effect | Effect Functors for Opinion Inference | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1460/ | Ruppenhofer, Josef and Brandes, Jasper | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2879--2887 | Sentiment analysis has so far focused on the detection of explicit opinions. However, of late implicit opinions have received broader attention, the key idea being that the evaluation of an event type by a speaker depends on how the participants in the event are valued and how the event itself affects the participants. We present an annotation scheme for adding relevant information, couched in terms of so-called effect functors, to German lexical items. Our scheme synthesizes and extends previous proposals. We report on an inter-annotator agreement study. We also present results of a crowdsourcing experiment to test the utility of some known and some new functors for opinion inference where, unlike in previous work, subjects are asked to reason from event evaluation to participant evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,770 |
inproceedings | klenner-amsler-2016-sentiframes | {S}entiframes: A Resource for Verb-centered {G}erman Sentiment Inference | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1461/ | Klenner, Manfred and Amsler, Michael | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2888--2891 | In this paper, a German verb resource for verb-centered sentiment inference is introduced and evaluated. Our model specifies verb polarity frames that capture the polarity effects on the fillers of the verb`s arguments given a sentence with that verb frame. Verb signatures and selectional restrictions are also part of the model. An algorithm to apply the verb resource to treebank sentences and the results of our first evaluation are discussed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,771 |
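A toy sketch of how such verb polarity frames could be applied; the two German frames below are invented for illustration and are not taken from the Sentiframes resource itself.

# Each frame maps argument slots to the polarity effect on the filler.
FRAMES = {
    "loben":       {"subject": None, "object": "+"},  # 'to praise'
    "kritisieren": {"subject": None, "object": "-"},  # 'to criticize'
}

def project_polarity(verb, args):
    """args: {slot: filler}; returns {filler: polarity effect or None}."""
    frame = FRAMES.get(verb, {})
    return {filler: frame.get(slot) for slot, filler in args.items()}

# 'Die Partei kritisiert den Plan' -> the plan receives negative polarity.
print(project_polarity("kritisieren", {"subject": "Partei", "object": "Plan"}))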
inproceedings | stranisci-etal-2016-annotating | Annotating Sentiment and Irony in the Online {I}talian Political Debate on {\#}labuonascuola | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1462/ | Stranisci, Marco and Bosco, Cristina and Hern{\'a}ndez Far{\'i}as, Delia Iraz{\'u} and Patti, Viviana | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2892--2899 | In this paper we present the TWitterBuonaScuola corpus (TW-BS), a novel Italian linguistic resource for Sentiment Analysis, developed with the main aim of analyzing the online debate on the controversial Italian political reform {\textquotedblleft}Buona Scuola{\textquotedblright} (Good school), aimed at reorganizing the national educational and training systems. We describe the methodologies applied in the collection and annotation of data. The collection has been driven by the detection of the hashtags mainly used by the participants in the debate, while the annotation has been focused on sentiment polarity and irony, but also extended to mark the aspects of the reform that were mainly discussed in the debate. We describe the collection and annotation stages, and include an in-depth analysis of annotator disagreement, carried out with CrowdFlower, a crowdsourcing annotation platform. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,772
inproceedings | el-beltagy-2016-nileulex | {N}ile{UL}ex: A Phrase and Word Level Sentiment Lexicon for {E}gyptian and {M}odern {S}tandard {A}rabic | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1463/ | El-Beltagy, Samhaa R. | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2900--2905 | This paper presents NileULex, an Arabic sentiment lexicon containing close to six thousand Arabic words and compound phrases. Forty-five percent of the terms and expressions in the lexicon are Egyptian or colloquial, while fifty-five percent are Modern Standard Arabic. While the collection of many of the terms included in the lexicon was done automatically, the actual addition of any term was done manually. One of the important criteria for adding terms to the lexicon was that they be as unambiguous as possible. The result is a lexicon of much higher quality than any translated or automatically constructed variant. To demonstrate that a lexicon such as this can directly impact the task of sentiment analysis, a very basic machine-learning-based sentiment analyser that uses unigrams, bigrams, and lexicon-based features was applied to two different Twitter datasets. The obtained results were compared to a baseline system that only uses unigrams and bigrams. The same lexicon-based features were also generated using a publicly available translation of a popular sentiment lexicon. The experiments show that usage of the developed lexicon improves the results over both the baseline and the publicly available lexicon. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,773
inproceedings | wawer-2016-opfi | {OPFI}: A Tool for Opinion Finding in {P}olish | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1464/ | Wawer, Aleksander | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2906--2909 | The paper contains a description of OPFI: Opinion Finder for the Polish Language, a freely available tool for opinion target extraction. The goal of the tool is opinion finding: the task of identifying tuples composed of a sentiment (positive or negative) and its target (about what or whom the sentiment is expressed). OPFI is not dependent on any particular method of sentiment identification and provides a built-in sentiment dictionary as a convenient option. Technically, it contains implementations of three different modes of opinion tuple generation: the first a hybrid based on dependency parsing and CRF, the second based on shallow parsing, and the third based on deep learning, namely a GRU neural network. The paper also contains a description of related language resources: two annotated treebanks and one set of tweets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,774
inproceedings | de-clercq-hoste-2016-rude | Rude waiter but mouthwatering pastries! An exploratory study into {D}utch Aspect-Based Sentiment Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1465/ | De Clercq, Orph{\'e}e and Hoste, V{\'e}ronique | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2910--2917 | The fine-grained task of automatically detecting all sentiment expressions within a given document and the aspects to which they refer is known as aspect-based sentiment analysis. In this paper we present the first full aspect-based sentiment analysis pipeline for Dutch and apply it to customer reviews. To this purpose, we collected reviews from two different domains, i.e. restaurant and smartphone reviews. Both corpora have been manually annotated using newly developed guidelines that comply to standard practices in the field. For our experimental pipeline we perceive aspect-based sentiment analysis as a task consisting of three main subtasks which have to be tackled incrementally: aspect term extraction, aspect category classification and polarity classification. First experiments on our Dutch restaurant corpus reveal that this is indeed a feasible approach that yields promising results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,775 |
inproceedings | shi-etal-2016-building | Building A Case-based Semantic {E}nglish-{C}hinese Parallel Treebank | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1466/ | Shi, Huaxing and Zhao, Tiejun and Su, Keh-Yih | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2918--2924 | We construct a case-based English-to-Chinese semantic constituent parallel Treebank for a Statistical Machine Translation (SMT) task by labelling each node of the Deep Syntactic Tree (DST) with our refined semantic cases. Since subtree span-crossing is harmful in tree-based SMT, DST is adopted to alleviate this problem. At the same time, we tailor an existing case set to represent bilingual shallow semantic relations more precisely. This Treebank is a part of a semantic corpus building project, which aims to build a semantic bilingual corpus annotated with syntactic, semantic cases and word senses. Data in our Treebank is from the news domain of the Datum corpus. 4,000 sentence pairs are selected to cover various lexicons and part-of-speech (POS) n-gram patterns as much as possible. This paper presents the construction of this case Treebank. Also, we have tested the effect of adopting the DST structure in alleviating subtree span-crossing. Our preliminary analysis shows that the compatibility between Chinese and English trees can be significantly increased by transforming the parse-tree into the DST. Furthermore, the human agreement rate in annotation is found to be acceptable (90{\%} for English nodes, 75{\%} for Chinese nodes). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,776
inproceedings | li-etal-2016-uzbek | {U}zbek-{E}nglish and {T}urkish-{E}nglish Morpheme Alignment Corpora | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1467/ | Li, Xuansong and Tracey, Jennifer and Grimes, Stephen and Strassel, Stephanie | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2925--2930 | Morphologically-rich languages pose problems for machine translation (MT) systems, including word-alignment errors, data sparsity and multiple affixes. Current word-level alignment models do not distinguish words and morphemes, thus yielding low-quality alignment and subsequently affecting end translation quality. Models using morpheme-level alignment can reduce the vocabulary size of morphologically-rich languages and overcome data sparsity. Alignment data based on the smallest units reveals subtle language features and enhances translation quality. Recent research proves such morpheme-level alignment (MA) data to be a valuable linguistic resource for SMT, particularly for languages with rich morphology. In support of this research trend, the Linguistic Data Consortium (LDC) created Uzbek-English and Turkish-English alignment data which are manually aligned at the morpheme level. This paper describes the creation of the MA corpora, including the alignment and tagging process and approaches, highlighting annotation challenges and specific features of languages with rich morphology. The light tagging annotation on the alignment layer adds extra value to the MA data, allowing users to flexibly tailor the data for training various MT models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,777
inproceedings | chu-etal-2016-parallel | Parallel Sentence Extraction from Comparable Corpora with Neural Network Features | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1468/ | Chu, Chenhui and Dabre, Raj and Kurohashi, Sadao | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2931--2935 | Parallel corpora are crucial for machine translation (MT), however they are quite scarce for most language pairs and domains. As comparable corpora are far more available, many studies have been conducted to extract parallel sentences from them for MT. In this paper, we exploit the neural network features acquired from neural MT for parallel sentence extraction. We observe significant improvements for both accuracy in sentence extraction and MT performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,778 |
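A schematic sketch of the extraction step, assuming some bilingual sentence encoder already provides vectors; scoring candidate pairs by cosine similarity with a threshold is a common stand-in here, not necessarily the authors' exact classifier over neural MT features.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def extract_parallel(src_vecs, tgt_vecs, threshold=0.8):
    """src_vecs, tgt_vecs: {sentence: vector} from a bilingual encoder.
    Keeps each source sentence's best-scoring target above the threshold."""
    pairs = []
    for src, sv in src_vecs.items():
        tgt, tv = max(tgt_vecs.items(), key=lambda kv: cosine(sv, kv[1]))
        if cosine(sv, tv) >= threshold:
            pairs.append((src, tgt))
    return pairs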
inproceedings | vicente-etal-2016-tweetmt | {T}weet{MT}: A Parallel Microblog Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1469/ | Vicente, I{\~n}aki San and Alegr{\'i}a, I{\~n}aki and Espa{\~n}a-Bonet, Cristina and Gamallo, Pablo and Oliveira, Hugo Gon{\c{c}}alo and Garcia, Eva Mart{\'i}nez and Toral, Antonio and Zubiaga, Arkaitz and Aranberri, Nora | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2936--2941 | We introduce TweetMT, a parallel corpus of tweets in four language pairs that combine five languages (Spanish from/to Basque, Catalan, Galician and Portuguese), all of which have an official status in the Iberian Peninsula. The corpus has been created by combining automatic collection and crowdsourcing approaches, and it is publicly available. It is intended for the development and testing of microtext machine translation systems. In this paper we describe the methodology followed to build the corpus, and present the results of the shared task in which it was tested. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,779 |
inproceedings | neves-etal-2016-scielo | The Scielo Corpus: a Parallel Corpus of Scientific Publications for Biomedicine | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1470/ | Neves, Mariana and Yepes, Antonio Jimeno and N{\'e}v{\'e}ol, Aur{\'e}lie | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2942--2948 | The biomedical scientific literature is a rich source of information not only in the English language, for which it is more abundant, but also in other languages, such as Portuguese, Spanish and French. We present the first freely available parallel corpus of scientific publications for the biomedical domain. Documents from the {\textquotedblleft}Biological Sciences{\textquotedblright} and {\textquotedblleft}Health Sciences{\textquotedblright} categories were retrieved from the Scielo database and parallel titles and abstracts are available for the following language pairs: Portuguese/English (about 86,000 documents in total), Spanish/English (about 95,000 documents) and French/English (about 2,000 documents). Additionally, monolingual data was also collected for all four languages. Sentences in the parallel corpus were automatically aligned and a manual analysis of 200 documents by native experts found that a minimum of 79{\%} of sentences were correctly aligned in all language pairs. We demonstrate the utility of the corpus by running baseline machine translation experiments. We show that for all language pairs, a statistical machine translation system trained on the parallel corpora achieves performance that rivals or exceeds the state of the art in the biomedical domain. Furthermore, the corpora are currently being used in the biomedical task in the First Conference on Machine Translation (WMT`16). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,780
inproceedings | ljubesic-etal-2016-producing | Producing Monolingual and Parallel Web Corpora at the Same Time - {S}pider{L}ing and Bitextor`s Love Affair | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1471/ | Ljube{\v{s}}i{\'c}, Nikola and Espl{\`a}-Gomis, Miquel and Toral, Antonio and Rojas, Sergio Ortiz and Klubi{\v{c}}ka, Filip | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2949--2956 | This paper presents an approach for building large monolingual corpora and, at the same time, extracting parallel data by crawling the top-level domain of a given language of interest. For gathering linguistically relevant data from top-level domains we use the SpiderLing crawler, modified to crawl data written in multiple languages. The output of this process is then fed to Bitextor, a tool for harvesting parallel data from a collection of documents. We call the system combining these two tools Spidextor, a blend of the names of its two crucial parts. We evaluate the described approach intrinsically by measuring the accuracy of the extracted bitexts from the Croatian top-level domain {\textquotedblleft}.hr{\textquotedblright} and the Slovene top-level domain {\textquotedblleft}.si{\textquotedblright}, and extrinsically on the English-Croatian language pair by comparing an SMT system built from the crawled data with third-party systems. We finally present parallel datasets collected with our approach for the English-Croatian, English-Finnish, English-Serbian and English-Slovene language pairs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,781 |
inproceedings | bell-etal-2016-towards | Towards Using Social Media to Identify Individuals at Risk for Preventable Chronic Illness | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1472/ | Bell, Dane and Fried, Daniel and Huangfu, Luwen and Surdeanu, Mihai and Kobourov, Stephen | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2957--2964 | We describe a strategy for the acquisition of training data necessary to build a social-media-driven early detection system for individuals at risk for (preventable) type 2 diabetes mellitus (T2DM). The strategy uses a game-like quiz with data and questions acquired semi-automatically from Twitter. The questions are designed to inspire participant engagement and collect relevant data to train a public-health model applied to individuals. Prior systems designed to use social media such as Twitter to predict obesity (a risk factor for T2DM) operate on entire communities such as states, counties, or cities, based on statistics gathered by government agencies. Because there is considerable variation among individuals within these groups, training data on the individual level would be more effective, but this data is difficult to acquire. The approach proposed here aims to address this issue. Our strategy has two steps. First, we trained a random forest classifier on data gathered from (public) Twitter statuses and state-level statistics with state-of-the-art accuracy. We then converted this classifier into a 20-questions-style quiz and made it available online. In doing so, we achieved high engagement with individuals that took the quiz, while also building a training set of voluntarily supplied individual-level data for future classification. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,782 |
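A minimal sketch of the classification step (a random forest over bag-of-words Twitter features); the texts and risk labels below are invented toy stand-ins for the crawled statuses and state-level statistics the abstract describes.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins: concatenated tweets per user and a binary risk label.
texts = [
    "soda and fast food again tonight",
    "ran 10k this morning feeling great",
    "another burger with extra fries",
    "meal prep kale salad for the week",
]
labels = [1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)

# Probability-like risk score for a new user's tweet history.
print(clf.predict_proba(vec.transform(["pizza and soda every day"]))[0])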
inproceedings | sommerdijk-etal-2016-tweets | Can Tweets Predict {TV} Ratings? | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1473/ | Sommerdijk, Bridget and Sanders, Eric and van den Bosch, Antal | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2965--2970 | We set out to investigate whether TV ratings and mentions of TV programmes on the Twitter social media platform are correlated. If such a correlation exists, Twitter may be used as an alternative source for estimating viewer popularity. Moreover, the Twitter-based rating estimates may be generated during the programme, or even before. We count the occurrences of programme-specific hashtags in an archive of Dutch tweets for eleven popular TV shows broadcast in the Netherlands in one season, and perform correlation tests. Overall we find a strong correlation of 0.82; the correlation remains strong, 0.79, if tweets are counted half an hour before broadcast time. However, the two most popular TV shows account for most of the positive effect; if we leave out the most popular and second-most popular TV shows, the correlation drops to moderate or weak. Also, within a TV show, correlations between ratings and tweet counts are mostly weak, while correlations between TV ratings of the previous and next shows are strong. In the absence of information on previous shows, Twitter-based counts may be a viable alternative to classic estimation methods for TV ratings. Estimates are more reliable for more popular TV shows. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,783
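The correlation test itself is a one-liner with SciPy; the counts and ratings below are invented numbers, not the study's data.

from scipy.stats import pearsonr

hashtag_counts = [1200, 340, 560, 2100, 80, 430]  # mentions per broadcast
ratings = [1.9, 0.7, 1.1, 2.8, 0.3, 0.9]          # viewers, in millions

r, p = pearsonr(hashtag_counts, ratings)
print(f"r = {r:.2f} (p = {p:.3f})")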
inproceedings | park-etal-2016-classifying | Classifying Out-of-vocabulary Terms in a Domain-Specific Social Media Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1474/ | Park, SoHyun and Fazly, Afsaneh and Lee, Annie and Seibel, Brandon and Zi, Wenjie and Cook, Paul | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2971--2975 | In this paper we consider the problem of out-of-vocabulary term classification in web forum text from the automotive domain. We develop a set of nine domain- and application-specific categories for out-of-vocabulary terms. We then propose a supervised approach to classify out-of-vocabulary terms according to these categories, drawing on features based on word embeddings, and linguistic knowledge of common properties of out-of-vocabulary terms. We show that the features based on word embeddings are particularly informative for this task. The categories that we predict could serve as a preliminary, automatically-generated source of lexical knowledge about out-of-vocabulary terms. Furthermore, we show that this approach can be adapted to give a semi-automated method for identifying out-of-vocabulary terms of a particular category, automotive named entities, that is of particular interest to us. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,784 |
inproceedings | sakaki-etal-2016-corpus | Corpus for Customer Purchase Behavior Prediction in Social Media | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1475/ | Sakaki, Shigeyuki and Chen, Francine and Korpusik, Mandy and Chen, Yan-Ying | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2976--2980 | Many people post about their daily life on social media. These posts may include information about people`s purchase activity, and insights useful to companies can be derived from them, e.g. profile information of a user who mentioned something about their product. As a further advanced analysis, we consider extracting users who are likely to buy a product from the set of users who mentioned that the product is attractive. In this paper, we report our methodology for building a corpus for Twitter user purchase behavior prediction. First, we collected Twitter users who posted a want phrase plus a product name, e.g. {\textquotedblleft}want a Xperia{\textquotedblright}, as candidate want users, and collected candidate bought users in the same way. Then, we asked an annotator to judge whether a candidate user actually bought the product. We also annotated whether tweets randomly sampled from want/bought user timelines are relevant to purchase or not. In this annotation, 58{\%} of want user tweets and 35{\%} of bought user tweets were annotated as relevant. Our data indicate that information embedded in timeline tweets can be used to predict purchase behavior for tweeted products. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,785
inproceedings | celebi-ozgur-2016-segmenting | Segmenting Hashtags using Automatically Created Training Data | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1476/ | {\c{C}}elebi, Arda and {\"{O}}zg{\"{u}}r, Arzucan | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2981--2985 | Hashtags, which are commonly composed of multiple words, are increasingly used to convey the actual messages in tweets. Understanding what tweets are saying is getting more dependent on understanding hashtags. Therefore, identifying the individual words that constitute a hashtag is an important, yet a challenging task due to the abrupt nature of the language used in tweets. In this study, we introduce a feature-rich approach based on using supervised machine learning methods to segment hashtags. Our approach is unsupervised in the sense that instead of using manually segmented hashtags for training the machine learning classifiers, we automatically create our training data by using tweets as well as by automatically extracting hashtag segmentations from a large corpus. We achieve promising results with such automatically created noisy training data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,786
inproceedings | hovy-johannsen-2016-exploring | Exploring Language Variation Across {E}urope - A Web-based Tool for Computational Sociolinguistics | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Goggi, Sara and Grobelnik, Marko and Maegaard, Bente and Mariani, Joseph and Mazo, Helene and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2016 | Portoro{\v{z}}, Slovenia | European Language Resources Association (ELRA) | https://aclanthology.org/L16-1477/ | Hovy, Dirk and Johannsen, Anders | Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}`16) | 2986--2989 | Language varies not only between countries, but also along regional and socio-demographic lines. This variation is one of the driving factors behind language change. However, investigating language variation is a complex undertaking: the more factors we want to consider, the more data we need. Traditional qualitative methods are not well-suited to do this, and are therefore restricted to isolated factors. This reduction limits the potential insights, and risks attributing undue importance to easily observed factors. While there is a large interest in linguistics to increase the quantitative aspect of such studies, it requires training in both variational linguistics and computational methods, a combination that is still not common. We take a first step here to alleviating the problem by providing an interface, www.languagevariation.com, to explore large-scale language variation along multiple socio-demographic factors {--} without programming knowledge. It makes use of large amounts of data and provides statistical analyses, maps, and interactive features that will enable scholars to explore language variation in a data-driven way. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 60,787