entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
inproceedings | faessler-etal-2014-disclose | Disclose Models, Hide the Data - How to Make Use of Confidential Corpora without Seeing Sensitive Raw Data | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1712/ | Faessler, Erik and Hellrich, Johannes and Hahn, Udo | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Confidential corpora from the medical, enterprise, security or intelligence domains often contain sensitive raw data which lead to severe restrictions as far as the public accessibility and distribution of such language resources are concerned. The enforcement of strict mechanisms of data protection constitutes a serious barrier for progress in language technology (products) in such domains, since these data are extremely rare or even unavailable for scientists and developers not directly involved in the creation and maintenance of such resources. In order to bypass this problem, we here propose to distribute trained language models which were derived from such resources as a substitute for the original confidential raw data which remain hidden to the outside world. As an example, we exploit the access-protected German-language medical FRAMED corpus from which we generate and distribute models for sentence splitting, tokenization and POS tagging based on software taken from OPENNLP, NLTK and JCORE, our own UIMA-based text analytics pipeline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,748 |
inproceedings | khapra-etal-2014-transliteration | When Transliteration Met Crowdsourcing : An Empirical Study of Transliteration via Crowdsourcing using Efficient, Non-redundant and Fair Quality Control | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1713/ | Khapra, Mitesh M. and Ramanathan, Ananthakrishnan and Kunchukuttan, Anoop and Visweswariah, Karthik and Bhattacharyya, Pushpak | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Sufficient parallel transliteration pairs are needed for training state of the art transliteration engines. Given the cost involved, it is often infeasible to collect such data using experts. Crowdsourcing could be a cheaper alternative, provided that a good quality control (QC) mechanism can be devised for this task. Most QC mechanisms employed in crowdsourcing are aggressive (unfair to workers) and expensive (unfair to requesters). In contrast, we propose a low-cost QC mechanism which is fair to both workers and requesters. At the heart of our approach, lies a rule based Transliteration Equivalence approach which takes as input a list of vowels in the two languages and a mapping of the consonants in the two languages. We empirically show that our approach outperforms other popular QC mechanisms (\textit{viz.}, consensus and sampling) on two vital parameters : (i) fairness to requesters (lower cost per correct transliteration) and (ii) fairness to workers (lower rate of rejecting correct answers). Further, as an extrinsic evaluation we use the standard NEWS 2010 test set and show that such quality controlled crowdsourced data compares well to expert data when used for training a transliteration engine. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,749 |
inproceedings | baumgardt-etal-2014-open | Open Philology at the {U}niversity of {L}eipzig | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1714/ | Baumgardt, Frederik and Celano, Giuseppe and Crane, Gregory R. and Dee, Stella and Foradi, Maryam and Franzini, Emily and Franzini, Greta and Lent, Monica and Moritz, Maria and Stoyanova, Simona | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | The Open Philology Project at the University of Leipzig aspires to re-assert the value of philology in its broadest sense. Philology signifies the widest possible use of the linguistic record to enable a deep understanding of the complete lived experience of humanity. Pragmatically, we focus on Greek and Latin because (1) substantial collections and services are already available within these languages, (2) substantial user communities exist (c. 35,000 unique users a month at the Perseus Digital Library), and (3) a European-based project is better positioned to process extensive cultural heritage materials in these languages rather than in Chinese or Sanskrit. The Open Philology Project has been designed with the hope that it can contribute to any historical language that survives within the human record. It includes three tasks: (1) the creation of an open, extensible, repurposable collection of machine-readable linguistic sources; (2) the development of dynamic textbooks that use annotated corpora to customize the vocabulary and grammar of texts that learners want to read, and at the same time engage students in collaboratively producing new annotated data; (3) the establishment of new workflows for, and forms of, publication, from individual annotations with argumentation to traditional publications with integrated machine-actionable data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,750 |
inproceedings | del-tredici-nissim-2014-modular | A Modular System for Rule-based Text Categorisation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1715/ | Del Tredici, Marco and Nissim, Malvina | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We introduce a modular rule-based approach to text categorisation which is more flexible and less time consuming to build than a standard rule-based system because it works with a hierarchical structure and allows for re-usability of rules. When compared to currently more wide-spread machine learning models on a case study, our modular system shows competitive results, and it has the advantage of reducing manual effort over time, since only fewer rules must be written when moving to a (partially) new domain, while annotation of training data is always required in the same amount. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,751 |
inproceedings | hajlaoui-etal-2014-dcep | {DCEP} - Digital Corpus of the {E}uropean Parliament | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1716/ | Hajlaoui, Najeh and Kolovratnik, David and V{\"a}yrynen, Jaakko and Steinberger, Ralf and Varga, Daniel | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We are presenting a new highly multilingual document-aligned parallel corpus called DCEP - Digital Corpus of the European Parliament. It consists of various document types covering a wide range of subject domains. With a total of 1.37 billion words in 23 languages (253 language pairs), gathered in the course of ten years, this is the largest single release of documents by a European Union institution. DCEP contains most of the content of the European Parliament`s official Website. It includes different document types produced between 2001 and 2012, excluding only the documents that already exist in the Europarl corpus to avoid overlap. We are presenting the typical acquisition steps of the DCEP corpus: data access, document alignment, sentence splitting, normalisation and tokenisation, and sentence alignment efforts. The sentence-level alignment is still in progress, but based on some first experiments we showed that DCEP is very useful for NLP applications, in particular for Statistical Machine Translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,752 |
inproceedings | mariani-etal-2014-facing | Facing the Identification Problem in Language-Related Scientific Data Analysis. | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1717/ | Mariani, Joseph and Cieri, Christopher and Francopoulo, Gil and Paroubek, Patrick and Delaborde, Marine | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper describes the problems that must be addressed when studying large amounts of data over time which require entity normalization applied not to the usual genres of news or political speech, but to the genre of academic discourse about language resources, technologies and sciences. It reports on the normalization processes that had to be applied to produce data usable for computing statistics in three past studies on the LRE Map, the ISCA Archive and the LDC Bibliography. It shows the need for human expertise during normalization and the necessity to adapt the work to the study objectives. It investigates possible improvements for reducing the workload necessary to produce comparable results. Through this paper, we show the necessity to define and agree on international persistent and unique identifiers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,753 |
inproceedings | soury-devillers-2014-smile | Smile and Laughter in Human-Machine Interaction: a study of engagement | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1718/ | Soury, Mariette and Devillers, Laurence | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This article presents a corpus featuring adults playing games in interaction with a machine trying to induce laughter. This corpus was collected during Interspeech 2013 in Lyon to study behavioral differences correlated to different personalities and cultures. We first present the collection protocol, then the corpus obtained and finally different quantitative and qualitative measures. Smiles and laughs are types of affect bursts, which are defined as short emotional non-speech expressions. Here we correlate smile and laugh with personality traits and cultural background. Our final objective is to propose a measure of engagement deduced from those affect bursts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,754 |
inproceedings | robaldo-etal-2014-exploiting | Exploiting networks in Law | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1719/ | Robaldo, Livio and Boella, Guido and Di Caro, Luigi and Violato, Andrea | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In this paper we first introduce the working context related to the understanding of a heterogeneous network of references contained in the Italian regulatory framework. We then present an extended analysis of a large network of laws, providing several types of analytical evaluation that can be used within a legal management system for understanding the data through summarization, visualization, and browsing. In the legal domain, many tasks are still strictly supervised by humans, consuming time and energy that would drop dramatically with the help of automatic or semi-automatic supporting tools. We overview different techniques and methodologies, explaining how they can be helpful in actual scenarios. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,755 |
inproceedings | bjarnadottir-dadason-2014-utilizing | Utilizing constituent structure for compound analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1720/ | Bjarnad{\'o}ttir, Krist{\'i}n and Da{\dh}ason, J{\'o}n | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Compounding is extremely productive in Icelandic and multi-word compounds are common. The likelihood of finding previously unseen compounds in texts is thus very high, which makes out-of-vocabulary words a problem in the use of NLP tools. The tool described in this paper splits Icelandic compounds and shows their binary constituent structure. The probability of a constituent in an unknown (or unanalysed) compound forming a combined constituent with either of its neighbours is estimated, with the use of data on the constituent structure of over 240 thousand compounds from the Database of Modern Icelandic Inflection, and word frequencies from {\'I}slenskur or{\dh}asj{\'o}{\dh}ur, a corpus of approx. 550 million words. Thus, the structure of an unknown compound is derived by comparison with compounds with partially the same constituents and similar structure in the training data. The granularity of the split returned by the decompounder is important in tasks such as semantic analysis or machine translation, where a flat (non-structured) sequence of constituents is insufficient. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,756 |
inproceedings | zaghouani-etal-2014-large | Large Scale {A}rabic Error Annotation: Guidelines and Framework | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1721/ | Zaghouani, Wajdi and Mohit, Behrang and Habash, Nizar and Obeid, Ossama and Tomeh, Nadi and Rozovskaya, Alla and Farra, Noura and Alkuhlani, Sarah and Oflazer, Kemal | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We present annotation guidelines and a web-based annotation framework developed as part of an effort to create a manually annotated Arabic corpus of errors and corrections for various text types. Such a corpus will be invaluable for developing Arabic error correction tools, both for training models and as a gold standard for evaluating error correction algorithms. We summarize the guidelines we created. We also describe issues encountered during the training of the annotators, as well as problems that are specific to the Arabic language that arose during the annotation process. Finally, we present the annotation tool that was developed as part of this project, the annotation pipeline, and the quality of the resulting annotations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,757 |
inproceedings | pellegrini-etal-2014-el | El-{WOZ}: a client-server wizard-of-oz interface | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1722/ | Pellegrini, Thomas and Hedayati, Vahid and Costa, Angela | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In this paper, we present a speech recording interface developed in the context of a project on automatic speech recognition for elderly native speakers of European Portuguese. In order to collect spontaneous speech in a situation of interaction with a machine, this interface was designed as a Wizard-of-Oz (WOZ) platform. In this setup, users interact with a fake automated dialog system controlled by a human wizard. It was implemented as a client-server application and the subjects interact with a talking head. The human wizard chooses pre-defined questions or sentences in a graphical user interface, which are then synthesized and spoken aloud by the avatar on the client side. A small spontaneous speech corpus was collected in a daily center. Eight speakers between 75 and 90 years old were recorded. They appreciated the interface and felt at ease with the avatar. Manual orthographic transcriptions were created for the total of about 45 minutes of speech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,758 |
inproceedings | cheng-etal-2014-parsing | Parsing {C}hinese Synthetic Words with a Character-based Dependency Model | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1723/ | Cheng, Fei and Duh, Kevin and Matsumoto, Yuji | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Synthetic word analysis is a potentially important but relatively unexplored problem in Chinese natural language processing. Two issues with the conventional pipeline methods involving word segmentation are (1) the lack of a common segmentation standard and (2) the poor segmentation performance on OOV words. These issues may be circumvented if we adopt the view of character-based parsing, providing both internal structures to synthetic words and global structure to sentences in a seamless fashion. However, the accuracy of synthetic word parsing is not yet satisfactory, due to the lack of research. In view of this, we propose and present experiments on several synthetic word parsers. Additionally, we demonstrate the usefulness of incorporating large unlabelled corpora and a dictionary for this task. Our parsers significantly outperform the baseline (a pipeline method). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,759 |
inproceedings | ben-jannet-etal-2014-eter | {ETER} : a new metric for the evaluation of hierarchical named entity recognition | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1724/ | Ben Jannet, Mohamed and Adda-Decker, Martine and Galibert, Olivier and Kahn, Juliette and Rosset, Sophie | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper addresses the question of hierarchical named entity evaluation. In particular, we focus on metrics to deal with complex named entity structures as those introduced within the QUAERO project. The intended goal is to propose a smart way of evaluating partially correctly detected complex entities, beyond the scope of traditional metrics. None of the existing metrics are fully adequate to evaluate the proposed QUAERO task involving entity detection, classification and decomposition. We are discussing the strong and weak points of the existing metrics. We then introduce a new metric, the Entity Tree Error Rate (ETER), to evaluate hierarchical and structured named entity detection, classification and decomposition. The ETER metric builds upon the commonly accepted SER metric, but it takes the complex entity structure into account by measuring errors not only at the slot (or complex entity) level but also at a basic (atomic) entity level. We are comparing our new metric to the standard one using first some examples and then a set of real data selected from the ETAPE evaluation results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,760 |
inproceedings | araki-etal-2014-detecting | Detecting Subevent Structure for Event Coreference Resolution | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1725/ | Araki, Jun and Liu, Zhengzhong and Hovy, Eduard and Mitamura, Teruko | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In the task of event coreference resolution, recent work has shown the need to perform not only full coreference but also partial coreference of events. We show that subevents can form a particular hierarchical event structure. This paper examines a novel two-stage approach to finding and improving subevent structures. First, we introduce a multiclass logistic regression model that can detect subevent relations in addition to full coreference. Second, we propose a method to improve subevent structure based on subevent clusters detected by the model. Using a corpus in the Intelligence Community domain, we show that the method achieves over 3.2 BLANC F1 gain in detecting subevent relations against the logistic regression model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,761 |
inproceedings | shah-etal-2014-efficient | An efficient and user-friendly tool for machine translation quality estimation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1726/ | Shah, Kashif and Turchi, Marco and Specia, Lucia | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We present a new version of QUEST, an open source framework for machine translation quality estimation, which brings a number of improvements: (i) it provides a Web interface and functionalities such that non-expert users, e.g. translators or lay-users of machine translations, can get quality predictions (or internal features of the framework) for translations without having to install the toolkit, obtain resources or build prediction models; (ii) it significantly improves over the previous runtime performance by keeping resources (such as language models) in memory; (iii) it provides an option for users to submit the source text only and automatically obtain translations from Bing Translator; (iv) it provides a ranking of multiple translations submitted by users for each source text according to their estimated quality. We exemplify the use of this new version through some experiments with the framework. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,762 |
inproceedings | balahur-etal-2014-resource | Resource Creation and Evaluation for Multilingual Sentiment Analysis in Social Media Texts | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1727/ | Balahur, Alexandra and Turchi, Marco and Steinberger, Ralf and Perea-Ortega, Jose-Manuel and Jacquet, Guillaume and K{\"u}{\c{c}}{\"u}k, Dilek and Zavarella, Vanni and El Ghali, Adil | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper presents an evaluation of the use of machine translation to obtain and employ data for training multilingual sentiment classifiers. We show that the use of machine translated data obtained similar results as the use of native-speaker translations of the same data. Additionally, our evaluations pinpoint to the fact that the use of multilingual data, including that obtained through machine translation, leads to improved results in sentiment classification. Finally, we show that the performance of the sentiment classifiers built on machine translated data can be improved using original data from the target language and that even a small amount of such texts can lead to significant growth in the classification performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,763 |
inproceedings | bingel-haider-2014-named | Named Entity Tagging a Very Large Unbalanced Corpus: Training and Evaluating {NE} Classifiers | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1728/ | Bingel, Joachim and Haider, Thomas | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We describe a systematic and application-oriented approach to training and evaluating named entity recognition and classification (NERC) systems, the purpose of which is to identify an optimal system and to train an optimal model for named entity tagging DeReKo, a very large general-purpose corpus of contemporary German (Kupietz et al., 2010). DeReKo's strong dispersion w.r.t. genre, register and time forces us to base our decision for a specific NERC system on an evaluation performed on a representative sample of DeReKo instead of performance figures that have been reported for the individual NERC systems when evaluated on more uniform and less diverse data. We create and manually annotate such a representative sample as evaluation data for three different NERC systems, for each of which various models are learnt on multiple training data. The proposed sampling method can be viewed as a generally applicable method for sampling evaluation data from an unbalanced target corpus for any sort of natural language processing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,764 |
inproceedings | lacroix-bechet-2014-validation | Validation Issues induced by an Automatic Pre-Annotation Mechanism in the Building of Non-projective Dependency Treebanks | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1729/ | Lacroix, Oph{\'e}lie and B{\'e}chet, Denis | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In order to build large dependency treebanks using the CDG Lab, a grammar-based dependency treebank development tool, an annotator usually has to fill a selection form before parsing. This step is usually necessary because, otherwise, the search space is too big for long sentences and the parser fails to produce at least one solution. With the information given by the annotator on the selection form the parser can produce one or several dependency structures and the annotator can proceed by adding positive or negative annotations on dependencies and launching iteratively the parser until the right dependency structure has been found. However, the selection form is sometimes difficult and long to fill because the annotator must have an idea of the result before parsing. The CDG Lab proposes to replace this form by an automatic pre-annotation mechanism. However, this model introduces some issues during the annotation phase that do not exist when the annotator uses a selection form. The article presents those issues and proposes some modifications of the CDG Lab in order to use effectively the automatic pre-annotation mechanism. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,765 |
inproceedings | ai-charfuelan-2014-mat | {MAT}: a tool for {L}2 pronunciation errors annotation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1730/ | Ai, Renlong and Charfuelan, Marcela | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In the area of Computer Assisted Language Learning (CALL), second language (L2) learners' spoken data is an important resource for analysing and annotating typical L2 pronunciation errors. The annotation of L2 pronunciation errors in spoken data is not an easy task though; normally it requires manual annotation from trained linguists or phoneticians. In order to facilitate this task, in this paper, we present the MAT tool, a web-based tool intended to facilitate the annotation of L2 learners' pronunciation errors at various levels. The tool has been designed taking into account recent studies on error detection in pronunciation training. It also aims at providing an easy and fast annotation process via a comprehensive and friendly user interface. The tool is based on the MARY TTS open source platform, from which it uses the components: text analyser (tokeniser, syllabifier, phonemiser), phonetic aligner and speech signal processor. Annotation results at sentence, word, syllable and phoneme levels are stored in XML format. The tool is currently under evaluation with an L2 learners' spoken corpus recorded in the SPRINTER (Language Technology for Interactive, Multi-Media Online Language Learning) project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,766
inproceedings | zervanou-etal-2014-word | Word Semantic Similarity for Morphologically Rich Languages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1731/ | Zervanou, Kalliopi and Iosif, Elias and Potamianos, Alexandros | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In this work, we investigate the role of morphology on the performance of semantic similarity for morphologically rich languages, such as German and Greek. The challenge in processing languages with richer morphology than English, lies in reducing estimation error while addressing the semantic distortion introduced by a stemmer or a lemmatiser. For this purpose, we propose a methodology for selective stemming, based on a semantic distortion metric. The proposed algorithm is tested on the task of similarity estimation between words using two types of corpus-based similarity metrics: co-occurrence-based and context-based. The performance on morphologically rich languages is boosted by stemming with the context-based metric, unlike English, where the best results are obtained by the co-occurrence-based metric. A key finding is that the estimation error reduction is different when a word is used as a feature, rather than when it is used as a target word. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,767 |
inproceedings | elliot-etal-2014-lexterm | {L}ex{T}erm Manager: Design for an Integrated Lexicography and Terminology System | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1732/ | Elliot, Joshua and Kearsley, Logan and Housley, Jason and Melby, Alan | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We present a design for a multi-modal database system for lexical information that can be accessed in either lexicographical or terminological views. The use of a single merged data model makes it easy to transfer common information between termbases and dictionaries, thus facilitating information sharing and re-use. Our combined model is based on the LMF and TMF metamodels for lexicographical and terminological databases and is compatible with both, thus allowing for the import of information from existing dictionaries and termbases, which may be transferred to the complementary view and re-exported. We also present a new Linguistic Configuration Model, analogous to a TBX XCS file, which can be used to specify multiple language-specific schemata for validating and understanding lexical information in a single database. Linguistic configurations are mutable and can be refined and evolved over time as understanding of documentary needs improves. The system is designed with a client-server architecture using the HTTP protocol, allowing for the independent implementation of multiple clients for specific use cases and easy deployment over the web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,768 |
inproceedings | peterson-etal-2014-focusing | Focusing Annotation for Semantic Role Labeling | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1733/ | Peterson, Daniel and Palmer, Martha and Wu, Shumin | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Annotation of data is a time-consuming process, but necessary for many state-of-the-art solutions to NLP tasks, including semantic role labeling (SRL). In this paper, we show that language models may be used to select sentences that are more useful to annotate. We simulate a situation where only a portion of the available data can be annotated, and compare language model based selection against a more typical baseline of randomly selected data. The data is ordered using an off-the-shelf language modeling toolkit. We show that the least probable sentences provide dramatically improved system performance over the baseline, especially when only a small portion of the data is annotated. In fact, the lion`s share of the performance can be attained by annotating only 10-20{\%} of the data. This result holds for training a model based on new annotation, as well as when adding domain-specific annotation to a general corpus for domain adaptation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,769
inproceedings | lapponi-etal-2014-road | Off-Road {LAF}: Encoding and Processing Annotations in {NLP} Workflows | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1734/ | Lapponi, Emanuele and Velldal, Erik and Oepen, Stephan and Knudsen, Rune Lain | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | The Linguistic Annotation Framework (LAF) provides an abstract data model for specifying interchange representations to ensure interoperability among different annotation formats. This paper describes an ongoing effort to adapt the LAF data model as the interchange representation in complex workflows as used in the Language Analysis Portal (LAP), an on-line and large-scale processing service that is developed as part of the Norwegian branch of the Common Language Resources and Technology Infrastructure (CLARIN) initiative. Unlike several related on-line processing environments, which predominantly instantiate a distributed architecture of web services, LAP achieves scalability to potentially very large data volumes through integration with the Norwegian national e-Infrastructure, and in particular job submission to a capacity compute cluster. This setup leads to tighter integration requirements and also calls for efficient, low-overhead communication of (intermediate) processing results within workflows. We meet these demands by coupling the LAF data model with a lean, non-redundant JSON-based interchange format and integration of an agile and performant NoSQL database, allowing parallel access from cluster nodes, as the central repository of linguistic annotation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,770
inproceedings | labropoulou-etal-2014-developing | Developing a Framework for Describing Relations among Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1735/ | Labropoulou, Penny and Cieri, Christopher and Gavrilidou, Maria | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In this paper, we study relations holding between language resources as implemented in activities concerned with their documentation. We envision the term language resources with an inclusive definition covering datasets (corpora, lexica, ontologies, grammars, etc.), tools (including web services, workflows, platforms etc.), related publications and documentation, specifications and guidelines. However, the scope of the paper is limited to relations holding for datasets and tools. The study focuses on the META-SHARE infrastructure and the Linguistic Data Consortium and takes into account the ISOcat DCR relations. Based on this study, we propose a taxonomy of relations, discuss their semantics and provide specifications for their use in order to cater for semantic interoperability. Issues of granularity, redundancy in codification, naming conventions and semantics of the relations are presented. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,771
inproceedings | de-groc-tannier-2014-evaluating | Evaluating Web-as-corpus Topical Document Retrieval with an Index of the {O}pen{D}irectory | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1736/ | de Groc, Cl{\'e}ment and Tannier, Xavier | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This article introduces a novel protocol and resource to evaluate Web-as-corpus topical document retrieval. To the contrary of previous work, our goal is to provide an automatic, reproducible and robust evaluation for this task. We rely on the OpenDirectory (DMOZ) as a source of topically annotated webpages and index them in a search engine. With this OpenDirectory search engine, we can then easily evaluate the impact of various parameters such as the number of seed terms, queries or documents, or the usefulness of various term selection algorithms. A first fully automatic evaluation is described and provides baseline performances for this task. The article concludes with practical information regarding the availability of the index and resource files. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,772 |
inproceedings | pal-etal-2014-word | Word Alignment-Based Reordering of Source Chunks in {PB}-{SMT} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1737/ | Pal, Santanu and Naskar, Sudip Kumar and Bandyopadhyay, Sivaji | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | Reordering poses a big challenge in statistical machine translation between distant language pairs. The paper presents how reordering between distant language pairs can be handled efficiently in phrase-based statistical machine translation. The problem of reordering between distant languages has been approached with prior reordering of the source text at chunk level to simulate the target language ordering. Prior reordering of the source chunks is performed in the present work by following the target word order suggested by word alignment. The test set is reordered using monolingual MT trained on source and reordered source. This approach of prior reordering of the source chunks was compared with pre-ordering of source words based on word alignments and the traditional approach of prior source reordering based on language-pair specific reordering rules. The effects of these reordering approaches were studied on an English{--}Bengali translation task, a language pair with different word order. From the experimental results it was found that word alignment based reordering of the source chunks is more effective than the other reordering approaches, and it produces statistically significant improvements over the baseline system on BLEU. On manual inspection we found significant improvements in terms of word alignments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,773
inproceedings | landsbergen-etal-2014-taalportaal | {T}aalportaal: an online grammar of {D}utch and {F}risian | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1738/ | Landsbergen, Frank and Tiberius, Carole and Dernison, Roderik | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In this paper, we present the Taalportaal project. Taalportaal will create an online portal containing an exhaustive and fully searchable electronic reference of Dutch and Frisian phonology, morphology and syntax. Its content will be in English. The main aim of the project is to serve the scientific community by organizing, integrating and completing the grammatical knowledge of both languages, and to make this data accessible in an innovative way. The project is carried out by a consortium of four universities and research institutions. Content is generated in two ways: (1) by a group of authors who, starting from existing grammatical resources, write text directly in XML, and (2) by integrating the full Syntax of Dutch into the portal, after an automatic conversion from Word to XML. We discuss the project`s workflow, content creation and management, the actual web application, and the way in which we plan to enrich the portal`s content, such as by crosslinking between topics and linking to external resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,774
inproceedings | yates-etal-2014-framework | A Framework for Public Health Surveillance | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1739/ | Yates, Andrew and Parker, Jon and Goharian, Nazli and Frieder, Ophir | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | With the rapid growth of social media, there is increasing potential to augment traditional public health surveillance methods with data from social media. We describe a framework for performing public health surveillance on Twitter data. Our framework, which is publicly available, consists of three components that work together to detect health-related trends in social media: a concept extraction component for identifying health-related concepts, a concept aggregation component for identifying how the extracted health-related concepts relate to each other, and a trend detection component for determining when the aggregated health-related concepts are trending. We describe the architecture of the framework and several components that have been implemented in the framework, identify other components that could be used with the framework, and evaluate our framework on approximately 1.5 years of tweets. While it is difficult to determine how accurately a Twitter trend reflects a trend in the real world, we discuss the differences in trends detected by several different methods and compare flu trends detected by our framework to data from Google Flu Trends. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,775 |
inproceedings | uresova-etal-2014-multilingual | Multilingual Test Sets for Machine Translation of Search Queries for Cross-Lingual Information Retrieval in the Medical Domain | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1740/ | Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka and Haji{\v{c}}, Jan and Pecina, Pavel and Du{\v{s}}ek, Ond{\v{r}}ej | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper presents development and test sets for machine translation of search queries in cross-lingual information retrieval in the medical domain. The data consists of the total of 1,508 real user queries in English translated to Czech, German, and French. We describe the translation and review process involving medical professionals and present a baseline experiment where our data sets are used for tuning and evaluation of a machine translation system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,776 |
inproceedings | ngonga-ngomo-etal-2014-tool | A tool suite for creating question answering benchmarks | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1741/ | Ngonga Ngomo, Axel-Cyrille and Heino, Norman and Speck, Ren{\'e} and Malakasiotis, Prodromos | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We introduce the BIOASQ suite, a set of open-source Web tools for the creation, assessment and community-driven improvement of question answering benchmarks. The suite comprises three main tools: (1) the annotation tool supports the creation of benchmarks per se. In particular, this tool allows a team of experts to create questions and answers as well as to annotate the latter with documents, document snippets, RDF triples and ontology concepts. While the creation of questions is supported by different views and contextual information pertaining to the same question, the creation of answers is supported by the integration of several search engines and context information to facilitate the retrieval of the said answers as well as their annotation. (2) The assessment tool allows comparing several answers to the same question. Therewith, it can be used to assess the inter-annotator agreement as well as to manually evaluate automatically generated answers. (3) The third tool in the suite, the social network, aims to ensure the sustainability and iterative improvement of the benchmark by empowering communities of experts to provide insights on the questions in the benchmark. The BIOASQ suite has already been used successfully to create the 311 questions comprised in the BIOASQ question answering benchmark. It has also been evaluated by the experts who used it to create the BIOASQ benchmark. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,777
inproceedings | de-groc-etal-2014-thematic | Thematic Cohesion: measuring terms discriminatory power toward themes | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1742/ | de Groc, Cl{\'e}ment and Tannier, Xavier and de Loupy, Claude | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We present a new measure of thematic cohesion. This measure associates each term with a weight representing its discriminatory power toward a theme, this theme being itself expressed by a list of terms (a thematic lexicon). This thematic cohesion criterion can be used in many applications, such as query expansion, computer-assisted translation, or iterative construction of domain-specific lexicons and corpora. The measure is computed in two steps. First, a set of documents related to the terms is gathered from the Web by querying a Web search engine. Then, we produce an oriented co-occurrence graph, where vertices are the terms and edges represent the fact that two terms co-occur in a document. This graph can be interpreted as a recommendation graph, where two terms occurring in a same document means that they recommend each other. This leads to using a random walk algorithm that assigns a global importance value to each vertex of the graph. After observing the impact of various parameters on those importance values, we evaluate their correlation with retrieval effectiveness. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,778 |
inproceedings | gornostay-vasiljevs-2014-terminology | Terminology Resources and Terminology Work Benefit from Cloud Services | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1743/ | Gornostay, Tatiana and Vasi{\c{l}}jevs, Andrejs | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper presents the concept of the innovative platform TaaS Terminology as a Service. TaaS brings the benefits of cloud services to the user, in order to foster the creation of terminology resources and to maintain their up-to-datedness by integrating automated data extraction and user-supported clean-up of raw terminological data and sharing user-validated terminology. The platform is based on cutting-edge technologies, provides single-access-point terminology services, and facilitates the establishment of emerging trends beyond conventional praxis and static models in terminology work. A cloud-based, user-oriented, collaborative, portable, interoperable, and multilingual platform offers such terminology services as terminology project creation and sharing, data collection for translation lookup, user document upload and management, terminology extraction customisation and execution, raw terminological data management, validated terminological data export and reuse, and other terminology services. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,779 |
inproceedings | asadullah-etal-2014-bidirectionnal | Bidirectionnal converter between syntactic annotations : from {F}rench Treebank Dependencies to {PASSAGE} annotations, and back | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1744/ | Asadullah, Munshi and Paroubek, Patrick and Vilnat, Anne | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | We present here part of a bidirectional converter between the French Treebank Dependency (FTB-DEP) annotations and the PASSAGE format. FTB-DEP is the representation used by several freely available parsers and the PASSAGE annotation was used to hand-annotate a relatively large-sized corpus, used as gold-standard in the PASSAGE evaluation campaigns. Our converter will give the means to evaluate these parsers on the PASSAGE corpus. We shall illustrate the mapping of important syntactic phenomena using the corpus made of the examples of the FTB-DEP annotation guidelines, which we have hand-annotated with PASSAGE annotations and used to compute quantitative performance measures on the FTB-DEP guidelines. In this paper we will briefly introduce the two annotation formats. Then, we detail the two converters, and the rules which have been written. The last part will detail the results we obtained on the phenomenon we mostly study, the passive form. We evaluate the converters by a double conversion, from PASSAGE to CoNLL and back to PASSAGE. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,780
inproceedings | zampieri-gebre-2014-varclass | {V}ar{C}lass: An Open-source Language Identification Tool for Language Varieties | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1745/ | Zampieri, Marcos and Gebre, Binyam | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | This paper presents VarClass, an open-source tool for language identification available both to be downloaded as well as through a user-friendly graphical interface. The main difference of VarClass in comparison to other state-of-the-art language identification tools is its focus on language varieties. General purpose language identification tools do not take language varieties into account and our work aims to fill this gap. VarClass currently contains language models for over 27 languages, 10 of which are language varieties. We report an average performance of over 90.5{\%} accuracy in a challenging dataset. More language models will be included in the upcoming months. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,781
inproceedings | rettinger-etal-2014-recsa | {RECSA}: Resource for Evaluating Cross-lingual Semantic Annotation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Loftsson, Hrafn and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2014 | Reykjavik, Iceland | European Language Resources Association (ELRA) | https://aclanthology.org/L14-1746/ | Rettinger, Achim and Zhang, Lei and Berovi{\'c}, Da{\v{s}}a and Merkler, Danijela and Sreba{\v{c}}i{\'c}, Matea and Tadi{\'c}, Marko | Proceedings of the Ninth International Conference on Language Resources and Evaluation ({LREC}`14) | null | In recent years large repositories of structured knowledge (DBpedia, Freebase, YAGO) have become a valuable resource for language technologies, especially for the automatic aggregation of knowledge from textual data. One essential component of language technologies, which leverage such knowledge bases, is the linking of words or phrases in specific text documents with elements from the knowledge base (KB). We call this semantic annotation. In the same time, initiatives like Wikidata try to make those knowledge bases less language dependent in order to allow cross-lingual or language independent knowledge access. This poses a new challenge to semantic annotation tools which typically are language dependent and link documents in one language to a structured knowledge base grounded in the same language. Ultimately, the goal is to construct cross-lingual semantic annotation tools that can link words or phrases in one language to a structured knowledge database in any other language or to a language independent representation. To support this line of research we developed what we believe could serve as a gold standard Resource for Evaluating Cross-lingual Semantic Annotation (RECSA). We compiled a hand-annotated parallel corpus of 300 news articles in three languages with cross-lingual semantic groundings to the English Wikipedia and DBPedia. We hope that this new language resource, which is freely available, will help to establish a standard test set and methodology to comparatively evaluate cross-lingual semantic annotation technologies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 67,782
inproceedings | mohammad-zhu-2014-sentiment | Sentiment Analysis of Social Media Texts | Specia, Lucia and Carreras, Xavier | oct | 2014 | Doha, Qatar | Association for Computational Linguistics | https://aclanthology.org/D14-2001/ | Mohammad, Saif M. and Zhu, Xiaodan | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Automatically detecting sentiment of product reviews, blogs, tweets, and SMS messages has attracted extensive interest from both academia and industry. It has a number of applications, including: tracking sentiment towards products, movies, politicians, etc.; improving customer relation models; detecting happiness and well-being; and improving automatic dialogue systems. In this tutorial, we will describe how you can create a state-of-the-art sentiment analysis system, with a focus on social media posts. We begin with an introduction to sentiment analysis and its various forms: term level, message level, document level, and aspect level. We will describe how sentiment analysis systems are evaluated, especially through recent SemEval shared tasks: Sentiment Analysis of Twitter (SemEval-2013 Task 2, SemEval-2014 Task 9) and Aspect Based Sentiment Analysis (SemEval-2014 Task 4). We will give an overview of the best sentiment analysis systems at this point of time, including those that are conventional statistical systems as well as those using deep learning approaches. We will describe in detail the NRC-Canada systems, which were the overall best performing systems in all three SemEval competitions listed above. These are simple lexical- and sentiment-lexicon features based systems, which are relatively easy to re-implement. We will discuss features that had the most impact (those derived from sentiment lexicons and negation handling). We will present how large tweet-specific sentiment lexicons can be automatically generated and evaluated. We will also show how negation impacts sentiment differently depending on whether the scope of the negation is positive or negative. Finally, we will flesh out limitations of current approaches and promising future directions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,317
inproceedings | artzi-etal-2014-semantic | Semantic Parsing with {C}ombinatory {C}ategorial {G}rammars | Specia, Lucia and Carreras, Xavier | oct | 2014 | Doha, Qatar | Association for Computational Linguistics | https://aclanthology.org/D14-2003/ | Artzi, Yoav and Fitzgerald, Nicholas and Zettlemoyer, Luke | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Semantic parsers map natural language sentences to formal representations of their underlying meaning. Building accurate semantic parsers without prohibitive engineering costs is a long-standing, open research problem. The tutorial will describe general principles for building semantic parsers. The presentation will be divided into two main parts: learning and modeling. In the learning part, we will describe a unified approach for learning Combinatory Categorial Grammar (CCG) semantic parsers that induces both a CCG lexicon and the parameters of a parsing model. The approach learns from data with labeled meaning representations, as well as from more easily gathered weak supervision. It also enables grounded learning where the semantic parser is used in an interactive environment, for example to read and execute instructions. The modeling section will include best practices for grammar design and choice of semantic representation. We will motivate our use of lambda calculus as a language for building and representing meaning with examples from several domains. The ideas we will discuss are widely applicable. The semantic modeling approach, while implemented in lambda calculus, could be applied to many other formal languages. Similarly, the algorithms for inducing CCG focus on tasks that are formalism independent, learning the meaning of words and estimating parsing parameters. No prior knowledge of CCG is required. 
The tutorial will be backed by implementation and experiments in the University of Washington Semantic Parsing Framework (UW SPF, \url{http://yoavartzi.com/spf}). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,319 |
inproceedings | williams-koehn-2014-syntax | Syntax-Based Statistical Machine Translation | Specia, Lucia and Carreras, Xavier | oct | 2014 | Doha, Qatar | Association for Computational Linguistics | https://aclanthology.org/D14-2005/ | Williams, Philip and Koehn, Philipp | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | The tutorial explains in detail syntax-based statistical machine translation with synchronous context free grammars (SCFG). It is aimed at researchers who have little background in this area, and gives a comprehensive overview of the main models and methods. While syntax-based models in statistical machine translation have a long history, spanning back almost 20 years, they have only recently shown superior translation quality over the more commonly used phrase-based models, and are now considered state of the art for some language pairs, such as Chinese-English (since ISI`s submission to NIST 2006), and English-German (since Edinburgh`s submission to WMT 2012). While the field is very dynamic, there is a core set of methods that have become dominant. Such SCFG models are implemented in the open source machine translation toolkit Moses, and the tutors draw from the practical experience of its development. The tutorial focuses on explaining core established concepts in SCFG-based approaches, which are the most popular in this area. The main goal of the tutorial is for the audience to understand how these systems work end-to-end. We review as much relevant literature as necessary, but the tutorial is not primarily a research survey. The tutorial is rounded up with open problems and advanced topics, such as computational challenges, different formalisms for syntax-based models and inclusion of semantics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,321
inproceedings | bordes-weston-2014-embedding | Embedding Methods for Natural Language Processing | Specia, Lucia and Carreras, Xavier | oct | 2014 | Doha, Qatar | Association for Computational Linguistics | https://aclanthology.org/D14-2006/ | Bordes, Antoine and Weston, Jason | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Embedding-based models are popular tools in Natural Language Processing these days. In this tutorial, our goal is to provide an overview of the main advances in this domain. These methods learn latent representations of words, as well as database entries that can then be used to do semantic search, automatic knowledge base construction, natural language understanding, etc. Our current plan is to split the tutorial into 2 sessions of 90 minutes, with a 30 minutes coffee break in the middle, so that we can cover in a first session the basics of learning embeddings and advanced models in the second session. This is detailed in the following. Part 1: Unsupervised and Supervised Embeddings. We introduce models that embed tokens (words, database entries) by representing them as low dimensional embedding vectors. Unsupervised and supervised methods will be discussed, including SVD, Word2Vec, Paragraph Vectors, SSI, Wsabie and others. A comparison between methods will be made in terms of applicability, type of loss function (ranking loss, reconstruction loss, classification loss), regularization, etc. The use of these models in several NLP tasks will be discussed, including question answering, frame identification, knowledge extraction and document retrieval. Part 2: Embeddings for Multi-relational Data. This second part will focus mostly on the construction of embeddings for multi-relational data, that is when tokens can be interconnected in different ways in the data such as in knowledge bases for instance. 
Several methods based on tensor factorization, collective matrix factorization, stochastic block models or energy-based learning will be presented. The task of link prediction in a knowledge base will be used as an application example. Multiple empirical results on the use of embedding models to align textual information to knowledge bases will also be presented, together with some demos if time permits. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,322 |
inproceedings | diab-habash-2014-natural | Natural Language Processing of {A}rabic and its Dialects | Specia, Lucia and Carreras, Xavier | oct | 2014 | Doha, Qatar | Association for Computational Linguistics | https://aclanthology.org/D14-2007/ | Diab, Mona and Habash, Nizar | Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | This tutorial introduces the different challenges and current solutions to the automatic processing of Arabic and its dialects. The tutorial has two parts: First, we present a discussion of generic issues relevant to Arabic NLP and detail dialectal linguistic issues and the challenges they pose for NLP. In the second part, we review the state-of-the-art in Arabic processing covering several enabling technologies and applications, e.g., dialect identification, morphological processing (analysis, disambiguation, tokenization, POS tagging), parsing, and machine translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,323 |
article | stern-dagan-2014-biutte | The {BIUTTE} Research Platform for Transformation-based Textual Entailment Recognition | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.2/ | Stern, Asher and Dagan, Ido | null | null | Recent progress in research of the Recognizing Textual Entailment (RTE) task shows a constantly-increasing level of complexity in this research field. A way to avoid having this complexity becoming a barrier for researchers, especially for new-comers in the field, is to provide a freely available RTE system with a high level of flexibility and extensibility. In this paper, we introduce our RTE system, BiuTee2, and suggest it as an effective research framework for RTE. In particular, BiuTee follows the prominent transformation-based paradigm for RTE, and offers an accessible platform for research within this approach. We describe each of BiuTee`s components and point out the mechanisms and properties which directly support adaptations and integration of new components. In addition, we describe BiuTee`s visual tracing tool, which provides notable assistance for researchers in refining and {\textquotedblleft}debugging{\textquotedblright} their knowledge resources and inference components. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,632 |
article | bos-2014-place | Is there a place for logic in recognizing textual entailment | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.3/ | Bos, Johan | null | null | From a purely theoretical point of view, it makes sense to approach recognizing textual entailment (RTE) with the help of logic. After all, entailment matters are all about logic. In practice, only few RTE systems follow the bumpy road from words to logic. This is probably because it requires a combination of robust, deep semantic analysis and logical inference{---}and why develop something with this complexity if you perhaps can get away with something simpler? In this article, with the help of an RTE system based on Combinatory Categorial Grammar, Discourse Representation Theory, and first-order theorem proving, we make an empirical assessment of the logic-based approach. High precision paired with low recall is a key characteristic of this system. The bottleneck in achieving high recall is the lack of a systematic way to produce relevant background knowledge. There is a place for logic in RTE, but it is (still) overshadowed by the knowledge acquisition problem. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,633 |
article | cabria-magnini-2014-decomposing | Decomposing Semantic Inference | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.4/ | Cabrio, Elena and Magnini, Bernardo | null | null | Beside formal approaches to semantic inference that rely on logical representation of meaning, the notion of Textual Entailment (TE) has been proposed as an applied framework to capture major semantic inference needs across applications in Computational Linguistics. Although several approaches have been tried and evaluation campaigns have shown improvements in TE, a renewed interest is rising in the research community towards a deeper and better understanding of the core phenomena involved in textual inference. Pursuing this direction, we are convinced that crucial progress will derive from a focus on decomposing the complexity of the TE task into basic phenomena and on their combination. In this paper, we carry out a deep analysis on TE data sets, investigating the relations between two relevant aspects of semantic inferences: the logical dimension, i.e. the capacity of the inference to prove the conclusion from its premises, and the linguistic dimension, i.e. the linguistic devices used to accomplish the goal of the inference. We propose a decomposition approach over TE pairs, where single linguistic phenomena are isolated in what we have called atomic inference pairs, and we show that at this granularity level the actual correlation between the linguistic and the logical dimensions of semantic inferences emerges and can be empirically observed. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,634
article | baroni-etal-2014-frege | Frege in Space: A Program for Composition Distributional Semantics | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.5/ | Baroni, Marco and Bernardi, Raffaella and Zamparelli, Roberto | null | null | The lexicon of any natural language encodes a huge number of distinct word meanings. Just to understand this article, you will need to know what thousands of words mean. The space of possible sentential meanings is infinite: In this article alone, you will encounter many sentences that express ideas you have never heard before, we hope. Statistical semantics has addressed the issue of the vastness of word meaning by proposing methods to harvest meaning automatically from large collections of text (corpora). Formal semantics in the Fregean tradition has developed methods to account for the infinity of sentential meaning based on the crucial insight of compositionality, the idea that meaning of sentences is built incrementally by combining the meanings of their constituents. This article sketches a new approach to semantics that brings together ideas from statistical and formal semantics to account, in parallel, for the richness of lexical meaning and the combinatorial power of sentential semantics. We adopt, in particular, the idea that word meaning can be approximated by the patterns of co-occurrence of words in corpora from statistical semantics, and the idea that compositionality can be captured in terms of a syntax-driven calculus of function application from formal semantics. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,635 |
article | lappin-2014-intensions | Intensions as Computable Functions | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.6/ | Lappin, Shalom | null | null | Classical intensional semantic frameworks, like Montague`s Intensional Logic (IL), identify intensional identity with logical equivalence. This criterion of co-intensionality is excessively coarse-grained, and it gives rise to several well-known difficulties. Theories of fine-grained intensionality have been proposed to avoid this problem. Several of these provide a formal solution to the problem, but they do not ground this solution in a substantive account of intensional difference. Applying the distinction between operational and denotational meaning, developed for the semantics of programming languages, to the interpretation of natural language expressions, offers the basis for such an account. It permits us to escape some of the complications generated by the traditional modal characterization of intensions. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,636
article | icard-iii-moss-2014-recent | Recent Progress on Monotonicity | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.7/ | Icard III, Thomas F. and Moss, Lawrence S. | null | null | This paper serves two purposes. It is a summary of much work concerning formal treatments of monotonicity and polarity in natural language, and it also discusses connections to related work on exclusion relations, and connections to psycholinguistics and computational linguistics. The second part of the paper presents a summary of some new work on a formal Monotonicity Calculus. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,637 |
article | pratt-hartmann-2014-relational | The Relational Syllogistic Revisited | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.8/ | Pratt-Hartmann, Ian | null | null | The relational syllogistic is an extension of the language of Classical syllogisms in which predicates are allowed to feature transitive verbs with quantified objects. It is known that the relational syllogistic does not admit a finite set of syllogism-like rules whose associated (direct) derivation relation is sound and complete. We present a modest extension of this language which does. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,638 |
article | schubert-2014-nlog | {NL}og-like Inference and Commonsense Reasoning | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.9/ | Schubert, Lenhart | null | null | Recent implementations of Natural Logic (NLog) have shown that NLog provides a quite direct means of going from sentences in ordinary language to many of the obvious entailments of those sentences. We show here that Episodic Logic (EL) and its Epilog implementation are well-adapted to capturing NLog-like inferences, but beyond that, also support inferences that require a combination of lexical knowledge and world knowledge. However, broad language understanding and commonsense reasoning are still thwarted by the {\textquotedblleft}knowledge acquisition bottleneck{\textquotedblright}, and we summarize some of our ongoing and contemplated attacks on that persistent difficulty. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,639 |
article | toledo-etal-2014-towards | Towards a Semantic Model for Textual Entailment Annotation | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.10/ | Toledo, Assaf and Alexandropoulou, Stavroula and Chesney, Sophie and Katrenko, Sophia and Klockmann, Heidi and Kokke, Pepijn and Kruit, Benno and Winter, Yoad | null | null | We introduce a new formal semantic model for annotating textual entailments that describes restrictive, intersective, and appositive modification. The model contains a formally defined interpreted lexicon, which specifies the inventory of symbols and the supported semantic operators, and an informally defined annotation scheme that instructs annotators in which way to bind words and constructions from a given pair of premise and hypothesis to the interpreted lexicon. We explore the applicability of the proposed model to the Recognizing Textual Entailment (RTE) 1{--}4 corpora and describe a first-stage annotation scheme on which we based the manual annotation work. The constructions we annotated were found to occur in 80.65{\%} of the entailments in RTE 1{--}4 and were annotated with cross-annotator agreement of 68{\%} on average. The annotated parts of the RTE corpora are publicly available for further research. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,640
article | djalali-2014-synthetic | Synthetic Logic | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-9.11/ | Djalali, Alex J. | null | null | The role of inference as it relates to natural language (NL) semantics has often been neglected. Recently, there has been a move away by some NL semanticists from the heavy machinery of, say, Montagovian-style semantics to a more proof-based approach. Although researchers tend to study each type of system independently, MacCartney (2009) and MacCartney and Manning (2009) (henceforth M{\&}M) recently developed an algorithmic approach to natural logic that attempts to combine insights from both monotonicity calculi and various syllogistic fragments to derive compositionally the relation between two NL sentences from the relations of their parts. At the heart of their system, M{\&}M begin with seven intuitive lexical-semantic relations that NL expressions can stand in, e.g., synonymy and antonymy, and then ask the question: if φ stands in some lexical-semantic relation to ψ; and ψ stands in (a possibly different) lexical-semantic relation to θ; what lexical-semantic relation (if any) can be concluded about the relation between φ and θ? This type of reasoning has the familiar shape of a logical inference rule. However, the logical properties of their join table have not been explored in any real detail. The purpose of this paper is to give M{\&}M`s table a proper logical treatment. As I will show, the table has the underlying form of a syllogistic fragment and relies on a sort of generalized transitive reasoning. | Linguistic Issues in Language Technology | 9 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,641
article | mcshane-etal-2014-nominal | Nominal Compound Interpretation by Intelligent Agents | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-10.1/ | McShane, Marjorie and Beale, Stephen and Babkin, Petr | null | null | This paper presents a cognitively-inspired algorithm for the semantic analysis of nominal compounds by intelligent agents. The agents, modeled within the OntoAgent environment, are tasked to compute a full context-sensitive semantic interpretation of each compound using a battery of engines that rely on a high-quality computational lexicon and ontology. Rather than being treated as an isolated {\textquotedblleft}task{\textquotedblright}, as in many NLP approaches, nominal compound analysis in OntoAgent represents a minimal extension to the core process of semantic analysis. We hypothesize that seeking similarities across language analysis tasks reflects the spirit of how people approach language interpretation, and that this approach will make feasible the long-term development of truly sophisticated, human-like intelligent agents. The initial evaluation of our approach to nominal compounds are fixed expressions, requiring individual semantic specification at the lexical level. | Linguistic Issues in Language Technology | 10 | null | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,642 |
article | rayner-etal-2014-call | {CALL}-{SLT}: A Spoken {CALL} System Based on Grammar and Speech Recognition | null | null | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-10.2/ | Rayner, Manny and Tsourakis, Nikos and Baur, Claudia and Bouillon, Pierrette and Gerlach, Johanna | null | null | We describe CALL-SLT, a speech-enabled Computer-Assisted Language Learning application where the central idea is to prompt the student with an abstract representation of what they are supposed to say, and then use a combination of grammar-based speech recognition and rule-based translation to rate their response. The system has been developed to the level of a mature prototype, freely deployed on the web, with versions for several languages. We present an overview of the core system architecture and the various types of content we have developed. Finally, we describe several evaluations, the last of which is a study carried out over about a week using 130 subjects recruited through the Amazon Mechanical Turk, in which CALL-SLT was contrasted against a control version where the speech recognition component was disabled. The improvement in student learning performance between the two groups was significant at p {\ensuremath{<}} 0.02. | Linguistic Issues in Language Technology | 10 | null | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,643
article | kapatsinski-2014-grammar | What is grammar like? A usage-based constructionist perspective | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.2/ | Kapatsinski, Vsevolod | null | null | This paper is intended to elucidate some implications of usage-based linguistic theory for statistical and computational models of language acquisition, focusing on morphology and morphophonology. I discuss the need for grammar (a.k.a. abstraction), the contents of individual grammars (a potentially infinite number of constructions, paradigmatic mappings and predictive relationships between phonological units), the computational characteristics of constructions (complex non-crossover interactions among partially redundant features), resolution of competition among constructions (probability matching), and the need for multimodel inference in modeling internal grammars underlying the linguistic performance of a community. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,645 |
article | ehret-2014-kolmogorov | Kolmogorov complexity of morphs and constructions in {E}nglish | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.3/ | Ehret, Katharina | null | null | This chapter demonstrates how compression algorithms can be used to address morphological and syntactic complexity in detail by analysing the contribution of specific linguistic features to English texts. The point of departure is the ongoing complexity debate and quest for complexity metrics. After decades of adhering to the equal complexity axiom, recent research seeks to define and measure linguistic complexity (Dahl 2004; Kortmann and Szmrecsanyi 2012; Miestamo et al. 2008). Against this backdrop, I present a new flavour of the Juola-style compression technique (Juola 1998), targeted manipulation. Essentially, compression algorithms are used to measure linguistic complexity via the relative informativeness in text samples. Thus, I assess the contribution of morphs such as {--}ing or {--}ed, and functional constructions such as progressive (be + verb-ing) or perfect (have + verb past participle) to the syntactic and morphological complexity in a mixed-genre corpus of Alice`s Adventures in Wonderland, the Gospel of Mark and newspaper texts. I find that a higher number of marker types leads to higher amounts of morphological complexity in the corpus. Syntactic complexity is reduced because the presence of morphological markers enhances the algorithmic prediction of linguistic patterns. To conclude, I show that information-theoretic methods yield linguistically meaningful results and can be used to measure the complexity of specific linguistic features in naturalistic corpora. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,646
article | stump-2014-polyfunctionality | Polyfunctionality and inflectional economy | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.4/ | Stump, Gregory | null | null | One compelling kind of evidence for the autonomy of a language`s morphology is the incidence of inflectional polyfunctionality, the systematic use of the same morphology to express distinct but related morphosyntactic content. Polyfunctionality is more complex than mere homophony. It can, in fact, arise in a number of ways: as an effect of rule invitation (wherein the same rule of exponence serves more than one function by interacting with other rules in more than one way), as an expression of morphosyntactic referral, as the effect of a rule of exponence realizing either a disjunction of property sets or a morphomic property set, or as the reflection of a morphosyntactic property set`s cross-categorial versatility. I distinguish these different sources of polyfunctionality in a formally precise way. It is inaccurate to see polyfunctionality as an ambiguating source of grammatical complexity; on the contrary, by enhancing the predictability of a language`s morphology, it may well enhance both the memorability of complex inflected forms and the ease with which they are processed. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,647
article | milizia-2014-semi | Semi-separate exponence in cumulative paradigms. Information-theoretic properties exemplified by {A}ncient {G}reek verb endings | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.5/ | Milizia, Paolo | null | null | By using the system of Ancient Greek verb endings as a case study, this paper deals with the cross-linguistically recurrent appearance of inflectional paradigms that, though generally characterized by cumulative exponence, contain segmentable {\textquotedblleft}semi-separate{\textquotedblright} endings in correspondence with low-frequency cells. Such an exponence system has information-theoretic properties which may be relevant from the point of view of morphological theory. In particular, both the phenomena of semi-separate exponence and the instances of syncretism that conform to the Br{\o}ndalian Principle of Compensation may be viewed as different manifestations of a same cross-linguistic tendency not to let a paradigm`s exponent set be too distant from the situation of equiprobability. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,648 |
article | hathout-namer-2014-demonette | D{\'e}monette, a {F}rench derivational morpho-semantic network | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.6/ | Hathout, Nabil and Namer, Fiammetta | null | null | D{\'e}monette is a derivational morphological network created from information provided by two existing lexical resources, D{\'e}riF and Morphonette. It features a formal architecture in which words are associated with semantic types and where morphological relations, labelled with concrete and abstract bi-oriented definitions, connect derived words with their base and indirectly related words with each other. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 5 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,649 |
article | lefer-grabar-2014-evaluative | Evaluative prefixes in translation: From automatic alignment to semantic categorization | null | dec | 2014 | null | CSLI Publications | https://aclanthology.org/2014.lilt-11.7/ | Lefer, Marie-Aude and Grabar, Natalia | null | null | This article aims to assess to what extent translation can shed light on the semantics of French evaluative prefixation by adopting No{\"e}l (2003)`s {\textquoteleft}translations as evidence for semantics{\textquoteright} approach. In French, evaluative prefixes can be classified along two dimensions (cf. (Fradin and Montermini 2009)): (1) a quantity dimension along a maximum/minimum axis and the semantic values big and small, and (2) a quality dimension along a positive/negative axis and the values good (excess; higher degree) and bad (lack; lower degree). In order to provide corpus-based insights into this semantic categorization, we analyze French evaluative prefixes alongside their English translation equivalents in a parallel corpus. To do so, we focus on periphrastic translations, as they are likely to {\textquoteleft}spell out{\textquoteright} the meaning of the French prefixes. The data used were extracted from the Europarl parallel corpus (Koehn 2005; Cartoni and Meyer 2012). Using a tailor-made program, we first aligned the French prefixed words with the corresponding word(s) in English target sentences, before proceeding to the evaluation of the aligned sequences and the manual analysis of the bilingual data. Results confirm that translation data can be used as evidence for semantics in morphological research and help refine existing semantic descriptions of evaluative prefixes. | Linguistic Issues in Language Technology | 11 | null | null | null | null | null | null | null | 6 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,650
inproceedings | cettolo-etal-2014-report | Report on the 11th {IWSLT} evaluation campaign | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.1/ | Cettolo, Mauro and Niehues, Jan and St{\"u}ker, Sebastian and Bentivogli, Luisa and Federico, Marcello | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 2--17 | The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation tracks, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,653
inproceedings | babaali-etal-2014-fbk | {FBK} @ {IWSLT} 2014 {--} {ASR} track | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.2/ | BabaAli, B. and Serizel, R. and Jalalvand, S. and Gretter, R. and Giuliani, D. | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 18--25 | This paper reports on the participation of FBK in the IWSLT 2014 evaluation campaign for Automatic Speech Recognition (ASR), which focused on the transcription of TED talks. The outputs of primary and contrastive systems were submitted for three languages, namely English, German and Italian. Most effort went into the development of the English transcription system. The primary system is based on the ROVER combination of the output of 5 transcription subsystems which are all based on the Deep Neural Network Hidden Markov Model (DNN-HMM) hybrid. Before combination, word lattices generated by each sub-system are rescored using an efficient interpolation of 4-gram and Recurrent Neural Network (RNN) language models. The primary system achieves a Word Error Rate (WER) of 14.7{\%} and 11.4{\%} on the 2013 and 2014 official IWSLT English test sets, respectively. The subspace Gaussian mixture model (SGMM) system developed for German achieves 39.5{\%} WER on the 2014 IWSLT German test set. For Italian, the primary transcription system was based on hidden Markov models and achieves 23.8{\%} WER on the 2014 IWSLT Italian test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,654
inproceedings | bell-etal-2014-uedin | The {UEDIN} {ASR} systems for the {IWSLT} 2014 evaluation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.3/ | Bell, Peter and Swietojanski, Pawel and Driesen, Joris and Sinclair, Mark and McInnes, Fergus and Renals, Steve | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 26--33 | This paper describes the University of Edinburgh (UEDIN) ASR systems for the 2014 IWSLT Evaluation. Notable features of the English system include deep neural network acoustic models in both tandem and hybrid configuration with the use of multi-level adaptive networks, LHUC adaptation and Maxout units. The German system includes lightly supervised training and a new method for dictionary generation. Our voice activity detection system now uses a semi-Markov model to incorporate a prior on utterance lengths. There are improvements of up to 30{\%} relative WER on the tst2013 English test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,655
inproceedings | beloucif-etal-2014-improving | Improving {MEANT} based semantically tuned {SMT} | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.4/ | Beloucif, Meriem and Lo, Chi-kiu and Wu, Dekai | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 34--41 | We discuss various improvements to our MEANT tuned system, previously presented at IWSLT 2013. In our 2014 system, we incorporate this year's improved version of MEANT, improved Chinese word segmentation, Chinese named entity recognition and dedicated proper name translation, and number expression handling. This results in a significant performance jump compared to last year's system. We also ran preliminary experiments on tuning to IMEANT, our new ITG based variant of MEANT. The performance of tuning to IMEANT is comparable to tuning on MEANT (differences are statistically insignificant). We are presently investigating if tuning on IMEANT can produce even better results, since IMEANT was actually shown to correlate with human adequacy judgment more closely than MEANT. Finally, we ran experiments applying our new architectural improvements to a contrastive system tuned to BLEU. We observed a slightly higher jump in comparison to last year, possibly due to mismatches of MEANT's similarity models to our new entity handling. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,656
inproceedings | bertoldi-etal-2014-fbks | {FBK}'s machine translation and speech translation systems for the {IWSLT} 2014 evaluation campaign | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.5/ | Bertoldi, Nicola and Mathur, Prashanu and Ruiz, Nicolas and Federico, Marcello | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 42--48 | This paper describes the systems submitted by FBK for the MT and SLT tracks of IWSLT 2014. We participated in the English-French and German-English machine translation tasks, as well as the English-French speech translation task. We report improvements in our English-French MT systems over last year's baselines, largely due to improved techniques of combining translation and language models, and using huge language models. For our German-English system, we experimented with a novel domain adaptation technique. For both language pairs we also applied a novel word trigger-based model which shows slight improvements on English-French and German-English systems. Our English-French SLT system utilizes MT-based punctuation insertion, recasing, and ASR-like synthesized MT training data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,657
inproceedings | birch-etal-2014-edinburgh | {E}dinburgh {SLT} and {MT} system description for the {IWSLT} 2014 evaluation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.6/ | Birch, Alexandra and Huck, Matthias and Durrani, Nadir and Bogoychev, Nikolay and Koehn, Philipp | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 49--56 | This paper describes the University of Edinburgh's spoken language translation (SLT) and machine translation (MT) systems for the IWSLT 2014 evaluation campaign. In the SLT track, we participated in the German{\ensuremath{\leftrightarrow}}English and English{\textrightarrow}French tasks. In the MT track, we participated in the German{\ensuremath{\leftrightarrow}}English, English{\textrightarrow}French, Arabic{\ensuremath{\leftrightarrow}}English, Farsi{\textrightarrow}English, Hebrew{\textrightarrow}English, Spanish{\ensuremath{\leftrightarrow}}English, and Portuguese-Brazil{\ensuremath{\leftrightarrow}}English tasks. For our SLT submissions, we experimented with comparing operation sequence models with bilingual neural network language models. For our MT submissions, we explored using unsupervised transliteration for languages which have a different script than English, in particular for Arabic, Farsi, and Hebrew. We also investigated syntax-based translation and system combination. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,658
inproceedings | freitag-etal-2014-combined | Combined spoken language translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.7/ | Freitag, Markus and Wuebker, Joern and Peitz, Stephan and Ney, Hermann and Huck, Matthias and Birch, Alexandra and Durrani, Nadir and Koehn, Philipp and Mediani, Mohammed and Slawik, Isabel and Niehues, Jan and Cho, Eunah and Waibel, Alex and Bertoldi, Nicola and Cettolo, Mauro and Federico, Marcello | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 57--64 | EU-BRIDGE is a European research project which is aimed at developing innovative speech translation technology. One of the collaborative efforts within EU-BRIDGE is to produce joint submissions of up to four different partners to the evaluation campaign at the 2014 International Workshop on Spoken Language Translation (IWSLT). We submitted combined translations to the German{\textrightarrow}English spoken language translation (SLT) track as well as to the German{\textrightarrow}English, English{\textrightarrow}German and English{\textrightarrow}French machine translation (MT) tracks. In this paper, we present the techniques which were applied by the different individual translation systems of RWTH Aachen University, the University of Edinburgh, Karlsruhe Institute of Technology, and Fondazione Bruno Kessler. We then show the combination approach developed at RWTH Aachen University which combined the individual systems. The consensus translations yield empirical gains of up to 2.3 points in BLEU and 1.2 points in TER compared to the best individual system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,659
inproceedings | kazi-etal-2014-mitll | The {MITLL}-{AFRL} {IWSLT} 2014 {MT} system | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.8/ | Kazi, Michaeel and Salesky, Elizabeth and Thompson, Brian and Ray, Jessica and Coury, Michael and Anderson, Tim and Erdmann, Grant and Gwinnup, Jeremy and Young, Katherine and Ore, Brian and Hutt, Michael | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 65--72 | This report summarizes the MITLL-AFRL MT and ASR systems and the experiments run using them during the 2014 IWSLT evaluation campaign. Our MT system is much improved over last year, owing to integration of techniques such as PRO and DREM optimization, factored language models, neural network joint model rescoring, multiple phrase tables, and development set creation. We focused our efforts this year on the tasks of translating from Arabic, Russian, Chinese, and Farsi into English, as well as translating from English to French. ASR performance also improved, partly due to increased efforts with deep neural networks for hybrid and tandem systems. Work focused on both the English and Italian ASR tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,660
inproceedings | kilgour-etal-2014-2014 | The 2014 {KIT} {IWSLT} speech-to-text systems for {E}nglish, {G}erman and {I}talian | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.9/ | Kilgour, Kevin and Heck, Michael and M{\"u}ller, Markus and Sperber, Matthias and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 73--79 | This paper describes our German, Italian and English Speech-to-Text (STT) systems for the 2014 IWSLT TED ASR track. Our setup uses ROVER and confusion network combination from various subsystems to achieve a good overall performance. The individual subsystems are built by using different front-ends (e.g., MVDR-MFCC or lMel), acoustic models (GMM or modular DNN) and phone sets and by training on various subsets of the training data. Decoding is performed in two stages, where the GMM systems are adapted in an unsupervised manner on the combination of the first stage outputs using VTLN, MLLR, and cMLLR. The combination setup produces a final hypothesis that has a significantly lower WER than any of the individual subsystems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,661
inproceedings | morchid-etal-2014-topic | A topic-based approach for post-processing correction of automatic translations | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.10/ | Morchid, Mohamed and Huet, St{\'e}phane and Dufour, Richard | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 80--85 | We present the LIA systems for the machine translation evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2014 for the English-to-Slovene and English-to-Polish translation tasks. The proposed approach takes into account word context; first, it maps sentences into a latent Dirichlet allocation (LDA) topic space, then it chooses from this space words that are thematically and grammatically close to mistranslated words. This original post-processing approach is compared with a factored translation system built with MOSES. While this post-processing method does not allow us to achieve better results than a state-of-the-art system, this should be an interesting way to explore, for example by adding this topic space information at an early stage in the translation process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,662
inproceedings | ng-etal-2014-usfd | The {USFD} {SLT} system for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.11/ | Ng, Raymond W. M. and Doulaty, Mortaza and Doddipatla, Rama and Aziz, Wilker and Shah, Kashif and Saz, Oscar and Hasan, Madina and AlHaribi, Ghada and Specia, Lucia and Hain, Thomas | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 86--91 | The University of Sheffield (USFD) participated in the International Workshop for Spoken Language Translation (IWSLT) in 2014. In this paper, we will introduce the USFD SLT system for IWSLT. Automatic speech recognition (ASR) is achieved by two multi-pass deep neural network systems with adaptation and rescoring techniques. Machine translation (MT) is achieved by a phrase-based system. The USFD primary system incorporates state-of-the-art ASR and MT techniques and gives a BLEU score of 23.45 and 14.75 on the English-to-French and English-to-German speech-to-text translation task with the IWSLT 2014 data. The USFD contrastive systems explore the integration of ASR and MT by using a quality estimation system to rescore the ASR outputs, optimising towards better translation. This gives a further 0.54 and 0.26 BLEU improvement respectively on the IWSLT 2012 and 2014 evaluation data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,663
inproceedings | nguyen-etal-2014-speech | The speech recognition systems of {IOIT} for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.12/ | Nguyen, Quoc Bao and Vu, Tat Thang and Luong, Chi Mai | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 92--95 | This paper describes the speech recognition systems of IOIT for the IWSLT 2014 TED ASR track. This year, we focus on improving the acoustic models for the systems using two main deep neural network approaches, namely hybrid and bottleneck feature systems. These two subsystems are combined using lattice Minimum Bayes-Risk decoding. On the 2013 evaluation set, which serves as a progress test set, we were able to reduce the word error rate of our transcription systems from 27.2{\%} to 24.0{\%}, a relative reduction of 11.7{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,664
inproceedings | romdhane-etal-2014-phrase | Phrase-based language modelling for statistical machine translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.13/ | Romdhane, Achraf Ben and Jamoussi, Salma and Hamadou, Abdelmajid Ben and Sma{\"i}li, Kamel | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 96--99 | In this paper, we present our submitted MT system for the IWSLT 2014 Evaluation Campaign. We participated in the English-French translation task. In this article we focus on one of the most important components of SMT: the language model. The idea is to use a phrase-based language model. For that, sequences from the source and the target language models are retrieved and used to calculate a phrase n-gram language model. These phrases are used to rewrite the parallel corpus which is then used to calculate a new translation model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,665
inproceedings | rousseau-etal-2014-lium | {LIUM} {E}nglish-to-{F}rench spoken language translation system and the Vecsys/{LIUM} automatic speech recognition system for {I}talian language for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.14/ | Rousseau, Anthony and Barrault, Lo{\"i}c and Del{\'e}glise, Paul and Est{\`e}ve, Yannick and Schwenk, Holger and Bennacef, Samir and Muscariello, Armando and Vanni, Stephan | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 100--105 | This paper describes the Spoken Language Translation system developed by the LIUM for the IWSLT 2014 evaluation campaign. We participated in two of the proposed tasks: (i) the Automatic Speech Recognition task (ASR) in two languages, Italian with the Vecsys company, and English alone, (ii) the English to French Spoken Language Translation task (SLT). We present the approaches and specificities found in our systems, as well as the results from the evaluation campaign. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,666
inproceedings | segal-etal-2014-limsi | {LIMSI} {E}nglish-{F}rench speech translation system | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.15/ | Segal, Natalia and Bonneau-Maynard, H{\'e}l{\`e}ne and Do, Quoc Khanh and Allauzen, Alexandre and Gauvain, Jean-Luc and Lamel, Lori and Yvon, Fran{\c{c}}ois | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 106--112 | This paper documents the systems developed by LIMSI for the IWSLT 2014 speech translation task (English{\textrightarrow}French). The main objective of this participation was twofold: adapting different components of the ASR baseline system to the peculiarities of TED talks and improving the machine translation quality on the automatic speech recognition output data. For the latter task, various techniques have been considered: punctuation and number normalization, adaptation to ASR errors, as well as the use of structured output layer neural network models for speech data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,667
inproceedings | shen-etal-2014-nct | The {NCT} {ASR} system for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.16/ | Shen, Peng and Lu, Yugang and Hu, Xinhui and Kanda, Naoyuki and Saiko, Masahiro and Hori, Chiori | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 113--118 | This paper describes our automatic speech recognition system for the IWSLT 2014 evaluation campaign. The system is based on weighted finite-state transducers and a combination of multiple subsystems which consists of four types of acoustic feature sets, four types of acoustic models, and N-gram and recurrent neural network language models. Compared with the system we used last year, we added additional subsystems based on deep neural network modeling on filter bank features and convolutional deep neural network modeling on filter bank features with tonal features. In addition, modifications and improvements on automatic acoustic segmentation and deep neural network speaker adaptation were applied. In speech recognition experiments, our new system achieved a 21.5{\%} relative improvement in word error rate over last year's system on the 2013 English test data set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,668
inproceedings | slawik-etal-2014-kit | The {KIT} translation systems for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.17/ | Slawik, Isabel and Mediani, Mohammed and Niehues, Jan and Zhang, Yuqi and Cho, Eunah and Herrmann, Teresa and Ha, Thanh-Le and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 119--126 | In this paper, we present the KIT systems participating in the TED translation tasks of the IWSLT 2014 machine translation evaluation. We submitted phrase-based translation systems for all three official directions, namely English{\textrightarrow}German, German{\textrightarrow}English, and English{\textrightarrow}French, as well as for the optional directions English{\textrightarrow}Chinese and English{\textrightarrow}Arabic. For the official directions we built systems both for the machine translation as well as the spoken language translation track. This year we improved our systems' performance over last year through n-best list rescoring using neural network-based translation and language models and novel preordering rules based on tree information of multiple syntactic levels. Furthermore, we could successfully apply a novel phrase extraction algorithm and transliteration of unknown words for Arabic. We also submitted a contrastive system for German{\textrightarrow}English built with stemmed German adjectives. For the SLT tracks, we used a monolingual translation system to translate the lowercased ASR hypotheses with all punctuation stripped to truecased, punctuated output as a preprocessing step to our usual translation system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,669
inproceedings | sudoh-etal-2014-ntt | {NTT}-{NAIST} syntax-based {SMT} systems for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.18/ | Sudoh, Katsuhito and Neubig, Graham and Duh, Kevin and Hayashi, Katsuhiko | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 127--133 | This paper presents NTT-NAIST SMT systems for English-German and German-English MT tasks of the IWSLT 2014 evaluation campaign. The systems are based on generalized minimum Bayes risk system combination of three SMT systems using the forest-to-string, syntactic preordering, and phrase-based translation formalisms. Individual systems employ training data selection for domain adaptation, truecasing, compound word splitting (for German-English), interpolated n-gram language models, and hypotheses rescoring using recurrent neural network language models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,670
inproceedings | wang-etal-2014-nict | The {NICT} translation system for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.20/ | Wang, Xiaolin and Finch, Andrew and Utiyama, Masao and Watanabe, Taro and Sumita, Eiichiro | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 139--142 | This paper describes NICT's participation in the IWSLT 2014 evaluation campaign for the TED Chinese-English translation shared-task. Our approach used a combination of phrase-based and hierarchical statistical machine translation (SMT) systems. Our focus was on several areas, specifically system combination, word alignment, and various language modeling techniques including the use of neural network joint models. Our experiments on the test set from the 2013 shared task showed that an improvement in BLEU score can be gained in translation performance through all of these techniques, with the largest improvements coming from using large data sizes to train the language model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,672
inproceedings | wolk-marasek-2014-polish | {P}olish-{E}nglish speech statistical machine translation systems for the {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.21/ | Wolk, Krzysztof and Marasek, Krzysztof | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 143--149 | This research explores effects of various training settings between Polish and English Statistical Machine Translation systems for spoken language. Various elements of the TED parallel text corpora for the IWSLT 2014 evaluation campaign were used as the basis for training of language models, and for development, tuning and testing of the translation system, as well as Wikipedia-based comparable corpora prepared by us. The BLEU, NIST, METEOR and TER metrics were used to evaluate the effects of data preparations on translation results. Our experiments included systems which use lemma and morphological information on Polish words. We also conducted a deep analysis of provided Polish data as preparatory work for the automatic data correction and cleaning phase. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,673
inproceedings | wuebker-etal-2014-rwth | The {RWTH} {A}achen machine translation systems for {IWSLT} 2014 | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-evaluation.22/ | Wuebker, Joern and Peitz, Stephan and Guta, Andreas and Ney, Hermann | Proceedings of the 11th International Workshop on Spoken Language Translation: Evaluation Campaign | 150--154 | This work describes the statistical machine translation (SMT) systems of RWTH Aachen University developed for the evaluation campaign International Workshop on Spoken Language Translation (IWSLT) 2014. We participated in both the MT and SLT tracks for the English{\textrightarrow}French and German{\textrightarrow}English language pairs and applied the identical training pipeline and models on both language pairs. Our state-of-the-art phrase-based baseline systems are augmented with maximum expected BLEU training for phrasal, lexical and reordering models. Further, we apply rescoring with novel recurrent neural language and translation models. The same systems are used for the SLT track, where we additionally perform punctuation prediction on the automatic transcriptions employing hierarchical phrase-based translation. We are able to improve RWTH's 2013 evaluation systems by 1.7-1.8{\%} BLEU absolute. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,674
inproceedings | ali-etal-2014-advances | Advances in dialectal {A}rabic speech recognition: a study using {T}witter to improve {E}gyptian {ASR} | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.1/ | Ali, Ahmed and Mubarak, Hamdy and Vogel, Stephan | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 156--162 | This paper reports results in building an Egyptian Arabic speech recognition system as an example for under-resourced languages. We investigated different approaches to build the system using 10 hours for training the acoustic model, and report results for both a grapheme system and a phoneme system using MADA. The phoneme-based system shows better results than the grapheme-based system. In this paper, we explore the use of tweets written in dialectal Arabic. Using 880K Egyptian tweets reduced the Out Of Vocabulary (OOV) rate from 15.1{\%} to 3.2{\%} and the WER from 59.6{\%} to 44.7{\%}, a relative gain of 25{\%} in WER. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,675
inproceedings | baumann-etal-2014-towards | Towards simultaneous interpreting: the timing of incremental machine translation and speech synthesis | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.2/ | Baumann, Timo and Bangalore, Srinivas and Hirschberg, Julia | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 163--168 | In simultaneous interpreting, human experts incrementally construct and extend partial hypotheses about the source speaker`s message, and start to verbalize a corresponding message in the target language, based on a partial translation {--} which may have to be corrected occasionally. They commence the target utterance in the hope that they will be able to finish understanding the source speaker`s message and determine its translation in time for the unfolding delivery. Of course, both incremental understanding and translation by humans can be garden-pathed, although experts are able to optimize their delivery so as to balance the goals of minimal latency, translation quality and high speech fluency with few corrections. We investigate the temporal properties of both translation input and output to evaluate the tradeoff between low latency and translation quality. In addition, we estimate the improvements that can be gained with a tempo-elastic speech synthesizer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,676
inproceedings | besacier-etal-2014-word | Word confidence estimation for speech translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.3/ | Besacier, L. and Lecouteux, B. and Luong, N. Q. and Hour, K. and Hadjsalah, M. | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 169--175 | Word Confidence Estimation (WCE) for machine translation (MT) or automatic speech recognition (ASR) consists in judging each word in the (MT or ASR) hypothesis as correct or incorrect by tagging it with an appropriate label. In the past, this task has been treated separately in ASR or MT contexts and we propose here a joint estimation of word confidence for a spoken language translation (SLT) task involving both ASR and MT. This research work is possible because we built a specific corpus which is first presented. This corpus contains 2643 speech utterances for which a quintuplet containing: ASR output (src-asr), verbatim transcript (src-ref), text translation output (tgt-mt), speech translation output (tgt-slt) and post-edition of translation (tgt-pe), is made available. The rest of the paper illustrates how such a corpus (made available to the research community) can be used for evaluating word confidence estimators in ASR, MT or SLT scenarios. WCE for SLT could help rescoring SLT output graphs, improving translators productivity (for translation of lectures or movie subtitling) or it could be useful in interactive speech-to-speech translation scenarios. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,677
inproceedings | cho-etal-2014-machine | Machine translation of multi-party meetings: segmentation and disfluency removal strategies | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.4/ | Cho, Eunah and Niehues, Jan and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 176--183 | Translating meetings presents a challenge since multi-speaker speech shows a variety of disfluencies. In this paper we investigate the importance of transforming speech into well-written input prior to translating multi-party meetings. We first analyze the characteristics of this data and establish oracle scores. Sentence segmentation and punctuation are performed using a language model, turn information, or a monolingual translation system. Disfluencies are removed by a CRF model trained on in-domain and out-of-domain data. For comparison, we build a combined CRF model for punctuation insertion and disfluency removal. By applying these models, multi-party meetings are transformed into fluent input for machine translation. We evaluate the models with regard to translation performance and are able to achieve an improvement of 2.1 to 4.9 BLEU points depending on the availability of turn information. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,678
inproceedings | ding-etal-2014-empircal | Empircal dependency-based head finalization for statistical {C}hinese-, {E}nglish-, and {F}rench-to-{M}yanmar ({B}urmese) machine translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.5/ | Ding, Chenchen and Thu, Ye Kyaw and Utiyama, Masao and Finch, Andrew and Sumita, Eiichiro | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 184--191 | We conduct dependency-based head finalization for statistical machine translation (SMT) for Myanmar (Burmese). Although Myanmar is an understudied language, linguistically it is a head-final language with similar syntax to Japanese and Korean. So, applying the efficient techniques of Japanese and Korean processing to Myanmar is a natural idea. Our approach is a combination of two approaches. The first is a head-driven phrase structure grammar (HPSG) based head finalization for English-to-Japanese translation, the second is dependency-based pre-ordering originally designed for English-to-Korean translation. We experiment on Chinese-, English-, and French-to-Myanmar translation, using a statistical pre-ordering approach as a comparison method. Experimental results show the dependency-based head finalization was able to consistently improve a baseline SMT system, for different source languages and different segmentation schemes for the Myanmar language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,679
inproceedings | do-etal-2014-discriminative | Discriminative adaptation of continuous space translation models | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.6/ | Do, Quoc-Khanh and Allauzen, Alexandre and Yvon, Fran{\c{c}}ois | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 192--199 | In this paper we explore various adaptation techniques for continuous space translation models (CSTMs). We consider the following practical situation: given a large scale, state-of-the-art SMT system containing a CSTM, the task is to adapt the CSTM to a new domain using a (relatively) small in-domain parallel corpus. Our method relies on the definition of a new discriminative loss function for the CSTM that borrows from both the max-margin and pair-wise ranking approaches. In our experiments, the baseline out-of-domain SMT system is initially trained for the WMT News translation task, and the CSTM is to be adapted to the lecture translation task as defined by IWSLT evaluation campaign. Experimental results show that an improvement of 1.5 BLEU points can be achieved with the proposed adaptation method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,680
inproceedings | eck-etal-2014-extracting | Extracting translation pairs from social network content | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.7/ | Eck, Matthias and Zemlyanskiy, Yuri and Zhang, Joy and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 200--205 | We introduce two methods to collect additional training data for statistical machine translation systems from public social network content. The first method identifies multilingual content where the author self-translated their own post to reach additional friends, fans or customers. Once identified, we can split the post in the language segments and extract translation pairs from this content. The second methods considers web links (URLs) that users add as part of their post to point the reader to a video, article or website. If the same URL is shared from different language users, there is a chance they might give the same comment in their respective language. We use a support vector machine (SVM) as a classifier to identify true translations from all candidate pairs. We collected additional translation pairs using both methods for the language pairs Spanish-English and Portuguese-English. Testing the collected data as additional training data for statistical machine translations on in-domain test sets resulted in very significant improvements of up to 5 BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,681
inproceedings | finch-etal-2014-exploration | An exploration of segmentation strategies in stream decoding | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.8/ | Finch, Andrew and Wang, Xiaolin and Sumita, Eiichiro | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 206--213 | In this paper we explore segmentation strategies for the stream decoder, a method for decoding from a continuous stream of input tokens, rather than the traditional method of decoding from sentence segmented text. The behavior of the decoder is analyzed and modifications to the decoding algorithm are proposed to improve its performance. The experimental results show our proposed decoding strategies to be effective, and add support to the original findings that this approach is capable of approaching the performance of the underlying phrase-based machine translation decoder, at useful levels of latency. Our experiments evaluated the stream decoder on a broader set of language pairs than in previous work. We found most European language pairs were similar in character, and report results on English-Chinese and English-German pairs which are of interest due to the reordering required. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,682
inproceedings | gong-etal-2014-incremental | Incremental development of statistical machine translation systems | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.9/ | Gong, Li and Max, Aur{\'e}lien and Yvon, Fran{\c{c}}ois | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 214--222 | Statistical Machine Translation produces results that make it a competitive option in most machine-assisted translation scenarios. However, these good results often come at a very high computational cost and correspond to training regimes which are unfit to many practical contexts, where the ability to adapt to users and domains and to continuously integrate new data (eg. in post-edition contexts) are of primary importance. In this article, we show how these requirements can be met using a strategy for on-demand word alignment and model estimation. Most remarkably, our incremental system development framework is shown to deliver top quality translation performance even in the absence of tuning, and to surpass a strong baseline when performing online tuning. All these results obtained with great computational savings as compared to conventional systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,683
inproceedings | ha-etal-2014-lexical | Lexical translation model using a deep neural network architecture | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.10/ | Ha, Thanh-Le and Niehues, Jan and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 223--229 | In this paper we combine the advantages of a model using global source sentence contexts, the Discriminative Word Lexicon, and neural networks. By using deep neural networks instead of the linear maximum entropy model in the Discriminative Word Lexicon models, we are able to leverage dependencies between different source words due to the non-linearity. Furthermore, the models for different target words can share parameters and therefore data sparsity problems are effectively reduced. By using this approach in a state-of-the-art translation system, we can improve the performance by up to 0.5 BLEU points for three different language pairs on the TED translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,684
inproceedings | hewavitharana-etal-2014-anticipatory | Anticipatory translation model adaptation for bilingual conversations | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.11/ | Hewavitharana, Sanjika and Mehay, Dennis and Ananthakrishnan, Sankaranarayanan and Kumar, Rohit and Makhoul, John | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 230--235 | Conversational spoken language translation (CSLT) systems facilitate bilingual conversations in which the two participants speak different languages. Bilingual conversations provide additional contextual information that can be used to improve the underlying machine translation system. In this paper, we describe a novel translation model adaptation method that anticipates a participant`s response in the target language, based on his counterpart`s prior turn in the source language. Our proposed strategy uses the source language utterance to perform cross-language retrieval on a large corpus of bilingual conversations in order to obtain a set of potentially relevant target responses. The responses retrieved are used to bias translation choices towards anticipated responses. On an Iraqi-to-English CSLT task, our method achieves a significant improvement over the baseline system in terms of BLEU, TER and METEOR metrics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,685
inproceedings | karimova-etal-2014-offline | Offline extraction of overlapping phrases for hierarchical phrase-based translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.12/ | Karimova, Sariya and Simianer, Patrick and Riezler, Stefan | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 236--243 | Standard SMT decoders operate by translating disjoint spans of input words, thus discarding information in form of overlapping phrases that is present at phrase extraction time. The use of overlapping phrases in translation may enhance fluency in positions that would otherwise be phrase boundaries, they may provide additional statistical support for long and rare phrases, and they may generate new phrases that have never been seen in the training data. We show how to extract overlapping phrases offline for hierarchical phrasebased SMT, and how to extract features and tune weights for the new phrases. We find gains of 0.3 {\ensuremath{-}} 0.6 BLEU points over discriminatively trained hierarchical phrase-based SMT systems on two datasets for German-to-English translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,686
inproceedings | kumar-etal-2014-translations | Translations of the Callhome {E}gyptian {A}rabic corpus for conversational speech translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.13/ | Kumar, Gaurav and Cao, Yuan and Cotterell, Ryan and Callison-Burch, Chris and Povey, Daniel and Khudanpur, Sanjeev | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 244--248 | Translation of the output of automatic speech recognition (ASR) systems, also known as speech translation, has received a lot of research interest recently. This is especially true for programs such as DARPA BOLT which focus on improving spontaneous human-human conversation across languages. However, this research is hindered by the dearth of datasets developed for this explicit purpose. For Egyptian Arabic-English, in particular, no parallel speechtranscription-translation dataset exists in the same domain. In order to support research in speech translation, we introduce the Callhome Egyptian Arabic-English Speech Translation Corpus. This supplements the existing LDC corpus with four reference translations for each utterance in the transcripts. The result is a three-way parallel dataset of Egyptian Arabic Speech, transcriptions and English translations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,687
inproceedings | mediani-etal-2014-improving | Improving in-domain data selection for small in-domain sets | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.14/ | Mediani, Mohammed and Winebarger, Joshua and Waibel, Alexander | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 249--256 | Finding sufficient in-domain text data for language modeling is a recurrent challenge. Some methods have already been proposed for selecting parts of out-of-domain text data most closely resembling the in-domain data using a small amount of the latter. Including this new {\textquotedblleft}near-domain{\textquotedblright} data in training can potentially lead to better language model performance, while reducing training resources relative to incorporating all data. One popular, state-of-the-art selection process based on cross-entropy scores makes use of in-domain and out-of-domain language models. In order to compensate for the limited availability of the in-domain data required for this method, we introduce enhancements to two of its steps. Firstly, we improve the procedure for drawing the out-of-domain sample data used for selection. Secondly, we use word-associations in order to extend the underlying vocabulary of the sample language models used for scoring. These enhancements are applied to selecting text for language modeling of talks given in a technical subject area. Besides comparing perplexity, we judge the resulting language models by their performance in automatic speech recognition and machine translation tasks. We evaluate our method in different contexts. We show that it yields consistent improvements, up to 2{\%} absolute reduction in word error rate and 0.3 Bleu points. We achieve these improvements even given a much smaller in-domain set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,688
inproceedings | muller-etal-2014-multilingual | Multilingual deep bottle neck features: a study on language selection and training techniques | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.15/ | M{\"u}ller, Markus and St{\"u}ker, Sebastian and Sheikh, Zaid and Metze, Florian and Waibel, Alex | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 257--264 | Previous work has shown that training the neural networks for bottle neck feature extraction in a multilingual way can lead to improvements in word error rate and average term weighted value in a telephone key word search task. In this work we conduct a systematic study on a) which multilingual training strategy to employ, b) the effect of language selection and amount of multilingual training data used and c) how to find a suitable combination for languages. We conducted our experiment on the key word search task and the languages of the IARPA BABEL program. In a first step, we assessed the performance of a single language out of all available languages in combination with the target language. Based on these results, we then combined a multitude of languages. We also examined the influence of the amount of training data per language, as well as different techniques for combining the languages during network training. Our experiments show that data from arbitrary additional languages does not necessarily increase the performance of a system. But when combining a suitable set of languages, a significant gain in performance can be achieved. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,689
inproceedings | neubig-etal-2014-naist | The {NAIST}-{NTT} {TED} talk treebank | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.16/ | Neubig, Graham and Sudoh, Katsuhiro and Oda, Yusuke and Duh, Kevin and Tsukuda, Hajime and Nagata, Masaaki | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 265--270 | Syntactic parsing is a fundamental natural language processing technology that has proven useful in machine translation, language modeling, sentence segmentation, and a number of other applications related to speech translation. However, there is a paucity of manually annotated syntactic parsing resources for speech, and particularly for the lecture speech that is the current target of the IWSLT translation campaign. In this work, we present a new manually annotated treebank of TED talks that we hope will prove useful for investigation into the interaction between syntax and these speech-related applications. The first version of the corpus includes 1,217 sentences and 23,158 words manually annotated with parse trees, and aligned with translations in 26-43 different languages. In this paper we describe the collection of the corpus, and an analysis of its various characteristics. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,690
inproceedings | peitz-etal-2014-better | Better punctuation prediction with hierarchical phrase-based translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.17/ | Peitz, Stephan and Freitag, Markus and Ney, Hermann | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 271--278 | Punctuation prediction is an important task in spoken language translation and can be performed by using a monolingual phrase-based translation system to translate from unpunctuated to text with punctuation. However, a punctuation prediction system based on phrase-based translation is not able to capture long-range dependencies between words and punctuation marks. In this paper, we propose to employ hierarchical translation in place of phrase-based translation and show that this approach is more robust for unseen word sequences. Furthermore, we analyze different optimization criteria for tuning the scaling factors of a monolingual statistical machine translation system. In our experiments, we compare the new approach with other punctuation prediction methods and show improvements in terms of F1-Score and BLEU on the IWSLT 2014 German{\textrightarrow}English and English{\textrightarrow}French translation tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,691
inproceedings | wu-etal-2014-rule | Rule-based preordering on multiple syntactic levels in statistical machine translation | Federico, Marcello and St{\"u}ker, Sebastian and Yvon, Fran{\c{c}}ois | dec # " 4-5" | 2014 | Lake Tahoe, California | null | https://aclanthology.org/2014.iwslt-papers.18/ | Wu, Ge and Zhang, Yuqi and Waibel, Alexander | Proceedings of the 11th International Workshop on Spoken Language Translation: Papers | 279--286 | We propose a novel data-driven rule-based preordering approach, which uses the tree information of multiple syntactic levels. This approach extend the tree-based reordering from one level into multiple levels, which has the capability to process more complicated reordering cases. We have conducted experiments in English-to-Chinese and Chinese-to-English translation directions. Our results show that the approach has led to improved translation quality both when it was applied separately or when it was combined with some other reordering approaches. As our reordering approach was used alone, it showed an improvement of 1.61 in BLEU score in the English-to-Chinese translation direction and an improvement of 2.16 in BLEU score in the Chinese-to-English translation direction, in comparison with the baseline, which used no word reordering. As our preordering approach were combined with the short rule [1], long rule [2] and tree rule [3] based preordering approaches, it showed further improvements of up to 0.43 in BLEU score in the English-to-Chinese translation direction and further improvements of up to 0.3 in BLEU score in the Chinese-to-English translation direction. Through the translations that used our preordering approach, we have also found many translation examples with improved syntactic structures. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,692
inproceedings | derzhanski-dekova-2014-electronic | Electronic Language Resources in Teaching Mathematical Linguistics | null | sep | 2014 | Sofia, Bulgaria | Department of Computational Linguistics, Institute for Bulgarian Language, Bulgarian Academy of Sciences | https://aclanthology.org/2014.clib-1.1/ | Derzhanski, Ivan and Dekova, Rositsa | Proceedings of the First International Conference on Computational Linguistics in Bulgaria (CLIB 2014) | 1--5 | The central role of electronic language resources in education is widely recognised (cf. Brinkley et al, 1999; Bennett, 2010; Derzhanski et al., 2007, among others). The variety and ease of access of such resources predetermines their extensive use in both research and education. With regard to teaching mathematical linguistics, electronic dictionaries and annotated corpora play a particularly important part, being an essential source of information for composing linguistic problems and presenting linguistic knowledge. This paper discusses the need for electronic resources, especially for less studied or low-resource languages, their creation and various uses in teaching linguistics to secondary school students, with examples mostly drawn from our practical work. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,740 |
inproceedings | karagiozov-2014-harnessing | Harnessing Language Technologies in Multilingual Information Channelling Services | null | sep | 2014 | Sofia, Bulgaria | Department of Computational Linguistics, Institute for Bulgarian Language, Bulgarian Academy of Sciences | https://aclanthology.org/2014.clib-1.2/ | Karagiozov, Diman | Proceedings of the First International Conference on Computational Linguistics in Bulgaria (CLIB 2014) | 6--13 | Scientists and industry have put significant efforts in creating suitable tools to analyze information flows. However, up to now there are no successful solutions for 1) dynamic modeling of the user-defined interests and further personalization of the results, 2) effective cross-language information retrieval, and 3) processing of multilingual content. As a consequence, much of the potentially relevant and otherwise accessible data from the media stream may elude users' grasp. We present a multilingual information channeling system, MediaTalk, which offers broad integration between language technologies and advanced data processing algorithms for annotation, analysis and classification of multilingual content. As a result, the system not only provides an all-in-one monitoring service that covers both traditional and social media, but also offers dynamic modeling of user profiles, personalization of obtained data and cross-language information retrieval. Bulgarian and English press clipping services relying on this system implement advanced functionalities such as identification of emerging topics, forecasting and trend prediction, all of which allow the users to monitor their standing reputation, events and relations. The architecture of the system is robust, extensible and adheres to the Big Data paradigm. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,741 |
inproceedings | leseva-etal-2014-automatic | Automatic Semantic Filtering of Morphosemantic Relations in {W}ord{N}et | null | sep | 2014 | Sofia, Bulgaria | Department of Computational Linguistics, Institute for Bulgarian Language, Bulgarian Academy of Sciences | https://aclanthology.org/2014.clib-1.3/ | Leseva, Svetlozara and Stoyanova, Ivelina and Rizov, Borislav and Todorova, Maria and Tarpomanova, Ekaterina | Proceedings of the First International Conference on Computational Linguistics in Bulgaria (CLIB 2014) | 14--22 | In this paper we present a method for automatic assignment of morphosemantic relations between derivationally related verb{--}noun pairs of synsets in the Bulgarian WordNet (BulNet) and for semantic filtering of those relations. The filtering process relies on the meaning of noun suffixes and the semantic compatibility of verb and noun taxonomic classes. We use the taxonomic labels assigned to all the synsets in the Princeton WordNet (PWN) {--} one label per synset {--} which denote their general semantic class. In the first iteration we employ the pairs {\ensuremath{<}}noun suffix : noun label{\ensuremath{>}} to filter out part of the relations. In the second iteration, which uses as input the output of the first one, we apply a stronger semantic filter. It makes use of the taxonomic labels of the noun-verb synset pairs observed for a given morphosemantic relation. In this way we manage to reliably filter out impossible or unlikely combinations. The results of the performed experiment may be applied to enrich BulNet with morphosemantic relations and new synsets semi-automatically, while facilitating the manual work and reducing its cost. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 68,742 |