entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | driesen-etal-2013-description | Description of the {UEDIN} system for {G}erman {ASR} | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.11/ | Driesen, Joris and Bell, Peter and Sinclair, Mark and Renals, Steve | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | In this paper we describe the ASR system for German built at the University of Edinburgh (UEDIN) for the 2013 IWSLT evaluation campaign. For ASR, the major challenge to overcome was to find suitable acoustic training data. Due to the lack of expertly transcribed acoustic speech data for German, acoustic model training had to be performed on publicly available data crawled from the internet. For evaluation, the lack of a manual segmentation into utterances was handled in two different ways: by generating an automatic segmentation, and by treating entire input files as a single segment. Demonstrating that the latter method is superior in the current task, we obtained a WER of 28.16{\%} on the dev set and 36.21{\%} on the test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,597 |
inproceedings | sudoh-etal-2013-ntt | {NTT}-{NAIST} {SMT} systems for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.12/ | Sudoh, Katsuhito and Neubig, Graham and Duh, Kevin and Tsukada, Hajime | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper presents NTT-NAIST SMT systems for English-German and German-English MT tasks of the IWSLT 2013 evaluation campaign. The systems are based on generalized minimum Bayes risk system combination of three SMT systems: forest-to-string, hierarchical phrase-based, phrasebased with pre-ordering. Individual SMT systems include data selection for domain adaptation, rescoring using recurrent neural net language models, interpolated language models, and compound word splitting (only for German-English). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,598 |
inproceedings | kilgour-etal-2013-2013 | The 2013 {KIT} {IWSLT} speech-to-text systems for {G}erman and {E}nglish | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.13/ | Kilgour, Kevin and Mohr, Christian and Heck, Michael and Nguyen, Quoc Bao and Nguyen, Van Huy and Shin, Evgeniy and Tseyzer, Igor and Gehring, Jonas and M{\"u}ller, Markus and Sperber, Matthias and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes our English Speech-to-Text (STT) systems for the 2013 IWSLT TED ASR track. The systems consist of multiple subsystems that are combinations of different front-ends, e.g. MVDR-MFCC based and lMel based ones, GMM and NN acoustic models and different phone sets. The outputs of the subsystems are combined via confusion network combination. Decoding is done in two stages, where the systems of the second stage are adapted in an unsupervised manner on the combination of the first stage outputs using VTLN, MLLR, and cMLLR. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,599 |
inproceedings | wolk-marasek-2013-polish | {P}olish-{E}nglish speech statistical machine translation systems for the {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.14/ | Wolk, Krzysztof and Marasek, Krzysztof | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This research explores the effects of various training settings on a Polish-to-English statistical machine translation system for spoken language. Various elements of the TED parallel text corpora for the IWSLT 2013 evaluation campaign were used as the basis for training of language models, and for development, tuning and testing of the translation system. The BLEU, NIST, METEOR and TER metrics were used to evaluate the effects of data preparations on translation results. Our experiments included systems which use stems and morphological information on Polish words. We also conducted a deep analysis of provided Polish data as preparatory work for the automatic data correction and cleaning phase. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,600 |
inproceedings | shaik-etal-2013-rwth | The {RWTH} {A}achen {G}erman and {E}nglish {LVCSR} systems for {IWSLT}-2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.15/ | Shaik, M. Ali Basha and T{\"u}ske, Zoltan and Wiesler, Simon and Nu{\ss}baum-Thom, Markus and Peitz, Stephan and Schl{\"u}ter, Ralf and Ney, Hermann | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | In this paper, German and English large vocabulary continuous speech recognition (LVCSR) systems developed by the RWTH Aachen University for the IWSLT-2013 evaluation campaign are presented. Good improvements are obtained with state-of-the-art monolingual and multilingual bottleneck features. In addition, an open vocabulary approach using morphemic sub-lexical units is investigated along with the language model adaptation for the German LVCSR. For both the languages, competitive WERs are achieved using system combination. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,601 |
inproceedings | freitag-etal-2013-eu | {EU}-{BRIDGE} {MT}: text translation of talks in the {EU}-{BRIDGE} project | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.16/ | Freitag, Markus and Peitz, Stephan and Wuebker, Joern and Ney, Hermann and Durrani, Nadir and Huck, Matthias and Koehn, Philipp and Ha, Thanh-Le and Niehues, Jan and Mediani, Mohammed and Herrmann, Teresa and Waibel, Alex and Bertoldi, Nicola and Cettolo, Mauro and Federico, Marcello | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | EU-BRIDGE is a European research project which is aimed at developing innovative speech translation technology. This paper describes one of the collaborative efforts within EU-BRIDGE to further advance the state of the art in machine translation between two European language pairs, English{\textrightarrow}French and German{\textrightarrow}English. Four research institutions involved in the EU-BRIDGE project combined their individual machine translation systems and participated with a joint setup in the machine translation track of the evaluation campaign at the 2013 International Workshop on Spoken Language Translation (IWSLT). We present the methods and techniques to achieve high translation quality for text translation of talks which are applied at RWTH Aachen University, the University of Edinburgh, Karlsruhe Institute of Technology, and Fondazione Bruno Kessler. We then show how we have been able to considerably boost translation performance (as measured in terms of the metrics BLEU and TER) by means of system combination. The joint setups yield empirical gains of up to 1.4 points in BLEU and 2.8 points in TER on the IWSLT test sets compared to the best single systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,602 |
inproceedings | kazi-etal-2013-mit | The {MIT}-{LL}/{AFRL} {IWSLT}-2013 {MT} system | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.17/ | Kazi, Michaeel and Coury, Michael and Salesky, Elizabeth and Ray, Jessica and Shen, Wade and Gleason, Terry and Anderson, Tim and Erdmann, Grant and Schwartz, Lane and Ore, Brian and Slyh, Raymond and Gwinnup, Jeremy and Young, Katherine and Hutt, Michael | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2013 evaluation campaign [1]. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Russian to English, Chinese to English, Arabic to English, and English to French TED-talk translation task. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2012 system, and experiments we ran during the IWSLT-2013 evaluation. Specifically, we focus on 1) cross-entropy filtering of MT training data, 2) improved optimization techniques, 3) language modeling, and 4) approximation of out-of-vocabulary words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,603 |
inproceedings | pham-etal-2013-speech | The speech recognition and machine translation system of {IOIT} for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.18/ | Pham, Ngoc-Quan and Le, Hai-Son and Vu, Tat-Thang and Luong, Chi-Mai | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the Automatic Speech Recognition (ASR) and Machine Translation (MT) systems developed by IOIT for the evaluation campaign of IWSLT2013. For the ASR task, using Kaldi toolkit, we developed the system based on weighted finite state transducer. The system is constructed by applying several techniques, notably, subspace Gaussian mixture models, speaker adaptation, discriminative training, system combination and SOUL, a neural network language model. The techniques used for automatic segmentation are also clarified. Besides, we compared different types of SOUL models in order to study the impact of words of previous sentences in predicting words in language modeling. For the MT task, the baseline system was built based on the open source toolkit N-code, then being augmented by using SOUL on top, i.e., in N-best rescoring phase. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,604 |
inproceedings | yilmaz-etal-2013-tubitak | {T{\"U}B{\.I}TAK} {T}urkish-{E}nglish submissions for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.19/ | Y{\i}lmaz, Ertu{\u{g}}rul and El-Kahlout, {\.I}lknur Durgar and Ayd{\i}n, Burak and {\"O}zil, Zi{\c{s}}an S{\i}la and Mermer, Co{\c{s}}kun | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the T{\"U}B{\.I}TAK Turkish-English submissions in both directions for the IWSLT'13 Evaluation Campaign TED Machine Translation (MT) track. We develop both phrase-based and hierarchical phrase-based statistical machine translation (SMT) systems based on Turkish word- and morpheme-level representations. We augment training data with content words extracted from itself and experiment with reverse word order for source languages. For the Turkish-to-English direction, we use the Gigaword corpus as an additional language model with the training data. For the English-to-Turkish direction, we implemented a wide-coverage Turkish word generator to generate words from the stem and morpheme sequences. Finally, we perform system combination of the different systems produced with different word alignments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,605 |
inproceedings | bertoldi-etal-2013-fbks | {FBK}'s machine translation systems for the {IWSLT} 2013 evaluation campaign | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.20/ | Bertoldi, Nicola and Farajian, M. Amin and Mathur, Prashant and Ruiz, Nicholas and Federico, Marcello | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the systems submitted by FBK for the MT track of IWSLT 2013. We participated in the English-French as well as the bidirectional Persian-English translation tasks. We report substantial improvements in our English-French systems over last year's baselines, largely due to improved techniques of combining translation and language models. For our Persian-English and English-Persian systems, we observe substantive improvements over baselines submitted by the workshop organizers, due to enhanced language-specific text normalization and the creation of a large monolingual news corpus in Persian. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,606 |
inproceedings | simianer-etal-2013-heidelberg | The Heidelberg University machine translation systems for {IWSLT}2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.21/ | Simianer, Patrick and Jehl, Laura and Riezler, Stefan | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | We present our systems for the machine translation evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2013. We submitted systems for three language directions: German-to-English, Russian-to-English and English-to-Russian. The focus of our approaches lies on effective usage of the in-domain parallel training data. Therefore, we use the training data to tune parameter weights for millions of sparse lexicalized features using efficient parallelized stochastic learning techniques. For German-to-English we incorporate syntax features. We combine all of our systems with large language models. For the systems involving Russian we also incorporate more data into building of the translation models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,607 |
inproceedings | bell-etal-2013-uedin | The {UEDIN} {E}nglish {ASR} system for the {IWSLT} 2013 evaluation | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.22/ | Bell, Peter and McInnes, Fergus and Gangireddy, Siva Reddy and Sinclair, Mark and Birch, Alexandra and Renals, Steve | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the University of Edinburgh (UEDIN) English ASR system for the IWSLT 2013 Evaluation. Notable features of the system include deep neural network acoustic models in both tandem and hybrid configuration, cross-domain adaptation with multi-level adaptive networks, and the use of a recurrent neural network language model. Improvements to our system since the 2012 evaluation {--} which include the use of a significantly improved n-gram language model {--} result in a 19{\%} relative WER reduction on the tst2012 set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,608 |
inproceedings | sakti-etal-2013-naist | The {NAIST} {E}nglish speech recognition system for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.23/ | Sakti, Sakriani and Kubo, Keigo and Neubig, Graham and Toda, Tomoki and Nakamura, Satoshi | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | This paper describes the NAIST English speech recognition system for the IWSLT 2013 Evaluation Campaign. In particular, we participated in the ASR track of the IWSLT TED task. Last year, we participated in collaboration with Karlsruhe Institute of Technology (KIT). This year is the first time we have built a full-fledged ASR system for IWSLT developed solely by NAIST. Our final system utilizes weighted finite-state transducers with four-gram language models. The hypothesis selection is based on the principle of system combination. On the IWSLT official test set our system introduced in this work achieves a WER of 9.1{\%} for tst2011, 10.0{\%} for tst2012, and 16.2{\%} for the new tst2013. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,609 |
inproceedings | ha-etal-2013-kit | The {KIT} translation systems for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.24/ | Ha, Than-Le and Herrmann, Teresa and Niehues, Jan and Mediani, Mohammed and Cho, Eunah and Zhang, Yuqi and Slawik, Isabel and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | In this paper, we present the KIT systems participating in all three official directions, namely English{\textrightarrow}German, German{\textrightarrow}English, and English{\textrightarrow}French, in translation tasks of the IWSLT 2013 machine translation evaluation. Additionally, we present the results for our submissions to the optional directions English{\textrightarrow}Chinese and English{\textrightarrow}Arabic. We used phrase-based translation systems to generate the translations. This year, we focused on adapting the systems towards ASR input. Furthermore, we investigated different reordering models as well as an extended discriminative word lexicon. Finally, we added a data selection approach for domain adaptation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,610 |
inproceedings | peng-etal-2013-casia | The {CASIA} machine translation system for {IWSLT} 2013 | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-evaluation.25/ | Peng, Xingyuan and Fu, Xiaoyin and Wei, Wei and Chen, Zhenbiao and Chen, Wei and Xu, Bo | Proceedings of the 10th International Workshop on Spoken Language Translation: Evaluation Campaign | null | In this paper, we describe the CASIA statistical machine translation (SMT) system for the IWSLT 2013 Evaluation Campaign. We participated in the Chinese-English and English-Chinese translation tasks. For both of these tasks, we used a hierarchical phrase-based (HPB) decoder and used it as our baseline translation system. A number of techniques were proposed to deal with these translation tasks, including parallel sentence extraction, pre-processing, translation model (TM) optimization, language model (LM) interpolation, tuning, and post-processing. With these techniques, the translation results were significantly improved compared with that of the baseline system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,611 |
inproceedings | schmidt-etal-2013-using | Using viseme recognition to improve a sign language translation system | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.1/ | Schmidt, Christoph and Koller, Oscar and Ney, Hermann and Hoyoux, Thomas and Piater, Justus | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Sign language-to-text translation systems are similar to spoken language translation systems in that they consist of a recognition phase and a translation phase. First, the video of a person signing is transformed into a transcription of the signs, which is then translated into the text of a spoken language. One distinctive feature of sign languages is their multi-modal nature, as they can express meaning simultaneously via hand movements, body posture and facial expressions. In some sign languages, certain signs are accompanied by mouthings, i.e. the person silently pronounces the word while signing. In this work, we closely integrate a recognition and translation framework by adding a viseme recognizer ({\textquotedblleft}lip reading system{\textquotedblright}) based on an active appearance model and by optimizing the recognition system to improve the translation output. The system outperforms the standard approach of separate recognition and translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,612 |
inproceedings | guzman-etal-2013-amara | The {AMARA} corpus: building resources for translating the web's educational content | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.2/ | Guzman, Francisco and Sajjad, Hassan and Vogel, Stephan and Abdelali, Ahmed | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | In this paper, we introduce a new parallel corpus of subtitles of educational videos: the AMARA corpus for online educational content. We crawl a multilingual collection of community-generated subtitles, and present the results of processing the Arabic{--}English portion of the data, which yields a parallel corpus of about 2.6M Arabic and 3.9M English words. We explore different approaches to align the segments, and extrinsically evaluate the resulting parallel corpus on the standard TED-talks tst-2010. We observe that the data can be successfully used for this task, and also observe an absolute improvement of 1.6 BLEU when it is used in combination with TED data. Finally, we analyze some of the specific challenges when translating the educational content. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,613 |
inproceedings | shimizu-etal-2013-constructing | Constructing a speech translation system using simultaneous interpretation data | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.3/ | Shimizu, Hiroaki and Neubig, Graham and Sakti, Sakriani and Toda, Tomoki and Nakamura, Satoshi | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | There has been a fair amount of work on automatic speech translation systems that translate in real-time, serving as a computerized version of a simultaneous interpreter. It has been noticed in the field of translation studies that simultaneous interpreters perform a number of tricks to make the content easier to understand in real-time, including dividing their translations into small chunks, or summarizing less important content. However, the majority of previous work has not specifically considered this fact, simply using translation data (made by translators) for learning of the machine translation system. In this paper, we examine the possibilities of additionally incorporating simultaneous interpretation data (made by simultaneous interpreters) in the learning process. First we collect simultaneous interpretation data from professional simultaneous interpreters of three levels, and perform an analysis of the data. Next, we incorporate the simultaneous interpretation data in the learning of the machine translation system. As a result, the translation style of the system becomes more similar to that of a highly experienced simultaneous interpreter. We also find that according to automatic evaluation metrics, our system achieves performance similar to that of a simultaneous interpreter that has 1 year of experience. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,614 |
inproceedings | gonzalez-rubio-casacuberta-2013-improving | Improving the minimum {B}ayes' risk combination of machine translation systems | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.4/ | Gonz{\'a}lez-Rubio, Jes{\'u}s and Casacuberta, Francisco | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | We investigate the problem of combining the outputs of different translation systems into a minimum Bayes' risk consensus translation. We explore different risk formulations based on the BLEU score, and provide a dynamic programming decoding algorithm for each of them. In our experiments, these algorithms generated consensus translations with better risk, and more efficiently, than previous proposals. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,615 |
inproceedings | gonzalez-rubio-etal-2013-emprical | Emprical study of a two-step approach to estimate translation quality | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.5/ | Gonz{\'a}lez-Rubio, Jes{\'u}s and Navarro-Cerd{\'a}n, J. Ram{\'o}n and Casacuberta, Francisco | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | We present a method to estimate the quality of automatic translations when reference translations are not available. Quality estimation is addressed as a two-step regression problem where multiple features are combined to predict a quality score. Given a set of features, we aim at automatically extracting the variables that better explain translation quality, and use them to predict the quality score. The soundness of our approach is assessed by the encouraging results obtained in an exhaustive experimentation with several feature sets. Moreover, the studied approach is highly-scalable allowing us to employ hundreds of features to predict translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,616 |
inproceedings | winebarger-etal-2013-2013 | The 2013 {KIT} Quaero speech-to-text system for {F}rench | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.6/ | Winebarger, Joshua and Nguyen, Bao and Gehring, Jonas and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | This paper describes our Speech-to-Text (STT) system for French, which was developed as part of our efforts in the Quaero program for the 2013 evaluation. Our STT system consists of six subsystems which were created by combining multiple complementary sources of pronunciation modeling including graphemes with various feature front-ends based on deep neural networks and tonal features. Both speaker-independent and speaker adaptively trained versions of the systems were built. The resulting systems were then combined via confusion network combination and cross-adaptation. Through progressive advances and system combination we reach a word error rate (WER) of 16.5{\%} on the 2012 Quaero evaluation data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,617 |
inproceedings | gong-etal-2013-improving | Improving bilingual sub-sentential alignment by sampling-based transpotting | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.7/ | Gong, Li and Max, Aur{\'e}lien and Yvon, Fran{\c{c}}ois | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | In this article, we present a sampling-based approach to improve bilingual sub-sentential alignment in parallel corpora. This approach can be used to align parallel sentences on an as needed basis, and is able to accurately align newly available sentences. We evaluate the resulting alignments on several Machine Translation tasks. Results show that for the tasks considered here, our approach performs on par with the state-of-the-art statistical alignment pipeline giza++/Moses, and obtains superior results in a number of configurations, notably when aligning additional parallel sentence pairs carefully selected to match the test input. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,618 |
inproceedings | heck-etal-2013-incremental | Incremental unsupervised training for university lecture recognition | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.8/ | Heck, Michael and St{\"u}ker, Sebastian and Sakti, Sakriani and Waibel, Alex and Nakamura, Satoshi | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | In this paper we describe our work on unsupervised adaptation of the acoustic model of our simultaneous lecture translation system. We trained a speaker independent acoustic model, with which we produce automatic transcriptions of new lectures in order to improve the system for a specific lecturer. We compare our results against a model that was trained in a supervised way on an exact manual transcription. We examine four different ways of processing the decoder outputs of the automatic transcription with respect to the treatment of pronunciation variants and noise words. We will show that, instead of fixing this information in the transcriptions, it is advantageous to let the Viterbi algorithm decide during training which pronunciations to use and where to insert which noise words. Further, we utilize word level posterior probabilities obtained during decoding by weighting and thresholding the words of a transcription. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,619 |
inproceedings | enarvi-kurimo-2013-studies | Studies on training text selection for conversational {F}innish language modeling | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.9/ | Enarvi, Seppo and Kurimo, Mikko | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Current ASR and MT systems do not operate on conversational Finnish, because training data for colloquial Finnish has not been available. Although speech recognition performance on literary Finnish is already quite good, those systems have very poor baseline performance in conversational speech. Text data for relevant vocabulary and language models can be collected from the Internet, but web data is very noisy and most of it is not helpful for learning good models. Finnish language is highly agglutinative, and written phonetically. Even phonetic reductions and sandhi are often written down in informal discussions. This increases vocabulary size dramatically and causes word-based selection methods to fail. Our selection method explicitly optimizes the perplexity of a subword language model on the development data, and requires only very limited amount of speech transcripts as development data. The language models have been evaluated for speech recognition using a new data set consisting of generic colloquial Finnish. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,620 |
inproceedings | mirkin-cancedda-2013-assessing | Assessing quick update methods of statistical translation models | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.10/ | Mirkin, Shachar and Cancedda, Nicola | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | The ability to quickly incorporate incoming training data into a running translation system is critical in a number of applications. Mechanisms based on incremental model update and the online EM algorithm hold the promise of achieving this objective in a principled way. Still, efficient tools for incremental training are yet to be available. In this paper we experiment with simple alternative solutions for interim model updates, within the popular Moses system. Short of updating the model in real time, such updates can execute in short timeframes even when operating on large models, and achieve a performance level close to, and in some cases exceeding, that of batch retraining. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,621 |
inproceedings | herrmann-etal-2013-analyzing | Analyzing the potential of source sentence reordering in statistical machine translation | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.11/ | Herrmann, Teresa and Weiner, Jochen and Niehues, Jan and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | We analyze the performance of source sentence reordering, a common reordering approach, using oracle experiments on German-English and English-German translation. First, we show that the potential of this approach is very promising. Compared to a monotone translation, the optimally reordered source sentence leads to improvements of up to 4.6 and 6.2 BLEU points, depending on the language. Furthermore, we perform a detailed evaluation of the different aspects of the approach. We analyze the impact of the restriction of the search space by reordering lattices and we can show that using more complex rule types for reordering results in better approximation of the optimally reordered source. However, a gap of about 3 to 3.8 BLEU points remains, presenting a promising perspective for research on extending the search space through better reordering rules. When evaluating the ranking of different reordering variants, the results reveal that the search for the best path in the lattice performs very well for German-English translation. For English-German translation there is potential for an improvement of up to 1.4 BLEU points through a better ranking of the different reordering possibilities in the reordering lattice. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,622 |
inproceedings | cho-etal-2013-crf | {CRF}-based disfluency detection using semantic features for {G}erman to {E}nglish spoken language translation | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.12/ | Cho, Eunah and Ha, Thanh-Le and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Disfluencies in speech pose severe difficulties in machine translation of spontaneous speech. This paper presents our conditional random field (CRF)-based speech disfluency detection system developed on German to improve spoken language translation performance. In order to detect speech disfluencies considering syntactics and semantics of speech utterances, we carried out a CRF-based approach using information learned from the word representation and the phrase table used for machine translation. The word representation is gained using recurrent neural networks and projected words are clustered using the k-means algorithm. Using the output from the model trained with the word representations and phrase table information, we achieve an improvement of 1.96 BLEU points on the lecture test set. By keeping or removing human-annotated disfluencies, we show an upper bound and lower bound of translation quality. In an oracle experiment we gain 3.16 BLEU points of improvement on the lecture test set, compared to the same set with all disfluencies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,623 |
inproceedings | shin-etal-2013-maximum | Maximum entropy language modeling for {R}ussian {ASR} | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.13/ | Shin, Evgeniy and St{\"u}ker, Sebastian and Kilgour, Kevin and F{\"u}gen, Christian and Waibel, Alex | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Russian is a challenging language for automatic speech recognition systems due to its rich morphology. This rich morphology stems from Russian`s highly inflectional nature and the frequent use of pre- and suffixes. Also, Russian has a very free word order, changes in which are used to reflect connotations of the sentences. Dealing with these phenomena is rather difficult for traditional n-gram models. We therefore investigate in this paper the use of a maximum entropy language model for Russian whose features are specifically designed to deal with the inflections in Russian, as well as the loose word order. We combine this with a subword based language model in order to alleviate the problem of large vocabulary sizes necessary for dealing with highly inflecting languages. Applying the maximum entropy language model during re-scoring improves the word error rate of our recognition system by 1.2{\%} absolute, while the use of the sub-word based language model reduces the vocabulary size from 120k to 40k and the OOV rate from 4.8{\%} to 2.1{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,624 |
inproceedings | post-etal-2013-improved | Improved speech-to-text translation with the Fisher and Callhome {S}panish-{E}nglish speech translation corpus | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.14/ | Post, Matt and Kumar, Gaurav and Lopez, Adam and Karakos, Damianos and Callison-Burch, Chris and Khudanpur, Sanjeev | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Research into the translation of the output of automatic speech recognition (ASR) systems is hindered by the dearth of datasets developed for that explicit purpose. For Spanish-English translation, in particular, most parallel data available exists only in vastly different domains and registers. In order to support research on cross-lingual speech applications, we introduce the Fisher and Callhome Spanish-English Speech Translation Corpus, supplementing existing LDC audio and transcripts with (a) ASR 1-best, lattice, and oracle output produced by the Kaldi recognition system and (b) English translations obtained on Amazon`s Mechanical Turk. The result is a four-way parallel dataset of Spanish audio, transcriptions, ASR lattices, and English translations of approximately 38 hours of speech, with defined training, development, and held-out test sets. We conduct baseline machine translation experiments using models trained on the provided training data, and validate the dataset by corroborating a number of known results in the field, including the utility of in-domain (informal, conversational) training data, increased performance translating lattices (instead of recognizer 1-best output), and the relationship between word error rate and BLEU score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,625 |
inproceedings | saers-wu-2013-unsupervised-learning | Unsupervised learning of bilingual categories in inversion transduction grammar induction | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.15/ | Saers, Markus and Wu, Dekai | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | We present the first known experiments incorporating unsupervised bilingual nonterminal category learning within end-to-end fully unsupervised transduction grammar induction using matched training and testing models. Despite steady recent progress, such induction experiments until now have not allowed for learning differentiated nonterminal categories. We divide the learning into two stages: (1) a bootstrap stage that generates a large set of categorized short transduction rule hypotheses, and (2) a minimum conditional description length stage that simultaneously prunes away less useful short rule hypotheses, while also iteratively segmenting full sentence pairs into useful longer categorized transduction rules. We show that the second stage works better when the rule hypotheses have categories than when they do not, and that the proposed conditional description length approach combines the rules hypothesized by the two stages better than a mixture model does. We also show that the compact model learned during the second stage can be further improved by combining the result of different iterations in a mixture model. In total, we see a jump in BLEU score, from 17.53 for a standalone minimum description length baseline with no category learning, to 20.93 when incorporating category induction on a Chinese{--}English translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,626 |
inproceedings | marie-max-2013-study | A study in greedy oracle improvement of translation hypotheses | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.16/ | Marie, Benjamin and Max, Aur{\'e}lien | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | This paper describes a study of translation hypotheses that can be obtained by iterative, greedy oracle improvement from the best hypothesis of a state-of-the-art phrase-based Statistical Machine Translation system. The factors that we consider include the influence of the rewriting operations, target languages, and training data sizes. Analysis of our results provide new insights into some previously unanswered questions, which include the reachability of previously unreachable hypotheses via indirect translation (thanks to the introduction of a rewrite operation on the source text), and the potential translation performance of systems relying on pruned phrase tables. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,627 |
inproceedings | ananthakrishnan-etal-2013-source | Source aware phrase-based decoding for robust conversational spoken language translation | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.17/ | Ananthakrishnan, Sankaranarayanan and Chen, Wei and Kumar, Rohit and Mehay, Dennis | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | Spoken language translation (SLT) systems typically follow a pipeline architecture, in which the best automatic speech recognition (ASR) hypothesis of an input utterance is fed into a statistical machine translation (SMT) system. Conversational speech often generates unrecoverable ASR errors owing to its rich vocabulary (e.g. out-of-vocabulary (OOV) named entities). In this paper, we study the possibility of alleviating the impact of unrecoverable ASR errors on translation performance by minimizing the contextual effects of incorrect source words in target hypotheses. Our approach is driven by locally-derived penalties applied to bilingual phrase pairs as well as target language model (LM) likelihoods in the vicinity of source errors. With oracle word error labels on an OOV word-rich English-to-Iraqi Arabic translation task, we show statistically significant relative improvements of 3.2{\%} BLEU and 2.0{\%} METEOR over an error-agnostic baseline SMT system. We then investigate the impact of imperfect source error labels on error-aware translation performance. Simulation experiments reveal that modest translation improvements are to be gained with this approach even when the source error labels are noisy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,628 |
inproceedings | sakamoto-etal-2013-evaluation | Evaluation of a simultaneous interpretation system and analysis of speech log for user experience assessment | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.18/ | Sakamoto, Akiko and Abe, Kazuhiko and Sumita, Kazuo and Kamatani, Satoshi | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | This paper focuses on the user experience (UX) of a simultaneous interpretation system for face-to-face conversation between two users. To assess the UX of the system, we first made a transcript of the speech of users recorded during a task-based evaluation experiment and then analyzed user speech from the viewpoint of UX. In a task-based evaluation experiment, 44 tasks out of 45 tasks were solved. The solved task ratio was 97.8{\%}. This indicates that the system can effectively provide interpretation to enable users to solve tasks. However, we found that users repeated speech due to errors in automatic speech recognition (ASR) or machine translation (MT). Users repeated clauses 1.8 times on average. Users seemed to repeat themselves until they received a response from their partner users. In addition, we found that after approximately 3.6 repetitions, users would change their words to avoid errors in ASR or MT and to evoke a response from their partner users. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,629 |
inproceedings | jalalvand-falavigna-2013-parameter | Parameter optimization for iterative confusion network decoding in weather-domain speech recognition | Zhang, Joy Ying | dec # " 5-6" | 2013 | Heidelberg, Germany | null | https://aclanthology.org/2013.iwslt-papers.19/ | Jalalvand, Shahab and Falavigna, Daniele | Proceedings of the 10th International Workshop on Spoken Language Translation: Papers | null | In this paper, we apply a set of approaches to, efficiently, rescore the output of the automatic speech recognition over weather-domain data. Since the in-domain data is usually insufficient for training an accurate language model (LM) we utilize an automatic selection method to extract domain-related sentences from a general text resource. Then, an N-gram language model is trained on this set. We exploit this LM, along with a pre-trained acoustic model for recognition of the development and test instances. The recognizer generates a confusion network (CN) for each instance. Afterwards, we make use of the recurrent neural network language model (RNNLM), trained on the in-domain data, in order to iteratively rescore the CNs. Rescoring the CNs, in this way, requires estimating the weights of the RNNLM, N-gram LM and acoustic model scores. Weight optimization is the critical part of this work, whereby we propose using the minimum error rate training (MERT) algorithm along with a novel N-best list extraction method. The experiments are done over weather forecast domain data that has been provided in the framework of the EU-BRIDGE project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 71,630 |
inproceedings | jokinen-tenjes-2012-investigating | Investigating Engagement - intercultural and technological aspects of the collection, analysis, and use of the {E}stonian Multiparty Conversational video data | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1001/ | Jokinen, Kristiina and Tenjes, Silvi | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2764--2769 | In this paper we describe the goals of the Estonian corpus collection and analysis activities, and introduce the recent collection of Estonian First Encounters data. The MINT project aims at deepening our understanding of the conversational properties and practices in human interactions. We especially investigate conversational engagement and cooperation, and discuss some observations on the participants' views concerning the interaction in which they have been engaged. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,202 |
inproceedings | burkhardt-2012-seem | {\textquotedblleft}You Seem Aggressive!{\textquotedblright} Monitoring Anger in a Practical Application | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1002/ | Burkhardt, Felix | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1221--1225 | A monitoring system to detect emotional outbursts in day-to-day communication is presented. The anger monitor was tested in a household and in parallel in an office surrounding. Although the state of the art of emotion recognition seems sufficient for practical applications, the acquisition of good training material remains a difficult task, as cross database performance is too low to be used in this context. A solution will probably consist of the combination of carefully drafted general training databases and the development of usability concepts to (re-) train the monitor in the field. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,203 |
inproceedings | burkhardt-2012-fast | Fast Labeling and Transcription with the Speechalyzer Toolkit | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1003/ | Burkhardt, Felix | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 196--200 | We describe a software tool named Speechalyzer which is optimized to process large speech data sets with respect to transcription, labeling and annotation. It is implemented as a client server based framework in Java and interfaces software for speech recognition, synthesis, speech classification and quality evaluation. The application is mainly the processing of training data for speech recognition and classification models and performing benchmarking tests on speech to text, text to speech and speech categorization software systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,204 |
inproceedings | spyns-dhalleweyn-2012-smooth | Smooth Sailing for {STEVIN} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1004/ | Spyns, Peter and D{'}Halleweyn, Elisabeth | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1021--1028 | In this paper we report on the past evaluation of the STEVIN programme in the field of Human Language Technology for Dutch (HLTD). STEVIN was a 11.4 M euro programme that was jointly organised and financed by the Flemish and Dutch governments. The aim was to provide academia and industry with basic building blocks for a linguistic infrastructure for the Dutch language. An independent evaluation has been carried out. The evaluators concluded that the most important targets of the STEVIN programme have been achieved to a very high extent. In this paper, we summarise the context, the evaluation method, the resulting resources and the highlights of the STEVIN final evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,205 |
inproceedings | stein-usabaev-2012-automatic | Automatic Speech Recognition on a Firefighter {TETRA} Broadcast Channel | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1005/ | Stein, Daniel and Usabaev, Bela | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 119--124 | For a reliable keyword extraction on firefighter radio communication, a strong automatic speech recognition system is needed. However, real-life data poses several challenges like a distorted voice signal, background noise and several different speakers. Moreover, the domain is out-of-scope for common language models, and the available data is scarce. In this paper, we introduce the PRONTO corpus, which consists of German firefighter exercise transcriptions. We show that by standard adaption techniques the recognition rate already rises from virtually zero to up to 51.7{\%} and can be further improved by domain-specific rules to 47.9{\%}. Extending the acoustic material by semi-automatic transcription and crawled in-domain written material, we arrive at a WER of 45.2{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,206 |
inproceedings | saralegi-etal-2012-building | Building a {B}asque-{C}hinese Dictionary by Using {E}nglish as Pivot | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1006/ | Saralegi, Xabier and Manterola, Iker and San Vicente, I{\~n}aki | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1443--1447 | Bilingual dictionaries are key resources in several fields such as translation, language learning or various NLP tasks. However, only major languages have such resources. Automatically built dictionaries by using pivot languages could be a useful resource in these circumstances. Pivot-based bilingual dictionary building is based on merging two bilingual dictionaries which share a common language (e.g. LA-LB, LB-LC) in order to create a dictionary for a new language pair (e.g LA-LC). This process may include wrong translations due to the polysemy of words. We built Basque-Chinese (Mandarin) dictionaries automatically from Basque-English and Chinese-English dictionaries. In order to prune wrong translations we used different methods adequate for less resourced languages. Inverse Consultation and Distributional Similarity methods are used because they just depend on easily available resources. Finally, we evaluated manually the quality of the built dictionaries and the adequacy of the methods. Both Inverse Consultation and Distributional Similarity provide good precision of translations but recall is seriously damaged. Distributional similarity prunes rare translations more accurately than other methods. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,207 |
inproceedings | tang-chen-2012-mining | Mining Sentiment Words from Microblogs for Predicting Writer-Reader Emotion Transition | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1007/ | Tang, Yi-jie and Chen, Hsin-Hsi | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1226--1229 | The conversations between posters and repliers in microblogs form a valuable writer-reader emotion corpus. This paper adopts a log relative frequency ratio to investigate the linguistic features which affect emotion transitions, and applies the results to predict writers' and readers' emotions. A 4-class emotion transition predictor, a 2-class writer emotion predictor, and a 2-class reader emotion predictor are proposed and compared. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,208 |
inproceedings | spoustova-spousta-2012-high | A High-Quality Web Corpus of {C}zech | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1008/ | Spoustov{\'a}, Johanka and Spousta, Miroslav | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 311--315 | In our paper, we present main results of the Czech grant project Internet as a Language Corpus, whose aim was to build a corpus of Czech web texts and to develop and publicly release related software tools. Our corpus may not be the largest web corpus of Czech, but it maintains very good language quality due to high portion of human work involved in the corpus development process. We describe the corpus contents (2.65 billions of words divided into three parts -- 450 millions of words from news and magazines articles, 1 billion of words from blogs, diaries and other non-reviewed literary units, 1.1 billion of words from discussions messages), particular steps of the corpus creation (crawling, HTML and boilerplate removal, near duplicates removal, language filtering) and its automatic language annotation (POS tagging, syntactic parsing). We also describe our software tools being released under an open source license, especially a fast linear-time module for removing near-duplicates on a paragraph level. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,209 |
inproceedings | luder-2012-german | {G}erman Verb Patterns and Their Implementation in an Electronic Dictionary | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1009/ | Luder, Marc | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 693--697 | We describe an electronic lexical resource for German and the structure of its lexicon entries, notably the structure of verbal single-word and multi-word entries. The verb as the center of the sentence structure, as held by dependency models, is also a basic principle of the JAKOB narrative analysis application, for which the dictionary is the background. Different linguistic layers are combined for construing lexicon entries with a rich set of syntactic and semantic properties, suited to represent the syntactic and semantic behavior of verbal expressions (verb patterns), extracted from transcripts of real discourse, thereby lexicalizing the specific meaning of a specific verb pattern in a specific context. Verb patterns are built by the lexicographer by using a parser analyzing the input of a test clause and generating a machine-readable property string with syntactic characteristics and propositions for semantic characteristics grounded in an ontology. As an example, the German idiomatic expression {\textquotedblleft}an den Karren fahren{\textquotedblright} (to come down hard on somebody) demonstrates the overall structure of a dictionary entry. The goal is to build unique dictionary entries (verb patterns) with reference to the whole of their properties. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,210 |
inproceedings | maynard-greenwood-2012-large | Large Scale Semantic Annotation, Indexing and Search at The National Archives | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1010/ | Maynard, Diana and Greenwood, Mark A. | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3487--3494 | This paper describes a tool developed to improve access to the enormous volume of data housed at the UK`s National Archives, both for the general public and for specialist researchers. The system we have developed, TNA-Search, enables a multi-paradigm search over the entire electronic archive (42TB of data in various formats). The search functionality allows queries that arbitrarily mix any combination of full-text, structural, linguistic and semantic queries. The archive is annotated and indexed with respect to a massive semantic knowledge base containing data from the LOD cloud, data.gov.uk, related TNA projects, and a large geographical database. The semantic annotation component achieves approximately 83{\%} F-measure, which is very reasonable considering the wide range of entities and document types and the open domain. The technologies are being adopted by real users at The National Archives and will form the core of their suite of search tools, with additional in-house interfaces. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,211 |
inproceedings | sharaf-atwell-2012-qurana | {Q}ur{A}na: Corpus of the {Q}uran annotated with Pronominal Anaphora | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1011/ | Sharaf, Abdul-Baquee and Atwell, Eric | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 130--137 | This paper presents QurAna: a large corpus created from the original Quranic text, where personal pronouns are tagged with their antecedence. These antecedents are maintained as an ontological list of concepts, which have proved helpful for information retrieval tasks. QurAna is characterized by: (a) comparatively large number of pronouns tagged with antecedent information (over 24,500 pronouns), and (b) maintenance of an ontological concept list out of these antecedents. We have shown useful applications of this corpus. This corpus is first of its kind considering classical Arabic text, which could be used for interesting applications for Modern Standard Arabic as well. This corpus would benefit researchers in obtaining empirical and rules in building new anaphora resolution approaches. Also, such corpus would be used to train, optimize and evaluate existing approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,212 |
inproceedings | scheible-schutze-2012-bootstrapping | Bootstrapping Sentiment Labels For Unannotated Documents With Polarity {P}age{R}ank | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1012/ | Scheible, Christian and Sch{\"u}tze, Hinrich | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1230--1234 | We present a novel graph-theoretic method for the initial annotation of high-confidence training data for bootstrapping sentiment classifiers. We estimate polarity using topic-specific PageRank. Sentiment information is propagated from an initial seed lexicon through a joint graph representation of words and documents. We report improved classification accuracies across multiple domains for the base models and the maximum entropy model bootstrapped from the PageRank annotation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,213
inproceedings | clematide-etal-2012-mlsa | {MLSA} {---} A Multi-layered Reference Corpus for {G}erman Sentiment Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1013/ | Clematide, Simon and Gindl, Stefan and Klenner, Manfred and Petrakis, Stefanos and Remus, Robert and Ruppenhofer, Josef and Waltinger, Ulli and Wiegand, Michael | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3551--3556 | In this paper, we describe MLSA, a publicly available multi-layered reference corpus for German-language sentiment analysis. The construction of the corpus is based on the manual annotation of 270 German-language sentences considering three different layers of granularity. The sentence-layer annotation, as the most coarse-grained annotation, focuses on aspects of objectivity, subjectivity and the overall polarity of the respective sentences. Layer 2 is concerned with polarity on the word- and phrase-level, annotating both subjective and factual language. The annotations on Layer 3 focus on the expression-level, denoting frames of private states such as objective and direct speech events. These three layers and their respective annotations are intended to be fully independent of each other. At the same time, exploring for and discovering interactions that may exist between different layers should also be possible. The reliability of the respective annotations was assessed using the average pairwise agreement and Fleiss' multi-rater measures. We believe that MLSA is a beneficial resource for sentiment analysis research, algorithms and applications that focus on the German language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,214
inproceedings | sainz-etal-2012-versatile | Versatile Speech Databases for High Quality Synthesis for {B}asque | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1014/ | Sainz, I{\~n}aki and Erro, Daniel and Navas, Eva and Hern{\'a}ez, Inma and Sanchez, Jon and Saratxaga, Ibon and Odriozola, Igor | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3308--3312 | This paper presents three new speech databases for standard Basque. They are designed primarily for corpus-based synthesis but each database has its specific purpose: 1) AhoSyn: high quality speech synthesis (recorded also in Spanish), 2) AhoSpeakers: voice conversion and 3) AhoEmo3: emotional speech synthesis. The whole corpus design and the recording process are described with detail. Once the databases were collected all the data was automatically labelled and annotated. Then, an HMM-based TTS voice was built and subjectively evaluated. The results of the evaluation are pretty satisfactory: 3.70 MOS for Basque and 3.44 for Spanish. Therefore, the evaluation assesses the quality of this new speech resource and the validity of the automated processing presented. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,215 |
inproceedings | llorens-etal-2012-timen | {TIMEN}: An Open Temporal Expression Normalisation Resource | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1015/ | Llorens, Hector and Derczynski, Leon and Gaizauskas, Robert and Saquete, Estela | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3044--3051 | Temporal expressions are words or phrases that describe a point, duration or recurrence in time. Automatically annotating these expressions is a research goal of increasing interest. Recognising them can be achieved with minimally supervised machine learning, but interpreting them accurately (normalisation) is a complex task requiring human knowledge. In this paper, we present TIMEN, a community-driven tool for temporal expression normalisation. TIMEN is derived from current best approaches and is an independent tool, enabling easy integration in existing systems. We argue that temporal expression normalisation can only be effectively performed with a large knowledge base and set of rules. Our solution is a framework and system with which to capture this knowledge for different languages. Using both existing and newly-annotated data, we present results showing competitive performance and invite the IE community to contribute to a knowledge base in order to solve the temporal expression normalisation problem. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,216 |
inproceedings | brooke-hirst-2012-measuring | Measuring Interlanguage: Native Language Identification with {L}1-influence Metrics | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1016/ | Brooke, Julian and Hirst, Graeme | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 779--784 | The task of native language (L1) identification suffers from a relative paucity of useful training corpora, and standard within-corpus evaluation is often problematic due to topic bias. In this paper, we introduce a method for L1 identification in second language (L2) texts that relies only on much more plentiful L1 data, rather than the L2 texts that are traditionally used for training. In particular, we do word-by-word translation of large L1 blog corpora to create a mapping to L2 forms that are a possible result of language transfer, and then use that information for unsupervised classification. We show this method is effective in several different learner corpora, with bigram features being particularly useful. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,217 |
inproceedings | saint-dizier-2012-dislog | {DISLOG}: A logic-based language for processing discourse structures | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1017/ | Saint-Dizier, Patrick | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2770--2777 | In this paper, we present the foundations and the properties of the DISLOG language, a logic-based language designed to describe and implement discourse structure analysis. Dislog has the flexibility and the expressiveness of a rule-based system, it offers the possibility to include knowledge and reasoning capabilities and the expression of a variety of well-formedness constraints proper to discourse. Dislog is embedded into the platform that offers an engine with various processing capabilities and a programming environment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,218
inproceedings | wiegand-etal-2012-gold | A Gold Standard for Relation Extraction in the Food Domain | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1018/ | Wiegand, Michael and Roth, Benjamin and Lasarcyk, Eva and K{\"o}ser, Stephanie and Klakow, Dietrich | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 507--514 | We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that are easy to retrieve. A typical relation type that meets these requirements are pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer has already in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,219
inproceedings | bourse-saint-dizier-2012-repository | A Repository of Rules and Lexical Resources for Discourse Structure Analysis: the Case of Explanation Structures | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1019/ | Bourse, Sarah and Saint-Dizier, Patrick | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2778--2785 | In this paper, we present an analysis method, a set of rules, lexical resources dedicated to discourse relation identification, in particular for explanation analysis. The following relations are described with prototypical rules: instructions, advice, warnings, illustration, restatement, purpose, condition, circumstance, concession, contrast and some forms of causes. Rules are developed for French and English. The approach used to describe the analysis of such relations is basically generative and also provides a conceptual view of explanation. The implementation is realized in Dislog, using the logic-based platform, and the Dislog language, that also allows for the integration of knowledge and reasoning into rules describing the structure of explanation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,220 |
inproceedings | barcellini-etal-2012-risk | Risk Analysis and Prevention: {LELIE}, a Tool dedicated to Procedure and Requirement Authoring | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1020/ | Barcellini, Flore and Albert, Camille and Grosse, Corinne and Saint-Dizier, Patrick | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 698--705 | In this paper, we present the first phase of the LELIE project. A tool that detects business errors in technical documents such as procedures or requirements is introduced. The objective is to improve readability and to check for some elements of contents so that risks that could be entailed by misunderstandings or typos can be prevented. Based on a cognitive ergonomics analysis, we survey a number of frequently encountered types of errors and show how they can be detected using the discourse analysis platform. We show how errors can be annotated, give figures on error frequencies and analyze how technical writers perceive our system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,221 |
inproceedings | tannier-2012-webannotator | {W}eb{A}nnotator, an Annotation Tool for Web Pages | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1021/ | Tannier, Xavier | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 316--319 | This article presents WebAnnotator, a new tool for annotating Web pages. WebAnnotator is implemented as a Firefox extension, allowing annotation of both offline and online pages. The HTML rendering is fully preserved and all annotations consist in new HTML spans with specific styles. WebAnnotator provides an easy and general-purpose framework and is made available under CeCILL free license (close to GNU GPL), so that use and further contributions are made simple. All parts of an HTML document can be annotated: text, images, videos, tables, menus, etc. The annotations are created by simply selecting a part of the document and clicking on the relevant type and subtypes. The annotated elements are then highlighted in a specific color. Annotation schemas can be defined by the user by creating a simple DTD representing the types and subtypes that must be highlighted. Finally, annotations can be saved (HTML with highlighted parts of documents) or exported (in a machine-readable format). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,222
inproceedings | ogrodniczuk-etal-2012-towards | Towards a comprehensive open repository of {P}olish language resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1022/ | Ogrodniczuk, Maciej and P{\k{e}}zik, Piotr and Przepi{\'o}rkowski, Adam | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3593--3597 | The aim of this paper is to present current efforts towards the creation of a comprehensive open repository of Polish language resources and tools (LRTs). The work described here is carried out within the CESAR project, member of the META-NET consortium. It has already resulted in the creation of the Computational Linguistics in Poland site containing an exhaustive collection of Polish LRTs. Current work is focused on the creation of new LRTs and, esp., the enhancement of existing LRTs, such as parallel corpora, annotated corpora of written and spoken Polish and morphological dictionaries to be made available via the META-SHARE repository. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,223 |
inproceedings | patejuk-przepiorkowski-2012-towards | Towards an {LFG} parser for {P}olish: An exercise in parasitic grammar development | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1023/ | Patejuk, Agnieszka and Przepi{\'o}rkowski, Adam | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3849--3852 | While it is possible to build a formal grammar manually from scratch or, going to another extreme, to derive it automatically from a treebank, the development of the LFG grammar of Polish presented in this paper is different from both of these methods as it relies on extensive reuse of existing language resources for Polish. LFG grammars minimally provide two levels of representation: constituent structure (c-structure) produced by context-free phrase structure rules and functional structure (f-structure) created by functional descriptions. The c-structure was based on a DCG grammar of Polish, while the f-structure level was mainly inspired by the available HPSG analyses of Polish. The morphosyntactic information needed to create a lexicon may be taken from one of the following resources: a morphological analyser, a treebank or a corpus. Valence information from the dictionary which accompanies the DCG grammar was converted so that subcategorisation is stated in terms of grammatical functions rather than categories; additionally, missing valence frames may be extracted from the treebank. The obtained grammar is evaluated using constructed testsuites (half of which were provided by previous grammars) and the treebank. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,224
inproceedings | jokinen-wilcock-2012-constructive | Constructive Interaction for Talking about Interesting Topics | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1024/ | Jokinen, Kristiina and Wilcock, Graham | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 404--410 | The paper discusses mechanisms for topic management in conversations, concentrating on interactions where the interlocutors react to each other`s presentation of new information and construct a shared context in which to exchange information about interesting topics. This is illustrated with a robot simulator that can talk about unrestricted (open-domain) topics that the human interlocutor shows interest in. Wikipedia is used as the source of information from which the robotic agent draws its world knowledge. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,225 |
inproceedings | pereira-etal-2012-corpus | Corpus-based Referring Expressions Generation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1025/ | Pereira, Hilder and Novais, Eder and Mariotti, Andr{\'e} and Paraboni, Ivandr{\'e} | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 4004--4009 | In Natural Language Generation, the task of attribute selection (AS) consists of determining the appropriate attribute-value pairs (or semantic properties) that represent the contents of a referring expression. Existing work on AS includes a wide range of algorithmic solutions to the problem, but the recent availability of corpora annotated with referring expressions data suggests that corpus-based AS strategies become possible as well. In this work we tentatively discuss a number of AS strategies using both semantic and surface information obtained from a corpus of this kind. Relying on semantic information, we attempt to learn both global and individual AS strategies that could be applied to a standard AS algorithm in order to generate descriptions found in the corpus. As an alternative, and perhaps less traditional approach, we also use surface information to build statistical language models of the referring expressions that are most likely to occur in the corpus, and let the model probabilities guide attribute selection. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,226 |
inproceedings | novais-etal-2012-portuguese | {P}ortuguese Text Generation from Large Corpora | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1026/ | Novais, Eder and Paraboni, Ivandr{\'e} and Silva, Douglas | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 4010--4014 | In the implementation of a surface realisation engine, many of the computational techniques seen in other AI fields have been widely applied. Among these, the use of statistical methods has been particularly successful, as in the so-called `generate-and-select', or 2-stages architectures. Systems of this kind produce output strings from possibly underspecified input data by over-generating a large number of alternative realisations (often including ungrammatical candidate sentences.) These are subsequently ranked with the aid of a statistical language model, and the most likely candidate is selected as the output string. Statistical approaches may however face a number of difficulties. Among these, there is the issue of data sparseness, a problem that is particularly evident in cases such as our target language - Brazilian Portuguese - which is not only morphologically-rich, but relatively poor in NLP resources such as large, publicly available corpora. In this work we describe a first implementation of a shallow surface realisation system for this language that deals with the issue of data sparseness by making use of factored language models built from a (relatively) large corpus of Brazilian newspapers articles. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,227 |
inproceedings | petukhova-etal-2012-sumat | {SUMAT}: Data Collection and Parallel Corpus Compilation for Machine Translation of Subtitles | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1027/ | Petukhova, Volha and Agerri, Rodrigo and Fishel, Mark and Penkale, Sergio and del Pozo, Arantza and Mau{\v{c}}ec, Mirjam Sepesy and Way, Andy and Georgakopoulou, Panayota and Volk, Martin | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 21--28 | Subtitling and audiovisual translation have been recognized as areas that could greatly benefit from the introduction of Statistical Machine Translation (SMT) followed by post-editing, in order to increase efficiency of subtitle production process. The FP7 European project SUMAT (An Online Service for SUbtitling by MAchine Translation: \url{http://www.sumat-project.eu}) aims to develop an online subtitle translation service for nine European languages, combined into 14 different language pairs, in order to semi-automate the subtitle translation processes of both freelance translators and subtitling companies on a large scale. In this paper we discuss the data collection and parallel corpus compilation for training SMT systems, which includes several procedures such as data partition, conversion, formatting, normalization and alignment. We discuss in detail each data pre-processing step using various approaches. Apart from the quantity (around 1 million subtitles per language pair), the SUMAT corpus has a number of very important characteristics. First of all, high quality both in terms of translation and in terms of high-precision alignment of parallel documents and their contents has been achieved. Secondly, the contents are provided in one consistent format and encoding. Finally, additional information such as type of content in terms of genres and domain is available. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,228
inproceedings | cambria-etal-2012-affective | Affective Common Sense Knowledge Acquisition for Sentiment Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1028/ | Cambria, Erik and Xia, Yunqing and Hussain, Amir | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3580--3585 | Thanks to the advent of Web 2.0, the potential for opinion sharing today is unmatched in history. Making meaning out of the huge amount of unstructured information available online, however, is extremely difficult as web-contents, despite being perfectly suitable for human consumption, still remain hardly accessible to machines. To bridge the cognitive and affective gap between word-level natural language data and the concept-level sentiments conveyed by them, affective common sense knowledge is needed. In sentic computing, the general common sense knowledge contained in ConceptNet is usually exploited to spread affective information from selected affect seeds to other concepts. In this work, besides exploiting the emotional content of the Open Mind corpus, we also collect new affective common sense knowledge through label sequential rules, crowd sourcing, and games-with-a-purpose techniques. In particular, we develop Open Mind Common Sentics, an emotion-sensitive IUI that serves both as a platform for affective common sense acquisition and as a publicly available NLP tool for extracting the cognitive and affective information associated with short texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,229 |
inproceedings | heracleous-etal-2012-body | Body-conductive acoustic sensors in human-robot communication | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1029/ | Heracleous, Panikos and Ishi, Carlos and Miyashita, Takahiro and Hagita, Norihiro | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3340--3344 | In this study, the use of alternative acoustic sensors in human-robot communication is investigated. In particular, a Non-Audible Murmur (NAM) microphone was applied in teleoperating Geminoid HI-1 robot in noisy environments. The current study introduces the methodology and the results of speech intelligibility subjective tests when a NAM microphone was used in comparison with using a standard microphone. The results show the advantage of using NAM microphone when the operation takes place in adverse environmental conditions. In addition, the effect of Geminoid`s lip movements on speech intelligibility is also investigated. Subjective speech intelligibility tests show that the operator`s speech can be perceived with higher intelligibility scores when operator`s audio speech is perceived along with the lip movements of robots. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,230 |
inproceedings | macken-etal-2012-keystrokes | From keystrokes to annotated process data: Enriching the output of Inputlog with linguistic information | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1030/ | Macken, Lieve and Hoste, Veronique and Leijten, Mari{\"e}lle and Van Waes, Luuk | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2224--2229 | Keystroke logging tools are a valuable aid to monitor written language production. These tools record all keystrokes, including backspaces and deletions together with timing information. In this paper we report on an extension to the keystroke logging program Inputlog in which we aggregate the logged process data from the keystroke (character) level to the word level. The logged process data are further enriched with different kinds of linguistic information: part-of-speech tags, lemmata, chunk boundaries, syllable boundaries and word frequency. A dedicated parser has been developed that distils from the logged process data word-level revisions, deleted fragments and final product data. The linguistically-annotated output will facilitate the linguistic analysis of the logged data and will provide a valuable basis for more linguistically-oriented writing process research. The set-up of the extension to Inputlog is largely language-independent. As proof-of-concept, the extension has been developed for English and Dutch. Inputlog is freely available for research purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,231
inproceedings | henrich-hinrichs-2012-comparative | A Comparative Evaluation of Word Sense Disambiguation Algorithms for {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1031/ | Henrich, Verena and Hinrichs, Erhard | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 576--583 | The present paper explores a wide range of word sense disambiguation (WSD) algorithms for German. These WSD algorithms are based on a suite of semantic relatedness measures, including path-based, information-content-based, and gloss-based methods. Since the individual algorithms produce diverse results in terms of precision and thus complement each other well in terms of coverage, a set of combined algorithms is investigated and compared in performance to the individual algorithms. Among the single algorithms considered, a word overlap method derived from the Lesk algorithm that uses Wiktionary glosses and GermaNet lexical fields yields the best F-score of 56.36. This result is outperformed by a combined WSD algorithm that uses weighted majority voting and obtains an F-score of 63.59. The WSD experiments utilize the German wordnet GermaNet as a sense inventory as well as WebCAGe (short for: Web-Harvested Corpus Annotated with GermaNet Senses), a newly constructed, sense-annotated corpus for this language. The WSD experiments also confirm that WSD performance is lower for words with fine-grained sense distinctions compared to words with coarse-grained senses. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,232 |
inproceedings | varges-etal-2012-semscribe | {S}em{S}cribe: Natural Language Generation for Medical Reports | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1032/ | Varges, Sebastian and Bieler, Heike and Stede, Manfred and Faulstich, Lukas C. and Irsig, Kristin and Atalla, Malik | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2674--2681 | Natural language generation in the medical domain is heavily influenced by domain knowledge and genre-specific text characteristics. We present SemScribe, an implemented natural language generation system that produces doctor`s letters, in particular descriptions of cardiological findings. Texts in this domain are characterized by a high density of information and a relatively telegraphic style. Domain knowledge is encoded in a medical ontology of about 80,000 concepts. The ontology is used in particular for concept generalizations during referring expression generation. Architecturally, the system is a generation pipeline that uses a corpus-informed syntactic frame approach for realizing sentences appropriate to the domain. The system reads XML documents conforming to the HL7 Clinical Document Architecture (CDA) Standard and enhances them with generated text and references to the used data elements. We conducted a first clinical trial evaluation with medical staff and report on the findings. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,233 |
inproceedings | hinrichs-zastrow-2012-automatic | Automatic Annotation and Manual Evaluation of the Diachronic {G}erman Corpus {T{\"u}Ba-D/DC} | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1033/ | Hinrichs, Erhard and Zastrow, Thomas | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1622--1627 | This paper presents the T{\"u}bingen Baumbank des Deutschen Diachron (T{\"u}Ba-D/DC), a linguistically annotated corpus of selected diachronic materials from the German Gutenberg Project. It was automatically annotated by a suite of NLP tools integrated into WebLicht, the linguistic chaining tool used in CLARIN-D. The annotation quality has been evaluated manually for a subcorpus ranging from Middle High German to Modern High German. The integration of the T{\"u}Ba-D/DC into the CLARIN-D infrastructure includes metadata provision and harvesting as well as sustainable data storage in the T{\"u}bingen CLARIN-D center. The paper further provides an overview of the possibilities of accessing the T{\"u}Ba-D/DC data. Methods for full-text search of the metadata and object data and for annotation-based search of the object data are described in detail. The WebLicht Service Oriented Architecture is used as an integrated environment for annotation based search of the T{\"u}Ba-D/DC. WebLicht thus not only serves as the annotation platform for the T{\"u}Ba-D/DC, but also as a generic user interface for accessing and visualizing it. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,234
inproceedings | joubert-lafourcade-2012-new | A new dynamic approach for lexical networks evaluation | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1034/ | Joubert, Alain and Lafourcade, Mathieu | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3687--3691 | Since September 2007, a large scale lexical network for French is under construction with methods based on popular consensus by means of games (under the JeuxDeMots project). To assess the resource quality, we decided to adopt an approach similar to its construction, that is to say an evaluation by laymen on open class vocabulary with a Tip of the Tongue tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,235 |
inproceedings | seo-etal-2012-grammatical | Grammatical Error Annotation for {K}orean Learners of Spoken {E}nglish | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1035/ | Seo, Hongsuck and Lee, Kyusong and Lee, Gary Geunbae and Kweon, Soo-Ok and Kim, Hae-Ri | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1628--1631 | The goal of our research is to build a grammatical error-tagged corpus for Korean learners of Spoken English dubbed Postech Learner Corpus. We collected raw story-telling speech from Korean university students. Transcription and annotation using the Cambridge Learner Corpus tagset were performed by six Korean annotators fluent in English. For the annotation of the corpus, we developed an annotation tool and a validation tool. After comparing human annotation with machine-recommended error tags, unmatched errors were rechecked by a native annotator. We observed different characteristics between the spoken language corpus built in this study and an existing written language corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,236 |
inproceedings | heinroth-etal-2012-adaptive | Adaptive Speech Understanding for Intuitive Model-based Spoken Dialogues | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1036/ | Heinroth, Tobias and Grotz, Maximilian and Nothdurft, Florian and Minker, Wolfgang | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1281--1288 | In this paper we present three approaches towards adaptive speech understanding. The target system is a model-based Adaptive Spoken Dialogue Manager, the OwlSpeak ASDM. We enhanced this system in order to properly react on non-understandings in real-life situations where intuitive communication is required. OwlSpeak provides a model-based spoken interface to an Intelligent Environment depending on and adapting to the current context. It utilises a set of ontologies used as dialogue models that can be combined dynamically during runtime. Besides the benefits the system showed in practice, real-life evaluations also conveyed some limitations of the model-based approach. Since it is unfeasible to model all variations of the communication between the user and the system beforehand, various situations where the system did not correctly understand the user input have been observed. Thus we present three enhancements towards a more sophisticated use of the ontology-based dialogue models and show how grammars may dynamically be adapted in order to understand intuitive user utterances. The evaluation of our approaches revealed the incorporation of a lexical-semantic knowledgebase into the recognition process to be the most promising approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,237
inproceedings | passarotti-mambrini-2012-first | First Steps towards the Semi-automatic Development of a Wordformation-based Lexicon of {L}atin | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1037/ | Passarotti, Marco and Mambrini, Francesco | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 852--859 | Although lexicography of Latin has a long tradition dating back to ancient grammarians, and almost all Latin grammars devote to wordformation at least one part of the section(s) concerning morphology, none of the today available lexical resources and NLP tools of Latin feature a wordformation-based organization of the Latin lexicon. In this paper, we describe the first steps towards the semi-automatic development of a wordformation-based lexicon of Latin, by detailing several problems occurring while building the lexicon and presenting our solutions. Developing a wordformation-based lexicon of Latin is nowadays of outmost importance, as the last years have seen a large growth of annotated corpora of Latin texts of different eras. While these corpora include lemmatization, morphological tagging and syntactic analysis, none of them features segmentation of the word forms and wordformation relations between the lexemes. This restricts the browsing and the exploitation of the annotated data for linguistic research and NLP tasks, such as information retrieval and heuristics in PoS tagging of unknown words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,238 |
inproceedings | dipper-etal-2012-use | The Use of Parallel and Comparable Data for Analysis of Abstract Anaphora in {G}erman and {E}nglish | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1038/ | Dipper, Stefanie and Seiss, Melanie and Zinsmeister, Heike | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 138--145 | Parallel corpora {\textemdash} original texts aligned with their translations {\textemdash} are a widely used resource in computational linguistics. Translation studies have shown that translated texts often differ systematically from comparable original texts. Translators tend to be faithful to structures of the original texts, resulting in a ``shining through'' of the original language preferences in the translated text. Translators also tend to make their translations most comprehensible with the effect that translated texts can be more explicit than their source texts. Motivated by the need to use a parallel resource for cross-linguistic feature induction in abstract anaphora resolution, this paper investigates properties of English and German texts in the Europarl corpus, taking into account both general features such as sentence length as well as task-dependent features such as the distribution of demonstrative noun phrases. The investigation is based on the entire Europarl corpus as well as on a small subset thereof, which has been manually annotated. The results indicate English translated texts are sufficiently ``authentic'' to be used as training data for anaphora resolution; results for German texts are less conclusive, though. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,239
inproceedings | tatu-moldovan-2012-tool | A Tool for Extracting Conversational Implicatures | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1039/ | Tatu, Marta and Moldovan, Dan | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2708--2715 | Explicitly conveyed knowledge represents only a portion of the information communicated by a text snippet. Automated mechanisms for deriving explicit information exist; however, the implicit assumptions and default inferences that capture our intuitions about a normal interpretation of a communication remain hidden for automated systems, despite the communication participants' ease of grasping the complete meaning of the communication. In this paper, we describe a reasoning framework for the automatic identification of conversational implicatures conveyed by real-world English and Arabic conversations carried via twitter.com. Our system transforms given utterances into deep semantic logical forms. It produces a variety of axioms that identify lexical connections between concepts, define rules of combining semantic relations, capture common-sense world knowledge, and encode Grice`s Conversational Maxims. By exploiting this rich body of knowledge and reasoning within the context of the conversation, our system produces entailments and implicatures conveyed by analyzed utterances with an F-measure of 70.42{\%} for English conversations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,240 |
inproceedings | moldovan-blanco-2012-polaris | {P}olaris: Lymba`s Semantic Parser | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1040/ | Moldovan, Dan and Blanco, Eduardo | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 66--72 | Semantic representation of text is key to text understanding and reasoning. In this paper, we present Polaris, Lymba`s semantic parser. Polaris is a supervised semantic parser that given text extracts semantic relations. It extracts relations from a wide variety of lexico-syntactic patterns, including verb-argument structures, noun compounds and others. The output can be provided in several formats: XML, RDF triples, logic forms or plain text, facilitating interoperability with other tools. Polaris is implemented using eight separate modules. Each module is explained and a detailed example of processing using a sample sentence is provided. Overall results using a benchmark are discussed. Per module performance, including errors made and pruned by each module are also analyzed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,241 |
inproceedings | vincze-2012-light | Light Verb Constructions in the {S}zeged{P}aralell{FX} {E}nglish{--}{H}ungarian Parallel Corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1041/ | Vincze, Veronika | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2381--2388 | In this paper, we describe the first English-Hungarian parallel corpus annotated for light verb constructions, which contains 14,261 sentence alignment units. Annotation principles and statistical data on the corpus are also provided, and English and Hungarian data are contrasted. On the basis of corpus data, a database containing pairs of English-Hungarian light verb constructions has been created as well. The corpus and the database can contribute to the automatic detection of light verb constructions and it is also shown how they can enhance performance in several fields of NLP (e.g. parsing, information extraction/retrieval and machine translation). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,242 |
inproceedings | fohr-mella-2012-coalt | {C}o{ALT}: A Software for Comparing Automatic Labelling Tools | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1042/ | Fohr, Dominique and Mella, Odile | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 325--332 | Speech-text alignment tools are frequently used in speech technology and research. In this paper, we propose a GPL software CoALT (Comparing Automatic Labelling Tools) for comparing two automatic labellers or two speech-text alignment tools, ranking them and displaying statistics about their differences. The main feature of CoALT is that a user can define its own criteria for evaluating and comparing the speech-text alignment tools since the required quality for labelling depends on the targeted application. Beyond ranking, our tool provides useful statistics for each labeller and above all about their differences and can emphasize the drawbacks and advantages of each labeller. We have applied our software for the French and English languages but it can be used for another language by simply defining the list of the phonetic symbols and optionally a set of phonetic rules. In this paper we present the usage of the software for comparing two automatic labellers on the corpus TIMIT. Moreover, as automatic labelling tools are configurable (number of GMMs, phonetic lexicon, acoustic parameterisation), we then present how CoALT allows to determine the best parameters for our automatic labelling tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,243 |
inproceedings | valkova-etal-2012-balanced | Balanced data repository of spontaneous spoken {C}zech | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1043/ | V{\'a}lkov{\'a}, Lucie and Waclawi{\v{c}}ov{\'a}, Martina and K{\v{r}}en, Michal | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3345--3349 | The paper presents data repository that will be used as a source of data for ORAL2013, a new corpus of spontaneous spoken Czech. The corpus is planned to be published in 2013 within the framework of the Czech National Corpus and it will contain both the audio recordings and their transcriptions manually aligned with time stamps. The corpus will be designed as a representation of contemporary spontaneous spoken language used in informal, real-life situations on the area of the whole Czech Republic and thus balanced in the main sociolinguistic categories of speakers. Therefore, the data repository features broad regional coverage with large variety of speakers, as well as precise and uniform processing. The repository is already built, basically balanced and sized 3 million words proper (i.e. tokens not including punctuation). Before the publication, another set of overall consistency checks will be carried out, as well as final selection of the transcriptions to be included into ORAL2013 as the final product. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,244 |
inproceedings | petukhova-bunt-2012-coding | The coding and annotation of multimodal dialogue acts | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1044/ | Petukhova, Volha and Bunt, Harry | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1293--1300 | Recent years have witnessed a growing interest in annotating linguistic data at the semantic level, including the annotation of dialogue corpus data. The annotation scheme developed as international standard for dialogue act annotation ISO 24617-2 is based on the DIT++ scheme (Bunt, 2006; 2009) which combines the multidimensional DIT scheme (Bunt, 1994) with concepts from DAMSL (Allen and Core, 1997) and various other schemes. This scheme is designed in a such way that it can be applied not only to spoken dialogue, as is the case for most of the previously defined dialogue annotation schemes, but also to multimodal dialogue. This paper describes how the ISO 24617-2 annotation scheme can be used, together with the DIT++ method of `multidimensional segmentation', to annotate nonverbal and multimodal dialogue behaviour. We analyse the fundamental distinction between (a) the coding of surface features; (b) form-related semantic classification; and (c) semantic annotation in terms of dialogue acts, supported by experimental studies of (a) and (b). We discuss examples of specification languages for representing the results of each of these activities, show how dialogue act annotations can be attached to XML representations of functional segments of multimodal data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,245
inproceedings | schlaf-remus-2012-learning | Learning Categories and their Instances by Contextual Features | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1045/ | Schlaf, Antje and Remus, Robert | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1235--1239 | We present a 3-step framework that learns categories and their instances from natural language text based on given training examples. Step 1 extracts contexts of training examples as rules describing this category from text, considering part of speech, capitalization and category membership as features. Step 2 selects high quality rules using two consequent filters. The first filter is based on the number of rule occurrences, the second filter takes two non-independent characteristics into account: a rule`s precision and the amount of instances it acquires. Our framework adapts the filter`s threshold values to the respective category and the textual genre by automatically evaluating rule sets resulting from different filter settings and selecting the best performing rule set accordingly. Step 3 then identifies new instances of a category using the filtered rules applied within a previously proposed algorithm. We inspect the rule filters' impact on rule set quality and evaluate our framework by learning first names, last names, professions and cities from a hitherto unexplored textual genre -- search engine result snippets -- and achieve high precision on average. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,246 |
inproceedings | bank-etal-2012-textual | Textual Characteristics for Language Engineering | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1046/ | Bank, Mathias and Remus, Robert and Schierle, Martin | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 515--519 | Language statistics are widely used to characterize and better understand language. In parallel, the amount of text mining and information retrieval methods grew rapidly within the last decades, with many algorithms evaluated on standardized corpora, often drawn from newspapers. However, up to now there were almost no attempts to link the areas of natural language processing and language statistics in order to properly characterize those evaluation corpora, and to help others to pick the most appropriate algorithms for their particular corpus. We believe no results in the field of natural language processing should be published without quantitatively describing the used corpora. Only then the real value of proposed methods can be determined and the transferability to corpora originating from different genres or domains can be estimated. We lay ground for a language engineering process by gathering and defining a set of textual characteristics we consider valuable with respect to building natural language processing systems. We carry out a case study for the analysis of automotive repair orders and explicitly call upon the scientific community to provide feedback and help to establish a good practice of corpus-aware evaluations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,247
inproceedings | bank-schierle-2012-survey | A Survey of Text Mining Architectures and the {UIMA} Standard | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1047/ | Bank, Mathias and Schierle, Martin | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3479--3486 | With the rising amount of digitally available text, the need for efficient processing algorithms is growing fast. Although a lot of libraries are commonly available, their modularity and interchangeability is very limited, therefore forcing a lot of reimplementations and modifications not only in research areas but also in real world application scenarios. In recent years, different NLP frameworks have been proposed to provide an efficient, robust and convenient architecture for information processing tasks. This paper will present an overview over the most common approaches with their advantages and shortcomings, and will discuss them with respect to the first standardized architecture - the Unstructured Information Management Architecture (UIMA). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,248 |
inproceedings | seiss-2012-rule | A Rule-based Morphological Analyzer for Murrinh-Patha | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1048/ | Seiss, Melanie | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 751--758 | Resource development mainly focuses on well-described languages with a large amount of speakers. However, smaller languages may also profit from language resources which can then be used in applications such as electronic dictionaries or computer-assisted language learning materials. The development of resources for such languages may face various challenges. Often, not enough data is available for a successful statistical approach and the methods developed for other languages may not be suitable for this specific language. This paper presents a morphological analyzer for Murrinh-Patha, a polysynthetic language spoken in the Northern Territory of Australia. While nouns in Murrinh-Patha only show minimal inflection, verbs in this language are very complex. The complexity makes it very difficult if not impossible to handle data in Murrinh-Patha with statistical, surface-oriented methods. I therefore present a rule-based morphological analyzer built in XFST and LEXC (Beesley and Karttunen, 2003) which can handle the inflection on nouns and adjectives as well as the complexities of the Murrinh-Patha verb. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,249 |
inproceedings | vossen-etal-2012-dutchsemcor | {D}utch{S}em{C}or: Targeting the ideal sense-tagged corpus | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1049/ | Vossen, Piek and G{\"o}r{\"o}g, Attila and Izquierdo, Rub{\'e}n and van den Bosch, Antal | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 584--589 | Word Sense Disambiguation (WSD) systems require large sense-tagged corpora along with lexical databases to reach satisfactory results. The number of English language resources developed for WSD increased in the past years while most other languages are still under-resourced. The situation is no different for Dutch. In order to overcome this data bottleneck, the DutchSemCor project will deliver a Dutch corpus that is sense-tagged with senses from the Cornetto lexical database. In this paper, we discuss the different conflicting requirements for a sense-tagged corpus and our strategies to fulfill them. We report on a first series of experiments to support our semi-automatic approach to build the corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,250
inproceedings | cartoni-meyer-2012-extracting | Extracting Directional and Comparable Corpora from a Multilingual Corpus for Translation Studies | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1050/ | Cartoni, Bruno and Meyer, Thomas | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2132--2137 | Translation studies rely more and more on corpus data to examine specificities of translated texts, that can be translated from different original languages and compared to original texts. In parallel, more and more multilingual corpora are becoming available for various natural language processing tasks. This paper questions the use of these multilingual corpora in translation studies and shows the methodological steps needed in order to obtain more reliably comparable sub-corpora that consist of original and directly translated text only. Various experiments are presented that show the advantage of directional sub-corpora. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,251 |
inproceedings | sharaf-atwell-2012-qursim | {Q}ur{S}im: A corpus for evaluation of relatedness in short texts | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1051/ | Sharaf, Abdul-Baquee and Atwell, Eric | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2295--2302 | This paper presents a large corpus created from the original Quranic text, where semantically similar or related verses are linked together. This corpus will be a valuable evaluation resource for computational linguists investigating similarity and relatedness in short texts. Furthermore, this dataset can be used for evaluation of paraphrase analysis and machine translation tasks. Our dataset is characterised by: (1) superior quality of relatedness assignment; as we have incorporated relations marked by well-known domain experts, this dataset could thus be considered a gold standard corpus for various evaluation tasks, (2) the size of our dataset; over 7,600 pairs of related verses are collected from scholarly sources with several levels of degree of relatedness. This dataset could be extended to over 13,500 pairs of related verses observing the commutative property of strongly related pairs. This dataset was incorporated into online query pages where users can visualize for a given verse a network of all directly and indirectly related verses. Empirical experiments showed that only 33{\%} of related pairs shared root words, emphasising the need to go beyond common lexical matching methods, and incorporate -in addition- semantic, domain knowledge, and other corpus-based approaches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,252
inproceedings | stymne-etal-2012-eye | Eye Tracking as a Tool for Machine Translation Error Analysis | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1052/ | Stymne, Sara and Danielsson, Henrik and Bremin, Sofia and Hu, Hongzhan and Karlsson, Johanna and Lillkull, Anna Prytz and Wester, Martin | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1121--1126 | We present a preliminary study where we use eye tracking as a complement to machine translation (MT) error analysis, the task of identifying and classifying MT errors. We performed a user study where subjects read short texts translated by three MT systems and one human translation, while we gathered eye tracking data. The subjects were also asked comprehension questions about the text, and were asked to estimate the text quality. We found that there are a longer gaze time and a higher number of fixations on MT errors, than on correct parts. There are also differences in the gaze time of different error types, with word order errors having the longest gaze time. We also found correlations between eye tracking data and human estimates of text quality. Overall our study shows that eye tracking can give complementary information to error analysis, such as aiding in ranking error types for seriousness. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,253 |
inproceedings | jongejan-2012-automatic | Automatic annotation of head velocity and acceleration in Anvil | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1053/ | Jongejan, Bart | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 201--208 | We describe an automatic face tracker plugin for the ANVIL annotation tool. The face tracker produces data for velocity and for acceleration in two dimensions. We compare annotations generated by the face tracking algorithm with independently made manual annotations for head movements. The annotations are a useful supplement to manual annotations and may help human annotators to quickly and reliably determine onset of head movements and to suggest which kind of head movement is taking place. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,254 |
inproceedings | niemi-linden-2012-representing | Representing the Translation Relation in a Bilingual {W}ordnet | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1054/ | Niemi, Jyrki and Lind{\'e}n, Krister | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2439--2446 | This paper describes representing translations in the Finnish wordnet, FinnWordNet (FiWN), and constructing the FiWN database. FiWN was created by translating all the word senses of the Princeton WordNet (PWN) into Finnish and by joining the translations with the semantic and lexical relations of PWN extracted into a relational (database) format. The approach naturally resulted in a translation relation between PWN and FiWN. Unlike many other multilingual wordnets, the translation relation in FiWN is not primarily on the synset level, but on the level of an individual word sense, which allows more precise translation correspondences. This can easily be projected into a synset-level translation relation, used for linking with other wordnets, for example, via Core WordNet. Synset-level translations are also used as a default in the absence of word-sense translations. The FiWN data in the relational database can be converted to other formats. In the PWN database format, translations are attached to source-language words, allowing the implementation of a Web search interface also working as a bilingual dictionary. Another representation encodes the translation relation as a finite-state transducer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,255 |
inproceedings | martindale-2012-statistical | Can Statistical Post-Editing with a Small Parallel Corpus Save a Weak {MT} Engine? | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1055/ | Martindale, Marianna J. | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2138--2142 | Statistical post-editing has been shown in several studies to increase BLEU score for rule-based MT systems. However, previous studies have relied solely on BLEU and have not conducted further study to determine whether those gains indicated an increase in quality or in score alone. In this work we conduct a human evaluation of statistical post-edited output from a weak rule-based MT system, comparing the results with the output of the original rule-based system and a phrase-based statistical MT system trained on the same data. We show that for this weak rule-based system, despite significant BLEU score increases, human evaluators prefer the output of the original system. While this is not a generally conclusive condemnation of statistical post-editing, this result does cast doubt on the efficacy of statistical post-editing for weak MT systems and on the reliability of BLEU score for comparison between weak rule-based and hybrid systems built from them. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,256 |
inproceedings | eryigit-2012-impact | The Impact of Automatic Morphological Analysis {\&} Disambiguation on Dependency Parsing of {T}urkish | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1056/ | Eryi{\u{g}}it, G{\"u}l{\c{s}}en | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1960--1965 | The studies on dependency parsing of Turkish so far gave their results on the Turkish Dependency Treebank. This treebank consists of sentences where gold standard part-of-speech tags are manually assigned to each word and the words forming multi word expressions are also manually determined and combined into single units. For the first time, we investigate the results of parsing Turkish sentences from scratch and observe the accuracy drop at the end of processing raw data. We test one state-of-the art morphological analyzer together with two different morphological disambiguators. We both show separately the accuracy drop due to the automatic morphological processing and to the lack of multi word unit extraction. With this purpose, we use and present a new version of the Turkish Treebank where we detached the multi word expressions (MWEs) into multiple tokens and manually annotated the missing part-of-speech tags of these new tokens. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,257
inproceedings | gavrilov-etal-2012-detecting | Detecting Reduplication in Videos of {A}merican {S}ign {L}anguage | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1057/ | Gavrilov, Zoya and Sclaroff, Stan and Neidle, Carol and Dickinson, Sven | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3767--3773 | A framework is proposed for the detection of reduplication in digital videos of American Sign Language (ASL). In ASL, reduplication is used for a variety of linguistic purposes, including overt marking of plurality on nouns, aspectual inflection on verbs, and nominalization of verbal forms. Reduplication involves the repetition, often partial, of the articulation of a sign. In this paper, the apriori algorithm for mining frequent patterns in data streams is adapted for finding reduplication in videos of ASL. The proposed algorithm can account for varying weights on items in the apriori algorithm`s input sequence. In addition, the apriori algorithm is extended to allow for inexact matching of similar hand motion subsequences and to provide robustness to noise. The formulation is evaluated on 105 lexical signs produced by two native signers. To demonstrate the formulation, overall hand motion direction and magnitude are considered; however, the formulation should be amenable to combining these features with others, such as hand shape, orientation, and place of articulation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,258 |
inproceedings | rosen-vavrin-2012-building | Building a multilingual parallel corpus for human users | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1058/ | Rosen, Alexandr and Vav{\v{r}}{\'i}n, Martin | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2447--2452 | We present the architecture and the current state of InterCorp, a multilingual parallel corpus centered around Czech, intended primarily for human users and consisting of written texts with a focus on fiction. Following an outline of its recent development and a comparison with some other multilingual parallel corpora we give an overview of the data collection procedure that covers text selection criteria, data format, conversion, alignment, lemmatization and tagging. Finally, we show a sample query using the web-based search interface and discuss challenges and prospects of the project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,259 |
inproceedings | roberts-etal-2012-empatweet | {E}mpa{T}weet: Annotating and Detecting Emotions on {T}witter | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1059/ | Roberts, Kirk and Roach, Michael A. and Johnson, Joseph and Guthrie, Josh and Harabagiu, Sanda M. | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3806--3813 | The rise of micro-blogging in recent years has resulted in significant access to emotion-laden text. Unlike emotion expressed in other textual sources (e.g., blogs, quotes in newswire, email, product reviews, or even clinical text), micro-blogs differ by (1) placing a strict limit on length, resulting radically in new forms of emotional expression, and (2) encouraging users to express their daily thoughts in real-time, often resulting in far more emotion statements than might normally occur. In this paper, we introduce a corpus collected from Twitter with annotated micro-blog posts (or tweets) annotated at the tweet-level with seven emotions: ANGER, DISGUST, FEAR, JOY, LOVE, SADNESS, and SURPRISE. We analyze how emotions are distributed in the data we annotated and compare it to the distributions in other emotion-annotated corpora. We also used the annotated corpus to train a classifier that automatically discovers the emotions in tweets. In addition, we present an analysis of the linguistic style used for expressing emotions our corpus. We hope that these observations will lead to the design of novel emotion detection techniques that account for linguistic style and psycholinguistic theories. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,260
inproceedings | feilmayr-etal-2012-evaliex | {EVALIEX} {---} A Proposal for an Extended Evaluation Methodology for Information Extraction Systems | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1060/ | Feilmayr, Christina and Pr{\"o}ll, Birgit and Linsmayr, Elisabeth | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 2303--2310 | Assessing the correctness of extracted data requires performance evaluation, which is accomplished by calculating quality metrics. The evaluation process must cope with the challenges posed by information extraction and natural language processing. In the previous work most of the existing methodologies have been shown that they support only traditional scoring metrics. Our research work addresses requirements, which arose during the development of three productive rule-based information extraction systems. The main contribution is twofold: First, we developed a proposal for an evaluation methodology that provides the flexibility and effectiveness needed for comprehensive performance measurement. The proposal extends state-of-the-art scoring metrics by measuring string and semantic similarities and by parameterization of metric scoring, and thus simulating with human judgment. Second, we implemented an IE evaluation tool named EVALIEX, which integrates these measurement concepts and provides an efficient user interface that supports evaluation control and the visualization of IE results. To guarantee domain independence, the tool additionally provides a Generic Mapper for XML Instances (GeMap) that maps domain-dependent XML files containing IE results to generic ones. Compared to other tools, it provides more flexible testing and better visualization of extraction results for the comparison of different (versions of) information extraction systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,261
inproceedings | ivanova-eriksen-2012-bibikit | {B}i{B}i{K}it - A Bilingual Bimodal Reading and Writing Tool for Sign Language Users | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1061/ | Ivanova, Nedelina and Eriksen, Olle | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 3774--3778 | Sign language is used by many people who were born deaf or who became deaf early in life use as their first and/or preferred language. There is no writing system for sign languages; texts are signed on video. As a consequence, texts in sign language are hard to navigate, search and annotate. The BiBiKit project is an easy to use authoring kit which is being developed and enables students, teachers, and virtually everyone to write and read bilingual bimodal texts and thereby creating electronic productions, which link text to sign language video. The main purpose of the project is to develop software that enables the user to link text to video, at the word, phrase and/or sentence level. The software will be developed for sign language and vice versa, but can be used to easily link text to any video: e.g. to add annotations, captions, or navigation points. The three guiding principles are: Software that is 1) stable, 2) easy to use, and 3) foolproof. A web based platform will be developed so the software is available whenever and wherever. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,262 |
inproceedings | leuski-etal-2012-blademistress | The {B}lade{M}istress Corpus: From Talk to Action in Virtual Worlds | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1062/ | Leuski, Anton and Eickhoff, Carsten and Ganis, James and Lavrenko, Victor | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 4060--4067 | Virtual Worlds (VW) are online environments where people come together to interact and perform various tasks. The chat transcripts of interactions in VWs pose unique opportunities and challenges for language analysis: Firstly, the language of the transcripts is very brief, informal, and task-oriented. Secondly, in addition to chat, a VW system records users' in-world activities. Such a record could allow us to analyze how the language of interactions is linked to the users actions. For example, we can make the language analysis of the users dialogues more effective by taking into account the context of the corresponding action or we can predict or detect users actions by analyzing the content of conversations. Thirdly, a joined analysis of both the language and the actions would empower us to build effective modes of the users and their behavior. In this paper we present a corpus constructed from logs from an online multiplayer game BladeMistress. We describe the original logs, annotations that we created on the data, and summarize some of the experiments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,263 |
inproceedings | arnulphy-etal-2012-event | Event Nominals: Annotation Guidelines and a Manually Annotated Corpus in {F}rench | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1063/ | Arnulphy, B{\'e}atrice and Tannier, Xavier and Vilnat, Anne | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1505--1510 | Within the general purpose of information extraction, detection of event descriptions is an important clue. A word refering to an event is more powerful than a single word, because it implies a location, a time, protagonists (persons, organizations...). However, if verbal designations of events are well studied and easier to detect than nominal ones, nominal designations do not claim as much definition effort and resources. In this work, we focus on nominals desribing events. As our application domain is information extraction, we follow a named entity approach to describe and annotate events. In this paper, we present a typology and annotation guidelines for event nominals annotation. We applied them to French newswire articles and produced an annotated corpus. We present observations about the designations used in our manually annotated corpus and the behavior of their triggers. We provide statistics concerning word ambiguity and context of use of event nominals, as well as machine learning experiments showing the difficulty of using lexicons for extracting events. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,264
inproceedings | caelen-haumont-sam-2012-comparison | Comparison between two models of language for the automatic phonetic labeling of an undocumented language of the {S}outh-{A}sia: the case of {M}o {P}iu | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1064/ | Caelen-Haumont, Genevi{\`e}ve and Sam, Sethserey | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 956--962 | This paper aims at assessing the automatic labeling of an undocumented, unknown, unwritten and under-resourced language (Mo Piu) of the North Vietnam, by an expert phonetician. In the previous stage of the work, 7 sets of languages were chosen among Mandarin, Vietnamese, Khmer, English, French, to compete in order to select the best models of languages to be used for the phonetic labeling of Mo Piu isolated words. Two sets of languages (1{\textdegree} Mandarin + French, 2{\textdegree} Vietnamese + French) which got the best scores showed an additional distribution of their results. Our aim is now to study this distribution more precisely and more extensively, in order to statistically select the best models of languages and among them, the best sets of phonetic units which minimize the wrong phonetic automatic labeling. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,265
inproceedings | bosco-etal-2012-parallel | The Parallel-{TUT}: a multilingual and multiformat treebank | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1065/ | Bosco, Cristina and Sanguinetti, Manuela and Lesmo, Leonardo | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 1932--1938 | The paper introduces an ongoing project for the development of a parallel treebank for Italian, English and French, i.e. Parallel--TUT, or simply ParTUT. For the development of this resource, both the dependency and constituency-based formats of the Italian Turin University Treebank (TUT) have been applied to a preliminary dataset, which includes the whole text of the Universal Declaration of Human Rights, and sentences from the JRC-Acquis Multilingual Parallel Corpus and the Creative Commons licence. The focus of the project is mainly on the quality of the annotation and the investigation of some issues related to the alignment of data that can be allowed by the TUT formats, also taking into account the availability of conversion tools for display data in standard ways, such as Tiger--XML and CoNLL formats. It is, in fact, our belief that increasing the portability of our treebank could give us the opportunity to access resources and tools provided by other research groups, especially at this stage of the project, where no particular tool -- compatible with the TUT format -- is available in order to tackle the alignment problems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,266 |
inproceedings | scharl-etal-2012-leveraging | Leveraging the Wisdom of the Crowds for the Acquisition of Multilingual Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Declerck, Thierry and Do{\u{g}}an, Mehmet U{\u{g}}ur and Maegaard, Bente and Mariani, Joseph and Moreno, Asuncion and Odijk, Jan and Piperidis, Stelios | may | 2012 | Istanbul, Turkey | European Language Resources Association (ELRA) | https://aclanthology.org/L12-1066/ | Scharl, Arno and Sabou, Marta and Gindl, Stefan and Rafelsberger, Walter and Weichselbraun, Albert | Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}`12) | 379--383 | Games with a purpose are an increasingly popular mechanism for leveraging the wisdom of the crowds to address tasks which are trivial for humans but still not solvable by computer algorithms in a satisfying manner. As a novel mechanism for structuring human-computer interactions, a key challenge when creating them is motivating users to participate while generating useful and unbiased results. This paper focuses on important design choices and success factors of effective games with a purpose. Our findings are based on lessons learned while developing and deploying Sentiment Quiz, a crowdsourcing application for creating sentiment lexicons (an essential component of most sentiment detection algorithms). We describe the goals and structure of the game, the underlying application framework, the sentiment lexicons gathered through crowdsourcing, as well as a novel approach to automatically extend the lexicons by means of a bootstrapping process. Such an automated extension further increases the efficiency of the acquisition process by limiting the number of terms that need to be gathered from the game participants. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 73,267 |