Schema of the dataset (one row per bibliography entry; ⌀ marks a nullable field):

- entry_type: string (4 distinct values)
- citation_key: string (10–110 chars)
- title: string (6–276 chars) ⌀
- editor: string (723 distinct values)
- month: string (69 distinct values)
- year: date string (1963-01-01 to 2022-01-01)
- address: string (202 distinct values)
- publisher: string (41 distinct values)
- url: string (34–62 chars)
- author: string (6–2.07k chars) ⌀
- booktitle: string (861 distinct values)
- pages: string (1–12 chars) ⌀
- abstract: string (302–2.4k chars)
- journal: string (5 distinct values)
- volume: string (24 distinct values)
- doi: string (20–39 chars) ⌀
- n: string (3 distinct values)
- wer: string (1 distinct value)
- uas: null
- language: string (3 distinct values)
- isbn: string (34 distinct values)
- recall: null
- number: string (8 distinct values)
- a: null
- b: null
- c: null
- k: null
- f1: string (4 distinct values)
- r: string (2 distinct values)
- mci: string (1 distinct value)
- p: string (2 distinct values)
- sd: string (1 distinct value)
- female: string (0 distinct values)
- m: string (0 distinct values)
- food: string (1 distinct value)
- f: string (1 distinct value)
- note: string (20 distinct values)
- __index_level_0__: int64 (range 22k–106k)

entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | melby-etal-2012-reliably | Reliably Assessing the Quality of Post-edited Translation Based on Formalized Structured Translation Specifications | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.4/ | Melby, Alan K. and Housley, Jason and Fields, Paul J. and Tuioti, Emily | Workshop on Post-Editing Technology and Practice | null | Post-editing of machine translation has become more common in recent years. This has created the need for a formal method of assessing the performance of post-editors in terms of whether they are able to produce post-edited target texts that follow project specifications. This paper proposes the use of formalized structured translation specifications (FSTS) as a basis for post-editor assessment. To determine if potential evaluators are able to reliably assess the quality of post-edited translations, an experiment used texts representing the work of five fictional post-editors. Two software applications were developed to facilitate the assessment: the Ruqual Specifications Writer, which aids in establishing post-editing project specifications; and Ruqual Rubric Viewer, which provides a graphical user interface for constructing a rubric in a machine-readable format. Seventeen non-experts rated the translation quality of each simulated post-edited text. Intraclass correlation analysis showed evidence that the evaluators were highly reliable in evaluating the performance of the post-editors. Thus, we assert that using FSTS specifications applied through the Ruqual software tools provides a useful basis for evaluating the quality of post-edited texts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,010 |
inproceedings | mundt-etal-2012-learning | Learning to Automatically Post-Edit Dropped Words in {MT} | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.5/ | Mundt, Jacob and Parton, Kristen and McKeown, Kathleen | Workshop on Post-Editing Technology and Practice | null | Automatic post-editors (APEs) can improve adequacy of MT output by detecting and reinserting dropped content words, but the location where these words are inserted is critical. In this paper, we describe a probabilistic approach for learning reinsertion rules for specific languages and MT systems, as well as a method for synthesizing training data from reference translations. We test the insertion logic on MT systems for Chinese to English and Arabic to English. Our adaptive APE is able to insert within 3 words of the best location 73{\%} of the time (32{\%} in the exact location) in Arabic-English MT output, and 67{\%} of the time in Chinese-English output (30{\%} in the exact location), and delivers improved performance on automated adequacy metrics over a previous rule-based approach to insertion. We consider how particular aspects of the insertion problem make it particularly amenable to machine learning solutions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,011 |
inproceedings | penkale-way-2012-smartmate | {S}mart{MATE}: An Online End-To-End {MT} Post-Editing Framework | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.6/ | Penkale, Sergio and Way, Andy | Workshop on Post-Editing Technology and Practice | null | It is a well-known fact that the amount of content which is available to be translated and localized far outnumbers the current amount of translation resources. Automation in general and Machine Translation (MT) in particular are one of the key technologies which can help improve this situation. However, a tool that integrates all of the components needed for the localization process is still missing, and MT is still out of reach for most localisation professionals. In this paper we present an online translation environment which empowers users with MT by enabling engines to be created from their data, without a need for technical knowledge or special hardware requirements and at low cost. Documents in a variety of formats can then be post-edited after being processed with their Translation Memories, MT engines and glossaries. We give an overview of the tool and present a case study of a project for a large games company, showing the applicability of our tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,012 |
inproceedings | poulis-kolovratnik-2012-post | To post-edit or not to post-edit? Estimating the benefits of {MT} post-editing for a {E}uropean organization | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.7/ | Poulis, Alexandros and Kolovratnik, David | Workshop on Post-Editing Technology and Practice | null | In the last few years the European Parliament has witnessed a significant increase in translation demand. Although Translation Memory (TM) tools, terminology databases and bilingual concordancers have provided significant leverage in terms of quality and productivity the European Parliament is in need for advanced language technology to keep facing successfully the challenge of multilingualism. This paper describes an ongoing large-scale machine translation post-editing evaluation campaign the purpose of which is to estimate the business benefits from the use of machine translation for the European Parliament. This paper focuses mainly on the design, the methodology and the tools used by the evaluators but it also presents some preliminary results for the following language pairs: Polish-English, Danish-English, Lithuanian-English, English-German and English-French. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,013 |
inproceedings | tatsumi-etal-2012-good | How Good Is Crowd Post-Editing? Its Potential and Limitations | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.8/ | Tatsumi, Midori and Aikawa, Takako and Yamamoto, Kentaro and Isahara, Hitoshi | Workshop on Post-Editing Technology and Practice | null | This paper is a partial report of a research effort on evaluating the effect of crowd-sourced post-editing. We first discuss the emerging trend of crowd-sourced post-editing of machine translation output, along with its benefits and drawbacks. Second, we describe the pilot study we have conducted on a platform that facilitates crowd-sourced post-editing. Finally, we provide our plans for further studies to have more insight on how effective crowd-sourced post-editing is. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,014 |
inproceedings | valotkaite-asadullah-2012-error | Error Detection for Post-editing Rule-based Machine Translation | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.9/ | Valotkaite, Justina and Asadullah, Munshi | Workshop on Post-Editing Technology and Practice | null | The increasing role of post-editing as a way of improving machine translation output and a faster alternative to translating from scratch has lately attracted researchers' attention and various attempts have been proposed to facilitate the task. We experiment with a method to provide support for the post-editing task through error detection. A deep linguistic error analysis was done of a sample of English sentences translated from Portuguese by two Rule-based Machine Translation systems. We designed a set of rules to deal with various systematic translation errors and implemented a subset of these rules covering the errors of tense and number. The evaluation of these rules showed a satisfactory performance. In addition, we performed an experiment with human translators which confirmed that highlighting translation errors during the post-editing can help the translators perform the post-editing task up to 12 seconds per error faster and improve their efficiency by minimizing the number of missed errors. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,015 |
inproceedings | zhechev-2012-machine | Machine Translation Infrastructure and Post-editing Performance at {A}utodesk | O'Brien, Sharon and Simard, Michel and Specia, Lucia | oct # " 28" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-wptp.10/ | Zhechev, Ventsislav | Workshop on Post-Editing Technology and Practice | null | In this paper, we present the Moses-based infrastructure we developed and use as a productivity tool for the localisation of software documentation and user interface (UI) strings at Autodesk into twelve languages. We describe the adjustments we have made to the machine translation (MT) training workflow to suit our needs and environment, our server environment and the MT Info Service that handles all translation requests and allows the integration of MT in our various localisation systems. We also present the results of our latest post-editing productivity test, where we measured the productivity gain for translators post-editing MT output versus translating from scratch. Our analysis of the data indicates the presence of a strong correlation between the amount of editing applied to the raw MT output by the translators and their productivity gain. In addition, within the last calendar year our system has processed over thirteen million tokens of documentation content of which we have a record of the performed post-editing. This has allowed us to evaluate the performance of our MT engines for the different languages across our product portfolio, as well as spotlight potential issues with MT in the localisation process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,016 |
inproceedings | hajlaoui-popescu-belis-2012-translating | Translating {E}nglish Discourse Connectives into {A}rabic: a Corpus-based Analysis and an Evaluation Metric | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.1/ | Hajlaoui, Najeh and Popescu-Belis, Andrei | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 1--8 | Discourse connectives can often signal multiple discourse relations, depending on their context. The automatic identification of the Arabic translations of seven English discourse connectives shows how these connectives are differently translated depending on their actual senses. Automatic labelling of English source connectives can help a machine translation system to translate them more correctly. The corpus-based analysis of Arabic translations also enables the definition of a connective-specific evaluation metric for machine translation, which is here validated by human judges on sample English/Arabic translation data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,018 |
inproceedings | lancioni-boella-2012-idiomatic | Idiomatic {MWE}s and Machine Translation A Retrieval and Representation Model: the {A}ra{MWE} Project | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.2/ | Lancioni, Giuliano and Boella, Marco | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 9--16 | A preliminary implementation of AraMWE, a hybrid project that includes a statistical component and a CCG symbolic component to extract and treat MWEs and idioms in Arabic and English parallel texts is presented, together with a general sketch of the system, a thorough description of the statistical component and a proof of concept of the CCG component. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,019 |
inproceedings | jabbari-etal-2012-developing | Developing an Open-domain {E}nglish-{F}arsi Translation System Using {AFEC}: Amirkabir Bilingual {F}arsi-{E}nglish Corpus | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.3/ | Jabbari, Fattaneh and Bakshaei, Somayeh and Mohammadzadeh Ziabary, Seyyed Mohammad and Khadivi, Shahram | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 17--23 | The translation quality of Statistical Machine Translation (SMT) depends on the amount of input data especially for morphologically rich languages. Farsi (Persian) language is such a language which has few NLP resources. It also suffers from the non-standard written characters which causes a large variety in the written form of each character. Moreover, the structural difference between Farsi and English results in long range reorderings which cannot be modeled by common SMT reordering models. Here, we try to improve the existing English-Farsi SMT system focusing on these challenges first by expanding our bilingual limited-domain corpus to an open-domain one. Then, to alleviate the character variations, a new text normalization algorithm is offered. Finally, some hand-crafted rules are applied to reduce the structural differences. Using the new corpus, the experimental results showed 8.82{\%} BLEU improvement by applying new normalization method and 9.1{\%} BLEU when rules are used. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,020 |
inproceedings | shihadeh-neumann-2012-arne | {ARNE} - A tool for Namend Entity Recognition from {A}rabic Text | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.4/ | Shihadeh, Carolin and Neumann, G{\"u}nter | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 24--31 | In this paper, we study the problem of finding named entities in the Arabic text. For this task we present the development of our pipeline software for Arabic named entity recognition (ARNE), which includes tokenization, morphological analysis, Buckwalter transliteration, part of speech tagging and named entity recognition of person, location and organisation named entities. In our first attempt to recognize named entities, we have used a simple, fast and language independent gazetteer lookup approach. In our second attempt, we have used the morphological analysis provided by our pipeline to remove affixes and hence observed an improvement in our performance. The pipeline presented in this paper can be used in the future as a basis for a named entity recognition system that recognizes named entities not only using gazetteers, but also making use of morphological information and part of speech tagging. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,021 |
inproceedings | kay-rineer-2012-approaches | Approaches to {A}rabic Name Transliteration and Matching in the {D}ata{F}lux Quality Knowledge Base | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.5/ | Kay, Brant N. and Rineer, Brian C. | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 32--37 | This paper discusses a hybrid approach to transliterating and matching Arabic names, as implemented in the DataFlux Quality Knowledge Base (QKB), a knowledge base used by data management software systems from SAS Institute, Inc. The approach to transliteration relies on a lexicon of names with their corresponding transliterations as its primary method, and falls back on PERL regular expression rules to transliterate any names that do not exist in the lexicon. Transliteration in the QKB is bi-directional; the technology transliterates Arabic names written in the Arabic script to the Latin script, and transliterates Arabic names written in the Latin script to Arabic. Arabic name matching takes a similar approach and relies on a lexicon of Arabic names and their corresponding transliterations, falling back on phonetic transliteration rules to transliterate names into the Latin script. All names are ultimately rendered in the Latin script before matching takes place. Thus, the technology is capable of matching names across the Arabic and Latin scripts, as well as within the Arabic script or within the Latin script. The goal of the authors of this paper was to build a software system capable of transliterating and matching Arabic names across scripts with an accuracy deemed to be acceptable according to internal software quality standards. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,022 |
inproceedings | saadane-etal-2012-using | Using {A}rabic Transliteration to Improve Word Alignment from {F}rench-{A}rabic Parallel Corpora | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.6/ | Saadane, Houda and Benterki, Ouafa and Semmar, Nasredine and Fluhr, Christian | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 38--46 | In this paper, we focus on the use of Arabic transliteration to improve the results of a linguistics-based word alignment approach from parallel text corpora. This approach uses, on the one hand, a bilingual lexicon, named entities, cognates and grammatical tags to align single words, and on the other hand, syntactic dependency relations to align compound words. We have evaluated the word aligner integrating Arabic transliteration using two methods: A manual evaluation of the alignment quality and an evaluation of the impact of this alignment on the translation quality by using the Moses statistical machine translation system. The obtained results show that Arabic transliteration improves the quality of both alignment and translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,023 |
inproceedings | shoukry-rafea-2012-preprocessing | Preprocessing {E}gyptian Dialect Tweets for Sentiment Mining | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.7/ | Shoukry, Amira and Rafea, Ahmed | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 47--56 | Research done on Arabic sentiment analysis is considered very limited almost in its early steps compared to other languages like English whether at document-level or sentence-level. In this paper, we test the effect of preprocessing (normalization, stemming, and stop words removal) on the performance of an Arabic sentiment analysis system using Arabic tweets from twitter. The sentiment (positive or negative) of the crawled tweets is analyzed to interpret the attitude of the public with regards to topic of interest. Using Twitter as the main source of data reflects the importance of the system for the Middle East region, which mostly speaks Arabic. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,024 |
inproceedings | abuzeina-etal-2012-rescoring | Rescoring N-Best Hypotheses for {A}rabic Speech Recognition: A Syntax-Mining Approach | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.8/ | AbuZeina, Dia and Elshafei, Moustafa and Al-Muhtaseb, Husni and Al-Khatib, Wasfi | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 57--64 | Improving speech recognition accuracy through linguistic knowledge is a major research area in automatic speech recognition systems. In this paper, we present a syntax-mining approach to rescore N-Best hypotheses for Arabic speech recognition systems. The method depends on a machine learning tool (WEKA-3-6-5) to extract the N-Best syntactic rules of the Baseline tagged transcription corpus which was tagged using Stanford Arabic tagger. The proposed method was tested using the Baseline system that contains a pronunciation dictionary of 17,236 vocabularies (28,682 words and variants) from 7.57 hours pronunciation corpus of modern standard Arabic (MSA) broadcast news. Using Carnegie Mellon University (CMU) PocketSphinx speech recognition engine, the Baseline system achieved a Word Error Rate (WER) of 16.04{\%} on a test set of 400 utterances (about 0.57 hours) containing 3585 diacritized words. Even though there were enhancements in some tested files, we found that this method does not lead to significant enhancement (for Arabic). Based on this research work, we conclude this paper by introducing a new design for language models to account for longer-distance constraints, instead of a few preceding words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,025 |
inproceedings | mohamed-2012-morphological | Morphological Segmentation and Part of Speech Tagging for Religious {A}rabic | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.9/ | Mohamed, Emad | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 65--71 | We annotate a small corpus of religious Arabic with morphological segmentation boundaries and fine-grained segment-based part of speech tags. Experiments on both segmentation and POS tagging show that the religious corpus-trained segmenter and POS tagger outperform the Arabic Treebank-trained ones although the latter is 21 times as big, which shows the need for building religious Arabic linguistic resources. The small corpus we annotate improves segmentation accuracy by 5{\%} absolute (from 90.84{\%} to 95.70{\%}), and POS tagging by 9{\%} absolute (from 82.22{\%} to 91.26{\%}) when using gold standard segmentation, and by 9.6{\%} absolute (from 78.62{\%} to 88.22{\%}) when using automatic segmentation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,026 |
inproceedings | sellami-etal-2012-exploiting | Exploiting {W}ikipedia as a Knowledge Base for the Extraction of Linguistic Resources: Application on {A}rabic-{F}rench Comparable Corpora and Bilingual Lexicons | Farghaly, Ali and Oroumchian, Farhad | nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-caas14.10/ | Sellami, Rahma and Sadat, Fatiha and Hadrich Belguith, Lamia | Fourth Workshop on Computational Approaches to Arabic-Script-based Languages | 72--79 | We present simple and effective methods for extracting comparable corpora and bilingual lexicons from Wikipedia. We shall exploit the large scale and the structure of Wikipedia articles to extract two resources that will be very useful for natural language applications. We build a comparable corpus from Wikipedia using categories as topic restrictions and we extract bilingual lexicons from inter-language links aligned with statistical method or a combined statistical and linguistic method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,027 |
inproceedings | formiga-etal-2012-improving | Improving {E}nglish to {S}panish Out-of-Domain Translations by Morphology Generalization and Generation | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.1/ | Formiga, Llu{\'i}s and Hern{\'a}ndez, Adolfo and Mari{\~n}o, Jos{\'e} B. and Monte, Enric | Workshop on Monolingual Machine Translation | null | This paper presents a detailed study of a method for morphology generalization and generation to address out-of-domain translations in English-to-Spanish phrase-based MT. The paper studies whether the morphological richness of the target language causes poor quality translation when translating out-of-domain. In detail, this approach first translates into Spanish simplified forms and then predicts the final inflected forms through a morphology generation step based on shallow and deep-projected linguistic information available from both the source and target-language sentences. Obtained results highlight the importance of generalization, and therefore generation, for dealing with out-of-domain data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,029 |
inproceedings | jiang-etal-2012-monolingual | Monolingual Data Optimisation for Bootstrapping {SMT} Engines | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.2/ | Jiang, Jie and Way, Andy and Ng, Nelson and Haque, Rejwanul and Dillinger, Mike and Lu, Jun | Workshop on Monolingual Machine Translation | null | Content localisation via machine translation (MT) is a sine qua non, especially for international online business. While most applications utilise rule-based solutions due to the lack of suitable in-domain parallel corpora for statistical MT (SMT) training, in this paper we investigate the possibility of applying SMT where huge amounts of monolingual content only are available. We describe a case study where an analysis of a very large amount of monolingual online trading data from eBay is conducted by ALS with a view to reducing this corpus to the most representative sample in order to ensure the widest possible coverage of the total data set. Furthermore, minimal yet optimal sets of sentences/words/terms are selected for generation of initial translation units for future SMT system-building. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,030 |
inproceedings | mehay-white-2012-shallow | Shallow and Deep Paraphrasing for Improved Machine Translation Parameter Optimization | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.3/ | Mehay, Dennis N. and White, Michael | Workshop on Monolingual Machine Translation | null | String comparison methods such as BLEU (Papineni et al., 2002) are the de facto standard in MT evaluation (MTE) and in MT system parameter tuning (Och, 2003). It is difficult for these metrics to recognize legitimate lexical and grammatical paraphrases, which is important for MT system tuning (Madnani, 2010). We present two methods to address this: a shallow lexical substitution technique and a grammar-driven paraphrasing technique. Grammatically precise paraphrasing is novel in the context of MTE, and demonstrating its usefulness is a key contribution of this paper. We use these techniques to paraphrase a single reference, which, when used for parameter tuning, leads to superior translation performance over baselines that use only human-authored references. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,031 |
inproceedings | murakami-etal-2012-two | Two stage Machine Translation System using Pattern-based {MT} and Phrase-based {SMT} | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.4/ | Murakami, Jin{'}ichi and Nishimura, Takuya and Tokuhisa, Masoto | Workshop on Monolingual Machine Translation | null | We have developed a two-stage machine translation (MT) system. The first stage consists of an automatically created pattern-based machine translation system (PBMT), and the second stage consists of a standard phrase-based statistical machine translation (SMT) system. We studied for the Japanese-English simple sentence task. First, we obtained English sentences from Japanese sentences using an automatically created Japanese-English pattern-based machine translation. We call the English sentences obtained in this way as {\textquotedblleft}English{\textquotedblright}. Second, we applied a standard SMT (Moses) to the results. This means that we translated the {\textquotedblleft}English{\textquotedblright} sentences into English by SMT. We also conducted ABX tests (Clark, 1982) to compare the outputs by the standard SMT (Moses) with those by the proposed system for 100 sentences. The experimental results indicated that 30 sentences output by the proposed system were evaluated as being better than those outputs by the standard SMT system, whereas 9 sentences output by the standard SMT system were thought to be better than those outputs by the proposed system. This means that our proposed system functioned effectively in the Japanese-English simple sentence task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,032 |
inproceedings | larasati-2012-improving | Improving Word Alignment by Exploiting Adapted Word Similarity | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.5/ | Larasati, Septina Dian | Workshop on Monolingual Machine Translation | null | This paper presents a method to improve a word alignment model in a phrase-based Statistical Machine Translation system for a low-resourced language using a string similarity approach. Our method captures similar words that can be seen as semi-monolingual across languages, such as numbers, named entities, and adapted/loan words. We use several string similarity metrics to measure the monolinguality of the words, such as Longest Common Subsequence Ratio (LCSR), Minimum Edit Distance Ratio (MEDR), and we also use a modified BLEU Score (modBLEU). Our approach is to add intersecting alignment points for word pairs that are orthographically similar, before applying a word alignment heuristic, to generate a better word alignment. We demonstrate this approach on Indonesian-to-English translation task, where the languages share many similar words that are poorly aligned given a limited training data. This approach gives a statistically significant improvement by up to 0.66 in terms of BLEU score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,033 |
inproceedings | singh-2012-addressing | Addressing some Issues of Data Sparsity towards Improving {E}nglish-{M}anipuri {SMT} using Morphological Information | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.6/ | Singh, Thoudam Doren | Workshop on Monolingual Machine Translation | null | The performance of an SMT system heavily depends on the availability of large parallel corpora. Unavailability of these resources in the required amount for many language pairs is a challenging issue. The required size of the resource involving morphologically rich and highly agglutinative language is essentially much more for the SMT systems. This paper investigates on some of the issues on enriching the resource for this kind of languages. Handling of inflectional and derivational morphemes of the morphologically rich target language plays important role in the enrichment process. Mapping from the source to the target side is carried out for the English-Manipuri SMT task using factored model. The SMT system developed shows improvement in the performance both in terms of the automatic scoring and subjective evaluation over the baseline system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,034 |
inproceedings | gottesman-2012-statistical | Statistical Machine Translation for Depassivizing {G}erman Part-of-speech Sequences | Okita, Tsuyoshi and Sokolov, Artem and Watanabe, Taro | oct # " 28-" # nov # " 1" | 2012 | San Diego, California, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2012.amta-monomt.7/ | Gottesman, Benjamin | Workshop on Monolingual Machine Translation | null | We aim to use statistical machine translation technology to correct grammar errors and style issues in monolingual text. Here, as a feasibility test, we focus on depassivization in German and we abstract from surface forms to parts of speech. Our results are not yet satisfactory but yield useful insights into directions for improvement. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 75,035 |
inproceedings | wu-2011-syntactic | Syntactic {SMT} and Semantic {SMT} | null | sep # " 19" | 2011 | Xiamen, China | null | https://aclanthology.org/2011.mtsummit-tutorials.1/ | Wu, Dekai | Proceedings of Machine Translation Summit XIII: Tutorial Abstracts | null | Over the past twenty years, we have attacked the historical methodological barriers between statistical machine translation and traditional models of syntax, semantics, and structure. In this tutorial, we will survey some of the central issues and techniques from each of these aspects, with an emphasis on `deeply theoretically integrated' models, rather than hybrid approaches such as superficial statistical aggregation or system combination of outputs produced by traditional symbolic components. On syntactic SMT, we will explore the trade-offs for SMT between learnability and representational expressiveness. After establishing a foundation in the theory and practice of stochastic transduction grammars, we will examine very recent new approaches to automatic unsupervised induction of various classes of transduction grammars. We will show why stochastic linear transduction grammars (LTGs and LITGs) and their preterminalized variants (PLITGs) are proving to be particularly intriguing models for the bootstrapping of inducing full-fledged stochastic inversion transduction grammars (ITGs). On semantic SMT, we will explore the trade-offs for SMT involved in applying various lexical semantics models. We will first examine word sense disambiguation, and discuss why traditional WSD models that are not deeply integrated within the SMT model tend, surprisingly, to fail. In contrast, we will show how a deeply embedded phrase sense disambiguation (PSD) approach succeeds where traditional WSD does not. We will then turn to semantic role labeling, and discuss the challenges of early approaches of applying SRL models to SMT. Finally, on semantic MT evaluation, we will explore some very new human and semi-automatic metrics based on semantic frame agreement. We show that by keeping the metrics deeply grounded within the theoretical framework of semantic frames, the new HMEANT and MEANT metrics can significantly outperform even the state-of-the-art expensive HTER and TER metrics, while at the same time maintaining the desirable characteristics of simplicity, inexpensiveness, and representational transparency. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,076 |
inproceedings | lavie-2011-evaluating | Evaluating the Output of Machine Translation Systems | null | sep # " 19" | 2011 | Xiamen, China | null | https://aclanthology.org/2011.mtsummit-tutorials.3/ | Lavie, Alon | Proceedings of Machine Translation Summit XIII: Tutorial Abstracts | null | This half-day tutorial provides a broad overview of how to evaluate translations that are produced by machine translation systems. The range of issues covered includes a broad survey of both human evaluation measures and commonly-used automated metrics, and a review of how these are used for various types of evaluation tasks, such as assessing the translation quality of MT-translated sentences, comparing the performance of alternative MT systems, or measuring the productivity gains of incorporating MT into translation workflows. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,078 |
inproceedings | shrestha-2011-alignment | Alignment of Monolingual Corpus by Reduction of the Search Space | Lopez, C{\'e}dric | jun | 2011 | Montpellier, France | ATALA | https://aclanthology.org/2011.jeptalnrecital-recital.5/ | Shrestha, Prajol | Actes de la 18e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues | 48--56 | Monolingual comparable corpora annotated with alignments between text segments (paragraphs, sentences, etc.) based on similarity are used in a wide range of natural language processing applications like plagiarism detection, information retrieval, summarization and so on. The drawback of wanting to use them is that there aren't many standard corpora which are aligned. Due to this drawback, the corpus is manually created, which is a time-consuming and costly task. In this paper, we propose a method to significantly reduce the search space for manual alignment of the monolingual comparable corpus which in turn makes the alignment process faster and easier. This method can be used in making alignments on different levels of text segments. Using this method we create our own gold corpus aligned on the level of paragraph, which will be used for testing and building our algorithms for automatic alignment. We also present some experiments for the reduction of search space on the basis of stem overlap, word overlap, and cosine similarity measure which help us automatize the process to some extent and reduce human effort for alignment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,208 |
inproceedings | shrestha-2011-corpus | Corpus-Based methods for Short Text Similarity | Lopez, C{\'e}dric | jun | 2011 | Montpellier, France | ATALA | https://aclanthology.org/2011.jeptalnrecital-recitalcourt.1/ | Shrestha, Prajol | Actes de la 18e conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues (articles courts) | 1--6 | This paper presents corpus-based methods to find similarity between short text (sentences, paragraphs, ...) which has many applications in the field of NLP. Previous works on this problem have been based on supervised methods or have used external resources such as WordNet, British National Corpus etc. Our methods are focused on unsupervised corpus-based methods. We present a new method, based on Vector Space Model, to capture the contextual behavior, senses and correlation, of terms and show that this method performs better than the baseline method that uses vector based cosine similarity measure. The performance of existing document similarity measures, Dice and Resemblance, are also evaluated which to our knowledge have not been used for short text similarity. We also show that the performance of the vector-based baseline method is improved when using stems instead of words and using the candidate sentences for computing the parameters rather than some external resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,210 |
inproceedings | federico-etal-2011-overview | Overview of the {IWSLT} 2011 evaluation campaign | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.1/ | Federico, Marcello and Bentivogli, Luisa and Paul, Michael and St{\"u}ker, Sebastian | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 11--27 | We report here on the eighth Evaluation Campaign organized by the IWSLT workshop. This year, the IWSLT evaluation focused on the automatic translation of public talks and included tracks for speech recognition, speech translation, text translation, and system combination. Unlike previous years, all data supplied for the evaluation has been publicly released on the workshop website, and is at the disposal of researchers interested in working on our benchmarks and in comparing their results with those published at the workshop. This paper provides an overview of the IWSLT 2011 Evaluation Campaign, which includes: descriptions of the supplied data and evaluation specifications of each track, the list of participants specifying their submitted runs, a detailed description of the subjective evaluation carried out, the main findings of each exercise drawn from the results and the system descriptions prepared by the participants, and, finally, several detailed tables reporting all the evaluation results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,217 |
inproceedings | abe-etal-2011-nict | The {NICT} {ASR} system for {IWSLT}2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.2/ | Abe, Kazuhiko and Wu, Youzheng and Huang, Chien-lin and Dixon, Paul R. and Matsuda, Shigeki and Hori, Chiori and Kashioka, Hideki | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 28--33 | In this paper, we describe NICT's participation in the IWSLT 2011 evaluation campaign for the ASR Track. To recognize spontaneous speech, we prepared an acoustic model trained by more spontaneous speech corpora and a language model constructed with text corpora distributed by the organizer. We built the multi-pass ASR system by adapting the acoustic and language models with previous ASR results. The target speech was selected from talks on the TED (Technology, Entertainment, Design) program. Here, a large reduction in word error rate was obtained by the speaker adaptation of the acoustic model with MLLR. Additional improvement was achieved not only by adaptation of the language model but also by parallel usage of the baseline and speaker-dependent acoustic models. Accordingly, the final WER was reduced by 30{\%} from the baseline ASR for the distributed test set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,218 |
inproceedings | aminzadeh-etal-2011-mit | The {MIT}-{LL}/{AFRL} {IWSLT}-2011 {MT} system | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.3/ | Aminzadeh, A. Ryan and Anderson, Tim and Slyh, Ray and Ore, Brian and Hansen, Eric and Shen, Wade and Drexler, Jennifer and Gleason, Terry | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 34--40 | This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2011 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Arabic to English and English to French TED-talk translation tasks. We also applied our existing ASR system to the TED-talk lecture ASR task. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2010 system, and experiments we ran during the IWSLT-2011 evaluation. Specifically, we focus on 1) speech recognition for lecture-like data, 2) cross-domain translation using MAP adaptation, and 3) improved Arabic morphology for MT preprocessing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,219 |
inproceedings | banerjee-etal-2011-dcu | The {DCU} machine translation systems for {IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.4/ | Banerjee, Pratyush and Almaghout, Hala and Naskar, Sudip and Roturier, Johann and Jiang, Jie and Way, Andy and van Genabith, Josef | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 41--48 | In this paper, we provide a description of Dublin City University's (DCU) submissions in the IWSLT 2011 evaluation campaign. We participated in the Arabic-English and Chinese-English Machine Translation (MT) track translation tasks. We use phrase-based statistical machine translation (PBSMT) models to create the baseline system. Due to the open-domain nature of the data to be translated, we use domain adaptation techniques to improve the quality of translation. Furthermore, we explore target-side syntactic augmentation for an Hierarchical Phrase-Based (HPB) SMT model. Combinatory Categorial Grammar (CCG) is used to extract labels for target-side phrases and non-terminals in the HPB system. Combining the domain adapted language models with the CCG-augmented HPB system gave us the best translations for both language pairs providing statistically significant improvements of 6.09 absolute BLEU points (25.94{\%} relative) and 1.69 absolute BLEU points (15.89{\%} relative) over the unadapted PBSMT baselines for the Arabic-English and Chinese-English language pairs, respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,220 |
inproceedings | he-etal-2011-msr | The {MSR} system for {IWSLT} 2011 evaluation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.6/ | He, Xiaodong and Axelrod, Amittai and Deng, Li and Acero, Alex and Hwang, Mei-Yuh and Nguyen, Alisa and Wang, Andrew and Huang, Xiahui | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 57--61 | This paper describes the Microsoft Research (MSR) system for the evaluation campaign of the 2011 international workshop on spoken language translation. The evaluation task is to translate TED talks (www.ted.com). This task presents two unique challenges: First, the underlying topic switches sharply from talk to talk. Therefore, the translation system needs to adapt to the current topic quickly and dynamically. Second, only a very small amount of relevant parallel data (transcripts of TED talks) is available. Therefore, it is necessary to perform accurate translation model estimation with limited data. In the preparation for the evaluation, we developed two new methods to attack these problems. Specifically, we developed an unsupervised topic modeling based adaption method for machine translation models. We also developed a discriminative training method to estimate parameters in the generative components of the translation models with limited data. Experimental results show that both methods improve the translation quality. Among all the submissions, ours achieves the best BLEU score in the machine translation Chinese-to-English track (MT{\_}CE) of the IWSLT 2011 evaluation that we participated. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,222 |
inproceedings | lavergne-etal-2011-limsis | {LIMSI}'s experiments in domain adaptation for {IWSLT}11 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.7/ | Lavergne, Thomas and Allauzen, Alexandre and Le, Hai-Son and Yvon, Fran{\c{c}}ois | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 62--67 | LIMSI took part in the IWSLT 2011 TED task in the MT track for English to French using the in-house n-code system, which implements the n-gram based approach to Machine Translation. This framework not only allows to achieve state-of-the-art results for this language pair, but is also appealing due to its conceptual simplicity and its use of well understood statistical language models. Using this approach, we compare several ways to adapt our existing systems and resources to the TED task with mixture of language models and try to provide an analysis of the modest gains obtained by training a log linear combination of in- and out-of-domain models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,223 |
inproceedings | lecouteux-etal-2011-lig | {LIG} {E}nglish-{F}rench spoken language translation system for {IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.8/ | Lecouteux, Benjamin and Besacier, Laurent and Blanchon, Herv{\'e} | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 68--72 | This paper describes the system developed by the LIG laboratory for the 2011 IWSLT evaluation. We participated to the English-French MT and SLT tasks. The development of a reference translation system (MT task), as well as an ASR output translation system (SLT task) are presented. We focus this year on the SLT task and on the use of multiple 1-best ASR outputs to improve overall translation quality. The main experiment presented here compares the performance of a SLT system where multiple ASR 1-best are combined before translation (source combination), with a SLT system where multiple ASR 1-best are translated, the system combination being conducted afterwards on the target side (target combination). The experimental results show that the second approach (target combination) overpasses the first one, when the performance is measured with BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,224 |
inproceedings | mediani-etal-2011-kit | The {KIT} {E}nglish-{F}rench translation systems for {IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.9/ | Mediani, Mohammed and Cho, Eunach and Niehues, Jan and Herrmann, Teresa and Waibel, Alex | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 73--78 | This paper presents the KIT system participating in the English{\textrightarrow}French TALK Translation tasks in the framework of the IWSLT 2011 machine translation evaluation. Our system is a phrase-based translation system using POS-based reordering extended with many additional features. First of all, a special preprocessing is devoted to the Giga corpus in order to minimize the effect of the great amount of noise it contains. In addition, the system gives more importance to the in-domain data by adapting the translation and the language models as well as by using a word-cluster language model. Furthermore, the system is extended by a bilingual language model and a discriminative word lexicon. The automatic speech transcription input usually has no or wrong punctuation marks, therefore these marks were especially removed from the source training data for the SLT system training. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,225 |
inproceedings | rousseau-etal-2011-liums | {LIUM}`s systems for the {IWSLT} 2011 speech translation tasks | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.10/ | Rousseau, Anthony and Bougares, Fethi and Del{\'e}glise, Paul and Schwenk, Holger and Est{\`e}ve, Yannick | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 79--85 | This paper describes the three systems developed by the LIUM for the IWSLT 2011 evaluation campaign. We participated in three of the proposed tasks, namely the Automatic Speech Recognition task (ASR), the ASR system combination task (ASR{\_}SC) and the Spoken Language Translation task (SLT), since these tasks are all related to speech translation. We present the approaches and specific developments for each task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,226
inproceedings | ruiz-etal-2011-fbk | {FBK}@{IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.11/ | Ruiz, N. and Bisazza, A. and Brugnara, F. and Falavigna, D. and Giuliani, D. and Jaber, S. and Gretter, R. and Federico, M. | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 86--93 | This paper reports on the participation of FBK at the IWSLT 2011 Evaluation: namely in the English ASR track, the Arabic-English MT track and the English-French MT and SLT tracks. Our ASR system features acoustic models trained on a portion of the TED talk recordings that was automatically selected according to the fidelity of the provided transcriptions. Three decoding steps are performed, interleaved with acoustic feature normalization and acoustic model adaptation. Concerning the MT and SLT systems, besides language-specific pre-processing and the automatic introduction of punctuation in the ASR output, two major improvements are reported over our last year's baselines. First, we applied a fill-up method for phrase-table adaptation; second, we explored the use of hybrid class-based language models to better capture the language style of public speeches. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,227
inproceedings | stuker-etal-2011-2011 | The 2011 {KIT} {E}nglish {ASR} system for the {IWSLT} evaluation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.12/ | St{\"u}ker, Sebastian and Kilgour, Kevin and Saam, Christian and Waibel, Alex | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 94--97 | This paper describes our English Speech-to-Text (STT) system for the 2011 IWSLT ASR track. The system consists of 2 subsystems with different front-ends{---}one MVDR-based, one MFCC-based{---}which are combined using confusion network combination to provide a base for a second-pass speaker-adapted MVDR system. We demonstrate that this set-up produces competitive results on the IWSLT 2010 dev and test sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,228
inproceedings | vilar-etal-2011-dfkis | {DFKI}`s {SC} and {MT} submissions to {IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.13/ | Vilar, David and Avramidis, Eleftherios and Popovi{\'c}, Maja and Hunsicker, Sabine | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 98--105 | We describe DFKI`s submission to the System Combination and Machine Translation tracks of the 2011 IWSLT Evaluation Campaign. We focus on a sentence selection mechanism which chooses the (hopefully) best sentence among a set of candidates. The rationale behind it is to take advantage of the strengths of each system, especially given a heterogeneous dataset like the one in this evaluation campaign, composed of TED Talks on very different topics. We focus on using features that correlate well with human judgement and, while our primary system still focuses on optimizing the BLEU score on the development set, our goal is to move towards directly optimizing the correlation with human judgement. This kind of system is still under development and was used as a secondary submission. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,229
inproceedings | wuebker-etal-2011-rwth | The {RWTH} {A}achen machine translation system for {IWSLT} 2011 | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.14/ | Wuebker, Joern and Huck, Matthias and Mansour, Saab and Freitag, Markus and Feng, Minwei and Peitz, Stephan and Schmidt, Christoph and Ney, Hermann | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 106--113 | In this paper the statistical machine translation (SMT) systems of RWTH Aachen University developed for the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2011 are presented. We participated in the MT (English-French, Arabic-English, Chinese-English) and SLT (English-French) tracks. Both hierarchical and phrase-based SMT decoders are applied. A number of different techniques are evaluated, including domain adaptation via monolingual and bilingual data selection, phrase training, different lexical smoothing methods, additional reordering models for the hierarchical system, various Arabic and Chinese segmentation methods, punctuation prediction for speech recognition output, and system combination. By applying these methods we can show considerable improvements over the respective baseline systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,230
inproceedings | boudahmane-etal-2011-advances | Advances on spoken language translation in the Quaero program | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.15/ | Boudahmane, Karim and Buschbeck, Bianka and Cho, Eunah and Crego, Josep Maria and Freitag, Markus and Lavergne, Thomas and Ney, Hermann and Niehues, Jan and Peitz, Stephan and Senellart, Jean and Sokolov, Artem and Waibel, Alex and Wandmacher, Tonio and Wuebker, Joern and Yvon, Fran{\c{c}}ois | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 114--120 | The Quaero program is an international project promoting research and industrial innovation on technologies for automatic analysis and classification of multimedia and multilingual documents. Within the program framework, research organizations and industrial partners collaborate to develop prototypes of innovative applications and services for access and usage of multimedia data. One of the topics addressed is the translation of spoken language. Each year, a project-internal evaluation is conducted by DGA to monitor the technological advances. This work describes the design and results of the 2011 evaluation campaign. The participating partners were RWTH, KIT, LIMSI and SYSTRAN. Their approaches are compared on both ASR output and reference transcripts of speech data for the translation between French and German. The results show that the developed techniques further the state of the art and improve translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,231
inproceedings | lamel-etal-2011-speech | Speech recognition for machine translation in Quaero | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.16/ | Lamel, Lori and Courcinous, Sandrine and Despres, Julien and Gauvain, Jean-Luc and Josse, Yvan and Kilgour, Kevin and Kraft, Florian and Le, Viet-Bac and Ney, Hermann and Nu{\ss}baum-Thom, Markus and Oparin, Ilya and Schlippe, Tim and Schl{\"u}ter, Ralf and Schultz, Tanja and Fraga da Silva, Thiago and St{\"u}ker, Sebastian and Sundermeyer, Martin and Vieru, Bianca and Vu, Ngoc Thang and Waibel, Alexander and Woehrling, C{\'e}cile | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 121--128 | This paper describes the speech-to-text systems used to provide automatic transcriptions used in the Quaero 2010 evaluation of Machine Translation from speech. Quaero (www.quaero.org) is a large research and industrial innovation program focusing on technologies for automatic analysis and classification of multimedia and multilingual documents. The ASR transcript is the result of a Rover combination of systems from three teams (KIT, RWTH, LIMSI+VR) for the French and German languages. The case-sensitive word error rates (WER) of the combined systems were respectively 20.8{\%} and 18.1{\%} on the 2010 evaluation data, relative WER reductions of 14.6{\%} and 17.4{\%} respectively over the best component system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,232
inproceedings | arranz-etal-2011-protocol | Protocol and lessons learnt from the production of parallel corpora for the evaluation of speech translation systems | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.17/ | Arranz, Victoria and Hamon, Olivier and Boudahmane, Karim and Garnier-Rizet, Martine | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 129--135 | Machine translation evaluation campaigns require the production of reference corpora to automatically measure system output. This paper describes recent efforts to create such data with the objective of measuring the quality of the systems participating in the Quaero evaluations. In particular, we focus on the protocols behind such production as well as all the issues raised by the complexity of the transcription data handled. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,233
inproceedings | bisazza-etal-2011-fill | Fill-up versus interpolation methods for phrase-based {SMT} adaptation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.18/ | Bisazza, Arianna and Ruiz, Nick and Federico, Marcello | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 136--143 | This paper compares techniques to combine diverse parallel corpora for domain-specific phrase-based SMT system training. We address a common scenario where little in-domain data is available for the task, but where large background models exist for the same language pair. In particular, we focus on phrase table fill-up: a method that effectively exploits background knowledge to improve model coverage, while preserving the more reliable information coming from the in-domain corpus. We present experiments on an emerging transcribed speech translation task {--} the TED talks. While performing similarly in terms of BLEU and NIST scores to the popular log-linear and linear interpolation techniques, filled-up translation models are more compact and easy to tune by minimum error training. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,234
inproceedings | chen-etal-2011-semantic | Semantic smoothing and fabrication of phrase pairs for {SMT} | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.19/ | Chen, Boxing and Kuhn, Roland and Foster, George | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 144--150 | In statistical machine translation systems, phrases with similar meanings often have similar but not identical distributions of translations. This paper proposes a new soft clustering method to smooth the conditional translation probabilities for a given phrase with those of semantically similar phrases. We call this semantic smoothing (SS). Moreover, we fabricate new phrase pairs that were not observed in training data, but which may be used for decoding. In learning curve experiments against a strong baseline, we obtain a consistent pattern of modest improvement from semantic smoothing, and further modest improvement from phrase pair fabrication. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,235
inproceedings | ding-etal-2011-long | Long-distance hierarchical structure transformation rules utilizing function words | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.21/ | Ding, Chenchen and Inui, Takashi and Yamamoto, Mikio | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 159--166 | In this paper, we propose structure transformation rules for statistical machine translation which are lexicalized by only function words. Although such rules can be extracted from an aligned parallel corpus simply as original phrase pairs, their structure is hierarchical and thus can be used in a hierarchical translation system. In addition, structure transformation rules can take into account long-distance reordering, allowing for more than two phrases to be moved simultaneously. The rule set is used as a core module in our hierarchical model together with two other modules, namely, a basic reordering module and an optional gap phrase module. Our model is considerably more compact and produces slightly higher BLEU scores than the original hierarchical phrase-based model in Japanese-English translation on the parallel corpus of the NTCIR-7 patent translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,237
inproceedings | dixon-etal-2011-investigation | Investigation of the effects of {ASR} tuning on speech translation performance | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.22/ | Dixon, Paul R. and Finch, Andrew and Hori, Chiori and Kashioka, Hideki | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 167--174 | In this paper we describe some of our recent investigations into ASR and SMT coupling issues from an ASR perspective. Our study was motivated by several areas: Firstly, to understand how standard ASR tuning procedures affect the SMT performance and whether it is safe to perform this tuning in isolation. Secondly, to investigate how vocabulary and segmentation mismatches between the ASR and SMT system affect the performance. Thirdly, to uncover any practical issues that arise when using a WFST based speech decoder for tight coupling as opposed to a more traditional tree-search decoding architecture. On the IWSLT07 Japanese-English task we found that larger language model weights only helped the SMT performance when the ASR decoder was tuned in a sub-optimal manner. When we considered the performance with suitably wide beams that ensured the ASR accuracy had converged, we observed that the language model weight had little influence on the SMT BLEU scores. After the construction of the phrase table the actual SMT vocabulary can be smaller than the training data vocabulary. By reducing the ASR lexicon to cover only the words the SMT system could accept, we found this led to an increase in the ASR error rates; however, the SMT BLEU scores were nearly unchanged. From a practical point of view this is a useful result as it means we can significantly reduce the memory footprint of the ASR system. We also investigated coupling WFST based ASR to a simple WFST based translation decoder and found it was crucial to perform phrase table expansion to avoid OOV problems. For the WFST translation decoder we describe a semiring based approach for optimizing the log-linear weights. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,238
inproceedings | gupta-etal-2011-extending | Extending a probabilistic phrase alignment approach for {SMT} | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.23/ | Gupta, Mridul and Hewavitharana, Sanjika and Vogel, Stephan | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 175--182 | Phrase alignment is a crucial step in phrase-based statistical machine translation. We explore a way of improving phrase alignment by adding syntactic information in the form of chunks as soft constraints, guided by an in-depth and detailed analysis of a hand-aligned data set. We extend a probabilistic phrase alignment model that extracts phrase pairs by optimizing phrase pair boundaries over the sentence pair [1]. The boundaries of the target phrase are chosen such that the overall sentence alignment probability is optimal. Viterbi alignment information is also added in the extended model with a view to improving phrase alignment. We extract phrase pairs using a relatively large number of features which are discriminatively trained using a large-margin online learning algorithm, i.e., the Margin Infused Relaxed Algorithm (MIRA), and integrate it in our approach. Initial experiments show improvements in both phrase alignment and translation quality for Arabic-English on a moderate-size translation task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,239
inproceedings | heafield-etal-2011-left | Left language model state for syntactic machine translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-evaluation.24/ | Heafield, Kenneth and Hoang, Hieu and Koehn, Philipp and Kiso, Tetsuo and Federico, Marcello | Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign | 183--190 | Many syntactic machine translation decoders, including Moses, cdec, and Joshua, implement bottom-up dynamic programming to integrate N-gram language model probabilities into hypothesis scoring. These decoders concatenate hypotheses according to grammar rules, yielding larger hypotheses and eventually complete translations. When hypotheses are concatenated, the language model score is adjusted to account for boundary-crossing n-grams. Words on the boundary of each hypothesis are encoded in state, consisting of left state (the first few words) and right state (the last few words). We speed up concatenation by encoding left state using data structure pointers in lieu of vocabulary indices and by avoiding unnecessary queries. To increase the decoder`s opportunities to recombine hypotheses, we minimize the number of words encoded by left state. This has the effect of reducing search errors made by the decoder. The resulting gain in model score is smaller than for right state minimization, which we explain by observing a relationship between state minimization and language model probability. With a fixed cube pruning pop limit, we show a 3-6{\%} reduction in CPU time and improved model scores. Reducing the pop limit to the point where model scores tie the baseline yields a net 11{\%} reduction in CPU time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,240
inproceedings | huck-etal-2011-lexicon | Lexicon models for hierarchical phrase-based machine translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.1/ | Huck, Matthias and Mansour, Saab and Wiesler, Simon and Ney, Hermann | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 191--198 | In this paper, we investigate lexicon models for hierarchical phrase-based statistical machine translation. We study five types of lexicon models: a model which is extracted from word-aligned training data and{---}given the word alignment matrix{---}relies on pure relative frequencies [1]; the IBM model 1 lexicon [2]; a regularized version of IBM model 1; a triplet lexicon model variant [3]; and a discriminatively trained word lexicon model [4]. We explore source-to-target models with phrase-level as well as sentence-level scoring and target-to-source models with scoring on phrase level only. For the first two types of lexicon models, we compare several scoring variants. All models are used during search, i.e. they are incorporated directly into the log-linear model combination of the decoder. Phrase table smoothing with triplet lexicon models and with discriminative word lexicons are novel contributions. We also propose a new regularization technique for IBM model 1 by means of the Kullback-Leibler divergence with the empirical unigram distribution as regularization term. Experiments are carried out on the large-scale NIST Chinese{\textrightarrow}English translation task and on the English{\textrightarrow}French and Arabic{\textrightarrow}English IWSLT TED tasks. For Chinese{\textrightarrow}English and English{\textrightarrow}French, we obtain the best results by using the discriminative word lexicon to smooth our phrase tables. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,241
inproceedings | kilgour-etal-2011-2011 | The 2011 {KIT} {QUAERO} speech-to-text system for {S}panish | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.2/ | Kilgour, Kevin and Saam, Christian and Mohr, Christian and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 199--205 | This paper describes our current Spanish speech-to-text (STT) system, developed within the Quaero program, with which we participated in the 2011 Quaero STT evaluation. The system consists of 4 separate subsystems: alongside the standard MFCC and MVDR phoneme-based subsystems, we included both a phoneme-based and a grapheme-based bottleneck subsystem. We carefully evaluate the performance of each subsystem. After including several new techniques we were able to reduce the WER by over 30{\%}, from 20.79{\%} to 14.53{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,242
inproceedings | ling-etal-2011-named | Named entity translation using anchor texts | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.3/ | Ling, Wang and Calado, P{\'a}vel and Martins, Bruno and Trancoso, Isabel and Black, Alan and Coheur, Lu{\'i}sa | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 206--213 | This work describes a process to extract Named Entity (NE) translations from the text available in web links (anchor texts). It translates an NE by retrieving a list of web documents in the target language, extracting the anchor texts from the links to those documents and finding the best translation from the anchor texts, using a combination of features, some of which are specific to anchor texts. Experiments performed on a manually built corpus suggest that over 70{\%} of the NEs, ranging from unpopular to popular entities, can be translated correctly using solely anchor texts. Tests on a Machine Translation task indicate that the system can be used to improve the quality of the translations of state-of-the-art statistical machine translation systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,243
inproceedings | maergner-etal-2011-unsupervised | Unsupervised vocabulary selection for simultaneous lecture translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.4/ | Maergner, Paul and Kilgour, Kevin and Lane, Ian and Waibel, Alex | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 214--221 | In this work, we propose a novel method for vocabulary selection which enables simultaneous speech recognition systems for lectures to automatically adapt to the diverse topics that occur in educational and scientific lectures. Utilizing materials that are available before the lecture begins, such as lecture slides, our proposed framework iteratively searches for related documents on the World Wide Web and generates a lecture-specific vocabulary and language model based on the resulting documents. In this paper, we introduce a novel method for vocabulary selection where we rank vocabulary that occurs in the collected documents based on a relevance score which is calculated using a combination of word features. Vocabulary selection is a critical component for topic adaptation that has typically been overlooked in prior works. On the interACT German-English simultaneous lecture translation system our proposed approach significantly improved vocabulary coverage, reducing the out-of-vocabulary rate on average by 57.0{\%} and up to 84.9{\%}, compared to a lecture-independent baseline. Furthermore, our approach reduced the word error rate by up to 25.3{\%} (on average 13.2{\%} across all lectures), compared to a lecture-independent baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,244
inproceedings | mansour-etal-2011-combining | Combining translation and language model scoring for domain-specific data filtering | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.5/ | Mansour, Saab and Wuebker, Joern and Ney, Hermann | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 222--229 | The increasing popularity of statistical machine translation (SMT) systems is introducing new domains of translation that need to be tackled. As many resources are already available, domain adaptation methods can be applied to utilize these resources in the most beneficial way for the new domain. We explore adaptation via filtering, using cross-entropy scores to discard irrelevant sentences. We focus on filtering for two important components of an SMT system, namely the language model (LM) and the translation model (TM). Previous work has already applied LM cross-entropy based scoring for filtering. We argue that LM cross-entropy might be appropriate for LM filtering, but not as much for TM filtering. We develop a novel filtering approach based on combined TM and LM cross-entropy scores. We experiment with two large-scale translation tasks, the Arabic-to-English and English-to-French IWSLT 2011 TED Talks MT tasks. For LM filtering, we achieve strong perplexity improvements which carry over to the translation quality with improvements up to +0.4{\%} BLEU. For TM filtering, the combined method achieves small but consistent improvements over the standalone methods. As a side effect of adaptation via filtering, the fully fledged SMT system vocabulary size and phrase table size are reduced by a factor of at least 2 while up to +0.6{\%} BLEU improvement is observed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,245
inproceedings | niehues-waibel-2011-using | Using {W}ikipedia to translate domain-specific terms in {SMT} | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.6/ | Niehues, Jan and Waibel, Alex | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 230--237 | When building a university lecture translation system, one important step is to adapt it to the target domain. One problem in this adaptation task is to acquire translations for domain-specific terms. In this approach we tried to get these translations from Wikipedia, which provides articles on very specific topics in many different languages. To extract translations for the domain-specific terms, we used the interlanguage links of Wikipedia. We analyzed different methods to integrate this corpus into our system and explored methods to disambiguate between different translations by using the text of the articles. In addition, we developed methods to handle different morphological forms of the specific terms in morphologically rich input languages like German. The results show that the number of out-of-vocabulary (OOV) words could be reduced by 50{\%} on computer science lectures and the translation quality could be improved by more than 1 BLEU point. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,246
inproceedings | peitz-etal-2011-modeling | Modeling punctuation prediction as machine translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.7/ | Peitz, Stephan and Freitag, Markus and Mauser, Arne and Ney, Hermann | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 238--245 | Punctuation prediction is an important task in Spoken Language Translation. The output of speech recognition systems does not typically contain punctuation marks. In this paper we analyze different methods for punctuation prediction and show improvements in the quality of the final translation output. In our experiments we compare the different approaches and show improvements of up to 0.8 BLEU points on the IWSLT 2011 English-French Speech Translation of Talks task using a translation system to translate from unpunctuated to punctuated text instead of a language model based punctuation prediction method. Furthermore, we do a system combination of the hypotheses of all our different approaches and get an additional improvement of 0.4 points in BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,247
inproceedings | peter-etal-2011-soft | Soft string-to-dependency hierarchical machine translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.8/ | Peter, Jan-Thorsten and Huck, Matthias and Ney, Hermann and Stein, Daniel | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 246--253 | In this paper, we dissect the influence of several target-side dependency-based extensions to hierarchical machine translation, including a dependency language model (LM). We pursue a non-restrictive approach that does not prohibit the production of hypotheses with malformed dependency structures. Since many questions remained open from previous and related work, we offer in-depth analysis of the influence of the language model order, the impact of dependency-based restrictions on the search space, and the information to be gained from dependency tree building during decoding. The application of a non-restrictive approach together with an integrated dependency LM scoring is a novel contribution which yields significant improvements for two large-scale translation tasks for the language pairs Chinese{--}English and German{--}French. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,248
inproceedings | schneider-luz-2011-speaker | Speaker alignment in synthesised, machine translation communication | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.9/ | Schneider, Anne H. and Luz, Saturnino | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 254--260 | The effect of mistranslations on the verbal behaviour of users of speech-to-speech translation is investigated through a question answering experiment in which users were presented with machine translated questions through synthesized speech. Results show that people are likely to align their verbal behaviour to the output of a system that combines machine translation, speech recognition and speech synthesis in an interactive dialogue context, even when the system produces erroneous output. The alignment phenomenon has been previously considered by dialogue system designers from the perspective of the benefits it might bring to the interaction (e.g. by making the user more likely to employ terms contained in the system`s vocabulary). In contrast, our results reveal that in speech-to-speech translation systems alignment can in fact be detrimental to the interaction (e.g. by priming the user to align with non-existing lexical items produced by mistranslation). The implications of these findings are discussed with respect to the design of such systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,249
inproceedings | tomeh-etal-2011-good | How good are your phrases? Assessing phrase quality with single class classification | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.10/ | Tomeh, Nadi and Turchi, Marco and Wisniewski, Guillaume and Allauzen, Alexandre and Yvon, Fran{\c{c}}ois | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 261--268 | We present a novel translation-quality-informed procedure for both extraction and scoring of phrase pairs in PBSMT systems. We reformulate the extraction problem in the supervised learning framework. Our goal is twofold. First, we attempt to take translation quality into account; second, we incorporate arbitrary features in order to circumvent alignment errors. One-Class SVMs and the Mapping Convergence algorithm permit training a single-class classifier to discriminate between useful and useless phrase pairs. Such a classifier can be learned from a training corpus that comprises only useful instances. The confidence score produced by the classifier for each phrase pair is employed as a selection criterion. The smoothness of these scores allows fine control over the size of the resulting translation model. Finally, confidence scores provide a new accuracy-based feature to score phrase pairs. Experimental evaluation of the method shows accurate assessments of phrase pair quality even for regions in the space of possible phrase pairs that are ignored by other approaches. This enhanced evaluation of phrase pairs leads to improvements in translation performance as measured by BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,250
inproceedings | yasuda-etal-2011-annotating | Annotating data selection for improving machine translation | Federico, Marcello and Hwang, Mei-Yuh and R{\"o}dder, Margit and St{\"u}ker, Sebastian | dec # " 8-9" | 2011 | San Francisco, California | null | https://aclanthology.org/2011.iwslt-papers.11/ | Yasuda, Keiji and Okuma, Hideo and Utiyama, Masao and Sumita, Eiichiro | Proceedings of the 8th International Workshop on Spoken Language Translation: Papers | 269--274 | In order to efficiently improve machine translation systems, we propose a method which selects data to be annotated (manually translated) from speech-to-speech translation field data. The data comes from field experiments conducted during the 2009 fiscal year in five areas of Japan. For the selection experiments, we used data sets from two areas: one data set giving the lowest baseline speech translation performance for its test set, and another data set giving the highest. In the experiments, we compare two methods for selecting data to be manually translated from the field data. Both of them use source-side language models for data selection, but in different manners. According to the experimental results, either or both of the methods show larger improvements compared to a random data selection. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,251
inproceedings | gasser-2011-towards | Towards synchronous extensible dependency grammar | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.3/ | Gasser, Michael | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 3--10 | Extensible Dependency Grammar (XDG; Debusmann, 2007) is a flexible, modular dependency grammar framework in which sentence analyses consist of multigraphs and processing takes the form of constraint satisfaction. This paper shows how XDG lends itself to grammar-driven machine translation and introduces the machinery necessary for synchronous XDG. Since the approach relies on a shared semantics, it resembles interlingua MT. It differs in that there are no separate analysis and generation phases. Rather, translation consists of the simultaneous analysis and generation of a single source-target {\textquotedblleft}sentence{\textquotedblright}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,255 |
inproceedings | monti-etal-2011-taking | Taking on new challenges in multi-word unit processing for machine translation | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.4/ | Monti, Johanna and Barreiro, Anabela and Elia, Annibale and Marano, Federica and Napoli, Antonella | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 11--20 | This paper discusses the qualitative comparative evaluation performed on the results of two machine translation systems with different approaches to the processing of multi-word units. It proposes a solution for overcoming the difficulties multi-word units present to machine translation by adopting a methodology that combines the lexicon grammar approach with OpenLogos ontology and semantico-syntactic rules. The paper also discusses the importance of a qualitative evaluation metric to correctly evaluate the performance of machine translation engines with regard to multi-word units. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,256
inproceedings | radziszewski-sniatowski-2011-maca | {M}aca {--} a configurable tool to integrate {P}olish morphological data | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.6/ | Radziszewski, Adam and {\'S}niatowski, Tomasz | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 29--36 | There are a number of morphological analysers for Polish. Most of these, however, are non-free resources. What is more, different analysers employ different tagsets and tokenisation strategies. This situation calls for a simple and universal framework to join different sources of morphological information, including the existing resources as well as user-provided dictionaries. We present such a configurable framework that allows writing simple configuration files that define tokenisation strategies and the behaviour of morphological analysers, including simple tagset conversion. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,258
inproceedings | toral-way-2011-automatic | Automatic acquisition of named entities for rule-based machine translation | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.7/ | Toral, Antonio and Way, Andy | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 37--44 | This paper proposes to enrich RBMT dictionaries with Named Entities (NEs) automatically acquired from Wikipedia. The method is applied to the Apertium English{--}Spanish system and its performance compared to that of Apertium with and without hand-tagged NEs. The system with automatic NEs outperforms the one without NEs, while results vary when compared to a system with hand-tagged NEs (results are comparable for Spanish{\textrightarrow}English but slightly worse for English{\textrightarrow}Spanish). Apart from that, adding automatic NEs contributes to decreasing the number of unknown terms by more than 10{\%}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,259
inproceedings | rangelov-2011-rule | Rule-based machine translation between {B}ulgarian and {M}acedonian | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.9/ | Rangelov, Tihomir | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 53--60 | This paper describes the development of a two-way shallow-transfer rule-based machine translation system between Bulgarian and Macedonian. It gives an account of the resources and the methods used for constructing the system, including the development of monolingual and bilingual dictionaries, syntactic transfer rules and constraint grammars. An evaluation of the system`s performance was carried out and compared to another commercially available MT system for the two languages. Some directions for future work are also suggested. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,261
inproceedings | ivars-ribes-sanchez-cartagena-2011-widely | A widely used machine translation service and its migration to a free/open-source solution: the case of Softcatal{\`a} | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.10/ | Ivars-Ribes, Xavier and S{\'a}nchez-Cartagena, Victor M. | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 61--68 | Softcatalà is a non-profit association created more than 10 years ago to fight the marginalisation of the Catalan language in information and communication technologies. It has led the localisation of many applications and the creation of a website which allows its users to translate texts between Spanish and Catalan using an external closed-source translation engine. Recently, the closed-source translation back-end has been replaced by a free/open-source solution completely managed by Softcatalà: the Apertium machine translation platform and the ScaleMT web service framework. Thanks to the openness of the new solution, it is possible to take advantage of the large number of users of the Softcatalà translation service to improve it, using a series of methods presented in this paper. In addition, a study of the translations requested by the users has been carried out, and it shows that the translation back-end change has not affected the usage patterns. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,262
inproceedings | ruth-oregan-2011-shallow | Shallow-transfer rule-based machine translation from {C}zech to {P}olish | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.11/ | Ruth, Joanna and O{'}Regan, Jimmy | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 69--76 | This article describes the development of an Open Source shallow-transfer machine translation system from Czech to Polish in the Apertium platform. It gives details of the methods and resources used in constructing the system. Although the resulting system has quite a high error rate, it is still competitive with other systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,263
inproceedings | toral-etal-2011-italian | An {I}talian to {C}atalan {RBMT} system reusing data from existing language pairs | S{\'a}nchez-Martinez, Felipe and P{\'e}rez-Ortiz, Juan Antonio | jan # " 20-21" | 2011 | Barcelona, Spain | null | https://aclanthology.org/2011.freeopmt-1.12/ | Toral, Antonio and Ginest{\'i}-Rosell, Mireia and Tyers, Francis | Proceedings of the Second International Workshop on Free/Open-Source Rule-Based Machine Translation | 77--81 | This paper presents an Italian{\textrightarrow}Catalan RBMT system automatically built by combining the linguistic data of the existing pairs Spanish{--}Catalan and Spanish{--}Italian. A lightweight manual postprocessing is carried out in order to fix inconsistencies in the automatically derived dictionaries and to add very frequent words that are missing according to a corpus analysis. The system is evaluated on the KDE4 corpus and outperforms Google Translate by approximately ten absolute points in terms of both TER and GTM. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 77,264 |
inproceedings | dalianis-etal-2010-creating | Creating a Reusable {E}nglish-{C}hinese Parallel Corpus for Bilingual Dictionary Construction | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1001/ | Dalianis, Hercules and Xing, Hao-chun and Zhang, Xin | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper first describes an experiment to construct an English-Chinese parallel corpus, then describes applying the Uplug word alignment tool to the corpus, and finally produces and evaluates an English-Chinese word list. The Stockholm English-Chinese Parallel Corpus (SEC) was created by downloading English-Chinese parallel corpora from a Chinese web site containing law texts that have been manually translated from Chinese to English. The parallel corpus contains 104 563 Chinese characters, equivalent to 59 918 Chinese words, and the corresponding English corpus contains 75 766 English words. However, Chinese writing does not utilize any delimiters to mark word boundaries, so we had to carry out word segmentation as a preprocessing step on the Chinese corpus. Moreover, since the parallel corpus was downloaded from the Internet, it is noisy with regard to the alignment between corresponding translated sentences. Therefore we spent 60 hours of manual work aligning the sentences in the English and Chinese parallel corpus before performing automatic word alignment using Uplug. The word alignment with Uplug was carried out from English to Chinese. Nine respondents evaluated the resulting English-Chinese word list with frequency equal to or above three, and we obtained an accuracy of 73.1 percent. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,884
inproceedings | padro-etal-2010-freeling | {F}ree{L}ing 2.1: Five Years of Open-source Language Processing Tools | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1002/ | Padr{\'o}, Llu{\'i}s and Collado, Miquel and Reese, Samuel and Lloberes, Marina and Castell{\'o}n, Irene | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | FreeLing is an open-source multilingual language processing library providing a wide range of language analyzers for several languages. It offers text processing and language annotation facilities to natural language processing application developers, simplifying the task of building those applications. FreeLing is customizable and extensible. Developers can use the default linguistic resources (dictionaries, lexicons, grammars, etc.) directly, or extend them, adapt them to specific domains, or even develop new ones for specific languages. This paper overviews the recent history of this tool, summarizes the improvements and extensions incorporated in the latest version, and depicts the architecture of the library. Special focus is brought to the fact and consequences of the library being open-source: After five years and over 35,000 downloads, a growing user community has extended the initial three languages (English, Spanish and Catalan) to eight (adding Galician, Italian, Welsh, Portuguese, and Asturian), proving that the collaborative open model is a productive approach for the development of NLP tools and resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,885
inproceedings | kirschenbaum-wintner-2010-general | A General Method for Creating a Bilingual Transliteration Dictionary | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1003/ | Kirschenbaum, Amit and Wintner, Shuly | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Transliteration is the rendering in one language of terms from another language (and, possibly, another writing system), approximating spelling and/or phonetic equivalents between the two languages. A transliteration dictionary is a crucial resource for a variety of natural language applications, most notably machine translation. We describe a general method for creating bilingual transliteration dictionaries from Wikipedia article titles. The method can be applied to any language pair with Wikipedia presence, independently of the writing systems involved, and requires only a single simple resource that can be provided by any literate bilingual speaker. It was successfully applied to extract a Hebrew-English transliteration dictionary which, when incorporated in a machine translation system, indeed improved its performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,886 |
inproceedings | kao-chen-2010-comment | Comment Extraction from Blog Posts and Its Applications to Opinion Mining | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1004/ | Kao, Huan-An and Chen, Hsin-Hsi | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Blog posts containing many personal experiences or perspectives toward specific subjects are useful. Blogs allow readers to interact with bloggers by placing comments on specific blog posts. The comments carry viewpoints of readers toward the targets described in the post, or supportive/non-supportive attitude toward the post. Comment extraction is challenging because no unique template is shared among all blog service providers. This paper proposes methods to deal with this problem. Firstly, the repetitive patterns and their corresponding blocks are extracted from input posts by a pattern identification algorithm. Secondly, three filtering strategies, i.e., tag pattern loop filtering, rule overlap filtering, and longest rule first, are used to remove non-comment blocks. Finally, a comment/non-comment classifier is learned to distinguish comment blocks from non-comment blocks with 14 block-level features and 5 rule-level features. In the experiments, we randomly select 600 blog posts from 12 blog service providers. F-measure, recall, and precision are 0.801, 0.855, and 0.780, respectively, by using all of the three filtering strategies together with some selected features. The application of comment extraction to blog mining is also illustrated. We show how to identify the relevant opinionated objects {\textemdash} say, opinion holders, opinions, and targets {\textemdash} from posts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,887
inproceedings | schmidt-schutte-2010-folker | {FOLKER}: An Annotation Tool for Efficient Transcription of Natural, Multi-party Interaction | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1005/ | Schmidt, Thomas and Sch{\"u}tte, Wilfried | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents FOLKER, an annotation tool developed for the efficient transcription of natural, multi-party interaction in a conversation analysis framework. FOLKER is being developed at the Institute for German Language in and for the FOLK project, whose aim is the construction of a large corpus of spoken present-day German, to be used for research and teaching purposes. FOLKER builds on the experience gained with multi-purpose annotation tools like ELAN and EXMARaLDA, but attempts to improve transcription efficiency by restricting and optimizing both data model and tool functionality to a single, well-defined purpose. The tool's most important features in this respect are the possibility to freely switch between several editable views according to the requirements of different steps in the annotation process, and an automatic syntax check of annotations during input for their conformance to the GAT transcription convention. This paper starts with a description of the GAT transcription conventions and the data model underlying the tool. It then gives an overview of the tool functionality and compares this functionality to that of other widely used tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,888
inproceedings | navigli-etal-2010-annotated | An Annotated Dataset for Extracting Definitions and Hypernyms from the Web | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1006/ | Navigli, Roberto and Velardi, Paola and Ruiz-Mart{\'i}nez, Juana Maria | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper presents and analyzes an annotated corpus of definitions, created to train an algorithm for the automatic extraction of definitions and hypernyms from web documents. As an additional resource, we also include a corpus of non-definitions with syntactic patterns similar to those of definition sentences, e.g.: "An android is a robot" vs. "Snowcap is unmistakable". Domain and style independence is obtained thanks to the annotation of a large and domain-balanced corpus and to a novel pattern generalization algorithm based on word-class lattices (WCL). A lattice is a directed acyclic graph (DAG), a subclass of nondeterministic finite state automata (NFA). The lattice structure has the purpose of preserving the salient differences among distinct sequences, while eliminating redundant information. The WCL algorithm will be integrated into an improved version of the GlossExtractor Web application (Velardi et al., 2008). This paper is mostly concerned with a description of the corpus, the annotation strategy, and a linguistic analysis of the data. A summary of the WCL algorithm is also provided for the sake of completeness. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,889
inproceedings | khokhlova-zakharov-2010-studying | Studying Word Sketches for {R}ussian | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1007/ | Khokhlova, Maria and Zakharov, Victor | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Without any doubt, corpora are vital tools for linguistic studies and solutions for applied tasks. Although the opportunities corpora offer are very useful, another kind of software is needed to further improve linguistic research, as it is impossible to process huge amounts of linguistic data manually. The Sketch Engine is a corpus tool which takes as input a corpus of any language and corresponding grammar patterns. The paper describes the writing of a Sketch grammar for the Russian language as a part of the Sketch Engine system. The system gives information about a word's collocability on concrete dependency models, and generates lists of the most frequent phrases for a given word based on appropriate models. The paper deals with two different approaches to writing rules for the grammar, based on morphological information, and also with applying word sketches to the Russian language. The data show that such results may find extensive use in various fields of linguistics, such as dictionary compiling, language learning and teaching, translation (including machine translation), phraseology, information retrieval, etc. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,890
inproceedings | costa-jussa-fonollosa-2010-using | Using Linear Interpolation and Weighted Reordering Hypotheses in the {M}oses System | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1008/ | Costa-juss{\`a}, Marta R. and Fonollosa, Jos{\'e} A. R. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper proposes to introduce a novel reordering model into the open-source Moses toolkit. The main idea is to provide weighted reordering hypotheses to the SMT decoder. These hypotheses are built using a first-step Ngram-based SMT translation from a source language into a third representation that is called the reordered source language. Each hypothesis has its own weight provided by the Ngram-based decoder. This proposed reordering technique offers a better and more efficient translation when compared to both distance-based and lexicalized reordering. In addition to this reordering approach, this paper describes a domain adaptation technique which is based on a linear combination of a specific in-domain translation model and an extra out-of-domain translation model. Results for both approaches are reported on the Arabic-to-English 2008 IWSLT task. When implementing the weighted reordering hypotheses and the domain adaptation technique in the final translation system, translation results reach improvements of up to 2.5 BLEU points compared to a standard state-of-the-art Moses baseline system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,891
inproceedings | crasborn-2010-sign | The Sign Linguistics Corpora Network: Towards Standards for Signed Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1009/ | Crasborn, Onno | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Sign Linguistics Corpora Network is a three-year network initiative that aims to collect existing knowledge and practices on the creation and use of signed language resources. The concrete goals are to organise a series of four workshops in 2009 and 2010, create a stable Internet location for such knowledge, and generate new ideas for employing the most recent technologies for the study of signed languages. The network covers a wide range of subjects: data collection, metadata, annotation, and exploitation; these are the topics of the four workshops. The outcomes of the first two workshops are summarised in this paper; both workshops demonstrated that the need for dedicated knowledge on sign language corpora is especially salient in countries where researchers work alone or in small groups, which is still quite common in many places in Europe. While the original goal of the network was primarily to focus on corpus linguistics and language documentation, human language technology has gradually been incorporated as a user group of signed language resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,892 |
inproceedings | hawayek-etal-2010-bilingual | A Bilingual Dictionary {M}exican {S}ign {L}anguage-{S}panish/{S}panish-{M}exican {S}ign {L}anguage | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1010/ | Hawayek, Antoinette and Del Gratta, Riccardo and Cappelli, Giuseppe | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present a three-part bilingual specialized dictionary Mexican Sign Language-Spanish / Spanish-Mexican Sign Language. This dictionary will be the outcome of a three-year agreement between the Italian Consiglio Nazionale delle Ricerche and the Mexican Conacyt. Although many other sign language dictionaries have been provided to deaf communities, there are as yet no Mexican Sign Language dictionaries in Mexico. We want to stress the specialized nature of the proposed dictionary: the bilingual dictionary will contain frequently used general Spanish forms along with scholastic, course-specific specialized words whose meanings warrant comprehension of school curricula. We emphasize that this aspect of the bilingual dictionary can have a deep social impact, since we will furnish deaf people with the possibility to gain competence in the official language, which is necessary to ensure access to the school curriculum and to become full-fledged citizens. From a technical point of view, the dictionary consists of a relational database, where we have saved the sign parameters, and a graphical user interface especially designed to allow deaf children to retrieve signs using the relevant parameters and, thus, the meaning of the sign in Spanish. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,893
inproceedings | sharoff-etal-2010-web | The Web Library of {B}abel: evaluating genre collections | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1011/ | Sharoff, Serge and Wu, Zhili and Markert, Katja | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We present experiments in automatic genre classification on web corpora, comparing a wide variety of features on several different genre-annotated datasets (HGC, I-EN, KI-04, KRYS-I, MGC and SANTINIS). We investigate the performance of several types of features (POS n-grams, character n-grams and word n-grams) and show that simple character n-grams perform best on current collections because of their ability to generalise both lexical and syntactic phenomena related to genres. However, we also show that these impressive results might not be transferable to the wider web due to the lack of comparability between different annotation labels (many webpages cannot be described in terms of the genre labels in individual collections), lack of representativeness of existing collections (many genres are represented by webpages coming from a small number of sources) as well as problems in the reliability of genre annotation (many pages from the web are difficult to interpret in terms of the labels available). This suggests that more research is needed to understand genres on the Web. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,894
inproceedings | krieger-2010-general | A General Methodology for Equipping Ontologies with Time | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1012/ | Krieger, Hans-Ulrich | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the first part of this paper, we present a framework for enriching arbitrary upper or domain-specific ontologies with a concept of time. To do so, we need the notion of a time slice. Contrary to other approaches, we directly interpret the original entities as time slices in order to (i) avoid a duplication of the original ontology and (ii) to prevent a knowledge engineer from ontology rewriting. The diachronic representation of time is complemented by a sophisticated time ontology that supports underspecification and an arbitrarily fine granularity of time. As a showcase, we describe how the time ontology has been interfaced with the PROTON upper ontology. The second part investigates a temporal extension of RDF that replaces the usual triple notation by a more general tuple representation. In this setting, Hayes/ter Horst-like entailment rules are replaced by their temporal counterparts. Our motivation to move towards this direction is twofold: firstly, extending binary relation instances with time leads to a massive proliferation of useless objects (independently of the encoding); secondly, reasoning and querying with such extended relations is extremely complex, expensive, and error-prone. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,895 |
inproceedings | qian-etal-2010-python | A Python Toolkit for Universal Transliteration | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1013/ | Qian, Ting and Hollingshead, Kristy and Yoon, Su-youn and Kim, Kyoung-young and Sproat, Richard | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | We describe ScriptTranscriber, an open source toolkit for extracting transliterations in comparable corpora from languages written in different scripts. The system includes various methods for extracting potential terms of interest from raw text, for providing guesses on the pronunciations of terms, and for comparing two strings as possible transliterations using both phonetic and temporal measures. The system works with any script in the Unicode Basic Multilingual Plane and is easily extended to include new modules. Given comparable corpora, such as newswire text, in a pair of languages that use different scripts, ScriptTranscriber provides an easy way to mine transliterations from the comparable texts. This is particularly useful for underresourced languages, where training data for transliteration may be lacking, and where it is thus hard to train good transliterators. ScriptTranscriber provides an open source package that allows for ready incorporation of more sophisticated modules {\textemdash} e.g. a trained transliteration model for a particular language pair. ScriptTranscriber is available as part of the nltk contrib source tree at \url{http://code.google.com/p/nltk/}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,896 |
inproceedings | cohen-etal-2010-test | Test Suite Design for Biomedical Ontology Concept Recognition Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1014/ | Cohen, K. Bretonnel and Roeder, Christophe and Baumgartner Jr., William A. and Hunter, Lawrence E. and Verspoor, Karin | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Systems that locate mentions of concepts from ontologies in free text are known as ontology concept recognition systems. This paper describes an approach to the evaluation of the workings of ontology concept recognition systems through use of a structured test suite and presents a publicly available test suite for this purpose. It is built using the principles of descriptive linguistic fieldwork and of software testing. More broadly, we also seek to investigate what general principles might inform the construction of such test suites. The test suite was found to be effective in identifying performance errors in an ontology concept recognition system. The system could not recognize 2.1{\%} of all canonical forms, and it recognized no non-canonical forms at all. Regarding the question of general principles of test suite construction, we compared this test suite to a named entity recognition test suite constructor. We found that they had twenty features in total and that seven were shared between the two models, suggesting that there is a core of feature types that may be applicable to test suite construction for any similar type of application. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,897
inproceedings | lefever-hoste-2010-construction | Construction of a Benchmark Data Set for Cross-lingual Word Sense Disambiguation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1015/ | Lefever, Els and Hoste, V{\'e}ronique | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Given the recent trend to evaluate the performance of word sense disambiguation systems in a more application-oriented set-up, we report on the construction of a multilingual benchmark data set for cross-lingual word sense disambiguation. The data set was created for a lexical sample of 25 English nouns, for which translations were retrieved in 5 languages, namely Dutch, German, French, Italian and Spanish. The corpus underlying the sense inventory was the parallel data set Europarl. The gold standard sense inventory was based on the automatic word alignments of the parallel corpus, which were manually verified. The resulting word alignments were used to perform a manual clustering of the translations over all languages in the parallel corpus. The inventory then served as input for the annotators of the sentences, who were asked to provide a maximum of three contextually relevant translations per language for a given focus word. The data set was released in the framework of the SemEval-2010 competition. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,898 |
inproceedings | barron-cedeno-etal-2010-corpus | Corpus and Evaluation Measures for Automatic Plagiarism Detection | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1016/ | Barr{\'o}n-Cede{\~n}o, Alberto and Potthast, Martin and Rosso, Paolo and Stein, Benno | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The simple access to texts on digital libraries and the World Wide Web has led to an increased number of plagiarism cases in recent years, which renders manual plagiarism detection infeasible at large. Various methods for automatic plagiarism detection have been developed whose objective is to assist human experts in the analysis of documents for plagiarism. The methods can be divided into two main approaches: intrinsic and external. Unlike other tasks in natural language processing and information retrieval, it is not possible to publish a collection of real plagiarism cases for evaluation purposes since they cannot be properly anonymized. Therefore, current evaluations found in the literature are incomparable and, very often, not even reproducible. Our contribution in this respect is a newly developed large-scale corpus of artificial plagiarism useful for the evaluation of intrinsic as well as external plagiarism detection. Additionally, new detection performance measures tailored to the evaluation of plagiarism detection algorithms are proposed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,899
inproceedings | zinn-etal-2010-evolving | An Evolving e{S}cience Environment for Research Data in Linguistics | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1017/ | Zinn, Claus and Wittenburg, Peter and Ringersma, Jacquelijn | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The amount of research data in the Humanities is increasing at a fast pace. Metadata helps describe this data and make it accessible to interested researchers within and across institutions. While metadata interoperability is an issue that is being recognised and addressed, the systematic and user-driven provision of annotations and the linking together of resources into new organisational layers have received much less attention. This paper gives an overview of our evolving technological eScience environment to support such functionality. It describes two tools, ADDIT and ViCoS, which enable researchers, rather than archive managers, to organise and reorganise research data to fit their particular needs. The two tools, which are embedded into our institute's existing software landscape, are an initial step towards an eScience environment that gives our scientists easy access to (multimodal) research data of their interest, and empowers them to structure, enrich, link together, and share such data as they wish. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,900
inproceedings | scerri-etal-2010-classifying | Classifying Action Items for Semantic Email | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1018/ | Scerri, Simon and Gossen, Gerhard and Davis, Brian and Handschuh, Siegfried | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Email can be considered as a virtual working environment in which users are constantly struggling to manage the vast amount of exchanged data. Although most of this data belongs to well-defined workflows, these are implicit and largely unsupported by existing email clients. Semanta provides this support by enabling Semantic Email {\textemdash} email enhanced with machine-processable metadata about specific types of email Action Items (e.g. Task Assignment, Meeting Proposal). In the larger picture, these items form part of ad-hoc workflows (e.g. Task Delegation, Meeting Scheduling). Semanta is faced with a knowledge-acquisition bottleneck, as users cannot be expected to annotate each action item, and their automatic recognition proves difficult. This paper focuses on applying computationally treatable aspects of speech act theory for the classification of email action items. A rule-based classification model is employed, based on the presence or form of a number of linguistic features. The technology's evaluation suggests that whereas full automation is not feasible, the results are good enough to be presented as suggestions for the user to review. In addition, the rule-based system will bootstrap a machine learning system that is currently in development, to generate the initial training sets, which are then improved through the users' reviewing. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,901
inproceedings | tsvetkov-wintner-2010-automatic | Automatic Acquisition of Parallel Corpora from Websites with Dynamic Content | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1019/ | Tsvetkov, Yulia and Wintner, Shuly | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Parallel corpora are indispensable resources for a variety of multilingual natural language processing tasks. This paper presents a technique for fully automatic construction of constantly growing parallel corpora. We propose a simple and effective dictionary-based algorithm to extract parallel document pairs from a large collection of articles retrieved from the Internet, potentially containing manually translated texts. This algorithm was implemented and tested on Hebrew-English parallel texts. With properly selected thresholds, precision of 100{\%} can be obtained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,902 |
inproceedings | rentoumi-etal-2010-united | United we Stand: Improving Sentiment Analysis by Joining Machine Learning and Rule Based Methods | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1020/ | Rentoumi, Vassiliki and Petrakis, Stefanos and Klenner, Manfred and Vouros, George A. and Karkaletsis, Vangelis | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In the past, we have successfully used machine learning approaches for sentiment analysis. In the course of those experiments, we observed that our machine learning method, although able to cope well with figurative language, could not always reach a certain decision about the polarity orientation of sentences, yielding erroneous evaluations. We support the conjecture that these cases bearing mild figurativeness could be better handled by a rule-based system. These two systems, acting complementarily, could bridge the gap between machine learning and rule-based approaches. Experimental results using the corpus of the Affective Text Task of SemEval 07 provide evidence in favor of this direction. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,903
inproceedings | bel-2010-handling | Handling of Missing Values in Lexical Acquisition | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1021/ | Bel, N{\'u}ria | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this work we propose a strategy to reduce the impact of the sparse data problem in the tasks of lexical information acquisition based on the observation of linguistic cues. We propose a way to handle the uncertainty created by missing values, that is, when a zero value could mean either that the cue has not been observed because the word in question does not belong to the class, i.e. negative evidence, or that the word in question has just not been observed in the context sought by chance, i.e. lack of evidence. This uncertainty creates problems for the learner, because zero values for incompatible labelled examples make the cue lose its predictive capacity, and even though some samples display the sought context, they are not taken into account. In this paper we present the results of our experiments to try to reduce this uncertainty by, as other authors do (Joanis et al. 2007, for instance), replacing zero values with pre-processed estimates. Here we present a first round of experiments that have been the basis for the estimates of linguistic information motivated by lexical classes. We obtained experimental results that show a clear benefit of the proposed approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,904
inproceedings | carlsson-dalianis-2010-influence | Influence of Module Order on Rule-Based De-identification of Personal Names in Electronic Patient Records Written in {S}wedish | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1022/ | Carlsson, Elin and Dalianis, Hercules | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Electronic patient records (EPRs) are a valuable resource for research, but for confidentiality reasons they cannot be used freely. In order to make EPRs available to a wider group of researchers, sensitive information such as personal names has to be removed. De-identification is a process that makes this possible. Both rule-based and statistical, machine-learning-based methods exist to perform de-identification, but the latter require annotated training material, which exists only very sparsely for patient names. It is therefore necessary to use rule-based methods for de-identification of EPRs. Not much is known, however, about the order in which the various rules should be applied and how the different rules influence precision and recall. This paper aims to answer this research question by implementing and evaluating four common rules for de-identification of personal names in EPRs written in Swedish: (1) dictionary name matching, (2) title matching, (3) common words filtering and (4) learning from previous modules. The results show that to obtain the highest recall and precision, the rules should be applied in the following order: title matching, common words filtering and dictionary name matching. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,905
inproceedings | costa-jussa-etal-2010-automatic | Automatic and Human Evaluation Study of a Rule-based and a Statistical {C}atalan-{S}panish Machine Translation Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1023/ | Costa-juss{\`a}, Marta R. and Farr{\'u}s, Mireia and Mari{\~n}o, Jos{\'e} B. and Fonollosa, Jos{\'e} A. R. | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Machine translation systems can be classified into rule-based and corpus-based approaches, in terms of their core technology. Since both paradigms have been widely used in recent years, one of the aims in the research community is to know how these systems differ in terms of translation quality. To this end, this paper reports a study and comparison of a rule-based and a corpus-based (specifically, statistical) Catalan-Spanish machine translation system, both freely available on the web. The translation quality analysis is performed in two different domains: journalistic and medical. The systems are evaluated by using standard automatic measures, as well as by native human evaluators. Automatic results show that the statistical system performs better than the rule-based system. Human judgements show that in the Spanish-to-Catalan direction the statistical system also performs better than the rule-based system, while in the Catalan-to-Spanish direction it is the other way round. Although the statistical system obtains the best automatic scores, its errors tend to be more penalized by human judgements than the errors of the rule-based system. This can be explained by the fact that statistical errors are usually unexpected and do not follow any pattern. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,906
inproceedings | singh-ambati-2010-integrated | An Integrated Digital Tool for Accessing Language Resources | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1024/ | Singh, Anil Kumar and Ambati, Bharat Ram | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Language resources can be classified under several categories. To be able to query and operate on all (or most of) these categories using a single digital tool would be very helpful for a large number of researchers working on languages. We describe such a tool in this paper. It is different from other such tools in that it allows querying and transformation on different kinds of resources (such as corpora, lexicon and language models) with the same framework. Search options can be given based on the kind of resource being queried. It is possible to select a matched resource and open it for editing in the specialized interfaces with which that resource is associated. The tool also allows the extracted or modified data to be saved separately, apart from having the usual facilities like displaying the results in KeyWord-In-Context (KWIC) format. We also present the notation used for querying and transformation, which is comparable to but different from the Corpus Query Language (CQL). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,907 |
inproceedings | pedersen-larsen-2010-speech | A Speech Corpus for Dyslexic Reading Training | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1025/ | Pedersen, Jakob Schou and Larsen, Lars Bo | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | Traditional Danish reading training for dyslexic readers typically involves the presence of a professional reading therapist for guidance, advice and evaluation. Allowing dyslexic readers to train their reading skills on their own could not only benefit the dyslexics themselves in terms of increased flexibility but could also allow professional therapists to increase the number of dyslexic readers with whom they have professional contact. It is envisioned that an automated reading training tool operating on the basis of ASR could provide dyslexic users with such independence. However, only limited experience currently exists in handling dyslexic input (in Danish) with a speech recognizer. This paper reports on the establishment of a speech corpus of Danish dyslexic speech along with an annotation hereof and the setup of a proof-of-concept training tool allowing dyslexic users to improve their reading skills on their own. Despite relatively limited ASR performance, a usability evaluation by dyslexic users shows an unconditional belief in the fairness of the system and furthermore indicates a willingness to use such a training tool. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,908
inproceedings | benajiba-zitouni-2010-arabic | {A}rabic Word Segmentation for Better Unit of Analysis | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1026/ | Benajiba, Yassine and Zitouni, Imed | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | The Arabic language has a very rich morphology where a word is composed of zero or more prefixes, a stem and zero or more suffixes. This makes Arabic data sparse compared to other languages, such as English, and consequently word segmentation becomes very important for many Natural Language Processing tasks that deal with the Arabic language. We present in this paper two segmentation schemes, namely morphological segmentation and Arabic TreeBank segmentation, and we show their impact on an important natural language processing task, namely mention detection. Experiments on the Arabic TreeBank corpus show 98.1{\%} accuracy on morphological segmentation and 99.4{\%} on Arabic TreeBank segmentation. We also discuss the importance of segmenting the text; experiments show up to 6F points improvement in the mention detection system's performance when morphological segmentation is used instead of not segmenting the text. Obtained results also show that up to 3F points improvement is achieved when the appropriate segmentation style is used. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,909
inproceedings | pustejovsky-etal-2010-iso | {ISO}-{T}ime{ML}: An International Standard for Semantic Annotation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1027/ | Pustejovsky, James and Lee, Kiyong and Bunt, Harry and Romary, Laurent | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In this paper, we present ISO-TimeML, a revised and interoperable version of the temporal markup language, TimeML. We describe the changes and enrichments made, while framing the effort in a more general methodology of semantic annotation. In particular, we assume a principled distinction between the annotation of an expression and the representation which that annotation denotes. This involves not only the specification of an annotation language for a particular phenomenon, but also the development of a meta-model that allows one to interpret the syntactic expressions of the specification semantically. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,910 |
inproceedings | stankovic-etal-2010-gis | {GIS} Application Improvement with Multilingual Lexical and Terminological Resources | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1028/ | Stankovi{\'c}, Ranka and Obradovi{\'c}, Ivan and Kitanovi{\'c}, Olivera | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper introduces the results of integrating lexical and terminological resources, most of them developed within the Human Language Technology (HLT) Group at the University of Belgrade, with the Geological information system of Serbia (GeolISS) developed at the Faculty of Mining and Geology and funded by the Ministry of Environmental Protection. The approach to GeolISS development, which is aimed at the integration of existing geologic archives, data from published maps on different scales, newly acquired field data, and intranet and internet publishing of geologic data, is given, followed by the description of the geologic multilingual vocabulary and other lexical and terminological resources used. Two basic results are outlined: multilingual map annotation and improvement of queries for the GeolISS geodatabase. Multilingual labelling and annotation of maps for their graphic display and printing have been tested with Serbian, which describes regional information in the local language, and English, used for sharing geographic information with the world, although the geological vocabulary offers the possibility for integration of other languages as well. The resources also enable semantic and morphological expansion of queries, the latter being very important in highly inflective languages, such as Serbian. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,911
inproceedings | chambers-jurafsky-2010-database | A Database of Narrative Schemas | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1029/ | Chambers, Nathanael and Jurafsky, Dan | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes a new language resource of events and semantic roles that characterize real-world situations. Narrative schemas contain sets of related events (edit and publish), a temporal ordering of the events (edit before publish), and the semantic roles of the participants (authors publish books). This type of world knowledge was central to early research in natural language understanding; scripts, one of the main formalisms, represented common sequences of events that occur in the world. Unfortunately, most of this knowledge was hand-coded and time-consuming to create. Current machine learning techniques, as well as a new approach to learning through coreference chains, have allowed us to automatically extract rich event structure from open-domain text in the form of narrative schemas. The narrative schema resource described in this paper contains approximately 5000 unique events combined into schemas of varying sizes. We describe the resource, how it is learned, and a new evaluation of the coverage of these schemas over unseen documents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,912
inproceedings | kokkinakis-gerdin-2010-swedish | A {S}wedish Scientific Medical Corpus for Terminology Management and Linguistic Exploration | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1030/ | Kokkinakis, Dimitrios and Gerdin, Ulla | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | This paper describes the development of a new Swedish scientific medical corpus. We provide a detailed description of the characteristics of this new collection, as well as results of applying the corpus to term management tasks, including terminology validation and terminology extraction. Although the corpus is representative of the scientific medical domain, it also covers in detail many specialised sub-disciplines, such as diabetes and osteoporosis, which makes it suitable for facilitating the production of smaller but more focused sub-corpora. We address this issue by making explicit some features of the corpus in order to demonstrate its usability, particularly for the quality assessment of subsets of official terminologies such as the Systematized NOmenclature of MEDicine - Clinical Terms (SNOMED CT). Domain-dependent language resources, labelled or not, are crucial components for progressing R{\&}D in the human language technology field, since such resources are an indispensable, integrated part of terminology management, evaluation, software prototyping and design validation, and a prerequisite for the development and evaluation of a number of sublanguage-dependent applications, including information extraction, text mining and information retrieval. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,913
inproceedings | proisl-kabashi-2010-using | Using High-Quality Resources in {NLP}: The Valency Dictionary of {E}nglish as a Resource for Left-Associative Grammars | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Rosner, Mike and Tapias, Daniel | may | 2010 | Valletta, Malta | European Language Resources Association (ELRA) | https://aclanthology.org/L10-1031/ | Proisl, Thomas and Kabashi, Besim | Proceedings of the Seventh International Conference on Language Resources and Evaluation ({LREC}`10) | null | In Natural Language Processing (NLP), the quality of a system depends to a great extent on the quality of the linguistic resources it uses. One area where precise information is particularly needed is valency. The unpredictable character of valency properties requires a reliable source of information for syntactic and semantic analysis. There are several (electronic) dictionaries that provide the necessary information. One such dictionary that contains especially detailed valency descriptions is the Valency Dictionary of English. We will discuss how the Valency Dictionary of English in machine-readable form can be used as a resource for NLP. We will use valency descriptions that are freely available online via the Erlangen Valency Pattern Bank which contains most of the information from the printed dictionary. We will show that the valency data can be used for accurately parsing natural language with a rule-based approach by integrating it into a Left-Associative Grammar. The Valency Dictionary of English can therefore be regarded as being well suited for NLP purposes. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 78,914 |