entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
inproceedings | marcu-etal-2010-utilizing | Utilizing Automated Translation with Quality Scores to Increase Productivity | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.2/ | Marcu, Daniel and Egan, Kathleen and Simmons, Chuck and Mahlmann, Ning-Ning | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | Automated translation can assist with a variety of translation needs in government, from speeding up access to information for intelligence work to helping human translators increase their productivity. However, government entities need to have a mechanism in place so that they know whether or not they can trust the output from automated translation solutions. In this presentation, Language Weaver will present a new capability ``TrustScore'': an automated scoring algorithm that communicates how good the automated translation is, using a meaningful metric. With this capability, each translation is automatically assigned a score from 1 to 5 in the TrustScore. A score of 1 would indicate that the translation is unintelligible; a score of 3 would indicate that meaning has been conveyed and that the translated content is actionable. A score approaching 4 or higher would indicate that meaning and nuance have been carried through. This automatic prediction of quality has been validated by testing done across significant numbers of data points in different companies and on different types of content. After outlining TrustScore, and how it works, Language Weaver will discuss how a scoring mechanism like TrustScore could be used in a translation productivity workflow in government to assist linguists with day to day translation work. This would enable them to further benefit from their investments in automated translation software. Language Weaver would also share how TrustScore is used in commercial deployments to cost effectively publish information in near real time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,367 |
inproceedings | simmons-2010-foreign | Foreign Media Collaboration Framework ({FMCF}) | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.4/ | Simmons, Chuck | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | The Foreign Media Collaboration Framework (FMCF) is the latest approach by NASIC to provide a comprehensive system to process foreign language materials. FMCF is a Services Oriented Architecture (SOA) that provides an infrastructure to manage HLT tools, products, workflows, and services. This federated SOA solution adheres to DISA's NCES SOA Governance Model, DDMS XML for Metadata Capture/Dissemination, and IC-ISM for Security. The FMCF provides a cutting edge infrastructure that encapsulates multiple capabilities from multiple vendors in one place. This approach will accelerate HLT development, contain sustainment cost, minimize training, and bring the MT, OCR, ASR, audio/video, entity extraction, analytic tools and database under one umbrella, thus reducing the total cost of ownership. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,369 |
inproceedings | egan-2010-cross | Cross Lingual {A}rabic Blog Alerting ({COLABA}) | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.5/ | Egan, Kathleen | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | Social media and tools for communication over the Internet have expanded a great deal in recent years. This expansion offers a diverse set of users a means to communicate more freely and spontaneously in mixed languages and genres (blogs, message boards, chat, texting, video and images). Dialectal Arabic is pervasive in written social media, however current state of the art tools made for Modern Standard Arabic (MSA) fail on Arabic dialects. COLABA enables MSA users to interpret dialects correctly. It helps find Arabic colloquial content that is currently not easily searchable and accessible to MSA queries. The COLABA team has built a suite of tools that will offer users the ability to anonymously capture online unstructured media content from blogs to comprehend, organize, and validate content from informal and colloquial genres of online communication in MSA and a variety of Arabic dialects. The DoD/Combating Terrorism Technical Support Office/Technical Support Working Group (CTTSO/TSWG) awarded the contract to Acxiom Corporation and partners from MTI/IBM, Columbia University, Janya and Wichita State University to bring joint expertise to address this challenge. The suite has several use applications: support for language and cultural learning by making colloquial Arabic intelligible to students of MSA; retrieval and prioritization for triage and content analysis by finding Arabic colloquial and dialect terms that today's search engines miss, by providing appropriate interpretations of colloquial Arabic, which is opaque to current analytics approaches, and by identifying named entities, events, topics, and sentiment; and enabling improved translations by MSA-trained MT systems through decreases in out-of-vocabulary terms achieved by means of colloquial term conversion to MSA. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,370 |
inproceedings | jiang-2010-pre | Pre-editing for Machine Translation | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.6/ | Jiang, Weimin | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | It is common practice that linguists will do MT post-editing to improve translation accuracy and fluency. This presentation however, examines the importance of pre-editing source material to improve MT. Even when a digital source file which is literally correct is used for MT, there are still some factors that have significant effect on MT translation accuracy and fluency. Based on 35 examples from more than 20 professional journals and websites, this article is about an experiment of pre-editing source material for Chinese-English MT in the S and T domain. Pertinent examples are selected to illustrate how machine translation accuracy and fluency can be enhanced by pre-editing which includes the following four areas: to provide a straightforward sentence structure, to improve punctuation, to use straightforward wording, and to eliminate redundancy and superfluous elements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,371 |
inproceedings | roberson-2010-multi | Multi-Language Desktop Suite | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.7/ | Roberson, Brian | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | Professional language analysts leverage a myriad of tools in their quest to produce accurate translations of foreign language material. The effectiveness of these tools ultimately affects resource allocation, information dissemination and subsequent follow-on mission planning; all three of which are vital, time-critical components in the intelligence cycle. This presentation will highlight the need for interactive tools that perform jointly in an operational environment, focusing on a dynamic suite of foreign language tools packaged into a desktop application and serving in a machine translation role. Basis Technology's Arabic/Afghan Desktop Suite (ADS) supports DOMEX, CELLEX, and HUMINT missions while being the most powerful Arabic, Dari and Pushto text analytic and processing software available. The ADS translates large scale lists of names from foreign language to English and also pinpoints place names appearing in reports with their coordinate locations on maps. With standardization output having to be more accurate than ever, the ADS ensures conformance with USG transliteration standards for Arabic script languages, including IC, BGN/PCGN, SATTS and MELTS. The ADS enables optimization of your limited resources and allows your analysts and linguists to be tasked more efficiently throughout the workflow process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,372 |
inproceedings | summers-sawaf-2010-user | User-generated System for Critical Document Triage and Exploitation{--}Version 2011 | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.8/ | Summers, Kristen and Sawaf, Hassan | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | CACI has developed and delivered systems for document exploitation and processing to Government customers around the world. Many of these systems include advanced language processing capabilities in order to enable rapid triage of vast collections of foreign language documents, separating the content that requires immediate human attention from the less immediately pressing material. AppTek provides key patent-pending Machine Translation technology for this critical process, rendering material in Arabic, Farsi and other languages into an English rendition that enables both further automated processing and rapid review by monolingual analysts, to identify the documents that require immediate linguist attention. Both CACI and AppTek have been working with customers to develop capabilities that enable them, the users, to be the ones in command of making their systems learn and continuously improve. We will describe how we put this critical user requirement into the systems and the key role that the user's involvement played in this. We will also discuss some of the key components of the system and what the customer-centric evolution of the system will be, including our document translation workflow, the machine translation technology within it, and our approaches to supporting the technology and sustaining its success designed around adapting to user needs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,373 |
inproceedings | klavans-2010-task | Task-based evaluation methods for machine translation, in practice and theory | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.9/ | Klavans, Judith L. | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | A panel of industry and government experts will discuss ways in which they have applied task-based evaluation for Machine Translation and other language technologies in their organizations and share ideas for new methods that could be tried in the future. As part of the discussion, the panelists will address some of the following points: What task-based evaluation means within their organization, i.e., how task-based evaluation is defined; How task-based evaluation impacts the use of MT technologies in their work environment; Whether task-based evaluation correlates with MT developers' automated metrics and if not, how do we arrive at automated metrics that do correlate with the more expensive task-based evaluation; What ``lessons-learned'' resulted from the course of performing task-based evaluation; How task-based evaluations can be generalized to multiple workflow environments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,374 |
inproceedings | holland-2010-exploring | Exploring the {AFPAK} Web | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.10/ | Holland, Rod | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | In spite of low literacy levels in Afghanistan and the Tribal Areas of Pakistan, the Pashto and Dari regions of the World Wide Web manifest diverse content from authors with a broad range of viewpoints. We have used cross-language information retrieval (CLIR) with machine translation to explore this content, and present an informal study of the principal genres that we have encountered. The suitability and limitations of existing machine translation packages for these languages for the exploitation of this content are discussed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,375 |
inproceedings | bemish-2010-use | Use of {HLT} tools within the {US} Government | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.12/ | Bemish, Nicholas | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | In today's post 9/11 world, the need for qualified linguists to process all the foreign language materials that are collected/confiscated overseas and at home has grown considerably. To date, a gap exists in the number of linguists needed to process all this material. To fill this gap, the government has invested in the research, development and implementation of Human Language Technologies into the linguist workflow. Most of the current DOMEX workflows incorporate HLT tools, whether that is Machine Translation, Named Entity Extraction, Name Normalization or Transliteration tools. These tools aid the linguists in processing and translating DOMEX material, cutting back on the amount of time needed to sift through all the material. In addition to the technologies used in workflow processes, we have also implemented tools for intelligence analysts, such as the Broadcast Monitoring System and Tripwire. These tools allow non-language qualified analysts to search through foreign language material and exploit that material for intelligence value. These tools implement such technologies as Speech-to-text and machine translation. Part of this effort to fill the gap in the ability to process all this information has been collaboration amongst the members of the Intelligence Community on the research and development of tools. This type of engagement allows the government to save time and money in eliminating the duplication of efforts and allows government agencies to share their ideas and expertise. Our presentation will address some of the tools that are currently in use throughout DoD; being considered for use; some of the challenges we face; and how we are making best use of the HLT development and research that is supporting our needs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,377 |
inproceedings | desilets-2010-webitext | {W}e{B}i{T}ext: Multilingual Concordancer Built from Public High Quality Web Content | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.13/ | D{\'e}silets, Alain | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | In this paper, we describe WeBiText (www.webitext.ca) and how it is being used. WeBiText is a concordancer that allows translators to search in large, high-quality multilingual web sites, in order to find solutions to translation problems. After a quick overview of the system, we present results from an analysis of its logs, which provides a picture of how the tool is being used and how well it performs. We show that it is mostly used to find solutions for short, two or three word translation problems. The system produces at least one hit for 58{\%} of the queries, and hits from at least five different web pages in 41{\%} of cases. We show that 36{\%} of the queries correspond to specialized language problems, which is much higher than what was previously reported for a similar concordancer based on the Canadian Hansard (TransSearch). We also provide a back of the envelope calculation of the current economic impact of the tool, which we estimate at {\$}1 million per year, and growing rapidly. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,378 |
inproceedings | bailey-2010-data | Data Preparation for Machine Translation Customization | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.14/ | Bailey, Stacey | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | The presentation will focus on ongoing work to develop sentence-aligned Chinese-English data for machine translation customization. Fully automatic alignment produces noisy data (e.g., containing OCR and alignment errors), and we are looking at the question of just how noisy noisy data can be and still produce translation improvements. Related, data clean-up efforts are time- and labor-intensive and we are examining whether translation improvements justify the clean-up costs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,379 |
inproceedings | ladwig-2010-language | Language {NOW} | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.15/ | Ladwig, Michael | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | Language Now is a natural language processing (NLP) research and development program with a goal of improving the performance of machine translation (MT) and other NLP technologies in mission-critical applications. The Language NOW research and development program has produced the following four primary advances as Government license-free technology: 1) A consistent and simple user interface developed to allow non-technical users, regardless of language proficiency, to use NLP technology in exploiting foreign language text content. Language NOW research has produced first-of-a-kind capabilities such as detection and handling of structured data, direct processing and visualization of foreign language data with transliterations and translations. 2) A highly efficient NLP integration framework, the Abstract Scalable Language Services (ASLS). ASLS offers system developers easy implementation of an efficient integrated service oriented architecture suitable for devices ranging from handheld computers to large enterprise computer clusters. 3) Service wrappers integrating commercial, Government license-free, open source and research software that provide NLP services such as machine translation, named entity recognition, optical character recognition (OCR), transliteration and text search. 4) STatistical Engines for Language Analysis (STELAE) and Maximum Entropy Extraction Pipeline (MEEP) tools that produce customized statistical machine translation and hybrid statistical/rule-based named entity recognition engines. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,380 |
inproceedings | mcintyre-2010-translation | Translation of {C}hinese Entities in {R}ussian Text | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.17/ | McIntyre, William | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | This briefing addresses the development of a conversion table that will enable a translator to render Chinese names, locations, and nomenclature into proper Pinyin. As a rule, Russian Machine Translation is a robust system that provides good results. It is a mature system with extensive glossaries and can be useful for translating documents across many disciplines. However, as a result of the transliteration process, Russian MT will not convert Chinese terms from Russian into the Pinyin standard. This standard is used by most databases and the internet. Currently the MT software is performing as it was designed, but this problem impacts the accuracy of the MT making it almost useless for many purposes including data retrieval. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,382 |
inproceedings | van-ess-dykema-gerber-2010-parallel | Parallel Corpus Development at {NVTC} | null | oct # " 31-" # nov # " 4" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-government.18/ | Van Ess-Dykema, Carol and Gerber, Laurie | Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program | null | In this paper, we describe the methods used to develop an exchangeable translation memory bank of sentence-aligned Mandarin Chinese - English sentences. This effort is part of a larger effort, initiated by the National Virtual Translation Center (NVTC), to foster collaboration and sharing of translation memory banks across the Intelligence Community and the Department of Defense. In this paper, we describe our corpus creation process - a largely automated process - highlighting the human interventions that are still deemed necessary. We conclude with a brief discussion of how this work will affect plans for NVTC's new translation management workflow and future research to increase the performance of the automated components of the corpus creation process. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,383 |
inproceedings | munro-2010-crowdsourced | Crowdsourced translation for emergency response in {H}aiti: the global collaboration of local knowledge | null | oct # " 31" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-workshop.1/ | Munro, Robert | Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective | null | In the wake of the January 12 earthquake in Haiti it quickly became clear that the existing emergency response services had failed but text messages were still getting through. A number of people quickly came together to establish a text-message based emergency reporting system. There was one hurdle: the majority of the messages were in Haitian Kreyol, which for the most part was not understood by the primary emergency responders, the US Military. We therefore crowdsourced the translation of messages, allowing volunteers from within the Haitian Kreyol and French-speaking communities to translate, categorize and geolocate the messages in real-time. Collaborating online, they employed their local knowledge of locations, regional slang, abbreviations and spelling variants to process more than 40,000 messages in the first six weeks alone. According to the responders, this saved hundreds of lives and helped direct the first food and aid to tens of thousands. The average turn-around from a message arriving in Kreyol to it being translated, categorized, geolocated and streamed back to the responders was 10 minutes. Collaboration among translators was crucial for data-quality, motivation and community contacts, enabling richer value-adding in the translation than would have been possible from any one person. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,391 |
inproceedings | zetzsche-2010-crowdsourcing | Crowdsourcing and the Professional Translator | null | oct # " 31" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-workshop.2/ | Zetzsche, Jost | Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective | null | The recent emergence of crowdsourced translation {\`a} la Facebook or Twitter has exposed a raw nerve in the translation industry. Perceptions of ill-placed entitlement -- we are the professionals who have the ``right'' to translate these products -- abound. And many have felt threatened by something that carries not only a relatively newly coined term -- crowdsourcing -- but seems in and of itself completely new. Or is it? | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,392 |
inproceedings | kronrod-etal-2010-position | Position Paper: Improving Translation via Targeted Paraphrasing | null | oct # " 31" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-workshop.3/ | Kronrod, Yakov and Resnik, Philip and Buzek, Olivia and Hu, Chang and Quinn, Alex and Bederson, Ben | Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective | null | Targeted paraphrasing is a new approach to the problem of obtaining cost-effective, reasonable quality translation that makes use of simple and inexpensive human computations by monolingual speakers in combination with machine translation. The key insight behind the process is that it is possible to spot likely translation errors with only monolingual knowledge of the target language, and it is possible to generate alternative ways to say the same thing (i.e. paraphrases) with only monolingual knowledge of the source language. Evaluations demonstrate that this approach can yield substantial improvements in translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,393 |
inproceedings | kumaran-etal-2010-wikibabel | {W}iki{BABEL}: A System for Multilingual {W}ikipedia Content | null | oct # " 31" | 2010 | Denver, Colorado, USA | Association for Machine Translation in the Americas | https://aclanthology.org/2010.amta-workshop.4/ | Kumaran, A. and Datha, Naren and Ashok, B. and Saravanan, K. and Ande, Anil and Sharma, Ashwani and Vedantham, Sridhar and Natampally, Vidya and Dendi, Vikram and Maurice, Sandor | Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective | null | This position paper outlines our project {--} WikiBABEL {--} which will be released as an open source project for the creation of multilingual Wikipedia content, and has potential to produce parallel data as a by-product for Machine Translation systems research. We discuss its architecture, functionality and the user-experience components, and briefly present an analysis that emphasizes the resonance that the WikiBABEL design and the planned involvement with Wikipedia has with the open source communities in general and Wikipedians in particular. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 80,394 |
inproceedings | daoud-daoud-2009-arabic | {A}rabic Disambiguation Using Dependency Grammar | Nazarenko, Adeline and Poibeau, Thierry | jun | 2009 | Senlis, France | ATALA | https://aclanthology.org/2009.jeptalnrecital-court.8/ | Daoud, Daoud and Daoud, Mohammad | Actes de la 16{\`e}me conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Articles courts | 67--76 | In this paper, we present a new approach to disambiguating Arabic using a joint rule-based model which is conceptualized using Dependency Grammar. This approach helps in highly accurate analysis of sentences. The analysis produces a semantic net like structure expressed by means of Universal Networking Language (UNL) - a recently proposed interlingua. Extremely varied and complex phenomena of Arabic language have been addressed. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,474 |
inproceedings | malaise-etal-2009-relevance | Relevance of {ASR} for the Automatic Generation of Keywords Suggestions for {TV} programs | Nazarenko, Adeline and Poibeau, Thierry | jun | 2009 | Senlis, France | ATALA | https://aclanthology.org/2009.jeptalnrecital-court.34/ | Malais{\'e}, V{\'e}ronique and Gazendam, Luit and Heeren, Willemijn and Ordelman, Roeland and Brugman, Hennie | Actes de la 16{\`e}me conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Articles courts | 311--320 | Semantic access to multimedia content in audiovisual archives is to a large extent dependent on quantity and quality of the metadata, and particularly the content descriptions that are attached to the individual items. However, the manual annotation of collections puts heavy demands on resources. A large number of archives are introducing (semi) automatic annotation techniques for generating and/or enhancing metadata. The NWO funded CATCH-CHOICE project has investigated the extraction of keywords from textual resources related to TV programs to be archived (context documents), in collaboration with the Dutch audiovisual archives, Sound and Vision. This paper investigates the suitability of Automatic Speech Recognition transcripts produced in the CATCH-CHoral project for generating such keywords, which we evaluate against manual annotations of the documents, and against keywords automatically generated from context documents describing the TV programs' content. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,500 |
inproceedings | santaholma-2009-comparing | Comparing Speech Recognizers Derived from Mono- and Multilingual Grammars | Mondary, Thibault and Bossard, Aur{\'e}lien and Hamon, Thierry | jun | 2009 | Senlis, France | ATALA | https://aclanthology.org/2009.jeptalnrecital-recital.2/ | Santaholma, Marianne | Actes de la 16{\`e}me conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. REncontres jeunes Chercheurs en Informatique pour le Traitement Automatique des Langues | 11--20 | This paper examines the performance of multilingual parameterized grammar rules on speech recognition. We present a performance comparison of two different types of Japanese and English grammar-based speech recognizers. One system is derived from monolingual grammar rules and the other from multilingual parameterized grammar rules. The latter one uses hence the same grammar rules for creation of the language models for these two different languages. We carried out experiments on speech recognition of limited domain dialog application. These experiments show that the language models derived from multilingual parameterized grammar rules (1) perform equally well on both tested languages, on English and Japanese, and (2) that the performance is comparable with the recognizers derived from monolingual grammars that were explicitly developed for these languages. This suggests that the sharing grammar resources between different languages could be one solution for more efficient development of rule-based speech recognizers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,530 |
inproceedings | paul-2009-overview | Overview of the {IWSLT} 2009 evaluation campaign | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.1/ | Paul, Michael | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 1--18 | This paper gives an overview of the evaluation campaign results of the International Workshop on Spoken Language Translation (IWSLT) 2009. In this workshop, we focused on the translation of task-oriented human dialogs in travel situations. The speech data was recorded through human interpreters, where native speakers of different languages were asked to complete certain travel-related tasks like hotel reservations using their mother tongue. The translation of the freely-uttered conversation was carried out by human interpreters. The obtained speech data was annotated with dialog and speaker information. The translation directions were English into Chinese and vice versa for the Challenge Task, and Arabic, Chinese, and Turkish, a new addition this year, into English for the standard BTEC Task. In total, 18 research groups participated in this year's event. Automatic and subjective evaluations were carried out in order to investigate the impact of task-oriented human dialogs on automatic speech recognition (ASR) and machine translation (MT) system performance, as well as the robustness of state-of-the-art MT systems for speech-to-speech translation in a dialog scenario. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,544
inproceedings | kopru-2009-apptek | {A}pp{T}ek {T}urkish-{E}nglish machine translation system description for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.2/ | K{\"o}pr{\"u}, Sel{\c{c}}uk | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 19--23 | In this paper, we describe the techniques that are explored in the AppTek system to enhance the translations in the Turkish to English track of IWSLT09. The submission was generated using a phrase-based statistical machine translation system. We also researched the usage of morpho-syntactic information and the application of word reordering in order to improve the translation results. The results are evaluated based on BLEU and METEOR scores. We show that the usage of morpho-syntactic information yields 3 BLEU points gain in the overall system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,545
inproceedings | costa-jussa-banchs-2009-barcelona | {B}arcelona Media {SMT} system description for the {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.3/ | Costa-juss{\`a}, Marta R. and Banchs, Rafael E. | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 24--28 | This paper describes the Barcelona Media SMT system in the IWSLT 2009 evaluation campaign. The Barcelona Media system is an statistical phrase-based system enriched with source context information. Adding source context in an SMT system is interesting to enhance the translation in order to solve lexical and structural choice errors. The novel technique uses a similarity metric among each test sentence and each training sentence. First experimental results of this technique are reported in the Arabic and Chinese Basic Traveling Expression Corpus (BTEC) task. Although working in a single domain, there are ambiguities in SMT translation units and slight improvements in BLEU are shown in both tasks (Zh2En and Ar2En). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,546 |
inproceedings | ma-etal-2009-low | Low-resource machine translation using {M}a{T}r{E}x | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.4/ | Ma, Yanjun and Okita, Tsuyoshi and {\c{C}}etino{\u{g}}lu, {\"O}zlem and Du, Jinhua and Way, Andy | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 29--36 | In this paper, we give a description of the Machine Translation (MT) system developed at DCU that was used for our fourth participation in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT 2009). Two techniques are deployed in our system in order to improve the translation quality in a low-resource scenario. The first technique is to use multiple segmentations in MT training and to utilise word lattices in the decoding stage. The second technique is used to select the optimal training data that can be used to build MT systems. In this year's participation, we use three different prototype SMT systems, and the outputs from each system are combined using a standard system combination method. Our system is the top system for the Chinese{--}English CHALLENGE task in terms of BLEU score. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,547
inproceedings | bertoldi-etal-2009-fbk | {FBK} at {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.5/ | Bertoldi, Nicola and Bisazza, Arianna and Cettolo, Mauro and Sanchis-Trilles, Germ{\'a}n and Federico, Marcello | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 37--44 | This paper reports on the participation of FBK at the IWSLT 2009 Evaluation. This year we worked on the Arabic-English and Turkish-English BTEC tasks with a special effort on linguistic preprocessing techniques involving morphological segmentation. In addition, we investigated the adaptation problem in the development of systems for the Chinese-English and English-Chinese challenge tasks; in particular, we explored different ways for clustering training data into topic or dialog-specific subsets: by producing (and combining) smaller but more focused models, we intended to make better use of the available training data, with the ultimate purpose of improving translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,548 |
inproceedings | lepage-etal-2009-greyc | The {GREYC} translation memory for the {IWSLT} 2009 evaluation campaign | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.6/ | Lepage, Yves and Lardilleux, Adrien and Gosme, Julien | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 45--49 | This year's GREYC translation system is an improved translation memory that was designed from scratch to experiment with an approach whose goal is just to improve over the output of a standard translation memory by making heavy use of sub-sentential alignments in a restricted case of translation by analogy. The tracks the system participated in are all BTEC tracks: Arabic to English, Chinese to English, and Turkish to English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,549
inproceedings | duan-etal-2009-i2rs | {I}2{R}'s machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.7/ | Duan, Xiangyu and Xiong, Deyi and Zhang, Hui and Zhang, Min and Li, Haizhou | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 50--54 | In this paper, we describe the system and approach used by the Institute for Infocomm Research (I2R) for the IWSLT 2009 spoken language translation evaluation campaign. Two kinds of machine translation systems are applied, namely, a phrase-based machine translation system and a syntax-based machine translation system. To test the syntax-based machine translation system on spoken language translation, several system variants are explored. On top of both phrase-based and syntax-based single systems, we further use a rescoring method to improve the individual system performance and a system combination method to combine the strengths of the different individual systems. Rescoring is applied on each single system output, and system combination is applied on all rescoring outputs. Finally, our system combination framework shows better performance in the Chinese-English BTEC task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,550
inproceedings | mi-etal-2009-ict | The {ICT} statistical machine translation system for the {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.8/ | Mi, Haitao and Li, Yang and Xia, Tian and Xiao, Xinyan and Feng, Yang and Xie, Jun and Xiong, Hao and Tu, Zhaopeng and Zheng, Daqi and Lu, Yanjuan and Liu, Qun | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 55--59 | This paper describes the ICT Statistical Machine Translation systems that were used in the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2009. For this year's evaluation, we participated in the Challenge Task (Chinese-English and English-Chinese) and BTEC Task (Chinese-English). And we mainly focus on one new method to improve a single system's translation quality. Specifically, we developed a sentence-similarity based development set selection technique. For each task, we finally submitted the single system that got the maximum BLEU scores on the selected development set. The four single translation systems are based on different techniques: a linguistically syntax-based system, two formally syntax-based systems and a phrase-based system. Typically, we didn't use any rescoring or system combination techniques in this year's evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,551
inproceedings | bougares-etal-2009-lig | {LIG} approach for {IWSLT}09 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.9/ | Bougares, Fethi and Besacier, Laurent and Blanchon, Herv{\'e} | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 60--64 | This paper describes the LIG experiments in the context of IWSLT09 evaluation (Arabic to English Statistical Machine Translation task). Arabic is a morphologically rich language, and recent experimentations in our laboratory have shown that the performance of Arabic to English SMT systems varies greatly according to the Arabic morphological segmenters applied. Based on this observation, we propose to use simultaneously multiple segmentations for machine translation of Arabic. The core idea is to keep the ambiguity of the Arabic segmentation in the system input (using confusion networks or lattices). Then, we hope that the best segmentation will be chosen during MT decoding. The mathematics of this multiple segmentation approach are given. Practical implementations in the case of verbatim text translation as well as speech translation (outside of the scope of IWSLT09 this year) are proposed. Experiments conducted in the framework of IWSLT evaluation campaign show the potential of the multiple segmentation approach. The last part of this paper explains in detail the different systems submitted by LIG at IWSLT09 and the results obtained. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,552 |
inproceedings | schwenk-etal-2009-liums | {LIUM}'s statistical machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.10/ | Schwenk, Holger and Barrault, Lo{\"i}c and Est{\`e}ve, Yannick and Lambert, Patrik | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 65--70 | This paper describes the systems developed by the LIUM laboratory for the 2009 IWSLT evaluation. We participated in the Arabic and Chinese to English BTEC tasks. We developed three different systems: a statistical phrase-based system using the Moses toolkit, a Statistical Post-Editing system and a hierarchical phrase-based system based on Joshua. A continuous space language model was deployed to improve the modeling of the target language. These systems are combined by a confusion network based approach. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,553
inproceedings | shen-etal-2009-mit | The {MIT}-{LL}/{AFRL} {IWSLT}-2009 {MT} system | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.11/ | Shen, Wade and Delaney, Brian and Aminzadeh, A. Ryan and Anderson, Tim and Slyh, Ray | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 71--78 | This paper describes the MIT-LL/AFRL statistical MT system and the improvements that were developed during the IWSLT 2009 evaluation campaign. As part of these efforts, we experimented with a number of extensions to the standard phrase-based model that improve performance on the Arabic and Turkish to English translation tasks. We discuss the architecture of the MIT-LL/AFRL MT system, improvements over our 2008 system, and experiments we ran during the IWSLT-2009 evaluation. Specifically, we focus on 1) Cross-domain translation using MAP adaptation and unsupervised training, 2) Turkish morphological processing and translation, 3) improved Arabic morphology for MT preprocessing, and 4) system combination methods for machine translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,554 |
inproceedings | li-etal-2009-casia | The {CASIA} statistical machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.13/ | Li, Maoxi and Zhang, Jiajun and Zhou, Yu and Zong, Chengqing | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 83--90 | This paper reports on the participation of CASIA (Institute of Automation Chinese Academy of Sciences) at the evaluation campaign of the International Workshop on Spoken Language Translation 2009. We participated in the challenge tasks for Chinese-to-English and English-to-Chinese translation respectively and the BTEC task for Chinese-to-English translation only. For all of the tasks, system performance is improved with some special methods as follows: 1) combining different results of Chinese word segmentation, 2) combining different results of word alignments, 3) adding reliable bilingual words with high probabilities to the training data, 4) handling named entities including person names, location names, organization names, temporal and numerical expressions additionally, 5) combining and selecting translations from the outputs of multiple translation engines, 6) replacing Chinese character with Chinese Pinyin to train the translation model for Chinese-to-English ASR challenge task. This is a new approach that has never been introduced before. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,556 |
inproceedings | nakov-etal-2009-nus | The {NUS} statistical machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.14/ | Nakov, Preslav and Liu, Chang and Lu, Wei and Ng, Hwee Tou | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 91--98 | We describe the system developed by the team of the National University of Singapore for the Chinese-English BTEC task of the IWSLT 2009 evaluation campaign. We adopted a state-of-the-art phrase-based statistical machine translation approach and focused on experiments with different Chinese word segmentation standards. In our official submission, we trained a separate system for each segmenter and we combined the outputs in a subsequent re-ranking step. Given the small size of the training data, we further re-trained the system on the development data after tuning. The evaluation results show that both strategies yield sizeable and consistent improvements in translation quality. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,557 |
inproceedings | wu-etal-2009-uot | The {UOT} system | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.15/ | Wu, Xianchao and Matsuzaki, Takuya and Okazaki, Naoaki and Miyao, Yusuke and Tsujii, Jun{'}ichi | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 99--106 | We present the UOT Machine Translation System that was used in the IWSLT-09 evaluation campaign. This year, we participated in the BTEC track for Chinese-to-English translation. Our system is based on a string-to-tree framework. To integrate deep syntactic information, we propose the use of parse trees and semantic dependencies on English sentences described respectively by Head-driven Phrase Structure Grammar and Predicate-Argument Structures. We report the results of our system on both the development and test sets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,558 |
inproceedings | murakami-etal-2009-statistical | Statistical machine translation adding pattern-based machine translation in {C}hinese-{E}nglish translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.16/ | Murakami, Jin{'}ichi and Tokuhisa, Masato and Ikehara, Satoru | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 107--112 | We have developed a two-stage machine translation (MT) system. The first stage is a rule-based machine translation system. The second stage is a normal statistical machine translation system. For Chinese-English machine translation, first, we used a Chinese-English rule-based MT, and we obtained {\textquotedblleft}ENGLISH{\textquotedblright} sentences from Chinese sentences. Second, we used a standard statistical machine translation. This means that we performed {\textquotedblleft}ENGLISH{\textquotedblright}-to-English machine translation. We believe this method has two advantages. One is that there are fewer unknown words. The other is that it produces structured or grammatically correct sentences. From the results of experiments, we obtained a BLEU score of 0.3151 in the BTEC-CE task using our proposed method. In contrast, we obtained a BLEU score of 0.3311 in the BTEC-CE task using a standard method (moses). This means that our proposed method was not as effective for the BTEC-CE task. Therefore, we will try to improve the performance by optimizing parameters. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,559
inproceedings | mermer-etal-2009-tubitak | The {T{\"U}B{\.I}TAK}-{UEKAE} statistical machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.17/ | Mermer, Co{\c{s}}kun and Kaya, Hamza and Do{\u{g}}an, Mehmet U{\u{g}}ur | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 113--117 | We describe our Arabic-to-English and Turkish-to-English machine translation systems that participated in the IWSLT 2009 evaluation campaign. Both systems are based on the Moses statistical machine translation toolkit, with added components to address the rich morphology of the source languages. Three different morphological approaches are investigated for Turkish. Our primary submission uses linguistic morphological analysis and statistical disambiguation to generate morpheme-based translation models, which is the approach with the better translation performance. One of the contrastive submissions utilizes unsupervised subword segmentation to generate non-linguistic subword-based translation models, while another contrastive system uses word-based models but makes use of lexical approximation to cope with out-of-vocabulary words, similar to the approach in our Arabic-to-English submission. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,560
inproceedings | gasco-sanchez-2009-upv | {UPV} translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.18/ | Gasc{\'o}, Guillem and S{\'a}nchez, Joan Andreu | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 118--123 | In this paper, we describe the machine translation system developed at the Polytechnic University of Valencia, which was used in our participation in the International Workshop on Spoken Language Translation (IWSLT) 2009. We have taken part only in the Chinese-English BTEC Task. In the evaluation campaign, we focused on the use of our hybrid translation system over the provided corpus and less effort was devoted to the use of preand post-processing techniques that could have improved the results. Our decoder is a hybrid machine translation system that combines phrase-based models together with syntax-based translation models. The syntactic formalism that underlies the whole decoding process is a Chomsky Normal Form Stochastic Inversion Transduction Grammar (SITG) with phrasal productions and a log-linear combination of probability models. The decoding algorithm is a CYK-like algorithm that combines the translated phrases inversely or directly in order to get a complete translation of the input sentence. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,561 |
inproceedings | yang-etal-2009-university | The {U}niversity of {W}ashington machine translation system for {IWSLT} 2009 | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-evaluation.19/ | Yang, Mei and Axelrod, Amittai and Duh, Kevin and Kirchhoff, Katrin | Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign | 124--128 | This paper describes the University of Washington's system for the 2009 International Workshop on Spoken Language Translation (IWSLT) evaluation campaign. Two systems were developed, one each for the BTEC Chinese-to-English and Arabic-to-English tracks. We describe experiments with different preprocessing and alignment combination schemes. Our main focus this year was on exploring a novel semi-supervised approach to N-best list reranking; however, this method yielded inconclusive results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,562
inproceedings | bisazza-federico-2009-morphological | Morphological pre-processing for {T}urkish to {E}nglish statistical machine translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.1/ | Bisazza, Arianna and Federico, Marcello | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | 129--135 | We tried to cope with the complex morphology of Turkish by applying different schemes of morphological word segmentation to the training and test data of a phrase-based statistical machine translation system. These techniques allow for a considerable reduction of the training dictionary, and lower the out-of-vocabulary rate of the test set. By minimizing differences between lexical granularities of Turkish and English we can produce more refined alignments and a better modeling of the translation task. Morphological segmentation is highly language dependent and requires a fair amount of linguistic knowledge in its development phase. Yet it is fast and light-weight {--} does not involve syntax {--} and appears to benefit our IWSLT09 system: our best segmentation scheme associated to a simple lexical approximation technique achieved a 50{\%} reduction of out-of-vocabulary rate and over 5 point BLEU improvement above the baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,563 |
inproceedings | cmejrek-etal-2009-enriching | Enriching {SCFG} rules directly from efficient bilingual chart parsing | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.2/ | {\v{C}}mejrek, Martin and Zhou, Bowen and Xiang, Bing | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | 136--143 | In this paper, we propose a new method for training translation rules for a Synchronous Context-free Grammar. A bilingual chart parser is used to generate the parse forest, and EM algorithm to estimate expected counts for each rule of the ruleset. Additional rules are constructed as combinations of reliable rules occurring in the parse forest. The new method of proposing additional translation rules is independent of word alignments. We present the theoretical background for this method, and initial experimental results on German-English translations of Europarl data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,564 |
inproceedings | hayashi-etal-2009-structural | Structural support vector machines for log-linear approach in statistical machine translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.3/ | Hayashi, Katsuhiko and Watanabe, Taro and Tsukada, Hajime and Isozaki, Hideki | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | 144--151 | Minimum error rate training (MERT) is a widely used learning method for statistical machine translation. In this paper, we present a SVM-based training method to enhance generalization ability. We extend MERT optimization by maximizing the margin between the reference and incorrect translations under the L2-norm prior to avoid overfitting problem. Translation accuracy obtained by our proposed methods is more stable in various conditions than that obtained by MERT. Our experimental results on the French-English WMT08 shared task show that degrade of our proposed methods is smaller than that of MERT in case of small training data or out-of-domain test data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,565 |
inproceedings | hoang-etal-2009-unified | A unified framework for phrase-based, hierarchical, and syntax-based statistical machine translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.4/ | Hoang, Hieu and Koehn, Philipp and Lopez, Adam | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | 152--159 | Despite many differences between phrase-based, hierarchical, and syntax-based translation models, their training and testing pipelines are strikingly similar. Drawing on this fact, we extend the Moses toolkit to implement hierarchical and syntactic models, making it the first open source toolkit with end-to-end support for all three of these popular models in a single package. This extension substantially lowers the barrier to entry for machine translation research across multiple models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,566 |
inproceedings | sanchis-trilles-etal-2009-online | Online language model adaptation for spoken dialog translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.5/ | Sanchis-Trilles, Germ{\'a}n and Cettolo, Mauro and Bertoldi, Nicola and Federico, Marcello | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | 160--167 | This paper focuses on the problem of language model adaptation in the context of Chinese-English cross-lingual dialogs, as set-up by the challenge task of the IWSLT 2009 Evaluation Campaign. Mixtures of n-gram language models are investigated, which are obtained by clustering bilingual training data according to different available human annotations, respectively, at the dialog level, turn level, and dialog act level. For the latter case, clustering of IWSLT data was in fact induced through a comparable Italian-English parallel corpus provided with dialog act annotations. For the sake of adaptation, mixture weight estimation is performed either at the level of single source sentence or test set. Estimated weights are then transferred to the target language mixture model. Experimental results show that, by training different specific language models weighted according to the actual input instead of using a single target language model, significant gains in terms of perplexity and BLEU can be achieved. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,567 |
inproceedings | hori-etal-2009-network | Network-based speech-to-speech translation | null | dec # " 1-2" | 2009 | Tokyo, Japan | null | https://aclanthology.org/2009.iwslt-papers.6/ | Hori, Chiori and Sakti, Sakriani and Paul, Michael and Kimura, Noriyuki and Ashikari, Yutaka and Isotani, Ryosuke and Sumita, Eiichiro and Nakamura, Satoshi | Proceedings of the 6th International Workshop on Spoken Language Translation: Papers | null | This demo shows the network-based speech-to-speech translation system. The system was designed to perform realtime, location-free, multi-party translation between speakers of different languages. The spoken language modules: automatic speech recognition (ASR), machine translation (MT), and text-to-speech synthesis (TTS), are connected through Web servers that can be accessed via client applications worldwide. In this demo, we will show the multiparty speech-to-speech translation of Japanese, Chinese, Indonesian, Vietnamese, and English, provided by the NICT server. These speech-to-speech modules have been developed by NICT as a part of A-STAR (Asian Speech Translation Advanced Research) consortium project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,568
inproceedings | scott-barreiro-2009-openlogos | {O}pen{L}ogos {MT} and the {SAL} representation language | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.5/ | Scott, Bernard and Barreiro, Anabela | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 19--26 | This paper describes OpenLogos, a rule-driven machine translation system, and the syntactic-semantic taxonomy SAL that underlies this system. We illustrate how SAL addresses typical problems relating to source language analysis and target language synthesis. The adaptation of OpenLogos resources to a specific application concerning paraphrasing in Portuguese is also described here. References are provided for access to OpenLogos and to SAL. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,574 |
inproceedings | tyers-nordfalk-2009-shallow | Shallow-transfer rule-based machine translation for {S}wedish to {D}anish | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.6/ | Tyers, Francis M. and Nordfalk, Jacob | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 27--34 | This article describes the development of a shallow-transfer machine translation system from Swedish to Danish in the Apertium platform. It gives details of the resources used, the methods for constructing the system and an evaluation of the translation quality. The quality is found to be comparable with that of current commercial systems, despite the particularly low coverage of the lexicons. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,575 |
inproceedings | unhammer-trosterud-2009-reuse | Reuse of free resources in machine translation between {N}ynorsk and {B}okm{\r{a}}l | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.7/ | Unhammer, Kevin and Trosterud, Trond | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 35--42 | We describe the development of a two-way shallow-transfer machine translation system between Norwegian Nynorsk and Norwegian Bokmål built on the Apertium platform, using the Free and Open Source resources Norsk Ordbank and the Oslo{--}Bergen Constraint Grammar tagger. We detail the integration of these and other resources in the system along with the construction of the lexical and structural transfer, and evaluate the translation quality in comparison with another system. Finally, some future work is suggested. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,576
inproceedings | faridee-tyers-2009-development | Development of a morphological analyser for {B}engali | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.8/ | Faridee, Abu Zaher Md and Tyers, Francis M. | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 43--50 | This article describes the development of an open-source morphological analyser for Bengali Language using finite-state technology. First we discuss the challenges of creating a morphological analyser for a highly inflectional language like Bengali and then propose a solution to that using lttoolbox, an open-source finite-state toolkit. We then evaluate the performance of our developed system and propose ways of improving it further. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,577
inproceedings | sanchez-cartagena-perez-ortiz-2009-open | An open-source highly scalable web service architecture for the Apertium machine translation engine | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.9/ | S{\'a}nchez-Cartagena, Victor M. and P{\'e}rez-Ortiz, Juan Antonio | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 51--58 | Some machine translation services like Google Ajax Language API have become very popular as they make the collaboratively created contents of the web 2.0 available to speakers of many languages. One of the keys of its success is its clear and easy-to-use application programming interface (API) and a scalable and reliable service. This paper describes a highly scalable implementation of an Apertium-based translation web service, that aims to make contents available to speakers of lesser resourced languages. The API of this service is compatible with Google`s one, and the scalability of the system is achieved by a new architecture that allows adding or removing new servers at any time; for that, an application placement algorithm which decides which language pairs should be translated on which servers is designed. Our experiments show how the resulting architecture improves the translation rate in comparison to existing Apertium-based servers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,578 |
inproceedings | minervini-2009-apertium | Apertium goes {SOA}: an efficient and scalable service based on the Apertium rule-based machine translation platform | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.10/ | Minervini, Pasquale | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 59--66 | Service Oriented Architecture (SOA) is a paradigm for organising and using distributed services that may be under the control of different ownership domains and implemented using various technology stacks. In some contexts, an organisation using an IT infrastructure implementing the SOA paradigm can take a great benefit from the integration, in its business processes, of efficient machine translation (MT) services to overcome language barriers. This paper describes the architecture and the design patterns used to develop an MT service that is efficient, scalable and easy to integrate in new and existing business processes. The service is based on Apertium, a free/opensource rule-based machine translation platform. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,579 |
inproceedings | sheikh-sanchez-martinez-2009-trigram | A trigram part-of-speech tagger for the Apertium free/open-source machine translation platform | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.11/ | Sheikh, Zaid Md Abdul Wahab and S{\'a}nchez-Mart{\'i}nez, Felipe | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 67--74 | This paper describes the implementation of a second-order hidden Markov model (HMM) based part-of-speech tagger for the Apertium free/open-source rule-based machine translation platform. We describe the part-of-speech (PoS) tagging approach in Apertium and how it is parametrised through a tagger definition file that defines: (1) the set of tags to be used and (2) constraint rules that can be used to forbid certain PoS tag sequences, thus refining the HMM parameters and increasing its tagging accuracy. The paper also reviews the Baum-Welch algorithm used to estimate the HMM parameters and compares the tagging accuracy achieved with that achieved by the original, first-order HMM-based PoS tagger in Apertium. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,580
inproceedings | villarejo-munoz-etal-2009-joint | Joint efforts to further develop and incorporate Apertium into the document management flow at {U}niversitat Oberta de {C}atalunya | P{\'e}rez-Ortiz, Juan Antonio and S{\'a}nchez-Martinez, Felipe and Tyers, Francis M. | nov # " 2-3" | 2009 | Alacant, Spain | null | https://aclanthology.org/2009.freeopmt-1.12/ | Villarejo Mu{\~n}oz, Luis and Ortiz Rojas, Sergio and Ginest{\'i} Rosell, Mireia | Proceedings of the First International Workshop on Free/Open-Source Rule-Based Machine Translation | 75--82 | This article describes the needs of UOC regarding translation and how these needs are satisfied by Prompsit further developing a free rule-based machine translation system: Apertium. We initially describe the general framework regarding linguistic needs inside UOC. Then, section 2 introduces Apertium and outlines the development scenario that Prompsit executed. After that, section 3 outlines the specific needs of UOC and why Apertium was chosen as the machine translation engine. Then, section 4 describes some of the features specially developed in this project. Section 5 explains how the linguistic data was improved to increase the quality of the output in Catalan and Spanish. And, finally, we draw conclusions and outline further work originating from the project. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 82,581 |
inproceedings | eichler-etal-2008-unsupervised | Unsupervised Relation Extraction From Web Documents | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1001/ | Eichler, Kathrin and Hemsen, Holmer and Neumann, G{\"u}nter | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The IDEX system is a prototype of an interactive dynamic Information Extraction (IE) system. A user of the system expresses an information request in the form of a topic description, which is used for an initial search in order to retrieve a relevant set of documents. On basis of this set of documents, unsupervised relation extraction and clustering is done by the system. The results of these operations can then be interactively inspected by the user. In this paper we describe the relation extraction and clustering components of the IDEX system. Preliminary evaluation results of these components are presented and an overview is given of possible enhancements to improve the relation extraction and clustering components. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,389
inproceedings | alzghool-inkpen-2008-combining | Combining Multiple Models for Speech Information Retrieval | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1002/ | Alzghool, Muath and Inkpen, Diana | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In this article we present a method for combining different information retrieval models in order to increase the retrieval performance in a Speech Information Retrieval task. The formulas for combining the models are tuned on training data. Then the system is evaluated on test data. The task is particularly difficult because the text collection is automatically transcribed spontaneous speech, with many recognition errors. Also, the topics are real information needs, difficult to satisfy. Information Retrieval systems are not able to obtain good results on this data set, except for the case when manual summaries are included. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,390 |
inproceedings | teng-chen-2008-event | Event Detection and Summarization in Weblogs with Temporal Collocations | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1003/ | Teng, Chun-Yuan and Chen, Hsin-Hsi | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper deals with the relationship between weblog content and time. With the proposed temporal mutual information, we analyze the collocations in time dimension, and the interesting collocations related to special events. The temporal mutual information is employed to observe the strength of term-to-term associations over time. An event detection algorithm identifies the collocations that may cause an event in a specific timestamp. An event summarization algorithm retrieves a set of collocations which describe an event. We compare our approach with the approach without considering the time interval. The experimental results demonstrate that the temporal collocations capture the real world semantics and real world events over time. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,391 |
inproceedings | krstev-etal-2008-usage | The Usage of Various Lexical Resources and Tools to Improve the Performance of Web Search Engines | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1004/ | Krstev, Cvetana and Stankovi{\'c}, Ranka and Vitas, Du{\v{s}}ko and Obradovi{\'c}, Ivan | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In this paper we present how resources and tools developed within the Human Language Technology Group at the University of Belgrade can be used for tuning queries before submitting them to a web search engine. We argue that the selection of words chosen for a query, which are of paramount importance for the quality of results obtained by the query, can be substantially improved by using various lexical resources, such as morphological dictionaries and wordnets. These dictionaries enable semantic and morphological expansion of the query, the latter being very important in highly inflective languages, such as Serbian. Wordnets can also be used for adding another language to a query, if appropriate, thus making the query bilingual. Problems encountered in retrieving documents of interest are discussed and illustrated by examples. A brief description of resources is given, followed by an outline of the web tool which enables their integration. Finally, a set of examples is chosen in order to illustrate the use of the lexical resources and tool in question. Results obtained for these examples show that the number of documents obtained through a query by using our approach can double and even quadruple in some cases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,392
inproceedings | bird-etal-2008-acl | The {ACL} {A}nthology Reference Corpus: A Reference Dataset for Bibliographic Research in Computational Linguistics | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1005/ | Bird, Steven and Dale, Robert and Dorr, Bonnie and Gibson, Bryan and Joseph, Mark and Kan, Min-Yen and Lee, Dongwon and Powley, Brett and Radev, Dragomir and Tan, Yee Fan | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The ACL Anthology is a digital archive of conference and journal papers in natural language processing and computational linguistics. Its primary purpose is to serve as a reference repository of research results, but we believe that it can also be an object of study and a platform for research in its own right. We describe an enriched and standardized reference corpus derived from the ACL Anthology that can be used for research in scholarly document processing. This corpus, which we call the ACL Anthology Reference Corpus (ACL ARC), brings together the recent activities of a number of research groups around the world. Our goal is to make the corpus widely available, and to encourage other researchers to use it as a standard testbed for experiments in both bibliographic and bibliometric research. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,393 |
inproceedings | reed-etal-2008-linguistic | The {L}inguistic {D}ata {C}onsortium Member Survey: Purpose, Execution and Results | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1006/ | Reed, Marian and DiPersio, Denise and Cieri, Christopher | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The Linguistic Data Consortium (LDC) seeks to provide its members with quality linguistic resources and services. In order to pursue these ideals and to remain current, LDC monitors the needs and sentiments of its communities. One mechanism LDC uses to generate feedback on consortium and resource issues is the LDC Member Survey. The survey allows LDC Members and nonmembers to provide LDC with valuable insight into their own unique circumstances, their current and future data needs and their views on LDC's role in meeting them. When the 2006 Survey was found to be a useful tool for communicating with the Consortium membership, a 2007 Survey was organized and administered. As a result of the surveys, LDC has confirmed that it has made a positive impact on the community and has identified ways to improve the quality of service and the diversity of monthly offerings. Many respondents recommended ways to improve LDC's functions, ordering mechanism and webpage. Some of these comments have inspired changes to LDC's operation and strategy. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,394
inproceedings | van-uytvanck-etal-2008-language | Language-Sites: Accessing and Presenting Language Resources via Geographic Information Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1007/ | Van Uytvanck, Dieter and Dukers, Alex and Ringersma, Jacquelijn and Trilsbeek, Paul | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The emerging area of Geographic Information Systems (GIS) has proven to add an interesting dimension to many research projects. Within the language-sites initiative we have brought together a broad range of links to digital language corpora and resources. Via Google Earth's visually appealing 3D-interface users can spin the globe, zoom into an area they are interested in and access directly the relevant language resources. This paper focuses on several ways of relating the map and the online data (lexica, annotations, multimedia recordings, etc.). Furthermore, we discuss some of the implementation choices that have been made, including future challenges. In addition, we show how scholars (both linguists and anthropologists) are using GIS tools to fulfill their specific research needs by making use of practical examples. This illustrates how both scientists and the general public can benefit from geography-based access to digital language data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,395
inproceedings | varadi-etal-2008-clarin | {CLARIN}: Common Language Resources and Technology Infrastructure | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1008/ | V{\'a}radi, Tam{\'a}s and Krauwer, Steven and Wittenburg, Peter and Wynne, Martin and Koskenniemi, Kimmo | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The paper provides a general introduction to the CLARIN project, a large-scale European research infrastructure project designed to establish an integrated and interoperable infrastructure of language resources and technologies. The goal is to make language resources and technology much more accessible to all researchers working with language material, particularly non-expert users in the Humanities and Social Sciences. CLARIN intends to build a virtual, distributed infrastructure consisting of a federation of trusted digital archives and repositories where language resources and tools are accessible through web services. The CLARIN project consists of 32 partners from 22 countries and is currently engaged in the preparatory phase of developing the infrastructure. The paper describes the objectives of the project in terms of its technical, legal, linguistic and user dimensions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,396 |
inproceedings | geertzen-etal-2008-evaluating | Evaluating Dialogue Act Tagging with Naive and Expert Annotators | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1009/ | Geertzen, Jeroen and Petukhova, Volha and Bunt, Harry | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In this paper the dialogue act annotation of naive and expert annotators, both annotating the same data, are compared in order to characterise the insights annotations made by different kinds of annotators may provide for evaluating dialogue act tagsets. It is argued that the agreement among naive annotators provides insight in the clarity of the tagset, whereas agreement among expert annotators provides an indication of how reliably the tagset can be applied when errors are ruled out that are due to deficiencies in understanding the concepts of the tagset, to a lack of experience in using the annotation tool, or to little experience in annotation more generally. An indication of the differences between the two groups in terms of inter-annotator agreement and tagging accuracy on task-oriented dialogue in different domains, annotated with the DIT++ dialogue act tagset is presented, and the annotations of both groups are assessed against a gold standard. Additionally, the effect of the reduction of the tagset's granularity on the performances of both groups is looked into. In general, it is concluded that the annotations of both groups provide complementary insights in reliability, clarity, and more fundamental conceptual issues. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,397
inproceedings | ivanova-etal-2008-evaluating | Evaluating a {G}erman Sketch Grammar: A Case Study on Noun Phrase Case | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1011/ | Ivanova, Kremena and Heid, Ulrich and Schulte im Walde, Sabine and Kilgarriff, Adam and Pomik{\'a}lek, Jan | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Word sketches are part of the Sketch Engine corpus query system. They represent automatic, corpus-derived summaries of the words' grammatical and collocational behaviour. Besides the corpus itself, word sketches require a sketch grammar, a regular expression-based shallow grammar over the part-of-speech tags, to extract evidence for the properties of the targeted words from the corpus. The paper presents a sketch grammar for German, a language which is not strictly configurational and which shows a considerable amount of case syncretism, and evaluates its accuracy, which has not been done for other sketch grammars. The evaluation focuses on NP case as a crucial part of the German grammar. We present various versions of NP definitions, so demonstrating the influence of grammar detail on precision and recall. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,399
inproceedings | mcconville-dzikovska-2008-evaluating | Evaluating Complement-Modifier Distinctions in a Semantically Annotated Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1012/ | McConville, Mark and Dzikovska, Myroslava O. | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | We evaluate the extent to which the distinction between semantically core and non-core dependents as used in the FrameNet corpus corresponds to the traditional distinction between syntactic complements and modifiers of a verb, for the purposes of harvesting a wide-coverage verb lexicon from FrameNet for use in deep linguistic processing applications. We use the VerbNet verb database as our gold standard for making judgements about complement-hood, in conjunction with our own intuitions in cases where VerbNet is incomplete. We conclude that there is enough agreement between the two notions (0.85) to make practical the simple expedient of equating core PP dependents in FrameNet with PP complements in our lexicon. Doing so means that we lose around 13{\%} of PP complements, whilst around 9{\%} of the PP dependents left in the lexicon are not complements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,400 |
inproceedings | strauss-etal-2008-pit | The {PIT} Corpus of {G}erman Multi-Party Dialogues | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1013/ | Strau{\ss}, Petra-Maria and Hoffmann, Holger and Minker, Wolfgang and Neumann, Heiko and Palm, G{\"u}nther and Scherer, Stefan and Traue, Harald and Weidenbacher, Ulrich | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The PIT corpus is a German multi-media corpus of multi-party dialogues recorded in a Wizard-of-Oz environment at the University of Ulm. The scenario involves two human dialogue partners interacting with a multi-modal dialogue system in the domain of restaurant selection. In this paper we present the characteristics of the data which was recorded in three sessions resulting in a total of 75 dialogues and about 14 hours of audio and video data. The corpus is available at \url{http://www.uni-ulm.de/in/pit}. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,401
inproceedings | adda-decker-etal-2008-annotation | Annotation and analysis of overlapping speech in political interviews | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1014/ | Adda-Decker, Martine and Barras, Claude and Adda, Gilles and Paroubek, Patrick and de Mare{\"u}il, Philippe Boula and Habert, Benoit | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Looking for a better understanding of spontaneous speech-related phenomena and to improve automatic speech recognition (ASR), we present here a study on the relationship between the occurrence of overlapping speech segments and disfluencies (filled pauses, repetitions, revisions) in political interviews. First we present our data, and our overlap annotation scheme. We detail our choice of overlapping tags and our definition of disfluencies; the observed ratios of the different overlapping tags are examined, as well as their correlation with the speaker role, and propose two measures to characterise speakers' interacting attitude: the attack/resist ratio and the attack density. We then study the relationship between the overlapping speech segments and the disfluencies in our corpus, before concluding on the perspectives that our experiments offer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,402
inproceedings | moreau-etal-2008-data | Data Collection for the {CHIL} {CLEAR} 2007 Evaluation Campaign | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1015/ | Moreau, Nicolas and Mostefa, Djamel and Stiefelhagen, Rainer and Burger, Susanne and Choukri, Khalid | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper describes in detail the data that was collected and annotated during the third and final year of the CHIL project. This data was used for the CLEAR evaluation campaign in spring 2007. The paper also introduces the CHIL Evaluation Package 2007 that resulted from this campaign including a complete description of the performed evaluation tasks. This evaluation package will be made available to the community through the ELRA General Catalogue. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,403 |
inproceedings | burger-etal-2008-comparative | A Comparative Cross-Domain Study of the Occurrence of Laughter in Meeting and Seminar Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1016/ | Burger, Susanne and Laskowski, Kornel and Woelfel, Matthias | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Laughter is an intrinsic component of human-human interaction, and current automatic speech understanding paradigms stand to gain significantly from its detection and modeling. In the current work, we produce a manual segmentation of laughter in a large corpus of interactive multi-party seminars, which promises to be a valuable resource for acoustic modeling purposes. More importantly, we quantify the occurrence of laughter in this new domain, and contrast our observations with findings for laughter in multi-party meetings. Our analyses show that, with respect to the majority of measures we explore, the occurrence of laughter in both domains is quite similar. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,404 |
inproceedings | mani-etal-2008-spatialml | {S}patial{ML}: Annotation Scheme, Corpora, and Tools | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1017/ | Mani, Inderjeet and Hitzeman, Janet and Richer, Justin and Harris, Dave and Quimby, Rob and Wellner, Ben | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | SpatialML is an annotation scheme for marking up references to places in natural language. It covers both named and nominal references to places, grounding them where possible with geo-coordinates, including both relative and absolute locations, and characterizes relationships among places in terms of a region calculus. A freely available annotation editor has been developed for SpatialML, along with a corpus of annotated documents released by the Linguistic Data Consortium. Inter-annotator agreement on SpatialML is 77.0 F-measure for extents on that corpus. An automatic tagger for SpatialML extents scores 78.5 F-measure. A disambiguator scores 93.0 F-measure and 93.4 Predictive Accuracy. In adapting the extent tagger to new domains, merging the training data from the above corpus with annotated data in the new domain provides the best performance. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,405 |
inproceedings | bethard-etal-2008-building | Building a Corpus of Temporal-Causal Structure | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1018/ | Bethard, Steven and Corvey, William and Klingenstein, Sara and Martin, James H. | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | While recent corpus annotation efforts cover a wide variety of semantic structures, work on temporal and causal relations is still in its early stages. Annotation efforts have typically considered either temporal relations or causal relations, but not both, and no corpora currently exist that allow the relation between temporals and causals to be examined empirically. We have annotated a corpus of 1000 event pairs for both temporal and causal relations, focusing on a relatively frequent construction in which the events are conjoined by the word and. Temporal relations were annotated using an extension of the BEFORE and AFTER scheme used in the TempEval competition, and causal relations were annotated using a scheme based on connective phrases like and as a result. The annotators achieved 81.2{\%} agreement on temporal relations and 77.8{\%} agreement on causal relations. Analysis of the resulting corpus revealed some interesting findings, for example, that over 30{\%} of CAUSAL relations do not have an underlying BEFORE relation. The corpus was also explored using machine learning methods, and while model performance exceeded all baselines, the results suggested that simple grammatical cues may be insufficient for identifying the more difficult temporal and causal relations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,406
inproceedings | zarcone-lenci-2008-computational | Computational Models for Event Type Classification in Context | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1019/ | Zarcone, Alessandra and Lenci, Alessandro | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Verb lexical semantic properties are only one of the factors that contribute to the determination of the event type expressed by a sentence, which is instead the result of a complex interplay between the verb meaning and its linguistic context. We report on two computational models for the automatic identification of event type in Italian. Both models use linguistically-motivated features extracted from Italian corpora. The main goal of our experiments is to evaluate the contribution of different types of linguistic indicators to identify the event type of a sentence, as well as to model various cases of context-driven event type shift. In the first model, event type identification has been modelled as a supervised classification task, performed with Maximum Entropy classifiers. In the second model, Self-Organizing Maps have been used to define and identify event types in an unsupervised way. The interaction of various contextual factors in determining the event type expressed by a sentence makes event type identification a highly challenging task. Computational models can help us to shed new light on the real structure of event type classes as well as to gain a better understanding of context-driven semantic shifts. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,407 |
inproceedings | forascu-2008-gmt | {GMT} to +2 or how can {T}ime{ML} be used in {R}omanian | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1020/ | For{\u{a}}scu, Corina | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The paper describes the construction and usage of the Romanian version of the TimeBank corpus. The success rate of 96.53{\%} for the automatic import of the temporal annotation from English to Romanian shows that the automatic transfer is a worthwhile enterprise if temporality is to be studied in a language other than the one for which TimeML, the annotation standard used, was developed. A preliminary study identifies the main situations that occurred during the automatic transfer, as well as temporal elements not (yet) marked in the English corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,408
inproceedings | xue-etal-2008-annotating | Annotating {\textquotedblleft}tense{\textquotedblright} in a Tense-less Language | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1021/ | Xue, Nianwen and Zhong, Hua and Chen, Kai-Yun | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In the context of Natural Language Processing, annotation is about recovering implicit information that is useful for natural language applications. In this paper we describe a tense annotation task for Chinese - a language that does not have grammatical tense - that is designed to infer the temporal location of a situation in relation to the temporal deixis, the moment of speech. If successful, this would be a highly rewarding endeavor as it has application in many natural language systems. Our preliminary experiments show that while this is a very challenging annotation task, high annotation consistency is difficult but not impossible to achieve. We show that guidelines that provide a conceptually intuitive framework will be crucial to the success of this annotation effort. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,409
inproceedings | plank-simaan-2008-subdomain | Subdomain Sensitive Statistical Parsing using Raw Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1022/ | Plank, Barbara and Sima{'}an, Khalil | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Modern statistical parsers are trained on large annotated corpora (treebanks). These treebanks usually consist of sentences addressing different subdomains (e.g. sports, politics, music), which implies that the statistics gathered by current statistical parsers are mixtures of subdomains of language use. In this paper we present a method that exploits raw subdomain corpora gathered from the web to introduce subdomain sensitivity into a given parser. We employ statistical techniques for creating an ensemble of domain sensitive parsers, and explore methods for amalgamating their predictions. Our experiments show that introducing domain sensitivity by exploiting raw corpora can improve over a tough, state-of-the-art baseline. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,410 |
inproceedings | kallmeyer-etal-2008-developing | Developing a {TT}-{MCTAG} for {G}erman with an {RCG}-based Parser | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1023/ | Kallmeyer, Laura and Lichte, Timm and Maier, Wolfgang and Parmentier, Yannick and Dellert, Johannes | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Developing linguistic resources, in particular grammars, is known to be a complex task in itself, because of (amongst others) redundancy and consistency issues. Furthermore some languages can reveal themselves hard to describe because of specific characteristics, e.g. the free word order in German. In this context, we present (i) a framework allowing to describe tree-based grammars, and (ii) an actual fragment of a core multicomponent tree-adjoining grammar with tree tuples (TT-MCTAG) for German developed using this framework. This framework combines a metagrammar compiler and a parser based on range concatenation grammar (RCG) to respectively check the consistency and the correction of the grammar. The German grammar being developed within this framework already deals with a wide range of scrambling and extraction phenomena. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,411 |
inproceedings | adolphs-etal-2008-fine | Some Fine Points of Hybrid Natural Language Parsing | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1024/ | Adolphs, Peter and Oepen, Stephan and Callmeier, Ulrich and Crysmann, Berthold and Flickinger, Dan and Kiefer, Bernd | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Large-scale grammar-based parsing systems nowadays increasingly rely on independently developed, more specialized components for pre-processing their input. However, different tools make conflicting assumptions about very basic properties such as tokenization. To make linguistic annotation gathered in pre-processing available to deep parsing, a hybrid NLP system needs to establish a coherent mapping between the two universes. Our basic assumption is that tokens are best described by attribute value matrices (AVMs) that may be arbitrarily complex. We propose a powerful resource-sensitive rewrite formalism, chart mapping, that allows us to mediate between the token descriptions delivered by shallow pre-processing components and the input expected by the grammar. We furthermore propose a novel way of unknown word treatment where all generic lexical entries are instantiated that are licensed by a particular token AVM. Again, chart mapping is used to give the grammar writer full control as to which items (e.g. native vs. generic lexical items) enter syntactic parsing. We discuss several further uses of the original idea and report on early experiences with the new machinery. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,412 |
inproceedings | nicholson-etal-2008-evaluating | Evaluating and Extending the Coverage of {HPSG} Grammars: A Case Study for {G}erman | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1025/ | Nicholson, Jeremy and Kordoni, Valia and Zhang, Yi and Baldwin, Timothy and Dridan, Rebecca | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In this work, we examine and attempt to extend the coverage of a German HPSG grammar. We use the grammar to parse a corpus of newspaper text and evaluate the proportion of sentences which have a correct attested parse, and analyse the cause of errors in terms of lexical or constructional gaps which prevent parsing. Then, using a maximum entropy model, we evaluate prediction of lexical types in the HPSG type hierarchy for unseen lexemes. By automatically adding entries to the lexicon, we observe that we can increase coverage without substantially decreasing precision. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,413 |
inproceedings | zhang-kordoni-2008-robust | Robust Parsing with a Large {HPSG} Grammar | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1026/ | Zhang, Yi and Kordoni, Valia | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In this paper we propose a partial parsing model which achieves robust parsing with a large HPSG grammar. Constraint-based precision grammars, like the HPSG grammar we are using for the experiments reported in this paper, typically lack robustness, especially when applied to real world texts. To maximally recover the linguistic knowledge from an unsuccessful parse, a proper selection model must be used. Also, the efficiency challenges usually presented by the selection model must be answered. Building on the work reported in (Zhang et al., 2007), we further propose a new partial parsing model that splits the parsing process into two stages, both of which use the bottom-up chart-based parsing algorithm. The algorithm is implemented and a preliminary experiment shows promising results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,414 |
inproceedings | otterbacher-radev-2008-modeling | Modeling Document Dynamics: an Evolutionary Approach | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1027/ | Otterbacher, Jahna and Radev, Dragomir | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | News articles about the same event published over time have properties that challenge NLP and IR applications. A cluster of such texts typically exhibits instances of paraphrase and contradiction, as sources update the facts surrounding the story, often due to an ongoing investigation. The current hypothesis is that the stories evolve over time, beginning with the first text published on a given topic. This is tested using a phylogenetic approach as well as one based on language modeling. The fit of the evolutionary models is evaluated with respect to how well they facilitate the recovery of chronological relationships between the documents. Over all data clusters, the language modeling approach consistently outperforms the phylogenetics model. However, on manually collected clusters in which the documents are published within short time spans of one another, both have a similar performance, and produce statistically significant results on the document chronology recovery evaluation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,415 |
inproceedings | widdows-ferraro-2008-semantic | Semantic Vectors: a Scalable Open Source Package and Online Technology Management Application | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1028/ | Widdows, Dominic and Ferraro, Kathleen | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper describes the open source SemanticVectors package that efficiently creates semantic vectors for words and documents from a corpus of free text articles. We believe that this package can play an important role in furthering research in distributional semantics, and (perhaps more importantly) can help to significantly reduce the current gap that exists between good research results and valuable applications in production software. Two clear principles that have guided the creation of the package so far include ease-of-use and scalability. The basic package installs and runs easily on any Java-enabled platform, and depends only on Apache Lucene. Dimension reduction is performed using Random Projection, which enables the system to scale much more effectively than other algorithms used for the same purpose. This paper also describes a trial application in the Technology Management domain, which highlights some user-centred design challenges which we believe are also key to successful deployment of this technology. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,416 |
inproceedings | rosell-velupillai-2008-revealing | Revealing Relations between Open and Closed Answers in Questionnaires through Text Clustering Evaluation | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1029/ | Rosell, Magnus and Velupillai, Sumithra | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Open answers in questionnaires contain valuable information that is very time-consuming to analyze manually. We present a method for hypothesis generation from questionnaires based on text clustering. Text clustering is used interactively on the open answers, and the user can explore the cluster contents. The exploration is guided by automatic evaluation of the clusters against a closed answer regarded as a categorization. This simplifies the process of selecting interesting clusters. The user formulates a hypothesis from the relation between the cluster content and the closed answer categorization. We have applied our method on an open answer regarding occupation compared to a closed answer on smoking habits. With no prior knowledge of smoking habits in different occupation groups we have generated the hypothesis that farmers smoke less than the average. The hypothesis is supported by several separate surveys. Closed answers are easy to analyze automatically but are restricted and may miss valuable aspects. Open answers, on the other hand, fully capture the dynamics and diversity of possible outcomes. With our method the process of analyzing open answers becomes feasible. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,417 |
inproceedings | luyckx-daelemans-2008-personae | {P}ersonae: a Corpus for Author and Personality Prediction from Text | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1030/ | Luyckx, Kim and Daelemans, Walter | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | We present a new corpus for computational stylometry, more specifically authorship attribution and the prediction of author personality from text. Because of the large number of authors (145), the corpus will allow previously impossible studies of variation in features considered predictive for writing style. The innovative meta-information (personality profiles of the authors) associated with these texts allows the study of personality prediction, an aspect of style that is not yet well researched. In this paper, we describe the contents of the corpus and show its use in both authorship attribution and personality prediction. We focus on features that have been proven useful in the field of author recognition. Syntactic features like part-of-speech n-grams are generally accepted as not being under the authors' conscious control and therefore providing good clues for predicting gender or authorship. We want to test whether these features are helpful for personality prediction and authorship attribution on a large set of authors. Both tasks are approached as text categorization tasks. First a document representation is constructed based on feature selection from the linguistically analyzed corpus (using the Memory-Based Shallow Parser (MBSP)). These are associated with each of the 145 authors or each of the four components of the Myers-Briggs Type Indicator (Introverted-Extraverted, Sensing-iNtuitive, Thinking-Feeling, Judging-Perceiving). Authorship attribution on 145 authors achieves results around 50{\%} accuracy. Preliminary results indicate that the first two personality dimensions can be predicted fairly accurately. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,418
inproceedings | spracklin-etal-2008-using | Using the Complexity of the Distribution of Lexical Elements as a Feature in Authorship Attribution | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1031/ | Spracklin, Leanne and Inkpen, Diana and Nayak, Amiya | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Traditional Authorship Attribution models extract normalized counts of lexical elements such as nouns, common words and punctuation and use these normalized counts or ratios as features for author fingerprinting. The text is viewed as a bag-of-words and the order of words and their position relative to other words is largely ignored. We propose a new method of feature extraction which quantifies the distribution of lexical elements within the text using Kolmogorov complexity estimates. Testing carried out on blog corpora indicates that such measures outperform ratios when used as features in an SVM authorship attribution model. Moreover, by adding complexity estimates to a model using ratios, we were able to increase the F-measure by 5.2-11.8{\%} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,419 |
inproceedings | schmidt-etal-2008-exchange | An Exchange Format for Multimodal Annotations | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1032/ | Schmidt, Thomas and Duncan, Susan and Ehmer, Oliver and Hoyt, Jeffrey and Kipp, Michael and Loehr, Dan and Magnusson, Magnus and Rose, Travis and Sloetjes, Han | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation of multimodality. We propose a multimodal annotation exchange format, based on the annotation graph formalism, which is supported by import and export routines in the respective tools. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,420 |
inproceedings | stoia-etal-2008-scare | {SCARE}: a Situated Corpus with Annotated Referring Expressions | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1033/ | Stoia, Laura and Shockley, Darla Magdalene and Byron, Donna K. and Fosler-Lussier, Eric | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Even though a wealth of speech data is available for the dialog systems research community, the particular field of situated language has yet to find an appropriate free resource. The corpus required to answer research questions related to situated language should connect world information to the human language. In this paper we report on the release of a corpus of English spontaneous instruction giving situated dialogs. The corpus was collected using the Quake environment, a first-person virtual reality game, and consists of pairs of participants completing a direction giver / direction follower scenario. The corpus contains the collected audio and video, as well as word-aligned transcriptions and the positional/gaze information of the player. Referring expressions in the corpus are annotated with the IDs of their virtual world referents. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,421
inproceedings | sloetjes-wittenburg-2008-annotation | Annotation by Category: {ELAN} and {ISO} {DCR} | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1034/ | Sloetjes, Han and Wittenburg, Peter | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The Data Category Registry is one of the ISO initiatives towards the establishment of standards for Language Resource management, creation and coding. Successful application of the DCR depends on the availability of tools that can interact with it. This paper describes the first steps that have been taken to provide users of the multimedia annotation tool ELAN, with the means to create references from tiers and annotations to data categories defined in the ISO Data Category Registry. It first gives a brief description of the capabilities of ELAN and the structure of the documents it creates. After a concise overview of the goals and current state of the ISO DCR infrastructure, a description is given of how the preliminary connectivity with the DCR is implemented in ELAN. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,422 |
inproceedings | brugman-etal-2008-common | A Common Multimedia Annotation Framework for Cross Linking Cultural Heritage Digital Collections | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1035/ | Brugman, Hennie and Malais{\'e}, V{\'e}ronique and Hollink, Laura | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | In the context of the CATCH research program that is currently carried out at a number of large Dutch cultural heritage institutions our ambition is to combine and exchange heterogeneous multimedia annotations between projects and institutions. As a first step we designed an Annotation Meta Model: a simple but powerful RDF/OWL model mainly addressing the anchoring of annotations to segments of the many different media types used in the collections of the archives, museums and libraries involved. The model includes support for the annotation of annotations themselves, and of segments of annotation values, to be able to layer annotations and in this way enable projects to process each other's annotation data as the primary data for further annotation. On the basis of AMM we designed an application programming interface for accessing annotation repositories and implemented it both as a software library and as a web service. Finally, we report on our experiences with the application of the model, API and repository when developing web applications for collection managers in cultural heritage institutions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,423
inproceedings | blache-etal-2008-creating | Creating and Exploiting Multimodal Annotated Corpora | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1036/ | Blache, Philippe and Bertrand, Roxane and Ferr{\'e}, Ga{\"e}lle | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The paper presents a project of the Laboratoire Parole {\&} Langage which aims at collecting, annotating and exploiting a corpus of spoken French in a multimodal perspective. The project directly meets the present needs in linguistics where a growing number of researchers become aware of the fact that a theory of communication which aims at describing real interactions should take into account the complexity of these interactions. However, in order to take into account such a complexity, linguists should have access to spoken corpora annotated in different fields. The paper presents the annotation schemes used in phonetics, morphology and syntax, prosody, gestuality at the LPL together with the type of linguistic description made from the annotations seen in two examples. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,424
inproceedings | burchardt-pennacchiotti-2008-fate | {FATE}: a {F}rame{N}et-Annotated Corpus for Textual Entailment | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1038/ | Burchardt, Aljoscha and Pennacchiotti, Marco | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Several studies indicate that the level of predicate-argument structure is relevant for modeling prevalent phenomena in current textual entailment corpora. Although large resources like FrameNet have recently become available, attempts to integrate this type of information into a system for textual entailment did not confirm the expected gain in performance. The reasons for this are not fully obvious; candidates include FrameNet's restricted coverage, limitations of semantic parsers, or insufficient modeling of FrameNet information. To enable further insight on this issue, in this paper we present FATE (FrameNet-Annotated Textual Entailment), a manually crafted, fully reliable frame-annotated RTE corpus. The annotation has been carried out over the 800 pairs of the RTE-2 test set. This dataset offers a safe basis for RTE systems to experiment, and enables researchers to develop clearer ideas on how to effectively integrate frame knowledge in semantic inference tasks like recognizing textual entailment. We describe and present statistics over the adopted annotation, which introduces a new schema based on full-text annotation of so-called relevant frame-evoking elements. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,426
inproceedings | boxwell-white-2008-projecting | Projecting {P}ropbank Roles onto the {CCG}bank | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1039/ | Boxwell, Stephen and White, Michael | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper describes a method of accurately projecting Propbank roles onto constituents in the CCGbank and automatically annotating verbal categories with the semantic roles of their arguments. This method will be used to improve the structure of the derivations in the CCGbank and to facilitate research on semantic role tagging and broad coverage generation with CCG. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,427 |
inproceedings | vossen-etal-2008-integrating | Integrating Lexical Units, Synsets and Ontology in the Cornetto Database | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1040/ | Vossen, Piek and Maks, Isa and Segers, Roxane and VanderVliet, Hennie | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Cornetto is a two-year Stevin project (project number STE05039) in which a lexical semantic database is built that combines Wordnet with Framenet-like information for Dutch. The combination of the two lexical resources (the Dutch Wordnet and the Referentie Bestand Nederlands) will result in a much richer relational database that may improve natural language processing (NLP) technologies, such as word-sense disambiguation and language-generation systems. In addition to merging the Dutch lexicons, the database is also mapped to a formal ontology to provide a more solid semantic backbone. Since the database represents different traditions and perspectives of semantic organization, a key issue in the project is the alignment of concepts across the resources. This paper discusses our methodology to first automatically align the word meanings and secondly to manually revise the most critical cases. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,428
inproceedings | alvez-etal-2008-complete | Complete and Consistent Annotation of {W}ord{N}et using the Top Concept Ontology | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1041/ | {\'A}lvez, Javier and Atserias, Jordi and Carrera, Jordi and Climent, Salvador and Laparra, Egoitz and Oliver, Antoni and Rigau, German | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper presents the complete and consistent ontological annotation of the nominal part of WordNet. The annotation has been carried out using the semantic features defined in the EuroWordNet Top Concept Ontology and made available to the NLP community. Up to now only an initial core set of 1,024 synsets, the so-called Base Concepts, was ontologized in such a way. The work has been achieved by following a methodology based on an iterative and incremental expansion of the initial labeling through the hierarchy while setting inheritance blockage points. Since this labeling has been set on EuroWordNet's Interlingual Index (ILI), it can also be used to populate any other wordnet linked to it through a simple porting process. This feature-annotated WordNet is intended to be useful for a large number of semantic NLP tasks and for testing for the first time componential analysis on real environments. Moreover, the quantitative analysis of the work shows that more than 40{\%} of the nominal part of WordNet is involved in structure errors or inadequacies. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,429
inproceedings | popescu-grefenstette-2008-conceptual | A Conceptual Approach to Web Image Retrieval | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1042/ | Popescu, Adrian and Grefenstette, Gregory | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | People use the Internet to find a wide variety of images. Existing image search engines do not understand the pictures they return. The introduction of semantic layers in information retrieval frameworks may enhance the quality of the results compared to existing systems. One important challenge in the field is to develop architectures that fit the requirements of real-life applications, like the Internet search engines. In this paper, we describe Olive, an image retrieval application that exploits a large scale conceptual hierarchy (extracted from WordNet) to automatically reformulate user queries, search for associated images and present results in an interactive and structured fashion. When searching a concept in the hierarchy, Olive reformulates the query using its deepest subtypes in WordNet. On the answers page, the system displays a selection of related classes and proposes a content based retrieval functionality among the pictures sharing the same linguistic label. In order to validate our approach, we ran two series of tests to assess the performance of the application and report the results here. First, two precision evaluations over a panel of concepts from different domains are realized and second, a user test is designed so as to assess the interaction with the system. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,430
inproceedings | lecorve-etal-2008-use | On the Use of Web Resources and Natural Language Processing Techniques to Improve Automatic Speech Recognition Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1043/ | Lecorv{\'e}, Gw{\'e}nol{\'e} and Gravier, Guillaume and S{\'e}billot, Pascale | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Language models used in current automatic speech recognition systems are trained on general-purpose corpora and are therefore not relevant to transcribe spoken documents dealing with successive precise topics, such as long multimedia streams, frequently tackling reportages and debates. To overcome this problem, this paper shows that Web resources and natural language processing techniques can be effective to automatically adapt the baseline language model of an automatic speech recognition system to any encountered topic. More precisely, we detail how to characterize the topic of a transcription segment and how to collect Web pages from which a topic-specific language model can be trained. Then, an adapted language model is obtained by combining the topic-specific language model with the general-purpose language model. Finally, new transcriptions are generated using the adapted language model and are compared with transcriptions previously obtained with the baseline language model. Experiments show that our topic adaptation technique leads to significant transcription quality gains. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,431
inproceedings | oger-etal-2008-local | Local Methods for On-Demand Out-of-Vocabulary Word Retrieval | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1044/ | Oger, Stanislas and Linar{\`e}s, Georges and B{\'e}chet, Fr{\'e}d{\'e}ric | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Most of the Web-based methods for lexicon augmenting consist in capturing global semantic features of the targeted domain in order to collect relevant documents from the Web. We suggest that the local context of the out-of-vocabulary (OOV) words contains relevant information on the OOV words. With this information, we propose to use the Web to build locally-augmented lexicons which are used in a final local decoding pass. First, an automatic web based OOV word detection method is proposed. Then, we demonstrate the relevance of the Web for the OOV word retrieval. Different methods are proposed to retrieve the hypothesis words. We finally retrieve about 26{\%} of the OOV words with a lexicon increase of less than 1000 words using the reference context. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,432 |
inproceedings | kemps-snijders-etal-2008-exploring | Exploring and Enriching a Language Resource Archive via the Web | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1045/ | Kemps-Snijders, Marc and Klassmann, Alex and Zinn, Claus and Berck, Peter and Russel, Albert and Wittenburg, Peter | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | The "download first, then process" paradigm is still the predominant working method amongst the research community. The web-based paradigm, however, offers many advantages from a tool development and data management perspective as they allow a quick adaptation to changing research environments. Moreover, new ways of combining tools and data are increasingly becoming available and will eventually enable a true web-based workflow approach, thus challenging the "download first, then process" paradigm. The necessary infrastructure for managing, exploring and enriching language resources via the Web will need to be delivered by projects like CLARIN and DARIAH. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,433
inproceedings | schiel-mogele-2008-talking | Talking and Looking: the {S}mart{W}eb Multimodal Interaction Corpus | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1046/ | Schiel, Florian and M{\"o}gele, Hannes | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | Nowadays portable devices such as smart phones can be used to capture the face of a user simultaneously with the voice input. Server-based or even embedded dialogue systems might utilize this additional information to detect whether the speaking user addresses the system or other parties or whether the listening user is focused on the display or not. Depending on these findings the dialogue system might change its strategy to interact with the user improving the overall communication between human and system. To develop and test methods for On/Off-Focus detection a multimodal corpus of user-machine interactions was recorded within the German SmartWeb project. The corpus comprises 99 recording sessions of a triad communication between the user, the system and a human companion. The user can address/watch/listen to the system but also talk to his companion, read from the display or simply talk to herself. Facial video is captured with a standard built-in video camera of a smart phone while voice input is being recorded by a high quality close microphone as well as over a realistic transmission line via Bluetooth and WCDMA. The resulting SmartWeb Video Corpus (SVC) can be obtained from the Bavarian Archive for Speech Signals. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,434
inproceedings | hinrichs-lau-2008-contrast | In Contrast - A Complex Discourse Connective | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1047/ | Hinrichs, Erhard and L{\u{a}}u, Monica | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | This paper presents a corpus-based study of the discourse connective in contrast. The corpus data are drawn from the British National Corpus (BNC) and are analyzed at the levels of syntax, discourse structure, and compositional semantics. Following Webber et al. (2003), the paper argues that in contrast crucially involves discourse anaphora and, thus, resembles other discourse adverbials such as then, otherwise, and nevertheless. The compositional semantics proposed for other discourse connectives, however, does not straightforwardly generalize to in contrast, for which the notions of contrast pairs and contrast properties are essential. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,435 |
inproceedings | rehm-etal-2008-towards | Towards a Reference Corpus of Web Genres for the Evaluation of Genre Identification Systems | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1048/ | Rehm, Georg and Santini, Marina and Mehler, Alexander and Braslavski, Pavel and Gleim, R{\"u}diger and Stubbe, Andrea and Symonenko, Svetlana and Tavosanis, Mirko and Vidulin, Vedrana | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | We present initial results from an international and multi-disciplinary research collaboration that aims at the construction of a reference corpus of web genres. The primary application scenario for which we plan to build this resource is the automatic identification of web genres. Web genres are rather difficult to capture and to describe in their entirety, but we plan for the finished reference corpus to contain multi-level tags of the respective genre or genres a web document or a website instantiates. As the construction of such a corpus is by no means a trivial task, we discuss several alternatives that are, for the time being, mostly based on existing collections. Furthermore, we discuss a shared set of genre categories and a multi-purpose tool as two additional prerequisites for a reference corpus of web genres. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,436
inproceedings | uryupina-2008-error | Error Analysis for Learning-based Coreference Resolution | Calzolari, Nicoletta and Choukri, Khalid and Maegaard, Bente and Mariani, Joseph and Odijk, Jan and Piperidis, Stelios and Tapias, Daniel | may | 2008 | Marrakech, Morocco | European Language Resources Association (ELRA) | https://aclanthology.org/L08-1049/ | Uryupina, Olga | Proceedings of the Sixth International Conference on Language Resources and Evaluation ({LREC}`08) | null | State-of-the-art coreference resolution engines show similar performance figures (low sixties on the MUC-7 data). Our system with a rich linguistically motivated feature set yields significantly better performance values for a variety of machine learners, but still leaves substantial room for improvement. In this paper we address a relatively unexplored area of coreference resolution - we present a detailed error analysis in order to understand the issues raised by corpus-based approaches to coreference resolution. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 83,437 |