entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | shu-etal-2017-doc | {DOC}: Deep Open Classification of Text Documents | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1314/ | Shu, Lei and Xu, Hu and Liu, Bing | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2911--2916 | Traditional supervised learning makes the closed-world assumption that the classes appearing in the test data must have appeared in training. This also applies to text learning or text classification. As learning is used increasingly in dynamic open environments where some new/test documents may not belong to any of the training classes, identifying these novel documents during classification presents an important problem. This problem is called open-world classification or open classification. This paper proposes a novel deep learning based approach. It outperforms existing state-of-the-art techniques dramatically. | null | null | 10.18653/v1/D17-1314 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,802 |
inproceedings | gangal-etal-2017-charmanteau | {C}harmanteau: Character Embedding Models For Portmanteau Creation | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1315/ | Gangal, Varun and Jhamtani, Harsh and Neubig, Graham and Hovy, Eduard and Nyberg, Eric | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2917--2922 | Portmanteaus are a word formation phenomenon where two words combine into a new word. We propose character-level neural sequence-to-sequence (S2S) methods for the task of portmanteau generation that are end-to-end-trainable, language independent, and do not explicitly use additional phonetic information. We propose a noisy-channel-style model, which allows for the incorporation of unsupervised word lists, improving performance over a standard source-to-target model. This model is made possible by an exhaustive candidate generation strategy specifically enabled by the features of the portmanteau task. Experiments find our approach superior to a state-of-the-art FST-based baseline with respect to ground truth accuracy and human evaluation. | null | null | 10.18653/v1/D17-1315 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,803 |
inproceedings | gutierrez-etal-2017-using | Using Automated Metaphor Identification to Aid in Detection and Prediction of First-Episode Schizophrenia | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1316/ | Guti{\'e}rrez, E. Dar{\'i}o and Cecchi, Guillermo and Corcoran, Cheryl and Corlett, Philip | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2923--2930 | The diagnosis of serious mental health conditions such as schizophrenia is based on the judgment of clinicians whose training takes several years, and cannot be easily formalized into objective measures. However, previous research suggests there are disturbances in aspects of the language use of patients with schizophrenia. Using metaphor-identification and sentiment-analysis algorithms to automatically generate features, we create a classifier that, with high accuracy, can predict which patients will develop (or currently suffer from) schizophrenia. To our knowledge, this study is the first to demonstrate the utility of automated metaphor identification algorithms for detection or prediction of disease. | null | null | 10.18653/v1/D17-1316 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,804 |
inproceedings | rashkin-etal-2017-truth | Truth of Varying Shades: Analyzing Language in Fake News and Political Fact-Checking | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1317/ | Rashkin, Hannah and Choi, Eunsol and Jang, Jin Yea and Volkova, Svitlana and Choi, Yejin | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2931--2937 | We present an analytic study on the language of news media in the context of political fact-checking and fake news detection. We compare the language of real news with that of satire, hoaxes, and propaganda to find linguistic characteristics of untrustworthy text. To probe the feasibility of automatic political fact-checking, we also present a case study based on PolitiFact.com using their factuality judgments on a 6-point scale. Experiments show that while media fact-checking remains to be an open research question, stylistic cues can help determine the truthfulness of text. | null | null | 10.18653/v1/D17-1317 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,805 |
inproceedings | menini-etal-2017-topic | Topic-Based Agreement and Disagreement in {US} Electoral Manifestos | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1318/ | Menini, Stefano and Nanni, Federico and Ponzetto, Simone Paolo and Tonelli, Sara | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2938--2944 | We present a topic-based analysis of agreement and disagreement in political manifestos, which relies on a new method for topic detection based on key concept clustering. Our approach outperforms both standard techniques like LDA and a state-of-the-art graph-based method, and provides promising initial results for this new task in computational social science. | null | null | 10.18653/v1/D17-1318 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,806 |
inproceedings | xu-koehn-2017-zipporah | {Z}ipporah: a Fast and Scalable Data Cleaning System for Noisy Web-Crawled Parallel Corpora | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1319/ | Xu, Hainan and Koehn, Philipp | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2945--2950 | We introduce Zipporah, a fast and scalable data cleaning system. We propose a novel type of bag-of-words translation feature, and train logistic regression models to classify good data and synthetic noisy data in the proposed feature space. The trained model is used to score parallel sentences in the data pool for selection. As shown in experiments, Zipporah selects a high-quality parallel corpus from a large, mixed quality data pool. In particular, for one noisy dataset, Zipporah achieves a 2.1 BLEU score improvement with using 1/5 of the data over using the entire corpus. | null | null | 10.18653/v1/D17-1319 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,807 |
inproceedings | falke-gurevych-2017-bringing | Bringing Structure into Summaries: Crowdsourcing a Benchmark Corpus of Concept Maps | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1320/ | Falke, Tobias and Gurevych, Iryna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2951--2961 | Concept maps can be used to concisely represent important information and bring structure into large document collections. Therefore, we study a variant of multi-document summarization that produces summaries in the form of concept maps. However, suitable evaluation datasets for this task are currently missing. To close this gap, we present a newly created corpus of concept maps that summarize heterogeneous collections of web documents on educational topics. It was created using a novel crowdsourcing approach that allows us to efficiently determine important elements in large document collections. We release the corpus along with a baseline system and proposed evaluation protocol to enable further research on this variant of summarization. | null | null | 10.18653/v1/D17-1320 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,808 |
inproceedings | kottur-etal-2017-natural | Natural Language Does Not Emerge {\textquoteleft}Naturally' in Multi-Agent Dialog | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1321/ | Kottur, Satwik and Moura, Jos{\'e} and Lee, Stefan and Batra, Dhruv | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2962--2967 | A number of recent works have proposed techniques for end-to-end learning of communication protocols among cooperative multi-agent populations, and have simultaneously found the emergence of grounded human-interpretable language in the protocols developed by the agents, learned without any human supervision! In this paper, using a Task {\&} Talk reference game between two agents as a testbed, we present a sequence of {\textquoteleft}negative' results culminating in a {\textquoteleft}positive' one {--} showing that while most agent-invented languages are effective (i.e. achieve near-perfect task rewards), they are decidedly not interpretable or compositional. In essence, we find that natural language does not emerge {\textquoteleft}naturally', despite the semblance of ease of natural-language-emergence that one may gather from recent literature. We discuss how it is possible to coax the invented languages to become more and more human-like and compositional by increasing restrictions on how two agents may communicate. | null | null | 10.18653/v1/D17-1321 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,809 |
inproceedings | yates-etal-2017-depression | Depression and Self-Harm Risk Assessment in Online Forums | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1322/ | Yates, Andrew and Cohan, Arman and Goharian, Nazli | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2968--2978 | Users suffering from mental health conditions often turn to online resources for support, including specialized online support communities or general communities such as Twitter and Reddit. In this work, we present a framework for supporting and studying users in both types of communities. We propose methods for identifying posts in support communities that may indicate a risk of self-harm, and demonstrate that our approach outperforms strong previously proposed methods for identifying such posts. Self-harm is closely related to depression, which makes identifying depressed users on general forums a crucial related task. We introduce a large-scale general forum dataset consisting of users with self-reported depression diagnoses matched with control users. We show how our method can be applied to effectively identify depressed users from their use of language alone. We demonstrate that our method outperforms strong baselines on this general forum dataset. | null | null | 10.18653/v1/D17-1322 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,810 |
inproceedings | zhao-etal-2017-men | Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints | Palmer, Martha and Hwa, Rebecca and Riedel, Sebastian | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-1323/ | Zhao, Jieyu and Wang, Tianlu and Yatskar, Mark and Ordonez, Vicente and Chang, Kai-Wei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing | 2979--2989 | Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found in web corpora. In this work, we study data and models associated with multilabel object classification and visual semantic role labeling. We find that (a) datasets for these tasks contain significant gender bias and (b) models trained on these datasets further amplify existing bias. For example, the activity cooking is over 33{\%} more likely to involve females than males in a training set, and a trained model further amplifies the disparity to 68{\%} at test time. We propose to inject corpus-level constraints for calibrating existing structured prediction models and design an algorithm based on Lagrangian relaxation for collective inference. Our method results in almost no performance loss for the underlying recognition task but decreases the magnitude of bias amplification by 47.5{\%} and 40.5{\%} for multilabel classification and visual semantic role labeling, respectively. | null | null | 10.18653/v1/D17-1323 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,811 |
inproceedings | habernal-etal-2017-argotario | {A}rgotario: Computational Argumentation Meets Serious Games | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2002/ | Habernal, Ivan and Hannemann, Raffael and Pollak, Christian and Klamm, Christopher and Pauli, Patrick and Gurevych, Iryna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 7--12 | An important skill in critical thinking and argumentation is the ability to spot and recognize fallacies. Fallacious arguments, omnipresent in argumentative discourse, can be deceptive, manipulative, or simply leading to {\textquoteleft}wrong moves' in a discussion. Despite their importance, argumentation scholars and NLP researchers with focus on argumentation quality have not yet investigated fallacies empirically. The nonexistence of resources dealing with fallacious argumentation calls for scalable approaches to data acquisition and annotation, for which the serious games methodology offers an appealing, yet unexplored, alternative. We present Argotario, a serious game that deals with fallacies in everyday argumentation. Argotario is a multilingual, open-source, platform-independent application with strong educational aspects, accessible at \url{www.argotario.net}. | null | null | 10.18653/v1/D17-2002 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,814 |
inproceedings | ovesdotter-alm-etal-2017-analysis | An Analysis and Visualization Tool for Case Study Learning of Linguistic Concepts | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2003/ | Ovesdotter Alm, Cecilia and Meyers, Benjamin and Prud{'}hommeaux, Emily | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 13--18 | We present an educational tool that integrates computational linguistics resources for use in non-technical undergraduate language science courses. By using the tool in conjunction with evidence-driven pedagogical case studies, we strive to provide opportunities for students to gain an understanding of linguistic concepts and analysis through the lens of realistic problems in feasible ways. Case studies tend to be used in legal, business, and health education contexts, but less in the teaching and learning of linguistics. The approach introduced also has potential to encourage students across training backgrounds to continue on to computational language analysis coursework. | null | null | 10.18653/v1/D17-2003 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,815 |
inproceedings | falke-gurevych-2017-graphdocexplore | {G}raph{D}oc{E}xplore: A Framework for the Experimental Comparison of Graph-based Document Exploration Techniques | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2004/ | Falke, Tobias and Gurevych, Iryna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 19--24 | Graphs have long been proposed as a tool to browse and navigate in a collection of documents in order to support exploratory search. Many techniques to automatically extract different types of graphs, showing for example entities or concepts and different relationships between them, have been suggested. While experimental evidence that they are indeed helpful exists for some of them, it is largely unknown which type of graph is most helpful for a specific exploratory task. However, carrying out experimental comparisons with human subjects is challenging and time-consuming. Towards this end, we present the \textit{GraphDocExplore} framework. It provides an intuitive web interface for graph-based document exploration that is optimized for experimental user studies. Through a generic graph interface, different methods to extract graphs from text can be plugged into the system. Hence, they can be compared at minimal implementation effort in an environment that ensures controlled comparisons. The system is publicly available under an open-source license. | null | null | 10.18653/v1/D17-2004 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,816 |
inproceedings | stahlberg-etal-2017-sgnmt | {SGNMT} {--} A Flexible {NMT} Decoding Platform for Quick Prototyping of New Models and Search Strategies | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2005/ | Stahlberg, Felix and Hasler, Eva and Saunders, Danielle and Byrne, Bill | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 25--30 | This paper introduces SGNMT, our experimental platform for machine translation research. SGNMT provides a generic interface to neural and symbolic scoring modules (predictors) with left-to-right semantics, such as translation models like NMT, language models, translation lattices, n-best lists or other kinds of scores and constraints. Predictors can be combined with other predictors to form complex decoding tasks. SGNMT implements a number of search strategies for traversing the space spanned by the predictors which are appropriate for different predictor constellations. Adding new predictors or decoding strategies is particularly easy, making it a very efficient tool for prototyping new research ideas. SGNMT is actively being used by students in the MPhil program in Machine Learning, Speech and Language Technology at the University of Cambridge for course work and theses, as well as for most of the research work in our group. | null | null | 10.18653/v1/D17-2005 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,817 |
inproceedings | yanai-etal-2017-struap | {S}tru{AP}: A Tool for Bundling Linguistic Trees through Structure-based Abstract Pattern | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2006/ | Yanai, Kohsuke and Sato, Misa and Yanase, Toshihiko and Kurotsuchi, Kenzo and Koreeda, Yuta and Niwa, Yoshiki | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 31--36 | We present a tool for developing tree structure patterns that makes it easy to define the relations among textual phrases and create a search index for these newly defined relations. By using the proposed tool, users develop tree structure patterns through abstracting syntax trees. The tool features (1) intuitive pattern syntax, (2) unique functions such as recursive call of patterns and use of lexicon dictionaries, and (3) whole workflow support for relation development and validation. We report the current implementation of the tool and its effectiveness. | null | null | 10.18653/v1/D17-2006 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,818 |
inproceedings | mechanic-etal-2017-knowyournyms | {K}now{Y}our{N}yms? A Game of Semantic Relationships | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2007/ | Mechanic, Ross and Fulgoni, Dean and Cutler, Hannah and Rajana, Sneha and Liu, Zheyuan and Jackson, Bradley and Cocos, Anne and Callison-Burch, Chris and Apidianaki, Marianna | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 37--42 | Semantic relation knowledge is crucial for natural language understanding. We introduce {\textquotedblleft}KnowYourNyms?{\textquotedblright}, a web-based game for learning semantic relations. While providing users with an engaging experience, the application collects large amounts of data that can be used to improve semantic relation classifiers. The data also broadly informs us of how people perceive the relationships between words, providing useful insights for research in psychology and linguistics. | null | null | 10.18653/v1/D17-2007 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,819 |
inproceedings | akbik-vollgraf-2017-projector | The Projector: An Interactive Annotation Projection Visualization Tool | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2008/ | Akbik, Alan and Vollgraf, Roland | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 43--48 | Previous works proposed annotation projection in parallel corpora to inexpensively generate treebanks or propbanks for new languages. In this approach, linguistic annotation is automatically transferred from a resource-rich source language (SL) to translations in a target language (TL). However, annotation projection may be adversely affected by translational divergences between specific language pairs. For this reason, previous work often required careful qualitative analysis of projectability of specific annotation in order to define strategies to address quality and coverage issues. In this demonstration, we present THE PROJECTOR, an interactive GUI designed to assist researchers in such analysis: it allows users to execute and visually inspect annotation projection in a range of different settings. We give an overview of the GUI, discuss use cases and illustrate how the tool can facilitate discussions with the research community. | null | null | 10.18653/v1/D17-2008 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,820 |
inproceedings | sarnat-etal-2017-interactive | Interactive Visualization for Linguistic Structure | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2009/ | Sarnat, Aaron and Joshi, Vidur and Petrescu-Prahova, Cristian and Herrasti, Alvaro and Stilson, Brandon and Hopkins, Mark | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 49--54 | We provide a visualization library and web interface for interactively exploring a parse tree or a forest of parses. The library is not tied to any particular linguistic representation, but provides a general-purpose API for the interactive exploration of hierarchical linguistic structure. To facilitate rapid understanding of a complex structure, the API offers several important features, including expand/collapse functionality, positional and color cues, explicit visual support for sequential structure, and dynamic highlighting to convey node-to-text correspondence. | null | null | 10.18653/v1/D17-2009 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,821 |
inproceedings | schwartz-etal-2017-dlatk | {DLATK}: Differential Language Analysis {T}ool{K}it | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2010/ | Schwartz, H. Andrew and Giorgi, Salvatore and Sap, Maarten and Crutchley, Patrick and Ungar, Lyle and Eichstaedt, Johannes | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 55--60 | We present Differential Language Analysis Toolkit (DLATK), an open-source python package and command-line tool developed for conducting social-scientific language analyses. While DLATK provides standard NLP pipeline steps such as tokenization or SVM-classification, its novel strengths lie in analyses useful for psychological, health, and social science: (1) incorporation of extra-linguistic structured information, (2) specified levels and units of analysis (e.g. document, user, community), (3) statistical metrics for continuous outcomes, and (4) robust, proven, and accurate pipelines for social-scientific prediction problems. DLATK integrates multiple popular packages (SKLearn, Mallet), enables interactive usage (Jupyter Notebooks), and generally follows object oriented principles to make it easy to tie in additional libraries or storage technologies. | null | null | 10.18653/v1/D17-2010 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,822 |
inproceedings | abujabal-etal-2017-quint | {QUINT}: Interpretable Question Answering over Knowledge Bases | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2011/ | Abujabal, Abdalghani and Saha Roy, Rishiraj and Yahya, Mohamed and Weikum, Gerhard | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 61--66 | We present QUINT, a live system for question answering over knowledge bases. QUINT automatically learns role-aligned utterance-query templates from user questions paired with their answers. When QUINT answers a question, it visualizes the complete derivation sequence from the natural language utterance to the final answer. The derivation provides an explanation of how the syntactic structure of the question was used to derive the structure of a SPARQL query, and how the phrases in the question were used to instantiate different parts of the query. When an answer seems unsatisfactory, the derivation provides valuable insights towards reformulating the question. | null | null | 10.18653/v1/D17-2011 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,823 |
inproceedings | richardson-kuhn-2017-function | Function Assistant: A Tool for {NL} Querying of {API}s | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2012/ | Richardson, Kyle and Kuhn, Jonas | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 67--72 | In this paper, we describe Function Assistant, a lightweight Python-based toolkit for querying and exploring source code repositories using natural language. The toolkit is designed to help end-users of a target API quickly find information about functions through high-level natural language queries, or descriptions. For a given text query and background API, the tool finds candidate functions by performing a translation from the text to known representations in the API using the semantic parsing approach of (Richardson and Kuhn, 2017). Translations are automatically learned from example text-code pairs in example APIs. The toolkit includes features for building translation pipelines and query engines for arbitrary source code projects. To explore this last feature, we perform new experiments on 27 well-known Python projects hosted on Github. | null | null | 10.18653/v1/D17-2012 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,824 |
inproceedings | huang-etal-2017-moodswipe | {M}ood{S}wipe: A Soft Keyboard that Suggests Messages Based on User-Specified Emotions | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2013/ | Huang, Chieh-Yang and Labetoulle, Tristan and Huang, Ting-Hao and Chen, Yi-Pei and Chen, Hung-Chen and Srivastava, Vallari and Ku, Lun-Wei | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 73--78 | We present MoodSwipe, a soft keyboard that suggests text messages given the user-specified emotions utilizing the real dialog data. The aim of MoodSwipe is to create a convenient user interface to enjoy the technology of emotion classification and text suggestion, and at the same time to collect labeled data automatically for developing more advanced technologies. While users select the MoodSwipe keyboard, they can type as usual but sense the emotion conveyed by their text and receive suggestions for their message as a benefit. In MoodSwipe, the detected emotions serve as the medium for suggested texts, where viewing the latter is the incentive to correcting the former. We conduct several experiments to show the superiority of the emotion classification models trained on the dialog data, and further to verify good emotion cues are important context for text suggestion. | null | null | 10.18653/v1/D17-2013 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,825 |
inproceedings | miller-etal-2017-parlai | {P}arl{AI}: A Dialog Research Software Platform | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2014/ | Miller, Alexander and Feng, Will and Batra, Dhruv and Bordes, Antoine and Fisch, Adam and Lu, Jiasen and Parikh, Devi and Weston, Jason | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 79--84 | We introduce ParlAI (pronounced {\textquotedblleft}par-lay{\textquotedblright}), an open-source software platform for dialog research implemented in Python, available at \url{http://parl.ai}. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others' models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs. | null | null | 10.18653/v1/D17-2014 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,826 |
inproceedings | richter-etal-2017-heidelplace | {H}eidel{P}lace: An Extensible Framework for Geoparsing | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2015/ | Richter, Ludwig and Gei{\ss}, Johanna and Spitz, Andreas and Gertz, Michael | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 85--90 | Geographic information extraction from textual data sources, called geoparsing, is a key task in text processing and central to subsequent spatial analysis approaches. Several geoparsers are available that support this task, each with its own (often limited or specialized) gazetteer and its own approaches to toponym detection and resolution. In this demonstration paper, we present HeidelPlace, an extensible framework in support of geoparsing. Key features of HeidelPlace include a generic gazetteer model that supports the integration of place information from different knowledge bases, and a pipeline approach that enables an effective combination of diverse modules tailored to specific geoparsing tasks. This makes HeidelPlace a valuable tool for testing and evaluating different gazetteer sources and geoparsing methods. In the demonstration, we show how to set up a geoparsing workflow with HeidelPlace and how it can be used to compare and consolidate the output of different geoparsing approaches. | null | null | 10.18653/v1/D17-2015 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,827 |
inproceedings | panchenko-etal-2017-unsupervised | Unsupervised, Knowledge-Free, and Interpretable Word Sense Disambiguation | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2016/ | Panchenko, Alexander and Marten, Fide and Ruppert, Eugen and Faralli, Stefano and Ustalov, Dmitry and Ponzetto, Simone Paolo and Biemann, Chris | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 91--96 | Interpretability of a predictive model is a powerful feature that gains the trust of users in the correctness of the predictions. In word sense disambiguation (WSD), knowledge-based systems tend to be much more interpretable than knowledge-free counterparts as they rely on the wealth of manually-encoded elements representing word senses, such as hypernyms, usage examples, and images. We present a WSD system that bridges the gap between these two so far disconnected groups of methods. Namely, our system, providing access to several state-of-the-art WSD models, aims to be interpretable as a knowledge-based system while it remains completely unsupervised and knowledge-free. The presented tool features a Web interface for all-word disambiguation of texts that makes the sense predictions human readable by providing interpretable word sense inventories, sense representations, and disambiguation results. We provide a public API, enabling seamless integration. | null | null | 10.18653/v1/D17-2016 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,828 |
inproceedings | dernoncourt-etal-2017-neuroner | {N}euro{NER}: an easy-to-use program for named-entity recognition based on neural networks | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2017/ | Dernoncourt, Franck and Lee, Ji Young and Szolovits, Peter | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 97--102 | Named-entity recognition (NER) aims at identifying entities of interest in a text. Artificial neural networks (ANNs) have recently been shown to outperform existing NER systems. However, ANNs remain challenging to use for non-expert users. In this paper, we present NeuroNER, an easy-to-use named-entity recognition tool based on ANNs. Users can annotate entities using a graphical web-based user interface (BRAT): the annotations are then used to train an ANN, which in turn predict entities' locations and categories in new texts. NeuroNER makes this annotation-training-prediction flow smooth and accessible to anyone. | null | null | 10.18653/v1/D17-2017 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,829 |
inproceedings | papandrea-etal-2017-supwsd | {S}up{WSD}: A Flexible Toolkit for Supervised Word Sense Disambiguation | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2018/ | Papandrea, Simone and Raganato, Alessandro and Delli Bovi, Claudio | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 103--108 | In this demonstration we present SupWSD, a Java API for supervised Word Sense Disambiguation (WSD). This toolkit includes the implementation of a state-of-the-art supervised WSD system, together with a Natural Language Processing pipeline for preprocessing and feature extraction. Our aim is to provide an easy-to-use tool for the research community, designed to be modular, fast and scalable for training and testing on large datasets. The source code of SupWSD is available at \url{http://github.com/SI3P/SupWSD}. | null | null | 10.18653/v1/D17-2018 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,830 |
inproceedings | shapira-etal-2017-interactive | Interactive Abstractive Summarization for Event News Tweets | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2019/ | Shapira, Ori and Ronen, Hadar and Adler, Meni and Amsterdamer, Yael and Bar-Ilan, Judit and Dagan, Ido | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 109--114 | We present a novel interactive summarization system that is based on abstractive summarization, derived from a recent consolidated knowledge representation for multiple texts. We incorporate a couple of interaction mechanisms, providing a bullet-style summary while allowing to attain the most important information first and interactively drill down to more specific details. A usability study of our implementation, for event news tweets, suggests the utility of our approach for text exploration. | null | null | 10.18653/v1/D17-2019 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,831 |
inproceedings | abzianidze-2017-langpro | {L}ang{P}ro: Natural Language Theorem Prover | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2020/ | Abzianidze, Lasha | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 115--120 | LangPro is an automated theorem prover for natural language. Given a set of premises and a hypothesis, it is able to prove semantic relations between them. The prover is based on a version of analytic tableau method specially designed for natural logic. The proof procedure operates on logical forms that preserve linguistic expressions to a large extent. The nature of proofs is deductive and transparent. On the FraCaS and SICK textual entailment datasets, the prover achieves high results comparable to state-of-the-art. | null | null | 10.18653/v1/D17-2020 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,832
inproceedings | lee-etal-2017-interactive | Interactive Visualization and Manipulation of Attention-based Neural Machine Translation | Specia, Lucia and Post, Matt and Paul, Michael | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-2021/ | Lee, Jaesong and Shin, Joong-Hwi and Kim, Jun-Seok | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: System Demonstrations | 121--126 | While neural machine translation (NMT) provides high-quality translation, it is still hard to interpret and analyze its behavior. We present an interactive interface for visualizing and intervening behavior of NMT, specifically concentrating on the behavior of beam search mechanism and attention component. The tool (1) visualizes search tree and attention and (2) provides interface to adjust search tree and attention weight (manually or automatically) at real-time. We show the tool gives various methods to understand NMT. | null | null | 10.18653/v1/D17-2021 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,833 |
inproceedings | pasca-2017-acquisition | Acquisition, Representation and Usage of Conceptual Hierarchies | Birch, Alexandra and Schneider, Nathan | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-3001/ | Pasca, Marius | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Through subsumption and instantiation, individual instances ({\textquotedblleft}artificial intelligence{\textquotedblright}, {\textquotedblleft}the spotted pig{\textquotedblright}) otherwise spanning a wide range of domains can be brought together and organized under conceptual hierarchies. The hierarchies connect more specific concepts ({\textquotedblleft}computer science subfields{\textquotedblright}, {\textquotedblleft}gastropubs{\textquotedblright}) to more general concepts ({\textquotedblleft}academic disciplines{\textquotedblright}, {\textquotedblleft}restaurants{\textquotedblright}) through IsA relations. Explicit or implicit properties applicable to, and defining, more general concepts are inherited by their more specific concepts, down to the instances connected to the lower parts of the hierarchies. Subsumption represents a crisp, universally-applicable principle towards consistently representing IsA relations in any knowledge resource. Yet knowledge resources often exhibit significant differences in their scope, representation choices and intended usage, to cause significant differences in their expected usage and impact on various tasks. This tutorial examines the theoretical foundations of subsumption, and its practical embodiment through IsA relations compiled manually or extracted automatically. It addresses IsA relations from their formal definition; through practical choices made in their representation within the larger and more widely-used of the available knowledge resources; to their automatic acquisition from document repositories, as opposed to their manual compilation by human contributors; to their impact in text analysis and information retrieval. As search engines move away from returning a set of links and closer to returning results that more directly answer queries, IsA relations play an increasingly important role towards a better understanding of documents and queries. The tutorial teaches the audience about definitions, assumptions and practical choices related to modeling and representing IsA relations in existing, human-compiled resources of instances, concepts and resulting conceptual hierarchies; methods for automatically extracting sets of instances within unlabeled or labeled concepts, where the concepts may be considered as a flat set or organized hierarchically; and applications of IsA relations in information retrieval. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,834
inproceedings | malliaros-vazirgiannis-2017-graph | Graph-based Text Representations: Boosting Text Mining, {NLP} and Information Retrieval with Graphs | Birch, Alexandra and Schneider, Nathan | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-3003/ | Malliaros, Fragkiskos D. and Vazirgiannis, Michalis | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | Graphs or networks have been widely used as modeling tools in Natural Language Processing (NLP), Text Mining (TM) and Information Retrieval (IR). Traditionally, the unigram bag-of-words representation is applied; that way, a document is represented as a multiset of its terms, disregarding dependencies between the terms. Although several variants and extensions of this modeling approach have been proposed (e.g., the n-gram model), the main weakness comes from the underlying term independence assumption. The order of the terms within a document is completely disregarded and any relationship between terms is not taken into account in the final task (e.g., text categorization). Nevertheless, as the heterogeneity of text collections is increasing (especially with respect to document length and vocabulary), the research community has started exploring different document representations aiming to capture more fine-grained contexts of co-occurrence between different terms, challenging the well-established unigram bag-of-words model. To this direction, graphs constitute a well-developed model that has been adopted for text representation. The goal of this tutorial is to offer a comprehensive presentation of recent methods that rely on graph-based text representations to deal with various tasks in NLP and IR. We will describe basic as well as novel graph theoretic concepts and we will examine how they can be applied in a wide range of text-related application domains. All the material associated to the tutorial will be available at: \url{http://fragkiskosm.github.io/projects/graph_text_tutorial} | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,836
inproceedings | marcheggiani-etal-2017-semantic | Semantic Role Labeling | Birch, Alexandra and Schneider, Nathan | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-3004/ | Marcheggiani, Diego and Roth, Michael and Titov, Ivan and Van Durme, Benjamin | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | This tutorial describes semantic role labelling (SRL), the task of mapping text to shallow semantic representations of eventualities and their participants. The tutorial introduces the SRL task and discusses recent research directions related to the task. The audience of this tutorial will learn about the linguistic background and motivation for semantic roles, and also about a range of computational models for this task, from early approaches to the current state-of-the-art. We will further discuss recently proposed variations to the traditional SRL task, including topics such as semantic proto-role labeling. We also cover techniques for reducing required annotation effort, such as methods exploiting unlabeled corpora (semi-supervised and unsupervised techniques), model adaptation across languages and domains, and methods for crowdsourcing semantic role annotation (e.g., question-answer driven SRL). We cover methods based on different machine learning paradigms, including neural networks, generative Bayesian models, graph-based algorithms and bootstrapping-style techniques. Beyond sentence-level SRL, we discuss work that involves semantic roles in discourse. In particular, we cover data sets and models related to the task of identifying implicit roles and linking them to discourse antecedents. We introduce different approaches to this task from the literature, including models based on coreference resolution, centering, and selectional preferences. We also review how new insights gained through them can be useful for the traditional SRL task. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,837
inproceedings | faruqui-etal-2017-cross | Cross-Lingual Word Representations: Induction and Evaluation | Birch, Alexandra and Schneider, Nathan | sep | 2017 | Copenhagen, Denmark | Association for Computational Linguistics | https://aclanthology.org/D17-3007/ | Faruqui, Manaal and S{\o}gaard, Anders and Vuli{\'c}, Ivan | Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts | null | In recent past, NLP as a field has seen tremendous utility of distributional word vector representations as features in downstream tasks. The fact that these word vectors can be trained on unlabeled monolingual corpora of a language makes them an inexpensive resource in NLP. With the increasing use of monolingual word vectors, there is a need for word vectors that can be used as efficiently across multiple languages as monolingually. Therefore, learning bilingual and multilingual word embeddings/vectors is currently an important research topic. These vectors offer an elegant and language-pair independent way to represent content across different languages. This tutorial aims to bring NLP researchers up to speed with the current techniques in cross-lingual word representation learning. We will first discuss how to induce cross-lingual word representations (covering both bilingual and multilingual ones) from various data types and resources (e.g., parallel data, comparable data, non-aligned monolingual data in different languages, dictionaries and thesauri, or, even, images, eye-tracking data). We will then discuss how to evaluate such representations, intrinsically and extrinsically. We will introduce researchers to state-of-the-art methods for constructing cross-lingual word representations and discuss their applicability in a broad range of downstream NLP applications. We will deliver a detailed survey of the current methods, discuss best training and evaluation practices and use-cases, and provide links to publicly available implementations, datasets, and pre-trained models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,840
article | pustejovsky-joshi-2017-lexical | Lexical Factorization and Syntactic Behavior | null | null | 2017 | null | CSLI Publications | https://aclanthology.org/2017.lilt-15.1/ | Pustejovsky, James and Joshi, Aravind | null | null | In this paper, we examine the correlation between lexical semantics and the syntactic realization of the different components of a word's meaning in natural language. More specifically, we will explore the effect that lexical factorization in verb semantics has on the suppression or expression of semantic features within the sentence. Factorization was a common analytic tool employed in early generative linguistic approaches to lexical decomposition, and continues to play a role in contemporary semantics, in various guises and modified forms. Building on the unpublished analysis of verbs of seeing in Joshi (1972), we argue here that the significance of lexical factorization is twofold: first, current models of verb meaning owe much of their insight to factor-based theories of meaning; secondly, the factorization properties of a lexical item appear to influence, both directly and indirectly, the possible syntactic expressibility of arguments and adjuncts in sentence composition. We argue that this information can be used to compute what we call the factor expression likelihood (FEL) associated with a verb in a sentence. This is the likelihood that the overt syntactic expression of a factor will cooccur with the verb. This has consequences for the compositional mechanisms responsible for computing the meaning of the sentence, as well as significance in the creation of computational models attempting to capture linguistic behavior over large corpora. | Linguistic Issues in Language Technology | 15 | null | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,904
article | bernardy-lappin-2017-using | Using Deep Neural Networks to Learn Syntactic Agreement | null | null | 2017 | null | CSLI Publications | https://aclanthology.org/2017.lilt-15.3/ | Bernardy, Jean-Phillipe and Lappin, Shalom | null | null | We consider the extent to which different deep neural network (DNN) configurations can learn syntactic relations, by taking up Linzen et al.'s (2016) work on subject-verb agreement with LSTM RNNs. We test their methods on a much larger corpus than they used (a ~24 million example part of the WaCky corpus, instead of their ~1.35 million example corpus, both drawn from Wikipedia). We experiment with several different DNN architectures (LSTM RNNs, GRUs, and CNNs), and alternative parameter settings for these systems (vocabulary size, training to test ratio, number of layers, memory size, drop out rate, and lexical embedding dimension size). We also try out our own unsupervised DNN language model. Our results are broadly compatible with those that Linzen et al. report. However, we discovered some interesting, and in some cases, surprising features of DNNs and language models in their performance of the agreement learning task. In particular, we found that DNNs require large vocabularies to form substantive lexical embeddings in order to learn structural patterns. This finding has interesting consequences for our understanding of the way in which DNNs represent syntactic information. It suggests that DNNs learn syntactic patterns more efficiently through rich lexical embeddings, with semantic as well as syntactic cues, than from training on lexically impoverished strings that highlight structural patterns. | Linguistic Issues in Language Technology | 15 | null | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,906
article | jezek-2017-dynamic | Dynamic Argument Structure | null | null | 2017 | null | CSLI Publications | https://aclanthology.org/2017.lilt-15.4/ | Jezek, Elisabetta | null | null | This paper presents a new classification of verbs of change and modification, proposing a dynamic interpretation of the lexical semantics of the predicate and its arguments. Adopting the model of dynamic event structure proposed in Pustejovsky (2013), and extending the model of dynamic selection outlined in Pustejovsky and Jezek (2011), we define a verb class in terms of its Dynamic Argument Structure (DAS), a representation which encodes how the participants involved in the change behave as the event unfolds. We address how the logical resources and results of change predicates are realized syntactically, if at all, as well as how the exploitation of the resource results in the initiation or termination of a new object, i.e. the result. We show how DAS can be associated with a dynamically encoded event structure representation, which measures the change making reference to a scalar component, modelled in terms of assignment and/or testing of values of attributes of participants. | Linguistic Issues in Language Technology | 15 | null | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,907 |
inproceedings | qasemizadeh-etal-2017-projection | Projection Al{\'e}atoire Non-N{\'e}gative pour le Calcul de Word Embedding / Non-Negative Randomized Word Embedding | Eshkol-Taravella, Iris and Antoine, Jean-Yves | 6 | 2017 | Orl{\'e}ans, France | ATALA | https://aclanthology.org/2017.jeptalnrecital-long.8/ | Qasemizadeh, Behrang and Kallmeyer, Laura and Herbelot, Aurelie | Actes des 24{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Volume 1 - Articles longs | 109--122 | Non-Negative Randomized Word Embedding We propose a word embedding method which is based on a novel random projection technique. We show that weighting methods such as positive pointwise mutual information (PPMI) can be applied to our models after their construction and at a reduced dimensionality. Hence, the proposed technique can efficiently transfer words onto semantically discriminative spaces while demonstrating high computational performance, besides benefits such as ease of update and a simple mechanism for interoperability. We report the performance of our method on several tasks and show that it yields competitive results compared to neural embedding methods in monolingual corpus-based setups. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,916 |
inproceedings | andreou-petitjean-2017-describing | Describing derivational polysemy with {XMG} | Eshkol-Taravella, Iris and Antoine, Jean-Yves | 6 | 2017 | Orl{\'e}ans, France | ATALA | https://aclanthology.org/2017.jeptalnrecital-court.12/ | Andreou, Marios and Petitjean, Simon | Actes des 24{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. Volume 2 - Articles courts | 94--101 | In this paper, we model and test the monosemy and polysemy approaches to derivational multiplicity of meaning, using Frame Semantics and XMG. In order to illustrate our claims and proposals, we use data from deverbal nominalizations with the suffix -al on verbs of change of possession (e.g. rental, disbursal). In our XMG implementation, we show that the underspecified meaning of affixes cannot always be reduced to a single unitary meaning and that the polysemy approach to multiplicity of meaning is more judicious compared to the monosemy approach. We also introduce constraints on the potential referents of derivatives. These constraints have the form of type constraints and specify which arguments in the frame of the verbal base are compatible with the referential argument of the derivative. The introduction of type constraints rules out certain readings because frame unification only succeeds if types are compatible. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,935 |
inproceedings | bawden-2017-machine-translation | Machine Translation of Speech-Like Texts: Strategies for the Inclusion of Context | Eshkol-Taravella, Iris and Antoine, Jean-Yves | 6 | 2017 | Orl{\'e}ans, France | ATALA | https://aclanthology.org/2017.jeptalnrecital-recital.1/ | Bawden, Rachel | Actes des 24{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. 19es REncontres jeunes Chercheurs en Informatique pour le TAL (RECITAL 2017) | 1--14 | Whilst the focus of Machine Translation (MT) has for a long time been the translation of planned, written texts, more and more research is being dedicated to translating speech-like texts (informal or spontaneous discourse or dialogue). To achieve high quality and natural translation of speech-like texts, the integration of context is needed, whether it is extra-linguistic (speaker identity, the interaction between speaker and interlocutor) or linguistic (coreference and stylistic phenomena linked to the spontaneous and informal nature of the texts). However, the integration of contextual information in MT systems remains limited in most current systems. In this paper, we present and critique three experiments for the integration of context into a MT system, each focusing on a different type of context and exploiting a different method: adaptation to speaker gender, cross-lingual pronoun prediction and the generation of tag questions from French into English. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,955
inproceedings | mirzapour-2017-finding | Finding Missing Categories in Incomplete Utterances | Eshkol-Taravella, Iris and Antoine, Jean-Yves | 6 | 2017 | Orl{\'e}ans, France | ATALA | https://aclanthology.org/2017.jeptalnrecital-recital.12/ | Mirzapour, Mehdi | Actes des 24{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles. 19es REncontres jeunes Chercheurs en Informatique pour le TAL (RECITAL 2017) | 149--160 | Finding Missing Categories in Incomplete Utterances This paper introduces an efficient algorithm (O(n^4)) for finding a missing category in an incomplete utterance by using unification technique as when learning categorial grammars, and dynamic programming as in Cocke{--}Younger{--}Kasami algorithm. Using syntax/semantic interface of categorial grammar, this work can be used for deriving possible semantic readings of an incomplete utterance. The paper illustrates the problem with running examples. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,966
inproceedings | cettolo-etal-2017-overview | Overview of the {IWSLT} 2017 Evaluation Campaign | Sakti, Sakriani and Utiyama, Masao | dec 14-15 | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.1/ | Cettolo, Mauro and Federico, Marcello and Bentivogli, Luisa and Niehues, Jan and St{\"u}ker, Sebastian and Sudoh, Katsuhito and Yoshino, Koichiro and Federmann, Christian | Proceedings of the 14th International Conference on Spoken Language Translation | 2--14 | The IWSLT 2017 evaluation campaign has organised three tasks. The Multilingual task, which is about training machine translation systems handling many-to-many language directions, including so-called zero-shot directions. The Dialogue task, which calls for the integration of context information in machine translation, in order to resolve anaphoric references that typically occur in human-human dialogue turns. And, finally, the Lecture task, which offers the challenge of automatically transcribing and translating real-life university lectures. Following the tradition of these reports, we will describe all tasks in detail and present the results of all runs submitted by their participants. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,984
inproceedings | espana-bonet-van-genabith-2017-going | Going beyond zero-shot {MT}: combining phonological, morphological and semantic factors. The {U}d{S}-{DFKI} System at {IWSLT} 2017 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.2/ | Espa{\~n}a-Bonet, Cristina and van Genabith, Josef | Proceedings of the 14th International Conference on Spoken Language Translation | 15--22 | This paper describes the UdS-DFKI participation to the multilingual task of the IWSLT Evaluation 2017. Our approach is based on factored multilingual neural translation systems following the small data and zero-shot training conditions. Our systems are designed to fully exploit multilinguality by including factors that increase the number of common elements among languages such as phonetic coarse encodings and synsets, besides shallow part-of-speech tags, stems and lemmas. Document level information is also considered by including the topic of every document. This approach improves a baseline without any additional factor for all the language pairs and even allows beyond-zero-shot translation. That is, the translation from unseen languages is possible thanks to the common elements {---}especially synsets in our models{---} among languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,985 |
inproceedings | przybysz-etal-2017-samsung | The {S}amsung and {U}niversity of {E}dinburgh`s submission to {IWSLT}17 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.3/ | Przybysz, Pawel and Chochowski, Marcin and Sennrich, Rico and Haddow, Barry and Birch, Alexandra | Proceedings of the 14th International Conference on Spoken Language Translation | 23--28 | This paper describes the joint submission of Samsung Research and Development, Warsaw, Poland and the University of Edinburgh team to the IWSLT MT task for TED talks. We took part in two translation directions, en-de and de-en. We also participated in the en-de and de-en lectures SLT task. The models have been trained with an attentional encoder-decoder model using the BiDeep model in Nematus. We filtered the training data to reduce the problem of noisy data, and we use back-translated monolingual data for domain-adaptation. We demonstrate the effectiveness of the different techniques that we applied via ablation studies. Our submission system outperforms our baseline, and last year`s University of Edinburgh submission to IWSLT, by more than 5 BLEU. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,986 |
inproceedings | bahar-etal-2017-rwth | The {RWTH} {A}achen Machine Translation Systems for {IWSLT} 2017 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.4/ | Bahar, Parnia and Rosendahl, Jan and Rossenbach, Nick and Ney, Hermann | Proceedings of the 14th International Conference on Spoken Language Translation | 29--34 | This work describes the Neural Machine Translation (NMT) system of the RWTH Aachen University developed for the English{--}German tracks of the evaluation campaign of the International Workshop on Spoken Language Translation (IWSLT) 2017. We use NMT systems which are augmented by state-of-the-art extensions. Furthermore, we experiment with techniques that include data filtering, a larger vocabulary, two extensions to the attention mechanism and domain adaptation. Using these methods, we can show considerable improvements over the respective baseline systems and our IWSLT 2016 submission. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,987
inproceedings | lakew-etal-2017-fbks | {FBK}`s Multilingual Neural Machine Translation System for {IWSLT} 2017 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.5/ | Lakew, Surafel M. and Lotito, Quintino F. and Turchi, Marco and Negri, Matteo and Federico, Marcello | Proceedings of the 14th International Conference on Spoken Language Translation | 35--41 | Neural Machine Translation has been shown to enable inference and cross-lingual knowledge transfer across multiple language directions using a single multilingual model. Focusing on this multilingual translation scenario, this work summarizes FBK`s participation in the IWSLT 2017 shared task. Our submissions rely on two multilingual systems trained on five languages (English, Dutch, German, Italian, and Romanian). The first one is a 20 language direction model, which handles all possible combinations of the five languages. The second multilingual system is trained only on 16 directions, leaving the others as zero-shot translation directions (i.e., representing a more complex inference task on language pairs not seen at training time). More specifically, our zero-shot directions are Dutch{--}German and Italian{--}Romanian (resulting in four language combinations). Despite the small amount of parallel data used for training these systems, the resulting multilingual models are effective, even in comparison with models trained separately for every language pair (i.e. in more favorable conditions). We compare and show the results of the two multilingual models against baseline single language pair systems. In particular, we focus on the four zero-shot directions and show how a multilingual model trained with small data can provide reasonable results. Furthermore, we investigate how pivoting (i.e., using a bridge/pivot language for inference in source{\textrightarrow}pivot{\textrightarrow}target translations) with a multilingual model can be an alternative to enable zero-shot translation in a low-resource setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,988
inproceedings | pham-etal-2017-kits | {KIT}`s Multilingual Neural Machine Translation systems for {IWSLT} 2017 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.6/ | Pham, Ngoc-Quan and Sperber, Matthias and Salesky, Elizabeth and Ha, Thanh-Le and Niehues, Jan and Waibel, Alexander | Proceedings of the 14th International Conference on Spoken Language Translation | 42--47 | In this paper, we present KIT`s multilingual neural machine translation (NMT) systems for the IWSLT 2017 evaluation campaign machine translation (MT) and spoken language translation (SLT) tasks. For our MT task submissions, we used our multi-task system, modified from a standard attentional neural machine translation framework, instead of building 20 individual NMT systems. We investigated different architectures as well as different data corpora in training such a multilingual system. We also suggested an effective adaptation scheme for multilingual systems which brings great improvements compared to monolingual systems. For the SLT track, in addition to a monolingual neural translation system used to generate correct punctuations and true cases of the data prior to training our multilingual system, we introduced a noise model in order to make our system more robust. Results show that our novel modifications improved our systems considerably on all tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,989 |
inproceedings | bei-zong-2017-towards | Towards better translation performance on spoken language | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.7/ | Bei, Chao and Zong, Hao | Proceedings of the 14th International Conference on Spoken Language Translation | 48--54 | In this paper, we describe GTCOM`s neural machine translation (NMT) systems for the International Workshop on Spoken Language Translation (IWSLT) 2017. We participated in the English-to-Chinese and Chinese-to-English tracks in the small data condition of the bilingual task and the zero-shot condition of the multilingual task. Our systems are based on the encoder-decoder architecture with an attention mechanism. We build byte pair encoding (BPE) models on the parallel data and the back-translated monolingual training data provided in the small data condition. Other techniques we explored in our system include two deep architectures, layer normalization, weight normalization and training models with annealing Adam, etc. The official scores of English-to-Chinese and Chinese-to-English are 28.13 and 21.35 on test set 2016 and 28.30 and 22.16 on test set 2017. The official scores on German-to-Dutch, Dutch-to-German, Italian-to-Romanian and Romanian-to-Italian are 19.59, 17.95, 18.62 and 20.39 respectively. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,990
inproceedings | dabre-etal-2017-kyoto | {K}yoto {U}niversity {MT} System Description for {IWSLT} 2017 | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.8/ | Dabre, Raj and Cromieres, Fabien and Kurohashi, Sadao | Proceedings of the 14th International Conference on Spoken Language Translation | 55--59 | We describe here our Machine Translation (MT) model and the results we obtained for the IWSLT 2017 Multilingual Shared Task. Motivated by zero-shot NMT [1], we trained a multilingual neural machine translation system by combining all the training data into a single collection and appending tokens to the source sentences to indicate the target language they should be translated to. We observed that even in a low-resource situation we were able to get translations whose quality surpasses that of phrase-based statistical machine translation by several BLEU points. The most surprising result we obtained was in the zero-shot setting for Dutch-German and Italian-Romanian, where we observed that despite using no parallel corpora between these language pairs, the NMT model was able to translate between these languages and the translations were either as good as or better (in terms of BLEU) than in the non-zero-resource setting. We also verify that NMT models that use feed-forward layers and self-attention instead of recurrent layers are extremely fast to train, which is useful in an NMT experimental setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,991
inproceedings | nguyen-etal-2017-2017 | The 2017 {KIT} {IWSLT} Speech-to-Text Systems for {E}nglish and {G}erman | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.9/ | Nguyen, Thai-Son and M{\"u}ller, Markus and Sperber, Matthias and Zenkel, Thomas and St{\"u}ker, Sebastian and Waibel, Alex | Proceedings of the 14th International Conference on Spoken Language Translation | 60--64 | This paper describes our German and English Speech-to-Text (STT) systems for the 2017 IWSLT evaluation campaign. The campaign focuses on the transcription of unsegmented lecture talks. Our setup includes systems using both the Janus and Kaldi frameworks. We combined the outputs using both ROVER [1] and confusion network combination (CNC) [2] to achieve a good overall performance. The individual subsystems are built by using different speaker-adaptive feature combination (e.g., lMEL with i-vector or bottleneck speaker vector), acoustic models (GMM or DNN) and speaker adaptation (MLLR or fMLLR). Decoding is performed in two stages, where the GMM and DNN systems are adapted on the combination of the first stage outputs using MLLR, and fMLLR. The combination setup produces a final hypothesis that has a significantly lower WER than any of the individual sub-systems. For the English lecture task, our best combination system has a WER of 8.3{\%} on the tst2015 development set while our other combinations gained 25.7{\%} WER for German lecture tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,992
inproceedings | sajjad-etal-2017-neural | Neural Machine Translation Training in a Multi-Domain Scenario | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.10/ | Sajjad, Hassan and Durrani, Nadir and Dalvi, Fahim and Belinkov, Yonatan and Vogel, Stephan | Proceedings of the 14th International Conference on Spoken Language Translation | 66--73 | In this paper, we explore alternative ways to train a neural machine translation system in a multi-domain scenario. We investigate data concatenation (with fine tuning), model stacking (multi-level fine tuning), data selection and multi-model ensemble. Our findings show that the best translation quality can be achieved by building an initial system on a concatenation of available out-of-domain data and then fine-tuning it on in-domain data. Model stacking works best when training begins with the furthest out-of-domain data and the model is incrementally fine-tuned with the next furthest domain and so on. Data selection did not give the best results, but can be considered as a decent compromise between training time and translation quality. A weighted ensemble of different individual models performed better than data selection. It is beneficial in a scenario when there is no time for fine-tuning an already trained model. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,993 |
inproceedings | cho-etal-2017-domain | Domain-independent Punctuation and Segmentation Insertion | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.11/ | Cho, Eunah and Niehues, Jan and Waibel, Alex | Proceedings of the 14th International Conference on Spoken Language Translation | 74--81 | Punctuation and segmentation are crucial in spoken language translation, as they have a strong impact on translation performance. However, the impact of rare or unknown words on the performance of punctuation and segmentation insertion has not been thoroughly studied. In this work, we simulate various degrees of domain-match in the testing scenario and investigate their impact on the punctuation insertion task. We explore three rare word generalizing schemes using part-of-speech (POS) tokens. Experiments show that generalizing rare and unknown words greatly improves the punctuation insertion performance, reaching up to 8.8 points of improvement in F-score when applied to the out-of-domain test scenario. We show that this improvement in punctuation quality has a positive impact on the performance of a downstream machine translation (MT) system, improving it by 2 BLEU points. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,994
inproceedings | hassan-etal-2017-synthetic | Synthetic Data for Neural Machine Translation of Spoken-Dialects | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.12/ | Hassan, Hany and Elaraby, Mostafa and Tawfik, Ahmed Y. | Proceedings of the 14th International Conference on Spoken Language Translation | 82--89 | In this paper, we introduce a novel approach to generate synthetic data for training Neural Machine Translation systems. The proposed approach supports language variants and dialects with very limited parallel training data. This is achieved using a seed data to project words from a closely-related resource-rich language to an under-resourced language variant via word embedding representations. The proposed approach is based on localized embedding projection of distributed representations which utilizes monolingual embeddings and approximate nearest neighbors queries to transform parallel data across language variants. Our approach is language independent and can be used to generate data for any variant of the source language such as slang or spoken dialect or even for a different language that is related to the source language. We report experimental results on Levantine to English translation using Neural Machine Translation. We show that the synthetic data can provide significant improvements over a very large scale system by more than 2.8 Bleu points and it can be used to provide a reliable translation system for a spoken dialect which does not have sufficient parallel data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,995 |
inproceedings | sperber-etal-2017-toward | Toward Robust Neural Machine Translation for Noisy Input Sequences | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.13/ | Sperber, Matthias and Niehues, Jan and Waibel, Alex | Proceedings of the 14th International Conference on Spoken Language Translation | 90--96 | Translating noisy inputs, such as the output of a speech recognizer, is a difficult but important challenge for neural machine translation. One way to increase the robustness of neural models is by introducing artificial noise to the training data. In this paper, we experiment with appropriate forms of such noise, exploring a middle ground between general-purpose regularizers and highly task-specific forms of noise induction. We show that with a simple generative noise model, moderate gains can be achieved in translating erroneous speech transcripts, provided that type and amount of noise are properly calibrated. The optimal amount of noise at training time is much smaller than the amount of noise in our test data, indicating limitations due to trainability issues. We note that unlike our baseline model, models trained on noisy data are able to generate outputs of proper length even for noisy inputs, while gradually reducing output length for higher amounts of noise, as might also be expected from a human translator. We discuss these findings in detail and give suggestions for future work. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,996
inproceedings | di-gangi-federico-2017-monolingual | Monolingual Embeddings for Low Resourced Neural Machine Translation | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.14/ | Di Gangi, Mattia Antonino and Federico, Marcello | Proceedings of the 14th International Conference on Spoken Language Translation | 97--104 | Neural machine translation (NMT) is the state of the art for machine translation, and it shows the best performance when there is a considerable amount of data available. When only little data exist for a language pair, the model cannot produce good representations for words, particularly for rare words. One common solution consists in reducing data sparsity by segmenting words into sub-words, in order to allow rare words to have shared representations with other words. Taking a different approach, in this paper we present a method to feed an NMT network with word embeddings trained on monolingual data, which are combined with the task-specific embeddings learned at training time. This method can leverage an embedding matrix with a huge number of words, which can therefore extend the word-level vocabulary. Our experiments on two language pairs show good results for the typical low-resourced data scenario (IWSLT in-domain dataset). Our consistent improvements over the baselines represent a positive proof about the possibility to leverage models pre-trained on monolingual data in NMT. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,997 |
inproceedings | ha-etal-2017-effective | Effective Strategies in Zero-Shot Neural Machine Translation | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.15/ | Ha, Thanh-Le and Niehues, Jan and Waibel, Alexander | Proceedings of the 14th International Conference on Spoken Language Translation | 105--112 | In this paper, we propose two strategies that can be applied to a multilingual neural machine translation system in order to better tackle zero-shot scenarios despite not having any parallel corpus. The experiments show that they are effective in terms of both performance and computing resources, especially in multilingual translation of unbalanced data in a real zero-resourced condition, where they alleviate the language bias problem. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,998
inproceedings | lakew-etal-2017-improving | Improving Zero-Shot Translation of Low-Resource Languages | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.16/ | Lakew, Surafel M. and Lotito, Quintino F. and Negri, Matteo and Turchi, Marco and Federico, Marcello | Proceedings of the 14th International Conference on Spoken Language Translation | 113--119 | Recent work on multilingual neural machine translation reported competitive performance with respect to bilingual models and surprisingly good performance even on (zero-shot) translation directions not observed at training time. We investigate here a zero-shot translation in a particularly low-resource multilingual setting. We propose a simple iterative training procedure that leverages a duality of translations directly generated by the system for the zero-shot directions. The translations produced by the system (sub-optimal since they contain mixed language from the shared vocabulary), are then used together with the original parallel data to feed and iteratively re-train the multilingual network. Over time, this allows the system to learn from its own generated and increasingly better output. Our approach shows to be effective in improving the two zero-shot directions of our multilingual model. In particular, we observed gains of about 9 BLEU points over a baseline multilingual model and up to 2.08 BLEU over a pivoting mechanism using two bilingual models. Further analysis shows that there is also a slight improvement in the non-zero-shot language directions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 57,999 |
inproceedings | qin-etal-2017-evolution | Evolution Strategy Based Automatic Tuning of Neural Machine Translation Systems | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.17/ | Qin, Hao and Shinozaki, Takahiro and Duh, Kevin | Proceedings of the 14th International Conference on Spoken Language Translation | 120--128 | Neural machine translation (NMT) systems have demonstrated promising results in recent years. However, non-trivial amounts of manual effort are required for tuning network architectures, training configurations, and pre-processing settings such as byte pair encoding (BPE). In this study, we propose an evolution strategy based automatic tuning method for NMT. In particular, we apply the covariance matrix adaptation-evolution strategy (CMA-ES), and investigate a Pareto-based multi-objective CMA-ES to optimize the translation performance and computational time jointly. Experimental results show that the proposed method automatically finds NMT systems that outperform the initial manual setting. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,000 |
inproceedings | durrani-dalvi-2017-continuous | Continuous Space Reordering Models for Phrase-based {MT} | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.18/ | Durrani, Nadir and Dalvi, Fahim | Proceedings of the 14th International Conference on Spoken Language Translation | 129--136 | Bilingual sequence models improve phrase-based translation and reordering by overcoming the phrasal independence assumption and handling long-range reordering. However, due to data sparsity, these models often fall back to very small context sizes. This problem has been previously addressed by learning sequences over generalized representations such as POS tags or word clusters. In this paper, we explore an alternative based on neural network models. More concretely, we train neuralized versions of lexicalized reordering [1] and the operation sequence models [2] using feed-forward neural networks. Our results show improvements of up to 0.6 and 0.5 BLEU points on top of the baseline German{\textrightarrow}English and English{\textrightarrow}German systems. We also observed improvements compared to the systems that used POS tags and word clusters to train these models. Because we modify the bilingual corpus to integrate reordering operations, this allows us to also train a sequence-to-sequence neural MT model having explicit reordering triggers. Our motivation was to directly enable reordering information in the encoder-decoder framework, which otherwise relies solely on the attention model to handle long-range reordering. We tried both coarser and fine-grained reordering operations. However, these experiments did not yield any improvements over the baseline neural MT systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,001
inproceedings | santamaria-axelrod-2017-data | Data Selection with Cluster-Based Language Difference Models and Cynical Selection | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.19/ | Santamar{\'i}a, Luc{\'i}a and Axelrod, Amittai | Proceedings of the 14th International Conference on Spoken Language Translation | 137--145 | We present and apply two methods for addressing the problem of selecting relevant training data out of a general pool for use in tasks such as machine translation. Building on existing work on class-based language difference models [1], we first introduce a cluster-based method that uses Brown clusters to condense the vocabulary of the corpora. Secondly, we implement the cynical data selection method [2], which incrementally constructs a training corpus to efficiently model the task corpus. Both the cluster-based and the cynical data selection approaches are used for the first time within a machine translation system, and we perform a head-to-head comparison. Our intrinsic evaluations show that both new methods outperform the standard Moore-Lewis approach (cross-entropy difference), in terms of better perplexity and OOV rates on in-domain data. The cynical approach converges much quicker, covering nearly all of the in-domain vocabulary with 84{\%} less data than the other methods. Furthermore, the new approaches can be used to select machine translation training data for training better systems. Our results confirm that class-based selection using Brown clusters is a viable alternative to POS-based class-based methods, and removes the reliance on a part-of-speech tagger. Additionally, we are able to validate the recently proposed cynical data selection method, showing that its performance in SMT models surpasses that of traditional cross-entropy difference methods and more closely matches the sentence length of the task corpus. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,002
inproceedings | lardilleux-lepage-2017-charcut | {CHARCUT}: Human-Targeted Character-Based {MT} Evaluation with Loose Differences | Sakti, Sakriani and Utiyama, Masao | dec # " 14-15" | 2017 | Tokyo, Japan | International Workshop on Spoken Language Translation | https://aclanthology.org/2017.iwslt-1.20/ | Lardilleux, Adrien and Lepage, Yves | Proceedings of the 14th International Conference on Spoken Language Translation | 146--153 | We present CHARCUT, a character-based machine translation evaluation metric derived from a human-targeted segment difference visualisation algorithm. It combines an iterative search for longest common substrings between the candidate and the reference translation with a simple length-based threshold, enabling loose differences that limit noisy character matches. Its main advantage is to produce scores that directly reflect human-readable string differences, making it a useful support tool for the manual analysis of MT output and its display to end users. Experiments on WMT16 metrics task data show that it is on par with the best {\textquotedblleft}un-trained{\textquotedblright} metrics in terms of correlation with human judgement, well above BLEU and TER baselines, on both system and segment tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,003 |
inproceedings | krishna-etal-2016-compound | Compound Type Identification in {S}anskrit: What Roles do the Corpus and Grammar Play? | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3701/ | Krishna, Amrith and Satuluri, Pavankumar and Sharma, Shubham and Kumar, Apurv and Goyal, Pawan | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 1--10 | We propose a classification framework for semantic type identification of compounds in Sanskrit. We broadly classify the compounds into four different classes, namely \textit{Avyay{\={i}}bh{\={a}}va}, \textit{Tatpuruṣa}, \textit{Bahuvr{\={i}}hi} and \textit{Dvandva}. Our classification is based on the traditional classification system followed by the ancient grammar treatise \textit{Aṣṭ{\={a}}dhy{\={a}}y{\={i}}}, proposed by P{\={a}}ṇini 25 centuries back. We construct an elaborate feature space for our system by combining conditional rules from the grammar \textit{Aṣṭ{\={a}}dhy{\={a}}y{\={i}}}, semantic relations between the compound components from a lexical database \textit{Amarakoṣa} and linguistic structures from the data using Adaptor Grammars. Our in-depth analysis of the feature space highlights the inadequacy of \textit{Aṣṭ{\={a}}dhy{\={a}}y{\={i}}}, a generative grammar, in classifying the data samples. Our experimental results validate the effectiveness of using lexical databases as suggested by Amba Kulkarni and Anil Kumar, and put forward a new research direction by introducing linguistic patterns obtained from Adaptor Grammars for effective identification of compound type. We utilise an ensemble-based approach, specifically designed for handling skewed datasets, and, experimenting with various classification methods, achieve an overall accuracy of 0.77 using random forest classifiers. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,780
inproceedings | kyaw-thu-etal-2016-comparison | Comparison of Grapheme-to-Phoneme Conversion Methods on a {M}yanmar Pronunciation Dictionary | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3702/ | Kyaw Thu, Ye and Pa Pa, Win and Sagisaka, Yoshinori and Iwahashi, Naoto | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 11--22 | Grapheme-to-Phoneme (G2P) conversion is the task of predicting the pronunciation of a word given its graphemic or written form. It is a highly important part of both automatic speech recognition (ASR) and text-to-speech (TTS) systems. In this paper, we evaluate seven G2P conversion approaches: Adaptive Regularization of Weight Vectors (AROW) based structured learning (S-AROW), Conditional Random Field (CRF), Joint-sequence models (JSM), phrase-based statistical machine translation (PBSMT), Recurrent Neural Network (RNN), Support Vector Machine (SVM) based point-wise classification, Weighted Finite-state Transducers (WFST) on a manually tagged Myanmar phoneme dictionary. The G2P bootstrapping experimental results were measured with both automatic phoneme error rate (PER) calculation and also manual checking in terms of voiced/unvoiced, tones, consonant and vowel errors. The result shows that CRF, PBSMT and WFST approaches are the best performing methods for G2P conversion on Myanmar language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,781 |
inproceedings | gridach-2016-character | Character-Aware Neural Networks for {A}rabic Named Entity Recognition for Social Media | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3703/ | Gridach, Mourad | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 23--32 | Named Entity Recognition (NER) is the task of classifying or labelling atomic elements in the text into categories such as Person, Location or Organisation. For the Arabic language, recognizing named entities is a challenging task because of the complexity and the unique characteristics of this language. In addition, most of the previous work focuses on Modern Standard Arabic (MSA); however, recognizing named entities in social media is becoming more interesting these days. Dialectal Arabic (DA) and MSA are both used in social media, which makes the task even more challenging. Most state-of-the-art Arabic NER systems rely heavily on handcrafted engineering features and lexicons, which is time consuming. In this paper, we introduce a novel neural network architecture which benefits from both character- and word-level representations automatically, by using a combination of bidirectional LSTM and Conditional Random Field (CRF), eliminating the need for most feature engineering. Moreover, our model relies on unsupervised word representations learned from unannotated corpora. Experimental results demonstrate that our model achieves state-of-the-art performance on a publicly available benchmark for Arabic NER for social media, surpassing the previous system by a large margin. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,782
inproceedings | das-etal-2016-development | Development of a {B}engali parser by cross-lingual transfer from {H}indi | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3704/ | Das, Ayan and Saha, Agnivo and Sarkar, Sudeshna | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 33--43 | In recent years there has been a lot of interest in cross-lingual parsing for developing treebanks for languages with small or no annotated treebanks. In this paper, we explore the development of a cross-lingual transfer parser from Hindi to Bengali using a Hindi parser and a Hindi-Bengali parallel corpus. A parser is trained and applied to the Hindi sentences of the parallel corpus and the parse trees are projected to construct probable parse trees of the corresponding Bengali sentences. Only about 14{\%} of these trees are complete (transferred trees contain all the target sentence words) and they are used to construct a Bengali parser. We relax the criteria of completeness to consider well-formed trees (43{\%} of the trees) leading to an improvement. We note that the words often do not have a one-to-one mapping in the two languages but considering sentences at the chunk-level results in better correspondence between the two languages. Based on this we present a method to use chunking as a preprocessing step and do the transfer on the chunk trees. We find that about 72{\%} of the projected parse trees of Bengali are now well-formed. The resultant parser achieves significant improvement in both Unlabeled Attachment Score (UAS) as well as Labeled Attachment Score (LAS) over the baseline word-level transferred parser. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,783 |
inproceedings | kadupitiya-etal-2016-sinhala | {S}inhala Short Sentence Similarity Calculation using Corpus-Based and Knowledge-Based Similarity Measures | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3705/ | Kadupitiya, Jcs and Ranathunga, Surangika and Dias, Gihan | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 44--53 | Currently, corpus-based similarity, string-based similarity, and knowledge-based similarity techniques are used to compare short phrases. However, no work has been conducted on the similarity of phrases in the Sinhala language. In this paper, we present a hybrid methodology to compute the similarity between two Sinhala sentences using a Semantic Similarity Measurement technique (corpus-based similarity measurement plus knowledge-based similarity measurement) that makes use of word order information. Since Sinhala WordNet is still under construction, we used lexical resources in performing this semantic similarity calculation. Evaluation using 4000 sentence pairs yielded an average MSE of 0.145 and a Pearson correlation factor of 0.832. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,784
inproceedings | jawaid-etal-2016-enriching | Enriching Source for {E}nglish-to-{U}rdu Machine Translation | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3706/ | Jawaid, Bushra and Kamran, Amir and Bojar, Ond{\v{r}}ej | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 54--63 | This paper focuses on the generation of case markers for free word order languages that use case markers as phrasal clitics for marking the relationship between the dependent-noun and its head. The generation of such clitics becomes essential task especially when translating from fixed word order languages where syntactic relations are identified by the positions of the dependent-nouns. To address the problem of missing markers on source-side, artificial markers are added in source to improve alignments with its target counterparts. Up to 1 BLEU point increase is observed over the baseline on different test sets for English-to-Urdu. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,785 |
inproceedings | behera-etal-2016-imagact4all | The {IMAGACT}4{ALL} Ontology of Animated Images: Implications for Theoretical and Machine Translation of Action Verbs from {E}nglish-{I}ndian Languages | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3707/ | Behera, Pitambar and Muzaffar, Sharmin and Ojha, Atul Ku. and Jha, Girish | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 64--73 | Action verbs are one of the frequently occurring linguistic elements in any given natural language as the speakers use them during every linguistic intercourse. However, each language expresses action verbs in its own inherently unique manner by categorization. One verb can refer to several interpretations of actions and one action can be expressed by more than one verb. The inter-language and intra-language variations create ambiguity for the translation of languages from the source language to target language with respect to action verbs. IMAGACT is a corpus-based ontological platform of action verbs translated from prototypic animated images explained in English and Italian as meta-languages. In this paper, we are presenting the issues and challenges in translating action verbs of Indian languages as target and English as source language by observing the animated images. Among the ten Indian languages which have been annotated so far on the platform are Sanskrit, Hindi, Urdu, Odia (Oriya), Bengali, Manipuri, Tamil, Assamese, Magahi and Marathi. Out of them, Manipuri belongs to the Sino-Tibetan, Tamil comes off the Dravidian and the rest owe their genesis to the Indo-Aryan language family. One of the issues is that the one-word morphological English verbs are translated into most of the Indian languages as verbs having more than one-word form; for instance as in the case of conjunct, compound, serial verbs and so on. We are further presenting a cross-lingual comparison of action verbs among Indian languages. In addition, we are also dealing with the issues in disambiguating animated images by the L1 native speakers using competence-based judgements and the theoretical and machine translation implications they bear. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,786
inproceedings | lapitan-etal-2016-crowdsourcing | Crowdsourcing-based Annotation of Emotions in {F}ilipino and {E}nglish Tweets | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3708/ | Lapitan, Fermin Roberto and Batista-Navarro, Riza Theresa and Albacea, Eliezer | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 74--82 | The automatic analysis of emotions conveyed in social media content, e.g., tweets, has many beneficial applications. In the Philippines, one of the most disaster-prone countries in the world, such methods could potentially enable first responders to make timely decisions despite the risk of data deluge. However, recognising emotions expressed in Philippine-generated tweets, which are mostly written in Filipino, English or a mix of both, is a non-trivial task. In order to facilitate the development of natural language processing (NLP) methods that will automate such type of analysis, we have built a corpus of tweets whose predominant emotions have been manually annotated by means of crowdsourcing. Defining measures ensuring that only high-quality annotations were retained, we have produced a gold standard corpus of 1,146 emotion-labelled Filipino and English tweets. We validate the value of this manually produced resource by demonstrating that an automatic emotion-prediction method based on the use of a publicly available word-emotion association lexicon was unable to reproduce the labels assigned via crowdsourcing. While we are planning to make a few extensions to the corpus in the near future, its current version has been made publicly available in order to foster the development of emotion analysis methods based on advanced Filipino and English NLP. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,787
inproceedings | phani-etal-2016-sentiment | Sentiment Analysis of Tweets in Three {I}ndian Languages | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3710/ | Phani, Shanta and Lahiri, Shibamouli and Biswas, Arindam | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 93--102 | In this paper, we describe the results of sentiment analysis on tweets in three Indian languages {--} Bengali, Hindi, and Tamil. We used the recently released SAIL dataset (Patra et al., 2015), and obtained state-of-the-art results in all three languages. Our features are simple, robust, scalable, and language-independent. Further, we show that these simple features provide better results than more complex and language-specific features, in two separate classification tasks. Detailed feature analysis and error analysis have been reported, along with learning curves for Hindi and Bengali. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,788 |
inproceedings | behera-etal-2016-dealing | Dealing with Linguistic Divergences in {E}nglish-{B}hojpuri Machine Translation | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3711/ | Behera, Pitambar and Mourya, Neha and Pandey, Vandana | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 103--113 | In Machine Translation, divergence is one of the major barriers which plays a deciding role in determining the efficiency of the system at hand. Translation divergences originate when there are structural discrepancies between the input and the output languages. They can be of various types based on the issues being addressed, such as linguistic, cultural, communicative and so on. Owing to the fact that two languages owe their origin to different language families, linguistic divergences emerge. The present study attempts at categorizing different types of linguistic divergences: the lexical-semantic and syntactic. In addition, it also helps identify and resolve the divergent linguistic features between English as source language and Bhojpuri as target language pair. Dorr`s theoretical framework (1994, 1994a) has been followed in the classification and resolution procedure. Furthermore, so far as the methodology is concerned, we have adhered to Dorr`s Lexical Conceptual Structure for the resolution of divergences. This research will prove to be beneficial for developing efficient MT systems if the mentioned factors are incorporated considering the inherent structural constraints between source and target languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,789
inproceedings | nishioka-akasegawa-2016-development | The development of a web corpus of {H}indi language and corpus-based comparative studies to {J}apanese | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3712/ | Nishioka, Miki and Akasegawa, Shiro | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 114--123 | In this paper, we discuss our creation of a web corpus of spoken Hindi (COSH), one of the Indo-Aryan languages spoken mainly in the Indian subcontinent. We also point out notable problems we`ve encountered in the web corpus and the special concordancer. After observing the kind of technical problems we encountered, especially regarding annotation tagged by Shiva Reddy`s tagger, we argue how they can be solved when using COSH for linguistic studies. Finally, we mention the kinds of linguistic research that we non-native speakers of Hindi can do using the corpus, especially in pragmatics and semantics, and from a comparative viewpoint to Japanese. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,790 |
inproceedings | abdul-hameed-etal-2016-automatic | Automatic Creation of a Sentence Aligned {S}inhala-{T}amil Parallel Corpus | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3713/ | Abdul Hameed, Riyafa and Pathirennehelage, Nadeeshani and Ihalapathirana, Anusha and Ziyad Mohamed, Maryam and Ranathunga, Surangika and Jayasena, Sanath and Dias, Gihan and Fernando, Sandareka | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 124--132 | A sentence aligned parallel corpus is an important prerequisite in statistical machine translation. However, manual creation of such a parallel corpus is time consuming, and requires experts fluent in both languages. Automatic creation of a sentence aligned parallel corpus using parallel text is the solution to this problem. In this paper, we present the first ever empirical evaluation carried out to identify the best method to automatically create a sentence aligned Sinhala-Tamil parallel corpus. Annual reports from Sri Lankan government institutions were used as the parallel text for aligning. Despite both Sinhala and Tamil being under-resourced languages, we were able to achieve an F-score value of 0.791 using a hybrid approach that makes use of a bilingual dictionary. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,791 |
inproceedings | chen-etal-2016-clustering-based | Clustering-based Phonetic Projection in Mismatched Crowdsourcing Channels for Low-resourced {ASR} | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3714/ | Chen, Wenda and Hasegawa-Johnson, Mark and Chen, Nancy and Jyothi, Preethi and Varshney, Lav | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 133--141 | Acquiring labeled speech for low-resource languages is a difficult task in the absence of native speakers of the language. One solution to this problem involves collecting speech transcriptions from crowd workers who are foreign or non-native speakers of a given target language. From these mismatched transcriptions, one can derive probabilistic phone transcriptions that are defined over the set of all target language phones using a noisy channel model. This paper extends prior work on deriving probabilistic transcriptions (PTs) from mismatched transcriptions by 1) modelling multilingual channels and 2) introducing a clustering-based phonetic mapping technique to improve the quality of PTs. Mismatched crowdsourcing for multilingual channels has certain properties of projection mapping, e.g., it can be interpreted as a clustering based on singular value decomposition of the segment alignments. To this end, we explore the use of distinctive feature weights, lexical tone confusions, and a two-step clustering algorithm to learn projections of phoneme segments from mismatched multilingual transcriber languages to the target language. We evaluate our techniques using mismatched transcriptions for Cantonese speech acquired from native English and Mandarin speakers. We observe a 5-9{\%} relative reduction in phone error rate for the predicted Cantonese phone transcriptions using our proposed techniques compared with the previous PT method. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,792
inproceedings | hellwig-2016-improving | Improving the Morphological Analysis of Classical {S}anskrit | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3715/ | Hellwig, Oliver | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 142--151 | The paper describes a new tagset for the morphological disambiguation of Sanskrit, and compares the accuracy of two machine learning methods (Conditional Random Fields, deep recurrent neural networks) for this task, with a special focus on how to model the lexicographic information. It reports a significant improvement over previously published results. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,793 |
inproceedings | bhattacharya-etal-2016-query | Query Translation for Cross-Language Information Retrieval using Multilingual Word Clusters | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3716/ | Bhattacharya, Paheli and Goyal, Pawan and Sarkar, Sudeshna | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 152--162 | In Cross-Language Information Retrieval, finding the appropriate translation of the source language query has always been a difficult problem to solve. We propose a technique towards solving this problem with the help of multilingual word clusters obtained from multilingual word embeddings. We use word embeddings of the languages projected to a common vector space on which a community-detection algorithm is applied to find clusters such that words that represent the same concept from different languages fall in the same group. We utilize these multilingual word clusters to perform query translation for Cross-Language Information Retrieval for three languages - English, Hindi and Bengali. We have experimented with the FIRE 2012 and Wikipedia datasets and have shown improvements over several standard methods like dictionary-based method, a transliteration-based model and Google Translate. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,794 |
inproceedings | das-etal-2016-study | A study of attention-based neural machine translation model on {I}ndian languages | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3717/ | Das, Ayan and Yerra, Pranay and Kumar, Ken and Sarkar, Sudeshna | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 163--172 | Neural machine translation (NMT) models have recently been shown to be very successful in machine translation (MT). The use of LSTMs in machine translation has significantly improved the translation performance for longer sentences by being able to capture the context and long range correlations of the sentences in their hidden layers. The attention model based NMT system (Bahdanau et al., 2014) has become the state-of-the-art, performing equal or better than other statistical MT approaches. In this paper, we wish to study the performance of the attention-model based NMT system (Bahdanau et al., 2014) on the Indian language pair, Hindi and Bengali, and analyse the types of errors that occur when the languages are morphologically rich and there is a scarcity of a large parallel training corpus. We then carry out certain post-processing heuristic steps to improve the quality of the translated statements and suggest further measures that can be carried out. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,795
inproceedings | fernando-etal-2016-comprehensive | Comprehensive Part-Of-Speech Tag Set and {SVM} based {POS} Tagger for {S}inhala | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3718/ | Fernando, Sandareka and Ranathunga, Surangika and Jayasena, Sanath and Dias, Gihan | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 173--182 | This paper presents a new comprehensive multi-level Part-Of-Speech tag set and a Support Vector Machine based Part-Of-Speech tagger for the Sinhala language. The currently available tag set for Sinhala has two limitations: the unavailability of tags to represent some word classes and the lack of tags to capture inflection based grammatical variations of words. The new tag set, presented in this paper, overcomes both of these limitations. The accuracy of available Sinhala Part-Of-Speech taggers, which are based on Hidden Markov Models, still falls far behind the state of the art. Our Support Vector Machine based tagger achieved an overall accuracy of 84.68{\%} with 59.86{\%} accuracy for unknown words and 87.12{\%} for known words, when the test set contains 10{\%} of unknown words. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,796
inproceedings | bakliwal-etal-2016-align | Align Me: A framework to generate Parallel Corpus Using {OCR}s and Bilingual Dictionaries | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3719/ | Bakliwal, Priyam and V V, Devadath and Jawahar, C V | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 183--187 | Multilingual language processing tasks like statistical machine translation and cross language information retrieval rely mainly on the availability of accurate parallel corpora. Manual construction of such a corpus can be extremely expensive and time consuming. In this paper we present a simple yet efficient method to generate a huge amount of reasonably accurate parallel corpus with minimal user effort. We utilize the availability of a large number of English books and their corresponding translations in other languages to build parallel corpus. Optical Character Recognition (OCR) systems are used to digitize such books. We propose a robust dictionary-based parallel corpus generation system for alignment of multilingual text at different levels of granularity (sentence, paragraphs, etc.). We show the performance of our proposed method on a manually aligned dataset of 300 Hindi-English sentences and 100 English-Malayalam sentences. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,797
inproceedings | qiu-zhu-2016-learning | Learning {I}ndonesian-{C}hinese Lexicon with Bilingual Word Embedding Models and Monolingual Signals | Wu, Dekai and Bhattacharyya, Pushpak | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3720/ | Qiu, Xinying and Zhu, Gangqin | Proceedings of the 6th Workshop on South and Southeast {A}sian Natural Language Processing ({WSSANLP}2016) | 188--193 | We present research on learning an Indonesian-Chinese bilingual lexicon using monolingual word embeddings and bilingual seed lexicons to build a shared bilingual word embedding space. We make the first attempt to examine the impact of different monolingual signals for the choice of seed lexicons on the model performance. We found that although monolingual signals alone do not seem to outperform signals covering all words, the significant improvement for learning word translation of the same signal types may suggest that linguistic features possess value for further study in distinguishing the semantic margins of the shared word embedding space. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,798
inproceedings | apresjan-2016-information | Information structure, syntax, and pragmatics and other factors in resolving scope ambiguity | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3801/ | Apresjan, Valentina | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 1--6 | The paper is a corpus study of the factors involved in disambiguating potential scope ambiguity in sentences with negation and a universal quantifier, such as {\textquotedblleft}I don`t want to talk to all these people{\textquotedblright}, which can alternatively mean {\textquoteleft}I don`t want to talk to any of these people' and {\textquoteleft}I don`t want to talk to some of these people'. The relevant factors are demonstrated to be largely different from those involved in disambiguating lexical polysemy. They include the syntactic function of the constituent containing the {\textquotedblleft}all{\textquotedblright} quantifier (subject, direct complement, adjunct), as well as the depth of its embedding; the status of the main predicate and the {\textquotedblleft}all{\textquotedblright} constituent with respect to the information structure of the utterance (topic vs. focus, given vs. new information); and pragmatic implicatures pertaining to the situations described in the utterances. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,801
inproceedings | iomdin-2016-microsyntactic | Microsyntactic Phenomena as a Computational Linguistics Issue | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3803/ | Iomdin, Leonid | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 8--17 | Microsyntactic linguistic units, such as syntactic idioms and non-standard syntactic constructions, are poorly represented in linguistic resources, mostly because the former are elements occupying an intermediate position between the lexicon and the grammar and the latter are too specific to be routinely tackled by general grammars. Consequently, many such units produce substantial gaps in systems intended to solve sophisticated computational linguistics tasks, such as parsing, deep semantic analysis, question answering, machine translation, or text generation. They also present obstacles for applying advanced techniques to these tasks, such as machine learning. The paper discusses an approach aimed at bridging such gaps, focusing on the development of monolingual and multilingual corpora where microsyntactic units are to be tagged. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,803 |
inproceedings | lopatkova-kettnerova-2016-alternations | {A}lternations: From Lexicon to Grammar And Back Again | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3804/ | Lopatkov{\'a}, Mark{\'e}ta and Kettnerov{\'a}, V{\'a}clava | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 18--27 | An excellent example of a phenomenon bridging a lexicon and a grammar is provided by grammaticalized alternations (e.g., passivization, reflexivity, and reciprocity): these alternations represent productive grammatical processes which are, however, lexically determined. While grammaticalized alternations keep lexical meaning of verbs unchanged, they are usually characterized by various changes in their morphosyntactic structure. In this contribution, we demonstrate on the example of reciprocity and its representation in the valency lexicon of Czech verbs, VALLEX how a linguistic description of complex (and still systemic) changes characteristic of grammaticalized alternations can benefit from an integration of grammatical rules into a valency lexicon. In contrast to other types of grammaticalized alternations, reciprocity in Czech has received relatively little attention although it closely interacts with various linguistic phenomena (e.g., with light verbs, diatheses, and reflexivity). | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,804 |
inproceedings | mcshane-nirenburg-2016-extra | Extra-Specific Multiword Expressions for Language-Endowed Intelligent Agents | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3805/ | McShane, Marjorie and Nirenburg, Sergei | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 28--37 | Language-endowed intelligent agents benefit from leveraging lexical knowledge falling at different points along a spectrum of compositionality. This means that robust computational lexicons should include not only the compositional expectations of argument-taking words, but also non-compositional collocations (idioms), semi-compositional collocations that might be difficult for an agent to interpret (e.g., standard metaphors), and even collocations that could be compositionally analyzed but are so frequently encountered that recording their meaning increases the efficiency of interpretation. In this paper we argue that yet another type of string-to-meaning mapping can also be useful to intelligent agents: remembered semantic analyses of actual text inputs. These can be viewed as super-specific multi-word expressions whose recorded interpretations mimic a person`s memories of knowledge previously learned from language input. These differ from typical annotated corpora in two ways. First, they provide a full, context-sensitive semantic interpretation rather than select features. Second, they are formulated in the ontologically-grounded metalanguage used in a particular agent environment, meaning that the interpretations contribute to the dynamically evolving cognitive capabilities of agents configured in that environment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,805
inproceedings | nivre-2016-universal | {U}niversal {D}ependencies: A Cross-Linguistic Perspective on Grammar and Lexicon | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3806/ | Nivre, Joakim | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 38--40 | Universal Dependencies is an initiative to develop cross-linguistically consistent grammatical annotation for many languages, with the goal of facilitating multilingual parser development, cross-lingual learning and parsing research from a language typology perspective. It assumes a dependency-based approach to syntax and a lexicalist approach to morphology, which together entail that the fundamental units of grammatical annotation are words. Words have properties captured by morphological annotation and enter into relations captured by syntactic annotation. Moreover, priority is given to relations between lexical content words, as opposed to grammatical function words. In this position paper, I discuss how this approach allows us to capture similarities and differences across typologically diverse languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,806 |
inproceedings | pustejovsky-etal-2016-development | The Development of Multimodal Lexical Resources | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3807/ | Pustejovsky, James and Do, Tuan and Kehat, Gitit and Krishnaswamy, Nikhil | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 41--47 | Human communication is a multimodal activity, involving not only speech and written expressions, but intonation, images, gestures, visual clues, and the interpretation of actions through perception. In this paper, we describe the design of a multimodal lexicon that is able to accommodate the diverse modalities that present themselves in NLP applications. We have been developing a multimodal semantic representation, VoxML, that integrates the encoding of semantic, visual, gestural, and action-based features associated with linguistic expressions. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,807 |
inproceedings | boguslavsky-2016-non | On the Non-canonical Valency Filling | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3808/ | Boguslavsky, Igor | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 51--60 | Valency slot filling is a semantic glue, which brings together the meanings of words. As regards the position of an argument in the dependency structure with respect to its predicate, there exist three types of valency filling: active (canonical), passive, and discontinuous. Of these, the first type is studied much better than the other two. As a rule, canonical actants are unambiguously marked in the syntactic structure, and each actant corresponds to a unique syntactic position. Linguistic information on which syntactic function an actant might have (subject, direct or indirect object), what its morphological form should be and which prepositions or conjunctions it requires, can be given in the lexicon in the form of government patterns, subcategorization frames, or similar data structures. We concentrate on non-canonical cases of valency filling in Russian, which are characteristic of non-verbal parts of speech, such as adverbs, adjectives, and particles, in the first place. They are more difficult to handle than canonical ones, because the position of the actant in the tree is governed by more complicated rules. A valency may be filled by expressions occupying different syntactic positions, and a syntactic position may accept expressions filling different valencies of the same word. We show how these phenomena can be processed in a semantic analyzer. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,808 |
inproceedings | danlos-etal-2016-improvement | Improvement of {V}erb{N}et-like resources by frame typing | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3809/ | Danlos, Laurence and Constant, Matthieu and Barque, Lucie | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 61--70 | Verbenet is a French lexicon developed by {\textquotedblleft}translation{\textquotedblright} of its English counterpart {---} VerbNet (Kipper-Schuler, 2005){---}and treatment of the specificities of French syntax (Pradet et al., 2014; Danlos et al., 2016). One difficulty encountered in its development springs from the fact that the list of (potentially numerous) frames has no internal organization. This paper proposes a type system for frames that shows whether two frames are variants of a given alternation. Frame typing facilitates coherence checking of the resource in a {\textquotedblleft}virtuous circle{\textquotedblright}. We present the principles underlying a program we developed and used to automatically type frames in VerbeNet. We also show that our system is portable to other languages. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,809 |
inproceedings | fucikova-etal-2016-enriching | Enriching a Valency Lexicon by Deverbative Nouns | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3810/ | Fu{\v{c}}{\'i}kov{\'a}, Eva and Haji{\v{c}}, Jan and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 71--80 | We present an attempt to automatically identify Czech deverbative nouns using several methods that use large corpora as well as existing lexical resources. The motivation for the task is to extend a verbal valency (i.e., predicate-argument) lexicon by adding nouns that share the valency properties with the base verb, assuming their properties can be derived (even if not trivially) from the underlying verb by deterministic grammatical rules. At the same time, even in inflective languages, not all deverbatives are simply created from their underlying base verb by regular lexical derivation processes. We have thus developed hybrid techniques that use both large parallel corpora and several standard lexical resources. Thanks to the use of parallel corpora, the resulting sets contain also synonyms, which the lexical derivation rules cannot get. For evaluation, we have manually created a small, 100-verb gold data since no such dataset was initially available for Czech. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,810 |
inproceedings | iordachioaia-etal-2016-grammar | The Grammar of {E}nglish Deverbal Compounds and their Meaning | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3811/ | Iord{\u{a}}chioaia, Gianina and van der Plas, Lonneke and Jagfeld, Glorianna | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 81--91 | We present an interdisciplinary study on the interaction between the interpretation of noun-noun deverbal compounds (DCs; e.g., task assignment) and the morphosyntactic properties of their deverbal heads in English. Underlying hypotheses from theoretical linguistics are tested with tools and resources from computational linguistics. We start with Grimshaw`s (1990) insight that deverbal nouns are ambiguous between argument-supporting nominal (ASN) readings, which inherit verbal arguments (e.g., the assignment of the tasks), and the less verbal and more lexicalized Result Nominal and Simple Event readings (e.g., a two-page assignment). Following Grimshaw, our hypothesis is that the former will realize object arguments in DCs, while the latter will receive a wider range of interpretations like root compounds headed by non-derived nouns (e.g., chocolate box). Evidence from a large corpus assisted by machine learning techniques confirms this hypothesis, by showing that, besides other features, the realization of internal arguments by deverbal heads outside compounds (i.e., the most distinctive ASN-property in Grimshaw 1990) is a good predictor for an object interpretation of non-heads in DCs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,811 |
inproceedings | kahane-lareau-2016-encoding | Encoding a syntactic dictionary into a super granular unification grammar | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3812/ | Kahane, Sylvain and Lareau, Fran{\c{c}}ois | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 92--101 | We show how to turn a large-scale syntactic dictionary into a dependency-based unification grammar where each piece of lexical information calls a separate rule, yielding a super granular grammar. Subcategorization, raising and control verbs, auxiliaries and copula, passivization, and tough-movement are discussed. We focus on the semantics-syntax interface and offer a new perspective on syntactic structure. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,812 |
inproceedings | morimoto-etal-2016-identification | Identification of Flexible Multiword Expressions with the Help of Dependency Structure Annotation | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3813/ | Morimoto, Ayaka and Yoshimoto, Akifumi and Kato, Akihiko and Shindo, Hiroyuki and Matsumoto, Yuji | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 102--109 | This paper presents our ongoing work on compilation of English multi-word expression (MWE) lexicon. We are especially interested in collecting flexible MWEs, in which some other components can intervene the expression such as {\textquotedblleft}a number of{\textquotedblright} vs {\textquotedblleft}a large number of{\textquotedblright} where a modifier of {\textquotedblleft}number{\textquotedblright} can be placed in the expression and inherit the original meaning. We first collect possible candidates of flexible English MWEs from the web, and annotate all of their occurrences in the Wall Street Journal portion of Ontonotes corpus. We make use of word dependency structure information of the sentences converted from the phrase structure annotation. This process enables semi-automatic annotation of MWEs in the corpus and simultaneously produces the internal and external dependency representation of flexible MWEs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,813
inproceedings | nedoluzhko-2016-new | A new look at possessive reflexivization: A comparative study between {C}zech and {R}ussian | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3814/ | Nedoluzhko, Anna | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 110--119 | The paper presents a contrastive description of reflexive possessive pronouns {\textquotedblleft}sv{\r{u}}j{\textquotedblright} in Czech and {\textquotedblleft}svoj{\textquotedblright} in Russian. The research concerns syntactic, semantic and pragmatic aspects. With our analysis, we shed a new light on the already investigated issue, which comes from a detailed comparison of the phenomenon of possessive reflexivization in two typologically and genetically similar languages. We show that whereas in Czech, the possessive reflexivization is mostly limited to syntactic functions and does not go beyond the grammar, in Russian it gets additional semantic meanings and moves substantially towards the lexicon. The obtained knowledge allows us to explain heretofore unclear marginal uses of reflexives in each language. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,814
inproceedings | rosen-2016-modeling | Modeling non-standard language | Haji{\v{c}}ov{\'a}, Eva and Boguslavsky, Igor | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3815/ | Rosen, Alexandr | Proceedings of the Workshop on Grammar and Lexicon: interactions and interfaces ({G}ram{L}ex) | 120--131 | A specific language as used by different speakers and in different situations has a number of more or less distant varieties. Extending the notion of non-standard language to varieties that do not fit an explicitly or implicitly assumed norm or pattern, we look for methods and tools that could be applied to this domain. The needs start from the theoretical side: categories usable for the analysis of non-standard language are not readily available, and continue to methods and tools required for its detection and diagnostics. A general discussion of issues related to non-standard language is followed by two case studies. The first study presents a taxonomy of morphosyntactic categories as an attempt to analyse non-standard forms produced by non-native learners of Czech. The second study focusses on the role of a rule-based grammar and lexicon in the process of building and using a parsebank. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,815 |
inproceedings | plank-2016-processing | Processing non-canonical or noisy text: fortuitous data to the rescue | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3901/ | Plank, Barbara | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 1 | Real world data differs radically from the benchmark corpora we use in NLP, resulting in large performance drops. The reason for this problem is obvious: NLP models are trained on limited samples from canonical varieties considered standard. However, there are many dimensions, e.g., sociodemographic, language, genre, sentence type, etc. on which texts can differ from the standard. The solution is not obvious: we cannot control for all factors, and it is not clear how to best go beyond the current practice of training on homogeneous data from a single domain and language. In this talk, I review the notion of canonicity, and how it shapes our community`s approach to language. I argue for the use of fortuitous data. Fortuitous data is data out there that just waits to be harvested. It includes data which is in plain sight, but is often neglected, and more distant sources like behavioral data, which first need to be refined. They provide additional contexts and a myriad of opportunities to build more adaptive language technology, some of which I will explore in this talk. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,817 |
inproceedings | chang-2016-entity | From Entity Linking to Question Answering {--} Recent Progress on Semantic Grounding Tasks | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3902/ | Chang, Ming-Wei | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 2 | Entity linking and semantic parsing have been shown to be crucial to important applications such as question answering and document understanding. These tasks often require structured learning models, which make predictions on multiple interdependent variables. In this talk, I argue that carefully designed structured learning algorithms play a central role in entity linking and semantic parsing tasks. In particular, I will present several new structured learning models for entity linking, which jointly detect mentions and disambiguate entities as well as capture non-textual information. I will then show how to use a staged search procedure to building a state-of-the-art knowledge base question answering system. Finally, if time permits, I will discuss different supervision protocols for training semantic parsers and the value of labeling semantic parses. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,818 |
inproceedings | torisawa-2016-disaana | {DISAANA} and {D}-{SUMM}: Large-scale Real Time {NLP} Systems for Analyzing Disaster Related Reports in Tweets | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3903/ | Torisawa, Kentaro | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 3 | This talk presents two NLP systems that were developed for helping disaster victims and rescue workers in the aftermath of large-scale disasters. DISAANA provides answers to questions such as {\textquotedblleft}What is in short supply in Tokyo?{\textquotedblright} and displays locations related to each answer on a map. D-SUMM automatically summarizes a large number of disaster related reports concerning a specified area and helps rescue workers to understand disaster situations from a macro perspective. Both systems are publicly available as Web services. In the aftermath of the 2016 Kumamoto Earthquake (M7.0), the Japanese government actually used DISAANA to analyze the situation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,819 |
inproceedings | ljubesic-fiser-2016-private | Private or Corporate? Predicting User Types on {T}witter | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3904/ | Ljube{\v{s}}i{\'c}, Nikola and Fi{\v{s}}er, Darja | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 4--12 | In this paper we present a series of experiments on discriminating between private and corporate accounts on Twitter. We define features based on Twitter metadata, morphosyntactic tags and surface forms, showing that the simple bag-of-words model achieves single best results that can, however, be improved by building a weighted soft ensemble of classifiers based on each feature type. Investigating the time and language dependence of each feature type delivers quite unexpected results showing that features based on metadata are neither time- nor language-insensitive as the way the two user groups use the social network varies heavily through time and space. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,820
inproceedings | martinez-alonso-etal-2016-noisy | From Noisy Questions to {M}inecraft Texts: Annotation Challenges in Extreme Syntax Scenario | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3905/ | Mart{\'i}nez Alonso, H{\'e}ctor and Seddah, Djam{\'e} and Sagot, Beno{\^i}t | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 13--23 | User-generated content presents many challenges for its automatic processing. While many of them do come from out-of-vocabulary effects, others spawn from different linguistic phenomena such as unusual syntax. In this work we present a French three-domain data set made up of question headlines from a cooking forum, game chat logs and associated forums from two popular online games (MINECRAFT {\&} LEAGUE OF LEGENDS). We chose these domains because they encompass different degrees of lexical and syntactic compliance with canonical language. We conduct an automatic and manual evaluation of the difficulties of processing these domains for part-of-speech prediction, and introduce a pilot study to determine whether dependency analysis lends itself well to annotate these data. We also discuss the development cost of our data set. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,821 |
inproceedings | asakura-etal-2016-disaster | Disaster Analysis using User-Generated Weather Report | Han, Bo and Ritter, Alan and Derczynski, Leon and Xu, Wei and Baldwin, Tim | dec | 2016 | Osaka, Japan | The COLING 2016 Organizing Committee | https://aclanthology.org/W16-3906/ | Asakura, Yasunobu and Hangyo, Masatsugu and Komachi, Mamoru | Proceedings of the 2nd Workshop on Noisy User-generated Text ({WNUT}) | 24--32 | Information extraction from user-generated text has gained much attention with the growth of the Web. Disaster analysis using information from social media provides valuable, real-time, geolocation information for helping people caught up in these disasters. However, it is not convenient to analyze texts posted on social media because disaster keywords match any texts that contain words. For collecting posts about a disaster from social media, we need to develop a classifier to filter posts irrelevant to disasters. Moreover, because of the nature of social media, we can take advantage of posts that come with GPS information. However, a post does not always refer to an event occurring at the place where it has been posted. Therefore, we propose a new task of classifying whether a flood disaster occurred, in addition to predicting the geolocation of events from user-generated text. We report the annotation of the flood disaster corpus and develop a classifier to demonstrate the use of this corpus for disaster analysis. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 58,822