entry_type | citation_key | title | editor | month | year | address | publisher | url | author | booktitle | pages | abstract | journal | volume | doi | n | wer | uas | language | isbn | recall | number | a | b | c | k | f1 | r | mci | p | sd | female | m | food | f | note | __index_level_0__ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
inproceedings | paul-2017-feature | Feature Selection as Causal Inference: Experiments with Text Classification | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1018/ | Paul, Michael J. | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 163--172 | This paper proposes a matching technique for learning causal associations between word features and class labels in document classification. The goal is to identify more meaningful and generalizable features than with only correlational approaches. Experiments with sentiment classification show that the proposed method identifies interpretable word associations with sentiment and improves classification performance in a majority of cases. The proposed feature selection method is particularly effective when applied to out-of-domain data. | null | null | 10.18653/v1/K17-1018 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,851 |
inproceedings | peng-etal-2017-joint | A Joint Model for Semantic Sequences: Frames, Entities, Sentiments | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1019/ | Peng, Haoruo and Chaturvedi, Snigdha and Roth, Dan | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 173--183 | Understanding stories {--} sequences of events {--} is a crucial yet challenging natural language understanding task. These events typically carry multiple aspects of semantics including actions, entities and emotions. Not only does each individual aspect contribute to the meaning of the story, so does the interaction among these aspects. Building on this intuition, we propose to jointly model important aspects of semantic knowledge {--} frames, entities and sentiments {--} via a semantic language model. We achieve this by first representing these aspects' semantic units at an appropriate level of abstraction and then using the resulting vector representations for each semantic aspect to learn a joint representation via a neural language model. We show that the joint semantic language model is of high quality and can generate better semantic sequences than models that operate on the word level. We further demonstrate that our joint model can be applied to story cloze test and shallow discourse parsing tasks with improved performance and that each semantic aspect contributes to the model. | null | null | 10.18653/v1/K17-1019 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,852 |
inproceedings | ruzsics-samardzic-2017-neural | Neural Sequence-to-sequence Learning of Internal Word Structure | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1020/ | Ruzsics, Tatyana and Samard{\v{z}}i{\'c}, Tanja | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 184--194 | Learning internal word structure has recently been recognized as an important step in various multilingual processing tasks and in theoretical language comparison. In this paper, we present a neural encoder-decoder model for learning canonical morphological segmentation. Our model combines character-level sequence-to-sequence transformation with a language model over canonical segments. We obtain up to 4{\%} improvement over a strong character-level encoder-decoder baseline for three languages. Our model outperforms the previous state-of-the-art for two languages, while eliminating the need for external resources such as large dictionaries. Finally, by comparing the performance of encoder-decoder and classical statistical machine translation systems trained with and without corpus counts, we show that including corpus counts is beneficial to both approaches. | null | null | 10.18653/v1/K17-1020 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,853 |
inproceedings | collins-etal-2017-supervised | A Supervised Approach to Extractive Summarisation of Scientific Papers | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1021/ | Collins, Ed and Augenstein, Isabelle and Riedel, Sebastian | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 195--205 | Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. In this paper, we introduce a new dataset for summarisation of computer science publications by exploiting a large resource of author provided summaries and show straightforward ways of extending it further. We develop models on the dataset making use of both neural sentence encoding and traditionally used summarisation features and show that models which encode sentences as well as their local and global context perform best, significantly outperforming well-established baseline methods. | null | null | 10.18653/v1/K17-1021 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,854 |
inproceedings | bhatia-etal-2017-automatic | An Automatic Approach for Document-level Topic Model Evaluation | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1022/ | Bhatia, Shraey and Lau, Jey Han and Baldwin, Timothy | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 206--215 | Topic models jointly learn topics and document-level topic distribution. Extrinsic evaluation of topic models tends to focus exclusively on topic-level evaluation, e.g. by assessing the coherence of topics. We demonstrate that there can be large discrepancies between topic- and document-level model quality, and that basing model evaluation on topic-level analysis can be highly misleading. We propose a method for automatically predicting topic model quality based on analysis of document-level topic allocations, and provide empirical evidence for its robustness. | null | null | 10.18653/v1/K17-1022 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,855 |
inproceedings | chen-etal-2017-robust | Robust Coreference Resolution and Entity Linking on Dialogues: Character Identification on {TV} Show Transcripts | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1023/ | Chen, Henry Y. and Zhou, Ethan and Choi, Jinho D. | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 216--225 | This paper presents a novel approach to character identification, that is an entity linking task that maps mentions to characters in dialogues from TV show transcripts. We first augment and correct several cases of annotation errors in an existing corpus so the corpus is clearer and cleaner for statistical learning. We also introduce the agglomerative convolutional neural network that takes groups of features and learns mention and mention-pair embeddings for coreference resolution. We then propose another neural model that employs the embeddings learned and creates cluster embeddings for entity linking. Our coreference resolution model shows comparable results to other state-of-the-art systems. Our entity linking model significantly outperforms the previous work, showing the F1 score of 86.76{\%} and the accuracy of 95.30{\%} for character identification. | null | null | 10.18653/v1/K17-1023 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,856 |
inproceedings | joty-etal-2017-cross | Cross-language Learning with Adversarial Neural Networks | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1024/ | Joty, Shafiq and Nakov, Preslav and M{\`a}rquez, Llu{\'i}s and Jaradat, Israa | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 226--237 | We address the problem of cross-language adaptation for question-question similarity reranking in community question answering, with the objective to port a system trained on one input language to another input language given labeled training data for the first language and only unlabeled data for the second language. In particular, we propose to use adversarial training of neural networks to learn high-level features that are discriminative for the main learning task, and at the same time are invariant across the input languages. The evaluation results show sizable improvements for our cross-language adversarial neural network (CLANN) model over a strong non-adversarial system. | null | null | 10.18653/v1/K17-1024 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,857 |
inproceedings | renduchintala-etal-2017-knowledge | Knowledge Tracing in Sequential Learning of Inflected Vocabulary | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1025/ | Renduchintala, Adithya and Koehn, Philipp and Eisner, Jason | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 238--247 | We present a feature-rich knowledge tracing method that captures a student's acquisition and retention of knowledge during a foreign language phrase learning task. We model the student's behavior as making predictions under a log-linear model, and adopt a neural gating mechanism to model how the student updates their log-linear parameters in response to feedback. The gating mechanism allows the model to learn complex patterns of retention and acquisition for each feature, while the log-linear parameterization results in an interpretable knowledge state. We collect human data and evaluate several versions of the model. | null | null | 10.18653/v1/K17-1025 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,858 |
inproceedings | saparov-etal-2017-probabilistic | A Probabilistic Generative Grammar for Semantic Parsing | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1026/ | Saparov, Abulhair and Saraswat, Vijay and Mitchell, Tom | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 248--259 | We present a generative model of natural language sentences and demonstrate its application to semantic parsing. In the generative process, a logical form is sampled from a prior, and conditioned on this logical form, a grammar probabilistically generates the output sentence. Grammar induction using MCMC is applied to learn the grammar given a set of labeled sentences with corresponding logical forms. We develop a semantic parser that finds the logical form with the highest posterior probability exactly. We obtain strong results on the GeoQuery dataset and achieve state-of-the-art F1 on Jobs. | null | null | 10.18653/v1/K17-1026 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,859 |
inproceedings | nicosia-moschitti-2017-learning | Learning Contextual Embeddings for Structural Semantic Similarity using Categorical Information | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1027/ | Nicosia, Massimo and Moschitti, Alessandro | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 260--270 | Tree kernels (TKs) and neural networks are two effective approaches for automatic feature engineering. In this paper, we combine them by modeling context word similarity in semantic TKs. This way, the latter can operate subtree matching by applying neural-based similarity on tree lexical nodes. We study how to learn representations for the words in context such that TKs can exploit more focused information. We found that neural embeddings produced by current methods do not provide a suitable contextual similarity. Thus, we define a new approach based on a Siamese Network, which produces word representations while learning a binary text similarity. We set the latter considering examples in the same category as similar. The experiments on question and sentiment classification show that our semantic TK highly improves previous results. | null | null | 10.18653/v1/K17-1027 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,860 |
inproceedings | weissenborn-etal-2017-making | Making Neural {QA} as Simple as Possible but not Simpler | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1028/ | Weissenborn, Dirk and Wiese, Georg and Seiffe, Laura | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 271--280 | Recent development of large-scale question answering (QA) datasets triggered a substantial amount of research into end-to-end neural architectures for QA. Increasingly complex systems have been conceived without comparison to simpler neural baseline systems that would justify their complexity. In this work, we propose a simple heuristic that guides the development of neural baseline systems for the extractive QA task. We find that there are two ingredients necessary for building a high-performing neural QA system: first, the awareness of question words while processing the context and second, a composition function that goes beyond simple bag-of-words modeling, such as recurrent neural networks. Our results show that FastQA, a system that meets these two requirements, can achieve very competitive performance compared with existing models. We argue that this surprising finding puts results of previous systems and the complexity of recent QA datasets into perspective. | null | null | 10.18653/v1/K17-1028 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,861 |
inproceedings | wiese-etal-2017-neural | Neural Domain Adaptation for Biomedical Question Answering | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1029/ | Wiese, Georg and Weissenborn, Dirk and Neves, Mariana | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 281--289 | Factoid question answering (QA) has recently benefited from the development of deep learning (DL) systems. Neural network models outperform traditional approaches in domains where large datasets exist, such as SQuAD (ca. 100,000 questions) for Wikipedia articles. However, these systems have not yet been applied to QA in more specific domains, such as biomedicine, because datasets are generally too small to train a DL system from scratch. For example, the BioASQ dataset for biomedical QA comprises less than 900 factoid (single answer) and list (multiple answers) QA instances. In this work, we adapt a neural QA system trained on a large open-domain dataset (SQuAD, source) to a biomedical dataset (BioASQ, target) by employing various transfer learning techniques. Our network architecture is based on a state-of-the-art QA system, extended with biomedical word embeddings and a novel mechanism to answer list questions. In contrast to existing biomedical QA systems, our system does not rely on domain-specific ontologies, parsers or entity taggers, which are expensive to create. Despite this fact, our systems achieve state-of-the-art results on factoid questions and competitive results on list questions. | null | null | 10.18653/v1/K17-1029 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,862 |
inproceedings | hulden-2017-phoneme | A phoneme clustering algorithm based on the obligatory contour principle | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1030/ | Hulden, Mans | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 290--300 | This paper explores a divisive hierarchical clustering algorithm based on the well-known Obligatory Contour Principle in phonology. The purpose is twofold: to see if such an algorithm could be used for unsupervised classification of phonemes or graphemes in corpora, and to investigate whether this purported universal constraint really holds for several classes of phonological distinctive features. The algorithm achieves very high accuracies in an unsupervised setting of inferring a consonant-vowel distinction, and also has a strong tendency to detect coronal phonemes in an unsupervised fashion. Remaining classes, however, do not correspond as neatly to phonological distinctive feature splits. While the results offer only mixed support for a universal Obligatory Contour Principle, the algorithm can be very useful for many NLP tasks due to the high accuracy in revealing consonant/vowel/coronal distinctions. | null | null | 10.18653/v1/K17-1030 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,863 |
inproceedings | li-shah-2017-learning | Learning Stock Market Sentiment Lexicon and Sentiment-Oriented Word Vector from {S}tock{T}wits | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1031/ | Li, Quanzhi and Shah, Sameena | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 301--310 | Previous studies have shown that investor sentiment indicators can predict stock market change. A domain-specific sentiment lexicon and sentiment-oriented word embedding model would help the sentiment analysis in financial domain and stock market. In this paper, we present a new approach to learning stock market lexicon from StockTwits, a popular financial social network for investors to share ideas. It learns word polarity by predicting message sentiment, using a neural network. The sentiment-oriented word embeddings are learned from tens of millions of StockTwits posts, and this is the first study presenting sentiment-oriented word embeddings for stock market. The experiments of predicting investor sentiment show that our lexicon outperformed other lexicons built by the state-of-the-art methods, and the sentiment-oriented word vector was much better than the general word embeddings. | null | null | 10.18653/v1/K17-1031 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,864 |
inproceedings | raj-etal-2017-learning | Learning local and global contexts using a convolutional recurrent network model for relation classification in biomedical text | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1032/ | Raj, Desh and Sahu, Sunil and Anand, Ashish | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 311--321 | The task of relation classification in the biomedical domain is complex due to the presence of samples obtained from heterogeneous sources such as research articles, discharge summaries, or electronic health records. It is also a constraint for classifiers which employ manual feature engineering. In this paper, we propose a convolutional recurrent neural network (CRNN) architecture that combines RNNs and CNNs in sequence to solve this problem. The rationale behind our approach is that CNNs can effectively identify coarse-grained local features in a sentence, while RNNs are more suited for long-term dependencies. We compare our CRNN model with several baselines on two biomedical datasets, namely the i2b2-2010 clinical relation extraction challenge dataset, and the SemEval-2013 DDI extraction dataset. We also evaluate an attentive pooling technique and report its performance in comparison with the conventional max pooling method. Our results indicate that the proposed model achieves state-of-the-art performance on both datasets. | null | null | 10.18653/v1/K17-1032 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,865 |
inproceedings | sirts-etal-2017-idea | Idea density for predicting {A}lzheimer's disease from transcribed speech | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1033/ | Sirts, Kairit and Piguet, Olivier and Johnson, Mark | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 322--332 | Idea Density (ID) measures the rate at which ideas or elementary predications are expressed in an utterance or in a text. Lower ID is found to be associated with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al., 1996; Engelman et al., 2010). ID has been used in two different versions: propositional idea density (PID) counts the expressed ideas and can be applied to any text while semantic idea density (SID) counts pre-defined information content units and is naturally more applicable to normative domains, such as picture description tasks. In this paper, we develop DEPID, a novel dependency-based method for computing PID, and its version DEPID-R that enables to exclude repeating ideas{---}a feature characteristic to AD speech. We conduct the first comparison of automatically extracted PID and SID in the diagnostic classification task on two different AD datasets covering both closed-topic and free-recall domains. While SID performs better on the normative dataset, adding PID leads to a small but significant improvement (+1.7 F-score). On the free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in F-score) but adding the features derived from the word embedding clustering underlying the automatic SID increases the results considerably, leading to an F-score of 84.8. | null | null | 10.18653/v1/K17-1033 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,866 |
inproceedings | levy-etal-2017-zero | Zero-Shot Relation Extraction via Reading Comprehension | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1034/ | Levy, Omer and Seo, Minjoon and Choi, Eunsol and Zettlemoyer, Luke | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 333--342 | We show that relation extraction can be reduced to answering simple reading comprehension questions, by associating one or more natural-language questions with each relation slot. This reduction has several advantages: we can (1) learn relation-extraction models by extending recent neural reading-comprehension techniques, (2) build very large training sets for those models by combining relation-specific crowd-sourced questions with distant supervision, and even (3) do zero-shot learning by extracting new relation types that are only specified at test-time, for which we have no labeled training examples. Experiments on a Wikipedia slot-filling task demonstrate that the approach can generalize to new questions for known relation types with high accuracy, and that zero-shot generalization to unseen relation types is possible, at lower accuracy levels, setting the bar for future work on this task. | null | null | 10.18653/v1/K17-1034 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,867 |
inproceedings | zhang-etal-2017-covert | The Covert Helps Parse the Overt | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1035/ | Zhang, Xun and Sun, Weiwei and Wan, Xiaojun | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 343--353 | This paper is concerned with whether deep syntactic information can help surface parsing, with a particular focus on empty categories. We design new algorithms to produce dependency trees in which empty elements are allowed, and evaluate the impact of information about empty category on parsing overt elements. Such information is helpful to reduce the approximation error in a structured parsing model, but increases the search space for inference and accordingly the estimation error. To deal with structure-based overfitting, we propose to integrate disambiguation models with and without empty elements, and perform structure regularization via joint decoding. Experiments on English and Chinese TreeBanks with different parsing models indicate that incorporating empty elements consistently improves surface parsing. | null | null | 10.18653/v1/K17-1035 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,868 |
inproceedings | schlechtweg-etal-2017-german | {G}erman in Flux: Detecting Metaphoric Change via Word Entropy | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1036/ | Schlechtweg, Dominik and Eckmann, Stefanie and Santus, Enrico and Schulte im Walde, Sabine and Hole, Daniel | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 354--367 | This paper explores the information-theoretic measure entropy to detect metaphoric change, transferring ideas from hypernym detection to research on language change. We build the first diachronic test set for German as a standard for metaphoric change annotation. Our model is unsupervised, language-independent and generalizable to other processes of semantic change. | null | null | 10.18653/v1/K17-1036 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,869 |
inproceedings | alishahi-etal-2017-encoding | Encoding of phonology in a recurrent neural model of grounded speech | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1037/ | Alishahi, Afra and Barking, Marie and Chrupa{\l}a, Grzegorz | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 368--378 | We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find out that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics. | null | null | 10.18653/v1/K17-1037 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,870 |
inproceedings | duong-etal-2017-multilingual-semantic | Multilingual Semantic Parsing And Code-Switching | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1038/ | Duong, Long and Afshar, Hadi and Estival, Dominique and Pink, Glen and Cohen, Philip and Johnson, Mark | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 379--389 | Extending semantic parsing systems to new domains and languages is a highly expensive, time-consuming process, so making effective use of existing resources is critical. In this paper, we describe a transfer learning method using crosslingual word embeddings in a sequence-to-sequence model. On the NLmaps corpus, our approach achieves state-of-the-art accuracy of 85.7{\%} for English. Most importantly, we observed a consistent improvement for German compared with several baseline domain adaptation techniques. As a by-product of this approach, our models that are trained on a combination of English and German utterances perform reasonably well on code-switching utterances which contain a mixture of English and German, even though the training data does not contain any such. As far as we know, this is the first study of code-switching in semantic parsing. We manually constructed the set of code-switching test utterances for the NLmaps corpus and achieve 78.3{\%} accuracy on this dataset. | null | null | 10.18653/v1/K17-1038 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,871 |
inproceedings | le-titov-2017-optimizing | Optimizing Differentiable Relaxations of Coreference Evaluation Metrics | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1039/ | Le, Phong and Titov, Ivan | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 390--399 | Coreference evaluation metrics are hard to optimize directly as they are non-differentiable functions, not easily decomposable into elementary decisions. Consequently, most approaches optimize objectives only indirectly related to the end goal, resulting in suboptimal performance. Instead, we propose a differentiable relaxation that lends itself to gradient-based optimisation, thus bypassing the need for reinforcement learning or heuristic modification of cross-entropy. We show that by modifying the training objective of a competitive neural coreference system, we obtain a substantial gain in performance. This suggests that our approach can be regarded as a viable alternative to using reinforcement learning or more computationally expensive imitation learning. | null | null | 10.18653/v1/K17-1039 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,872 |
inproceedings | ziser-reichart-2017-neural | Neural Structural Correspondence Learning for Domain Adaptation | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1040/ | Ziser, Yftah and Reichart, Roi | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 400--410 | We introduce a neural network model that marries together ideas from two prominent strands of research on domain adaptation through representation learning: structural correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural networks (NNs). Our model is a three-layer NN that learns to encode the non-pivot features of an input example into a low dimensional representation, so that the existence of pivot features (features that are prominent in both domains and convey useful information for the NLP task) in the example can be decoded from that representation. The low-dimensional representation is then employed in a learning algorithm for the task. Moreover, we show how to inject pre-trained word embeddings into our model in order to improve generalization across examples with similar pivot features. We experiment with the task of cross-domain sentiment classification on 16 domain pairs and show substantial improvements over strong baselines. | null | null | 10.18653/v1/K17-1040 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,873 |
inproceedings | marcheggiani-etal-2017-simple | A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based Semantic Role Labeling | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1041/ | Marcheggiani, Diego and Frolov, Anton and Titov, Ivan | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 411--420 | We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets. | null | null | 10.18653/v1/K17-1041 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,874 |
inproceedings | inoue-etal-2017-joint | Joint Prediction of Morphosyntactic Categories for Fine-Grained {A}rabic Part-of-Speech Tagging Exploiting Tag Dictionary Information | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1042/ | Inoue, Go and Shindo, Hiroyuki and Matsumoto, Yuji | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 421--431 | Part-of-speech (POS) tagging for morphologically rich languages such as Arabic is a challenging problem because of their enormous tag sets. One reason for this is that in the tagging scheme for such languages, a complete POS tag is formed by combining tags from multiple tag sets defined for each morphosyntactic category. Previous approaches in Arabic POS tagging applied one model for each morphosyntactic tagging task, without utilizing shared information between the tasks. In this paper, we propose an approach that utilizes this information by jointly modeling multiple morphosyntactic tagging tasks with a multi-task learning framework. We also propose a method of incorporating tag dictionary information into our neural models by combining word representations with representations of the sets of possible tags. Our experiments showed that the joint model with tag dictionary information results in an accuracy of 91.38{\%} on the Penn Arabic Treebank data set, with an absolute improvement of 2.11{\%} over the current state-of-the-art tagger. | null | null | 10.18653/v1/K17-1042 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,875 |
inproceedings | samih-etal-2017-learning | Learning from Relatives: Unified Dialectal {A}rabic Segmentation | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1043/ | Samih, Younes and Eldesouki, Mohamed and Attia, Mohammed and Darwish, Kareem and Abdelali, Ahmed and Mubarak, Hamdy and Kallmeyer, Laura | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 432--441 | Arabic dialects do not just share a common koin{\'e}, but there are shared pan-dialectal linguistic phenomena that allow computational models for dialects to learn from each other. In this paper we build a unified segmentation model where the training data for different dialects are combined and a single model is trained. The model yields higher accuracies than dialect-specific models, eliminating the need for dialect identification before segmentation. We also measure the degree of relatedness between four major Arabic dialects by testing how a segmentation model trained on one dialect performs on the other dialects. We found that linguistic relatedness is contingent with geographical proximity. In our experiments we use SVM-based ranking and bi-LSTM-CRF sequence labeling. | null | null | 10.18653/v1/K17-1043 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,876 |
inproceedings | tran-nguyen-2017-natural | Natural Language Generation for Spoken Dialogue System using {RNN} Encoder-Decoder Networks | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1044/ | Tran, Van-Khanh and Nguyen, Le-Minh | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 442--451 | Natural language generation (NLG) is a critical component in a spoken dialogue system. This paper presents a Recurrent Neural Network based Encoder-Decoder architecture, in which an LSTM-based decoder is introduced to select, aggregate semantic elements produced by an attention mechanism over the input elements, and to produce the required utterances. The proposed generator can be jointly trained both sentence planning and surface realization to produce natural language sentences. The proposed model was extensively evaluated on four different NLG datasets. The experimental results showed that the proposed generators not only consistently outperform the previous methods across all the NLG domains but also show an ability to generalize from a new, unseen domain and learn from multi-domain datasets. | null | null | 10.18653/v1/K17-1044 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,877 |
inproceedings | yasunaga-etal-2017-graph | Graph-based Neural Multi-Document Summarization | Levy, Roger and Specia, Lucia | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-1045/ | Yasunaga, Michihiro and Zhang, Rui and Meelu, Kshitijh and Pareek, Ayush and Srinivasan, Krishnan and Radev, Dragomir | Proceedings of the 21st Conference on Computational Natural Language Learning ({C}o{NLL} 2017) | 452--462 | We propose a neural multi-document summarization system that incorporates sentence relation graphs. We employ a Graph Convolutional Network (GCN) on the relation graphs, with sentence embeddings obtained from Recurrent Neural Networks as input node features. Through multiple layer-wise propagation, the GCN generates high-level hidden sentence features for salience estimation. We then use a greedy heuristic to extract salient sentences that avoid redundancy. In our experiments on DUC 2004, we consider three types of sentence relation graphs and demonstrate the advantage of combining sentence relations in graphs with the representation power of deep neural networks. Our model improves upon other traditional graph-based extractive approaches and the vanilla GRU sequence model with no graph, and it achieves competitive results against other state-of-the-art multi-document summarization systems. | null | null | 10.18653/v1/K17-1045 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,878 |
inproceedings | zeman-etal-2017-conll | {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3001/ | Zeman, Daniel and Popel, Martin and Straka, Milan and Haji{\v{c}}, Jan and Nivre, Joakim and Ginter, Filip and Luotolahti, Juhani and Pyysalo, Sampo and Petrov, Slav and Potthast, Martin and Tyers, Francis and Badmaeva, Elena and Gokirmak, Memduh and Nedoluzhko, Anna and Cinkov{\'a}, Silvie and Haji{\v{c}} jr., Jan and Hlav{\'a}{\v{c}}ov{\'a}, Jaroslava and Kettnerov{\'a}, V{\'a}clava and Ure{\v{s}}ov{\'a}, Zde{\v{n}}ka and Kanerva, Jenna and Ojala, Stina and Missil{\"a}, Anna and Manning, Christopher D. and Schuster, Sebastian and Reddy, Siva and Taji, Dima and Habash, Nizar and Leung, Herman and de Marneffe, Marie-Catherine and Sanguinetti, Manuela and Simi, Maria and Kanayama, Hiroshi and de Paiva, Valeria and Droganova, Kira and Mart{\'i}nez Alonso, H{\'e}ctor and {\c{C}}{\"o}ltekin, {\c{C}}a{\u{g}}r{\i} and Sulubacak, Umut and Uszkoreit, Hans and Macketanz, Vivien and Burchardt, Aljoscha and Harris, Kim and Marheinecke, Katrin and Rehm, Georg and Kayadelen, Tolga and Attia, Mohammed and Elkahky, Ali and Yu, Zhuoran and Pitler, Emily and Lertpradit, Saran and Mandl, Michael and Kirchner, Jesse and Alcalde, Hector Fernandez and Strnadov{\'a}, Jana and Banerjee, Esha and Manurung, Ruli and Stella, Antonio and Shimada, Atsuko and Kwak, Sookyoung and Mendon{\c{c}}a, Gustavo and Lando, Tatiana and Nitisaroj, Rattima and Li, Josie | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 1--19 | The Conference on Computational Natural Language Learning (CoNLL) features a shared task, in which participants train and test their learning systems on the same data sets. In 2017, the task was devoted to learning dependency parsers for a large number of languages, in a real-world setting without any gold-standard annotation on input. All test sets followed a unified annotation scheme, namely that of Universal Dependencies. In this paper, we define the task and evaluation methodology, describe how the data sets were prepared, report and analyze the main results, and provide a brief categorization of the different approaches of the participating systems. | null | null | 10.18653/v1/K17-3001 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,893 |
inproceedings | dozat-etal-2017-stanfords | {S}tanford's Graph-based Neural Dependency Parser at the {C}o{NLL} 2017 Shared Task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3002/ | Dozat, Timothy and Qi, Peng and Manning, Christopher D. | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 20--30 | This paper describes the neural dependency parser submitted by Stanford to the CoNLL 2017 Shared Task on parsing Universal Dependencies. Our system uses relatively simple LSTM networks to produce part of speech tags and labeled dependency parses from segmented and tokenized sequences of words. In order to address the rare word problem that abounds in languages with complex morphology, we include a character-based word representation that uses an LSTM to produce embeddings from sequences of characters. Our system was ranked first according to all five relevant metrics for the system: UPOS tagging (93.09{\%}), XPOS tagging (82.27{\%}), unlabeled attachment score (81.30{\%}), labeled attachment score (76.30{\%}), and content word labeled attachment score (72.57{\%}). | null | null | 10.18653/v1/K17-3002 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,894 |
inproceedings | shi-etal-2017-combining | Combining Global Models for Parsing {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3003/ | Shi, Tianze and Wu, Felix G. and Chen, Xilun and Cheng, Yao | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 31--39 | We describe our entry, C2L2, to the CoNLL 2017 shared task on parsing Universal Dependencies from raw text. Our system features an ensemble of three global parsing paradigms, one graph-based and two transition-based. Each model leverages character-level bi-directional LSTMs as lexical feature extractors to encode morphological information. Though relying on baseline tokenizers and focusing only on parsing, our system ranked second in the official end-to-end evaluation with a macro-average of 75.00 LAS F1 score over 81 test treebanks. In addition, we had the top average performance on the four surprise languages and on the small treebank subset. | null | null | 10.18653/v1/K17-3003 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,895 |
inproceedings | bjorkelund-etal-2017-ims | {IMS} at the {C}o{NLL} 2017 {UD} Shared Task: {CRF}s and Perceptrons Meet Neural Networks | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3004/ | Bj{\"o}rkelund, Anders and Falenska, Agnieszka and Yu, Xiang and Kuhn, Jonas | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 40--51 | This paper presents the IMS contribution to the CoNLL 2017 Shared Task. In the preprocessing step we employed a CRF POS/morphological tagger and a neural tagger predicting supertags. On some languages, we also applied word segmentation with the CRF tagger and sentence segmentation with a perceptron-based parser. For parsing we took an ensemble approach by blending multiple instances of three parsers with very different architectures. Our system achieved the third place overall and the second place for the surprise languages. | null | null | 10.18653/v1/K17-3004 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,896 |
inproceedings | che-etal-2017-hit | The {HIT}-{SCIR} System for End-to-End Parsing of {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3005/ | Che, Wanxiang and Guo, Jiang and Wang, Yuxuan and Zheng, Bo and Zhao, Huaipeng and Liu, Yang and Teng, Dechuan and Liu, Ting | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 52--62 | This paper describes our system (HIT-SCIR) for the CoNLL 2017 shared task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system includes three pipelined components: \textit{tokenization}, \textit{Part-of-Speech} (POS) \textit{tagging} and \textit{dependency parsing}. We use character-based bidirectional long short-term memory (LSTM) networks for both tokenization and POS tagging. Afterwards, we employ a list-based transition-based algorithm for general non-projective parsing and present an improved Stack-LSTM-based architecture for representing each transition state and making predictions. Furthermore, to parse low/zero-resource languages and cross-domain data, we use a model transfer approach to make effective use of existing resources. We demonstrate substantial gains against the UDPipe baseline, with an average improvement of 3.76{\%} in LAS of all languages. And finally, we rank the 4th place on the official test sets. | null | null | 10.18653/v1/K17-3005 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,897 |
inproceedings | lim-poibeau-2017-system | A System for Multilingual Dependency Parsing based on Bidirectional {LSTM} Feature Representations | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3006/ | Lim, KyungTae and Poibeau, Thierry | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 63--70 | In this paper, we present our multilingual dependency parser developed for the CoNLL 2017 UD Shared Task dealing with {\textquotedblleft}Multilingual Parsing from Raw Text to Universal Dependencies{\textquotedblright}. Our parser extends the monolingual BIST-parser as a multi-source multilingual trainable parser. Thanks to multilingual word embeddings and one hot encodings for languages, our system can use both monolingual and multi-source training. We trained 69 monolingual language models and 13 multilingual models for the shared task. Our multilingual approach making use of different resources yield better results than the monolingual approach for 11 languages. Our system ranked 5th and achieved 70.93 overall LAS score over the 81 test corpora (macro-averaged LAS F1 score). | null | null | 10.18653/v1/K17-3006 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,898 |
inproceedings | sato-etal-2017-adversarial | Adversarial Training for Cross-Domain {U}niversal {D}ependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3007/ | Sato, Motoki and Manabe, Hitoshi and Noji, Hiroshi and Matsumoto, Yuji | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 71--79 | We describe our submission to the CoNLL 2017 shared task, which exploits the shared common knowledge of a language across different domains via a domain adaptation technique. Our approach is an extension to the recently proposed adversarial training technique for domain adaptation, which we apply on top of a graph-based neural dependency parsing model on bidirectional LSTMs. In our experiments, we find our baseline graph-based parser already outperforms the official baseline model (UDPipe) by a large margin. Further, by applying our technique to the treebanks of the same language with different domains, we observe an additional gain in the performance, in particular for the domains with less training data. | null | null | 10.18653/v1/K17-3007 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,899 |
inproceedings | kirnap-etal-2017-parsing | Parsing with Context Embeddings | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3008/ | K{\i}rnap, {\"O}mer and {\"O}nder, Berkay Furkan and Yuret, Deniz | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 80--87 | We introduce context embeddings, dense vectors derived from a language model that represent the left/right context of a word instance, and demonstrate that context embeddings significantly improve the accuracy of our transition based parser. Our model consists of a bidirectional LSTM (BiLSTM) based language model that is pre-trained to predict words in plain text, and a multi-layer perceptron (MLP) decision model that uses features from the language model to predict the correct actions for an ArcHybrid transition based parser. We participated in the CoNLL 2017 UD Shared Task as the {\textquotedblleft}Ko{\c{c}} University{\textquotedblright} team and our system was ranked 7th out of 33 systems that parsed 81 treebanks in 49 languages. | null | null | 10.18653/v1/K17-3008 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,900 |
inproceedings | straka-strakova-2017-tokenizing | Tokenizing, {POS} Tagging, Lemmatizing and Parsing {UD} 2.0 with {UDP}ipe | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3009/ | Straka, Milan and Strakov{\'a}, Jana | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 88--99 | Many natural language processing tasks, including the most advanced ones, routinely start by several basic processing steps {--} tokenization and segmentation, most likely also POS tagging and lemmatization, and commonly parsing as well. A multilingual pipeline performing these steps can be trained using the Universal Dependencies project, which contains annotations of the described tasks for 50 languages in the latest release UD 2.0. We present an update to UDPipe, a simple-to-use pipeline processing CoNLL-U version 2.0 files, which performs these tasks for multiple languages without requiring additional external data. We provide models for all 50 languages of UD 2.0, and furthermore, the pipeline can be trained easily using data in CoNLL-U format. UDPipe is a standalone application in C++, with bindings available for Python, Java, C{\#} and Perl. In the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, UDPipe was the eighth best system, while achieving low running times and moderately sized models. | null | null | 10.18653/v1/K17-3009 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,901 |
inproceedings | vania-etal-2017-uparse | {UP}arse: the {E}dinburgh system for the {C}o{NLL} 2017 {UD} shared task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3010/ | Vania, Clara and Zhang, Xingxing and Lopez, Adam | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 100--110 | This paper presents our submissions for the CoNLL 2017 UD Shared Task. Our parser, called UParse, is based on a neural network graph-based dependency parser. The parser uses features from a bidirectional LSTM to produce a distribution over possible heads for each word in the sentence. To allow transfer learning for low-resource treebanks and surprise languages, we train several multilingual models for related languages, grouped by their genus and language families. Out of 33 participants, our system ranks 9th in the main results, with 75.49 UAS and 68.87 LAS F-1 scores (averaged across 81 treebanks). | null | null | 10.18653/v1/K17-3010 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,902
inproceedings | heinecke-asadullah-2017-multi | Multi-Model and Crosslingual Dependency Analysis | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3011/ | Heinecke, Johannes and Asadullah, Munshi | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 111--118 | This paper describes the system of the Team Orange-Deski{\~n}, used for the CoNLL 2017 UD Shared Task in Multilingual Dependency Parsing. We based our approach on an existing open source tool (BistParser), which we modified in order to produce the required output. Additionally we added a kind of pseudo-projectivisation. This was needed since some of the task`s languages have a high percentage of non-projective dependency trees. In most cases we also employed word embeddings. For the 4 surprise languages, the data provided seemed too little to train on. Thus we decided to use the training data of typologically close languages instead. Our system achieved a macro-averaged LAS of 68.61{\%} (10th in the overall ranking) which improved to 69.38{\%} after bug fixes. | null | null | 10.18653/v1/K17-3011 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,903 |
inproceedings | kanerva-etal-2017-turkunlp | {T}urku{NLP}: Delexicalized Pre-training of Word Embeddings for Dependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3012/ | Kanerva, Jenna and Luotolahti, Juhani and Ginter, Filip | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 119--125 | We present the TurkuNLP entry in the CoNLL 2017 Shared Task on Multilingual Parsing from Raw Text to Universal Dependencies. The system is based on the UDPipe parser with our focus being in exploring various techniques to pre-train the word embeddings used by the parser in order to improve its performance especially on languages with small training sets. The system ranked 11th among the 33 participants overall, being 8th on the small treebanks, 10th on the large treebanks, 12th on the parallel test sets, and 26th on the surprise languages. | null | null | 10.18653/v1/K17-3012 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,904 |
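One plausible way to realize the delexicalized pre-training this entry describes is to replace each token with its universal POS tag plus morphological features before training standard embeddings. The sketch below uses gensim's word2vec; both the tool choice and the exact delexicalization scheme are our assumptions, not necessarily those of the paper.

```python
# Delexicalize a CoNLL-U-style corpus, then pre-train embeddings over the
# POS+feature "words" (pip install gensim; gensim >= 4 API).
from gensim.models import Word2Vec

def delexicalize(sentence):
    """sentence: list of (form, upos, feats) triples from a treebank."""
    return [f"{upos}|{feats}" for _, upos, feats in sentence]

corpus = [
    [("the", "DET", "Definite=Def"), ("cats", "NOUN", "Number=Plur"),
     ("sleep", "VERB", "Number=Plur")],
]
model = Word2Vec([delexicalize(s) for s in corpus],
                 vector_size=50, window=3, min_count=1)
print(model.wv["NOUN|Number=Plur"][:5])   # embedding of a delexicalized token
```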
inproceedings | yu-etal-2017-parse | The parse is darc and full of errors: Universal dependency parsing with transition-based and graph-based algorithms | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3013/ | Yu, Kuan and Sofroniev, Pavel and Schill, Erik and Hinrichs, Erhard | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 126--133 | We developed two simple systems for dependency parsing: darc, a transition-based parser, and mstnn, a graph-based parser. We tested our systems in the CoNLL 2017 UD Shared Task, with darc being the official system. Darc ranked 12th among 33 systems, just above the baseline. Mstnn had no official ranking, but its main score would have placed above the 27th-ranked system. In this paper, we describe our two systems, examine their strengths and weaknesses, and discuss the lessons we learned. | null | null | 10.18653/v1/K17-3013 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,905
inproceedings | nguyen-etal-2017-novel | A Novel Neural Network Model for Joint {POS} Tagging and Graph-based Dependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3014/ | Nguyen, Dat Quoc and Dras, Mark and Johnson, Mark | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 134--142 | We present a novel neural network model that learns POS tagging and graph-based dependency parsing jointly. Our model uses bidirectional LSTMs to learn feature representations shared for both POS tagging and dependency parsing tasks, thus handling the feature-engineering problem. Our extensive experiments, on 19 languages from the Universal Dependencies project, show that our model outperforms the state-of-the-art neural network-based Stack-propagation model for joint POS tagging and transition-based dependency parsing, resulting in a new state of the art. Our code is open-source and available together with pre-trained models at: \url{https://github.com/datquocnguyen/jPTDP} | null | null | 10.18653/v1/K17-3014 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,906 |
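The shared-representation idea in this entry (one BiLSTM feeding both a tagging head and an arc-scoring head) can be sketched compactly. This is a generic multi-task skeleton under our own naming and sizing assumptions, not the paper's exact architecture.

```python
import torch
from torch import nn

class JointTaggerParser(nn.Module):
    """Shared BiLSTM features feed both a POS head and an arc-scoring head,
    so tagging and graph-based parsing are learned jointly (a sketch)."""
    def __init__(self, vocab, n_tags, dim=64, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.enc = nn.LSTM(dim, hid, bidirectional=True, batch_first=True)
        self.tagger = nn.Linear(2 * hid, n_tags)
        self.head_mlp = nn.Linear(2 * hid, hid)
        self.dep_mlp = nn.Linear(2 * hid, hid)

    def forward(self, ids):                       # ids: (batch, n)
        h, _ = self.enc(self.emb(ids))            # (batch, n, 2*hid)
        tag_logits = self.tagger(h)               # (batch, n, n_tags)
        # arc_scores[b, d, k] = score of word k as the head of word d
        arc_scores = self.dep_mlp(h) @ self.head_mlp(h).transpose(1, 2)
        return tag_logits, arc_scores

# Training sums both objectives, e.g. with cross-entropy ce:
# loss = ce(tag_logits.flatten(0, 1), gold_tags.flatten()) \
#      + ce(arc_scores.flatten(0, 1), gold_heads.flatten())
```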
inproceedings | qian-liu-2017-non | A non-{DNN} Feature Engineering Approach to Dependency Parsing {--} {FBAML} at {C}o{NLL} 2017 Shared Task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3015/ | Qian, Xian and Liu, Yang | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 143--151 | For this year`s multilingual dependency parsing shared task, we developed a pipeline system, which uses a variety of features for each of its components. Unlike the recent popular deep learning approaches that learn low-dimensional dense features using non-linear classifiers, our system uses structured linear classifiers to learn millions of sparse features. Specifically, we trained a linear classifier for sentence boundary prediction, linear chain conditional random fields (CRFs) for tokenization, part-of-speech tagging and morph analysis. A second order graph based parser learns the tree structure (without relations), and a linear tree CRF then assigns relations to the dependencies in the tree. Our system achieves reasonable performance {--} a 67.87{\%} official macro-averaged F1 score. | null | null | 10.18653/v1/K17-3015 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,907
inproceedings | vilares-gomez-rodriguez-2017-non | A non-projective greedy dependency parser with bidirectional {LSTM}s | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3016/ | Vilares, David and G{\'o}mez-Rodr{\'i}guez, Carlos | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 152--162 | The LyS-FASTPARSE team presents BIST-COVINGTON, a neural implementation of the Covington (2001) algorithm for non-projective dependency parsing. The bidirectional LSTM approach by Kiperwasser and Goldberg (2016) is used to train a greedy parser with a dynamic oracle to mitigate error propagation. The model participated in the CoNLL 2017 UD Shared Task. In spite of not using any ensemble methods and using the baseline segmentation and PoS tagging, the parser obtained good results on both macro-average LAS and UAS in the big treebanks category (55 languages), ranking 7th out of 33 teams. In the all treebanks category (LAS and UAS) we ranked 16th and 12th. The gap between the all and big categories is mainly due to the poor performance on four parallel PUD treebanks, suggesting that some {\textquoteleft}suffixed{\textquoteright} treebanks (e.g. Spanish-AnCora) perform poorly on cross-treebank settings, which does not occur with the corresponding {\textquoteleft}unsuffixed{\textquoteright} treebank (e.g. Spanish). By changing that, we obtain the 11th best LAS among all runs (official and unofficial). The code is made available at \url{https://github.com/CoNLL-UD-2017/LyS-FASTPARSE} | null | null | 10.18653/v1/K17-3016 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,908
inproceedings | aufrant-etal-2017-limsi | {LIMSI}@{C}o{NLL}`17: {UD} Shared Task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3017/ | Aufrant, Lauriane and Wisniewski, Guillaume and Yvon, Fran{\c{c}}ois | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 163--173 | This paper describes LIMSI`s submission to the CoNLL 2017 UD Shared Task, which is focused on small treebanks, and how to improve low-resourced parsing only by ad hoc combination of multiple views and resources. We present our approach for low-resourced parsing, together with a detailed analysis of the results for each test treebank. We also report extensive analysis experiments on model selection for the PUD treebanks, and on annotation consistency among UD treebanks. | null | null | 10.18653/v1/K17-3017 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,909 |
inproceedings | dumitrescu-etal-2017-racais | {RACAI}`s Natural Language Processing pipeline for {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3018/ | Dumitrescu, Stefan Daniel and Boros, Tiberiu and Tufis, Dan | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 174--181 | This paper presents RACAI`s approach, experiments and results at the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. We handle raw text and we cover tokenization, sentence splitting, word segmentation, tagging, lemmatization and parsing. All results are reported under strict training, development and testing conditions, in which the corpora provided for the shared task are used {\textquotedblleft}as is{\textquotedblright}, without any modifications to the composition of the train and development sets. | null | null | 10.18653/v1/K17-3018 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,910
inproceedings | das-etal-2017-delexicalized | Delexicalized transfer parsing for low-resource languages using transformed and combined treebanks | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3019/ | Das, Ayan and Zaffar, Affan and Sarkar, Sudeshna | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 182--190 | This paper describes our dependency parsing system in the CoNLL 2017 shared task on Multilingual Parsing from Raw Text to Universal Dependencies. We primarily focus on the low-resource languages (surprise languages). We have developed a framework to combine multiple treebanks to train parsers for low-resource languages by a delexicalization method. We have applied transformations on source-language treebanks based on syntactic features of the low-resource language to improve performance of the parser. In the official evaluation, our system achieves a macro-averaged LAS score of 67.61 and 37.16 on the entire blind test data and the surprise language test data, respectively. | null | null | 10.18653/v1/K17-3019 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,911
inproceedings | wang-etal-2017-transition-based | A Transition-based System for {U}niversal {D}ependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3020/ | Wang, Hao and Zhao, Hai and Zhang, Zhisong | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 191--197 | This paper describes the system for our participation in the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. In this work, we design a system based on UDPipe for universal dependency parsing, where multilingual transition-based models are trained for different treebanks. Our system directly takes raw text as input, performing several intermediate steps like tokenizing and tagging, and finally generates the corresponding dependency trees. For the special surprise languages for this task, we adopt a delexicalized strategy and predict based on transfer learning from other related languages. In the final evaluation of the shared task, our system achieves a result of 66.53{\%} in macro-averaged LAS F1-score. | null | null | 10.18653/v1/K17-3020 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,912
inproceedings | hornby-etal-2017-corpus | Corpus Selection Approaches for Multilingual Parsing from Raw Text to {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3021/ | Hornby, Ryan and Taylor, Clark and Park, Jungyeul | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 198--206 | This paper describes UALing`s approach to the \textit{CoNLL 2017 UD Shared Task} using corpus selection techniques to reduce training data size. The methodology is simple: we use similarity measures to select a corpus from available training data (even from multiple corpora for surprise languages) and use the resulting corpus to complete the parsing task. The training and parsing is done with the baseline UDPipe system (Straka et al., 2016). While our approach reduces the size of training data significantly, it retains performance within 0.5{\%} of the baseline system. Due to the reduction in training data size, our system performs faster than the na{\"i}ve, complete corpus method. Specifically, our system runs in less than 10 minutes, ranking it among the fastest entries for this task. Our system is available at \url{https://github.com/CoNLL-UD-2017/UALING}. | null | null | 10.18653/v1/K17-3021 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,913
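Similarity-driven corpus selection of the kind this entry describes can be sketched with a simple character n-gram profile comparison. The measure below is illustrative only; the paper explores its own set of similarity measures.

```python
# Score each candidate treebank against the target text by cosine similarity
# of character trigram frequency profiles, and keep the best match.
from collections import Counter
from math import sqrt

def ngram_profile(text, n=3):
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    num = sum(p[g] * q[g] for g in set(p) & set(q))
    den = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return num / den if den else 0.0

def select_corpus(target_text, candidates):
    """candidates: dict mapping corpus name to its raw text."""
    tgt = ngram_profile(target_text)
    return max(candidates, key=lambda name: cosine(tgt, ngram_profile(candidates[name])))
```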
inproceedings | de-lhoneux-etal-2017-raw | From Raw Text to {U}niversal {D}ependencies - Look, No Tags! | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3022/ | de Lhoneux, Miryam and Shao, Yan and Basirat, Ali and Kiperwasser, Eliyahu and Stymne, Sara and Goldberg, Yoav and Nivre, Joakim | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 207--217 | We present the Uppsala submission to the CoNLL 2017 shared task on parsing from raw text to universal dependencies. Our system is a simple pipeline consisting of two components. The first performs joint word and sentence segmentation on raw text; the second predicts dependency trees from raw words. The parser bypasses the need for part-of-speech tagging, but uses word embeddings based on universal tag distributions. We achieved a macro-averaged LAS F1 of 65.11 in the official test run, which improved to 70.49 after bug fixes. We obtained the 2nd best result for sentence segmentation with a score of 89.03. | null | null | 10.18653/v1/K17-3022 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,914 |
inproceedings | akkus-etal-2017-initial | Initial Explorations of {CCG} Supertagging for {U}niversal {D}ependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3023/ | Akkus, Burak Kerim and Azizoglu, Heval and Cakici, Ruket | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 218--227 | In this paper we describe the system by the METU team for universal dependency parsing of multilingual text. We use a neural network-based dependency parser that has a greedy transition approach to dependency parsing. CCG supertags contain rich structural information that proves useful in certain NLP tasks. We experiment with CCG supertags as additional features. The neural network parser is trained together with dependencies and simplified CCG tags as well as other features provided. | null | null | 10.18653/v1/K17-3023 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,915
inproceedings | moor-etal-2017-clcl | {CLCL} (Geneva) {DINN} Parser: a Neural Network Dependency Parser Ten Years Later | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3024/ | Moor, Christophe and Merlo, Paola and Henderson, James and Wang, Haozhou | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 228--236 | This paper describes the University of Geneva`s submission to the CoNLL 2017 shared task Multilingual Parsing from Raw Text to Universal Dependencies (listed as the CLCL (Geneva) entry). Our submitted parsing system is the grandchild of the first transition-based neural network dependency parser, which was the University of Geneva`s entry in the CoNLL 2007 multilingual dependency parsing shared task, with some improvements to speed and portability. These results provide a baseline for investigating how far we have come in the past ten years of work on neural network dependency parsing. | null | null | 10.18653/v1/K17-3024 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,916 |
inproceedings | ji-etal-2017-fast | A Fast and Lightweight System for Multilingual Dependency Parsing | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3025/ | Ji, Tao and Wu, Yuanbin and Lan, Man | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 237--242 | We present a multilingual dependency parser with a bidirectional-LSTM (BiLSTM) feature extractor and a multi-layer perceptron (MLP) classifier. We trained our transition-based projective parser in UD version 2.0 datasets without any additional data. The parser is fast, lightweight and effective on big treebanks. In the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, the official results show that the macro-averaged LAS F1 score of our system Mengest is 61.33{\%}. | null | null | 10.18653/v1/K17-3025 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,917 |
inproceedings | de-la-clergerie-etal-2017-parisnlp | The {P}aris{NLP} entry at the {C}on{LL} {UD} Shared Task 2017: A Tale of a {\#}{P}arsing{T}ragedy | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3026/ | de La Clergerie, {\'E}ric and Sagot, Beno{\^i}t and Seddah, Djam{\'e} | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 243--252 | We present the ParisNLP entry at the UD CoNLL 2017 parsing shared task. In addition to the UDpipe models provided, we built our own data-driven tokenization models, sentence segmenter and lexicon-based morphological analyzers. All of these were used with a range of different parsing models (neural or not, feature-rich or not, transition or graph-based, etc.) and the best combination for each language was selected. Unfortunately, a glitch in the shared task`s Matrix led our model selector to run generic, weakly lexicalized models, tailored for surprise languages, instead of our dataset-specific models. Because of this {\#}ParsingTragedy, we officially ranked 27th, whereas our real models finally unofficially ranked 6th. | null | null | 10.18653/v1/K17-3026 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,918 |
inproceedings | more-tsarfaty-2017-universal | Universal Joint Morph-Syntactic Processing: The {O}pen {U}niversity of {I}srael`s Submission to The {C}o{NLL} 2017 Shared Task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3027/ | More, Amir and Tsarfaty, Reut | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 253--264 | We present the Open University`s submission to the CoNLL 2017 Shared Task on multilingual parsing from raw text to Universal Dependencies. The core of our system is a joint morphological disambiguator and syntactic parser which accepts morphologically analyzed surface tokens as input and returns morphologically disambiguated dependency trees as output. Our parser requires a lattice as input, so we generate morphological analyses of surface tokens using a data-driven morphological analyzer that derives its lexicon from the UD training corpora, and we rely on UDPipe for sentence segmentation and surface-level tokenization. We report an official macro-averaged LAS of 56.56. Although our model is not as performant as many others, it does not make use of neural networks; therefore, we do not rely on word embeddings or any other data source other than the corpora themselves. In addition, we show the utility of a lexicon-backed morphological analyzer for the MRL Modern Hebrew. We use our results on Modern Hebrew to argue that the UD community should define a UD-compatible standard for access to lexical resources, which we argue is crucial for MRLs and low resource languages in particular. | null | null | 10.18653/v1/K17-3027 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,919
inproceedings | kanayama-etal-2017-semi | A Semi-universal Pipelined Approach to the {C}o{NLL} 2017 {UD} Shared Task | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3028/ | Kanayama, Hiroshi and Muraoka, Masayasu and Yoshikawa, Katsumasa | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 265--273 | This paper presents our system submitted for the CoNLL 2017 Shared Task, {\textquotedblleft}Multilingual Parsing from Raw Text to Universal Dependencies.{\textquotedblright} We ran the system for all languages with our own fully pipelined components without relying on re-trained baseline systems. To train the dependency parser, we used only the universal part-of-speech tags and distance between words, and applied deterministic rules to assign dependency labels. The simple and delexicalized models are suitable for cross-lingual transfer approaches and a universal language model. Experimental results show that our model performed well on some metrics and prompt discussion of topics such as the contribution of each component and syntactic similarities among languages. | null | null | 10.18653/v1/K17-3028 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,920
inproceedings | garcia-gamallo-2017-rule | A rule-based system for cross-lingual parsing of {R}omance languages with {U}niversal {D}ependencies | Haji{\v{c}}, Jan and Zeman, Dan | aug | 2017 | Vancouver, Canada | Association for Computational Linguistics | https://aclanthology.org/K17-3029/ | Garcia, Marcos and Gamallo, Pablo | Proceedings of the {C}o{NLL} 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies | 274--282 | This article describes MetaRomance, a rule-based cross-lingual parser for Romance languages submitted to CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. The system is an almost delexicalized parser which does not need training data to analyze Romance languages. It contains linguistically motivated rules based on PoS-tag patterns. The rules included in MetaRomance were developed in about 12 hours by one expert with no prior knowledge in Universal Dependencies, and can be easily extended using a transparent formalism. In this paper we compare the performance of MetaRomance with other supervised systems participating in the competition, paying special attention to the parsing of different treebanks of the same language. We also compare our system with a delexicalized parser for Romance languages, and take advantage of the harmonized annotation of Universal Dependencies to propose a language ranking based on the syntactic distance each variety has from Romance languages. | null | null | 10.18653/v1/K17-3029 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,921 |
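A delexicalized, PoS-pattern-based rule of the kind this entry describes can be very small. The toy rule below (each determiner attaches to the nearest following noun) is purely illustrative and is not one of MetaRomance's actual rules.

```python
def attach_determiners(tags):
    """tags: list of UPOS tags; returns (dependent, head) index pairs."""
    arcs = []
    for i, tag in enumerate(tags):
        if tag == "DET":
            head = next((j for j in range(i + 1, len(tags)) if tags[j] == "NOUN"), None)
            if head is not None:
                arcs.append((i, head))
    return arcs

print(attach_determiners(["DET", "ADJ", "NOUN", "VERB"]))  # [(0, 2)]
```

A full rule set of this style stays language-independent because it only ever inspects the universal tag sequence, which is what makes the parser portable across Romance languages without training data.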
article | gardent-perez-beltrachini-2017-statistical | A Statistical, Grammar-Based Approach to Microplanning | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1001/ | Gardent, Claire and Perez-Beltrachini, Laura | null | 1--30 | Although there has been much work in recent years on data-driven natural language generation, little attention has been paid to the fine-grained interactions that arise during microplanning between aggregation, surface realization, and sentence segmentation. In this article, we propose a hybrid symbolic/statistical approach to jointly model the constraints regulating these interactions. Our approach integrates a small handwritten grammar, a statistical hypertagger, and a surface realization algorithm. It is applied to the verbalization of knowledge base queries and tested on 13 knowledge bases to demonstrate domain independence. We evaluate our approach in several ways. A quantitative analysis shows that the hybrid approach outperforms a purely symbolic approach in terms of both speed and coverage. Results from a human study indicate that users find the output of this hybrid statistic/symbolic system more fluent than both a template-based and a purely symbolic grammar-based approach. Finally, we illustrate by means of examples that our approach can account for various factors impacting aggregation, sentence segmentation, and surface realization. | Computational Linguistics | 43 | 10.1162/COLI_a_00273 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,923 |
article | tripodi-pelillo-2017-game | A Game-Theoretic Approach to Word Sense Disambiguation | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1002/ | Tripodi, Rocco and Pelillo, Marcello | null | 31--70 | This article presents a new model for word sense disambiguation formulated in terms of evolutionary game theory, where each word to be disambiguated is represented as a node on a graph whose edges represent word relations and senses are represented as classes. The words simultaneously update their class membership preferences according to the senses that neighboring words are likely to choose. We use distributional information to weigh the influence that each word has on the decisions of the others and semantic similarity information to measure the strength of compatibility among the choices. With this information we can formulate the word sense disambiguation problem as a constraint satisfaction problem and solve it using tools derived from game theory, maintaining the textual coherence. The model is based on two ideas: Similar words should be assigned to similar classes and the meaning of a word does not depend on all the words in a text but just on some of them. The article provides an in-depth motivation of the idea of modeling the word sense disambiguation problem in terms of game theory, which is illustrated by an example. The conclusion presents an extensive analysis on the combination of similarity measures to use in the framework and a comparison with state-of-the-art systems. The results show that our model outperforms state-of-the-art algorithms and can be applied to different tasks and in different scenarios. | Computational Linguistics | 43 | 10.1162/COLI_a_00274 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,924 |
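The game-theoretic update this entry describes is typically realized with discrete replicator dynamics: each word holds a mixed strategy over senses and updates it by the payoff earned against its neighbors. A numpy sketch of that scheme follows; matrix names and the uniform initialization are our assumptions.

```python
import numpy as np

def replicator_wsd(W, Z, iters=100):
    """Discrete replicator dynamics for graph-based WSD (a sketch of the
    general scheme). W: (n, n) word-word similarity; Z: (m, m) symmetric
    sense-sense compatibility; returns x: (n, m) per-word sense distributions."""
    n, m = W.shape[0], Z.shape[0]
    x = np.full((n, m), 1.0 / m)              # start from uniform mixed strategies
    for _ in range(iters):
        payoff = W @ x @ Z                    # payoff of each sense given neighbors
        x = x * payoff
        x /= x.sum(axis=1, keepdims=True) + 1e-12   # renormalize to distributions
    return x
```

At a fixed point each word's probability mass concentrates on senses that are mutually compatible with its neighbors' choices, which is the textual-coherence constraint the article formalizes.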
article | shutova-etal-2017-multilingual | Multilingual Metaphor Processing: Experiments with Semi-Supervised and Unsupervised Learning | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1003/ | Shutova, Ekaterina and Sun, Lin and Dar{\'i}o Guti{\'e}rrez, Elkin and Lichtenstein, Patricia and Narayanan, Srini | null | 71--123 | Highly frequent in language and communication, metaphor represents a significant challenge for Natural Language Processing (NLP) applications. Computational work on metaphor has traditionally evolved around the use of hand-coded knowledge, making the systems hard to scale. Recent years have witnessed a rise in statistical approaches to metaphor processing. However, these approaches often require extensive human annotation effort and are predominantly evaluated within a limited domain. In contrast, we experiment with weakly supervised and unsupervised techniques{---}with little or no annotation{---}to generalize higher-level mechanisms of metaphor from distributional properties of concepts. We investigate different levels and types of supervision (learning from linguistic examples vs. learning from a given set of metaphorical mappings vs. learning without annotation) in flat and hierarchical, unconstrained and constrained clustering settings. Our aim is to identify the optimal type of supervision for a learning algorithm that discovers patterns of metaphorical association from text. In order to investigate the scalability and adaptability of our models, we applied them to data in three languages from different language groups{---}English, Spanish, and Russian{---}achieving state-of-the-art results with little supervision. Finally, we demonstrate that statistical methods can facilitate and scale up cross-linguistic research on metaphor. | Computational Linguistics | 43 | 10.1162/COLI_a_00275 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,925 |
article | habernal-gurevych-2017-argumentation | Argumentation Mining in User-Generated Web Discourse | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1004/ | Habernal, Ivan and Gurevych, Iryna | null | 125--179 | The goal of argumentation mining, an evolving research field in computational linguistics, is to design methods capable of analyzing people`s argumentation. In this article, we go beyond the state of the art in several ways. (i) We deal with actual Web data and take up the challenges given by the variety of registers, multiple domains, and unrestricted noisy user-generated Web discourse. (ii) We bridge the gap between normative argumentation theories and argumentation phenomena encountered in actual data by adapting an argumentation model tested in an extensive annotation study. (iii) We create a new gold standard corpus (90k tokens in 340 documents) and experiment with several machine learning methods to identify argument components. We offer the data, source codes, and annotation guidelines to the community under free licenses. Our findings show that argumentation mining in user-generated Web discourse is a feasible but challenging task. | Computational Linguistics | 43 | 10.1162/COLI_a_00276 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,926 |
article | stilo-velardi-2017-hashtag | Hashtag Sense Clustering Based on Temporal Similarity | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1005/ | Stilo, Giovanni and Velardi, Paola | null | 181--200 | Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns. | Computational Linguistics | 43 | 10.1162/COLI_a_00277 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,927 |
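The core intuition here (semantically related hashtags have similar, synchronous usage patterns) suggests clustering hashtags by the correlation of their usage time series. Below is a simplified numpy stand-in for the article's clustering algorithm; the threshold and connected-components grouping are our assumptions.

```python
import numpy as np

def temporal_clusters(series, threshold=0.8):
    """series: dict hashtag -> 1-D array of per-day counts (equal lengths).
    Groups hashtags whose usage series are strongly Pearson-correlated, as
    connected components of the thresholded similarity graph."""
    names = list(series)
    X = np.vstack([series[n] for n in names]).astype(float)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-12)
    sim = (X @ X.T) / X.shape[1]              # Pearson correlation matrix
    clusters, seen = [], set()
    for i in range(len(names)):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:                          # flood-fill one component
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(len(names))
                         if v not in comp and sim[u, v] >= threshold)
        seen |= comp
        clusters.append([names[v] for v in sorted(comp)])
    return clusters
```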
article | benamara-etal-2017-evaluative | Evaluative Language Beyond Bags of Words: Linguistic Insights and Computational Applications | null | apr | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-1006/ | Benamara, Farah and Taboada, Maite and Mathieu, Yannick | null | 201--264 | The study of evaluation, affect, and subjectivity is a multidisciplinary enterprise, including sociology, psychology, economics, linguistics, and computer science. A number of excellent computational linguistics and linguistic surveys of the field exist. Most surveys, however, do not bring the two disciplines together to show how methods from linguistics can benefit computational sentiment analysis systems. In this survey, we show how incorporating linguistic insights, discourse information, and other contextual phenomena, in combination with the statistical exploitation of data, can result in an improvement over approaches that take advantage of only one of these perspectives. We first provide a comprehensive introduction to evaluative language from both a linguistic and computational perspective. We then argue that the standard computational definition of the concept of evaluative language neglects the dynamic nature of evaluation, in which the interpretation of a given evaluation depends on linguistic and extra-linguistic contextual factors. We thus propose a dynamic definition that incorporates update functions. The update functions allow for different contextual aspects to be incorporated into the calculation of sentiment for evaluative words or expressions, and can be applied at all levels of discourse. We explore each level and highlight which linguistic aspects contribute to accurate extraction of sentiment. We end the review by outlining what we believe the future directions of sentiment analysis are, and the role that discourse and contextual information need to play. | Computational Linguistics | 43 | 10.1162/COLI_a_00278 | null | null | null | null | null | null | 1 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,928 |
article | irvine-callison-burch-2017-comprehensive | A Comprehensive Analysis of Bilingual Lexicon Induction | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2001/ | Irvine, Ann and Callison-Burch, Chris | null | 273--310 | Bilingual lexicon induction is the task of inducing word translations from monolingual corpora in two languages. In this article we present the most comprehensive analysis of bilingual lexicon induction to date. We present experiments on a wide range of languages and data sizes. We examine translation into English from 25 foreign languages: Albanian, Azeri, Bengali, Bosnian, Bulgarian, Cebuano, Gujarati, Hindi, Hungarian, Indonesian, Latvian, Nepali, Romanian, Serbian, Slovak, Somali, Spanish, Swedish, Tamil, Telugu, Turkish, Ukrainian, Uzbek, Vietnamese, and Welsh. We analyze the behavior of bilingual lexicon induction on low-frequency words, rather than testing solely on high-frequency words, as previous research has done. Low-frequency words are more relevant to statistical machine translation, where systems typically lack translations of rare words that fall outside of their training data. We systematically explore a wide range of features and phenomena that affect the quality of the translations discovered by bilingual lexicon induction. We provide illustrative examples of the highest ranking translations for orthogonal signals of translation equivalence like contextual similarity and temporal similarity. We analyze the effects of frequency and burstiness, and the sizes of the seed bilingual dictionaries and the monolingual training corpora. Additionally, we introduce a novel discriminative approach to bilingual lexicon induction. Our discriminative model is capable of combining a wide variety of features that individually provide only weak indications of translation equivalence. When feature weights are discriminatively set, these signals produce dramatically higher translation quality than previous approaches that combined signals in an unsupervised fashion (e.g., using minimum reciprocal rank). We also directly compare our model`s performance against a sophisticated generative approach, the matching canonical correlation analysis (MCCA) algorithm used by Haghighi et al. (2008). Our algorithm achieves an accuracy of 42{\%} versus MCCA`s 15{\%}. | Computational Linguistics | 43 | 10.1162/COLI_a_00284 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,932 |
article | ballesteros-etal-2017-greedy | Greedy Transition-Based Dependency Parsing with Stack {LSTM}s | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2002/ | Ballesteros, Miguel and Dyer, Chris and Goldberg, Yoav and Smith, Noah A. | null | 311--347 | We introduce a greedy transition-based parser that learns to represent parser states using recurrent neural networks. Our primary innovation that enables us to do this efficiently is a new control structure for sequential neural networks{---}the stack long short-term memory unit (LSTM). Like the conventional stack data structures used in transition-based parsers, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. Our model captures three facets of the parser`s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of transition actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. In addition, we compare two different word representations: (i) standard word vectors based on look-up tables and (ii) character-based models of words. Although standard word embedding models work well in all languages, the character-based models improve the handling of out-of-vocabulary words, particularly in morphologically rich languages. Finally, we discuss the use of dynamic oracles in training the parser. During training, dynamic oracles alternate between sampling parser states from the training data and from the model as it is being learned, making the model more robust to the kinds of errors that will be made at test time. Training our model with dynamic oracles yields a linear-time greedy parser with very competitive performance. | Computational Linguistics | 43 | 10.1162/COLI_a_00285 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,933 |
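The stack LSTM construction this entry describes admits a compact sketch: push runs the LSTM cell one step from the current top state, pop restores the previous state, and both are constant time. The PyTorch sketch below follows that description; class and method names are ours.

```python
import torch
from torch import nn

class StackLSTM(nn.Module):
    """A stack whose contents are summarized by an LSTM state: push advances
    the cell from the top state, pop reverts to the previous state."""
    def __init__(self, dim, hid):
        super().__init__()
        self.cell = nn.LSTMCell(dim, hid)
        zero = torch.zeros(1, hid)
        self.states = [(zero, zero.clone())]    # sentinel state for the empty stack

    def push(self, x):                           # x: (1, dim) element embedding
        self.states.append(self.cell(x, self.states[-1]))

    def pop(self):
        if len(self.states) > 1:
            self.states.pop()

    def summary(self):                           # continuous embedding of the stack
        return self.states[-1][0]

s = StackLSTM(8, 16)
s.push(torch.randn(1, 8)); s.push(torch.randn(1, 8)); s.pop()
print(s.summary().shape)                         # torch.Size([1, 16])
```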
article | sajjad-etal-2017-statistical | Statistical Models for Unsupervised, Semi-Supervised and Supervised Transliteration Mining | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2003/ | Sajjad, Hassan and Schmid, Helmut and Fraser, Alexander and Sch{\"u}tze, Hinrich | null | 349--375 | We present a generative model that efficiently mines transliteration pairs in a consistent fashion in three different settings: unsupervised, semi-supervised, and supervised transliteration mining. The model interpolates two sub-models, one for the generation of transliteration pairs and one for the generation of non-transliteration pairs (i.e., noise). The model is trained on noisy unlabeled data using the EM algorithm. During training the transliteration sub-model learns to generate transliteration pairs and the fixed non-transliteration model generates the noise pairs. After training, the unlabeled data is disambiguated based on the posterior probabilities of the two sub-models. We evaluate our transliteration mining system on data from a transliteration mining shared task and on parallel corpora. For three out of four language pairs, our system outperforms all semi-supervised and supervised systems that participated in the NEWS 2010 shared task. On word pairs extracted from parallel corpora with fewer than 2{\%} transliteration pairs, our system achieves up to 86.7{\%} F-measure with 77.9{\%} precision and 97.8{\%} recall. | Computational Linguistics | 43 | 10.1162/COLI_a_00286 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,934
article | chinaei-etal-2017-identifying | Identifying and Avoiding Confusion in Dialogue with People with {A}lzheimer`s Disease | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2004/ | Chinaei, Hamidreza and Currie, Leila Chan and Danks, Andrew and Lin, Hubert and Mehta, Tejas and Rudzicz, Frank | null | 377--406 | Alzheimer`s disease (AD) is an increasingly prevalent cognitive disorder in which memory, language, and executive function deteriorate, usually in that order. There is a growing need to support individuals with AD and other forms of dementia in their daily lives, and our goal is to do so through speech-based interaction. Given that 33{\%} of conversations with people with middle-stage AD involve a breakdown in communication, it is vital that automated dialogue systems be able to identify those breakdowns and, if possible, avoid them. In this article, we discuss several linguistic features that are verbal indicators of confusion in AD (including vocabulary richness, parse tree structures, and acoustic cues) and apply several machine learning algorithms to identify dialogue-relevant confusion from speech with up to 82{\%} accuracy. We also learn dialogue strategies to avoid confusion in the first place, which is accomplished using a partially observable Markov decision process and which obtains accuracies (up to 96.1{\%}) that are significantly higher than several baselines. This work represents a major step towards automated dialogue systems for individuals with dementia. | Computational Linguistics | 43 | 10.1162/COLI_a_00290 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,935 |
article | jansen-etal-2017-framing | Framing {QA} as Building and Ranking Intersentence Answer Justifications | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2005/ | Jansen, Peter and Sharp, Rebecca and Surdeanu, Mihai and Clark, Peter | null | 407--449 | We propose a question answering (QA) approach for standardized science exams that both identifies correct answers and produces compelling human-readable justifications for why those answers are correct. Our method first identifies the actual information needed in a question using psycholinguistic concreteness norms, then uses this information need to construct answer justifications by aggregating multiple sentences from different knowledge bases using syntactic and lexical information. We then jointly rank answers and their justifications using a reranking perceptron that treats justification quality as a latent variable. We evaluate our method on 1,000 multiple-choice questions from elementary school science exams, and empirically demonstrate that it performs better than several strong baselines, including neural network approaches. Our best configuration answers 44{\%} of the questions correctly, where the top justifications for 57{\%} of these correct answers contain a compelling human-readable justification that explains the inference required to arrive at the correct answer. We include a detailed characterization of the justification quality for both our method and a strong baseline, and show that information aggregation is key to addressing the information need in complex questions. | Computational Linguistics | 43 | 10.1162/COLI_a_00287 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,936 |
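The "justification quality as a latent variable" idea in this entry corresponds to a latent-variable perceptron update: if the top-scoring candidate answers incorrectly, update toward the best-scoring justification of the gold answer. The numpy sketch below illustrates one such step under our own naming; it assumes the gold answer appears among the candidates and is not the paper's code.

```python
import numpy as np

def latent_rerank_update(w, candidates, gold_answer, lr=1.0):
    """One reranking-perceptron step with latent justifications.
    candidates: list of (answer, feature_vector) pairs; several justifications
    may share the same answer."""
    scores = [float(w @ f) for _, f in candidates]
    pred = int(np.argmax(scores))
    if candidates[pred][0] == gold_answer:
        return w                               # prediction correct: no update
    # latent step: pick the current best-scoring justification of the gold answer
    gold = max((i for i, (a, _) in enumerate(candidates) if a == gold_answer),
               key=lambda i: scores[i])
    return w + lr * (candidates[gold][1] - candidates[pred][1])
```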
article | paraboni-etal-2017-squib | {S}quib: Effects of Cognitive Effort on the Resolution of Overspecified Descriptions | null | jun | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-2006/ | Paraboni, Ivandr{\'e} and Lan, Alex Gwo Jen and de Sant{'}Ana, Matheus Mendes and Coutinho, Fl{\'a}vio Luiz | null | 451--459 | Studies in referring expression generation (REG) have shown different effects of referential overspecification on the resolution of certain descriptions. To further investigate effects of this kind, this article reports two eye-tracking experiments that measure the time required to recognize target objects based on different kinds of information. Results suggest that referential overspecification may be either helpful or detrimental to identification depending on the kind of information that is actually overspecified, an insight that may be useful for the design of more informed hearer-oriented REG algorithms. | Computational Linguistics | 43 | 10.1162/COLI_a_00288 | null | null | null | null | null | null | 2 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,937 |
article | gebhardt-etal-2017-hybrid | Hybrid Grammars for Parsing of Discontinuous Phrase Structures and Non-Projective Dependency Structures | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3001/ | Gebhardt, Kilian and Nederhof, Mark-Jan and Vogler, Heiko | null | 465--520 | We explore the concept of hybrid grammars, which formalize and generalize a range of existing frameworks for dealing with discontinuous syntactic structures. Covered are both discontinuous phrase structures and non-projective dependency structures. Technically, hybrid grammars are related to synchronous grammars, where one grammar component generates linear structures and another generates hierarchical structures. By coupling lexical elements of both components together, discontinuous structures result. Several types of hybrid grammars are characterized. We also discuss grammar induction from treebanks. The main advantage over existing frameworks is the ability of hybrid grammars to separate discontinuity of the desired structures from time complexity of parsing. This permits exploration of a large variety of parsing algorithms for discontinuous structures, with different properties. This is confirmed by the reported experimental results, which show a wide variety of running time, accuracy, and frequency of parse failures. | Computational Linguistics | 43 | 10.1162/COLI_a_00291 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,940 |
article | deng-xue-2017-translation | Translation Divergences in {C}hinese{--}{E}nglish Machine Translation: An Empirical Investigation | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3002/ | Deng, Dun and Xue, Nianwen | null | 521--565 | In this article, we conduct an empirical investigation of translation divergences between Chinese and English relying on a parallel treebank. To do this, we first devise a hierarchical alignment scheme where Chinese and English parse trees are aligned in a way that eliminates conflicts and redundancies between word alignments and syntactic parses to prevent the generation of spurious translation divergences. Using this Hierarchically Aligned Chinese{--}English Parallel Treebank (HACEPT), we are able to semi-automatically identify and categorize the translation divergences between the two languages and quantify each type of translation divergence. Our results show that the translation divergences are much broader than described in previous studies that are largely based on anecdotal evidence and linguistic knowledge. The distribution of the translation divergences also shows that some high-profile translation divergences that motivate previous research are actually very rare in our data, whereas other translation divergences that have previously received little attention actually exist in large quantities. We also show that HACEPT allows the extraction of syntax-based translation rules, most of which are expressive enough to capture the translation divergences, and point out that the syntactic annotation in existing treebanks is not optimal for extracting such translation rules. We also discuss the implications of our study for attempts to bridge translation divergences by devising shared semantic representations across languages. Our quantitative results lend further support to the observation that although it is possible to bridge some translation divergences with semantic representations, other translation divergences are open-ended, thus building a semantic representation that captures all possible translation divergences may be impractical. | Computational Linguistics | 43 | 10.1162/COLI_a_00292 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,941 |
article | nguyen-eisenstein-2017-kernel | A Kernel Independence Test for Geographical Language Variation | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3003/ | Nguyen, Dong and Eisenstein, Jacob | null | 567--592 | Quantifying the degree of spatial dependence for linguistic variables is a key task for analyzing dialectal variation. However, existing approaches have important drawbacks. First, they are based on parametric models of dependence, which limits their power in cases where the underlying parametric assumptions are violated. Second, they are not applicable to all types of linguistic data: Some approaches apply only to frequencies, others to boolean indicators of whether a linguistic variable is present. We present a new method for measuring geographical language variation, which solves both of these problems. Our approach builds on Reproducing Kernel Hilbert Space (RKHS) representations for nonparametric statistics, and takes the form of a test statistic that is computed from pairs of individual geotagged observations without aggregation into predefined geographical bins. We compare this test with prior work using synthetic data as well as a diverse set of real data sets: a corpus of Dutch tweets, a Dutch syntactic atlas, and a data set of letters to the editor in North American newspapers. Our proposed test is shown to support robust inferences across a broad range of scenarios and types of data. | Computational Linguistics | 43 | 10.1162/COLI_a_00293 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,942 |
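RKHS-based independence statistics of the kind this entry builds on are commonly computed as HSIC over kernel Gram matrices of the paired observations. The numpy sketch below gives the standard biased HSIC estimate; the article's exact statistic and its calibration differ in detail, so treat this as the core idea only.

```python
import numpy as np

def rbf_gram(X, sigma):
    """RBF kernel Gram matrix for samples X of shape (n, d)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
    """Biased HSIC estimate between paired samples X (n, dx) and Y (n, dy);
    larger values indicate stronger dependence."""
    n = X.shape[0]
    K, L = rbf_gram(X, sigma_x), rbf_gram(Y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

# e.g. X = geotagged observation coordinates, Y = indicators of a linguistic variable
```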
article | rothe-schutze-2017-autoextend | {A}uto{E}xtend: Combining Word Embeddings with Semantic Resources | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3004/ | Rothe, Sascha and Sch{\"u}tze, Hinrich | null | 593--617 | We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource. The method is based on encoding and decoding the word embeddings and is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The obtained embeddings live in the same vector space as the input word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet, GermaNet, and Freebase as semantic resources. AutoExtend achieves state-of-the-art performance on Word-in-Context Similarity and Word Sense Disambiguation tasks. | Computational Linguistics | 43 | 10.1162/COLI_a_00294 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,943
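A useful mental model for the constraint that synset embeddings "live in the same vector space as the input word embeddings" is the simple baseline that sets each synset vector to the average of its lemmas' word vectors; AutoExtend itself learns the encoder/decoder mapping instead of fixing it. The sketch below only illustrates that baseline, not the system.

```python
import numpy as np

def synset_vectors(word_vecs, synset_lemmas):
    """Averaging baseline: a synset embedding is the mean of its lemmas' vectors.
    word_vecs: dict word -> np.array; synset_lemmas: dict synset -> [words]."""
    out = {}
    for syn, lemmas in synset_lemmas.items():
        vecs = [word_vecs[w] for w in lemmas if w in word_vecs]
        if vecs:
            out[syn] = np.mean(vecs, axis=0)
    return out
```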
article | stab-gurevych-2017-parsing | Parsing Argumentation Structures in Persuasive Essays | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3005/ | Stab, Christian and Gurevych, Iryna | null | 619--659 | In this article, we present a novel approach for parsing argumentation structures. We identify argument components using sequence labeling at the token level and apply a new joint model for detecting argumentation structures. The proposed model globally optimizes argument component types and argumentative relations using Integer Linear Programming. We show that our model significantly outperforms challenging heuristic baselines on two different types of discourse. Moreover, we introduce a novel corpus of persuasive essays annotated with argumentation structures. We show that our annotation scheme and annotation guidelines successfully guide human annotators to substantial agreement. | Computational Linguistics | 43 | 10.1162/COLI_a_00295 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,944 |
article | mathet-2017-agreement | The Agreement Measure {\ensuremath{\gamma}}cat, a Complement to {\ensuremath{\gamma}} Focused on Categorization of a Continuum | null | sep | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-3006/ | Mathet, Yann | null | 661--681 | Agreement on unitizing, where several annotators freely put units of various sizes and categories on a continuum, is difficult to assess because of the simultaneous discrepancies in positioning and categorizing. The recent agreement measure {\ensuremath{\gamma}} offers an overall solution that simultaneously takes into account positions and categories. In this article, I propose the additional coefficient {\ensuremath{\gamma}}cat, which complements {\ensuremath{\gamma}} by assessing the agreement on categorization of a continuum, putting aside positional discrepancies. When applied to pure categorization (with predefined units), {\ensuremath{\gamma}}cat behaves the same way as the famous dedicated Krippendorff`s {\ensuremath{\alpha}}, even with missing values, which proves its consistency. A variation of {\ensuremath{\gamma}}cat is also proposed that provides an in-depth assessment of categorizing for each individual category. The entire family of {\ensuremath{\gamma}} coefficients is implemented in free software. | Computational Linguistics | 43 | 10.1162/COLI_a_00296 | null | null | null | null | null | null | 3 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,945
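Since Krippendorff's alpha is the reference point against which gamma-cat is checked, a compact implementation of the nominal-level coefficient may be helpful. The sketch below uses the standard coincidence-matrix formulation; units coded fewer than twice are skipped, which is how missing values enter.

```python
import numpy as np

def krippendorff_alpha_nominal(units):
    """Nominal Krippendorff's alpha. units: list of label lists, one list per
    annotated unit (a missing value is simply an absent label)."""
    cats = sorted({c for u in units for c in u})
    idx = {c: i for i, c in enumerate(cats)}
    o = np.zeros((len(cats), len(cats)))        # coincidence matrix
    for u in units:
        m = len(u)
        if m < 2:
            continue                            # singly coded units carry no information
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[idx[u[i]], idx[u[j]]] += 1.0 / (m - 1)
    n = o.sum()
    nc = o.sum(axis=1)
    Do = n - np.trace(o)                        # observed off-diagonal coincidences
    De = (n * n - (nc ** 2).sum()) / (n - 1)    # expected by chance
    return 1.0 - Do / De

print(krippendorff_alpha_nominal([["a", "a"], ["a", "b"], ["b", "b"], ["b", "b"]]))
```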
article | joty-etal-2017-discourse | Discourse Structure in Machine Translation Evaluation | null | dec | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-4001/ | Joty, Shafiq and Guzm{\'a}n, Francisco and M{\`a}rquez, Llu{\'i}s and Nakov, Preslav | null | 683--722 | In this article, we explore the potential of using sentence-level discourse structure for machine translation evaluation. We first design discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory (RST). Then, we show that a simple linear combination with these measures can help improve various existing machine translation evaluation metrics regarding correlation with human judgments both at the segment level and at the system level. This suggests that discourse information is complementary to the information used by many of the existing evaluation metrics, and thus it could be taken into account when developing richer evaluation metrics, such as the WMT-14 winning combined metric DiscoTKparty. We also provide a detailed analysis of the relevance of various discourse elements and relations from the RST parse trees for machine translation evaluation. In particular, we show that (i) all aspects of the RST tree are relevant, (ii) nuclearity is more useful than relation type, and (iii) the similarity of the translation RST tree to the reference RST tree is positively correlated with translation quality. | Computational Linguistics | 43 | 10.1162/COLI_a_00298 | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,947 |
article | rozovskaya-etal-2017-adapting | Adapting to Learner Errors with Minimal Supervision | null | dec | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-4002/ | Rozovskaya, Alla and Roth, Dan and Sammons, Mark | null | 723--760 | This article considers the problem of correcting errors made by English as a Second Language writers from a machine learning perspective, and addresses an important issue of developing an appropriate training paradigm for the task, one that accounts for error patterns of non-native writers using minimal supervision. Existing training approaches present a trade-off between large amounts of cheap data offered by the native-trained models and additional knowledge of learner error patterns provided by the more expensive method of training on annotated learner data. We propose a novel training approach that draws on the strengths offered by the two standard training paradigms{---}of training either on native or on annotated learner data{---}and that outperforms both of these standard methods. Using the key observation that parameters relating to error regularities exhibited by non-native writers are relatively simple, we develop models that can incorporate knowledge about error regularities based on a small annotated sample but that are otherwise trained on native English data. The key contribution of this article is the introduction and analysis of two methods for adapting the learned models to error patterns of non-native writers; one method that applies to generative classifiers and a second that applies to discriminative classifiers. Both methods demonstrated state-of-the-art performance in several text correction competitions. In particular, the Illinois system that implements these methods ranked at the top in two recent CoNLL shared tasks on error correction. We conduct further evaluation of the proposed approaches studying the effect of using error data from speakers of the same native language, languages that are closely related linguistically, and unrelated languages. | Computational Linguistics | 43 | 10.1162/COLI_a_00299 | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,948
article | kadar-etal-2017-representation | Representation of Linguistic Form and Function in Recurrent Neural Networks | null | dec | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-4003/ | K{\'a}d{\'a}r, {\'A}kos and Chrupa{\l}a, Grzegorz and Alishahi, Afra | null | 761--780 | We present novel methods for analyzing the activation patterns of recurrent neural networks from a linguistic point of view and explore the types of linguistic structure they learn. As a case study, we use a standard standalone language model, and a multi-task gated recurrent network architecture consisting of two parallel pathways with shared word embeddings: The Visual pathway is trained on predicting the representations of the visual scene corresponding to an input sentence, and the Textual pathway is trained to predict the next word in the same sentence. We propose a method for estimating the amount of contribution of individual tokens in the input to the final prediction of the networks. Using this method, we show that the Visual pathway pays selective attention to lexical categories and grammatical functions that carry semantic information, and learns to treat word types differently depending on their grammatical function and their position in the sequential structure of the sentence. In contrast, the language models are comparatively more sensitive to words with a syntactic function. Further analysis of the most informative n-gram contexts for each model shows that in comparison with the Visual pathway, the language models react more strongly to abstract contexts that represent syntactic constructions. | Computational Linguistics | 43 | 10.1162/COLI_a_00300 | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,949 |
article | vulic-etal-2017-hyperlex | {H}yper{L}ex: A Large-Scale Evaluation of Graded Lexical Entailment | null | dec | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-4004/ | Vuli{\'c}, Ivan and Gerz, Daniela and Kiela, Douwe and Hill, Felix and Korhonen, Anna | null | 781--835 | We introduce HyperLex{---}a data set and evaluation resource that quantifies the extent of the semantic category membership, that is, type-of relation, also known as hyponymy{--}hypernymy or lexical entailment (LE) relation between 2,616 concept pairs. Cognitive psychology research has established that typicality and category/class membership are computed in human semantic memory as a gradual rather than binary relation. Nevertheless, most NLP research and existing large-scale inventories of concept category membership (WordNet, DBPedia, etc.) treat category membership and LE as binary. To address this, we asked hundreds of native English speakers to indicate typicality and strength of category membership between a diverse range of concept pairs on a crowdsourcing platform. Our results confirm that category membership and LE are indeed more gradual than binary. We then compare these human judgments with the predictions of automatic systems, which reveals a huge gap between human performance and state-of-the-art LE, distributional and representation learning models, and substantial differences between the models themselves. We discuss a pathway for improving semantic models to overcome this discrepancy, and indicate future application areas for improved graded LE systems. | Computational Linguistics | 43 | 10.1162/COLI_a_00301 | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,950 |
article | constant-etal-2017-survey | {S}urvey: Multiword Expression Processing: A {S}urvey | null | dec | 2017 | Cambridge, MA | MIT Press | https://aclanthology.org/J17-4005/ | Constant, Mathieu and Eryiǧit, G{\"u}l{\c{s}}en and Monti, Johanna and van der Plas, Lonneke and Ramisch, Carlos and Rosner, Michael and Todirascu, Amalia | null | 837--892 | Multiword expressions (MWEs) are a class of linguistic forms spanning conventional word boundaries that are both idiosyncratic and pervasive across different languages. The structure of linguistic processing that depends on the clear distinction between words and phrases has to be re-thought to accommodate MWEs. The issue of MWE handling is crucial for NLP applications, where it raises a number of challenges. The emergence of solutions in the absence of guiding principles motivates this survey, whose aim is not only to provide a focused review of MWE processing, but also to clarify the nature of interactions between MWE processing and downstream applications. We propose a conceptual framework within which challenges and research contributions can be positioned. It offers a shared understanding of what is meant by {\textquotedblleft}MWE processing,{\textquotedblright} distinguishing the subtasks of MWE discovery and identification. It also elucidates the interactions between MWE processing and two use cases: Parsing and machine translation. Many of the approaches in the literature can be differentiated according to how MWE processing is timed with respect to underlying use cases. We discuss how such orchestration choices affect the scope of MWE-aware systems. For each of the two MWE processing subtasks and for each of the two use cases, we conclude on open issues and research perspectives. | Computational Linguistics | 43 | 10.1162/COLI_a_00302 | null | null | null | null | null | null | 4 | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,951
inproceedings | belinkov-etal-2017-evaluating | Evaluating Layers of Representation in Neural Machine Translation on Part-of-Speech and Semantic Tagging Tasks | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1001/ | Belinkov, Yonatan and M{\`a}rquez, Llu{\'i}s and Sajjad, Hassan and Durrani, Nadir and Dalvi, Fahim and Glass, James | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 1--10 | While neural machine translation (NMT) models provide improved translation quality in an elegant framework, it is less clear what they learn about language. Recent work has started evaluating the quality of vector representations learned by NMT models on morphological and syntactic tasks. In this paper, we investigate the representations learned at different layers of NMT encoders. We train NMT systems on parallel data and use the models to extract features for training a classifier on two tasks: part-of-speech and semantic tagging. We then measure the performance of the classifier as a proxy to the quality of the original NMT model for the given task. Our quantitative analysis yields interesting insights regarding representation learning in NMT models. For instance, we find that higher layers are better at learning semantics while lower layers tend to be better for part-of-speech tagging. We also observe little effect of the target language on source-side representations, especially in higher quality models. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,956 |
inproceedings | chen-etal-2017-context | Context-Aware Smoothing for Neural Machine Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1002/ | Chen, Kehai and Wang, Rui and Utiyama, Masao and Sumita, Eiichiro and Zhao, Tiejun | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 11--20 | In Neural Machine Translation (NMT), each word is represented as a low-dimension, real-value vector for encoding its syntax and semantic information. This means that even if the word is in a different sentence context, it is represented as the fixed vector to learn source representation. Moreover, a large number of Out-Of-Vocabulary (OOV) words, which have different syntax and semantic information, are represented as the same vector representation of {\textquotedblleft}unk{\textquotedblright}. To alleviate this problem, we propose a novel context-aware smoothing method to dynamically learn a sentence-specific vector for each word (including OOV words) depending on its local context words in a sentence. The learned context-aware representation is integrated into the NMT to improve the translation performance. Empirical results on NIST Chinese-to-English translation task show that the proposed approach achieves 1.78 BLEU improvements on average over a strong attentional NMT, and outperforms some existing systems. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,957 |
inproceedings | nguyen-le-etal-2017-improving | Improving Sequence to Sequence Neural Machine Translation by Utilizing Syntactic Dependency Information | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1003/ | Nguyen Le, An and Martinez, Ander and Yoshimoto, Akifumi and Matsumoto, Yuji | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 21--29 | Sequence to Sequence Neural Machine Translation has achieved significant performance in recent years. Yet, there are some existing issues that Neural Machine Translation still does not solve completely. Two of them are translation of long sentences and {\textquotedblleft}over-translation{\textquotedblright}. To address these two problems, we propose an approach that utilizes more grammatical information such as syntactic dependencies, so that the output can be generated based on more abundant information. In our approach, syntactic dependencies are employed in decoding. In addition, the output of the model is presented not as a simple sequence of tokens but as a linearized tree construction. In order to assess the performance, we construct a model based on an attention mechanism encoder-decoder model in which the source language is input to the encoder as a sequence and the decoder generates the target language as a linearized dependency tree structure. Experiments on the Europarl-v7 dataset of French-to-English translation demonstrate that our proposed method improves BLEU scores by 1.57 and 2.40 on datasets consisting of sentences with up to 50 and 80 tokens, respectively. Furthermore, the proposed method also solved the two existing problems: ineffective translation of long sentences and over-translation in Neural Machine Translation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,958
inproceedings | ghader-monz-2017-attention | What does Attention in Neural Machine Translation Pay Attention to? | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1004/ | Ghader, Hamidreza and Monz, Christof | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 30--39 | Attention in neural machine translation provides the possibility to encode relevant parts of the source sentence at each translation step. As a result, attention is considered to be an alignment model as well. However, there is no work that specifically studies attention and provides analysis of what is being learned by attention models. Thus, the question remains of how attention is similar to or different from traditional alignment. In this paper, we provide a detailed analysis of attention and compare it to traditional alignment. We answer the question of whether attention is only capable of modelling translational equivalence or whether it captures more information. We show that attention is different from alignment in some cases and is capturing useful information other than alignments. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,959
inproceedings | kaneko-etal-2017-grammatical | Grammatical Error Detection Using Error- and Grammaticality-Specific Word Embeddings | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1005/ | Kaneko, Masahiro and Sakaizawa, Yuya and Komachi, Mamoru | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 40--48 | In this study, we improve grammatical error detection by learning word embeddings that consider grammaticality and error patterns. Most existing algorithms for learning word embeddings usually model only the syntactic context of words so that classifiers treat erroneous and correct words as similar inputs. We address the problem of contextual information by considering learner errors. Specifically, we propose two models: one model that employs grammatical error patterns and another model that considers grammaticality of the target word. We determine the grammaticality of an n-gram sequence from the annotated error tags and extract grammatical error patterns for word embeddings from large-scale learner corpora. Experimental results show that a bidirectional long short-term memory model initialized by our word embeddings achieved state-of-the-art accuracy by a large margin in an English grammatical error detection task on the First Certificate in English dataset. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,960
inproceedings | zhang-etal-2017-dependency-parsing | Dependency Parsing with Partial Annotations: An Empirical Comparison | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1006/ | Zhang, Yue and Li, Zhenghua and Lang, Jun and Xia, Qingrong and Zhang, Min | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 49--58 | This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graph-based parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,961 |
inproceedings | ma-hovy-2017-neural | Neural Probabilistic Model for Non-projective {MST} Parsing | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1007/ | Ma, Xuezhe and Hovy, Eduard | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 59--69 | In this paper, we propose a probabilistic parsing model that defines a proper conditional probability distribution over non-projective dependency trees for a given sentence, using neural representations as inputs. The neural network architecture is based on bi-directional LSTM-CNNs, which automatically benefits from both word- and character-level representations, by using a combination of bidirectional LSTMs and CNNs. On top of the neural network, we introduce a probabilistic structured layer, defining a conditional log-linear model over non-projective trees. By exploiting Kirchhoff's Matrix-Tree Theorem (Tutte, 1984), the partition functions and marginals can be computed efficiently, leading to a straightforward end-to-end model training procedure via back-propagation. We evaluate our model on 17 different datasets, across 14 different languages. Our parser achieves state-of-the-art parsing performance on nine datasets. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,962
inproceedings | nishida-nakayama-2017-word | Word Ordering as Unsupervised Learning Towards Syntactically Plausible Word Representations | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1008/ | Nishida, Noriki and Nakayama, Hideki | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 70--79 | The research question we explore in this study is how to obtain syntactically plausible word representations without using human annotations. Our underlying hypothesis is that word ordering tests, or linearizations, are suitable for learning syntactic knowledge about words. To verify this hypothesis, we develop a differentiable model called Word Ordering Network (WON) that explicitly learns to recover correct word order while implicitly acquiring word embeddings representing syntactic knowledge. We evaluate the word embeddings produced by the proposed method on downstream syntax-related tasks such as part-of-speech tagging and dependency parsing. The experimental results demonstrate that the WON consistently outperforms both order-insensitive and order-sensitive baselines on these tasks. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,963
inproceedings | kajiwara-etal-2017-mipa | {MIPA}: Mutual Information Based Paraphrase Acquisition via Bilingual Pivoting | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1009/ | Kajiwara, Tomoyuki and Komachi, Mamoru and Mochihashi, Daichi | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 80--89 | We present a pointwise mutual information (PMI)-based approach to formalize paraphrasability and propose a variant of PMI, called MIPA, for paraphrase acquisition. Our paraphrase acquisition method first acquires lexical paraphrase pairs by bilingual pivoting and then reranks them by PMI and distributional similarity. The complementary nature of information from bilingual corpora and from monolingual corpora makes the proposed method robust. Experimental results show that the proposed method substantially outperforms bilingual pivoting and distributional similarity themselves in terms of metrics such as MRR, MAP, coverage, and Spearman's correlation. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,964
inproceedings | do-etal-2017-improving | Improving Implicit Semantic Role Labeling by Predicting Semantic Frame Arguments | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1010/ | Do, Quynh Ngoc Thi and Bethard, Steven and Moens, Marie-Francine | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 90--99 | Implicit semantic role labeling (iSRL) is the task of predicting the semantic roles of a predicate that do not appear as explicit arguments, but rather regard common sense knowledge or are mentioned earlier in the discourse. We introduce an approach to iSRL based on a predictive recurrent neural semantic frame model (PRNSFM) that uses a large unannotated corpus to learn the probability of a sequence of semantic arguments given a predicate. We leverage the sequence probabilities predicted by the PRNSFM to estimate selectional preferences for predicates and their arguments. On the NomBank iSRL test set, our approach improves state-of-the-art performance on implicit semantic role labeling with less reliance than prior work on manually constructed language resources. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,965 |
inproceedings | lai-etal-2017-natural | Natural Language Inference from Multiple Premises | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1011/ | Lai, Alice and Bisk, Yonatan and Hockenmaier, Julia | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 100--109 | We define a novel textual entailment task that requires inference over multiple premise sentences. We present a new dataset for this task that minimizes trivial lexical inferences, emphasizes knowledge of everyday events, and presents a more challenging setting for textual entailment. We evaluate several strong neural baselines and analyze how the multiple premise task differs from standard textual entailment. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,966 |
inproceedings | wang-ku-2017-enabling | Enabling Transitivity for Lexical Inference on {C}hinese Verbs Using Probabilistic Soft Logic | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1012/ | Wang, Wei-Chung and Ku, Lun-Wei | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 110--119 | To learn more knowledge, enabling transitivity is a vital step for lexical inference. However, most of the lexical inference models with good performance are for nouns or noun phrases, which cannot be directly applied to the inference on events or states. In this paper, we construct the largest Chinese verb lexical inference dataset containing 18,029 verb pairs, where for each pair one of four inference relations is annotated. We further build a probabilistic soft logic (PSL) model to infer verb lexicons using the logic language. With PSL, we easily enable transitivity in two layers, the observed layer and the feature layer, which are included in the knowledge base. We further discuss the effect of transitives within and between these layers. Results show the performance of the proposed PSL model can be improved by at least 3.5{\%} (relative) when the transitivity is enabled. Furthermore, experiments show that enabling transitivity in the observed layer benefits the most. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,967
inproceedings | junczys-dowmunt-grundkiewicz-2017-exploration | An Exploration of Neural Sequence-to-Sequence Architectures for Automatic Post-Editing | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1013/ | Junczys-Dowmunt, Marcin and Grundkiewicz, Roman | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 120--129 | In this work, we explore multiple neural architectures adapted for the task of automatic post-editing of machine translation output. We focus on neural end-to-end models that combine both inputs $mt$ (raw MT output) and $src$ (source language input) in a single neural architecture, modeling $\{mt, src\} \rightarrow pe$ directly. Apart from that, we investigate the influence of hard-attention models which seem to be well-suited for monolingual tasks, as well as combinations of both ideas. We report results on data sets provided during the WMT-2016 shared task on automatic post-editing and can demonstrate that dual-attention models that incorporate all available data in the APE scenario in a single model improve on the best shared task system and on all other published results after the shared task. Dual-attention models that are combined with hard attention remain competitive despite applying fewer changes to the input. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,968 |
inproceedings | elliott-kadar-2017-imagination | Imagination Improves Multimodal Translation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1014/ | Elliott, Desmond and K{\'a}d{\'a}r, {\'A}kos | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 130--141 | We decompose multimodal translation into two sub-tasks: learning to translate and learning visually grounded representations. In a multitask learning framework, translations are learned in an attention-based encoder-decoder, and grounded representations are learned through image representation prediction. Our approach improves translation performance compared to the state of the art on the Multi30K dataset. Furthermore, it is equally effective if we train the image prediction task on the external MS COCO dataset, and we find improvements if we train the translation model on the external News Commentary parallel text. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,969 |
inproceedings | dalvi-etal-2017-understanding | Understanding and Improving Morphological Learning in the Neural Machine Translation Decoder | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1015/ | Dalvi, Fahim and Durrani, Nadir and Sajjad, Hassan and Belinkov, Yonatan and Vogel, Stephan | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 142--151 | End-to-end training makes the neural machine translation (NMT) architecture simpler, yet elegant compared to traditional statistical machine translation (SMT). However, little is known about linguistic patterns of morphology, syntax and semantics learned during the training of NMT systems, and more importantly, which parts of the architecture are responsible for learning each of these phenomena. In this paper we i) analyze how much morphology an NMT decoder learns, and ii) investigate whether injecting target morphology in the decoder helps it to produce better translations. To this end we present three methods: i) simultaneous translation, ii) joint-data learning, and iii) multi-task learning. Our results show that explicit morphological information helps the decoder learn target language morphology and improves the translation quality by 0.2{--}0.6 BLEU points. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,970
inproceedings | zhang-etal-2017-improving | Improving Neural Machine Translation through Phrase-based Forced Decoding | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1016/ | Zhang, Jingyi and Utiyama, Masao and Sumita, Eiichro and Neubig, Graham and Nakamura, Satoshi | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 152--162 | Compared to traditional statistical machine translation (SMT), neural machine translation (NMT) often sacrifices adequacy for the sake of fluency. We propose a method to combine the advantages of traditional SMT and NMT by exploiting an existing phrase-based SMT model to compute the phrase-based decoding cost for an NMT output and then using the phrase-based decoding cost to rerank the n-best NMT outputs. The main challenge in implementing this approach is that NMT outputs may not be in the search space of the standard phrase-based decoding algorithm, because the search space of phrase-based SMT is limited by the phrase-based translation rule table. We propose a soft forced decoding algorithm, which can always successfully find a decoding path for any NMT output. We show that using the forced decoding cost to rerank the NMT outputs can successfully improve translation quality on four different language pairs. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,971 |
inproceedings | wang-xu-2017-convolutional | Convolutional Neural Network with Word Embeddings for {C}hinese Word Segmentation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1017/ | Wang, Chunqi and Xu, Bo | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 163--172 | Character-based sequence labeling framework is flexible and efficient for Chinese word segmentation (CWS). Recently, many character-based neural models have been applied to CWS. While they obtain good performance, they have two obvious weaknesses. The first is that they heavily rely on manually designed bigram feature, i.e. they are not good at capturing $n$-gram features automatically. The second is that they make no use of full word information. For the first weakness, we propose a convolutional neural model, which is able to capture rich $n$-gram features without any feature engineering. For the second one, we propose an effective approach to integrate the proposed model with word embeddings. We evaluate the model on two benchmark datasets: PKU and MSR. Without any feature engineering, the model obtains competitive performance {---} 95.7{\%} on PKU and 97.3{\%} on MSR. Armed with word embeddings, the model achieves state-of-the-art performance on both datasets {---} 96.5{\%} on PKU and 98.0{\%} on MSR, without using any external labeled resource. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,972 |
inproceedings | shao-etal-2017-character | Character-based Joint Segmentation and {POS} Tagging for {C}hinese using Bidirectional {RNN}-{CRF} | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1018/ | Shao, Yan and Hardmeier, Christian and Tiedemann, J{\"o}rg and Nivre, Joakim | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 173--183 | We present a character-based model for joint segmentation and POS tagging for Chinese. The bidirectional RNN-CRF architecture for general sequence tagging is adapted and applied with novel vector representations of Chinese characters that capture rich contextual information and lower-than-character level features. The proposed model is extensively evaluated and compared with a state-of-the-art tagger respectively on CTB5, CTB9 and UD Chinese. The experimental results indicate that our model is accurate and robust across datasets in different sizes, genres and annotation schemes. We obtain state-of-the-art performance on CTB5, achieving 94.38 F1-score for joint segmentation and POS tagging. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,973
inproceedings | huang-etal-2017-addressing | Addressing Domain Adaptation for {C}hinese Word Segmentation with Global Recurrent Structure | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1019/ | Huang, Shen and Sun, Xu and Wang, Houfeng | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 184--193 | Boundary features are widely used in traditional Chinese Word Segmentation (CWS) methods as they can utilize unlabeled data to help improve the Out-of-Vocabulary (OOV) word recognition performance. Although various neural network methods for CWS have achieved performance competitive with state-of-the-art systems, these methods, constrained by the domain and size of the training corpus, do not work well in domain adaptation. In this paper, we propose a novel BLSTM-based neural network model which incorporates a global recurrent structure designed for modeling boundary features dynamically. Experiments show that the proposed structure can effectively boost the performance of Chinese Word Segmentation, especially OOV-Recall, which brings benefits to domain adaptation. We achieved state-of-the-art results on 6 domains of CNKI articles, and competitive results to the best reported on the 4 domains of SIGHAN Bakeoff 2010 data. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,974 |
inproceedings | vishal-etal-2017-information | Information Bottleneck Inspired Method For Chat Text Segmentation | Kondrak, Greg and Watanabe, Taro | nov | 2017 | Taipei, Taiwan | Asian Federation of Natural Language Processing | https://aclanthology.org/I17-1020/ | Vishal, S and Yadav, Mohit and Vig, Lovekesh and Shroff, Gautam | Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers) | 194--203 | We present a novel technique for segmenting chat conversations using the information bottleneck method (Tishby et al., 2000), augmented with sequential continuity constraints. Furthermore, we utilize critical non-textual clues such as time between two consecutive posts and people mentions within the posts. To ascertain the effectiveness of the proposed method, we have collected data from public Slack conversations and Fresco, a proprietary platform deployed inside our organization. Experiments demonstrate that the proposed method yields an absolute (relative) improvement of as high as 3.23{\%} (11.25{\%}). To facilitate future research, we are releasing manual annotations for segmentation on public Slack conversations. | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | null | 56,975 |