Titles | Abstracts | Years | Categories |
---|---|---|---|
In-Order Transition-based Constituent Parsing | Both bottom-up and top-down strategies have been used for neural
transition-based constituent parsing. The parsing strategies differ in terms of
the order in which they recognize productions in the derivation tree, where
bottom-up strategies and top-down strategies take post-order and pre-order
traversal over trees, respectively. Bottom-up parsers benefit from rich
features from readily built partial parses, but lack lookahead guidance in the
parsing process; top-down parsers benefit from non-local guidance for local
decisions, but rely on a strong encoder over the input to predict a constituent
hierarchy before its construction. To mitigate both issues, we propose a novel
parsing system based on in-order traversal over syntactic trees, designing a
set of transition actions to find a compromise between bottom-up constituent
information and top-down lookahead information. Based on stack-LSTM, our
psycholinguistically motivated constituent parsing system achieves 91.8 F1 on
the WSJ benchmark. Furthermore, the system achieves 93.6 F1 with supervised
reranking and 94.2 F1 with semi-supervised reranking, which are the best
results on the WSJ benchmark.
| 2,017 | Computation and Language |
Towards Bidirectional Hierarchical Representations for Attention-Based
Neural Machine Translation | This paper proposes a hierarchical attentional neural translation model which
focuses on enhancing source-side hierarchical representations by covering both
local and global semantic information using a bidirectional tree-based encoder.
To maximize the predictive likelihood of target words, a weighted variant of an
attention mechanism is used to balance the attentive information between
lexical and phrase vectors. Using a tree-based rare word encoding, the proposed
model is extended to sub-word level to alleviate the out-of-vocabulary (OOV)
problem. Empirical results reveal that the proposed model significantly
outperforms sequence-to-sequence attention-based and tree-based neural
translation models in English-Chinese translation tasks.
| 2,017 | Computation and Language |
To Normalize, or Not to Normalize: The Impact of Normalization on
Part-of-Speech Tagging | Does normalization help Part-of-Speech (POS) tagging accuracy on noisy,
non-canonical data? To the best of our knowledge, little is known about the actual
impact of normalization in a real-world scenario, where gold error detection is
not available. We investigate the effect of automatic normalization on POS
tagging of tweets. We also compare normalization to strategies that leverage
large amounts of unlabeled data kept in its raw form. Our results show that
normalization helps, but does not add consistently beyond just word embedding
layer initialization. The latter approach yields a tagging model that is
competitive with a Twitter state-of-the-art tagger.
| 2,017 | Computation and Language |
LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task | This paper presents the LIG-CRIStAL submission to the shared Automatic
Post-Editing task of WMT 2017. We propose two neural post-editing models: a
monosource model with a task-specific attention mechanism, which performs
particularly well in a low-resource scenario; and a chained architecture which
makes use of the source sentence to provide extra context. This latter
architecture manages to slightly improve our results when more training data is
available. We present and discuss our results on two datasets (en-de and de-en)
that are made available for the task.
| 2,017 | Computation and Language |
Neural Reranking for Named Entity Recognition | We propose a neural reranking system for named entity recognition (NER). The
basic idea is to leverage recurrent neural network models to learn
sentence-level patterns that involve named entity mentions. In particular,
given an output sentence produced by a baseline NER model, we replace all
entity mentions, such as \textit{Barack Obama}, with their entity types, such
as \textit{PER}. The resulting sentence patterns contain direct output
information, yet are less sparse without specific named entities. For example,
"PER was born in LOC" can be such a pattern. LSTM and CNN structures are
utilised for learning deep representations of such sentences for reranking.
Results show that our system can significantly improve the NER accuracies over
two different baselines, giving the best reported results on a standard
benchmark.
| 2,017 | Computation and Language |
Auxiliary Objectives for Neural Error Detection Models | We investigate the utility of different auxiliary objectives and training
strategies within a neural sequence labeling approach to error detection in
learner writing. Auxiliary costs provide the model with additional linguistic
information, allowing it to learn general-purpose compositional features that
can then be exploited for other objectives. Our experiments show that a joint
learning approach trained with parallel labels on in-domain data improves
performance over the previous best error detection system. While the resulting
model has the same number of parameters, the additional objectives allow it to
be optimised more efficiently and achieve better performance.
| 2,017 | Computation and Language |
Detecting Off-topic Responses to Visual Prompts | Automated methods for essay scoring have made great progress in recent years,
achieving accuracies very close to those of human annotators. However, a known
weakness of such automated scorers is that they do not take into account the
semantic relevance of the submitted text. While there is existing work on detecting answer relevance
given a textual prompt, very little previous research has been done to
incorporate visual writing prompts. We propose a neural architecture and
several extensions for detecting off-topic responses to visual prompts and
evaluate it on a dataset of texts written by language learners.
| 2,017 | Computation and Language |
Artificial Error Generation with Machine Translation and Syntactic
Patterns | A shortage of available training data is holding back progress in the area of
automated error detection. This paper investigates two alternative methods for
artificially generating writing errors, in order to create additional
resources. We propose treating error generation as a machine translation task,
where grammatically correct text is translated to contain errors. In addition,
we explore a system for extracting textual patterns from an annotated corpus,
which can then be used to insert errors into grammatically correct sentences.
Our experiments show that the inclusion of artificially generated errors
significantly improves error detection accuracy on both FCE and CoNLL 2014
datasets.
| 2,017 | Computation and Language |
Learning to select data for transfer learning with Bayesian Optimization | Domain similarity measures can be used to gauge adaptability and select
suitable data for transfer learning, but existing approaches define ad hoc
measures that are deemed suitable for respective tasks. Inspired by work on
curriculum learning, we propose to \emph{learn} data selection measures using
Bayesian Optimization and evaluate them across models, domains and tasks. Our
learned measures outperform existing domain similarity measures significantly
on three tasks: sentiment analysis, part-of-speech tagging, and parsing. We
show the importance of complementing similarity with diversity, and that
learned measures are -- to some degree -- transferable across models, domains,
and even tasks.
| 2,017 | Computation and Language |
Exploring text datasets by visualizing relevant words | When working with a new dataset, it is important to first explore and
familiarize oneself with it, before applying any advanced machine learning
algorithms. However, to the best of our knowledge, no tools exist that quickly
and reliably give insight into the contents of a selection of documents with
respect to what distinguishes them from other documents belonging to different
categories. In this paper we propose to extract `relevant words' from a
collection of texts, which summarize the contents of documents belonging to a
certain class (or discovered cluster in the case of unlabeled datasets), and
visualize them in word clouds to allow for a survey of salient features at a
glance. We compare three methods for extracting relevant words and demonstrate
the usefulness of the resulting word clouds by providing an overview of the
classes contained in a dataset of scientific publications as well as by
discovering trending topics from recent New York Times article snippets.
| 2,017 | Computation and Language |
A Simple Language Model based on PMI Matrix Approximations | In this study, we introduce a new approach for learning language models by
training them to estimate word-context pointwise mutual information (PMI), and
then deriving the desired conditional probabilities from PMI at test time.
Specifically, we show that with minor modifications to word2vec's algorithm, we
get principled language models that are closely related to the well-established
Noise Contrastive Estimation (NCE) based language models. A compelling aspect
of our approach is that our models are trained with the same simple negative
sampling objective function that is commonly used in word2vec to learn word
embeddings.
| 2,017 | Computation and Language |
MAG: A Multilingual, Knowledge-base Agnostic and Deterministic Entity
Linking Approach | Entity linking has recently been the subject of a significant body of
research. Currently, the best performing approaches rely on trained
mono-lingual models. Porting these approaches to other languages is
consequently a difficult endeavor as it requires corresponding training data
and retraining of the models. We address this drawback by presenting a novel
multilingual, knowledge-base agnostic and deterministic approach to entity
linking, dubbed MAG. MAG is based on a combination of context-based retrieval
on structured knowledge bases and graph algorithms. We evaluate MAG on 23 data
sets and in 7 languages. Our results show that the best approach trained on
English datasets (PBOH) achieves a micro F-measure that is up to 4 times worse
on datasets in other languages. MAG, on the other hand, achieves
state-of-the-art performance on English datasets and reaches a micro F-measure
that is up to 0.6 higher than that of PBOH on non-English languages.
| 2,017 | Computation and Language |
Unsupervised Iterative Deep Learning of Speech Features and Acoustic
Tokens with Applications to Spoken Term Detection | In this paper we aim to automatically discover high quality frame-level
speech features and acoustic tokens directly from unlabeled speech data. A
Multi-granular Acoustic Tokenizer (MAT) was proposed for automatic discovery of
multiple sets of acoustic tokens from the given corpus. Each acoustic token set
is specified by a set of hyperparameters describing the model configuration.
These different sets of acoustic tokens carry different characteristics of the
given corpus and the underlying language, and thus can be mutually reinforced. The
multiple sets of token labels are then used as the targets of a Multi-target
Deep Neural Network (MDNN) trained on frame-level acoustic features. Bottleneck
features extracted from the MDNN are then used as the feedback input to the MAT
and the MDNN itself in the next iteration. The multi-granular acoustic token
sets and the frame-level speech features can be iteratively optimized in the
iterative deep learning framework. We call this framework the Multi-granular
Acoustic Tokenizing Deep Neural Network (MATDNN). The results were evaluated
using the metrics and corpora defined in the Zero Resource Speech Challenge
organized at Interspeech 2015, and improved performance was obtained with a set
of experiments of query-by-example spoken term detection on the same corpora.
Visualization for the discovered tokens against the English phonemes was also
shown.
| 2,017 | Computation and Language |
Improved Neural Machine Translation with a Syntax-Aware Encoder and
Decoder | Most neural machine translation (NMT) models are based on the sequential
encoder-decoder framework, which makes no use of syntactic information. In this
paper, we improve this model by explicitly incorporating source-side syntactic
trees. More specifically, we propose (1) a bidirectional tree encoder which
learns both sequential and tree structured representations; (2) a tree-coverage
model that lets the attention depend on the source-side syntax. Experiments on
Chinese-English translation demonstrate that our proposed models outperform the
sequential attentional model as well as a stronger baseline with a bottom-up
tree encoder and word coverage.
| 2,017 | Computation and Language |
Top-Rank Enhanced Listwise Optimization for Statistical Machine
Translation | Pairwise ranking methods are the basis of many widely used discriminative
training approaches for structure prediction problems in natural language
processing (NLP). Decomposing the problem of ranking hypotheses into pairwise
comparisons enables simple and efficient solutions. However, neglecting the
global ordering of the hypothesis list may hinder learning. We propose a
listwise learning framework for structure prediction problems such as machine
translation. Our framework directly models the entire translation list's
ordering to learn parameters which may better fit the given listwise samples.
Furthermore, we propose top-rank enhanced loss functions, which are more
sensitive to ranking errors at higher positions. Experiments on a large-scale
Chinese-English translation task show that both our listwise learning framework
and top-rank enhanced listwise losses lead to significant improvements in
translation quality.
| 2,017 | Computation and Language |
Detecting Intentional Lexical Ambiguity in English Puns | The article describes a model of automatic analysis of puns, where a word is
intentionally used in two meanings at the same time (the target word). We
employ Roget's Thesaurus to discover two groups of words which, in a pun, form
around two abstract bits of meaning (semes). They become a semantic vector,
based on which an SVM classifier learns to recognize puns, reaching an
F-measure of 0.73. We apply several rule-based methods to locate intentionally
ambiguous (target) words, based on structural and semantic criteria. It appears
that the structural criterion is more effective, although it possibly
characterizes only the tested dataset. The results we obtain correlate with
those of other teams at the SemEval-2017 competition (Task 7: Detection and
Interpretation of English Puns), considering the effects of using supervised
learning models and word statistics.
| 2,017 | Computation and Language |
PunFields at SemEval-2017 Task 7: Employing Roget's Thesaurus in
Automatic Pun Recognition and Interpretation | The article describes a model of automatic interpretation of English puns,
based on Roget's Thesaurus, and its implementation, PunFields. In a pun, the
algorithm discovers two groups of words that belong to two main semantic
fields. The fields become a semantic vector based on which an SVM classifier
learns to recognize puns. A rule-based model is then applied for recognition of
intentionally ambiguous (target) words and their definitions. In SemEval Task 7
PunFields shows a reasonably good result in pun classification, but requires
improvement in searching for the target word and its definition.
| 2,017 | Computation and Language |
A Comparative Analysis of Social Network Pages by Interests of Their
Followers | Being a matter of cognition, user interests should lend themselves to classification
independent of the users' language, the social network, and the content of interest
itself. To prove this, we analyze a collection of English and Russian Twitter and
Vkontakte community pages by interests of their followers. First, we create a
model of Major Interests (MaIs) with the help of expert analysis and then
classify a set of pages using machine learning algorithms (SVM, Neural Network,
Naive Bayes, and some others). We take three interest domains that are typical
of both English and Russian-speaking communities: football, rock music,
vegetarianism. The results of classification show a greater correlation between
Russian-Vkontakte and Russian-Twitter pages, while English-Twitter pages appear
to provide the highest score.
| 2,017 | Computation and Language |
Story Generation from Sequence of Independent Short Descriptions | Existing Natural Language Generation (NLG) systems are weak AI systems and
exhibit limited capabilities when language generation tasks demand higher
levels of creativity, originality and brevity. Effective solutions, or at least
evaluations, of modern NLG paradigms for such creative tasks have unfortunately
been elusive. This paper introduces and addresses the task of coherent story
generation from independent descriptions, each describing a scene or an event.
Towards this, we explore two popular text-generation paradigms: (1)
Statistical Machine Translation (SMT), posing story generation as a translation
problem, and (2) Deep Learning, posing story generation as a sequence-to-sequence
learning problem. In SMT, we chose two popular methods, phrase-based SMT (PB-SMT)
and syntax-based SMT (SYNTAX-SMT), to `translate' the incoherent input text into
stories. We then implement a deep recurrent neural network (RNN) architecture
that encodes sequences of variable-length input descriptions into corresponding
latent representations and decodes them to produce well-formed, comprehensive,
story-like summaries. The efficacy of the
suggested approaches is demonstrated on a publicly available dataset with the
help of popular machine translation and summarization evaluation metrics.
| 2,017 | Computation and Language |
On the State of the Art of Evaluation in Neural Language Models | Ongoing innovations in recurrent neural network architectures have provided a
steady influx of apparently state-of-the-art results on language modelling
benchmarks. However, these have been evaluated using differing code bases and
limited computational resources, which represent uncontrolled sources of
experimental variation. We reevaluate several popular architectures and
regularisation methods with large-scale automatic black-box hyperparameter
tuning and arrive at the somewhat surprising conclusion that standard LSTM
architectures, when properly regularised, outperform more recent models. We
establish a new state of the art on the Penn Treebank and Wikitext-2 corpora,
as well as strong baselines on the Hutter Prize dataset.
| 2,017 | Computation and Language |
Spherical Paragraph Model | Representing texts as fixed-length vectors is central to many language
processing tasks. Most traditional methods build text representations based on
the simple Bag-of-Words (BoW) representation, which loses the rich semantic
relations between words. Recent advances in natural language processing have
shown that semantically meaningful representations of words can be efficiently
acquired by distributed models, making it possible to build text
representations based on a better foundation called the Bag-of-Word-Embedding
(BoWE) representation. However, existing text representation methods using BoWE
often lack sound probabilistic foundations or cannot well capture the semantic
relatedness encoded in word vectors. To address these problems, we introduce
the Spherical Paragraph Model (SPM), a probabilistic generative model based on
BoWE, for text representation. SPM has good probabilistic interpretability and
can fully leverage the rich semantics of words, the word co-occurrence
information as well as the corpus-wide information to help the representation
learning of texts. Experimental results on topical classification and sentiment
analysis demonstrate that SPM can achieve new state-of-the-art performances on
several benchmark datasets.
| 2,017 | Computation and Language |
A Short Survey of Biomedical Relation Extraction Techniques | Biomedical information has been growing rapidly in recent years, and retrieving
useful data through information extraction systems is getting more attention. In
the current research, we focus on different aspects of relation extraction
techniques in biomedical domain and briefly describe the state-of-the-art for
relation extraction between a variety of biological elements.
| 2,017 | Computation and Language |
Encoding Word Confusion Networks with Recurrent Neural Networks for
Dialog State Tracking | This paper presents our novel method to encode word confusion networks, which
can represent a rich hypothesis space of automatic speech recognition systems,
via recurrent neural networks. We demonstrate the utility of our approach for
the task of dialog state tracking in spoken dialog systems that relies on
automatic speech recognition output. Encoding confusion networks outperforms
encoding the best hypothesis of the automatic speech recognition in a neural
system for dialog state tracking on the well-known second Dialog State Tracking
Challenge dataset.
| 2,017 | Computation and Language |
Deep Active Learning for Named Entity Recognition | Deep learning has yielded state-of-the-art performance on many natural
language processing tasks including named entity recognition (NER). However,
this typically requires large amounts of labeled data. In this work, we
demonstrate that the amount of labeled training data can be drastically reduced
when deep learning is combined with active learning. While active learning is
sample-efficient, it can be computationally expensive since it requires
iterative retraining. To speed this up, we introduce a lightweight architecture
for NER, viz., the CNN-CNN-LSTM model consisting of convolutional character and
word encoders and a long short-term memory (LSTM) tag decoder. The model
achieves nearly state-of-the-art performance on standard datasets for the task
while being computationally much more efficient than best performing models. We
carry out incremental active learning, during the training process, and are
able to nearly match state-of-the-art performance with just 25\% of the
original training data.
| 2,018 | Computation and Language |
Measuring Thematic Fit with Distributional Feature Overlap | In this paper, we introduce a new distributional method for modeling
predicate-argument thematic fit judgments. We use a syntax-based DSM to build a
prototypical representation of verb-specific roles: for every verb, we extract
the most salient second order contexts for each of its roles (i.e. the most
salient dimensions of typical role fillers), and then we compute thematic fit
as a weighted overlap between the top features of candidate fillers and role
prototypes. Our experiments show that our method consistently outperforms a
baseline re-implementing a state-of-the-art system, and achieves better or
comparable results to those reported in the literature for the other
unsupervised systems. Moreover, it provides an explicit representation of the
features characterizing verb-specific semantic roles.
| 2,017 | Computation and Language |
Argotario: Computational Argumentation Meets Serious Games | An important skill in critical thinking and argumentation is the ability to
spot and recognize fallacies. Fallacious arguments, omnipresent in
argumentative discourse, can be deceptive, manipulative, or simply leading to
`wrong moves' in a discussion. Despite their importance, argumentation scholars
and NLP researchers with a focus on argumentation quality have not yet
investigated fallacies empirically. The nonexistence of resources dealing with
fallacious argumentation calls for scalable approaches to data acquisition and
annotation, for which the serious games methodology offers an appealing, yet
unexplored, alternative. We present Argotario, a serious game that deals with
fallacies in everyday argumentation. Argotario is a multilingual, open-source,
platform-independent application with strong educational aspects, accessible at
www.argotario.net.
| 2,022 | Computation and Language |
Modeling Target-Side Inflection in Neural Machine Translation | NMT systems have problems with large vocabulary sizes. Byte-pair encoding
(BPE) is a popular approach to solving this problem, but while BPE allows the
system to generate any target-side word, it does not enable effective
generalization over the rich vocabulary in morphologically rich languages with
strong inflectional phenomena. We introduce a simple approach to overcome this
problem by training a system to produce the lemma of a word and its
morphologically rich POS tag, which is then followed by a deterministic
generation step. We apply this strategy for English-Czech and English-German
translation scenarios, obtaining improvements in both settings. We furthermore
show that the improvement is not due to only adding explicit morphological
information.
| 2,017 | Computation and Language |
Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in
Speech Recognition | Layer normalization is a recently introduced technique for normalizing the
activities of neurons in deep neural networks to improve the training speed and
stability. In this paper, we introduce a new layer normalization technique
called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling
in speech recognition. By dynamically generating the scaling and shifting
parameters in layer normalization, DLN adapts neural acoustic models to the
acoustic variability arising from various factors such as speakers, channel
noises, and environments. Unlike other adaptive acoustic models, our proposed
approach does not require additional adaptation data or speaker information
such as i-vectors. Moreover, the model size is fixed as it dynamically
generates adaptation parameters. We apply our proposed DLN to deep
bidirectional LSTM acoustic models and evaluate them on two benchmark datasets
for large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The
experimental results show that our DLN improves neural acoustic models in terms
of transcription accuracy by dynamically adapting to various speakers and
environments.
| 2,017 | Computation and Language |
Discovering topics in text datasets by visualizing relevant words | When dealing with large collections of documents, it is imperative to quickly
get an overview of the texts' contents. In this paper we show how this can be
achieved by using a clustering algorithm to identify topics in the dataset and
then selecting and visualizing relevant words, which distinguish a group of
documents from the rest of the texts, to summarize the contents of the
documents belonging to each topic. We demonstrate our approach by discovering
trending topics in a collection of New York Times article snippets.
| 2,017 | Computation and Language |
Improving Language Modeling using Densely Connected Recurrent Neural
Networks | In this paper, we introduce the novel concept of densely connected layers
into recurrent neural networks. We evaluate our proposed architecture on the
Penn Treebank language modeling task. We show that we can obtain similar
perplexity scores with six times fewer parameters compared to a standard
stacked 2-layer LSTM model trained with dropout (Zaremba et al. 2014). In
contrast with the current usage of skip connections, we show that densely
connecting only a few stacked layers with skip connections already yields
significant perplexity reductions.
| 2,017 | Computation and Language |
Expect the unexpected: Harnessing Sentence Completion for Sarcasm
Detection | The trigram `I love being' is expected to be followed by positive words such
as `happy'. In a sarcastic sentence, however, the word `ignored' may be
observed. The expected and the observed words are, thus, incongruous. We model
sarcasm detection as the task of detecting incongruity between an observed and
an expected word. In order to obtain the expected word, we use Context2Vec, a
sentence completion library based on Bidirectional LSTM. However, since the
exact word where such an incongruity occurs may not be known in advance, we
present two approaches: an All-words approach (which consults sentence
completion for every content word) and an Incongruous words-only approach
(which consults sentence completion for the 50% most incongruous content
words). The approaches outperform reported values for tweets but not for
discussion forum posts. This is likely to be because of redundant consultation
of sentence completion for discussion forum posts. Therefore, we consider an
oracle case where the exact incongruous word is manually labeled in a corpus
reported in past work. In this case, the performance is higher than the
all-words approach. This sets up the promise for using sentence completion for
sarcasm detection.
| 2,017 | Computation and Language |
Sentence-level quality estimation by predicting HTER as a
multi-component metric | This submission investigates alternative machine learning models for
predicting the HTER score on the sentence level. Instead of directly predicting
the HTER score, we suggest a model that jointly predicts the amount of the 4
distinct post-editing operations, which are then used to calculate the HTER
score. This also gives the possibility to correct invalid (e.g. negative)
predicted values prior to the calculation of the HTER score. Without any
feature exploration, a multi-layer perceptron with 4 outputs yields small but
significant improvements over the baseline.
| 2,017 | Computation and Language |
Fast and Accurate OOV Decoder on High-Level Features | This work proposes a novel approach to the out-of-vocabulary (OOV) keyword search
(KWS) task. The proposed approach is based on using high-level features from an
automatic speech recognition (ASR) system, so called phoneme posterior based
(PPB) features, for decoding. These features are obtained by calculating
time-dependent phoneme posterior probabilities from word lattices, followed by
their smoothing. For the PPB features, we developed a novel, very fast,
simple and efficient OOV decoder. Experimental results are presented on the
Georgian language from the IARPA Babel Program, which was the test language in
the OpenKWS 2016 evaluation campaign. The results show that, in terms of the maximum
term weighted value (MTWV) metric and computational speed, for single ASR
systems, the proposed approach significantly outperforms the state-of-the-art
approach based on using in-vocabulary proxies for OOV keywords in the indexed
database. The comparison of the two OOV KWS approaches on the fusion results of
the nine different ASR systems demonstrates that the proposed OOV decoder
outperforms the proxy-based approach in terms of the MTWV metric at a
comparable processing speed. Other important advantages of the OOV decoder
include extremely low memory consumption and simplicity of its implementation
and parameter optimization.
| 2,021 | Computation and Language |
The Role of Conversation Context for Sarcasm Detection in Online
Interactions | Computational models for sarcasm detection have often relied on the content
of utterances in isolation. However, a speaker's sarcastic intent is not always
obvious without additional context. Focusing on social media discussions, we
investigate two issues: (1) does modeling of conversation context help in
sarcasm detection and (2) can we understand what part of conversation context
triggered the sarcastic reply. To address the first issue, we investigate
several types of Long Short-Term Memory (LSTM) networks that can model both the
conversation context and the sarcastic response. We show that the conditional
LSTM network (Rocktaschel et al., 2015) and LSTM networks with sentence level
attention on context and response outperform the LSTM model that reads only the
response. To address the second issue, we present a qualitative analysis of
attention weights produced by the LSTM models with attention and discuss the
results compared with human performance on the task.
| 2,017 | Computation and Language |
Unsupervised Domain Adaptation for Robust Speech Recognition via
Variational Autoencoder-Based Data Augmentation | Domain mismatch between training and testing can lead to significant
degradation in performance in many machine learning scenarios. Unfortunately,
this is not a rare situation for automatic speech recognition deployments in
real-world applications. Research on robust speech recognition can be regarded
as trying to overcome this domain mismatch issue. In this paper, we address the
unsupervised domain adaptation problem for robust speech recognition, where
both source and target domain speech are presented, but word transcripts are
only available for the source domain speech. We present novel
augmentation-based methods that transform speech in a way that does not change
the transcripts. Specifically, we first train a variational autoencoder on both
source and target domain data (without supervision) to learn a latent
representation of speech. We then transform nuisance attributes of speech that
are irrelevant to recognition by modifying the latent representations, in order
to augment labeled training data with additional data whose distribution is
more similar to the target domain. The proposed method is evaluated on the
CHiME-4 dataset and reduces the absolute word error rate (WER) by as much as
35% compared to the non-adapted baseline.
| 2,017 | Computation and Language |
Reward-Balancing for Statistical Spoken Dialogue Systems using
Multi-objective Reinforcement Learning | Reinforcement learning is widely used for dialogue policy optimization where
the reward function often consists of more than one component, e.g., the
dialogue success and the dialogue length. In this work, we propose a structured
method for finding a good balance between these components by searching for the
optimal reward component weighting. To render this search feasible, we use
multi-objective reinforcement learning to significantly reduce the number of
training dialogues required. We apply our proposed method to find optimized
component weights for six domains and compare them to a default baseline.
| 2,017 | Computation and Language |
Learning Visually Grounded Sentence Representations | We introduce a variety of models, trained on a supervised image captioning
corpus to predict the image features for a given caption, to perform sentence
representation grounding. We train a grounded sentence encoder that achieves
good performance on COCO caption and image retrieval and subsequently show that
this encoder can successfully be transferred to various NLP tasks, with
improved performance over text-only models. Lastly, we analyze the contribution
of grounding, and show that word embeddings learned by this system outperform
non-grounded ones.
| 2,018 | Computation and Language |
A Sub-Character Architecture for Korean Language Processing | We introduce a novel sub-character architecture that exploits a unique
compositional structure of the Korean language. Our method decomposes each
character into a small set of primitive phonetic units called jamo letters from
which character- and word-level representations are induced. The jamo letters
divulge syntactic and semantic information that is difficult to access with
conventional character-level units. They greatly alleviate the data sparsity
problem, reducing the observation space to 1.6% of the original while
increasing accuracy in our experiments. We apply our architecture to dependency
parsing and achieve dramatic improvement over strong lexical baselines.
| 2,017 | Computation and Language |
Improving Discourse Relation Projection to Build Discourse Annotated
Corpora | The naive approach to annotation projection is not effective for projecting
discourse annotations from one language to another because implicit discourse
relations are often changed to explicit ones and vice-versa in the translation.
In this paper, we propose a novel approach based on the intersection between
statistical word-alignment models to identify unsupported discourse
annotations. This approach identified 65% of the unsupported annotations in the
English-French parallel sentences from Europarl. By filtering out these
unsupported annotations, we induced the first PDTB-style discourse annotated
corpus for French from Europarl. We then used this corpus to train a classifier
to identify the discourse usage of French discourse connectives and show a 15%
improvement in F1-score compared to the classifier trained on the non-filtered
annotations.
| 2,017 | Computation and Language |
Large-Scale Goodness Polarity Lexicons for Community Question Answering | We transfer a key idea from the field of sentiment analysis to a new domain:
community question answering (cQA). The cQA task we are interested in is the
following: given a question and a thread of comments, we want to re-rank the
comments so that the ones that are good answers to the question would be ranked
higher than the bad ones. We notice that good vs. bad comments use specific
vocabulary and that one can often predict the goodness/badness of a comment
even ignoring the question, based on the comment contents only. This leads us
to the idea to build a good/bad polarity lexicon as an analogy to the
positive/negative sentiment polarity lexicons, commonly used in sentiment
analysis. In particular, we use pointwise mutual information in order to build
large-scale goodness polarity lexicons in a semi-supervised manner starting
with a small number of initial seeds. The evaluation results show an
improvement of 0.7 MAP points absolute over a very strong baseline and
state-of-the-art performance on SemEval-2016 Task 3.
| 2,017 | Computation and Language |
Revisiting Selectional Preferences for Coreference Resolution | Selectional preferences have long been claimed to be essential for
coreference resolution. However, they are mainly modeled only implicitly by
current coreference resolvers. We propose a dependency-based embedding model of
selectional preferences which allows fine-grained compatibility judgments with
high coverage. We show that the incorporation of our model improves coreference
resolution performance on the CoNLL dataset, matching the state-of-the-art
results of a more complex system. However, it comes with a cost that makes it
debatable how worthwhile such improvements are.
| 2,017 | Computation and Language |
Syllable-aware Neural Language Models: A Failure to Beat Character-aware
Ones | Syllabification does not seem to improve word-level RNN language modeling
quality when compared to character-based segmentation. However, our best
syllable-aware language model, achieving performance comparable to the
competitive character-aware model, has 18%-33% fewer parameters and is trained
1.2-2.2 times faster.
| 2,017 | Computation and Language |
Language Transfer of Audio Word2Vec: Learning Audio Segment
Representations without Target Language Data | Audio Word2Vec offers vector representations of fixed dimensionality for
variable-length audio segments using Sequence-to-sequence Autoencoder (SA).
These vector representations are shown to describe the sequential phonetic
structures of the audio segments to a good degree, with real world applications
such as query-by-example Spoken Term Detection (STD). This paper examines the
capability of language transfer of Audio Word2Vec. We train SA from one
language (source language) and use it to extract the vector representation of
the audio segments of another language (target language). We found that SA can
still capture phonetic structure from the audio segments of the target language
if the source and target languages are similar. In query-by-example STD, we
obtain the vector representations from the SA learned from a large amount of
source language data, and find that they surpass the representations from a naive
encoder and from an SA learned directly from a small amount of target language data.
The result shows that it is possible to learn an Audio Word2Vec model from
high-resource languages and use it on low-resource languages. This further
expands the usability of Audio Word2Vec.
| 2,018 | Computation and Language |
High-risk learning: acquiring new word vectors from tiny data | Distributional semantics models are known to struggle with small data. It is
generally accepted that in order to learn 'a good vector' for a word, a model
must have sufficient examples of its usage. This contradicts the fact that
humans can guess the meaning of a word from a few occurrences only. In this
paper, we show that a neural language model such as Word2Vec only necessitates
minor modifications to its standard architecture to learn new terms from tiny
data, using background knowledge from a previously learnt semantic space. We
test our model on word definitions and on a nonce task involving 2-6 sentences'
worth of context, showing a large increase in performance over state-of-the-art
models on the definitional task.
| 2,017 | Computation and Language |
DeepPath: A Reinforcement Learning Method for Knowledge Graph Reasoning | We study the problem of learning to reason in large scale knowledge graphs
(KGs). More specifically, we describe a novel reinforcement learning framework
for learning multi-hop relational paths: we use a policy-based agent with
continuous states based on knowledge graph embeddings, which reasons in a KG
vector space by sampling the most promising relation to extend its path. In
contrast to prior work, our approach includes a reward function that takes the
accuracy, diversity, and efficiency into consideration. Experimentally, we show
that our proposed method outperforms a path-ranking based algorithm and
knowledge graph embedding methods on Freebase and Never-Ending Language
Learning datasets.
| 2,018 | Computation and Language |
Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling
Tasks | Selecting optimal parameters for a neural network architecture can often make
the difference between mediocre and state-of-the-art performance. However,
little is published about which parameters and design choices should be evaluated or
selected, making correct hyperparameter optimization often a "black art that
requires expert experiences" (Snoek et al., 2012). In this paper, we evaluate
the importance of different network design choices and hyperparameters for five
common linguistic sequence tagging tasks (POS, Chunking, NER, Entity
Recognition, and Event Detection). We evaluated over 50,000 different setups
and found that some parameters, like the pre-trained word embeddings or the
last layer of the network, have a large impact on the performance, while other
parameters, for example the number of LSTM layers or the number of recurrent
units, are of minor importance. We give a recommendation on a configuration
that performs well among different tasks.
| 2,017 | Computation and Language |
Shallow reading with Deep Learning: Predicting popularity of online
content using only its title | With the ever-decreasing attention span of contemporary Internet users, the
title of online content (such as a news article or video) can be a major factor
in determining its popularity. To take advantage of this phenomenon, we propose
a new method based on a bidirectional Long Short-Term Memory (LSTM) neural
network designed to predict the popularity of online content using only its
title. We evaluate the proposed architecture on two distinct datasets of news
articles and news videos distributed in social media that contain over 40,000
samples in total. On those datasets, our approach improves the performance over
traditional shallow approaches by a margin of 15%. Additionally, we show that
using pre-trained word vectors in the embedding layer improves the results of
LSTM models, especially when the training set is small. To our knowledge, this
is the first attempt to predict popularity using only textual
information from the title.
| 2,017 | Computation and Language |
An Error-Oriented Approach to Word Embedding Pre-Training | We propose a novel word embedding pre-training approach that exploits writing
errors in learners' scripts. We compare our method to previous models that tune
the embeddings based on script scores and the discrimination between correct
and corrupt word contexts in addition to the generic commonly-used embeddings
pre-trained on large corpora. The comparison is achieved by using the
aforementioned models to bootstrap a neural network that learns to predict a
holistic score for scripts. Furthermore, we investigate augmenting our model
with error corrections and monitor the impact on performance. Our results show
that our error-oriented approach outperforms other comparable ones, which is
further demonstrated when training on more data. Additionally, extending the
model with corrections provides further performance gains when data sparsity is
an issue.
| 2,017 | Computation and Language |
Why We Need New Evaluation Metrics for NLG | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In
this paper, we motivate the need for novel, system- and data-independent
automatic evaluation methods: We investigate a wide range of metrics, including
state-of-the-art word-based and novel grammar-based ones, and demonstrate that
they only weakly reflect human judgements of system outputs as generated by
data-driven, end-to-end NLG. We also show that metric performance is data- and
system-specific. Nevertheless, our results also suggest that automatic metrics
perform reliably at system-level and can support system development by finding
cases where a system performs poorly.
| 2,017 | Computation and Language |
Unsupervised, Knowledge-Free, and Interpretable Word Sense
Disambiguation | Interpretability of a predictive model is a powerful feature that gains the
trust of users in the correctness of the predictions. In word sense
disambiguation (WSD), knowledge-based systems tend to be much more
interpretable than knowledge-free counterparts as they rely on the wealth of
manually-encoded elements representing word senses, such as hypernyms, usage
examples, and images. We present a WSD system that bridges the gap between
these two so far disconnected groups of methods. Namely, our system, providing
access to several state-of-the-art WSD models, aims to be interpretable as a
knowledge-based system while it remains completely unsupervised and
knowledge-free. The presented tool features a Web interface for all-word
disambiguation of texts that makes the sense predictions human readable by
providing interpretable word sense inventories, sense representations, and
disambiguation results. We provide a public API, enabling seamless integration.
| 2,018 | Computation and Language |
SGNMT -- A Flexible NMT Decoding Platform for Quick Prototyping of New
Models and Search Strategies | This paper introduces SGNMT, our experimental platform for machine
translation research. SGNMT provides a generic interface to neural and symbolic
scoring modules (predictors) with left-to-right semantics, such as translation
models like NMT, language models, translation lattices, $n$-best lists or other
kinds of scores and constraints. Predictors can be combined with other
predictors to form complex decoding tasks. SGNMT implements a number of search
strategies for traversing the space spanned by the predictors which are
appropriate for different predictor constellations. Adding new predictors or
decoding strategies is particularly easy, making it a very efficient tool for
prototyping new research ideas. SGNMT is actively being used by students in the
MPhil program in Machine Learning, Speech and Language Technology at the
University of Cambridge for course work and theses, as well as for most of the
research work in our group.
| 2,017 | Computation and Language |
A study on text-score disagreement in online reviews | In this paper, we focus on online reviews and employ artificial intelligence
tools, taken from the cognitive computing field, to help understanding the
relationships between the textual part of the review and the assigned numerical
score. We move from the intuitions that 1) a set of textual reviews expressing
different sentiments may feature the same score (and vice-versa); and 2)
detecting and analyzing the mismatches between the review content and the
actual score may benefit both service providers and consumers, by highlighting
specific factors of satisfaction (and dissatisfaction) in texts.
To prove the intuitions, we adopt sentiment analysis techniques and we
concentrate on hotel reviews, to find polarity mismatches therein. In
particular, we first train a text classifier with a set of annotated hotel
reviews, taken from the Booking website. Then, we analyze a large dataset, with
around 160k hotel reviews collected from Tripadvisor, with the aim of detecting
a polarity mismatch, indicating if the textual content of the review is in
line, or not, with the associated score.
Using well-established artificial intelligence techniques and analyzing in
depth the reviews featuring a mismatch between the text polarity and the score,
we find that, on a scale of five stars, those reviews ranked with middle scores
include a mixture of positive and negative aspects.
The approach proposed here, besides acting as a polarity detector, provides an
effective selection of reviews (from an initially very large dataset) that may
allow both consumers and providers to focus directly on the review subset
featuring a text/score disagreement, conveniently conveying to the user a
summary of the positive and negative features of the review target.
| 2,017 | Computation and Language |
Cross-Lingual Induction and Transfer of Verb Classes Based on Word
Vector Space Specialisation | Existing approaches to automatic VerbNet-style verb classification are
heavily dependent on feature engineering and therefore limited to languages
with mature NLP pipelines. In this work, we propose a novel cross-lingual
transfer method for inducing VerbNets for multiple languages. To the best of
our knowledge, this is the first study which demonstrates how the architectures
for learning word embeddings can be applied to this challenging
syntactic-semantic task. Our method uses cross-lingual translation pairs to tie
each of the six target languages into a bilingual vector space with English,
jointly specialising the representations to encode the relational information
from English VerbNet. A standard clustering algorithm is then run on top of the
VerbNet-specialised representations, using vector dimensions as features for
learning verb classes. Our results show that the proposed cross-lingual
transfer approach sets new state-of-the-art verb classification performance
across all six target languages explored in this work.
| 2,017 | Computation and Language |
Reconstruction of Word Embeddings from Sub-Word Parameters | Pre-trained word embeddings improve the performance of a neural model at the
cost of increasing the model size. We propose to benefit from this resource
without paying the cost by operating strictly at the sub-lexical level. Our
approach is quite simple: before task-specific training, we first optimize
sub-word parameters to reconstruct pre-trained word embeddings using various
distance measures. We report interesting results on a variety of tasks: word
similarity, word analogy, and part-of-speech tagging.
| 2,017 | Computation and Language |
Mimicking Word Embeddings using Subword RNNs | Word embeddings improve generalization over lexical features by placing each
word in a lower-dimensional space, using distributional information obtained
from unlabeled data. However, the effectiveness of word embeddings for
downstream NLP tasks is limited by out-of-vocabulary (OOV) words, for which
embeddings do not exist. In this paper, we present MIMICK, an approach to
generating OOV word embeddings compositionally, by learning a function from
spellings to distributional embeddings. Unlike prior work, MIMICK does not
require re-training on the original word embedding corpus; instead, learning is
performed at the type level. Intrinsic and extrinsic evaluations demonstrate
the power of this simple approach. On 23 languages, MIMICK improves performance
over a word-based baseline for tagging part-of-speech and morphosyntactic
attributes. It is competitive with (and complementary to) a supervised
character-based model in low-resource settings.
| 2,017 | Computation and Language |
Split and Rephrase | We propose a new sentence simplification task (Split-and-Rephrase) where the
aim is to split a complex sentence into a meaning preserving sequence of
shorter sentences. Like sentence simplification, splitting-and-rephrasing has
the potential of benefiting both natural language processing and societal
applications. Because shorter sentences are generally better processed by NLP
systems, it could be used as a preprocessing step which facilitates and
improves the performance of parsers, semantic role labellers and machine
translation systems. It should also be of use for people with reading
disabilities because it allows the conversion of longer sentences into shorter
ones. This paper makes two contributions towards this new task. First, we
create and make available a benchmark consisting of 1,066,115 tuples mapping a
single complex sentence to a sequence of sentences expressing the same meaning.
Second, we propose five models (ranging from vanilla sequence-to-sequence to
semantically motivated models) to understand the difficulty of the proposed
task.
| 2,017 | Computation and Language |
A Sentiment-and-Semantics-Based Approach for Emotion Detection in
Textual Conversations | Emotions are physiological states generated in humans in reaction to internal
or external events. They are complex and studied across numerous fields
including computer science. As humans, on reading "Why don't you ever text me!"
we can interpret it as either a sad or an angry emotion, and the same ambiguity
exists for machines. The lack of facial expressions and voice modulation makes
detecting emotions from text a challenging problem. However, as humans
increasingly communicate using text messaging applications, and digital agents
gain popularity in our society, it is essential that these digital agents are
emotion aware, and respond accordingly.
In this paper, we propose a novel approach to detect emotions like happy, sad
or angry in textual conversations using an LSTM based Deep Learning model. Our
approach consists of semi-automated techniques to gather training data for our
model. We exploit advantages of semantic and sentiment based embeddings and
propose a solution combining both. Our work is evaluated on real-world
conversations and significantly outperforms traditional Machine Learning
baselines as well as other off-the-shelf Deep Learning models.
| 2,018 | Computation and Language |
End-to-end Neural Coreference Resolution | We introduce the first end-to-end coreference resolution model and show that
it significantly outperforms all previous work without using a syntactic parser
or hand-engineered mention detector. The key idea is to directly consider all
spans in a document as potential mentions and learn distributions over possible
antecedents for each. The model computes span embeddings that combine
context-dependent boundary representations with a head-finding attention
mechanism. It is trained to maximize the marginal likelihood of gold antecedent
spans from coreference clusters and is factored to enable aggressive pruning of
potential mentions. Experiments demonstrate state-of-the-art performance, with
a gain of 1.5 F1 on the OntoNotes benchmark and of 3.1 F1 using a 5-model
ensemble, despite the fact that this is the first approach to be successfully
trained with no external resources.
| 2,017 | Computation and Language |
Progressive Joint Modeling in Unsupervised Single-channel Overlapped
Speech Recognition | Unsupervised single-channel overlapped speech recognition is one of the
hardest problems in automatic speech recognition (ASR). Permutation invariant
training (PIT) is a state of the art model-based approach, which applies a
single neural network to solve this single-input, multiple-output modeling
problem. We propose to advance the current state of the art by imposing a
modular structure on the neural network, applying a progressive pretraining
regimen, and improving the objective function with transfer learning and a
discriminative training criterion. The modular structure splits the problem
into three sub-tasks: frame-wise interpreting, utterance-level speaker tracing,
and speech recognition. The pretraining regimen uses these modules to solve
progressively harder tasks. Transfer learning leverages parallel clean speech
to improve the training targets for the network. Our discriminative training
formulation is a modification of standard formulations that also penalizes
competing outputs of the system. Experiments are conducted on the artificially
overlapped Switchboard and hub5e-swb datasets. The proposed framework achieves
over 30% relative improvement of WER over both a strong jointly trained system,
PIT for ASR, and a separately optimized system, PIT for speech separation with
clean speech ASR model. The improvement comes from better model generalization,
training efficiency, and the integration of sequence-level linguistic knowledge.
| 2,018 | Computation and Language |
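Permutation invariant training, the baseline referenced above, scores every assignment of output streams to reference streams and optimizes the best one. A minimal sketch, with a mean-squared-error objective and toy shapes chosen for illustration:

```python
# Sketch of the permutation invariant training (PIT) criterion referenced above:
# score every assignment of network output streams to reference streams and
# train on the best one. The mean-squared error and toy shapes are illustrative.
from itertools import permutations
import numpy as np

def pit_loss(outputs, targets):
    """outputs, targets: (num_speakers, frames, feat_dim) arrays."""
    n = outputs.shape[0]
    best = np.inf
    for perm in permutations(range(n)):
        loss = np.mean((outputs[list(perm)] - targets) ** 2)
        best = min(best, loss)
    return best   # gradients would flow through the minimizing permutation

out = np.random.rand(2, 100, 40)
ref = out[::-1].copy()        # references given in swapped speaker order
print(pit_loss(out, ref))     # ~0: PIT is insensitive to the speaker ordering
```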
A Pilot Study of Domain Adaptation Effect for Neural Abstractive
Summarization | We study the problem of domain adaptation for neural abstractive
summarization. We make initial efforts in investigating what information can be
transferred to a new domain. Experimental results on news stories and opinion
articles indicate that neural summarization model benefits from pre-training
based on extractive summaries. We also find that the combination of in-domain
and out-of-domain setup yields better summaries when in-domain data is
insufficient. Further analysis shows that the model is capable of selecting
salient content even when trained on out-of-domain data, but requires in-domain data
to capture the style for a target domain.
| 2,017 | Computation and Language |
Identifying civilians killed by police with distantly supervised
entity-event extraction | We propose a new, socially impactful task for natural language processing:
from a news corpus, extract names of persons who have been killed by police. We
present a newly collected police fatality corpus, which we release publicly,
and present a model to solve this problem that uses EM-based distant
supervision with logistic regression and convolutional neural network
classifiers. Our model outperforms two off-the-shelf event extractor systems,
and it can suggest candidate victim names in some cases faster than one of the
major manually-collected police fatality databases.
| 2,017 | Computation and Language |
Predicting the Gender of Indonesian Names | We investigated a way to predict the gender of a name using character-level
Long Short-Term Memory (char-LSTM). We compared our method with some
conventional machine learning methods, namely Naive Bayes, logistic regression,
and XGBoost with n-grams as the features. We evaluated the models on a dataset
consisting of the names of Indonesian people. It is not common to use a family
name as the surname in Indonesian culture, except in some ethnicities.
Therefore, we inferred the gender from both full names and first names. The
results show that we can achieve 92.25% accuracy from full names, while using
first names only yields 90.65% accuracy. These results are better than the ones
from applying the classical machine learning algorithms to n-grams.
| 2,017 | Computation and Language |
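A minimal char-LSTM classifier in the spirit of the abstract above might look as follows; the character inventory, dimensions and example names are illustrative assumptions.

```python
# Minimal character-level LSTM gender classifier. The character inventory,
# dimensions and the use of the final hidden state are illustrative assumptions.
import torch
import torch.nn as nn

CHARS = "abcdefghijklmnopqrstuvwxyz '-"
CHAR2ID = {c: i + 1 for i, c in enumerate(CHARS)}   # 0 is reserved for padding/unknown

def encode(name, max_len=30):
    ids = [CHAR2ID.get(c, 0) for c in name.lower()[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

class CharLSTM(nn.Module):
    def __init__(self, n_chars=len(CHARS) + 1, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb, padding_idx=0)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)     # female vs. male

    def forward(self, char_ids):
        _, (h_n, _) = self.lstm(self.emb(char_ids))
        return self.out(h_n[-1])

model = CharLSTM()
batch = torch.stack([encode("Siti Rahmawati"), encode("Budi Santoso")])  # example names
print(model(batch).shape)   # torch.Size([2, 2]) logits
```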
Attention-Based End-to-End Speech Recognition on Voice Search | Recently, there has been a growing interest in end-to-end speech recognition
that directly transcribes speech to text without any predefined alignments. In
this paper, we explore the use of attention-based encoder-decoder model for
Mandarin speech recognition on a voice search task. Previous attempts have
shown that applying attention-based encoder-decoder models to Mandarin speech
recognition was quite difficult due to the logographic orthography of Mandarin,
the large vocabulary and the conditional dependency of the attention model. In
this paper, we use character embedding to deal with the large vocabulary.
Several tricks are used for effective model training, including L2
regularization, Gaussian weight noise and frame skipping. We compare two
attention mechanisms and use attention smoothing to cover long context in the
attention model. Taken together, these tricks allow us to finally achieve a
character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on
the MiTV voice search dataset. When combined with a trigram language model,
the CER and SER reach 2.81% and 5.77%, respectively.
| 2,018 | Computation and Language |
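Of the training tricks listed above, Gaussian weight noise is the easiest to illustrate: perturb the parameters with zero-mean noise before each gradient step and apply the resulting update to the clean weights. The noise scale and the stand-in model below are assumptions; the paper's exact recipe may differ.

```python
# Sketch of the Gaussian weight noise trick mentioned above. The noise standard
# deviation and the toy model are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Linear(40, 10)           # stand-in for the encoder-decoder
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def training_step(x, y, noise_std=0.01):
    # Perturb weights, keeping a clean copy to restore before the update.
    clean = [p.detach().clone() for p in model.parameters()]
    with torch.no_grad():
        for p in model.parameters():
            p.add_(torch.randn_like(p) * noise_std)
    loss = nn.functional.cross_entropy(model(x), y)   # loss at the noisy weights
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():           # restore clean weights, then apply the update
        for p, c in zip(model.parameters(), clean):
            p.copy_(c)
    opt.step()
    return loss.item()

print(training_step(torch.randn(8, 40), torch.randint(0, 10, (8,))))
```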
Native Language Identification on Text and Speech | This paper presents an ensemble system combining the output of multiple SVM
classifiers for native language identification (NLI). The system was submitted
to the NLI Shared Task 2017 fusion track, which featured student essays and
spoken responses in the form of audio transcriptions and i-vectors by non-native
English speakers of eleven native languages. Our system competed in the
challenge under the team name ZCD and was based on an ensemble of SVM
classifiers trained on character n-grams achieving 83.58% accuracy and ranking
3rd in the shared task.
| 2,017 | Computation and Language |
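One member of such an ensemble, a linear SVM over character n-gram counts, can be sketched as follows; the toy essays, language labels and n-gram range are placeholders for the shared-task data.

```python
# Sketch of one member of the SVM ensemble described above: a linear SVM over
# character n-gram counts. Toy essays and labels stand in for the shared-task data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

essays = ["I am agree with the statement ...",
          "In my country we does not have ...",
          "This essay discuss about the topic ..."]
labels = ["SPA", "KOR", "CHI"]        # native-language labels (placeholders)

clf = make_pipeline(
    CountVectorizer(analyzer="char_wb", ngram_range=(1, 4)),  # char n-grams
    LinearSVC(),
)
clf.fit(essays, labels)
print(clf.predict(["I am agree that ..."]))
```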
MoodSwipe: A Soft Keyboard that Suggests Messages Based on
User-Specified Emotions | We present MoodSwipe, a soft keyboard that suggests text messages given the
user-specified emotions utilizing the real dialog data. The aim of MoodSwipe is
to create a convenient user interface to enjoy the technology of emotion
classification and text suggestion, and at the same time to collect labeled
data automatically for developing more advanced technologies. While users
select the MoodSwipe keyboard, they can type as usual but sense the emotion
conveyed by their text and receive suggestions for their message as a benefit.
In MoodSwipe, the detected emotions serve as the medium for suggested texts,
where viewing the latter is the incentive to correcting the former. We conduct
several experiments to show the superiority of the emotion classification
models trained on the dialog data, and further verify that good emotion cues are
important context for text suggestion.
| 2,017 | Computation and Language |
"i have a feeling trump will win..................": Forecasting Winners
and Losers from User Predictions on Twitter | Social media users often make explicit predictions about upcoming events.
Such statements vary in the degree of certainty the author expresses toward the
outcome:"Leonardo DiCaprio will win Best Actor" vs. "Leonardo DiCaprio may win"
or "No way Leonardo wins!". Can popular beliefs on social media predict who
will win? To answer this question, we build a corpus of tweets annotated for
veridicality on which we train a log-linear classifier that detects positive
veridicality with high precision. We then forecast uncertain outcomes using the
wisdom of crowds, by aggregating users' explicit predictions. Our method for
forecasting winners is fully automated, relying only on a set of contenders as
input. It requires no training data of past outcomes and outperforms sentiment
and tweet volume baselines on a broad range of contest prediction tasks. We
further demonstrate how our approach can be used to measure the reliability of
individual accounts' predictions and retrospectively identify surprise
outcomes.
| 2,017 | Computation and Language |
Language modeling with Neural trans-dimensional random fields | Trans-dimensional random field language models (TRF LMs) have recently been
introduced, where sentences are modeled as a collection of random fields. The
TRF approach has been shown to have the advantages of being computationally
more efficient in inference than LSTM LMs with close performance and being able
to flexibly integrate rich features. In this paper we propose neural TRFs,
going beyond the previous discrete TRFs that only use linear potentials with
discrete features. The idea is to use nonlinear potentials with continuous
features, implemented by neural networks (NNs), in the TRF framework. Neural
TRFs combine the advantages of both NNs and TRFs. The benefits of word
embedding, nonlinear feature learning and larger context modeling are inherited
from the use of NNs. At the same time, the strength of efficient inference by
avoiding expensive softmax is preserved. A number of technical contributions,
including employing deep convolutional neural networks (CNNs) to define the
potentials and incorporating the joint stochastic approximation (JSA) strategy
in the training algorithm, are developed in this work, which enable us to
successfully train neural TRF LMs. Various LMs are evaluated in terms of speech
recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The
results show that neural TRF LMs not only improve over discrete TRF LMs, but
also perform slightly better than LSTM LMs with only one fifth of the parameters
and 16x faster inference efficiency.
| 2,017 | Computation and Language |
Tensor Fusion Network for Multimodal Sentiment Analysis | Multimodal sentiment analysis is an increasingly popular research area, which
extends the conventional language-based definition of sentiment analysis to a
multimodal setup where other relevant modalities accompany language. In this
paper, we pose the problem of multimodal sentiment analysis as modeling
intra-modality and inter-modality dynamics. We introduce a novel model, termed
Tensor Fusion Network, which learns both such dynamics end-to-end. The proposed
approach is tailored for the volatile nature of spoken language in online
videos as well as accompanying gestures and voice. In the experiments, our
model outperforms state-of-the-art approaches for both multimodal and unimodal
sentiment analysis.
| 2,017 | Computation and Language |
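The abstract does not spell out the fusion operator. One common reading of tensor fusion is an outer product of the per-modality embeddings, each extended with a constant 1 so that unimodal and bimodal interactions are retained; the sketch below illustrates that reading and should not be taken as the paper's exact formulation.

```python
# Illustrative tensor fusion of three modality embeddings via an outer product,
# with a constant 1 appended so unimodal and bimodal interactions survive.
# Whether this matches the paper's exact formulation is an assumption here.
import numpy as np

def tensor_fuse(language, visual, acoustic):
    l = np.append(language, 1.0)
    v = np.append(visual, 1.0)
    a = np.append(acoustic, 1.0)
    # Outer product over the three (extended) modality vectors.
    return np.einsum("i,j,k->ijk", l, v, a)

z = tensor_fuse(np.random.rand(4), np.random.rand(3), np.random.rand(2))
print(z.shape)          # (5, 4, 3): flattened, this would feed a sentiment classifier
```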
Composing Distributed Representations of Relational Patterns | Learning distributed representations for relation instances is a central
technique in downstream NLP applications. In order to address semantic modeling
of relational patterns, this paper constructs a new dataset that provides
multiple similarity ratings for every pair of relational patterns on the
existing dataset. In addition, we conduct a comparative study of different
encoders including additive composition, RNN, LSTM, and GRU for composing
distributed representations of relational patterns. We also present Gated
Additive Composition, which is an enhancement of additive composition with the
gating mechanism. Experiments show that the new dataset not only enables
detailed analyses of the different encoders, but also provides a gauge to
predict successes of distributed representations of relational patterns in the
relation classification task.
| 2,017 | Computation and Language |
Hierarchical Embeddings for Hypernymy Detection and Directionality | We present a novel neural model HyperVec to learn hierarchical embeddings for
hypernymy detection and directionality. While previous embeddings have shown
limitations on prototypical hypernyms, HyperVec represents an unsupervised
measure where embeddings are learned in a specific order and capture the
hypernym-hyponym distributional hierarchy. Moreover, our model is able to
generalize over unseen hypernymy pairs, when using only small sets of training
data, and by mapping to other languages. Results on benchmark datasets show
that HyperVec outperforms both state-of-the-art unsupervised measures and
embedding models on hypernymy detection and directionality, and on predicting
graded lexical entailment.
| 2,017 | Computation and Language |
Fine Grained Citation Span for References in Wikipedia | \emph{Verifiability} is one of the core editing principles in Wikipedia,
editors being encouraged to provide citations for the added content. For a
Wikipedia article, determining the \emph{citation span} of a citation, i.e.
what content is covered by a citation, is important as it helps decide for
which content citations are still missing.
We are the first to address the problem of determining the \emph{citation
span} in Wikipedia articles. We approach this problem by classifying which
textual fragments in an article are covered by a citation. We propose a
sequence classification approach where for a paragraph and a citation, we
determine the citation span at a fine-grained level.
We provide a thorough experimental evaluation and compare our approach
against baselines adopted from the scientific domain, where we show improvement
for all evaluation metrics.
| 2,017 | Computation and Language |
Using Argument-based Features to Predict and Analyse Review Helpfulness | We study the helpful product reviews identification problem in this paper. We
observe that the evidence-conclusion discourse relations, also known as
arguments, often appear in product reviews, and we hypothesise that some
argument-based features, e.g. the percentage of argumentative sentences, the
evidence-conclusion ratios, are good indicators of helpful reviews. To
validate this hypothesis, we manually annotate arguments in 110 hotel reviews,
and investigate the effectiveness of several combinations of argument-based
features. Experiments suggest that, when being used together with the
argument-based features, the state-of-the-art baseline features can enjoy a
performance boost (in terms of F1) of 11.01\% on average.
| 2,017 | Computation and Language |
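The two argument-based features named above are straightforward to compute once sentences carry argument labels, which are assumed here to come from annotation or a separate classifier.

```python
# Tiny sketch of the argument-based features mentioned above (percentage of
# argumentative sentences, evidence/conclusion ratio). The sentence-level
# argument labels are assumed to come from an annotator or classifier.
def argument_features(sentence_labels):
    """sentence_labels: list with values in {'evidence', 'conclusion', 'none'}."""
    n = len(sentence_labels)
    evid = sentence_labels.count("evidence")
    concl = sentence_labels.count("conclusion")
    return {
        "pct_argumentative": (evid + concl) / n if n else 0.0,
        "evidence_conclusion_ratio": evid / concl if concl else float(evid),
    }

print(argument_features(["evidence", "evidence", "conclusion", "none"]))
# {'pct_argumentative': 0.75, 'evidence_conclusion_ratio': 2.0}
```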
Adversarial Examples for Evaluating Reading Comprehension Systems | Standard accuracy metrics indicate that reading comprehension systems are
making rapid progress, but the extent to which these systems truly understand
language remains unclear. To reward systems with real language understanding
abilities, we propose an adversarial evaluation scheme for the Stanford
Question Answering Dataset (SQuAD). Our method tests whether systems can answer
questions about paragraphs that contain adversarially inserted sentences, which
are automatically generated to distract computer systems without changing the
correct answer or misleading humans. In this adversarial setting, the accuracy
of sixteen published models drops from an average of $75\%$ F1 score to $36\%$;
when the adversary is allowed to add ungrammatical sequences of words, average
accuracy on four models decreases further to $7\%$. We hope our insights will
motivate the development of new models that understand language more precisely.
| 2,017 | Computation and Language |
Rule-Based Spanish Morphological Analyzer Built From Spell Checking
Lexicon | Preprocessing tools for automated text analysis have become more widely
available in major languages, but non-English tools are often still limited in
their functionality. When working with Spanish-language text, researchers can
easily find tools for tokenization and stemming, but may not have the means to
extract more complex word features like verb tense or mood. Yet Spanish is a
morphologically rich language in which such features are often identifiable
from word form. Conjugation rules are consistent, but many special verbs and
nouns take on different rules. While building a complete dictionary of known
words and their morphological rules would be labor intensive, resources to do
so already exist, in spell checkers designed to generate valid forms of known
words. This paper introduces a set of tools for Spanish-language morphological
analysis, built using the COES spell checking tools, to label person, mood,
tense, gender and number, derive a word's root noun or verb infinitive, and
convert verbs to their nominal form.
| 2,017 | Computation and Language |
A Sequential Model for Classifying Temporal Relations between
Intra-Sentence Events | We present a sequential model for temporal relation classification between
intra-sentence events. The key observation is that the overall syntactic
structure and compositional meanings of the multi-word context between events
are important for distinguishing among fine-grained temporal relations.
Specifically, our approach first extracts a sequence of context words that
indicates the temporal relation between two events and aligns well with the
dependency path between the two event mentions. The context word sequence,
together with a part-of-speech tag sequence and a dependency relation sequence
generated in correspondence with the word sequence, is then provided as input
to bidirectional recurrent neural network (LSTM) models. The neural nets learn
compositional syntactic and semantic representations of contexts surrounding
the two events and predict the temporal relation between them. Evaluation of
the proposed approach on TimeBank corpus shows that sequential modeling is
capable of accurately recognizing temporal relations between events, which
outperforms a neural net model using various discrete features as input that
imitates previous feature-based models.
| 2,017 | Computation and Language |
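A minimal sketch of the sequential model above: embed the context words, POS tags and dependency relations, concatenate them per position, and classify the temporal relation from a BiLSTM summary. All vocabulary sizes, dimensions and the relation label set are illustrative.

```python
# Sketch of the sequential temporal-relation model. All sizes are illustrative.
import torch
import torch.nn as nn

class TemporalRelationLSTM(nn.Module):
    def __init__(self, n_words=5000, n_pos=50, n_dep=50, n_rel=6):
        super().__init__()
        self.word = nn.Embedding(n_words, 100)
        self.pos = nn.Embedding(n_pos, 20)
        self.dep = nn.Embedding(n_dep, 20)
        self.lstm = nn.LSTM(140, 64, batch_first=True, bidirectional=True)
        self.out = nn.Linear(128, n_rel)    # e.g. BEFORE, AFTER, INCLUDES, ...

    def forward(self, words, pos, dep):
        x = torch.cat([self.word(words), self.pos(pos), self.dep(dep)], dim=-1)
        _, (h_n, _) = self.lstm(x)
        summary = torch.cat([h_n[-2], h_n[-1]], dim=-1)   # both directions
        return self.out(summary)

m = TemporalRelationLSTM()
n = (2, 9)   # batch of 2 context sequences, 9 positions each
print(m(torch.randint(0, 5000, n), torch.randint(0, 50, n),
        torch.randint(0, 50, n)).shape)   # torch.Size([2, 6])
```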
Event Coreference Resolution by Iteratively Unfolding Inter-dependencies
among Events | We introduce a novel iterative approach for event coreference resolution that
gradually builds event clusters by exploiting inter-dependencies among event
mentions within the same chain as well as across event chains. Among event
mentions in the same chain, we distinguish within-document (WD) and cross-document (CD) event
coreference links by using two distinct pairwise classifiers, trained
separately to capture differences in feature distributions of within- and
cross-document event clusters. Our event coreference approach alternates
between WD and CD clustering and combines arguments from both event clusters
after every merge, continuing until no more merges can be made. It then
performs further merging between event chains that are both closely related to
a set of other chains of events. Experiments on the ECB+ corpus show that our
model outperforms state-of-the-art methods on the joint task of WD and CD event
coreference resolution.
| 2,017 | Computation and Language |
Reinforcement Learning for Bandit Neural Machine Translation with
Simulated Human Feedback | Machine translation is a natural candidate problem for reinforcement learning
from human feedback: users provide quick, dirty ratings on candidate
translations to guide a system to improve. Yet, current neural machine
translation training focuses on expensive human-generated reference
translations. We describe a reinforcement learning algorithm that improves
neural machine translation systems from simulated human feedback. Our algorithm
combines the advantage actor-critic algorithm (Mnih et al., 2016) with the
attention-based neural encoder-decoder architecture (Luong et al., 2015). This
algorithm (a) is well-designed for problems with a large action space and
delayed rewards, (b) effectively optimizes traditional corpus-level machine
translation metrics, and (c) is robust to skewed, high-variance, granular
feedback modeled after actual human behaviors.
| 2,017 | Computation and Language |
Exploring Neural Transducers for End-to-End Speech Recognition | In this work, we perform an empirical comparison among the CTC,
RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech
recognition. We show that, without any language model, Seq2Seq and
RNN-Transducer models both outperform the best reported CTC models with a
language model, on the popular Hub5'00 benchmark. On our internal diverse
dataset, these trends continue: RNN-Transducer models rescored with a language
model after beam search outperform our best CTC models. These results simplify
the speech recognition pipeline so that decoding can now be expressed purely as
neural network operations. We also study how the choice of encoder architecture
affects the performance of the three models - when all encoder layers are
forward only, and when encoders downsample the input representation
aggressively.
| 2,017 | Computation and Language |
Character-level Intra Attention Network for Natural Language Inference | Natural language inference (NLI) is a central problem in language
understanding. End-to-end artificial neural networks have reached
state-of-the-art performance in the NLI field recently.
In this paper, we propose Character-level Intra Attention Network (CIAN) for
the NLI task. In our model, we use the character-level convolutional network to
replace the standard word embedding layer, and we use the intra attention to
capture the intra-sentence semantics. The proposed CIAN model provides improved
results based on a newly published MNLI corpus.
| 2,017 | Computation and Language |
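The intra-attention step can be sketched as each token attending over all tokens of its own sentence and being augmented with the attended summary; the character-level CNN that would produce the token vectors is omitted here.

```python
# Sketch of the intra (self) attention step described above. The char-CNN that
# would produce `tokens` is omitted; dimensions are illustrative.
import numpy as np

def intra_attention(tokens):
    """tokens: (seq_len, dim) token representations from, e.g., a char-CNN."""
    scores = tokens @ tokens.T                       # pairwise compatibility
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)    # row-wise softmax
    attended = weights @ tokens                      # intra-sentence summaries
    return np.concatenate([tokens, attended], axis=1)

sent = np.random.rand(6, 32)
print(intra_attention(sent).shape)   # (6, 64)
```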
Analysing Errors of Open Information Extraction Systems | We report results on benchmarking Open Information Extraction (OIE) systems
using RelVis, a toolkit for benchmarking such systems.
Our comprehensive benchmark contains three data sets from the news domain and
one data set from Wikipedia with overall 4522 labeled sentences and 11243
binary or n-ary OIE relations. In our analysis on these data sets we compared
the performance of four popular OIE systems, ClausIE, OpenIE 4.2, Stanford
OpenIE and PredPatt. In addition, we evaluated the impact of five common error
classes on a subset of 749 n-ary tuples. From our in-depth analysis we derive
important research directions for the next generation of OIE systems.
| 2,017 | Computation and Language |
Learning Rare Word Representations using Semantic Bridging | We propose a methodology that adapts graph embedding techniques (DeepWalk
(Perozzi et al., 2014) and node2vec (Grover and Leskovec, 2016)) as well as
cross-lingual vector space mapping approaches (Least Squares and Canonical
Correlation Analysis) in order to merge the corpus and ontological sources of
lexical knowledge. We also perform comparative analysis of the used algorithms
in order to identify the best combination for the proposed system. We then
apply this to the task of enhancing the coverage of an existing word
embedding's vocabulary with rare and unseen words. We show that our technique
can provide considerable extra coverage (over 99%), leading to consistent
performance gains (around 10% absolute gain is achieved with w2v-gn-500K; cf.
Section 3.3) on the Rare Word Similarity dataset.
| 2,017 | Computation and Language |
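The Least Squares mapping mentioned above can be sketched as fitting a linear map from the graph-embedding space to the corpus-embedding space on the shared vocabulary and projecting rare words through it; the data below is synthetic.

```python
# Sketch of a least-squares mapping between the two vector spaces: learn a
# linear map on shared words, then project rare/unseen words through it.
import numpy as np

rng = np.random.default_rng(1)
graph_vecs = rng.standard_normal((500, 64))     # ontology/graph embeddings
true_map = rng.standard_normal((64, 300))
corpus_vecs = graph_vecs @ true_map             # corpus embeddings (toy: exact)

# Fit W minimizing ||graph_vecs @ W - corpus_vecs||^2 on the shared vocabulary.
W, *_ = np.linalg.lstsq(graph_vecs, corpus_vecs, rcond=None)

rare_graph_vec = rng.standard_normal(64)        # a word missing from the corpus
projected = rare_graph_vec @ W                  # its induced corpus-space vector
print(projected.shape)                          # (300,)
```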
CAp 2017 challenge: Twitter Named Entity Recognition | The paper describes the CAp 2017 challenge. The challenge concerns the
problem of Named Entity Recognition (NER) for tweets written in French. We
first present the data preparation steps we followed for constructing the
dataset released in the framework of the challenge. We begin by demonstrating
why NER for tweets is a challenging problem especially when the number of
entities increases. We detail the annotation process and the necessary
decisions we made. We provide statistics on the inter-annotator agreement, and
we conclude the data description part with examples and statistics for the
data. We, then, describe the participation in the challenge, where 8 teams
participated, with a focus on the methods employed by the challenge
participants and the scores achieved in terms of F$_1$ measure. Importantly,
the constructed dataset comprising $\sim$6,000 tweets annotated for 13 types of
entities, which to the best of our knowledge is the first such dataset in
French, is publicly available at \url{http://cap2017.imag.fr/competition.html} .
| 2,017 | Computation and Language |
Transition-Based Generation from Abstract Meaning Representations | This work addresses the task of generating English sentences from Abstract
Meaning Representation (AMR) graphs. To cope with this task, we transform each
input AMR graph into a structure similar to a dependency tree and annotate it
with syntactic information by applying various predefined actions to it.
Subsequently, a sentence is obtained from this tree structure by visiting its
nodes in a specific order. We train maximum entropy models to estimate the
probability of each individual action and devise an algorithm that efficiently
approximates the best sequence of actions to be applied. Using a substandard
language model, our generator achieves a Bleu score of 27.4 on the LDC2014T12
test set, the best result reported so far without using silver standard
annotations from another corpus as additional training data.
| 2,017 | Computation and Language |
Image Pivoting for Learning Multilingual Multimodal Representations | In this paper we propose a model to learn multimodal multilingual
representations for matching images and sentences in different languages, with
the aim of advancing multilingual versions of image search and image
understanding. Our model learns a common representation for images and their
descriptions in two different languages (which need not be parallel) by
considering the image as a pivot between two languages. We introduce a new
pairwise ranking loss function which can handle both symmetric and asymmetric
similarity between the two modalities. We evaluate our models on
image-description ranking for German and English, and on semantic textual
similarity of image descriptions in English. In both cases we achieve
state-of-the-art performance.
| 2,017 | Computation and Language |
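A margin-based pairwise ranking loss with the image as pivot can be sketched as follows; the paper's handling of symmetric versus asymmetric similarity is not reproduced, and the cosine scoring and margin value are assumptions.

```python
# Sketch of a margin-based pairwise ranking loss for image-sentence matching,
# with the image acting as the pivot between the two caption languages.
import numpy as np

def ranking_loss(image, pos_caption, neg_caption, margin=0.2):
    def sim(a, b):   # cosine similarity in the shared space
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(0.0, margin - sim(image, pos_caption) + sim(image, neg_caption))

img = np.random.rand(128)                    # embedded image (the pivot)
good_en = img + 0.05 * np.random.rand(128)   # matching English caption
bad_de = np.random.rand(128)                 # non-matching German caption
print(ranking_loss(img, good_en, bad_de))    # small or zero if ranking is right
```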
Improve Lexicon-based Word Embeddings By Word Sense Disambiguation | There have been some works that learn a lexicon together with the corpus to
improve the word embeddings. However, they either model the lexicon separately
but update the neural networks for both the corpus and the lexicon by the same
likelihood, or minimize the distance between all of the synonym pairs in the
lexicon. Such methods do not consider the relatedness and difference of the
corpus and the lexicon, and may not be the best optimized. In this paper, we
propose a novel method that considers the relatedness and difference of the
corpus and the lexicon. It trains word embeddings by learning from the corpus to
predict a word and its corresponding synonym under the same context at the same
time. For polysemous words, we use a word sense disambiguation filter to
eliminate the synonyms that have different meanings for the context. To
evaluate the proposed method, we compare the performance of the word embeddings
trained by our proposed model, the control groups without the filter or the
lexicon, and the prior works in the word similarity tasks and text
classification task. The experimental results show that the proposed model
provides better embeddings for polysemous words and improves the performance
for text classification.
| 2,017 | Computation and Language |
Deep Architectures for Neural Machine Translation | It has been shown that increasing model depth improves the quality of neural
machine translation. However, different architectural variants to increase
model depth have been proposed, and so far, there has been no thorough
comparative study.
In this work, we describe and evaluate several existing approaches to
introduce depth in neural machine translation. Additionally, we explore novel
architectural variants, including deep transition RNNs, and we vary how
attention is used in the deep decoder. We introduce a novel "BiDeep" RNN
architecture that combines deep transition RNNs and stacked RNNs.
Our evaluation is carried out on the English to German WMT news translation
dataset, using a single-GPU machine for both training and inference. We find
that several of our proposed architectures improve upon existing approaches in
terms of speed and translation quality. We obtain best improvements with a
BiDeep RNN of combined depth 8, obtaining an average improvement of 1.5 BLEU
over a strong shallow baseline.
We release our code for ease of adoption.
| 2,017 | Computation and Language |
Global Normalization of Convolutional Neural Networks for Joint Entity
and Relation Classification | We introduce globally normalized convolutional neural networks for joint
entity classification and relation extraction. In particular, we propose a way
to utilize a linear-chain conditional random field output layer for predicting
entity types and relations between entities at the same time. Our experiments
show that global normalization outperforms a locally normalized softmax layer
on a benchmark dataset.
| 2,018 | Computation and Language |
AMR Parsing using Stack-LSTMs | We present a transition-based AMR parser that directly generates AMR parses
from plain text. We use Stack-LSTMs to represent our parser state and make
decisions greedily. In our experiments, we show that our parser achieves very
competitive scores on English using only AMR training data. Adding additional
information, such as POS tags and dependency trees, improves the results
further.
| 2,017 | Computation and Language |
Macro Grammars and Holistic Triggering for Efficient Semantic Parsing | To learn a semantic parser from denotations, a learning algorithm must search
over a combinatorially large space of logical forms for ones consistent with
the annotated denotations. We propose a new online learning algorithm that
searches faster as training progresses. The two key ideas are using macro
grammars to cache the abstract patterns of useful logical forms found thus far,
and holistic triggering to efficiently retrieve the most relevant patterns
based on sentence similarity. On the WikiTableQuestions dataset, we first
expand the search space of an existing model to improve the state-of-the-art
accuracy from 38.7% to 42.7%, and then use macro grammars and holistic
triggering to achieve an 11x speedup and an accuracy of 43.7%.
| 2,017 | Computation and Language |
Machine Translation at Booking.com: Journey and Lessons Learned | We describe our recently developed neural machine translation (NMT) system
and benchmark it against our own statistical machine translation (SMT) system
as well as two other general purpose online engines (statistical and neural).
We present automatic and human evaluation results of the translation output
provided by each system. We also analyze the effect of sentence length on the
quality of output for SMT and NMT systems.
| 2,017 | Computation and Language |
Question Dependent Recurrent Entity Network for Question Answering | Question Answering is a task which requires building models capable of
providing answers to questions expressed in human language. Full question
answering involves some form of reasoning ability. We introduce a neural
network architecture for this task, which is a form of $Memory\ Network$, that
recognizes entities and their relations to answers through a focus attention
mechanism. Our model is named $Question\ Dependent\ Recurrent\ Entity\ Network$
and extends $Recurrent\ Entity\ Network$ by exploiting aspects of the question
during the memorization process. We validate the model on both synthetic and
real datasets: the $bAbI$ question answering dataset and the $CNN\ \&\ Daily\
News$ $reading\ comprehension$ dataset. In our experiments, the models achieved
state-of-the-art results in the former and competitive results in the latter.
| 2,017 | Computation and Language |
Synthesising Sign Language from semantics, approaching "from the target
and back" | We present a Sign Language modelling approach allowing to build grammars and
create linguistic input for Sign synthesis through avatars. We comment on the
type of grammar it allows to build, and observe a resemblance between the
resulting expressions and traditional semantic representations. Comparing the
ways in which the paradigms are designed, we name and contrast two essentially
different strategies for building higher-level linguistic input:
"source-and-forward" vs. "target-and-back". We conclude by favouring the
latter, acknowledging the power of being able to automatically generate output
from semantically relevant input straight into articulations of the target
language.
| 2,017 | Computation and Language |
Challenges in Data-to-Document Generation | Recent neural models have shown significant progress on the problem of
generating short descriptive texts conditioned on a small number of database
records. In this work, we suggest a slightly more difficult data-to-text
generation task, and investigate how effective current approaches are on this
task. In particular, we introduce a new, large-scale corpus of data records
paired with descriptive documents, propose a series of extractive evaluation
methods for analyzing performance, and obtain baseline results using current
neural generation methods. Experiments show that these models produce fluent
text, but fail to convincingly approximate human-generated documents. Moreover,
even templated baselines exceed the performance of these neural models on some
metrics, though copy- and reconstruction-based extensions lead to noticeable
improvements.
| 2,017 | Computation and Language |
Learning Word Relatedness over Time | Search systems are often focused on providing relevant results for the "now",
assuming both corpora and user needs that focus on the present. However, many
corpora today reflect significant longitudinal collections ranging from 20
years of the Web to hundreds of years of digitized newspapers and books.
Understanding the temporal intent of the user and retrieving the most relevant
historical content has become a significant challenge. Common search features,
such as query expansion, leverage the relationship between terms but cannot
function well across all times when relationships vary temporally. In this
work, we introduce a temporal relationship model that is extracted from
longitudinal data collections. The model supports the task of identifying,
given two words, when they relate to each other. We present an algorithmic
framework for this task and show its application for the task of query
expansion, achieving high gains.
| 2,017 | Computation and Language |
ShotgunWSD: An unsupervised algorithm for global word sense
disambiguation inspired by DNA sequencing | In this paper, we present a novel unsupervised algorithm for word sense
disambiguation (WSD) at the document level. Our algorithm is inspired by a
widely-used approach in the field of genetics for whole genome sequencing,
known as the Shotgun sequencing technique. The proposed WSD algorithm is based
on three main steps. First, a brute-force WSD algorithm is applied to short
context windows (up to 10 words) selected from the document in order to
generate a short list of likely sense configurations for each window. In the
second step, these local sense configurations are assembled into longer
composite configurations based on suffix and prefix matching. The resulting
configurations are ranked by their length, and the sense of each word is chosen
based on a voting scheme that considers only the top k configurations in which
the word appears. We compare our algorithm with other state-of-the-art
unsupervised WSD algorithms and demonstrate better performance, sometimes by a
very large margin. We also show that our algorithm can yield better performance
than the Most Common Sense (MCS) baseline on one data set. Moreover, our
algorithm has a very small number of parameters, is robust to parameter tuning,
and, unlike other bio-inspired methods, it gives a deterministic solution (it
does not involve random choices).
| 2,017 | Computation and Language |
From Image to Text Classification: A Novel Approach based on Clustering
Word Embeddings | In this paper, we propose a novel approach for text classification based on
clustering word embeddings, inspired by the bag of visual words model, which is
widely used in computer vision. After each word in a collection of documents is
represented as a word vector using a pre-trained word embedding model, a k-means
algorithm is applied on the word vectors in order to obtain a fixed-size set of
clusters. The centroid of each cluster is interpreted as a super word embedding
that embodies all the semantically related word vectors in a certain region of
the embedding space. Every embedded word in the collection of documents is then
assigned to the nearest cluster centroid. In the end, each document is
represented as a bag of super word embeddings by computing the frequency of
each super word embedding in the respective document. We also diverge from the
idea of building a single vocabulary for the entire collection of documents,
and propose to build class-specific vocabularies for better performance. Using
this kind of representation, we report results on two text mining tasks, namely
text categorization by topic and polarity classification. On both tasks, our
model yields better performance than the standard bag of words.
| 2,017 | Computation and Language |
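The representation described above is easy to sketch end to end: cluster word vectors with k-means, then describe a document by the histogram of its words' cluster assignments. Random vectors stand in for a pre-trained embedding model, and the vocabulary and cluster count are toy choices.

```python
# Sketch of the bag-of-super-word-embeddings representation described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
vocab = ["good", "great", "bad", "awful", "plot", "actor"]
word_vectors = {w: rng.standard_normal(50) for w in vocab}   # pre-trained in practice

k = 3
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
kmeans.fit(np.stack(list(word_vectors.values())))            # super word embeddings

def bag_of_super_words(tokens):
    hist = np.zeros(k)
    for t in tokens:
        if t in word_vectors:
            hist[kmeans.predict(word_vectors[t].reshape(1, -1))[0]] += 1
    return hist

print(bag_of_super_words("the actor was great great".split()))
```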
Analogs of Linguistic Structure in Deep Representations | We investigate the compositional structure of message vectors computed by a
deep network trained on a communication game. By comparing truth-conditional
representations of encoder-produced message vectors to human-produced referring
expressions, we are able to identify aligned (vector, utterance) pairs with the
same meaning. We then search for structured relationships among these aligned
pairs to discover simple vector space transformations corresponding to
negation, conjunction, and disjunction. Our results suggest that neural
representations are capable of spontaneously developing a "syntax" with
functional analogues to qualitative properties of natural language.
| 2,017 | Computation and Language |
The RepEval 2017 Shared Task: Multi-Genre Natural Language Inference
with Sentence Representations | This paper presents the results of the RepEval 2017 Shared Task, which
evaluated neural network sentence representation learning models on the
Multi-Genre Natural Language Inference corpus (MultiNLI) recently introduced by
Williams et al. (2017). All of the five participating teams beat the
bidirectional LSTM (BiLSTM) and continuous bag of words baselines reported in
Williams et al.. The best single model used stacked BiLSTMs with residual
connections to extract sentence features and reached 74.5% accuracy on the
genre-matched test set. Surprisingly, the results of the competition were
fairly consistent across the genre-matched and genre-mismatched test sets, and
across subsets of the test data representing a variety of linguistic phenomena,
suggesting that all of the submitted systems learned reasonably
domain-independent representations for sentence meaning.
| 2,017 | Computation and Language |
Dual Rectified Linear Units (DReLUs): A Replacement for Tanh Activation
Functions in Quasi-Recurrent Neural Networks | In this paper, we introduce a novel type of Rectified Linear Unit (ReLU),
called a Dual Rectified Linear Unit (DReLU). A DReLU, which comes with an
unbounded positive and negative image, can be used as a drop-in replacement for
a tanh activation function in the recurrent step of Quasi-Recurrent Neural
Networks (QRNNs) (Bradbury et al. (2017)). Similar to ReLUs, DReLUs are less
prone to the vanishing gradient problem, they are noise robust, and they induce
sparse activations.
We independently reproduce the QRNN experiments of Bradbury et al. (2017) and
compare our DReLU-based QRNNs with the original tanh-based QRNNs and Long
Short-Term Memory networks (LSTMs) on sentiment classification and word-level
language modeling. Additionally, we evaluate on character-level language
modeling, showing that we are able to stack up to eight QRNN layers with
DReLUs, thus making it possible to improve the current state-of-the-art in
character-level language modeling over shallow architectures based on LSTMs.
| 2,019 | Computation and Language |
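One formulation consistent with the abstract (unbounded positive and negative image, ReLU-like sparsity) is the difference of two rectified pre-activations; treat this exact form as an assumption rather than the paper's definition.

```python
# One formulation consistent with the abstract: a DReLU takes two pre-activations
# and outputs relu(a) - relu(b), giving an unbounded positive and negative image
# with sparse, ReLU-like behaviour. The exact form is an assumption here.
import torch
import torch.nn.functional as F

def drelu(a, b):
    return F.relu(a) - F.relu(b)

# Toy comparison against tanh in the spot where a QRNN recurrent step would use it.
a = torch.linspace(-3, 3, 7)
b = torch.linspace(3, -3, 7)
print(drelu(a, b))        # values below -1 and above 1 are possible, unlike tanh
print(torch.tanh(a))      # bounded to (-1, 1)
```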
Can string kernels pass the test of time in Native Language
Identification? | We describe a machine learning approach for the 2017 shared task on Native
Language Identification (NLI). The proposed approach combines several kernels
using multiple kernel learning. While most of our kernels are based on
character p-grams (also known as n-grams) extracted from essays or speech
transcripts, we also use a kernel based on i-vectors, a low-dimensional
representation of audio recordings, provided by the shared task organizers. For
the learning stage, we choose Kernel Discriminant Analysis (KDA) over Kernel
Ridge Regression (KRR), because the former classifier obtains better results
than the latter one on the development set. In our previous work, we have used
a similar machine learning approach to achieve state-of-the-art NLI results.
The goal of this paper is to demonstrate that our shallow and simple approach
based on string kernels (with minor improvements) can pass the test of time and
reach state-of-the-art performance in the 2017 NLI shared task, despite the
recent advances in natural language processing. We participated in all three
tracks, in which the competitors were allowed to use only the essays (essay
track), only the speech transcripts (speech track), or both (fusion track).
Using only the data provided by the organizers for training our models, we have
reached a macro F1 score of 86.95% in the closed essay track, a macro F1 score
of 87.55% in the closed speech track, and a macro F1 score of 93.19% in the
closed fusion track. With these scores, our team (UnibucKernel) ranked in the
first group of teams in all three tracks, while attaining the best scores in
the speech and the fusion tracks.
| 2,017 | Computation and Language |
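A character p-gram (spectrum) kernel of the kind combined above can be sketched as the dot product of two p-gram count profiles; blending several values of p and normalising the kernel, as the full system presumably does, are left out.

```python
# Sketch of a character p-gram (spectrum) kernel: the kernel value of two texts
# is the dot product of their p-gram count profiles. Toy documents only.
from collections import Counter
import numpy as np

def pgrams(text, p):
    return Counter(text[i:i + p] for i in range(len(text) - p + 1))

def string_kernel(s, t, p=3):
    a, b = pgrams(s, p), pgrams(t, p)
    return sum(a[g] * b[g] for g in a if g in b)

docs = ["the student writed an essay", "the students write essays", "el estudiante escribe"]
K = np.array([[string_kernel(x, y) for y in docs] for x in docs])
print(K)   # Gram matrix; KDA or KRR would be trained on a blended, normalised K
```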