Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (1 class)
---|---|---|---
DuoRC: Towards Complex Language Understanding with Paraphrased Reading
Comprehension | We propose DuoRC, a novel dataset for Reading Comprehension (RC) that
motivates several new challenges for neural approaches in language
understanding beyond those offered by existing RC datasets. DuoRC contains
186,089 unique question-answer pairs created from a collection of 7680 pairs of
movie plots where each pair in the collection reflects two versions of the same
movie - one from Wikipedia and the other from IMDb - written by two different
authors. We asked crowdsourced workers to create questions from one version of
the plot and a different set of workers to extract or synthesize answers from
the other version. This unique characteristic of DuoRC where questions and
answers are created from different versions of a document narrating the same
underlying story ensures, by design, that there is very little lexical overlap
between the questions created from one version and the segments containing the
answer in the other version. Further, since the two versions have different
levels of plot detail, narration style, vocabulary, etc., answering questions
from the second version requires deeper language understanding and
incorporating external background knowledge. Additionally, the narrative style
of passages arising from movie plots (as opposed to typical descriptive
passages in existing datasets) exhibits the need to perform complex reasoning
over events across multiple sentences. Indeed, we observe that state-of-the-art
neural RC models which have achieved near-human performance on the SQuAD
dataset exhibit very poor performance on DuoRC, even when coupled with
traditional NLP techniques to address its challenges (F1 score of 37.42%
on DuoRC vs. 86% on SQuAD). This opens up several interesting research
avenues wherein DuoRC could complement other RC datasets to explore novel
neural approaches for studying language understanding.
| 2,018 | Computation and Language |
Generative Stock Question Answering | We study the problem of stock-related question answering (StockQA):
automatically generating answers to stock-related questions, much as
professional stock analysts provide action recommendations on stocks upon
users' requests. StockQA is quite different from previous QA tasks since (1)
the answers in StockQA are natural language sentences (rather than entities or
values) and due to the dynamic nature of StockQA, it is scarcely possible to
get reasonable answers in an extractive way from the training data; and (2)
StockQA requires properly analyzing the relationship between keywords in QA
pair and the numerical features of a stock. We propose to address the problem
with a memory-augmented encoder-decoder architecture, and integrate different
mechanisms of number understanding and generation, which is a critical
component of StockQA.
We build a large-scale dataset containing over 180K StockQA instances, based
on which various technique combinations are extensively studied and compared.
Experimental results show that a hybrid word-character model with separate
character components for number processing achieves the best performance. By
analyzing the results, we found that 44.8% of answers generated by our best
model still suffer from the generic answer problem, which can be alleviated by
a straightforward hybrid retrieval-generation model.
| 2,018 | Computation and Language |
Variational Inference In Pachinko Allocation Machines | The Pachinko Allocation Machine (PAM) is a deep topic model that allows
representing rich correlation structures among topics by a directed acyclic
graph over topics. Because of the flexibility of the model, however,
approximate inference is very difficult. Perhaps for this reason, only a small
number of potential PAM architectures have been explored in the literature. In
this paper we present an efficient and flexible amortized variational inference
method for PAM, using a deep inference network to parameterize the approximate
posterior distribution in a manner similar to the variational autoencoder. Our
inference method produces more coherent topics than state-of-the-art inference
methods for PAM while being an order of magnitude faster, which allows
exploration of a wider range of PAM architectures than have previously been
studied.
| 2,018 | Computation and Language |
Extrofitting: Enriching Word Representation and its Vector Space with
Semantic Lexicons | We propose a post-processing method, which we call extrofitting, for enriching
not only word representations but also their vector space using semantic lexicons.
The method consists of three steps: (i) expanding all word vectors by one or more
dimensions, filled with each word's representative value; (ii) transferring semantic
knowledge by averaging the representative values of synonyms and writing the result
into the expanded dimension(s), which brings the representations of synonyms closer
together; and (iii) projecting the vector space with Linear Discriminant Analysis,
which eliminates the expanded dimension(s) carrying the semantic knowledge. When
experimenting with GloVe, we find that our method outperforms Faruqui's retrofitting
on some word similarity tasks. We also report further analysis of our method with
respect to word vector dimensions and vocabulary size, as well as on other well-known
pretrained word vectors (e.g., Word2Vec, Fasttext).
| 2,018 | Computation and Language |
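Below is a minimal sketch of the three extrofitting steps described in the entry above, using numpy and scikit-learn's LinearDiscriminantAnalysis. The toy vocabulary, vectors, and synonym groups are invented placeholders, not the paper's data or exact procedure.

```python
# Minimal sketch of the extrofitting idea: expand -> inject synonym knowledge -> LDA projection.
# Toy vectors and synonym groups are illustrative placeholders, not the paper's data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

vocab = ["happy", "glad", "sad", "unhappy", "car"]
vectors = np.random.RandomState(0).randn(len(vocab), 4)   # pretend these are (low-dim) GloVe vectors
synonym_groups = [{"happy", "glad"}, {"sad", "unhappy"}]   # from a semantic lexicon, e.g. WordNet

# (i) expand every vector by one dimension filled with a per-word representative value (here: the mean).
representative = vectors.mean(axis=1, keepdims=True)
expanded = np.hstack([vectors, representative])

# (ii) overwrite the new dimension of each synonym with the group's average representative value.
labels = np.arange(len(vocab))                              # default: each word is its own class
for class_id, group in enumerate(synonym_groups, start=len(vocab)):
    idx = [vocab.index(w) for w in group]
    expanded[idx, -1] = representative[idx].mean()
    labels[idx] = class_id

# (iii) project with LDA so that synonyms (same label) are pulled together and the
# extra dimension is folded back into the space.
n_components = min(vectors.shape[1], len(set(labels)) - 1)
extrofitted = LinearDiscriminantAnalysis(n_components=n_components).fit_transform(expanded, labels)
print(extrofitted.shape)
```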
Automated essay scoring with string kernels and word embeddings | In this work, we present an approach based on combining string kernels and
word embeddings for automatic essay scoring. String kernels capture the
similarity among strings based on counting common character n-grams, which are
a low-level yet powerful type of feature, demonstrating state-of-the-art
results in various text classification tasks such as Arabic dialect
identification or native language identification. To the best of our knowledge, we are
the first to apply string kernels to automatically score essays. We are also
the first to combine them with a high-level semantic feature representation,
namely the bag-of-super-word-embeddings. We report the best performance on the
Automated Student Assessment Prize data set, in both in-domain and cross-domain
settings, surpassing recent state-of-the-art deep learning approaches.
| 2,018 | Computation and Language |
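The entry above builds on character n-gram string kernels; the sketch below shows one way such a kernel can be computed and fed to a kernel regressor with scikit-learn. The essays, scores, and n-gram range are illustrative assumptions, and the bag-of-super-word-embeddings component is not shown.

```python
# Minimal sketch of a character n-gram string kernel for essay scoring:
# essays are mapped to character n-gram counts and the kernel is their (normalized) dot product.
# The toy essays and scores are placeholders, not the ASAP data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVR

essays = ["The quick brown fox jumps over the lazy dog.",
          "A hastily written essay with little structure",
          "An essay that develops its argument carefully and clearly."]
scores = np.array([2.0, 1.0, 3.0])   # toy holistic scores

# Character n-grams (here 3-5) as low-level features.
vectorizer = CountVectorizer(analyzer="char_wb", ngram_range=(3, 5))
X = vectorizer.fit_transform(essays).astype(float)

# String kernel = cosine-normalized dot product between n-gram count vectors.
K = (X @ X.T).toarray()
norms = np.sqrt(np.diag(K))
K = K / np.outer(norms, norms)

# A kernel regressor can be trained directly on the precomputed kernel matrix.
model = SVR(kernel="precomputed").fit(K, scores)
print(model.predict(K))
```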
Faster Shift-Reduce Constituent Parsing with a Non-Binary, Bottom-Up
Strategy | An increasingly wide range of artificial intelligence applications rely on
syntactic information to process and extract meaning from natural language text
or speech, with constituent trees being one of the most widely used syntactic
formalisms. To produce these phrase-structure representations from sentences in
natural language, shift-reduce constituent parsers have become one of the most
efficient approaches. Increasing their accuracy and speed is still one of the
main objectives pursued by the research community so that artificial
intelligence applications that make use of parsing outputs, such as machine
translation or voice assistant services, can improve their performance. With
this goal in mind, we propose in this article a novel non-binary shift-reduce
algorithm for constituent parsing. Our parser follows a classical bottom-up
strategy but, unlike others, it straightforwardly creates non-binary branchings
with just one Reduce transition, instead of requiring prior binarization or a
sequence of binary transitions, allowing its direct application to any language
without the need for further resources such as percolation tables. As a result,
it uses fewer transitions per sentence than existing transition-based
constituent parsers, becoming the fastest such system and, as a consequence,
speeding up downstream applications. Using static oracle training and greedy
search, the accuracy of this novel approach is on par with state-of-the-art
transition-based constituent parsers and outperforms all top-down and bottom-up
greedy shift-reduce systems on the Wall Street Journal section from the English
Penn Treebank and the Penn Chinese Treebank. Additionally, we develop a dynamic
oracle for training the proposed transition-based algorithm, achieving further
improvements in both benchmarks and obtaining the best accuracy to date on the
Penn Chinese Treebank among greedy shift-reduce parsers.
| 2,019 | Computation and Language |
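A minimal sketch of the transition-system idea from the entry above: a single non-binary Reduce transition builds an n-ary constituent in one step, so no prior binarization is needed. The sentence, labels, and transition sequence are toy placeholders, not the paper's oracle.

```python
# Minimal sketch of a non-binary shift-reduce transition system: a single
# reduce(label, k) pops k subtrees at once, so no prior binarization is needed.
# The sentence, labels, and transition sequence are toy placeholders.

class Parser:
    def __init__(self, words):
        self.buffer = list(words)   # words still to be read
        self.stack = []             # partially built subtrees

    def shift(self):
        self.stack.append(self.buffer.pop(0))

    def reduce(self, label, k):
        children = self.stack[-k:]              # take the top k subtrees...
        del self.stack[-k:]
        self.stack.append((label, children))    # ...and combine them in one transition

p = Parser(["The", "cat", "sat"])
p.shift(); p.shift()
p.reduce("NP", 2)                                # non-binary reduces also allow k > 2
p.shift()
p.reduce("VP", 1)
p.reduce("S", 2)
print(p.stack[0])   # ('S', [('NP', ['The', 'cat']), ('VP', ['sat'])])
```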
Eval all, trust a few, do wrong to none: Comparing sentence generation
models | In this paper, we study recent neural generative models for text generation
related to variational autoencoders. Previous works have employed various
techniques to control the prior distribution of the latent codes in these
models, which is important for sampling performance, but little attention has
been paid to reconstruction error. In our study, we follow a rigorous
evaluation protocol using a large set of previously used and novel automatic
and human evaluation metrics, applied to both generated samples and
reconstructions. We hope that it will become the new evaluation standard when
comparing neural generative models for text.
| 2,018 | Computation and Language |
Neural-Davidsonian Semantic Proto-role Labeling | We present a model for semantic proto-role labeling (SPRL) using an adapted
bidirectional LSTM encoding strategy that we call "Neural-Davidsonian":
predicate-argument structure is represented as pairs of hidden states
corresponding to predicate and argument head tokens of the input sequence. We
demonstrate: (1) state-of-the-art results in SPRL, and (2) that our network
naturally shares parameters between attributes, allowing for learning new
attribute types with limited added supervision.
| 2,019 | Computation and Language |
Dynamic Meta-Embeddings for Improved Sentence Representations | While one of the first steps in many NLP systems is selecting what
pre-trained word embeddings to use, we argue that such a step is better left
for neural networks to figure out by themselves. To that end, we introduce
dynamic meta-embeddings, a simple yet effective method for the supervised
learning of embedding ensembles, which leads to state-of-the-art performance
within the same model class on a variety of tasks. We subsequently show how the
technique can be used to shed new light on the usage of word embeddings in NLP
systems.
| 2,018 | Computation and Language |
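A minimal sketch of the dynamic meta-embedding idea from the entry above, written with PyTorch: each embedding set is projected to a shared size and mixed with per-token attention weights that are learned with the rest of the network. All sizes and the scoring function are illustrative assumptions.

```python
# Minimal sketch of dynamic meta-embeddings: each embedding set is projected to a
# shared size and mixed with per-token attention weights learned end to end.
import torch
import torch.nn as nn

class DynamicMetaEmbedding(nn.Module):
    def __init__(self, embedding_dims, shared_dim, vocab_size):
        super().__init__()
        self.embeddings = nn.ModuleList(
            nn.Embedding(vocab_size, d) for d in embedding_dims)
        self.projections = nn.ModuleList(
            nn.Linear(d, shared_dim) for d in embedding_dims)
        self.scorer = nn.Linear(shared_dim, 1)   # self-attention score per embedding set

    def forward(self, token_ids):                        # (batch, seq_len)
        projected = torch.stack(
            [proj(emb(token_ids)) for emb, proj in zip(self.embeddings, self.projections)],
            dim=2)                                        # (batch, seq_len, n_sets, shared_dim)
        weights = torch.softmax(self.scorer(projected), dim=2)
        return (weights * projected).sum(dim=2)           # (batch, seq_len, shared_dim)

dme = DynamicMetaEmbedding(embedding_dims=[300, 100], shared_dim=256, vocab_size=10000)
out = dme(torch.randint(0, 10000, (4, 12)))
print(out.shape)   # torch.Size([4, 12, 256])
```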
Generating Natural Language Adversarial Examples | Deep neural networks (DNNs) are vulnerable to adversarial examples,
perturbations to correctly classified examples which can cause the model to
misclassify. In the image domain, these perturbations are often virtually
indistinguishable to human perception, causing humans and state-of-the-art
models to disagree. However, in the natural language domain, small
perturbations are clearly perceptible, and the replacement of a single word can
drastically alter the semantics of the document. Given these challenges, we use
a black-box population-based optimization algorithm to generate semantically
and syntactically similar adversarial examples that fool well-trained sentiment
analysis and textual entailment models with success rates of 97% and 70%,
respectively. We additionally demonstrate that 92.3% of the successful
sentiment analysis adversarial examples are classified to their original label
by 20 human annotators, and that the examples are perceptibly quite similar.
Finally, we discuss an attempt to use adversarial training as a defense, but
fail to yield improvement, demonstrating the strength and diversity of our
adversarial examples. We hope our findings encourage researchers to pursue
improving the robustness of DNNs in the natural language domain.
| 2,018 | Computation and Language |
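A minimal sketch of a black-box, population-based (genetic) word-substitution search in the spirit of the entry above. The victim classifier, synonym source, and fitness threshold are invented stand-ins; a real attack would query a trained model and a counter-fitted embedding neighbourhood.

```python
# Minimal sketch of a black-box population-based (genetic) word-substitution attack:
# candidates are scored only through the victim model's output probabilities.
# `predict_proba` and `get_synonyms` are stand-ins for a real model and synonym source.
import random

def predict_proba(tokens):            # placeholder victim classifier: P(positive)
    return 0.9 - 0.25 * tokens.count("mediocre") - 0.25 * tokens.count("dull")

def get_synonyms(word):               # placeholder for an embedding-neighbourhood lookup
    return {"good": ["decent", "mediocre"], "fun": ["amusing", "dull"]}.get(word, [])

def mutate(tokens):
    tokens = list(tokens)
    i = random.randrange(len(tokens))
    options = get_synonyms(tokens[i])
    if options:
        tokens[i] = random.choice(options)
    return tokens

def attack(sentence, pop_size=20, generations=10):
    population = [mutate(sentence.split()) for _ in range(pop_size)]
    for _ in range(generations):
        # fitness: how far the prediction has been pushed toward the wrong class
        scored = sorted(population, key=predict_proba)
        if predict_proba(scored[0]) < 0.5:
            return " ".join(scored[0])               # label flipped: adversarial example found
        elite = scored[: pop_size // 2]
        population = elite + [mutate(random.choice(elite)) for _ in range(pop_size - len(elite))]
    return None

random.seed(0)
print(attack("the movie was good and fun"))
```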
Fine-grained Entity Typing through Increased Discourse Context and
Adaptive Classification Thresholds | Fine-grained entity typing is the task of assigning fine-grained semantic
types to entity mentions. We propose a neural architecture which learns a
distributional semantic representation that leverages a greater amount of
semantic context -- both document and sentence level information -- than prior
work. We find that additional context improves performance, with further
improvements gained by utilizing adaptive classification thresholds.
Experiments show that our approach without reliance on hand-crafted features
achieves state-of-the-art results on three benchmark datasets.
| 2,018 | Computation and Language |
Integrating Stance Detection and Fact Checking in a Unified Corpus | A reasonable approach for fact checking a claim involves retrieving
potentially relevant documents from different sources (e.g., news websites,
social media, etc.), determining the stance of each document with respect to
the claim, and finally making a prediction about the claim's factuality by
aggregating the strength of the stances, while taking the reliability of the
source into account. Moreover, a fact checking system should be able to explain
its decision by providing relevant extracts (rationales) from the documents.
Yet, this setup is not directly supported by existing datasets, which treat
fact checking, document retrieval, source credibility, stance detection and
rationale extraction as independent tasks. In this paper, we support the
interdependencies between these tasks by providing annotations for all of them
in the same corpus. We implement this setup on an Arabic fact checking corpus,
the first of its kind.
| 2,018 | Computation and Language |
Cross-lingual Semantic Parsing | We introduce the task of cross-lingual semantic parsing: mapping content
provided in a source language into a meaning representation based on a target
language. We present: (1) a meaning representation designed to allow systems to
target varying levels of structural complexity (shallow to deep analysis), (2)
an evaluation metric to measure the similarity between system output and
reference meaning representations, (3) an end-to-end model with a novel copy
mechanism that supports intrasentential coreference, and (4) an evaluation
dataset where experiments show our model outperforms strong baselines by at
least 1.18 F1 score.
| 2,018 | Computation and Language |
Semi-supervised User Geolocation via Graph Convolutional Networks | Social media user geolocation is vital to many applications such as event
detection. In this paper, we propose GCN, a multiview geolocation model based
on Graph Convolutional Networks, that uses both text and network context. We
compare GCN to the state-of-the-art, and to two baselines we propose, and show
that our model achieves or is competitive with the state-of-the-art over three
benchmark geolocation datasets when sufficient supervision is available. We
also evaluate GCN under a minimal supervision scenario, and show it outperforms
baselines. We find that highway network gates are essential for controlling the
amount of useful neighbourhood expansion in GCN.
| 2,018 | Computation and Language |
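A minimal sketch of one graph-convolution layer wrapped in a highway gate, the component the entry above identifies as essential for controlling neighbourhood expansion, written with PyTorch. The toy graph, normalization, and sizes are illustrative.

```python
# Minimal sketch of a graph-convolutional layer with a highway gate, so the
# model can control how much of the neighbourhood-propagated signal it keeps.
import torch
import torch.nn as nn

class HighwayGCNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)   # GCN weight
        self.gate = nn.Linear(dim, dim)     # highway transform gate

    def forward(self, x, adj_norm):
        # standard GCN propagation: normalized adjacency times node features
        neighbourhood = torch.relu(self.linear(adj_norm @ x))
        t = torch.sigmoid(self.gate(x))     # per-dimension gate in [0, 1]
        return t * neighbourhood + (1 - t) * x

# toy graph: 4 nodes, self-loops added, symmetric normalization D^{-1/2} (A + I) D^{-1/2}
adj = torch.tensor([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=torch.float)
adj = adj + torch.eye(4)
deg_inv_sqrt = adj.sum(dim=1).pow(-0.5)
adj_norm = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)

layer = HighwayGCNLayer(dim=16)
x = torch.randn(4, 16)                      # node features (e.g. text representations)
print(layer(x, adj_norm).shape)             # torch.Size([4, 16])
```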
Multi-Head Decoder for End-to-End Speech Recognition | This paper presents a new network architecture called multi-head decoder for
end-to-end speech recognition as an extension of a multi-head attention model.
In the multi-head attention model, multiple attentions are calculated, and
then, they are integrated into a single attention. On the other hand, instead
of the integration in the attention level, our proposed method uses multiple
decoders for each attention and integrates their outputs to generate a final
output. Furthermore, in order to make each head capture a different modality, a
different attention function is used for each head, improving recognition
performance through an ensemble effect. To evaluate the effectiveness of our
proposed method, we conduct an experimental evaluation using the Corpus of
Spontaneous Japanese. Experimental results
demonstrate that our proposed method outperforms the conventional methods such
as location-based and multi-head attention models, and that it can capture
different speech/linguistic contexts within the attention-based encoder-decoder
framework.
| 2,018 | Computation and Language |
Learning Sentence Embeddings for Coherence Modelling and Beyond | We present a novel and effective technique for performing text coherence
tasks while facilitating deeper insights into the data. Despite obtaining
ever-increasing task performance, modern deep-learning approaches to NLP tasks
often only provide users with the final network decision and no additional
understanding of the data. In this work, we show that a new type of sentence
embedding learned through self-supervision can be applied effectively to text
coherence tasks while serving as a window through which deeper understanding of
the data can be obtained. To produce these sentence embeddings, we train a
recurrent neural network to take individual sentences and predict their
location in a document in the form of a distribution over locations. We
demonstrate that these embeddings, combined with simple visual heuristics, can
be used to achieve performance competitive with state-of-the-art on multiple
text coherence tasks, outperforming more complex and specialized approaches.
Additionally, we demonstrate that these embeddings can provide insights useful
to writers for improving writing quality and informing document structuring,
and assisting readers in summarizing and locating information.
| 2,019 | Computation and Language |
A Study on Passage Re-ranking in Embedding based Unsupervised Semantic
Search | State-of-the-art approaches for (embedding-based) unsupervised semantic
search exploit either compositional similarity (of a query and a passage) or
pair-wise word (or term) similarity (from the query and the passage). By
design, word based approaches do not incorporate similarity in the larger
context (query/passage), while compositional similarity based approaches are
usually unable to take advantage of the most important cues in the context. In
this paper we propose a new compositional similarity based approach, called
variable centroid vector (VCVB), that tries to address both of these
limitations. We also present results using a different type of compositional
similarity based approach that exploits universal sentence embeddings. We
provide empirical evaluation on two different benchmarks.
| 2,019 | Computation and Language |
Adversarial Training for Community Question Answer Selection Based on
Multi-scale Matching | Community-based question answering (CQA) websites represent an important
source of information. As a result, the problem of matching the most valuable
answers to their corresponding questions has become an increasingly popular
research topic. We frame this task as a binary (relevant/irrelevant)
classification problem, and present an adversarial training framework to
alleviate the label imbalance issue. We employ a generative model to iteratively
sample a subset of challenging negative samples to fool our classification
model. Both models are alternately optimized using the REINFORCE algorithm. The
proposed method is completely different from previous ones, where negative
samples in training set are directly used or uniformly down-sampled. Further,
we propose using Multi-scale Matching which explicitly inspects the correlation
between words and ngrams of different levels of granularity. We evaluate the
proposed method on the SemEval 2016 and SemEval 2017 datasets and achieve
state-of-the-art or comparable performance.
| 2,018 | Computation and Language |
A Scalable Neural Shortlisting-Reranking Approach for Large-Scale Domain
Classification in Natural Language Understanding | Intelligent personal digital assistants (IPDAs), a popular real-life
application with spoken language understanding capabilities, can cover
potentially thousands of overlapping domains for natural language
understanding, and the task of finding the best domain to handle an utterance
becomes a challenging problem on a large scale. In this paper, we propose a set
of efficient and scalable neural shortlisting-reranking models for large-scale
domain classification in IPDAs. The shortlisting stage focuses on efficiently
trimming all domains down to a list of k-best candidate domains, and the
reranking stage performs a list-wise reranking of the initial k-best domains
with additional contextual information. We show the effectiveness of our
approach with extensive experiments on 1,500 IPDA domains.
| 2,018 | Computation and Language |
Efficient Large-Scale Domain Classification with Personalized Attention | In this paper, we explore the task of mapping spoken language utterances to
one of thousands of natural language understanding domains in intelligent
personal digital assistants (IPDAs). This scenario is observed for many
mainstream IPDAs in industry that allow third parties to develop thousands of
new domains to augment built-in ones to rapidly increase domain coverage and
overall IPDA capabilities. We propose a scalable neural model architecture with
a shared encoder, a novel attention mechanism that incorporates personalization
information, and domain-specific classifiers that solve the problem
efficiently. Our architecture is designed to efficiently accommodate new
domains that appear in-between full model retraining cycles with a rapid
bootstrapping mechanism two orders of magnitude faster than retraining. We
account for practical constraints in real-time production systems, and design
to minimize memory footprint and runtime latency. We demonstrate that
incorporating personalization results in significantly more accurate domain
classification in the setting with thousands of overlapping domains.
| 2,018 | Computation and Language |
Unsupervised Discrete Sentence Representation Learning for Interpretable
Neural Dialog Generation | The encoder-decoder dialog model is one of the most prominent methods used to
build dialog systems in complex domains. Yet it is limited because it cannot
output interpretable actions as in traditional systems, which hinders humans
from understanding its generation process. We present an unsupervised discrete
sentence representation learning method that can integrate with any existing
encoder-decoder dialog models for interpretable response generation. Building
upon variational autoencoders (VAEs), we present two novel models, DI-VAE and
DI-VST that improve VAEs and can discover interpretable semantics via either
auto-encoding or context prediction. Our methods have been validated on
real-world dialog datasets to discover semantic representations and enhance
encoder-decoder models with interpretable generation.
| 2,018 | Computation and Language |
Inducing and Embedding Senses with Scaled Gumbel Softmax | Methods for learning word sense embeddings represent a single word with
multiple sense-specific vectors. These methods should not only produce
interpretable sense embeddings, but should also learn how to select which sense
to use in a given context. We propose an unsupervised model that learns sense
embeddings using a modified Gumbel softmax function, which allows for
differentiable discrete sense selection. Our model produces sense embeddings
that are competitive (and sometimes state of the art) on multiple similarity
based downstream evaluations. However, performance on these downstream
evaluation tasks does not correlate with the interpretability of sense embeddings,
as we discover through an interpretability comparison with competing
multi-sense embeddings. While many previous approaches perform well on
downstream evaluations, they do not produce interpretable embeddings and learn
duplicated sense groups; our method achieves the best of both worlds.
| 2,019 | Computation and Language |
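A minimal sketch of differentiable sense selection with a (scaled) Gumbel softmax, the mechanism named in the entry above. The scaling constant, temperature, and shapes are illustrative and not the paper's exact formulation.

```python
# Minimal sketch of differentiable sense selection with a (scaled) Gumbel softmax:
# noisy sense logits give a near-one-hot weighting over per-sense vectors.
import numpy as np

def gumbel_softmax(logits, temperature=0.5, scale=1.0, rng=np.random):
    gumbel_noise = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + scale * gumbel_noise) / temperature
    y = y - y.max()                      # numerical stability
    exp_y = np.exp(y)
    return exp_y / exp_y.sum()

rng = np.random.RandomState(0)
n_senses, dim = 3, 50
sense_vectors = rng.randn(n_senses, dim)         # one vector per sense of a word
context_logits = np.array([2.0, 0.1, -1.0])      # affinity of the current context to each sense

weights = gumbel_softmax(context_logits, rng=rng)
selected = weights @ sense_vectors               # soft, differentiable "selected" sense embedding
print(weights.round(3), selected.shape)
```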
IIIDYT at SemEval-2018 Task 3: Irony detection in English tweets | In this paper we introduce our system for the task of Irony detection in
English tweets, part of SemEval 2018. We propose a representation learning
approach that relies on a multi-layered bidirectional LSTM, without using
external features that provide additional semantic information. Although our
model is able to outperform the baseline on the validation set, our results
show limited generalization power over the test set. Given the limited size of
the dataset, we believe that using additional pre-training schemes would greatly
improve the obtained results.
| 2,018 | Computation and Language |
Performance Impact Caused by Hidden Bias of Training Data for
Recognizing Textual Entailment | The quality of training data is one of the crucial problems when a
learning-centered approach is employed. This paper proposes a new method to
investigate the quality of a large corpus designed for the recognizing textual
entailment (RTE) task. The proposed method, which is inspired by a statistical
hypothesis test, consists of two phases: the first phase is to introduce the
predictability of textual entailment labels as a null hypothesis which is
extremely unacceptable if a target corpus has no hidden bias, and the second
phase is to test the null hypothesis using a Naive Bayes model. The
experimental result on the Stanford Natural Language Inference (SNLI) corpus
does not reject the null hypothesis. This indicates that the SNLI corpus has a
hidden bias which allows prediction of textual entailment labels from
hypothesis sentences even when no context information is given by a premise
sentence. This paper also presents the performance impact on NN models for RTE
caused by this hidden bias.
| 2,018 | Computation and Language |
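A minimal sketch of the kind of hypothesis-only probe the entry above describes: a Naive Bayes model trained to predict entailment labels from hypothesis sentences alone, here with scikit-learn. The tiny example pairs are stand-ins for SNLI.

```python
# Minimal sketch of a hypothesis-only bias probe: a Naive Bayes classifier is trained
# to predict entailment labels from the hypothesis sentence alone (no premise).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

hypotheses = [
    "A man is sleeping.",            # negation-style cues often mark contradiction
    "Nobody is outside.",
    "A person is outdoors.",
    "Someone is doing something.",   # vague hypotheses often mark entailment
    "The woman is not eating.",
    "People are at a competition.",
]
labels = ["contradiction", "contradiction", "entailment",
          "entailment", "contradiction", "neutral"]

probe = make_pipeline(CountVectorizer(), MultinomialNB())
probe.fit(hypotheses, labels)

# If accuracy on held-out hypotheses is well above the majority-class baseline,
# the corpus leaks label information through the hypotheses alone.
print(probe.predict(["Nobody is doing anything.", "A person is outside."]))
```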
Reduce, Reuse, Recycle: New uses for old QA resources | We investigate applying repurposed generic QA data and models to a recently
proposed relation extraction task. We find that training on SQuAD produces
better zero-shot performance and more robust generalisation compared to the
task specific training set. We also show that standard QA architectures (e.g.
FastQA or BiDAF) can be applied to the slot filling queries without the need
for model modification.
| 2,018 | Computation and Language |
Same Representation, Different Attentions: Shareable Sentence
Representation Learning from Multiple Tasks | Distributed representation plays an important role in deep learning based
natural language processing. However, the representation of a sentence often
varies in different tasks, which is usually learned from scratch and suffers
from the limited amounts of training data. In this paper, we claim that a good
sentence representation should be invariant and can benefit the various
subsequent tasks. To achieve this purpose, we propose a new scheme of
information sharing for multi-task learning. More specifically, all tasks share
the same sentence representation and each task can select the task-specific
information from the shared sentence representation with an attention mechanism.
The query vector of each task's attention could be either static parameters or
generated dynamically. We conduct extensive experiments on 16 different text
classification tasks, which demonstrate the benefits of our architecture.
| 2,018 | Computation and Language |
Word Embedding Perturbation for Sentence Classification | In this technical report, we aim to mitigate the overfitting problem in
natural language processing by applying data augmentation methods. Specifically,
we perturb the input word embeddings with several types of noise, such as
Gaussian, Bernoulli, and adversarial noise. We also apply
several constraints to the different types of noise. By implementing these proposed
data augmentation methods, the baseline models can gain improvements on several
sentence classification tasks.
| 2,018 | Computation and Language |
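A minimal sketch of the noise types mentioned above applied to word embeddings as data augmentation, with a simple norm constraint. Adversarial noise, which requires gradients from a trained model, is omitted; the shapes and constants are illustrative.

```python
# Minimal sketch of perturbing word embeddings for data augmentation:
# Gaussian noise and Bernoulli (dropout-style) noise, with a simple norm constraint.
import numpy as np

def gaussian_noise(emb, sigma=0.1, rng=np.random):
    return emb + rng.normal(0.0, sigma, size=emb.shape)

def bernoulli_noise(emb, keep_prob=0.9, rng=np.random):
    mask = rng.binomial(1, keep_prob, size=emb.shape)
    return emb * mask / keep_prob            # inverted-dropout scaling

def clip_perturbation(original, perturbed, max_norm=0.5):
    delta = perturbed - original              # constrain the noise to an L2 ball per vector
    norms = np.linalg.norm(delta, axis=-1, keepdims=True)
    delta = delta * np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return original + delta

rng = np.random.RandomState(0)
sentence_embeddings = rng.randn(7, 300)        # 7 tokens, 300-dim embeddings
noisy = clip_perturbation(sentence_embeddings, gaussian_noise(sentence_embeddings, rng=rng))
print(noisy.shape)
```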
Automatic Language Identification in Texts: A Survey | Language identification (LI) is the problem of determining the natural
language that a document or part thereof is written in. Automatic LI has been
extensively researched for over fifty years. Today, LI is a key part of many
text processing pipelines, as text processing techniques generally assume that
the language of the input text is known. Research in this area has recently
been especially active. This article provides a brief history of LI research,
and an extensive survey of the features and methods used so far in the LI
literature. For describing the features and methods we introduce a unified
notation. We discuss evaluation methods, applications of LI, as well as
off-the-shelf LI systems that do not require training by the end user. Finally,
we identify open issues, survey the work to date on each issue, and propose
future directions for research in LI.
| 2,018 | Computation and Language |
A neural interlingua for multilingual machine translation | We incorporate an explicit neural interlingua into a multilingual
encoder-decoder neural machine translation (NMT) architecture. We demonstrate
that our model learns a language-independent representation by performing
direct zero-shot translation (without using pivot translation), and by using
the source sentence embeddings to create an English Yelp review classifier
that, through the mediation of the neural interlingua, can also classify French
and German reviews. Furthermore, we show that, despite using a smaller number
of parameters than a pairwise collection of bilingual NMT models, our approach
produces comparable BLEU scores for each language pair in WMT15.
| 2,018 | Computation and Language |
Linguistically-Informed Self-Attention for Semantic Role Labeling | Current state-of-the-art semantic role labeling (SRL) uses a deep neural
network with no explicit linguistic features. However, prior work has shown
that gold syntax trees can dramatically improve SRL decoding, suggesting the
possibility of increased accuracy from explicit modeling of syntax. In this
work, we present linguistically-informed self-attention (LISA): a neural
network model that combines multi-head self-attention with multi-task learning
across dependency parsing, part-of-speech tagging, predicate detection and SRL.
Unlike previous models which require significant pre-processing to prepare
linguistic features, LISA can incorporate syntax using merely raw tokens as
input, encoding the sequence only once to simultaneously perform parsing,
predicate detection and role labeling for all predicates. Syntax is
incorporated by training one attention head to attend to syntactic parents for
each token. Moreover, if a high-quality syntactic parse is already available,
it can be beneficially injected at test time without re-training our SRL model.
In experiments on CoNLL-2005 SRL, LISA achieves new state-of-the-art
performance for a model using predicted predicates and standard word
embeddings, attaining 2.5 F1 absolute higher than the previous state-of-the-art
on newswire and more than 3.5 F1 on out-of-domain data, nearly 10% reduction in
error. On CoNLL-2012 English SRL we also show an improvement of more than 2.5
F1. LISA also outperforms the state-of-the-art with contextually-encoded
(ELMo) word representations, by nearly 1.0 F1 on news and more than 2.0 F1 on
out-of-domain text.
| 2,018 | Computation and Language |
Knowledge-based end-to-end memory networks | End-to-end dialog systems have become very popular because they hold the
promise of learning directly from human to human dialog interaction. Retrieval
and Generative methods have been explored in this area with mixed results. A
key element that is missing so far is the incorporation of a priori knowledge
about the task at hand. This knowledge may exist in the form of structured or
unstructured information. As a first step towards this direction, we present a
novel approach, Knowledge based end-to-end memory networks (KB-memN2N), which
allows special handling of named entities for goal-oriented dialog tasks. We
present results on two datasets, DSTC6 challenge dataset and dialog bAbI tasks.
| 2,018 | Computation and Language |
Spell Once, Summon Anywhere: A Two-Level Open-Vocabulary Language Model | We show how the spellings of known words can help us deal with unknown words
in open-vocabulary NLP tasks. The method we propose can be used to extend any
closed-vocabulary generative model, but in this paper we specifically consider
the case of neural language modeling. Our Bayesian generative story combines a
standard RNN language model (generating the word tokens in each sentence) with
an RNN-based spelling model (generating the letters in each word type). These
two RNNs respectively capture sentence structure and word structure, and are
kept separate as in linguistics. By invoking the second RNN to generate
spellings for novel words in context, we obtain an open-vocabulary language
model. For known words, embeddings are naturally inferred by combining evidence
from type spelling and token context. Comparing to baselines (including a novel
strong baseline), we beat previous work and establish state-of-the-art results
on multiple datasets.
| 2,020 | Computation and Language |
Collecting Diverse Natural Language Inference Problems for Sentence
Representation Evaluation | We present a large-scale collection of diverse natural language inference
(NLI) datasets that help provide insight into how well a sentence
representation captures distinct types of reasoning. The collection results
from recasting 13 existing datasets from 7 semantic phenomena into a common NLI
structure, resulting in over half a million labeled context-hypothesis pairs in
total. We refer to our collection as the DNC: Diverse Natural Language
Inference Collection. The DNC is available online at https://www.decomp.net,
and will grow over time as additional resources are recast and added from novel
sources.
| 2,018 | Computation and Language |
Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End
Task-Oriented Dialog Systems | End-to-end task-oriented dialog systems usually suffer from the challenge of
incorporating knowledge bases. In this paper, we propose a novel yet simple
end-to-end differentiable model called memory-to-sequence (Mem2Seq) to address
this issue. Mem2Seq is the first neural generative model that combines the
multi-hop attention over memories with the idea of pointer networks. We
empirically show how Mem2Seq controls each generation step, and how its
multi-hop attention mechanism helps in learning correlations between memories.
In addition, our model is quite general without complicated task-specific
designs. As a result, we show that Mem2Seq can be trained faster and attain the
state-of-the-art performance on three different task-oriented dialog datasets.
| 2,018 | Computation and Language |
Parsing Tweets into Universal Dependencies | We study the problem of analyzing tweets with Universal Dependencies. We
extend the UD guidelines to cover special constructions in tweets that affect
tokenization, part-of-speech tagging, and labeled dependencies. Using the
extended guidelines, we create a new tweet treebank for English (Tweebank v2)
that is four times larger than the (unlabeled) Tweebank v1 introduced by Kong
et al. (2014). We characterize the disagreements between our annotators and
show that it is challenging to deliver consistent annotation due to ambiguity
in understanding and explaining tweets. Nonetheless, using the new treebank, we
build a pipeline system to parse raw tweets into UD. To overcome annotation
noise without sacrificing computational efficiency, we propose a new method to
distill an ensemble of 20 transition-based parsers into a single one. Our
parser achieves an improvement of 2.2 in LAS over the un-ensembled baseline and
outperforms parsers that are state-of-the-art on other treebanks in both
accuracy and speed.
| 2,018 | Computation and Language |
Clinical Assistant Diagnosis for Electronic Medical Record Based on
Convolutional Neural Network | Automatically extracting useful information from electronic medical records
along with conducting disease diagnoses is a promising task for both clinical
decision support (CDS) and natural language processing (NLP). Most of the existing
systems are based on artificially constructed knowledge bases, and then
auxiliary diagnosis is done by rule matching. In this study, we present a
clinical intelligent decision approach based on Convolutional Neural
Networks (CNN), which can automatically extract high-level semantic information
of electronic medical records and then perform automatic diagnosis without
artificial construction of rules or knowledge bases. We use 18,590 collected
real-world clinical electronic medical records to train and test the proposed
model. Experimental results show that the proposed model can achieve 98.67%
accuracy and 96.02% recall, which strongly supports that using a convolutional
neural network to automatically learn high-level semantic features of
electronic medical records and then conduct assisted diagnosis is
feasible and effective.
| 2,018 | Computation and Language |
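A minimal sketch of a standard text CNN of the kind the entry above describes (embedding, 1-D convolutions of several widths, max pooling, diagnosis softmax), written with PyTorch. Vocabulary size, label set, and all dimensions are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a text CNN over electronic-medical-record tokens:
# embedding -> 1-D convolutions of several widths -> max pooling -> diagnosis logits.
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=128, n_classes=10, widths=(3, 4, 5), n_filters=64):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(emb_dim, n_filters, kernel_size=w, padding=w // 2) for w in widths)
        self.classifier = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, token_ids):                      # (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

model = TextCNN()
records = torch.randint(0, 5000, (8, 200))             # 8 records, 200 tokens each
logits = model(records)
print(logits.shape)                                     # torch.Size([8, 10])
```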
On the Diachronic Stability of Irregularity in Inflectional Morphology | Many languages' inflectional morphological systems are replete with
irregulars, i.e., words that do not seem to follow standard inflectional rules.
In this work, we quantitatively investigate the conditions under which
irregulars can survive in a language over the course of time. Using recurrent
neural networks to simulate language learners, we test the diachronic relation
between frequency of words and their irregularity.
| 2,018 | Computation and Language |
NLITrans at SemEval-2018 Task 12: Transfer of Semantic Knowledge for
Argument Comprehension | The Argument Reasoning Comprehension Task requires significant language
understanding and complex reasoning over world knowledge. We focus on transfer
of a sentence encoder to bootstrap more complicated models given the small size
of the dataset. Our best model uses a pre-trained BiLSTM to encode input
sentences, learns task-specific features for the argument and warrants, then
performs independent argument-warrant matching. This model achieves mean test
set accuracy of 64.43%. Encoder transfer yields a significant gain to our best
model over random initialization. Independent warrant matching effectively
doubles the size of the dataset and provides additional regularization. We
demonstrate that regularization comes from ignoring statistical correlations
between warrant features and position. We also report an experiment with our
best model that only matches warrants to reasons, ignoring claims. Relatively
low performance degradation suggests that our model is not necessarily learning
the intended task.
| 2,018 | Computation and Language |
PlusEmo2Vec at SemEval-2018 Task 1: Exploiting emotion knowledge from
emoji and #hashtags | This paper describes our system that has been submitted to SemEval-2018 Task
1: Affect in Tweets (AIT) to solve five subtasks. We focus on modeling both
sentence and word level representations of emotion inside texts through large
distantly labeled corpora with emojis and hashtags. We transfer the emotional
knowledge by exploiting neural network models as feature extractors and use
these representations for traditional machine learning models such as support
vector regression (SVR) and logistic regression to solve the competition tasks.
Our system placed in the top three for all subtasks in which we participated.
| 2,018 | Computation and Language |
Exploiting Semantics in Neural Machine Translation with Graph
Convolutional Networks | Semantic representations have long been argued as potentially useful for
enforcing meaning preservation and improving generalization performance of
machine translation methods. In this work, we are the first to incorporate
information about predicate-argument structure of source sentences (namely,
semantic-role representations) into neural machine translation. We use Graph
Convolutional Networks (GCNs) to inject a semantic bias into sentence encoders
and achieve improvements in BLEU scores over the linguistic-agnostic and
syntax-aware versions on the English-German language pair.
| 2,020 | Computation and Language |
Bilingual Embeddings with Random Walks over Multilingual Wordnets | Bilingual word embeddings represent words of two languages in the same space,
and allow knowledge to be transferred from one language to the other without machine
translation. The main approach is to train monolingual embeddings first and
then map them using bilingual dictionaries. In this work, we present a novel
method to learn bilingual embeddings based on multilingual knowledge bases (KB)
such as WordNet. Our method extracts bilingual information from multilingual
wordnets via random walks and learns a joint embedding space in one go. We
further reinforce cross-lingual equivalence by adding bilingual constraints in
the loss function of the popular skipgram model. Our experiments involve twelve
cross-lingual word similarity and relatedness datasets in six language pairs
covering four languages, and show that: 1) random walks over multilingual
wordnets improve results over just using dictionaries; 2) multilingual wordnets
on their own improve over text-based systems in similarity datasets; 3) the
good results are consistent for large wordnets (e.g. English, Spanish), smaller
wordnets (e.g. Basque) or loosely aligned wordnets (e.g. Italian); 4) the
combination of wordnets and text yields the best results, above mapping-based
approaches. Our method can be applied to richer KBs like DBpedia or BabelNet,
and can be easily extended to multilingual embeddings. All software and
resources are open source.
| 2,018 | Computation and Language |
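A minimal sketch of the random-walk idea from the entry above: walks over a graph that links words of two languages through shared synsets emit a mixed-language pseudo-corpus, on which a standard skipgram model is trained (here assuming gensim 4.x's Word2Vec is available). The tiny hand-made graph is a stand-in for multilingual WordNet.

```python
# Minimal sketch of generating a bilingual pseudo-corpus by random walks over a graph
# linking English and Spanish words through shared synsets, then training skipgram on it.
import random
from gensim.models import Word2Vec

# word -> neighbours (words sharing a synset, across both languages); toy stand-in for WordNet
graph = {
    "dog":    ["perro", "hound", "canine"],
    "perro":  ["dog", "can", "hound"],
    "hound":  ["dog", "perro"],
    "canine": ["dog"],
    "can":    ["perro"],
    "cat":    ["gato", "feline"],
    "gato":   ["cat", "feline"],
    "feline": ["cat", "gato"],
}

def random_walks(graph, n_walks=200, walk_len=8, rng=random):
    walks = []
    for _ in range(n_walks):
        node = rng.choice(list(graph))
        walk = [node]
        for _ in range(walk_len - 1):
            node = rng.choice(graph[node])
            walk.append(node)
        walks.append(walk)
    return walks

random.seed(0)
corpus = random_walks(graph)
model = Word2Vec(sentences=corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=20)
print(model.wv.most_similar("dog", topn=3))   # Spanish neighbours should appear nearby
```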
Semantic Parsing with Syntax- and Table-Aware SQL Generation | We present a generative model to map natural language questions into SQL
queries. Existing neural network based approaches typically generate a SQL
query word by word; however, a large portion of the generated results are
incorrect or not executable due to the mismatch between question words and
table contents. Our approach addresses this problem by considering the
structure of the table and the syntax of the SQL language. The quality of the generated
SQL query is significantly improved through (1) learning to replicate content
from column names, cells or SQL keywords; and (2) improving the generation of
WHERE clause by leveraging the column-cell relation. Experiments are conducted
on WikiSQL, a recently released dataset with the largest number of question-SQL pairs.
Our approach significantly improves the state-of-the-art execution accuracy
from 69.0% to 74.4%.
| 2,018 | Computation and Language |
Exploiting Partially Annotated Data for Temporal Relation Extraction | Annotating temporal relations (TempRel) between events described in natural
language is known to be labor intensive, partly because the total number of
TempRels is quadratic in the number of events. As a result, only a small number
of documents are typically annotated, limiting the coverage of various
lexical/semantic phenomena. In order to improve existing approaches, one
possibility is to make use of the readily available, partially annotated data
(P as in partial) that cover more documents. However, missing annotations in P
are known to hurt, rather than help, existing systems. This work is a case
study in exploring various usages of P for TempRel extraction. Results show
that despite missing annotations, P is still a useful supervision signal for
this task within a constrained bootstrapping learning framework. The system
described in this paper is publicly available.
| 2,018 | Computation and Language |
LightRel SemEval-2018 Task 7: Lightweight and Fast Relation
Classification | We present LightRel, a lightweight and fast relation classifier. Our goal is
to develop a high baseline for different relation extraction tasks. By defining
only very few data-internal, word-level features and external knowledge sources
in the form of word clusters and word embeddings, we train a fast and simple
linear classifier.
| 2,018 | Computation and Language |
Attention Based Natural Language Grounding by Navigating Virtual
Environment | In this work, we focus on the problem of grounding language by training an
agent to follow a set of natural language instructions and navigate to a target
object in an environment. The agent receives visual information through raw
pixels and a natural language instruction telling what task needs to be
achieved and is trained in an end-to-end way. We develop an attention mechanism
for multi-modal fusion of visual and textual modalities that allows the agent
to learn to complete the task and achieve language grounding. Our experimental
results show that our attention mechanism outperforms the existing multi-modal
fusion mechanisms proposed for both 2D and 3D environments in order to solve
the above-mentioned task in terms of both speed and success rate. We show that
the learnt textual representations are semantically meaningful as they follow
vector arithmetic in the embedding space. The effectiveness of our attention
approach over the contemporary fusion mechanisms is also highlighted from the
textual embeddings learnt by the different approaches. We also show that our
model generalizes effectively to unseen scenarios and exhibits zero-shot
generalization capabilities both in 2D and 3D environments. The code for our 2D
environment as well as the models that we developed for both 2D and 3D are
available at https://github.com/rl-lang-grounding/rl-lang-ground.
| 2,018 | Computation and Language |
Mixing Context Granularities for Improved Entity Linking on Question
Answering Data across Entity Categories | The first stage of every knowledge base question answering approach is to
link entities in the input question. We investigate entity linking in the
context of a question answering task and present a jointly optimized neural
architecture for entity mention detection and entity disambiguation that models
the surrounding context on different levels of granularity. We use the Wikidata
knowledge base and available question answering datasets to create benchmarks
for entity linking on question answering data. Our approach outperforms the
previous state-of-the-art system on this data, resulting in an average 8%
improvement of the final score. We further demonstrate that our model delivers
a strong performance across different entity categories.
| 2,018 | Computation and Language |
ASR Performance Prediction on Unseen Broadcast Programs using
Convolutional Neural Networks | In this paper, we address a relatively new task: prediction of ASR
performance on unseen broadcast programs. We first propose a heterogeneous
French corpus dedicated to this task. Two prediction approaches are compared: a
state-of-the-art performance prediction based on regression (engineered
features) and a new strategy based on convolutional neural networks (learnt
features). We particularly focus on the combination of both textual (ASR
transcription) and signal inputs. While the joint use of textual and signal
features did not work for the regression baseline, the combination of inputs
for CNNs leads to the best WER prediction performance. We also show that our
CNN model predicts the WER distribution on a collection of speech recordings
remarkably well.
| 2,018 | Computation and Language |
Using Aspect Extraction Approaches to Generate Review Summaries and User
Profiles | Reviews of products or services on Internet marketplace websites contain a
rich amount of information. Users often wish to survey reviews or review
snippets from the perspective of a certain aspect, which has resulted in a
large body of work on aspect identification and extraction from such corpora.
In this work, we evaluate a newly-proposed neural model for aspect extraction
on two practical tasks. The first is to extract canonical sentences of various
aspects from reviews, and is judged by human evaluators against alternatives. A
$k$-means baseline does remarkably well in this setting. The second experiment
focuses on the suitability of the recovered aspect distributions to represent
users by the reviews they have written. Through a set of review reranking
experiments, we find that aspect-based profiles can largely capture notions of
user preferences, by showing that divergent users generate markedly different
review rankings.
| 2,020 | Computation and Language |
Data-Driven Investigative Journalism For Connectas Dataset | The following paper explores the possibility of using Machine Learning
algorithms to detect the cases of corruption and malpractice by governments.
The dataset used by the authors contains information about several government
contracts in Colombia from 2007 to 2012. The authors begin by exploring
and cleaning the data, after which they perform feature engineering
before finally implementing Machine Learning models to detect anomalies in the
given dataset.
| 2,018 | Computation and Language |
Can Eye Movement Data Be Used As Ground Truth For Word Embeddings
Evaluation? | In recent years a certain success in the task of modeling lexical semantics
was obtained with distributional semantic models. Nevertheless, the scientific
community is still unsure which evaluation method is the most reliable for
these models. Some researchers argue that the only possible gold standard could
be obtained from neuro-cognitive resources that store information about human
cognition. One such resource is eye movement data on silent reading. The
goal of this work is to test the hypothesis of whether such data could be used
to evaluate distributional semantic models on different languages. We propose
experiments with English and Russian eye movement datasets (Provo Corpus, GECO
and Russian Sentence Corpus), word vectors (Skip-Gram models trained on
national corpora and Web corpora) and word similarity datasets of Russian and
English assessed by humans in order to find the existence of correlation
between embeddings and eye movement data and test the hypothesis that this
correlation is language independent. As a result, we found that the validity of
the hypothesis being tested could be questioned.
| 2,018 | Computation and Language |
Detecting Syntactic Features of Translated Chinese | We present a machine learning approach to distinguish texts translated to
Chinese (by humans) from texts originally written in Chinese, with a focus on a
wide range of syntactic features. Using Support Vector Machines (SVMs) as
classifier on a genre-balanced corpus in translation studies of Chinese, we
find that constituent parse trees and dependency triples as features without
lexical information perform very well on the task, with an F-measure above 90%,
close to the results of lexical n-gram features, without the risk of learning
topic information rather than translation features. Thus, we claim syntactic
features alone can accurately distinguish translated from original Chinese.
Translated Chinese exhibits an increased use of determiners, subject position
pronouns, NP + 'de' as NP modifiers, multiple NPs or VPs conjoined by a Chinese
specific punctuation, among other structures. We also interpret the syntactic
features with reference to previous translation studies in Chinese,
particularly the usage of pronouns.
| 2,018 | Computation and Language |
A Call for Clarity in Reporting BLEU Scores | The field of machine translation faces an under-recognized problem because of
inconsistency in the reporting of scores from its dominant metric. Although
people refer to "the" BLEU score, BLEU is in fact a parameterized metric whose
values can vary wildly with changes to these parameters. These parameters are
often not reported or are hard to find, and consequently, BLEU scores between
papers cannot be directly compared. I quantify this variation, finding
differences as high as 1.8 between commonly used configurations. The main
culprit is different tokenization and normalization schemes applied to the
reference. Pointing to the success of the parsing community, I suggest machine
translation researchers settle upon the BLEU scheme used by the annual
Conference on Machine Translation (WMT), which does not allow for user-supplied
reference processing, and provide a new tool, SacreBLEU, to facilitate this.
| 2,018 | Computation and Language |
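As a usage note for the tool mentioned above, the snippet below computes a corpus-level BLEU score with SacreBLEU's Python API, which applies its own reference processing so scores are comparable across papers. The call shown follows the library's documented corpus_bleu interface; exact output formatting may vary by version.

```python
# Computing a reproducible BLEU score with SacreBLEU's Python API.
import sacrebleu

hypotheses = ["the cat sat on the mat", "he read the book"]
# one list per reference set, each aligned with the hypotheses
refs_a = ["the cat is on the mat", "he was reading the book"]
refs_b = ["there is a cat on the mat", "he read a book"]

bleu = sacrebleu.corpus_bleu(hypotheses, [refs_a, refs_b])
print(bleu.score)   # a single, reproducible corpus-level BLEU value
print(bleu)         # verbose form with n-gram precisions and brevity penalty
```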
SimpleQuestions Nearly Solved: A New Upperbound and Baseline Approach | The SimpleQuestions dataset is one of the most commonly used benchmarks for
studying single-relation factoid questions. In this paper, we present new
evidence that this benchmark can be nearly solved by standard methods. First we
show that ambiguity in the data bounds performance on this benchmark at 83.4%;
there are often multiple answers that cannot be disambiguated from the
linguistic signal alone. Second we introduce a baseline that sets a new
state-of-the-art performance level at 78.1% accuracy, despite using standard
methods. Finally, we report an empirical analysis showing that the upperbound
is loose; roughly a third of the remaining errors are also not resolvable from
the linguistic signal. Together, these results suggest that the SimpleQuestions
dataset is nearly solved.
| 2,018 | Computation and Language |
End-Task Oriented Textual Entailment via Deep Explorations of
Inter-Sentence Interactions | This work deals with SciTail, a natural entailment challenge derived from a
multi-choice question answering problem. The premises and hypotheses in SciTail
were generated with no awareness of each other, and did not specifically aim at
the entailment task. This makes it more challenging than other entailment data
sets and more directly useful to the end-task -- question answering. We propose
DEISTE (deep explorations of inter-sentence interactions for textual
entailment) for this entailment task. Given word-to-word interactions between
the premise-hypothesis pair ($P$, $H$), DEISTE consists of: (i) a
parameter-dynamic convolution to make important words in $P$ and $H$ play a
dominant role in learnt representations; and (ii) a position-aware attentive
convolution to encode the representation and position information of the
aligned word pairs. Experiments show that DEISTE gets $\approx$5\% improvement
over prior state of the art and that the pretrained DEISTE on SciTail
generalizes well on RTE-5.
| 2,018 | Computation and Language |
Integrating Multiplicative Features into Supervised Distributional
Methods for Lexical Entailment | Supervised distributional methods are applied successfully in lexical
entailment, but recent work questioned whether these methods actually learn a
relation between two words. Specifically, Levy et al. (2015) claimed that
linear classifiers learn only separate properties of each word. We suggest a
cheap and easy way to boost the performance of these methods by integrating
multiplicative features into commonly used representations. We provide an
extensive evaluation with different classifiers and evaluation setups, and
suggest a suitable evaluation setup for the task, eliminating biases existing
in previous ones.
| 2,018 | Computation and Language |
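A minimal sketch of the multiplicative-feature idea from the entry above: a candidate pair (x, y) is represented by concatenating the two word vectors with their element-wise product before a linear classifier. The vectors and labels are random placeholders, not a real lexical entailment dataset.

```python
# Minimal sketch of integrating multiplicative features for lexical entailment:
# a candidate pair (x, y) is represented as [x; y; x * y] before a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
dim, n_pairs = 50, 200
x_vecs = rng.randn(n_pairs, dim)          # e.g. embeddings of candidate hyponyms
y_vecs = rng.randn(n_pairs, dim)          # e.g. embeddings of candidate hypernyms
labels = rng.randint(0, 2, size=n_pairs)  # 1 = entailment holds, 0 = it does not

def pair_features(x, y):
    # concatenation alone lets a linear model learn only per-word properties;
    # the element-wise product adds an explicit interaction between the two words
    return np.hstack([x, y, x * y])

features = pair_features(x_vecs, y_vecs)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))
```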
DeepEmo: Learning and Enriching Pattern-Based Emotion Representations | We propose a graph-based mechanism to extract from a corpus rich
emotion-bearing patterns, which fosters a deeper analysis of online emotional
expressions. The patterns are then enriched with word embeddings and evaluated
through several emotion recognition tasks. Moreover, we conduct analysis of the
emotion-oriented patterns to demonstrate their applicability and to explore their
properties. Our experimental results demonstrate that the proposed techniques
outperform most state-of-the-art emotion recognition techniques.
| 2,018 | Computation and Language |
Data-driven Summarization of Scientific Articles | Data-driven approaches to sequence-to-sequence modelling have been
successfully applied to short text summarization of news articles. Such models
are typically trained on input-summary pairs consisting of only a single or a
few sentences, partially due to limited availability of multi-sentence training
data. Here, we propose to use scientific articles as a new milestone for text
summarization: large-scale training data come almost for free with two types of
high-quality summaries at different levels - the title and the abstract. We
generate two novel multi-sentence summarization datasets from scientific
articles and test the suitability of a wide range of existing extractive and
abstractive neural network-based summarization approaches. Our analysis
demonstrates that scientific papers are suitable for data-driven text
summarization. Our results could serve as valuable benchmarks for scaling
sequence-to-sequence models to very long sequences.
| 2,018 | Computation and Language |
Assessing Language Models with Scaling Properties | Language models have primarily been evaluated with perplexity. While
perplexity quantifies the most comprehensible prediction performance, it does
not provide qualitative information on the success or failure of models.
Another approach for evaluating language models is thus proposed, using the
scaling properties of natural language. Five such tests are considered, with
the first two accounting for the vocabulary population and the other three for
the long memory of natural language. The following models were evaluated with
these tests: n-grams, probabilistic context-free grammar (PCFG), Simon and
Pitman-Yor (PY) processes, hierarchical PY, and neural language models. Only
the neural language models exhibit the long memory properties of natural
language, but to a limited degree. The effectiveness of every test of these
models is also discussed.
| 2,018 | Computation and Language |
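One of the vocabulary-population properties such tests rely on can be probed with a few lines of code. The sketch below is an assumed, simplified check of vocabulary growth (Heaps'-law-style scaling) over a token stream; it is not the paper's exact test, and the toy text is a placeholder for model-generated or natural corpora.

```python
def vocabulary_growth(tokens, step=1000):
    """Return (text length, vocabulary size) pairs for a token sequence.

    For natural language the curve typically follows Heaps' law,
    V(n) ~ K * n**beta with beta < 1; text sampled from a language model
    can be compared against this scaling behaviour.
    """
    seen, curve = set(), []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        if i % step == 0:
            curve.append((i, len(seen)))
    return curve

# Toy usage; with real or generated text the curve bends like a power law,
# whereas this repetitive string saturates almost immediately.
text = ("the cat sat on the mat " * 2000).split()
for n, v in vocabulary_growth(text, step=4000):
    print(n, v)
```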
SIRIUS-LTG-UiO at SemEval-2018 Task 7: Convolutional Neural Networks
with Shortest Dependency Paths for Semantic Relation Extraction and
Classification in Scientific Papers | This article presents the SIRIUS-LTG-UiO system for the SemEval 2018 Task 7
on Semantic Relation Extraction and Classification in Scientific Papers. First
we extract the shortest dependency path (sdp) between two entities, then we
introduce a convolutional neural network (CNN) which takes the shortest
dependency path embeddings as input and performs relation classification with
differing objectives for each subtask of the shared task. This approach
achieved overall F1 scores of 76.7 and 83.2 for relation classification on
clean and noisy data, respectively. Furthermore, for combined relation
extraction and classification on clean data, it obtained F1 scores of 37.4 and
33.6 for each phase. Our system ranks 3rd in all three sub-tasks of the shared
task.
| 2,018 | Computation and Language |
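As a sketch of the first step the system description mentions, the snippet below recovers a shortest dependency path between two token positions from a head array; the head-array representation and the toy parse are assumptions for illustration, not the team's actual preprocessing code.

```python
def shortest_dependency_path(heads, a, b):
    """Shortest dependency path between tokens a and b.

    `heads[i]` is the index of token i's syntactic head, with -1 for the root.
    The path goes from a up to the lowest common ancestor, then down to b.
    """
    def ancestors(i):
        chain = [i]
        while heads[i] != -1:
            i = heads[i]
            chain.append(i)
        return chain

    up_a, up_b = ancestors(a), ancestors(b)
    common = set(up_a) & set(up_b)
    lca = next(tok for tok in up_a if tok in common)
    down_b = list(reversed(up_b[:up_b.index(lca)]))
    return up_a[:up_a.index(lca) + 1] + down_b

# Toy parse: "show" is the root, "results" and "improvement" depend on it,
# and "significant" depends on "improvement".
tokens = ["show", "results", "significant", "improvement"]
heads = [-1, 0, 3, 0]
print([tokens[i] for i in shortest_dependency_path(heads, 1, 2)])
# ['results', 'show', 'improvement', 'significant']
```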
Scheduled Multi-Task Learning: From Syntax to Translation | Neural encoder-decoder models of machine translation have achieved impressive
results, while learning linguistic knowledge of both the source and target
languages in an implicit end-to-end manner. We propose a framework in which our
model begins learning syntax and translation interleaved, gradually putting
more focus on translation. Using this approach, we achieve considerable
improvements in terms of BLEU score on a relatively large parallel corpus (WMT14
English to German) and in a low-resource setup (WIT German to English).
| 2,018 | Computation and Language |
Style Transfer Through Back-Translation | Style transfer is the task of rephrasing the text to contain specific
stylistic properties without changing the intent or affect within the context.
This paper introduces a new method for automatic style transfer. We first learn
a latent representation of the input sentence which is grounded in a language
translation model in order to better preserve the meaning of the sentence while
reducing stylistic properties. Then adversarial generation techniques are used
to make the output match the desired style. We evaluate this technique on three
different style transformations: sentiment, gender and political slant.
Compared to two state-of-the-art style transfer modeling techniques we show
improvements both in automatic evaluation of style transfer and in manual
evaluation of meaning preservation and fluency.
| 2,018 | Computation and Language |
Towards a Neural Network Approach to Abstractive Multi-Document
Summarization | Till now, neural abstractive summarization methods have achieved great
success for single document summarization (SDS). However, due to the lack of
large scale multi-document summaries, such methods can be hardly applied to
multi-document summarization (MDS). In this paper, we investigate neural
abstractive methods for MDS by adapting a state-of-the-art neural abstractive
summarization model for SDS. We propose an approach to extend the neural
abstractive model trained on large scale SDS data to the MDS task. Our approach
only makes use of a small number of multi-document summaries for fine tuning.
Experimental results on two benchmark DUC datasets demonstrate that our
approach can outperform a variety of baseline neural models.
| 2,018 | Computation and Language |
Label-aware Double Transfer Learning for Cross-Specialty Medical Named
Entity Recognition | We study the problem of named entity recognition (NER) from electronic
medical records, which is one of the most fundamental and critical problems for
medical text mining. Medical records which are written by clinicians from
different specialties usually contain quite different terminologies and writing
styles. The differences between specialties and the cost of human annotation make it
particularly difficult to train a universal medical NER system. In this paper,
we propose a label-aware double transfer learning framework (La-DTL) for
cross-specialty NER, so that a medical NER system designed for one specialty
could be conveniently applied to another one with minimal annotation efforts.
The transferability is guaranteed by two components: (i) we propose label-aware
MMD for feature representation transfer, and (ii) we perform parameter transfer
with a theoretical upper bound which is also label aware. We conduct extensive
experiments on 12 cross-specialty NER tasks. The experimental results
demonstrate that La-DTL provides consistent accuracy improvement over strong
baselines. Besides, the promising experimental results on non-medical NER
scenarios indicate that La-DTL has the potential to be seamlessly adapted to a wide
range of NER tasks.
| 2,018 | Computation and Language |
Unsupervised Neural Machine Translation with Weight Sharing | Unsupervised neural machine translation (NMT) is a recently proposed approach
for machine translation which aims to train the model without using any labeled
data. The models proposed for unsupervised NMT often use only one shared
encoder to map the pairs of sentences from different languages to a
shared-latent space, which is weak in keeping the unique and internal
characteristics of each language, such as the style, terminology, and sentence
structure. To address this issue, we introduce an extension by utilizing two
independent encoders but sharing some partial weights which are responsible for
extracting high-level representations of the input sentences. Besides, two
different generative adversarial networks (GANs), namely the local GAN and
global GAN, are proposed to enhance the cross-language translation. With this
new approach, we achieve significant improvements on English-German,
English-French and Chinese-to-English translation tasks.
| 2,018 | Computation and Language |
A Report on the Complex Word Identification Shared Task 2018 | We report the findings of the second Complex Word Identification (CWI) shared
task organized as part of the BEA workshop co-located with NAACL-HLT'2018. The
second CWI shared task featured multilingual and multi-genre datasets divided
into four tracks: English monolingual, German monolingual, Spanish monolingual,
and a multilingual track with a French test set, and two tasks: binary
classification and probabilistic classification. A total of 12 teams submitted
their results in different task/track combinations and 11 of them wrote system
description papers that are referred to in this report and appear in the BEA
workshop proceedings.
| 2,018 | Computation and Language |
Automated Detection of Adverse Drug Reactions in the Biomedical
Literature Using Convolutional Neural Networks and Biomedical Word Embeddings | Monitoring the biomedical literature for cases of Adverse Drug Reactions
(ADRs) is a critically important and time consuming task in pharmacovigilance.
The development of computer assisted approaches to aid this process in
different forms has been the subject of many recent works. One particular area
that has shown promise is the use of Deep Neural Networks, in particular,
Convolutional Neural Networks (CNNs), for the detection of ADR relevant
sentences. Using token-level convolutions and general purpose word embeddings,
this architecture has shown good performance relative to more traditional
models as well as Long Short Term Memory (LSTM) models. In this work, we
evaluate and compare two different CNN architectures using the ADE corpus. In
addition, we show that by de-duplicating the ADR relevant sentences, we can
greatly reduce overoptimism in the classification results. Finally, we evaluate
the use of word embeddings specifically developed for biomedical text and show
that they lead to a better performance in this task.
| 2,018 | Computation and Language |
No Metrics Are Perfect: Adversarial Reward Learning for Visual
Storytelling | Though impressive results have been achieved in visual captioning, the task
of generating abstract stories from photo streams is still a little-tapped
problem. Different from captions, stories have more expressive language styles
and contain many imaginary concepts that do not appear in the images. Thus it
poses challenges to behavioral cloning algorithms. Furthermore, due to the
limitations of automatic metrics on evaluating story quality, reinforcement
learning methods with hand-crafted rewards also face difficulties in gaining an
overall performance boost. Therefore, we propose an Adversarial REward Learning
(AREL) framework to learn an implicit reward function from human
demonstrations, and then optimize policy search with the learned reward
function. Though automatic evaluation indicates a slight performance boost over
state-of-the-art (SOTA) methods in cloning expert behaviors, human evaluation
shows that our approach achieves significant improvement in generating more
human-like stories than SOTA systems.
| 2,018 | Computation and Language |
Commonsense mining as knowledge base completion? A study on the impact
of novelty | Commonsense knowledge bases such as ConceptNet represent knowledge in the
form of relational triples. Inspired by the recent work by Li et al., we
analyse if knowledge base completion models can be used to mine commonsense
knowledge from raw text. We propose novelty of predicted triples with respect
to the training set as an important factor in interpreting results. We
critically analyse the difficulty of mining novel commonsense knowledge, and
show that a simple baseline method outperforms the previous state of the art on
predicting more novel triples.
| 2,018 | Computation and Language |
Seq2Seq-Vis: A Visual Debugging Tool for Sequence-to-Sequence Models | Neural Sequence-to-Sequence models have proven to be accurate and robust for
many sequence prediction tasks, and have become the standard approach for
automatic translation of text. The models work in a five stage blackbox process
that involves encoding a source sequence to a vector space and then decoding
out to a new target sequence. This process is now standard, but like many deep
learning methods remains quite difficult to understand or debug. In this work,
we present a visual analysis tool that allows interaction with a trained
sequence-to-sequence model through each stage of the translation process. The
aim is to identify which patterns have been learned and to detect model errors.
We demonstrate the utility of our tool through several real-world large-scale
sequence-to-sequence use cases.
| 2,018 | Computation and Language |
Gender Bias in Coreference Resolution | We present an empirical study of gender bias in coreference resolution
systems. We first introduce a novel, Winograd schema-style set of minimal pair
sentences that differ only by pronoun gender. With these "Winogender schemas,"
we evaluate and confirm systematic gender bias in three publicly-available
coreference resolution systems, and correlate this bias with real-world and
textual gender statistics.
| 2,018 | Computation and Language |
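A hedged sketch of how minimal-pair sentences of this kind can be instantiated from a template: the template, occupations, and pronoun set below are invented for illustration and are not the actual Winogender schemas.

```python
# Hypothetical template in the spirit of the minimal pairs described above:
# each sentence differs only in the pronoun, so any difference in a
# coreference system's output on the pair signals gender bias.
TEMPLATE = "The {occupation} told the customer that {pronoun} could pay with cash."
PRONOUNS = {"male": "he", "female": "she", "neutral": "they"}

def minimal_pairs(occupations):
    for occ in occupations:
        yield {g: TEMPLATE.format(occupation=occ, pronoun=p)
               for g, p in PRONOUNS.items()}

for pair in minimal_pairs(["mechanic", "nurse"]):
    print(pair["male"])
    print(pair["female"])
```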
Hierarchical RNN for Information Extraction from Lawsuit Documents | Every lawsuit document contains information about the party's claim, the
court's analysis, the decision and more, and all of this information is helpful
for understanding the case better and predicting the judge's decision on similar
cases in the future. However, extracting this information from the document is
difficult because the language is complicated and the sentences vary in
length. We treat this problem as a task of sequence labeling, and this paper
presents the first research to extract relevant information from the civil
lawsuit document in China with the hierarchical RNN framework.
| 2,018 | Computation and Language |
Strong Baselines for Neural Semi-supervised Learning under Domain Shift | Novel neural models have been proposed in recent years for learning under
domain shift. Most models, however, only evaluate on a single task, on
proprietary datasets, or compare to weak baselines, which makes comparison of
models difficult. In this paper, we re-evaluate classic general-purpose
bootstrapping approaches in the context of neural networks under domain shifts
vs. recent neural approaches and propose a novel multi-task tri-training method
that reduces the time and space complexity of classic tri-training. Extensive
experiments on two benchmarks are negative: while our novel method establishes
a new state-of-the-art for sentiment analysis, it does not fare consistently
the best. More importantly, we arrive at the somewhat surprising conclusion
that classic tri-training, with some additions, outperforms the state of the
art. We conclude that classic approaches constitute an important and strong
baseline.
| 2,018 | Computation and Language |
NE-Table: A Neural key-value table for Named Entities | Many Natural Language Processing (NLP) tasks depend on using Named Entities
(NEs) that are contained in texts and in external knowledge sources. While this
is easy for humans, the present neural methods that rely on learned word
embeddings may not perform well for these NLP tasks, especially in the presence
of Out-Of-Vocabulary (OOV) or rare NEs. In this paper, we propose a solution
for this problem, and present empirical evaluations on: a) a structured
Question-Answering task, b) three related Goal-Oriented dialog tasks, and c) a
Reading-Comprehension task, which show that the proposed method can be
effective in dealing with both in-vocabulary and OOV NEs. We create extended
versions of dialog bAbI tasks 1, 2 and 4 and OOV versions of the CBT test set,
available at https://github.com/IBM/ne-table-datasets.
| 2,019 | Computation and Language |
QANet: Combining Local Convolution with Global Self-Attention for
Reading Comprehension | Current end-to-end machine reading and question answering (Q\&A) models are
primarily based on recurrent neural networks (RNNs) with attention. Despite
their success, these models are often slow for both training and inference due
to the sequential nature of RNNs. We propose a new Q\&A architecture called
QANet, which does not require recurrent networks: Its encoder consists
exclusively of convolution and self-attention, where convolution models local
interactions and self-attention models global interactions. On the SQuAD
dataset, our model is 3x to 13x faster in training and 4x to 9x faster in
inference, while achieving equivalent accuracy to recurrent models. The
speed-up gain allows us to train the model with much more data. We hence
combine our model with data generated by backtranslation from a neural machine
translation model. On the SQuAD dataset, our single model, trained with
augmented data, achieves 84.6 F1 score on the test set, which is significantly
better than the best published F1 score of 81.8.
| 2,018 | Computation and Language |
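The contrast the abstract draws between local convolution and global self-attention can be made concrete with a small numpy example. This is a generic single-head scaled dot-product self-attention sketch, not the QANet encoder itself; the weight matrices are random stand-ins.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X (n x d).

    Every position attends to every other position, so interactions are global
    within a single layer, whereas a convolution's receptive field is bounded
    by its kernel width.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 6, 8
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (6, 8)
```

Because there is no recurrence along the sequence, all positions can be processed in parallel, which is one plausible reading of the training and inference speed-ups reported above.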
The Future of Prosody: It's about Time | Prosody is usually defined in terms of the three distinct but interacting
domains of pitch, intensity and duration patterning, or, more generally, as
phonological and phonetic properties of 'suprasegmentals', speech segments
which are larger than consonants and vowels. Rather than taking this approach,
the concept of multiple time domains for prosody processing is taken up, and
methods of time domain analysis are discussed: annotation mining with timing
dispersion measures, time tree induction, oscillator models in phonology and
phonetics, and finally the use of the Amplitude Envelope Modulation Spectrum
(AEMS). While frequency demodulation (in the form of pitch tracking) is a
central issue in prosodic analysis, in the present context the focus is on amplitude
envelope demodulation and on frequency zones in the long time-domain spectra of
the demodulated envelope. A generalised view is taken of
oscillation as iteration in abstract prosodic models and as modulation and
demodulation of a variety of rhythms in the speech signal.
| 2,018 | Computation and Language |
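A rough sketch of computing an amplitude envelope modulation spectrum, assuming scipy is available; the Hilbert-envelope approach and the toy amplitude-modulated signal below are illustrative choices rather than the author's exact procedure.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_envelope_modulation_spectrum(signal, sample_rate):
    """Demodulate the amplitude envelope of a signal and return its spectrum.

    The envelope is taken as the magnitude of the analytic signal; its
    low-frequency spectrum exposes rhythmic modulation zones (e.g. syllable-
    and phrase-rate oscillations) rather than segmental detail.
    """
    envelope = np.abs(hilbert(signal))
    envelope -= envelope.mean()                 # remove the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    return freqs, spectrum

# Toy usage: a 4 Hz amplitude modulation of a 200 Hz carrier.
sr = 16000
t = np.arange(0, 2.0, 1.0 / sr)
x = (1.0 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spec = amplitude_envelope_modulation_spectrum(x, sr)
print(freqs[np.argmax(spec)])  # close to 4 Hz
```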
Automatic speech recognition for launch control center communication
using recurrent neural networks with data augmentation and custom language
model | Transcribing voice communications in NASA's launch control center is
important for information utilization. However, automatic speech recognition in
this environment is particularly challenging due to the lack of training data,
unfamiliar words in acronyms, multiple different speakers and accents, and
conversational characteristics of speaking. We used bidirectional deep
recurrent neural networks to train and test speech recognition performance. We
showed that data augmentation and custom language models can improve speech
recognition accuracy. Transcribing communications from the launch control
center will help the machine analyze information and accelerate knowledge
generation.
| 2,018 | Computation and Language |
A Visual Distance for WordNet | Measuring the distance between concepts is an important field of study of
Natural Language Processing, as it can be used to improve tasks related to the
interpretation of those same concepts. WordNet, which includes a wide variety
of concepts associated with words (i.e., synsets), is often used as a source
for computing those distances. In this paper, we explore a distance for WordNet
synsets based on visual features, instead of lexical ones. For this purpose, we
extract the graphic features generated within a deep convolutional neural
networks trained with ImageNet and use those features to generate a
representative of each synset. Based on those representatives, we define a
distance measure of synsets, which complements the traditional lexical
distances. Finally, we propose some experiments to evaluate its performance and
compare it with the current state-of-the-art.
| 2,018 | Computation and Language |
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP
Applications | Peer reviewing is a central component in the scientific publishing process.
We present the first public dataset of scientific peer reviews available for
research purposes (PeerRead v1) providing an opportunity to study this
important artifact. The dataset consists of 14.7K paper drafts and the
corresponding accept/reject decisions in top-tier venues including ACL, NIPS
and ICLR. The dataset also includes 10.7K textual peer reviews written by
experts for a subset of the papers. We describe the data collection process and
report interesting observed phenomena in the peer reviews. We also propose two
novel NLP tasks based on this dataset and provide simple baseline models. In
the first task, we show that simple models can predict whether a paper is
accepted with up to 21% error reduction compared to the majority baseline. In
the second task, we predict the numerical scores of review aspects and show
that simple models can outperform the mean baseline for aspects with high
variance such as 'originality' and 'impact'.
| 2,018 | Computation and Language |
Personalized Language Model for Query Auto-Completion | Query auto-completion is a search engine feature whereby the system suggests
completed queries as the user types. Recently, the use of a recurrent neural
network language model was suggested as a method of generating query
completions. We show how an adaptable language model can be used to generate
personalized completions and how the model can use online updating to make
predictions for users not seen during training. The personalized predictions
are significantly better than a baseline that uses no user information.
| 2,018 | Computation and Language |
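A deliberately simplified sketch of the personalisation-with-online-updating idea, using blended per-user and global prefix counts instead of a recurrent language model; the class name, blending weight, and toy queries are all invented for illustration.

```python
from collections import defaultdict

class PersonalizedCompleter:
    """Toy stand-in for a personalised query completion model.

    A global table is blended with a per-user table that is updated online,
    so users unseen during training still receive personalised suggestions
    as soon as they issue a few queries.
    """
    def __init__(self, user_weight=0.7):
        self.global_counts = defaultdict(int)
        self.user_counts = defaultdict(lambda: defaultdict(int))
        self.user_weight = user_weight

    def observe(self, user, query):
        self.global_counts[query] += 1
        self.user_counts[user][query] += 1      # online per-user update

    def complete(self, user, prefix, k=3):
        def score(q):
            return (self.user_weight * self.user_counts[user][q]
                    + (1 - self.user_weight) * self.global_counts[q])
        candidates = [q for q in self.global_counts if q.startswith(prefix)]
        return sorted(candidates, key=score, reverse=True)[:k]

qac = PersonalizedCompleter()
for q in ["neural machine translation", "neural network pruning", "news today"]:
    qac.observe("alice", q)
qac.observe("bob", "news today")
print(qac.complete("bob", "ne"))  # bob's own history ranks "news today" first
```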
Factors Influencing the Surprising Instability of Word Embeddings | Despite the recent popularity of word embedding methods, there is only a
small body of work exploring the limitations of these representations. In this
paper, we consider one aspect of embedding spaces, namely their stability. We
show that even relatively high frequency words (100-200 occurrences) are often
unstable. We provide empirical evidence for how various factors contribute to
the stability of word embeddings, and we analyze the effects of stability on
downstream tasks.
| 2,018 | Computation and Language |
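Stability in this line of work is often operationalised as the overlap of a word's nearest neighbours across embedding spaces trained under different conditions; the numpy sketch below uses that plausible (but assumed) operationalisation, with random matrices standing in for trained embeddings that share a vocabulary indexing.

```python
import numpy as np

def nearest_neighbors(emb, word_index, k=10):
    """Indices of the k nearest neighbours of a word under cosine similarity."""
    normed = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = normed @ normed[word_index]
    sims[word_index] = -np.inf                   # exclude the word itself
    return set(np.argsort(-sims)[:k])

def stability(emb_a, emb_b, word_index, k=10):
    """Overlap (0..1) of the word's neighbour sets in two embedding spaces."""
    na = nearest_neighbors(emb_a, word_index, k)
    nb = nearest_neighbors(emb_b, word_index, k)
    return len(na & nb) / k

rng = np.random.default_rng(0)
vocab, dim = 500, 50
emb_run1 = rng.normal(size=(vocab, dim))
emb_run2 = emb_run1 + 0.1 * rng.normal(size=(vocab, dim))  # a perturbed "second run"
print(stability(emb_run1, emb_run2, word_index=42))
```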
TypeSQL: Knowledge-based Type-Aware Neural Text-to-SQL Generation | Interacting with relational databases through natural language helps users of
any background easily query and analyze a vast amount of data. This requires a
system that understands users' questions and converts them to SQL queries
automatically. In this paper we present a novel approach, TypeSQL, which views
this problem as a slot filling task. Additionally, TypeSQL utilizes type
information to better understand rare entities and numbers in natural language
questions. We test this idea on the WikiSQL dataset and outperform the prior
state-of-the-art by 5.5% in much less time. We also show that accessing the
content of databases can significantly improve the performance when users'
queries are not well-formed. TypeSQL gets 82.6% accuracy, a 17.5% absolute
improvement compared to the previous content-sensitive model.
| 2,018 | Computation and Language |
On the Evaluation of Semantic Phenomena in Neural Machine Translation
Using Natural Language Inference | We propose a process for investigating the extent to which sentence
representations arising from neural machine translation (NMT) systems encode
distinct semantic phenomena. We use these representations as features to train
a natural language inference (NLI) classifier based on datasets recast from
existing semantic annotations. In applying this process to a representative NMT
system, we find its encoder appears most suited to supporting inferences at the
syntax-semantics interface, as compared to anaphora resolution requiring
world-knowledge. We conclude with a discussion on the merits and potential
deficiencies of the existing process, and how it may be improved and extended
as a broader framework for evaluating semantic coverage.
| 2,018 | Computation and Language |
Hierarchical Density Order Embeddings | By representing words with probability densities rather than point vectors,
probabilistic word embeddings can capture rich and interpretable semantic
information and uncertainty. The uncertainty information can be particularly
meaningful in capturing entailment relationships -- whereby general words such
as "entity" correspond to broad distributions that encompass more specific
words such as "animal" or "instrument". We introduce density order embeddings,
which learn hierarchical representations through encapsulation of probability
densities. In particular, we propose simple yet effective loss functions and
distance metrics, as well as graph-based schemes to select negative samples to
better learn hierarchical density representations. Our approach provides
state-of-the-art performance on the WordNet hypernym relationship prediction
task and the challenging HyperLex lexical entailment dataset -- while retaining
a rich and interpretable density representation.
| 2,018 | Computation and Language |
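A worked sketch of the kind of asymmetric score density embeddings afford: the closed-form KL divergence between diagonal Gaussians, which is small when a specific word's narrow density sits inside a general word's broad one. This is the standard Gaussian KL, shown for intuition; it is not necessarily the exact divergence or loss used in the paper.

```python
import numpy as np

def kl_diag_gaussians(mu_p, var_p, mu_q, var_q):
    """KL(p || q) for diagonal Gaussians p = N(mu_p, var_p), q = N(mu_q, var_q).

    Asymmetric by construction: a narrow "specific" density nested inside a
    broad "general" density yields a small KL, while the reverse direction is
    penalised, which is what makes the score usable for entailment.
    """
    return 0.5 * np.sum(
        np.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0
    )

dim = 4
mu_animal, var_animal = np.zeros(dim), np.full(dim, 4.0)   # broad, general
mu_dog, var_dog = np.full(dim, 0.5), np.full(dim, 0.5)     # narrow, specific

print(kl_diag_gaussians(mu_dog, var_dog, mu_animal, var_animal))  # smaller
print(kl_diag_gaussians(mu_animal, var_animal, mu_dog, var_dog))  # larger
```

The asymmetry is the point: a symmetric distance between point vectors could not, by itself, say which of two words is the more general one.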
The Best of Both Worlds: Combining Recent Advances in Neural Machine
Translation | The past year has witnessed rapid advances in sequence-to-sequence (seq2seq)
modeling for Machine Translation (MT). The classic RNN-based approaches to MT
were first out-performed by the convolutional seq2seq model, which was then
out-performed by the more recent Transformer model. Each of these new
approaches consists of a fundamental architecture accompanied by a set of
modeling and training techniques that are in principle applicable to other
seq2seq architectures. In this paper, we tease apart the new architectures and
their accompanying techniques in two ways. First, we identify several key
modeling and training techniques, and apply them to the RNN architecture,
yielding a new RNMT+ model that outperforms all of the three fundamental
architectures on the benchmark WMT'14 English to French and English to German
tasks. Second, we analyze the properties of each fundamental seq2seq
architecture and devise new hybrid architectures intended to combine their
strengths. Our hybrid models obtain further improvements, outperforming the
RNMT+ model on both benchmark datasets.
| 2,018 | Computation and Language |
Integrating Local Context and Global Cohesiveness for Open Information
Extraction | Extracting entities and their relations from text is an important task for
understanding massive text corpora. Open information extraction (IE) systems
mine relation tuples (i.e., entity arguments and a predicate string to describe
their relation) from sentences. These relation tuples are not confined to a
predefined schema for the relations of interest. However, current Open IE
systems focus on modeling local context information in a sentence to extract
relation tuples, while ignoring the fact that global statistics in a large
corpus can be collectively leveraged to identify high-quality sentence-level
extractions. In this paper, we propose a novel Open IE system, called ReMine,
which integrates local context signals and global structural signals in a
unified, distant-supervision framework. Leveraging facts from external
knowledge bases as supervision, the new system can be applied to many different
domains to facilitate sentence-level tuple extractions using corpus-level
statistics. Our system operates by solving a joint optimization problem to
unify (1) segmenting entity/relation phrases in individual sentences based on
local context; and (2) measuring the quality of tuples extracted from
individual sentences with a translating-based objective. Learning the two
subtasks jointly helps correct errors produced in each subtask so that they can
mutually enhance each other. Experiments on two real-world corpora from
different domains demonstrate the effectiveness, generality, and robustness of
ReMine when compared to state-of-the-art open IE systems.
| 2,018 | Computation and Language |
Lessons from the Bible on Modern Topics: Low-Resource Multilingual Topic
Model Evaluation | Multilingual topic models enable document analysis across languages through
coherent multilingual summaries of the data. However, there is no standard and
effective metric to evaluate the quality of multilingual topics. We introduce a
new intrinsic evaluation of multilingual topic models that correlates well with
human judgments of multilingual topic coherence as well as performance in
downstream applications. Importantly, we also study evaluation for low-resource
languages. Because standard metrics fail to accurately measure topic quality
when robust external resources are unavailable, we propose an adaptation model
that improves the accuracy and reliability of these metrics in low-resource
settings.
| 2,018 | Computation and Language |
Extracting Parallel Paragraphs from Common Crawl | Most of the current methods for mining parallel texts from the web assume
that web pages of web sites share same structure across languages. We believe
that there still exists a non-negligible amount of parallel data spread across
sources not satisfying this assumption. We propose an approach based on a
combination of bivec (a bilingual extension of word2vec) and locality-sensitive
hashing which allows us to efficiently identify pairs of parallel segments
located anywhere on pages of a given web domain, regardless of their structure. We
validate our method on realigning segments from a large parallel corpus.
Another experiment with real-world data provided by Common Crawl Foundation
confirms that our solution scales to hundreds of terabytes large set of
web-crawled data.
| 2,017 | Computation and Language |
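A brief sketch of the locality-sensitive hashing half of such a pipeline: random-hyperplane signatures bucket segment vectors so that only segments sharing a bucket need pairwise comparison, which is what makes the approach scale. The bivec embeddings are mocked here by random vectors, and the bit width is an illustrative choice.

```python
import numpy as np
from collections import defaultdict

def lsh_buckets(vectors, n_bits=16, seed=0):
    """Bucket vectors by a random-hyperplane signature (cosine LSH).

    Vectors with the same sign pattern against `n_bits` random hyperplanes
    land in the same bucket; candidate parallel segments are then only those
    pairs that share a bucket, instead of all pairwise comparisons.
    """
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(n_bits, vectors.shape[1]))
    signs = (vectors @ planes.T) > 0
    buckets = defaultdict(list)
    for idx, bits in enumerate(signs):
        buckets[bits.tobytes()].append(idx)
    return buckets

# Mock segment embeddings; near-duplicates should mostly share a bucket.
rng = np.random.default_rng(1)
base = rng.normal(size=(100, 64))
vectors = np.vstack([base, base + 0.01 * rng.normal(size=base.shape)])
buckets = lsh_buckets(vectors, n_bits=8)
print(sum(1 for b in buckets.values() if len(b) > 1), "buckets with candidates")
```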
Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading | This paper aims at improving how machines can answer questions directly from
text, with the focus of having models that can answer correctly multiple types
of questions and from various types of texts, documents or even from large
collections of them. To that end, we introduce the Weaver model that uses a new
way to relate a question to a textual context by weaving layers of recurrent
networks, with the goal of making as few assumptions as possible as to how the
information from both question and context should be combined to form the
answer. We show empirically on six datasets that Weaver performs well in
multiple conditions. For instance, it produces solid results on the very
popular SQuAD dataset (Rajpurkar et al., 2016), solves almost all bAbI tasks
(Weston et al., 2015) and greatly outperforms state-of-the-art methods for open
domain question answering from text (Chen et al., 2017).
| 2,018 | Computation and Language |
Improving Coverage and Runtime Complexity for Exact Inference in
Non-Projective Transition-Based Dependency Parsers | We generalize Cohen, G\'omez-Rodr\'iguez, and Satta's (2011) parser to a
family of non-projective transition-based dependency parsers allowing
polynomial-time exact inference. This includes novel parsers with better
coverage than Cohen et al. (2011), and even a variant that reduces time
complexity to $O(n^6)$, improving over the known bounds in exact inference for
non-projective transition-based parsing. We hope that this piece of theoretical
work inspires design of novel transition systems with better coverage and
better run-time guarantees.
Code available at https://github.com/tzshi/nonproj-dp-variants-naacl2018
| 2,018 | Computation and Language |
Improving Entity Linking by Modeling Latent Relations between Mentions | Entity linking involves aligning textual mentions of named entities to their
corresponding entries in a knowledge base. Entity linking systems often exploit
relations between textual mentions in a document (e.g., coreference) to decide
if the linking decisions are compatible. Unlike previous approaches, which
relied on supervised systems or heuristics to predict these relations, we treat
relations as latent variables in our neural entity-linking model. We induce the
relations without any supervision while optimizing the entity-linking system in
an end-to-end fashion. Our multi-relational model achieves the best reported
scores on the standard benchmark (AIDA-CoNLL) and substantially outperforms its
relation-agnostic version. Its training also converges much faster, suggesting
that the injected structural bias helps to explain regularities in the training
data.
| 2,018 | Computation and Language |
An Unsupervised Word Sense Disambiguation System for Under-Resourced
Languages | In this paper, we present Watasense, an unsupervised system for word sense
disambiguation. Given a sentence, the system chooses the most relevant sense of
each input word with respect to the semantic similarity between the given
sentence and the synset constituting the sense of the target word. Watasense
has two modes of operation. The sparse mode uses the traditional vector space
model to estimate the most similar word sense corresponding to its context. The
dense mode, instead, uses synset embeddings to cope with the sparsity problem.
We describe the architecture of the present system and also conduct its
evaluation on three different lexical semantic resources for Russian. We found
that the dense mode substantially outperforms the sparse one on all datasets
according to the adjusted Rand index.
| 2,018 | Computation and Language |
Sentiment Adaptive End-to-End Dialog Systems | End-to-end learning framework is useful for building dialog systems for its
simplicity in training and efficiency in model updating. However, current
end-to-end approaches only consider user semantic inputs in learning and
under-utilize other user information. Therefore, we propose to include user
sentiment obtained through multimodal information (acoustic, dialogic and
textual), in the end-to-end learning framework to make systems more
user-adaptive and effective. We incorporated user sentiment information in both
supervised and reinforcement learning settings. In both settings, adding
sentiment information reduced the dialog length and improved the task success
rate on a bus information search task. This work is the first attempt to
incorporate multimodal user information in the adaptive end-to-end dialog
system training framework, and it attains state-of-the-art performance.
| 2,019 | Computation and Language |
Neural Particle Smoothing for Sampling from Conditional Sequence Models | We introduce neural particle smoothing, a sequential Monte Carlo method for
sampling annotations of an input string from a given probability model. In
contrast to conventional particle filtering algorithms, we train a proposal
distribution that looks ahead to the end of the input string by means of a
right-to-left LSTM. We demonstrate that this innovation can improve the quality
of the sample. To motivate our formal choices, we explain how our neural model
and neural sampler can be viewed as low-dimensional but nonlinear
approximations to working with HMMs over very large state spaces.
| 2,018 | Computation and Language |
A Tree Search Algorithm for Sequence Labeling | In this paper we propose a novel reinforcement learning based model for
sequence tagging, referred to as MM-Tag. Inspired by the success and
methodology of the AlphaGo Zero, MM-Tag formalizes the problem of sequence
tagging with a Monte Carlo tree search (MCTS) enhanced Markov decision process
(MDP) model, in which the time steps correspond to the positions of words in a
sentence from left to right, and each action corresponds to assigning a tag to a
word. Two long short-term memory networks (LSTM) are used to summarize the past
tag assignments and words in the sentence. Based on the outputs of LSTMs, the
policy for guiding the tag assignment and the value for predicting the whole
tagging accuracy of the whole sentence are produced. The policy and value are
then strengthened with MCTS, which takes the produced raw policy and value as
inputs, simulates and evaluates the possible tag assignments at the subsequent
positions, and outputs a better search policy for assigning tags. A
reinforcement learning algorithm is proposed to train the model parameters. Our
work is the first to apply the MCTS enhanced MDP model to the sequence tagging
task. We show that MM-Tag can accurately predict the tags thanks to the
exploratory decision making mechanism introduced by MCTS. Experimental results
on a chunking benchmark show that MM-Tag outperforms the
state-of-the-art sequence tagging baselines, including CRF and CRF with LSTM.
| 2,018 | Computation and Language |
OPA2Vec: combining formal and informal content of biomedical ontologies
to improve similarity-based prediction | Motivation: Ontologies are widely used in biology for data annotation,
integration, and analysis. In addition to formally structured axioms,
ontologies contain meta-data in the form of annotation axioms which provide
valuable pieces of information that characterize ontology classes. Annotations
commonly used in ontologies include class labels, descriptions, or synonyms.
Despite being a rich source of semantic information, the ontology meta-data are
generally unexploited by ontology-based analysis methods such as semantic
similarity measures. Results: We propose a novel method, OPA2Vec, to generate
vector representations of biological entities in ontologies by combining formal
ontology axioms and annotation axioms from the ontology meta-data. We apply a
Word2Vec model that has been pre-trained on PubMed abstracts to produce feature
vectors from our collected data. We validate our method in two different ways:
first, we use the obtained vector representations of proteins as a similarity
measure to predict protein-protein interaction (PPI) on two different datasets.
Second, we evaluate our method on predicting gene-disease associations based on
phenotype similarity by generating vector representations of genes and diseases
using a phenotype ontology, and applying the obtained vectors to predict
gene-disease associations. These two experiments are just an illustration of
the possible applications of our method. OPA2Vec can be used to produce vector
representations of any biomedical entity given any type of biomedical ontology.
Availability: https://github.com/bio-ontology-research-group/opa2vec Contact:
[email protected] and [email protected].
| 2,018 | Computation and Language |
Subword Regularization: Improving Neural Network Translation Models with
Multiple Subword Candidates | Subword units are an effective way to alleviate the open vocabulary problems
in neural machine translation (NMT). While sentences are usually converted into
unique subword sequences, subword segmentation is potentially ambiguous and
multiple segmentations are possible even with the same vocabulary. The question
addressed in this paper is whether it is possible to harness the segmentation
ambiguity as a noise to improve the robustness of NMT. We present a simple
regularization method, subword regularization, which trains the model with
multiple subword segmentations probabilistically sampled during training. In
addition, for better subword sampling, we propose a new subword segmentation
algorithm based on a unigram language model. We experiment with multiple
corpora and report consistent improvements especially on low resource and
out-of-domain settings.
| 2,018 | Computation and Language |
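A small sketch of the core idea of probabilistically sampling one of several subword segmentations of a word under a unigram model. For clarity it enumerates segmentations exhaustively rather than working on a lattice as the paper's algorithm does, and the toy vocabulary probabilities are invented.

```python
import random

def segmentations(word, vocab):
    """Enumerate all ways to split `word` into pieces that are in `vocab`."""
    if not word:
        return [[]]
    segs = []
    for i in range(1, len(word) + 1):
        piece = word[:i]
        if piece in vocab:
            segs += [[piece] + rest for rest in segmentations(word[i:], vocab)]
    return segs

def sample_segmentation(word, vocab):
    """Sample one segmentation with probability proportional to the product of
    its unigram piece probabilities, so training sees multiple segmentations
    of the same word instead of a single deterministic one."""
    segs = segmentations(word, vocab)
    weights = []
    for seg in segs:
        p = 1.0
        for piece in seg:
            p *= vocab[piece]
        weights.append(p)
    return random.choices(segs, weights=weights, k=1)[0]

# Toy unigram vocabulary with probabilities (invented for illustration).
vocab = {"un": 0.05, "fortunate": 0.02, "ly": 0.08,
         "unfortunate": 0.01, "fortunately": 0.005, "unfortunately": 0.002}
for _ in range(3):
    print(sample_segmentation("unfortunately", vocab))
```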
From Credit Assignment to Entropy Regularization: Two New Algorithms for
Neural Sequence Prediction | In this work, we study the credit assignment problem in reward augmented
maximum likelihood (RAML) learning, and establish a theoretical equivalence
between the token-level counterpart of RAML and the entropy regularized
reinforcement learning. Inspired by the connection, we propose two sequence
prediction algorithms, one extending RAML with fine-grained credit assignment
and the other improving Actor-Critic with a systematic entropy regularization.
On two benchmark datasets, we show the proposed algorithms outperform RAML and
Actor-Critic respectively, providing new alternatives to sequence prediction.
| 2,018 | Computation and Language |
Recurrent Entity Networks with Delayed Memory Update for Targeted
Aspect-based Sentiment Analysis | While neural networks have been shown to achieve impressive results for
sentence-level sentiment analysis, targeted aspect-based sentiment analysis
(TABSA) --- extraction of fine-grained opinion polarity w.r.t. a pre-defined
set of aspects --- remains a difficult task. Motivated by recent advances in
memory-augmented models for machine reading, we propose a novel architecture,
utilising external "memory chains" with a delayed memory update mechanism to
track entities. On a TABSA task, the proposed model demonstrates substantial
improvements over state-of-the-art approaches, including those using external
knowledge bases.
| 2,018 | Computation and Language |
Cross-Modal Retrieval in the Cooking Context: Learning Semantic
Text-Image Embeddings | Designing powerful tools that support cooking activities has rapidly gained
popularity due to the massive amounts of available data, as well as recent
advances in machine learning that are capable of analyzing them. In this paper,
we propose a cross-modal retrieval model aligning visual and textual data (like
pictures of dishes and their recipes) in a shared representation space. We
describe an effective learning scheme, capable of tackling large-scale
problems, and validate it on the Recipe1M dataset containing nearly 1 million
picture-recipe pairs. We show the effectiveness of our approach regarding
previous state-of-the-art models and present qualitative results over
computational cooking use cases.
| 2,018 | Computation and Language |
Automatic Metric Validation for Grammatical Error Correction | Metric validation in Grammatical Error Correction (GEC) is currently done by
observing the correlation between human and metric-induced rankings. However,
such correlation studies are costly, methodologically troublesome, and suffer
from low inter-rater agreement. We propose MAEGE, an automatic methodology for
GEC metric validation, that overcomes many of the difficulties with existing
practices. Experiments with MAEGE shed new light on metric quality, showing
for example that the standard $M^2$ metric fares poorly on corpus-level
ranking. Moreover, we use MAEGE to perform a detailed analysis of metric
behavior, showing that correcting some types of errors is consistently
penalized by existing metrics.
| 2,018 | Computation and Language |