Titles | Abstracts | Years | Categories |
---|---|---|---|
Fine-Grained Static Detection of Obfuscation Transforms Using
Ensemble-Learning and Semantic Reasoning | The ability to efficiently detect the software protections used is of prime importance
to facilitate the selection and application of adequate deobfuscation
techniques. We present a novel approach that combines semantic reasoning
techniques with ensemble learning classification for the purpose of providing a
static detection framework for obfuscation transformations. By contrast to
existing work, we provide a methodology that can detect multiple layers of
obfuscation, without depending on knowledge of the underlying functionality of
the training-set used. We also extend our work to detect constructions of
obfuscation transformations, thus providing a fine-grained methodology. To that
end, we provide several studies for the best practices of the use of machine
learning techniques for a scalable and efficient model. According to our
experimental results and evaluations on obfuscators such as Tigress and OLLVM,
our models have up to 91% accuracy on state-of-the-art obfuscation
transformations. Our overall accuracies for their constructions are up to 100%.
| 2019 | Computation and Language |
Short Text Language Identification for Under Resourced Languages | The paper presents a hierarchical naive Bayesian and lexicon based classifier
for short text language identification (LID), useful for under-resourced
languages. The algorithm is evaluated on short pieces of text for the 11
official South African languages, some of which are similar languages. The
algorithm is compared to recent approaches using test sets from previous works
on South African languages as well as the Discriminating between Similar
Languages (DSL) shared tasks' datasets. Remaining research opportunities and
pressing concerns in evaluating and comparing LID approaches are also
discussed.
| 2019 | Computation and Language |
An Annotated Corpus of Reference Resolution for Interpreting Common
Grounding | Common grounding is the process of creating, repairing and updating mutual
understandings, which is a fundamental aspect of natural language conversation.
However, interpreting the process of common grounding is a challenging task,
especially under continuous and partially-observable context where complex
ambiguity, uncertainty, partial understandings and misunderstandings are
introduced. Interpretation becomes even more challenging when we deal with
dialogue systems which still have limited capability of natural language
understanding and generation. To address this problem, we consider reference
resolution as the central subtask of common grounding and propose a new
resource to study its intermediate process. Based on a simple and general
annotation schema, we collected a total of 40,172 referring expressions in
5,191 dialogues curated from an existing corpus, along with multiple judgements
of referent interpretations. We show that our annotation is highly reliable,
captures the complexity of common grounding through a natural degree of
reasonable disagreements, and allows for more detailed and quantitative
analyses of common grounding strategies. Finally, we demonstrate the advantages
of our annotation for interpreting, analyzing and improving common grounding in
baseline dialogue systems.
| 2019 | Computation and Language |
A Subword Level Language Model for Bangla Language | Language models are at the core of natural language processing. The ability
to represent natural language gives rise to its applications in numerous NLP
tasks including text classification, summarization, and translation. Research
in this area is very limited in Bangla due to the scarcity of resources, except
for some count-based models and very recent neural language models being
proposed, which are all based on words and limited in practical tasks due to
their high perplexity. This paper addresses the issue of perplexity
and proposes a subword level neural language model with the AWD-LSTM
architecture and various other techniques suitable for training in Bangla
language. The model is trained on a corpus of Bangla newspaper articles of an
appreciable size consisting of more than 28.5 million word tokens. The
performance comparison with various other models depicts the significant
reduction in perplexity the proposed model provides, reaching as low as 39.84,
in just 20 epochs.
| 2019 | Computation and Language |
Selection-based Question Answering of an MOOC | e-Yantra Robotics Competition (eYRC) is a unique Robotics Competition hosted
by IIT Bombay that is actually an Embedded Systems and Robotics MOOC.
Registrations have been growing exponentially each year, from 4500 in 2012 to
over 34000 in 2019. In this 5-month-long competition, students learn complex
skills under severe time pressure and have access to a discussion forum to post
doubts about the learning material. Responding to questions in real-time is a
challenge for project staff. Here, we illustrate the advantage of Deep Learning
for real-time question answering in the eYRC discussion forum. We illustrate
the advantage of Transformer-based contextual embedding mechanisms such as
Bidirectional Encoder Representations from Transformers (BERT) over word
embedding mechanisms such as Word2Vec. We propose a weighted similarity metric
as a measure of matching and find it more reliable than Content-Content or
Title-Title similarities alone. The automation of replying to questions has
brought the turnaround response time (TART) down from a minimum of 21 mins to a
minimum of 0.3 secs.
| 2019 | Computation and Language |
Drug Repurposing for Cancer: An NLP Approach to Identify Low-Cost
Therapies | More than 200 generic drugs approved by the U.S. Food and Drug Administration
for non-cancer indications have shown promise for treating cancer. Due to their
long history of safe patient use, low cost, and widespread availability,
repurposing of generic drugs represents a major opportunity to rapidly improve
outcomes for cancer patients and reduce healthcare costs worldwide. Evidence on
the efficacy of non-cancer generic drugs being tested for cancer exists in
scientific publications, but trying to manually identify and extract such
evidence is intractable. In this paper, we introduce a system to automate this
evidence extraction from PubMed abstracts. Our primary contribution is to
define the natural language processing pipeline required to obtain such
evidence, comprising the following modules: querying, filtering, cancer type
entity extraction, therapeutic association classification, and study type
classification. Using the subject matter expertise on our team, we create our
own datasets for these specialized domain-specific tasks. We obtain promising
performance in each of the modules by utilizing modern language modeling
techniques and plan to treat them as baseline approaches for future improvement
of individual components.
| 2019 | Computation and Language |
Improving Document Classification with Multi-Sense Embeddings | Efficient representation of text documents is an important building block in
many NLP tasks. Research on long text categorization has shown that simple
weighted averaging of word vectors for sentence representation often
outperforms more sophisticated neural models. Recently proposed Sparse
Composite Document Vector (SCDV) (Mekala et al., 2017) extends this approach
from sentences to documents using soft clustering over word vectors. However,
SCDV disregards the multi-sense nature of words, and it also suffers from the
curse of higher dimensionality. In this work, we address these shortcomings and
propose SCDV-MS. SCDV-MS utilizes multi-sense word embeddings and learns a
lower dimensional manifold. Through extensive experiments on multiple
real-world datasets, we show that SCDV-MS embeddings outperform previous
state-of-the-art embeddings on multi-class and multi-label text categorization
tasks. Furthermore, SCDV-MS embeddings are more efficient than SCDV in terms of
time and space complexity on textual classification tasks.
| 2019 | Computation and Language |
Extended Answer and Uncertainty Aware Neural Question Generation | In this paper, we study automatic question generation, the task of creating
questions from corresponding text passages where certain spans of the text
can serve as the answers. We propose an Extended Answer-aware Network (EAN)
which is trained with Word-based Coverage Mechanism (WCM) and decodes with
Uncertainty-aware Beam Search (UBS). The EAN represents the target answer by
its surrounding sentence with an encoder, and incorporates the information of
the extended answer into paragraph representation with gated
paragraph-to-answer attention to tackle the problem of the inadequate
representation of the target answer. To reduce undesirable repetition, the WCM
penalizes repeatedly attending to the same words at different time-steps in the
training stage. The UBS aims to seek a better balance between the model
confidence in copying words from an input text paragraph and the confidence in
generating words from a vocabulary. We conduct experiments on the SQuAD
dataset, and the results show our approach achieves significant performance
improvement.
| 2019 | Computation and Language |
Hunting for Troll Comments in News Community Forums | There are different definitions of what a troll is. Certainly, a troll can be
somebody who teases people to make them angry, or somebody who offends people,
or somebody who wants to dominate any single discussion, or somebody who tries
to manipulate people's opinion (sometimes for money), etc. The last definition
is the one that dominates the public discourse in Bulgaria and Eastern Europe,
and this is our focus in this paper. In our work, we examine two types of
opinion manipulation trolls: paid trolls that have been revealed from leaked
reputation management contracts and mentioned trolls that have been called such
by several different people. We show that these definitions are sensible: we
build two classifiers that can distinguish a post by such a paid troll from one
by a non-troll with 81-82% accuracy; the same classifier achieves 81-82%
accuracy on so-called mentioned troll vs. non-troll posts.
| 2016 | Computation and Language |
A Hybrid Morpheme-Word Representation for Machine Translation of
Morphologically Rich Languages | We propose a language-independent approach for improving statistical machine
translation for morphologically rich languages using a hybrid morpheme-word
representation where the basic unit of translation is the morpheme, but word
boundaries are respected at all stages of the translation process. Our model
extends the classic phrase-based model by means of (1) word boundary-aware
morpheme-level phrase extraction, (2) minimum error-rate training for a
morpheme-level translation model using word-level BLEU, and (3) joint scoring
with morpheme- and word-level language models. Further improvements are
achieved by combining our model with the classic one. The evaluation on English
to Finnish using Europarl (714K sentence pairs; 15.5M English words) shows
statistically significant improvements over the classic model based on BLEU and
human judgments.
| 2010 | Computation and Language |
In Search of Credible News | We study the problem of finding fake online news. This is an important
problem, as news of questionable credibility has recently been proliferating in
social media at an alarming scale. As this is an understudied problem,
especially for languages other than English, we first collect and release to
the research community three new balanced credible vs. fake news datasets
derived from four online sources. We then propose a language-independent
approach for automatically distinguishing credible from fake news, based on a
rich feature set. In particular, we use linguistic (n-gram),
credibility-related (capitalization, punctuation, pronoun use, sentiment
polarity), and semantic (embeddings and DBPedia data) features. Our experiments
on three different test sets show that our model can distinguish credible from
fake news with very high accuracy.
| 2016 | Computation and Language |
Retrospective and Prospective Mixture-of-Generators for Task-oriented
Dialogue Response Generation | Dialogue response generation (DRG) is a critical component of task-oriented
dialogue systems (TDSs). Its purpose is to generate proper natural language
responses given some context, e.g., historical utterances, system states, etc.
State-of-the-art work focuses on how to better tackle DRG in an end-to-end way.
Typically, such studies assume that each token is drawn from a single
distribution over the output vocabulary, which may not always be optimal.
Responses vary greatly with different intents, e.g., domains, system actions.
We propose a novel mixture-of-generators network (MoGNet) for DRG, where we
assume that each token of a response is drawn from a mixture of distributions.
MoGNet consists of a chair generator and several expert generators. Each expert
is specialized for DRG w.r.t. a particular intent. The chair coordinates
multiple experts and combines the output they have generated to produce more
appropriate responses. We propose two strategies to help the chair make better
decisions, namely, a retrospective mixture-of-generators (RMoG) and prospective
mixture-of-generators (PMoG). The former only considers the historical
expert-generated responses until the current time step while the latter also
considers possible expert-generated responses in the future by encouraging
exploration. In order to differentiate experts, we also devise a
global-and-local (GL) learning scheme that forces each expert to be specialized
towards a particular intent using a local loss and trains the chair and all
experts to coordinate using a global loss.
We carry out extensive experiments on the MultiWOZ benchmark dataset. MoGNet
significantly outperforms state-of-the-art methods in terms of both automatic
and human evaluations, demonstrating its effectiveness for DRG.
| 2020 | Computation and Language |
Deep Poetry: A Chinese Classical Poetry Generation System | In this work, we demonstrate a Chinese classical poetry generation system
called Deep Poetry. Existing systems for Chinese classical poetry generation
are mostly template-based and very few of them can accept multi-modal input.
Unlike previous systems, Deep Poetry uses neural networks that are trained on
over 200 thousand poems and 3 million pieces of ancient Chinese prose. Our system can
accept plain text, images or artistic conceptions as inputs to generate Chinese
classical poetry. More importantly, users are allowed to participate in the
process of writing poetry by our system. For the user's convenience, we deploy
the system on the WeChat applet platform, so users can use the system on a
mobile device whenever and wherever they like. The demo video of this paper is
available at https://youtu.be/jD1R_u9TA3M.
| 2019 | Computation and Language |
An Accuracy-Enhanced Stemming Algorithm for Arabic Information Retrieval | This paper provides a method for indexing and retrieving Arabic texts, based
on natural language processing. Our approach exploits the notion of template in
word stemming and replaces the words by their stems. This technique has proven
to be effective since it has returned significant relevant retrieval results by
decreasing silence during the retrieval phase. Series of experiments have been
conducted to test the performance of the proposed algorithm ESAIR (Enhanced
Stemmer for Arabic Information Retrieval). The results obtained indicate that
the algorithm extracts the exact root with an accuracy rate of up to 96%,
hence improving information retrieval.
| 2014 | Computation and Language |
Unsupervised Natural Question Answering with a Small Model | The recent (2019-02) demonstration of the power of huge language models such
as GPT-2 to memorise the answers to factoid questions raises questions about
the extent to which knowledge is being embedded directly within these large
models. This short paper describes an architecture through which much smaller
models can also answer such questions - by making use of 'raw' external
knowledge. The contribution of this work is that the methods presented here
rely on unsupervised learning techniques, complementing the unsupervised
training of the Language Model. The goal of this line of research is to be able
to add knowledge explicitly, without extensive training.
| 2019 | Computation and Language |
End-to-end ASR: from Supervised to Semi-Supervised Learning with Modern
Architectures | We study pseudo-labeling for the semi-supervised training of ResNet,
Time-Depth Separable ConvNets, and Transformers for speech recognition, with
either CTC or Seq2Seq loss functions. We perform experiments on the standard
LibriSpeech dataset, and leverage additional unlabeled data from LibriVox
through pseudo-labeling. We show that while Transformer-based acoustic models
have superior performance with the supervised dataset alone, semi-supervision
improves all models across architectures and loss functions and bridges much of
the performance gap between them. In doing so, we reach a new state-of-the-art
for end-to-end acoustic models decoded with an external language model in the
standard supervised learning setting, and a new absolute state-of-the-art with
semi-supervised training. Finally, we study the effect of leveraging different
amounts of unlabeled audio, propose several ways of evaluating the
characteristics of unlabeled audio which improve acoustic modeling, and show
that acoustic models trained with more audio rely less on external language
models.
| 2020 | Computation and Language |
Aging Memories Generate More Fluent Dialogue Responses with Memory
Augmented Neural Networks | Memory Networks have emerged as effective models to incorporate Knowledge
Bases (KB) into neural networks. By storing KB embeddings into a memory
component, these models can learn meaningful representations that are grounded
to external knowledge. However, as the memory unit becomes full, the oldest
memories are replaced by newer representations.
In this paper, we question this approach and provide experimental evidence
that conventional Memory Networks store highly correlated vectors during
training. While increasing the memory size mitigates this problem, this also
leads to overfitting as the memory stores a large number of training latent
representations. To address these issues, we propose a novel regularization
mechanism named memory dropout, which 1) samples a single latent vector from the
distribution of redundant memories, and 2) ages redundant memories, thus increasing
the probability that they are overwritten during training. This fully
differentiable technique allows us to achieve state-of-the-art response
generation in the Stanford Multi-Turn Dialogue and Cambridge Restaurant
datasets.
| 2020 | Computation and Language |
Classification as Decoder: Trading Flexibility for Control in Medical
Dialogue | Generative seq2seq dialogue systems are trained to predict the next word in
dialogues that have already occurred. They can learn from large unlabeled
conversation datasets, build a deeper understanding of conversational context,
and generate a wide variety of responses. This flexibility comes at the cost of
control, a concerning tradeoff in doctor/patient interactions. Inaccuracies,
typos, or undesirable content in the training data will be reproduced by the
model at inference time. We trade a small amount of labeling effort and some
loss of response variety in exchange for quality control. More specifically, a
pretrained language model encodes the conversational context, and we finetune a
classification head to map an encoded conversational context to a response
class, where each class is a noisily labeled group of interchangeable
responses. Experts can update these exemplar responses over time as best
practices change without retraining the classifier or invalidating old training
data. Expert evaluation of 775 unseen doctor/patient conversations shows that
only 12% of the discriminative model's responses are worse than what the
doctor ended up writing, compared to 18% for the generative model.
| 2019 | Computation and Language |
Co-Attention Hierarchical Network: Generating Coherent Long Distractors
for Reading Comprehension | In reading comprehension, generating sentence-level distractors is a
significant task, which requires a deep understanding of the article and
question. The traditional entity-centered methods can only generate word-level
or phrase-level distractors. Although recently proposed neural-based methods
like sequence-to-sequence (Seq2Seq) model show great potential in generating
creative text, the previous neural methods for distractor generation ignore two
important aspects. First, they didn't model the interactions between the
article and question, so the generated distractors tend to be too general
or not relevant to the question context. Second, they didn't emphasize the
relationship between the distractor and article, making the generated
distractors not semantically relevant to the article and thus fail to form a
set of meaningful options. To solve the first problem, we propose a
co-attention enhanced hierarchical architecture to better capture the
interactions between the article and question, thus guide the decoder to
generate more coherent distractors. To alleviate the second problem, we add an
additional semantic similarity loss to push the generated distractors more
relevant to the article. Experimental results show that our model outperforms
several strong baselines on automatic metrics, achieving state-of-the-art
performance. Further human evaluation indicates that our generated distractors
are more coherent and more educative compared with those distractors generated
by baselines.
| 2019 | Computation and Language |
Global Greedy Dependency Parsing | Most syntactic dependency parsing models may fall into one of two categories:
transition- and graph-based models. The former models enjoy high inference
efficiency with linear time complexity, but they rely on the stacking or
re-ranking of partially-built parse trees to build a complete parse tree and
are stuck with slower training due to the necessity of dynamic oracle training.
The latter, graph-based models, may boast better performance but are
unfortunately marred by polynomial time inference. In this paper, we propose a
novel parsing order objective, resulting in a novel dependency parsing model
capable of both global (in sentence scope) feature extraction as in graph
models and linear time inference as in transition-based models. The proposed global
greedy parser only uses two arc-building actions, left and right arcs, for
projective parsing. When equipped with two extra non-projective arc-building
actions, the proposed parser may also smoothly support non-projective parsing.
Using multiple benchmark treebanks, including the Penn Treebank (PTB), the
CoNLL-X treebanks, and the Universal Dependency Treebanks, we evaluate our
parser and demonstrate that the proposed novel parser achieves good performance
with faster training and decoding.
| 2020 | Computation and Language |
EmpDG: Multiresolution Interactive Empathetic Dialogue Generation | A humanized dialogue system is expected to generate empathetic replies, which
should be sensitive to the users' expressed emotion. The task of empathetic
dialogue generation is proposed to address this problem. The essential
challenges lie in accurately capturing the nuances of human emotion and
considering the potential of user feedback, which are overlooked by the
majority of existing work. In response to this problem, we propose a
multi-resolution adversarial model -- EmpDG, to generate more empathetic
responses. EmpDG exploits both the coarse-grained dialogue-level and
fine-grained token-level emotions, the latter of which helps to better capture
the nuances of user emotion. In addition, we introduce an interactive
adversarial learning framework which exploits the user feedback, to identify
whether the generated responses evoke emotion perceptivity in dialogues.
Experimental results show that the proposed approach significantly outperforms
the state-of-the-art baselines in both content quality and emotion
perceptivity.
| 2020 | Computation and Language |
Controlling Neural Machine Translation Formality with Synthetic
Supervision | This work aims to produce translations that convey source language content at
a formality level that is appropriate for a particular audience. Framing this
problem as a neural sequence-to-sequence task ideally requires training
triplets consisting of a bilingual sentence pair labeled with target language
formality. However, in practice, available training examples are limited to
English sentence pairs of different styles, and bilingual parallel sentences of
unknown formality. We introduce a novel training scheme for multi-task models
that automatically generates synthetic training triplets by inferring the
missing element on the fly, thus enabling end-to-end training. Comprehensive
automatic and human assessments show that our best model outperforms existing
models by producing translations that better match desired formality levels
while preserving the source meaning.
| 2019 | Computation and Language |
SemanticZ at SemEval-2016 Task 3: Ranking Relevant Answers in Community
Question Answering Using Semantic Similarity Based on Fine-tuned Word
Embeddings | We describe our system for finding good answers in a community forum, as
defined in SemEval-2016, Task 3 on Community Question Answering. Our approach
relies on several semantic similarity features based on fine-tuned word
embeddings and topic similarities. In the main Subtask C, our primary
submission was ranked third, with a MAP of 51.68 and accuracy of 69.94. In
Subtask A, our primary submission was also third, with MAP of 77.58 and
accuracy of 73.39.
| 2016 | Computation and Language |
Global Thread-Level Inference for Comment Classification in Community
Question Answering | Community question answering, a recent evolution of question answering in the
Web context, allows a user to quickly consult the opinion of a number of people
on a particular topic, thus taking advantage of the wisdom of the crowd. Here
we try to help the user by deciding automatically which answers are good and
which are bad for a given question. In particular, we focus on exploiting the
output structure at the thread level in order to make more consistent global
decisions. More specifically, we exploit the relations between pairs of
comments at any distance in the thread, which we incorporate into graph-cut and
ILP frameworks. We evaluated our approach on the benchmark dataset of
SemEval-2015 Task 3. Results improved over the state of the art, confirming the
importance of using thread level information.
| 2015 | Computation and Language |
Paraphrasing Verbs for Noun Compound Interpretation | An important challenge for the automatic analysis of English written text is
the abundance of noun compounds: sequences of nouns acting as a single noun. In
our view, their semantics is best characterized by the set of all possible
paraphrasing verbs, with associated weights, e.g., malaria mosquito is carry
(23), spread (16), cause (12), transmit (9), etc. Using Amazon's Mechanical
Turk, we collect paraphrasing verbs for 250 noun-noun compounds previously
proposed in the linguistic literature, thus creating a valuable resource for
noun compound interpretation. Using these verbs, we further construct a dataset
of pairs of sentences representing a special kind of textual entailment task,
where a binary decision is to be made about whether an expression involving a
verb and two nouns can be transformed into a noun compound, while preserving
the sentence meaning.
| 2008 | Computation and Language |
Joint Embedding Learning of Educational Knowledge Graphs | As an efficient model for knowledge organization, the knowledge graph has
been widely adopted in several fields, e.g., biomedicine, sociology, and
education. There is a steady trend of learning embedding representations of
knowledge graphs to facilitate knowledge graph construction and downstream
tasks. In general, knowledge graph embedding techniques aim to learn vectorized
representations that preserve the structural information of the graph.
Conventional embedding learning models rely on structural relationships among
entities and relations. However, in educational knowledge graphs, structural
relationships are not the focus. Instead, rich literals of the graphs are more
valuable. In this paper, we focus on this problem and propose a novel model for
embedding learning of educational knowledge graphs. Our model considers both
structural and literal information and jointly learns embedding
representations. Three experimental graphs were constructed based on an
educational knowledge graph which has been applied in real-world teaching. We
conducted two experiments on the three graphs and other common benchmark
graphs. The experimental results proved the effectiveness of our model and its
superiority over other baselines when processing educational knowledge graphs.
| 2019 | Computation and Language |
Joint Emotion Label Space Modelling for Affect Lexica | Emotion lexica are commonly used resources to combat data poverty in
automatic emotion detection. However, vocabulary coverage issues, differences
in construction method and discrepancies in emotion framework and
representation result in a heterogeneous landscape of emotion detection
resources, calling for a unified approach to utilising them. To combat this, we
present an extended emotion lexicon of 30,273 unique entries, which is a result
of merging eight existing emotion lexica by means of a multi-view variational
autoencoder (VAE). We showed that a VAE is a valid approach for combining
lexica with different label spaces into a joint emotion label space with a
chosen number of dimensions, and that these dimensions are still interpretable.
We tested the utility of the unified VAE lexicon by employing the lexicon
values as features in an emotion detection model. We found that the VAE lexicon
outperformed individual lexica, but contrary to our expectations, it did not
outperform a naive concatenation of lexica, although it did contribute to the
naive concatenation when added as an extra lexicon. Furthermore, using lexicon
information as additional features on top of state-of-the-art language models
usually resulted in a better performance than when no lexicon information was
used.
| 2021 | Computation and Language |
Natural Language Generation Challenges for Explainable AI | Good quality explanations of artificial intelligence (XAI) reasoning must be
written (and evaluated) for an explanatory purpose, targeted towards their
readers, have a good narrative and causal structure, and highlight where
uncertainty and data quality affect the AI output. I discuss these challenges
from a Natural Language Generation (NLG) perspective, and highlight four
specific NLG for XAI research challenges.
| 2019 | Computation and Language |
Zero-Shot Semantic Parsing for Instructions | We consider a zero-shot semantic parsing task: parsing instructions into
compositional logical forms, in domains that were not seen during training. We
present a new dataset with 1,390 examples from 7 application domains (e.g. a
calendar or a file manager), each example consisting of a triplet: (a) the
application's initial state, (b) an instruction, to be carried out in the
context of that state, and (c) the state of the application after carrying out
the instruction. We introduce a new training algorithm that aims to train a
semantic parser on examples from a set of source domains, so that it can
effectively parse instructions from an unknown target domain. We integrate our
algorithm into the floating parser of Pasupat and Liang (2015), and further
augment the parser with features and a logical form candidate filtering logic,
to support zero-shot adaptation. Our experiments with various zero-shot
adaptation setups demonstrate substantial performance gains over a non-adapted
parser.
| 2019 | Computation and Language |
Casting a Wide Net: Robust Extraction of Potentially Idiomatic
Expressions | Idiomatic expressions like `out of the woods' and `up the ante' present a
range of difficulties for natural language processing applications. We present
work on the annotation and extraction of what we term potentially idiomatic
expressions (PIEs), a subclass of multiword expressions covering both literal
and non-literal uses of idiomatic expressions. Existing corpora of PIEs are
small and have limited coverage of different PIE types, which hampers research.
To further progress on the extraction and disambiguation of potentially
idiomatic expressions, larger corpora of PIEs are required. In addition, larger
corpora are a potential source for valuable linguistic insights into idiomatic
expressions and their variability. We propose automatic tools to facilitate the
building of larger PIE corpora, by investigating the feasibility of using
dictionary-based extraction of PIEs as a pre-extraction tool for English. We do
this by assessing the reliability and coverage of idiom dictionaries, the
annotation of a PIE corpus, and the automatic extraction of PIEs from a large
corpus. Results show that combinations of dictionaries are a reliable source of
idiomatic expressions, that PIEs can be annotated with a high reliability
(0.74-0.91 Fleiss' Kappa), and that parse-based PIE extraction yields highly
accurate performance (88% F1-score). Combining complementary PIE extraction
methods increases reliability further, to over 92% F1-score. Moreover, the
extraction method presented here could be extended to other types of multiword
expressions and to other languages, given that sufficient NLP tools are
available.
| 2019 | Computation and Language |
Table-Of-Contents generation on contemporary documents | The generation of precise and detailed Table-Of-Contents (TOC) from a
document is a problem of major importance for document understanding and
information extraction. Despite its importance, it is still a challenging task,
especially for non-standardized documents with rich layout information such as
commercial documents. In this paper, we present a new neural-based pipeline for
TOC generation applicable to any searchable document. Unlike previous methods,
we do not use semantic labeling nor assume the presence of parsable TOC pages
in the document. Moreover, we analyze the influence of using external knowledge
encoded as a template. We empirically show that this approach is only useful in
a very low resource environment. Finally, we propose a new domain-specific data
set that sheds some light on the difficulties of TOC generation in real-world
documents. The proposed method shows better performance than the
state-of-the-art on a public data set and on the newly released data set.
| 2019 | Computation and Language |
A Comparative Study on End-to-end Speech to Text Translation | Recent advances in deep learning show that the end-to-end speech-to-text
translation model is a promising approach in the direct speech translation
field. In this work, we provide an overview of different end-to-end
architectures, as well as the usage of an auxiliary connectionist temporal
classification (CTC) loss for better convergence. We also investigate
pre-training variants such as initializing different components of a model
using pre-trained models, and their impact on the final performance, which
gives boosts of up to 4% in BLEU and 5% in TER. Our experiments are performed on
270h IWSLT TED-talks En->De, and 100h LibriSpeech Audiobooks En->Fr. We also
show improvements over the current end-to-end state-of-the-art systems on both
tasks.
| 2019 | Computation and Language |
On Using SpecAugment for End-to-End Speech Translation | This work investigates a simple data augmentation technique, SpecAugment, for
end-to-end speech translation. SpecAugment is a low-cost implementation method
applied directly to the audio input features, and it consists of masking blocks
of frequency channels and/or time steps. We apply SpecAugment on end-to-end
speech translation tasks and achieve up to +2.2% BLEU on LibriSpeech
Audiobooks En->Fr and +1.2% on IWSLT TED-talks En->De by alleviating
overfitting to some extent. We also examine the effectiveness of the method in
a variety of data scenarios and show that the method also leads to significant
improvements in various data conditions irrespective of the amount of training
data.
| 2019 | Computation and Language |
On using 2D sequence-to-sequence models for speech recognition | Attention-based sequence-to-sequence models have shown promising results in
automatic speech recognition. Using these architectures, one-dimensional input
and output sequences are related by an attention approach, thereby replacing
more explicit alignment processes, like in classical HMM-based modeling. In
contrast, here we apply a novel two-dimensional long short-term memory (2DLSTM)
architecture to directly model the input/output relation between audio/feature
vector sequences and word sequences. The proposed model is an alternative in
which, instead of using any type of attention component, we apply a 2DLSTM
layer to assimilate the context from both input observations and output
transcriptions. The experimental evaluation on the Switchboard 300h automatic
speech recognition task shows word error rates for the 2DLSTM model that are
competitive with an end-to-end attention-based model.
| 2019 | Computation and Language |
Discovering New Intents via Constrained Deep Adaptive Clustering with
Cluster Refinement | Identifying new user intents is an essential task in the dialogue system.
However, it is hard to get satisfying clustering results since the definition
of intents is strongly guided by prior knowledge. Existing methods incorporate
prior knowledge by intensive feature engineering, which not only leads to
overfitting but also makes it sensitive to the number of clusters. In this
paper, we propose constrained deep adaptive clustering with cluster refinement
(CDAC+), an end-to-end clustering method that can naturally incorporate
pairwise constraints as prior knowledge to guide the clustering process.
Moreover, we refine the clusters by forcing the model to learn from the high
confidence assignments. After eliminating low confidence assignments, our
approach is surprisingly insensitive to the number of clusters. Experimental
results on the three benchmark datasets show that our method can yield
significant improvements over strong baselines.
| 2019 | Computation and Language |
Rule-Guided Compositional Representation Learning on Knowledge Graphs | Representation learning on a knowledge graph (KG) is to embed entities and
relations of a KG into low-dimensional continuous vector spaces. Early KG
embedding methods only pay attention to structured information encoded in
triples, which would cause limited performance due to the structural sparseness
of KGs. Some recent attempts consider path information to expand the structure
of KGs but lack explainability in the process of obtaining the path
representations. In this paper, we propose a novel Rule and Path-based Joint
Embedding (RPJE) scheme, which takes full advantage of the explainability and
accuracy of logic rules, the generalization of KG embedding as well as the
supplementary semantic structure of paths. Specifically, logic rules of
different lengths (the number of relations in rule body) in the form of Horn
clauses are first mined from the KG and elaborately encoded for representation
learning. Then, the rules of length 2 are applied to compose paths accurately
while the rules of length 1 are explicitly employed to create semantic
associations among relations and constrain relation embeddings. Besides, the
confidence level of each rule is also considered in optimization to guarantee
the availability of applying the rule to representation learning. Extensive
experimental results illustrate that RPJE outperforms other state-of-the-art
baselines on KG completion task, which also demonstrate the superiority of
utilizing logic rules as well as paths for improving the accuracy and
explainability of representation learning.
| 2020 | Computation and Language |
Knowledge Graph Alignment Network with Gated Multi-hop Neighborhood
Aggregation | Graph neural networks (GNNs) have emerged as a powerful paradigm for
embedding-based entity alignment due to their capability of identifying
isomorphic subgraphs. However, in real knowledge graphs (KGs), the counterpart
entities usually have non-isomorphic neighborhood structures, which easily
causes GNNs to yield different representations for them. To tackle this
problem, we propose a new KG alignment network, namely AliNet, aiming at
mitigating the non-isomorphism of neighborhood structures in an end-to-end
manner. As the direct neighbors of counterpart entities are usually dissimilar
due to the schema heterogeneity, AliNet introduces distant neighbors to expand
the overlap between their neighborhood structures. It employs an attention
mechanism to highlight helpful distant neighbors and reduce noises. Then, it
controls the aggregation of both direct and distant neighborhood information
using a gating mechanism. We further propose a relation loss to refine entity
representations. We perform thorough experiments with detailed ablation studies
and analyses on five entity alignment datasets, demonstrating the effectiveness
of AliNet.
| 2019 | Computation and Language |
CAIL2019-SCM: A Dataset of Similar Case Matching in Legal Domain | In this paper, we introduce CAIL2019-SCM, Chinese AI and Law 2019 Similar
Case Matching dataset. CAIL2019-SCM contains 8,964 triplets of cases published
by the Supreme People's Court of China. CAIL2019-SCM focuses on detecting
similar cases, and the participants are required to check which two cases are
more similar in the triplets. There are 711 teams who participated in this
year's competition, and the best team has reached a score of 71.88. We have
also implemented several baselines to help researchers better understand this
task. The dataset and more details can be found from
https://github.com/china-ai-law-challenge/CAIL2019/tree/master/scm.
| 2019 | Computation and Language |
Red Dragon AI at TextGraphs 2019 Shared Task: Language Model Assisted
Explanation Generation | The TextGraphs-13 Shared Task on Explanation Regeneration asked participants
to develop methods to reconstruct gold explanations for elementary science
questions. Red Dragon AI's entries used the language of the questions and
explanation text directly, rather than constructing a separate graph-like
representation. Our leaderboard submission placed us 3rd in the competition,
but we present here three methods of increasing sophistication, each of which
scored successively higher on the test set after the competition close.
| 2019 | Computation and Language |
Real-Time Emotion Recognition via Attention Gated Hierarchical Memory
Network | Real-time emotion recognition (RTER) in conversations is significant for
developing emotionally intelligent chatting machines. Without the future
context in RTER, it becomes critical to build the memory bank carefully for
capturing historical context and summarize the memories appropriately to
retrieve relevant information. We propose an Attention Gated Hierarchical
Memory Network (AGHMN) to address the problems of prior work: (1) Commonly used
convolutional neural networks (CNNs) for utterance feature extraction are less
compatible with the memory modules; (2) Unidirectional gated recurrent units
(GRUs) only allow each historical utterance to have context before it,
preventing information propagation in the opposite direction; (3) The Soft
Attention for summarizing loses the positional and ordering information of
memories, regardless of how the memory bank is built. Particularly, we propose
a Hierarchical Memory Network (HMN) with a bidirectional GRU (BiGRU) as the
utterance reader and a BiGRU fusion layer for the interaction between
historical utterances. For memory summarizing, we propose an Attention GRU
(AGRU) where we utilize the attention weights to update the internal state of
GRU. We further promote the AGRU to a bidirectional variant (BiAGRU) to balance
the contextual information from recent memories and that from distant memories.
We conduct experiments on two emotion conversation datasets with extensive
analysis, demonstrating the efficacy of our AGHMN models.
| 2019 | Computation and Language |
Assessing the Benchmarking Capacity of Machine Reading Comprehension
Datasets | Existing analysis work in machine reading comprehension (MRC) is largely
concerned with evaluating the capabilities of systems. However, the
capabilities of datasets are not assessed for benchmarking language
understanding precisely. We propose a semi-automated, ablation-based
methodology for this challenge: by checking whether questions can be solved
even after removing features associated with a skill requisite for language
understanding, we evaluate to what degree the questions do not require the
skill. Experiments on 10 datasets (e.g., CoQA, SQuAD v2.0, and RACE) with a
strong baseline model show that, for example, the relative scores of a baseline
model provided with content words only and with shuffled sentence words in the
context are on average 89.2% and 78.5% of the original score, respectively.
These results suggest that most of the questions already answered correctly by
the model do not necessarily require grammatical and complex reasoning. For
precise benchmarking, MRC datasets will need to take extra care in their design
to ensure that questions can correctly evaluate the intended skills.
| 2019 | Computation and Language |
How Do You #relax When You're #stressed? A Content Analysis and
Infodemiology Study of Stress-Related Tweets | Background: Stress is a contributing factor to many major health problems in
the United States, such as heart disease, depression, and autoimmune diseases.
Relaxation is often recommended in mental health treatment as a frontline
strategy to reduce stress, thereby improving health conditions.
Objective: The objective of our study was to understand how people express
their feelings of stress and relaxation through Twitter messages.
Methods: We first performed a qualitative content analysis of 1326 and 781
tweets containing the keywords "stress" and "relax", respectively. We then
investigated the use of machine learning algorithms to automatically classify
tweets as stress versus non-stress and relaxation versus non-relaxation.
Finally, we applied these classifiers to sample datasets drawn from 4 cities
with the goal of evaluating the extent of any correlation between our automatic
classification of tweets and results from public stress surveys.
Results: Content analysis showed that the most frequent topic of stress
tweets was education, followed by work and social relationships. The most
frequent topic of relaxation tweets was rest and vacation, followed by nature
and water. When we applied the classifiers to the cities dataset, the
proportion of stress tweets in New York and San Diego was substantially higher
than that in Los Angeles and San Francisco.
Conclusions: This content analysis and infodemiology study revealed that
Twitter, when used in conjunction with natural language processing techniques,
is a useful data source for understanding stress and stress management
strategies, and can potentially supplement infrequently collected survey-based
stress data.
| 2017 | Computation and Language |
How to Ask Better Questions? A Large-Scale Multi-Domain Dataset for
Rewriting Ill-Formed Questions | We present a large-scale dataset for the task of rewriting an ill-formed
natural language question to a well-formed one. Our multi-domain question
rewriting MQR dataset is constructed from human contributed Stack Exchange
question edit histories. The dataset contains 427,719 question pairs which come
from 303 domains. We provide human annotations for a subset of the dataset as a
quality estimate. When moving from ill-formed to well-formed questions, the
question quality improves by an average of 45 points across three aspects. We
train sequence-to-sequence neural models on the constructed dataset and obtain
an improvement of 13.2% in BLEU-4 over baseline methods built from other data
resources. We release the MQR dataset to encourage research on the problem of
question rewriting.
| 2019 | Computation and Language |
Cantonese Automatic Speech Recognition Using Transfer Learning from
Mandarin | We propose a system to develop a basic automatic speech recognizer (ASR) for
Cantonese, a low-resource language, through transfer learning of Mandarin, a
high-resource language. We take a time-delayed neural network trained on
Mandarin, and perform weight transfer of several layers to a newly initialized
model for Cantonese. We experiment with the number of layers transferred, their
learning rates, and pretraining i-vectors. Key findings are that this approach
allows for quicker training time with less data. We find that for every epoch,
log-probability is smaller for transfer learning models compared to a
Cantonese-only model. The transfer learning models show slight improvement in
CER.
| 2019 | Computation and Language |
Attention-Informed Mixed-Language Training for Zero-shot Cross-lingual
Task-oriented Dialogue Systems | Recently, data-driven task-oriented dialogue systems have achieved promising
performance in English. However, developing dialogue systems that support
low-resource languages remains a long-standing challenge due to the absence of
high-quality data. In order to circumvent the expensive and time-consuming data
collection, we introduce Attention-Informed Mixed-Language Training (MLT), a
novel zero-shot adaptation method for cross-lingual task-oriented dialogue
systems. It leverages very few task-related parallel word pairs to generate
code-switching sentences for learning the inter-lingual semantics across
languages. Instead of manually selecting the word pairs, we propose to extract
source words based on the scores computed by the attention layer of a trained
English task-related model and then generate word pairs using existing
bilingual dictionaries. Furthermore, intensive experiments with different
cross-lingual embeddings demonstrate the effectiveness of our approach.
Finally, with very few word pairs, our model achieves significant zero-shot
adaptation performance improvements in both cross-lingual dialogue state
tracking and natural language understanding (i.e., intent detection and slot
filling) tasks compared to the current state-of-the-art approaches, which
utilize a much larger amount of bilingual data.
| 2019 | Computation and Language |
Automatic Text-based Personality Recognition on Monologues and
Multiparty Dialogues Using Attentive Networks and Contextual Embeddings | Previous works related to automatic personality recognition focus on using
traditional classification models with linguistic features. However, attentive
neural networks with contextual embeddings, which have achieved huge success in
text classification, are rarely explored for this task. In this project, we
have two major contributions. First, we create the first dialogue-based
personality dataset, FriendsPersona, by annotating 5 personality traits of
speakers from Friends TV Show through crowdsourcing. Second, we present a novel
approach to automatic personality recognition using pre-trained contextual
embeddings (BERT and RoBERTa) and attentive neural networks. Our models largely
improve the state-of-the-art results on the monologue Essays dataset by 2.49%, and
establish a solid benchmark on our FriendsPersona. By comparing results in two
datasets, we demonstrate the challenges of modeling personality in multi-party
dialogue.
| 2019 | Computation and Language |
An Empirical Study of Sections in Classifying Disease Outbreak Reports | Identifying articles that relate to infectious diseases is a necessary step
for any automatic bio-surveillance system that monitors news articles from the
Internet. Unlike scientific articles which are available in a strongly
structured form, news articles are usually loosely structured. In this chapter,
we investigate the importance of each section and the effect of section
weighting on performance of text classification. The experimental results show
that (1) classification models using the headline and leading sentence achieve
a high performance in terms of F-score compared to other parts of the article;
(2) all sections with bag-of-words representation (full text) achieve the
highest recall; and (3) section weighting information can help to improve
accuracy.
| 2010 | Computation and Language |
Minimizing the Bag-of-Ngrams Difference for Non-Autoregressive Neural
Machine Translation | Non-Autoregressive Neural Machine Translation (NAT) achieves significant
decoding speedup through generating target words independently and
simultaneously. However, in the context of non-autoregressive translation, the
word-level cross-entropy loss cannot model the target-side sequential
dependency properly, leading to its weak correlation with the translation
quality. As a result, NAT tends to generate disfluent translations with
over-translation and under-translation errors. In this paper, we propose to
train NAT to minimize the Bag-of-Ngrams (BoN) difference between the model
output and the reference sentence. The bag-of-ngrams training objective is
differentiable and can be efficiently calculated, which encourages NAT to
capture the target-side sequential dependency and correlates well with the
translation quality. We validate our approach on three translation tasks and
show that our approach largely outperforms the NAT baseline by about 5.0 BLEU
scores on WMT14 En$\leftrightarrow$De and about 2.5 BLEU scores on WMT16
En$\leftrightarrow$Ro.
| 2019 | Computation and Language |
Generating Diverse Translation by Manipulating Multi-Head Attention | Transformer model has been widely used on machine translation tasks and
obtained state-of-the-art results. In this paper, we report an interesting
phenomenon in its encoder-decoder multi-head attention: different attention
heads of the final decoder layer align to different word translation
candidates. We empirically verify this discovery and propose a method to
generate diverse translations by manipulating heads. Furthermore, we make use
of these diverse translations with the back-translation technique for better
data augmentation. Experiment results show that our method generates diverse
translations without severe drop in translation quality. Experiments also show
that back-translation with these diverse translations could bring significant
improvements in performance on translation tasks. An auxiliary experiment on a
conversation response generation task proves the effect of diversity as well.
| 2019 | Computation and Language |
Incorporating Textual Evidence in Visual Storytelling | Previous work on visual storytelling mainly focused on exploring image
sequence as evidence for storytelling and neglected textual evidence for
guiding story generation. Motivated by the human storytelling process, which recalls
stories for familiar images, we exploit textual evidence from similar images to
help generate coherent and meaningful stories. To pick the images which may
provide textual experience, we propose a two-step ranking method based on image
object recognition techniques. To utilize textual information, we design an
extended Seq2Seq model with two-channel encoder and attention. Experiments on
the VIST dataset show that our method outperforms state-of-the-art baseline
models without heavy engineering.
| 2019 | Computation and Language |
Emotion Recognition for Vietnamese Social Media Text | Emotion recognition or emotion prediction is a higher approach or a special
case of sentiment analysis. In this task, the result is not produced in terms
of polarity (positive or negative) or a rating (from 1 to 5), but at a more
detailed level of analysis in which the results are expressed as emotions such
as sadness, enjoyment, anger, disgust, fear, and surprise.
Emotion recognition plays a critical role in measuring the brand value of a
product by recognizing specific emotions of customers' comments. In this study,
we have achieved two targets. First and foremost, we built a standard
Vietnamese Social Media Emotion Corpus (UIT-VSMEC) with exactly 6,927
emotion-annotated sentences, contributing to emotion recognition research in
Vietnamese, a low-resource language in natural language processing
(NLP). Secondly, we assessed and measured machine learning and deep neural
network models on our UIT-VSMEC corpus. As a result, the CNN model achieved the
highest performance with the weighted F1-score of 59.74%. Our corpus is
available at our research website.
| 2,019 | Computation and Language |
Entity Extraction with Knowledge from Web Scale Corpora | Entity extraction is an important task in text mining and natural language
processing. A popular method for entity extraction is by comparing substrings
from free text against a dictionary of entities. In this paper, we present
several techniques as a post-processing step for improving the effectiveness of
the existing entity extraction technique. These techniques utilise models
trained on web-scale corpora, which makes them robust and versatile.
Experiments show that our techniques bring a notable improvement in efficiency
and effectiveness.
| 2,019 | Computation and Language |
What Do You Mean `Why?': Resolving Sluices in Conversations | In conversation, we often ask one-word questions such as `Why?' or `Who?'.
Such questions are typically easy for humans to answer, but can be hard for
computers, because their resolution requires retrieving both the right semantic
frames and the right arguments from context. This paper introduces the novel
ellipsis resolution task of resolving such one-word questions, referred to as
sluices in linguistics. We present a crowd-sourced dataset containing
annotations of sluices from over 4,000 dialogues collected from conversational
QA datasets, as well as a series of strong baseline architectures.
| 2,019 | Computation and Language |
MUSE: Parallel Multi-Scale Attention for Sequence to Sequence Learning | In sequence to sequence learning, the self-attention mechanism proves to be
highly effective, and achieves significant improvements in many tasks. However,
the self-attention mechanism is not without its own flaws. Although
self-attention can model extremely long dependencies, the attention in deep
layers tends to overconcentrate on a single token, leading to insufficient use
of local information and difficulty in representing long sequences. In this
work, we explore parallel multi-scale representation learning on sequence data,
striving to capture both long-range and short-range language structures. To
this end, we propose the Parallel MUlti-Scale attEntion (MUSE) and MUSE-simple.
MUSE-simple contains the basic idea of parallel multi-scale sequence
representation learning: it encodes the sequence in parallel at different
scales with the help of self-attention and pointwise transformation. MUSE
builds on MUSE-simple and explores combining convolution and self-attention to
learn sequence representations from a wider range of scales. We focus on
machine translation, and the proposed approach achieves
substantial performance improvements over Transformer, especially on long
sequences. More importantly, we find that although conceptually simple, its
success in practice requires intricate considerations, and the multi-scale
attention must build on a unified semantic space. Under the common setting, the
proposed model achieves substantial performance gains and outperforms all previous
models on three main machine translation tasks. In addition, MUSE has potential
for accelerating inference due to its parallelism. Code will be available at
https://github.com/lancopku/MUSE
| 2,019 | Computation and Language |
Chemical-protein Interaction Extraction via Gaussian Probability
Distribution and External Biomedical Knowledge | Motivation: The biomedical literature contains a wealth of chemical-protein
interactions (CPIs). Automatically extracting CPIs described in biomedical
literature is essential for drug discovery, precision medicine, as well as
basic biomedical research. Most existing methods focus only on the sentence
sequence to identify these CPIs. However, the local structure of sentences and
external biomedical knowledge also contain valuable information. Effective use
of such information may improve the performance of CPI extraction. Results: In
this paper, we propose a novel neural network-based approach to improve CPI
extraction. Specifically, the approach first employs BERT to generate
high-quality contextual representations of the title sequence, instance
sequence, and knowledge sequence. Then, the Gaussian probability distribution
is introduced to capture the local structure of the instance. Meanwhile, the
attention mechanism is applied to fuse the title information and the
biomedical knowledge. Finally, the related representations are concatenated
and fed into the softmax function to extract CPIs. We evaluate our proposed
model on the CHEMPROT corpus, where it outperforms other state-of-the-art
models. The experimental results show that
the Gaussian probability distribution and external knowledge are complementary
to each other. Integrating them can effectively improve the CPI extraction
performance. Furthermore, the Gaussian probability distribution can effectively
improve the extraction performance of sentences with overlapping relations in
biomedical relation extraction tasks. Availability: Data and code are available
at https://github.com/CongSun-dlut/CPI_extraction. Contact: [email protected],
[email protected] Supplementary information: Supplementary data are
available at Bioinformatics online.
| 2,020 | Computation and Language |
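As a small illustration of the Gaussian idea above, one can weight tokens by a Gaussian over their distance to the entity mentions so that the local structure around the entities dominates; the positions, sigma, and weighting scheme here are illustrative assumptions, not the paper's exact model:

    import numpy as np

    def gaussian_position_weights(seq_len, entity_positions, sigma=2.0):
        """Weight each token by its Gaussian proximity to the nearest entity mention."""
        positions = np.arange(seq_len)
        dists = np.min(np.abs(positions[:, None] - np.asarray(entity_positions)[None, :]), axis=1)
        weights = np.exp(-dists ** 2 / (2 * sigma ** 2))
        return weights / weights.sum()

    # Hypothetical sentence with a chemical mention at position 2 and a protein at 7.
    print(np.round(gaussian_position_weights(seq_len=12, entity_positions=[2, 7]), 3))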
An analysis of observation length requirements for machine understanding
of human behaviors from spoken language | The task of quantifying human behavior by observing interaction cues is an
important and useful one across a range of domains in psychological research
and practice. Machine learning-based approaches typically perform this task by
first estimating behavior based on cues within an observation window, such as a
fixed number of words, and then aggregating the behavior over all the windows
in that interaction. The length of this window directly impacts the accuracy of
estimation by controlling the amount of information being used. The exact link
between window length and accuracy, however, has not been well studied,
especially in spoken language. In this paper, we investigate this link and
present an analysis framework that determines appropriate window lengths for
the task of behavior estimation. Our proposed framework utilizes a two-pronged
evaluation approach: (a) extrinsic similarity between machine predictions and
human expert annotations, and (b) intrinsic consistency between intra-machine
and intra-human behavior relations. We apply our analysis to real-life
conversations that are annotated for a large and diverse set of behavior codes
and examine the relation between the nature of a behavior and how long it
should be observed. We find that behaviors describing negative and positive
affect can be accurately estimated from short to medium-length expressions
whereas behaviors related to problem-solving and dysphoria require much longer
observations and are difficult to quantify from language alone. These findings
are found to be generally consistent across different behavior modeling
approaches.
| 2,020 | Computation and Language |
A Cluster Ranking Model for Full Anaphora Resolution | Anaphora resolution (coreference) systems designed for the CONLL 2012 dataset
typically cannot handle key aspects of the full anaphora resolution task such
as the identification of singletons and of certain types of non-referring
expressions (e.g., expletives), as these aspects are not annotated in that
corpus. However, the recently released dataset for the CRAC 2018 Shared Task
can now be used for that purpose. In this paper, we introduce an architecture
to simultaneously identify non-referring expressions (including expletives,
predicatives, and other types) and build coreference chains, including
singletons. Our cluster-ranking system uses an attention mechanism to determine
the relative importance of the mentions in the same cluster. Additional
classifiers are used to identify singletons and non-referring markables. Our
contributions are as follows. First of all, we report the first result on the CRAC
data using system mentions; our result is 5.8% better than the shared task
baseline system, which used gold mentions. Second, we demonstrate that the
availability of singleton clusters and non-referring expressions can lead to
substantially improved performance on non-singleton clusters as well. Third, we
show that despite our model not being designed specifically for the CONLL data,
it achieves a score equivalent to that of the state-of-the-art system by Kantor
and Globerson (2019) on that dataset.
| 2,020 | Computation and Language |
Automatically Generating Macro Research Reports from a Piece of News | Automatically generating macro research reports from economic news is an
important yet challenging task. Macro analysts are required to write such
reports within a short period of time after important economic news is
released. This motivates our work, i.e., using AI techniques to reduce manual
effort. The goal of the proposed system is to generate macro
research reports as the draft for macro analysts. Essentially, the core
challenge is the long text generation issue. To address this issue, we propose
a novel deep learning based approach that includes two components, i.e.,
outline generation and macro research report generation. For model performance
evaluation, we first crawl a large news-to-report dataset and then evaluate
our approach on it; the generated reports are also provided for subjective
evaluation.
| 2,019 | Computation and Language |
Learning Hierarchical Discrete Linguistic Units from Visually-Grounded
Speech | In this paper, we present a method for learning discrete linguistic units by
incorporating vector quantization layers into neural models of visually
grounded speech. We show that our method is capable of capturing both
word-level and sub-word units, depending on how it is configured. What
differentiates this paper from prior work on speech unit learning is the choice
of training objective. Rather than using a reconstruction-based loss, we use a
discriminative, multimodal grounding objective which forces the learned units
to be useful for semantic image retrieval. We evaluate the sub-word units on
the ZeroSpeech 2019 challenge, achieving a 27.3\% reduction in ABX error rate
over the top-performing submission, while keeping the bitrate approximately the
same. We also present experiments demonstrating the noise robustness of these
units. Finally, we show that a model with multiple quantizers can
simultaneously learn phone-like detectors at a lower layer and word-like
detectors at a higher layer. We show that these detectors are highly accurate,
discovering 279 words with an F1 score of greater than 0.5.
| 2,020 | Computation and Language |
Temporal Reasoning via Audio Question Answering | Multimodal question answering tasks can be used as proxy tasks to study
systems that can perceive and reason about the world. Answering questions about
different types of input modalities stresses different aspects of reasoning
such as visual reasoning, reading comprehension, story understanding, or
navigation. In this paper, we use the task of Audio Question Answering (AQA) to
study the temporal reasoning abilities of machine learning models. To this end,
we introduce the Diagnostic Audio Question Answering (DAQA) dataset comprising
audio sequences of natural sound events and programmatically generated
questions and answers that probe various aspects of temporal reasoning. We
adapt several recent state-of-the-art methods for visual question answering to
the AQA task, and use DAQA to demonstrate that they perform poorly on questions
that require in-depth temporal reasoning. Finally, we propose a new model,
Multiple Auxiliary Controllers for Linear Modulation (MALiMo) that extends the
recent Feature-wise Linear Modulation (FiLM) model and significantly improves
its temporal reasoning capabilities. We envisage DAQA fostering research on AQA
and temporal reasoning, and MALiMo as a step towards models for AQA.
| 2,019 | Computation and Language |
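Since MALiMo extends Feature-wise Linear Modulation (FiLM), a minimal FiLM sketch may help: an auxiliary controller (here the question embedding) predicts a per-channel scale and shift applied to the audio feature map. The shapes and the single linear controller are assumptions for illustration, not the paper's architecture:

    import numpy as np

    rng = np.random.default_rng(0)
    num_channels, time_steps, q_dim = 8, 20, 16

    audio_features = rng.normal(size=(num_channels, time_steps))  # conv feature map
    question_embedding = rng.normal(size=q_dim)

    # Controller: one linear map producing gamma and beta for every channel.
    W = rng.normal(size=(2 * num_channels, q_dim)) * 0.1
    gamma, beta = np.split(W @ question_embedding, 2)

    # FiLM: scale and shift each channel, conditioned on the question.
    modulated = gamma[:, None] * audio_features + beta[:, None]
    print(modulated.shape)  # (8, 20)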
Paraphrasing with Large Language Models | Recently, large language models such as GPT-2 have shown themselves to be
extremely adept at text generation and have also been able to achieve
high-quality results in many downstream NLP tasks such as text classification,
sentiment analysis and question answering with the aid of fine-tuning. We
present a useful technique for using a large language model to perform the task
of paraphrasing on a variety of texts and subjects. Our approach is
demonstrated to be capable of generating paraphrases not only at a sentence
level but also for longer spans of text such as paragraphs without needing to
break the text into smaller chunks.
| 2,019 | Computation and Language |
Automatically Neutralizing Subjective Bias in Text | Texts like news, encyclopedias, and some social media strive for objectivity.
Yet bias in the form of inappropriate subjectivity - introducing attitudes via
framing, presupposing truth, and casting doubt - remains ubiquitous. This kind
of bias erodes our collective trust and fuels social conflict. To address this
issue, we introduce a novel testbed for natural language generation:
automatically bringing inappropriately subjective text into a neutral point of
view ("neutralizing" biased text). We also offer the first parallel corpus of
biased language. The corpus contains 180,000 sentence pairs and originates from
Wikipedia edits that removed various framings, presuppositions, and attitudes
from biased sentences. Last, we propose two strong encoder-decoder baselines
for the task. A straightforward yet opaque CONCURRENT system uses a BERT
encoder to identify subjective words as part of the generation process. An
interpretable and controllable MODULAR algorithm separates these steps, using
(1) a BERT-based classifier to identify problematic words and (2) a novel join
embedding through which the classifier can edit the hidden states of the
encoder. Large-scale human evaluation across four domains (encyclopedias, news
headlines, books, and political speeches) suggests that these algorithms are a
first step towards the automatic identification and reduction of bias.
| 2,019 | Computation and Language |
Improving Conditioning in Context-Aware Sequence to Sequence Models | Neural sequence to sequence models are well established for applications
which can be cast as mapping a single input sequence into a single output
sequence. In this work, we focus on cases where generation is conditioned on
both a short query and a long context, such as abstractive question answering
or document-level translation. We modify the standard sequence-to-sequence
approach to make better use of both the query and the context by expanding the
conditioning mechanism to intertwine query and context attention. We also
introduce a simple and efficient data augmentation method for the proposed
model. Experiments on three different tasks show that both changes lead to
consistent improvements.
| 2,019 | Computation and Language |
Speech Sentiment Analysis via Pre-trained Features from End-to-end ASR
Models | In this paper, we propose to use pre-trained features from end-to-end ASR
models to solve speech sentiment analysis as a down-stream task. We show that
end-to-end ASR features, which integrate both acoustic and text information
from speech, achieve promising results. We use RNN with self-attention as the
sentiment classifier, which also provides an easy visualization through
attention weights to help interpret model predictions. We use the
well-benchmarked IEMOCAP dataset and a new large-scale speech sentiment
dataset, SWBD-sentiment, for evaluation. Our approach improves the
state-of-the-art accuracy on IEMOCAP
from 66.6% to 71.7%, and achieves an accuracy of 70.10% on SWBD-sentiment with
more than 49,500 utterances.
| 2,020 | Computation and Language |
LATTE: Latent Type Modeling for Biomedical Entity Linking | Entity linking is the task of linking mentions of named entities in natural
language text, to entities in a curated knowledge-base. This is of significant
importance in the biomedical domain, where it could be used to semantically
annotate a large volume of clinical records and biomedical literature, to
standardized concepts described in an ontology such as Unified Medical Language
System (UMLS). We observe that with precise type information, entity
disambiguation becomes a straightforward task. However, fine-grained type
information is usually not available in the biomedical domain. Thus, we propose
LATTE, a LATent Type Entity Linking model, that improves entity linking by
modeling the latent fine-grained type information about mentions and entities.
Unlike previous methods that perform entity linking directly between the
mentions and the entities, LATTE jointly does entity disambiguation, and latent
fine-grained type learning, without direct supervision. We evaluate our model
on two biomedical datasets: MedMentions, a large scale public dataset annotated
with UMLS concepts, and a de-identified corpus of dictated doctor's notes that
has been annotated with ICD concepts. Extensive experimental evaluation shows
our model achieves significant performance improvements over several
state-of-the-art techniques.
| 2,020 | Computation and Language |
Are Noisy Sentences Useless for Distant Supervised Relation Extraction? | The noisy labeling problem has been one of the major obstacles for distant
supervised relation extraction. Existing approaches usually consider that the
noisy sentences are useless and will harm the model's performance. Therefore,
they mainly alleviate this problem by reducing the influence of noisy
sentences, such as applying bag-level selective attention or removing noisy
sentences from sentence-bags. However, the underlying cause of the noisy
labeling problem is not the lack of useful information, but the missing
relation labels. Intuitively, if we can allocate credible labels for noisy
sentences, they will be transformed into useful training data and benefit the
model's performance. Thus, in this paper, we propose a novel method for distant
supervised relation extraction, which employs unsupervised deep clustering to
generate reliable labels for noisy sentences. Specifically, our model contains
three modules: a sentence encoder, a noise detector and a label generator. The
sentence encoder is used to obtain feature representations. The noise detector
detects noisy sentences from sentence-bags, and the label generator produces
high-confidence relation labels for noisy sentences. Extensive experimental
results demonstrate that our model outperforms the state-of-the-art baselines
on a popular benchmark dataset, and can indeed alleviate the noisy labeling
problem.
| 2,019 | Computation and Language |
Learning Multi-level Dependencies for Robust Word Recognition | Robust language processing systems are becoming increasingly important given
the recent awareness of dangerous situations where brittle machine learning
models can be easily broken in the presence of noise. In this paper, we
introduce a robust word recognition framework that captures multi-level
sequential dependencies in noised sentences. The proposed framework employs a
sequence-to-sequence model over characters of each word, whose output is given
to a word-level bi-directional recurrent neural network. We conduct extensive
experiments to verify the effectiveness of the framework. The results show that
the proposed framework outperforms state-of-the-art methods by a large margin
and they also suggest that character-level dependencies can play an important
role in word recognition.
| 2,019 | Computation and Language |
Joint Learning of Answer Selection and Answer Summary Generation in
Community Question Answering | Community question answering (CQA) has recently gained increasing popularity
in both academia and industry. However, the redundancy and lengthiness issues
of crowdsourced answers limit the performance of answer selection and lead to
reading difficulties and misunderstandings for community users. To solve these
problems, we tackle the tasks of answer selection and answer summary generation
in CQA with a novel joint learning model. Specifically, we design a
question-driven pointer-generator network, which exploits the correlation
information between question-answer pairs to aid in attending the essential
information when generating answer summaries. Meanwhile, we leverage the answer
summaries to alleviate noise in original lengthy answers when ranking the
relevancy degrees of question-answer pairs. In addition, we construct a new
large-scale CQA corpus, WikiHowQA, which contains long answers for answer
selection as well as reference summaries for answer summarization. The
experimental results show that the joint learning method can effectively
address the answer redundancy issue in CQA and achieves state-of-the-art
results on both answer selection and text summarization tasks. Furthermore, the
proposed model is shown to have great transferability and applicability for
resource-poor CQA tasks, which lack reference answer summaries.
| 2,019 | Computation and Language |
Zero-Resource Cross-Lingual Named Entity Recognition | Recently, neural methods have achieved state-of-the-art (SOTA) results in
Named Entity Recognition (NER) tasks for many languages without the need for
manually crafted features. However, these models still require manually
annotated training data, which is not available for many languages. In this
paper, we propose an unsupervised cross-lingual NER model that can transfer NER
knowledge from one language to another in a completely unsupervised way without
relying on any bilingual dictionary or parallel data. Our model achieves this
through word-level adversarial learning and augmented fine-tuning with
parameter sharing and feature augmentation. Experiments on five different
languages demonstrate the effectiveness of our approach, outperforming existing
models by a good margin and setting a new SOTA for each language pair.
| 2,020 | Computation and Language |
Weakly-Supervised Opinion Summarization by Leveraging External
Information | Opinion summarization from online product reviews is a challenging task,
which involves identifying opinions related to various aspects of the product
being reviewed. While previous works require additional human effort to
identify relevant aspects, we instead apply domain knowledge from external
sources to automatically achieve the same goal. This work proposes AspMem, a
generative method that contains an array of memory cells to store
aspect-related knowledge. This explicit memory can help obtain a better opinion
representation and infer the aspect information more precisely. We evaluate
this method on both aspect identification and opinion summarization tasks. Our
experiments show that AspMem outperforms the state-of-the-art methods even
though, unlike the baselines, it does not rely on human supervision which is
carefully handcrafted for the given tasks.
| 2,019 | Computation and Language |
A Discrete CVAE for Response Generation on Short-Text Conversation | Neural conversation models such as encoder-decoder models tend to
generate bland and generic responses. Some researchers propose to use the
conditional variational autoencoder (CVAE), which maximizes the lower bound on
the conditional log-likelihood on a continuous latent variable. With different
sampled latent variables, the model is expected to generate diverse responses.
Although the CVAE-based models have shown tremendous potential, their
improvement of generating high-quality responses is still unsatisfactory. In
this paper, we introduce a discrete latent variable with an explicit semantic
meaning to improve the CVAE on short-text conversation. A major advantage of
our model is that we can exploit the semantic distance between the latent
variables to maintain good diversity between the sampled latent variables.
Accordingly, we propose a two-stage sampling approach to enable efficient
diverse variable selection from a large latent space assumed in the short-text
conversation task. Experimental results indicate that our model outperforms
various kinds of generation models under both automatic and human evaluations
and generates more diverse and informative responses.
| 2,019 | Computation and Language |
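A toy sketch of the two-stage sampling idea, under the assumption that it amounts to drawing a candidate pool from the large latent space and then keeping only candidates that are semantically far apart; the pool size, threshold, and random latent table are illustrative, not the paper's settings:

    import numpy as np

    def two_stage_sample(latent_embeddings, pool_size=50, k=5, seed=0):
        rng = np.random.default_rng(seed)
        pool = rng.choice(len(latent_embeddings), size=pool_size, replace=False)
        normed = latent_embeddings / np.linalg.norm(latent_embeddings, axis=1, keepdims=True)
        selected = [pool[0]]
        for idx in pool[1:]:
            if len(selected) == k:
                break
            # Keep a candidate only if it is far (low cosine similarity) from all chosen ones.
            if np.max(normed[idx] @ normed[selected].T) < 0.5:
                selected.append(idx)
        return selected

    latents = np.random.default_rng(1).normal(size=(1000, 64))  # toy latent table
    print(two_stage_sample(latents))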
Neuron Interaction Based Representation Composition for Neural Machine
Translation | Recent NLP studies reveal that substantial linguistic information can be
attributed to single neurons, i.e., individual dimensions of the representation
vectors. We hypothesize that modeling strong interactions among neurons helps
to better capture complex information by composing the linguistic properties
embedded in individual neurons. Starting from this intuition, we propose a
novel approach to compose representations learned by different components in
neural machine translation (e.g., multi-layer networks or multi-head
attention), based on modeling strong interactions among neurons in the
representation vectors. Specifically, we leverage bilinear pooling to model
pairwise multiplicative interactions among individual neurons, and a low-rank
approximation to make the model computationally feasible. We further propose
extended bilinear pooling to incorporate first-order representations.
Experiments on WMT14 English-German and English-French translation tasks show
that our model consistently improves performance over the SOTA Transformer
baseline. Further analyses demonstrate that our approach indeed captures more
syntactic and semantic information as expected.
| 2,019 | Computation and Language |
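A compact sketch of low-rank bilinear pooling over two representation vectors (e.g., outputs of different layers or attention heads); the shapes, scaling, and output projection are illustrative assumptions rather than the paper's exact formulation:

    import numpy as np

    rng = np.random.default_rng(0)
    d, rank = 512, 32

    x = rng.normal(size=d)  # representation from component 1
    y = rng.normal(size=d)  # representation from component 2

    U = rng.normal(size=(d, rank)) * 0.05  # low-rank projection for x
    V = rng.normal(size=(d, rank)) * 0.05  # low-rank projection for y
    P = rng.normal(size=(d, rank)) * 0.05  # projection back to the model dimension

    # Full bilinear pooling needs a cubic number of parameters; the low-rank form
    # models pairwise multiplicative neuron interactions via an elementwise product.
    z = P @ ((U.T @ x) * (V.T @ y))
    print(z.shape)  # (512,)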
Classifying Vietnamese Disease Outbreak Reports with Important Sentences
and Rich Features | Text classification has been an important field of research since the
mid-1990s. It has many applications; one of them is in Web-based
biosurveillance systems that identify and summarize online disease outbreak
reports. In this paper we
focus on classifying Vietnamese disease outbreak reports. We investigate
important properties of disease outbreak reports, e.g., sentences containing
names of the outbreak disease and locations. Evaluation with 10-time 10-fold
cross-validation using the Support Vector Machine algorithm shows that using
sentences containing disease outbreak names together with their
preceding/following sentences, in combination with location features, achieves
the best F-score of 86.67% - an improvement of 0.38% over using all raw text.
Our results suggest that using important sentences and rich features can
improve the
performance of Vietnamese disease outbreak text classification.
| 2,012 | Computation and Language |
Effective Modeling of Encoder-Decoder Architecture for Joint Entity and
Relation Extraction | A relation tuple consists of two entities and the relation between them, and
often such tuples are found in unstructured text. There may be multiple
relation tuples present in a text and they may share one or both entities among
them. Extracting such relation tuples from a sentence is a difficult task and
sharing of entities or overlapping entities among the tuples makes it more
challenging. Most prior work adopted a pipeline approach where entities were
identified first followed by finding the relations among them, thus missing the
interaction among the relation tuples in a sentence. In this paper, we propose
two approaches to use encoder-decoder architecture for jointly extracting
entities and relations. In the first approach, we propose a representation
scheme for relation tuples which enables the decoder to generate one word at a
time like machine translation models and still finds all the tuples present in
a sentence with full entity names of different lengths and with overlapping
entities. Next, we propose a pointer network-based decoding approach where an
entire tuple is generated at every time step. Experiments on the publicly
available New York Times corpus show that our proposed approaches outperform
previous work and achieve significantly higher F1 scores.
| 2,019 | Computation and Language |
Continual adaptation for efficient machine communication | To communicate with new partners in new contexts, humans rapidly form new
linguistic conventions. Recent neural language models are able to comprehend
and produce the existing conventions present in their training data, but are
not able to flexibly and interactively adapt those conventions on the fly as
humans do. We introduce an interactive repeated reference task as a benchmark
for models of adaptation in communication and propose a regularized continual
learning framework that allows an artificial agent initialized with a generic
language model to more accurately and efficiently communicate with a partner
over time. We evaluate this framework through simulations on COCO and in
real-time reference game experiments with human partners.
| 2,020 | Computation and Language |
Go From the General to the Particular: Multi-Domain Translation with
Domain Transformation Networks | The key challenge of multi-domain translation lies in simultaneously encoding
both the general knowledge shared across domains and the particular knowledge
distinctive to each domain in a unified model. Previous work shows that the
standard neural machine translation (NMT) model, trained on mixed-domain data,
generally captures the general knowledge, but misses the domain-specific
knowledge. In response to this problem, we augment NMT model with additional
domain transformation networks to transform the general representations to
domain-specific representations, which are subsequently fed to the NMT decoder.
To guarantee the knowledge transformation, we also propose two complementary
supervision signals by leveraging the power of knowledge distillation and
adversarial learning. Experimental results on several language pairs, covering
both balanced and unbalanced multi-domain translation, demonstrate the
effectiveness and universality of the proposed approach. Encouragingly, the
proposed unified model achieves comparable results with the fine-tuning
approach that requires multiple models to preserve the particular knowledge.
Further analyses reveal that the domain transformation networks successfully
capture the domain-specific knowledge as expected.
| 2,019 | Computation and Language |
Resource production of written forms of Sign Languages by a
user-centered editor, SWift (SignWriting improved fast transcriber) | The SignWriting improved fast transcriber (SWift), presented in this paper,
is an advanced editor for computer-aided writing and transcribing of any Sign
Language (SL) using SignWriting (SW). The application is an editor which allows
composing and saving desired signs using the SW elementary components, called
"glyphs". These make up a sort of alphabet, which does not depend on the
national Sign Language and which codes the basic components of any sign. The
user is guided through a fully-automated procedure, making the composition
process fast and intuitive. SWift pursues the goal of helping to break down the
"electronic barriers" that keep deaf people away from the web, and at the same
time to support linguistic research about Sign Languages features. For this
reason it has been designed with special attention to deaf users' needs and to
general usability issues. The editor has been developed in a modular way, so it
can be integrated wherever the use of SW as an alternative to written
"verbal" language may be advisable.
| 2,012 | Computation and Language |
The JDDC Corpus: A Large-Scale Multi-Turn Chinese Dialogue Dataset for
E-commerce Customer Service | Human conversations are complicated and building a human-like dialogue agent
is an extremely challenging task. With the rapid development of deep learning
techniques, data-driven models become more and more prevalent which need a huge
amount of real conversation data. In this paper, we construct a large-scale
real scenario Chinese E-commerce conversation corpus, JDDC, with more than 1
million multi-turn dialogues, 20 million utterances, and 150 million words. The
dataset reflects several characteristics of human-human conversations, e.g.,
goal-driven, and long-term dependency among the context. It also covers various
dialogue types including task-oriented, chitchat and question-answering. Extra
intent information and three well-annotated challenge sets are also provided.
Then, we evaluate several retrieval-based and generative models to provide
basic benchmark performance on the JDDC corpus. And we hope JDDC can serve as
an effective testbed and benefit the development of fundamental research in
dialogue tasks.
| 2,020 | Computation and Language |
Anaphora Resolution in Dialogue Systems for South Asian Languages | Anaphora resolution is a challenging task which has been the interest of NLP
researchers for a long time. Traditional resolution techniques like eliminative
constraints and weighted preferences were successful in many languages.
However, they are ineffective in free word order languages like most South
Asian languages. Heuristic and rule-based techniques were typical in these
languages, which are constrained to context and domain. In this paper, we
venture a new strategy using neural networks for resolving anaphora in
human-human dialogues. The architecture chiefly consists of three components:
a shallow parser for extracting features, a feature vector generator which
produces the word embeddings, and a neural network model which predicts the
antecedent mention of an anaphor. The system has been trained and tested on a
Telugu conversation corpus we generated. Given the advantage of the semantic
information in word embeddings and appending actor, gender, number, person and
part of plural features, the model has reached an F1-score of 86.
| 2,019 | Computation and Language |
Multilingual Culture-Independent Word Analogy Datasets | In text processing, deep neural networks mostly use word embeddings as an
input. Embeddings have to ensure that relations between words are reflected
through distances in a high-dimensional numeric space. To compare the quality
of different text embeddings, typically, we use benchmark datasets. We present
a collection of such datasets for the word analogy task in nine languages:
Croatian, English, Estonian, Finnish, Latvian, Lithuanian, Russian, Slovenian,
and Swedish. We redesigned the original monolingual analogy task to be much
more culturally independent and also constructed cross-lingual analogy datasets
for the involved languages. We present basic statistics of the created datasets
and their initial evaluation using fastText embeddings.
| 2,020 | Computation and Language |
High Quality ELMo Embeddings for Seven Less-Resourced Languages | Recent results show that deep neural networks using contextual embeddings
significantly outperform non-contextual embeddings on a majority of text
classification tasks. We offer precomputed embeddings from the popular contextual
ELMo model for seven languages: Croatian, Estonian, Finnish, Latvian,
Lithuanian, Slovenian, and Swedish. We demonstrate that the quality of
embeddings strongly depends on the size of the training set and show that the
existing publicly available ELMo embeddings for the listed languages can be
improved. We
train new ELMo embeddings on much larger training sets and show their advantage
over baseline non-contextual FastText embeddings. In evaluation, we use two
benchmarks, the analogy task and the NER task.
| 2,020 | Computation and Language |
Injecting Prior Knowledge into Image Caption Generation | Automatically generating natural language descriptions from an image is a
challenging problem in artificial intelligence that requires a good
understanding of the visual and textual signals and the correlations between
them. The state-of-the-art methods in image captioning struggle to approach
human level performance, especially when data is limited. In this paper, we
propose to improve the performance of the state-of-the-art image captioning
models by incorporating two sources of prior knowledge: (i) a conditional
latent topic attention, that uses a set of latent variables (topics) as an
anchor to generate highly probable words and, (ii) a regularization technique
that exploits the inductive biases in syntactic and semantic structure of
captions and improves the generalization of image captioning models. Our
experiments validate that our method produces more human interpretable captions
and also leads to significant improvements on the MSCOCO dataset in both the
full and low data regimes.
| 2,020 | Computation and Language |
TPsgtR: Neural-Symbolic Tensor Product Scene-Graph-Triplet
Representation for Image Captioning | Image captioning can be improved if the structure of the graphical
representations can be formulated with conceptual positional binding. In this
work, we have introduced a novel technique for caption generation using the
neural-symbolic encoding of the scene-graphs, derived from regional visual
information of the images and we call it Tensor Product Scene-Graph-Triplet
Representation (TP$_{sgt}$R). While most previous works concentrated on
identification of the object features in images, we introduce a neuro-symbolic
embedding that can embed identified relationships among different regions of
the image into concrete forms, instead of relying on the model to compose for
any/all combinations. These neuro-symbolic representations help to better
define the neuro-symbolic space for neuro-symbolic attention and can be
transformed into better captions. With this approach, we introduce two novel
architectures (TP$_{sgt}$R-TDBU and TP$_{sgt}$R-sTDBU) for comparison, and
experimental results demonstrate that our approaches outperform the other
models and that the generated captions are more comprehensive and natural.
| 2,019 | Computation and Language |
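A toy sketch of tensor-product binding for a single scene-graph triplet (subject, relation, object), which is the general mechanism a Tensor Product Representation builds on; the role and filler vectors are random stand-ins, not the paper's learned embeddings:

    import numpy as np

    rng = np.random.default_rng(0)
    d_role, d_filler = 3, 16

    roles = np.eye(d_role)  # orthonormal role vectors for subject / relation / object
    fillers = {name: rng.normal(size=d_filler) for name in ["dog", "chasing", "ball"]}

    # Binding: sum of outer products role_i (x) filler_i.
    tpr = sum(np.outer(roles[i], f) for i, f in enumerate(fillers.values()))

    # Unbinding: multiplying by a role vector recovers its filler exactly,
    # because the role vectors are orthonormal.
    recovered_relation = roles[1] @ tpr
    print(np.allclose(recovered_relation, fillers["chasing"]))  # True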
CRUR: Coupled-Recurrent Unit for Unification, Conceptualization and
Context Capture for Language Representation -- A Generalization of Bi
Directional LSTM | In this work we have analyzed a novel concept of sequential binding based
learning capable network based on the coupling of recurrent units with Bayesian
prior definition. The coupling structure encodes to generate efficient tensor
representations that can be decoded to generate efficient sentences and can
describe certain events. These descriptions are derived from structural
representations of visual features of images and media. An elaborated study of
the different types of coupling recurrent structures are studied and some
insights of their performance are provided. Supervised learning performance for
natural language processing is judged based on statistical evaluations,
however, the truth is perspective, and in this case the qualitative evaluations
reveal the real capability of the different architectural strengths and
variations. Bayesian prior definition of different embedding helps in better
characterization of the sentences based on the natural language structure
related to parts of speech and other semantic level categorization in a form
which is machine interpret-able and inherits the characteristics of the Tensor
Representation binding and unbinding based on the mutually orthogonality. Our
approach has surpassed some of the existing basic works related to image
captioning.
| 2,019 | Computation and Language |
Topical Phrase Extraction from Clinical Reports by Incorporating both
Local and Global Context | Making sense of words often requires simultaneously examining the
surrounding context of a term as well as the global themes characterizing the
overall corpus. Several topic models have already exploited word embeddings to
recognize local context; however, it has been only weakly combined with the
global context during topic inference. This paper proposes to extract topical
phrases corroborating the word embedding information with the global context
detected by Latent Semantic Analysis, and then combine them by means of the
P\'{o}lya urn model. To highlight the effectiveness of this combined approach
the model was assessed analyzing clinical reports, a challenging scenario
characterized by technical jargon and limited available word statistics.
Results show it outperforms the state-of-the-art approaches in terms of both
topic coherence and computational cost.
| 2,019 | Computation and Language |
Interactive Text Ranking with Bayesian Optimisation: A Case Study on
Community QA and Summarisation | For many NLP applications, such as question answering and summarisation, the
goal is to select the best solution from a large space of candidates to meet a
particular user's needs. To address the lack of user-specific training data, we
propose an interactive text ranking approach that actively selects pairs of
candidates, from which the user selects the best. Unlike previous strategies,
which attempt to learn a ranking across the whole candidate space, our method
employs Bayesian optimisation to focus the user's labelling effort on high
quality candidates and integrates prior knowledge in a Bayesian manner to cope
better with small data scenarios. We apply our method to community question
answering (cQA) and extractive summarisation, finding that it significantly
outperforms existing interactive approaches. We also show that the ranking
function learned by our method is an effective reward function for
reinforcement learning, which improves the state of the art for interactive
summarisation.
| 2,020 | Computation and Language |
Improving N-gram Language Models with Pre-trained Deep Transformer | Although n-gram language models (LMs) have been outperformed by the
state-of-the-art neural LMs, they are still widely used in speech recognition
due to their high efficiency in inference. In this paper, we demonstrate that
n-gram LM can be improved by neural LMs through a text generation based data
augmentation method. In contrast to previous approaches, we employ a
large-scale general domain pre-training followed by in-domain fine-tuning
strategy to construct deep Transformer based neural LMs. A large amount of
in-domain text data is generated with the well-trained deep Transformer to
construct new n-gram LMs, which are then interpolated with baseline n-gram
systems. Empirical studies on different speech recognition tasks show that the
proposed approach can effectively improve recognition accuracy. In particular,
our proposed approach brings significant relative word error rate reduction up
to 6.0% for domains with limited in-domain data.
| 2,019 | Computation and Language |
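A minimal sketch of the interpolation step described above: combine the baseline n-gram estimate with an n-gram estimate built from Transformer-generated in-domain text using a single interpolation weight. The toy probabilities and the weight are assumptions for illustration only:

    def interpolate(p_baseline, p_augmented, lam=0.5):
        """Linear interpolation of two n-gram probability estimates."""
        return lam * p_baseline + (1.0 - lam) * p_augmented

    # P(w | history) estimated from the baseline corpus vs. from generated text.
    p_base = {"network": 0.02, "model": 0.05}
    p_aug = {"network": 0.08, "model": 0.03}

    for word in p_base:
        print(word, round(interpolate(p_base[word], p_aug[word], lam=0.6), 4))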
Discourse Level Factors for Sentence Deletion in Text Simplification | This paper presents a data-driven study focusing on analyzing and predicting
sentence deletion -- a prevalent but understudied phenomenon in document
simplification -- on a large English text simplification corpus. We inspect
various document and discourse factors associated with sentence deletion, using
a new manually annotated sentence alignment corpus we collected. We reveal that
professional editors utilize different strategies to meet readability standards
of elementary and middle schools. To predict whether a sentence will be deleted
during simplification to a certain level, we harness automatically aligned data
to train a classification model. Evaluated on our manually annotated data, our
best models reached F1 scores of 65.2 and 59.7 for this task at the levels of
elementary and middle school, respectively. We find that discourse level
factors contribute to the challenging task of predicting sentence deletion for
simplification.
| 2,020 | Computation and Language |
Joint Parsing and Generation for Abstractive Summarization | Sentences produced by abstractive summarization systems can be ungrammatical
and fail to preserve the original meanings, despite being locally fluent. In
this paper we propose to remedy this problem by jointly generating a sentence
and its syntactic dependency parse while performing abstraction. If generating
a word can introduce an erroneous relation to the summary, the behavior must be
discouraged. The proposed method thus holds promise for producing grammatical
sentences and encouraging the summary to stay true to the original. The
contributions of this work are twofold. First, we present a novel neural
architecture for abstractive summarization that combines a sequential decoder
with a tree-based decoder in a synchronized manner to generate a summary
sentence and its syntactic parse. Secondly, we describe a novel human
evaluation protocol to assess if, and to what extent, a summary remains true to
its original meanings. We evaluate our method on a number of summarization
datasets and demonstrate competitive results against strong baselines.
| 2,019 | Computation and Language |
Controlling the Amount of Verbatim Copying in Abstractive Summarization | An abstract must not change the meaning of the original text. The single most
effective way to achieve that is to increase the amount of copying while still
allowing for text abstraction. Human editors can usually exercise control over
copying, resulting in summaries that are more extractive than abstractive, or
vice versa. However, it remains poorly understood whether modern neural
abstractive summarizers can provide the same flexibility, i.e., learning from
single reference summaries to generate multiple summary hypotheses with varying
degrees of copying. In this paper, we present a neural summarization model
that, by learning from single human abstracts, can produce a broad spectrum of
summaries ranging from purely extractive to highly generative ones. We frame
the task of summarization as language modeling and exploit alternative
mechanisms to generate summary hypotheses. Our method allows for control over
copying during both training and decoding stages of a neural summarization
model. Through extensive experiments we illustrate the significance of our
proposed method on controlling the amount of verbatim copying and achieve
competitive results over strong baselines. Our analysis further reveals
interesting and non-obvious facts.
| 2,019 | Computation and Language |
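One simple way (an assumed metric, not necessarily the paper's) to quantify verbatim copying is the fraction of summary n-grams that also occur in the source:

    def copy_rate(summary_tokens, source_tokens, n=2):
        """Share of summary n-grams that appear verbatim in the source."""
        summary_ngrams = {tuple(summary_tokens[i:i + n])
                          for i in range(len(summary_tokens) - n + 1)}
        source_ngrams = {tuple(source_tokens[i:i + n])
                         for i in range(len(source_tokens) - n + 1)}
        return len(summary_ngrams & source_ngrams) / len(summary_ngrams) if summary_ngrams else 0.0

    source = "the company reported strong quarterly earnings on tuesday".split()
    extractive = "the company reported strong quarterly earnings".split()
    abstractive = "earnings rose sharply this quarter".split()
    print(copy_rate(extractive, source), copy_rate(abstractive, source))  # 1.0 vs 0.0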
When is ACL's Deadline? A Scientific Conversational Agent | Our conversational agent UKP-ATHENA assists NLP researchers in finding and
exploring scientific literature, identifying relevant authors, planning or
post-processing conference visits, and preparing paper submissions using a
unified interface based on natural language inputs and responses. UKP-ATHENA
enables new access paths to our swiftly evolving research area with its massive
amounts of scientific information and high turnaround times. UKP-ATHENA's
responses connect information from multiple heterogeneous sources which
researchers currently have to explore manually one after another. Unlike a
search engine, UKP-ATHENA maintains the context of a conversation to allow for
efficient information access on papers, researchers, and conferences. Our
architecture consists of multiple components with reference implementations
that can be easily extended by new skills and domains. Our user-based
evaluation shows that UKP-ATHENA already responds to 45% of different
formulations of defined intents with a 37% information coverage rate.
| 2,019 | Computation and Language |
A Transformer-based approach to Irony and Sarcasm detection | Figurative Language (FL) seems ubiquitous in all social-media discussion
forums and chats, posing extra challenges to sentiment analysis endeavors.
Identification of FL schemas in short texts remains largely an unresolved issue
in the broader field of Natural Language Processing (NLP), mainly due to their
contradictory and metaphorical meaning content. The main FL expression forms
are sarcasm, irony and metaphor. In the present paper we employ advanced Deep
Learning (DL) methodologies to tackle the problem of identifying the
aforementioned FL forms. Significantly extending our previous work [71], we
propose a neural network methodology that builds on a recently proposed
pre-trained transformer-based network architecture, which is further enhanced
with a recurrent convolutional neural network (RCNN). With this set-up, data
preprocessing is kept to a minimum. The
performance of the devised hybrid neural architecture is tested on four
benchmark datasets, and contrasted with other relevant state-of-the-art
methodologies and systems. Results demonstrate that the proposed methodology
achieves state-of-the-art performance on all benchmark datasets,
outperforming, even by a large margin, all other methodologies and published
studies.
| 2,020 | Computation and Language |
SemEval-2013 Task 4: Free Paraphrases of Noun Compounds | In this paper, we describe SemEval-2013 Task 4: the definition, the data, the
evaluation and the results. The task is to capture some of the meaning of
English noun compounds via paraphrasing. Given a two-word noun compound, the
participating system is asked to produce an explicitly ranked list of its
free-form paraphrases. The list is automatically compared and evaluated against
a similarly ranked list of paraphrases proposed by human annotators, recruited
and managed through Amazon's Mechanical Turk. The comparison of raw paraphrases
is sensitive to syntactic and morphological variation. The "gold" ranking is
based on the relative popularity of paraphrases among annotators. To make the
ranking more reliable, highly similar paraphrases are grouped, so as to
downplay superficial differences in syntax and morphology. Three systems
participated in the task. They all beat a simple baseline on one of the two
evaluation measures, but not on both measures. This shows that the task is
difficult.
| 2,013 | Computation and Language |
SemEval-2010 Task 8: Multi-Way Classification of Semantic Relations
Between Pairs of Nominals | In response to the continuing research interest in computational semantic
analysis, we have proposed a new task for SemEval-2010: multi-way
classification of mutually exclusive semantic relations between pairs of
nominals. The task is designed to compare different approaches to the problem
and to provide a standard testbed for future research. In this paper, we define
the task, describe the creation of the datasets, and discuss the results of the
28 participating systems submitted by 10 teams.
| 2,010 | Computation and Language |
ScienceExamCER: A High-Density Fine-Grained Science-Domain Corpus for
Common Entity Recognition | Named entity recognition identifies common classes of entities in text, but
these entity labels are generally sparse, limiting utility to downstream tasks.
In this work we present ScienceExamCER, a densely-labeled semantic
classification corpus of 133k mentions in the science exam domain where nearly
all (96%) of content words have been annotated with one or more fine-grained
semantic class labels including taxonomic groups, meronym groups, verb/action
groups, properties and values, and synonyms. Semantic class labels are drawn
from a manually-constructed fine-grained typology of 601 classes generated
through a data-driven analysis of 4,239 science exam questions. We show an
off-the-shelf BERT-based named entity recognition model modified for
multi-label classification achieves an accuracy of 0.85 F1 on this task,
suggesting strong utility for downstream tasks in science domain question
answering requiring densely-labeled semantic classification.
| 2,019 | Computation and Language |
CopyMTL: Copy Mechanism for Joint Extraction of Entities and Relations
with Multi-Task Learning | Joint extraction of entities and relations has received significant attention
due to its potential of providing higher performance for both tasks. Among
existing methods, CopyRE is effective and novel, which uses a
sequence-to-sequence framework and copy mechanism to directly generate the
relation triplets. However, it suffers from two fatal problems. The model is
extremely weak at distinguishing the head and tail entities, resulting in inaccurate
entity extraction. It also cannot predict multi-token entities (e.g.
\textit{Steven Jobs}). To address these problems, we give a detailed analysis
of the reasons behind the inaccurate entity extraction problem, and then
propose a simple but extremely effective model structure to solve this problem.
In addition, we propose a multi-task learning framework equipped with copy
mechanism, called CopyMTL, to allow the model to predict multi-token entities.
Experiments reveal the problems of CopyRE and show that our model achieves
significant improvement over the current state-of-the-art method by 9% in NYT
and 16% in WebNLG (F1 score). Our code is available at
https://github.com/WindChimeRan/CopyMTL
| 2,020 | Computation and Language |
Enhancing Out-Of-Domain Utterance Detection with Data Augmentation Based
on Word Embeddings | For most intelligent assistant systems, it is essential to have a mechanism
that detects out-of-domain (OOD) utterances automatically to handle noisy input
properly. One typical approach would be introducing a separate class that
contains OOD utterance examples combined with in-domain text samples into the
classifier. However, since OOD utterances are usually unseen to the training
datasets, the detection performance largely depends on the quality of the
attached OOD text data with restricted sizes of samples due to computing
limits. In this paper, we study how augmented OOD data based on sampling impacts
OOD utterance detection with a small sample size. We hypothesize that OOD
utterance samples chosen randomly can increase the coverage of unknown OOD
utterance space and enhance detection accuracy if they are more dispersed.
Experiments show that given the same dataset with the same OOD sample size, the
OOD utterance detection performance improves when OOD samples are more
spread-out.
| 2,020 | Computation and Language |
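A small sketch of the dispersion intuition above: among several random OOD subsamples of the same size, prefer the one with the larger mean pairwise distance in embedding space. The embeddings here are random stand-ins for real sentence vectors:

    import numpy as np

    def mean_pairwise_distance(embs):
        dists = np.linalg.norm(embs[:, None, :] - embs[None, :, :], axis=-1)
        n = len(embs)
        return dists.sum() / (n * (n - 1))

    rng = np.random.default_rng(0)
    candidates = rng.normal(size=(200, 32))  # candidate OOD utterance embeddings

    samples = [candidates[rng.choice(200, size=30, replace=False)] for _ in range(5)]
    best = max(samples, key=mean_pairwise_distance)  # the most spread-out subsample
    print(round(mean_pairwise_distance(best), 3))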
Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question
Answering | Answering questions that require multi-hop reasoning at web-scale
necessitates retrieving multiple evidence documents, one of which often has
little lexical or semantic relationship to the question. This paper introduces
a new graph-based recurrent retrieval approach that learns to retrieve
reasoning paths over the Wikipedia graph to answer multi-hop open-domain
questions. Our retriever model trains a recurrent neural network that learns to
sequentially retrieve evidence paragraphs in the reasoning path by conditioning
on the previously retrieved documents. Our reader model ranks the reasoning
paths and extracts the answer span included in the best reasoning path.
Experimental results show state-of-the-art results in three open-domain QA
datasets, showcasing the effectiveness and robustness of our method. Notably,
our method achieves significant improvement in HotpotQA, outperforming the
previous best model by more than 14 points.
| 2,020 | Computation and Language |
Task-Oriented Dialog Systems that Consider Multiple Appropriate
Responses under the Same Context | Conversations have an intrinsic one-to-many property, which means that
multiple responses can be appropriate for the same dialog context. In
task-oriented dialogs, this property leads to different valid dialog policies
towards task completion. However, none of the existing task-oriented dialog
generation approaches takes this property into account. We propose a
Multi-Action Data Augmentation (MADA) framework to utilize the one-to-many
property to generate diverse appropriate dialog responses. Specifically, we
first use dialog states to summarize the dialog history, and then discover all
possible mappings from every dialog state to its different valid system
actions. During dialog system training, we enable the current dialog state to
map to all valid system actions discovered in the previous process to create
additional state-action pairs. By incorporating these additional pairs, the
dialog policy learns a balanced action distribution, which further guides the
dialog model to generate diverse responses. Experimental results show that the
proposed framework consistently improves dialog policy diversity, and results
in improved response diversity and appropriateness. Our model obtains
state-of-the-art results on MultiWOZ.
| 2,019 | Computation and Language |
Causally Denoise Word Embeddings Using Half-Sibling Regression | Distributional representations of words, also known as word vectors, have
become crucial for modern natural language processing tasks due to their wide
applications. Recently, a growing body of word vector postprocessing algorithms
has emerged, aiming to render off-the-shelf word vectors even stronger. In line
with these investigations, we introduce a novel word vector postprocessing
scheme under a causal inference framework. Concretely, the postprocessing
pipeline is realized by Half-Sibling Regression (HSR), which allows us to
identify and remove confounding noise contained in word vectors. Compared to
previous work, our proposed method has the advantages of interpretability and
transparency due to its causal inference grounding. Evaluated on a battery of
standard lexical-level evaluation tasks and downstream sentiment analysis
tasks, our method reaches state-of-the-art performance.
| 2,019 | Computation and Language |
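A compact sketch of Half-Sibling Regression for word vectors, under the assumption that a set of non-content (e.g., stop-word) vectors serves as the siblings: ridge regression predicts each target vector from the siblings, and the predicted, presumably confounding, component is subtracted. Dimensions and the ridge strength are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n_targets, n_siblings, dim = 1000, 50, 100

    targets = rng.normal(size=(n_targets, dim))    # word vectors to denoise
    siblings = rng.normal(size=(n_siblings, dim))  # e.g., stop-word vectors

    # Ridge regression of targets on siblings: W = T S^T (S S^T + a I)^{-1}.
    alpha = 1.0
    S, T = siblings, targets
    Wt = np.linalg.solve(S @ S.T + alpha * np.eye(n_siblings), S @ T.T)
    predicted_noise = Wt.T @ S        # component of each target explained by siblings
    denoised = targets - predicted_noise
    print(denoised.shape)  # (1000, 100)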