Titles | Abstracts | Years | Categories |
---|---|---|---|
Universal Language Model Fine-Tuning with Subword Tokenization for
Polish | Universal Language Model Fine-tuning (ULMFiT) [arXiv:1801.06146] is one
of the first NLP methods for efficient inductive transfer learning.
Unsupervised pretraining results in improvements on many NLP tasks for English.
In this paper, we describe a new method that uses subword tokenization to adapt
ULMFiT to languages with high inflection. Our approach results in a new
state-of-the-art for the Polish language, taking first place in Task 3 of
PolEval'18. After further training, our final model outperformed the second
best model by 35%. We have open-sourced our pretrained models and code.
| 2,018 | Computation and Language |
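A minimal sketch of the kind of subword tokenization the abstract above relies on, assuming the `sentencepiece` library and a hypothetical plain-text corpus file `polish_corpus.txt`; it illustrates the general technique, not the authors' released code.

```python
# Minimal sketch of unigram-style subword tokenization for a highly inflected
# language. Assumes the `sentencepiece` package is installed and that a
# plain-text corpus file `polish_corpus.txt` exists (both are illustrative
# assumptions, not artifacts from the paper).
import sentencepiece as spm

# Train a subword model; vocab_size and model_type are hypothetical choices.
spm.SentencePieceTrainer.train(
    input="polish_corpus.txt",
    model_prefix="pl_subword",
    vocab_size=32000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="pl_subword.model")

# Inflected forms share subword pieces, which is what makes fine-tuning a
# language model on morphologically rich text more data-efficient.
print(sp.encode("Uniwersalny model jezyka", out_type=str))
```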
Learn to Code-Switch: Data Augmentation using Copy Mechanism on Language
Modeling | Building large-scale datasets for training code-switching language models is
challenging and very expensive. To alleviate this problem, using a parallel corpus
has been a major workaround. However, existing solutions use linguistic
constraints which may not capture the real data distribution. In this work, we
propose a novel method for learning how to generate code-switching sentences
from parallel corpora. Our model uses a Seq2Seq model in combination with
pointer networks to align and choose words from the monolingual sentences and
form a grammatical code-switching sentence. In our experiment, we show that by
training a language model using the augmented sentences we improve the
perplexity score by 10% compared to the LSTM baseline.
| 2,018 | Computation and Language |
A Proof-Theoretic Approach to Scope Ambiguity in Compositional Vector
Space Models | We investigate the extent to which compositional vector space models can be
used to account for scope ambiguity in quantified sentences (of the form "Every
man loves some woman"). Such sentences containing two quantifiers introduce two
readings, a direct scope reading and an inverse scope reading. This ambiguity
has been treated in a vector space model using bialgebras by (Hedges and
Sadrzadeh, 2016) and (Sadrzadeh, 2016), though without an explanation of the
mechanism by which the ambiguity arises. We combine a polarised focussed
sequent calculus for the non-associative Lambek calculus NL, as described in
(Moortgat and Moot, 2011), with the vector based approach to quantifier scope
ambiguity. In particular, we establish a procedure for obtaining a vector space
model for quantifier scope ambiguity in a derivational way.
| 2,018 | Computation and Language |
Learning to Discriminate Noises for Incorporating External Information
in Neural Machine Translation | Previous studies show that incorporating external information could improve
the translation quality of Neural Machine Translation (NMT) systems. However,
there is inevitably noise in the external information, severely reducing the
benefit that the existing methods could receive from the incorporation. To
tackle the problem, this study pays special attention to the discrimination of
the noises during the incorporation. We argue that there exist two kinds of
noise in this external information, i.e. global noise and local noise, which
affect the translations for the whole sentence and for some specific words,
respectively. Accordingly, we propose a general framework that learns to
jointly discriminate both the global and local noises, so that the external
information could be better leveraged. Our model is trained on the dataset
derived from the original parallel corpus without any external labeled data or
annotation. Experimental results in various real-world scenarios, language
pairs, and neural architectures indicate that discriminating noises contributes
to significant improvements in translation quality by being able to better
incorporate the external information, even in very noisy conditions.
| 2,018 | Computation and Language |
The MeMAD Submission to the IWSLT 2018 Speech Translation Task | This paper describes the MeMAD project entry to the IWSLT Speech Translation
Shared Task, addressing the translation of English audio into German text.
Between the pipeline and end-to-end model tracks, we participated only in the
former, with three contrastive systems. We also tried the latter, but were not
able to finish our end-to-end model in time.
All of our systems start by transcribing the audio into text through an
automatic speech recognition (ASR) model trained on the TED-LIUM English Speech
Recognition Corpus (TED-LIUM). Afterwards, we feed the transcripts into
English-German text-based neural machine translation (NMT) models. Our systems
employ three different translation models trained on separate training sets
compiled from the English-German part of the TED Speech Translation Corpus
(TED-Trans) and the OpenSubtitles2018 section of the OPUS collection.
In this paper, we also describe the experiments leading up to our final
systems. Our experiments indicate that using OpenSubtitles2018 in training
significantly improves translation performance. We also experimented with
various pre- and postprocessing routines for the NMT module, but we did not
have much success with these.
Our best-scoring system attains a BLEU score of 16.45 on the test set for
this year's task.
| 2,018 | Computation and Language |
Image-based Natural Language Understanding Using 2D Convolutional Neural
Networks | We propose a new approach to natural language understanding in which we
consider the input text as an image and apply 2D Convolutional Neural Networks
to learn the local and global semantics of the sentences from the variations
of the visual patterns of words. Our approach demonstrates that it is possible
to get semantically meaningful features from images with text without using
optical character recognition and sequential processing pipelines, techniques
that traditional Natural Language Understanding algorithms require. To validate
our approach, we present results for two applications: text classification and
dialog modeling. Using a 2D Convolutional Neural Network, we were able to
outperform the state-of-the-art accuracy results of non-Latin alphabet-based text
classification and achieved promising results for eight text classification
datasets. Furthermore, our approach outperformed the memory networks when using
out-of-vocabulary entities from task 4 of the bAbI dialog dataset.
| 2,018 | Computation and Language |
Effective extractive summarization using frequency-filtered entity
relationship graphs | Word frequency-based methods for extractive summarization are easy to
implement and yield reasonable results across languages. However, they have
significant limitations - they ignore the role of context, they offer uneven
coverage of topics in a document, and sometimes are disjointed and hard to
read. We use a simple premise from linguistic typology - that English sentences
are complete descriptors of potential interactions between entities, usually in
the order subject-verb-object - to address a subset of these difficulties. We
have developed a hybrid model of extractive summarization that combines
word-frequency based keyword identification with information from automatically
generated entity relationship graphs to select sentences for summaries.
Comparative evaluation with word-frequency and topic word-based methods shows
that the proposed method is competitive by conventional ROUGE standards, and
yields moderately more informative summaries on average, as assessed by a large
panel (N=94) of human raters.
| 2,018 | Computation and Language |
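For context, the word-frequency component that the abstract above builds on can be sketched in a few lines; the entity-relationship-graph filtering is omitted here, so this is only a generic baseline, not the proposed hybrid model.

```python
# Generic word-frequency extractive scoring: score each sentence by the summed
# normalized frequency of its content words, then pick the top-k sentences in
# document order. The entity-relationship-graph step the abstract describes is
# deliberately omitted; tokenization is naive and for illustration only.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "on", "and", "to", "is", "are"}

def summarize(text: str, k: int = 2) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)
    max_f = max(freq.values()) if freq else 1

    def score(sent: str) -> float:
        toks = [w for w in re.findall(r"[a-z']+", sent.lower()) if w not in STOPWORDS]
        return sum(freq[t] / max_f for t in toks) / (len(toks) or 1)

    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    chosen = sorted(ranked[:k])  # keep original document order
    return " ".join(sentences[i] for i in chosen)

print(summarize("Cats sleep a lot. Cats chase mice. The weather was mild."))
```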
Variational Semi-supervised Aspect-term Sentiment Analysis via
Transformer | Aspect-term sentiment analysis (ATSA) is a longstanding challenge in natural
language understanding. It requires fine-grained semantic reasoning about a
target entity that appears in the text. As manual annotation of the aspects is
laborious and time-consuming, the amount of labeled data is limited for
supervised learning. This paper proposes a semi-supervised method for the ATSA
problem by using the Variational Autoencoder based on Transformer (VAET), which
models the latent distribution via variational inference. By disentangling the
latent representation into the aspect-specific sentiment and the lexical
context, our method induces the underlying sentiment prediction for the
unlabeled data, which then benefits the ATSA classifier. Our method is
classifier agnostic, i.e., the classifier is an independent module and various
advanced supervised models can be integrated. Experimental results are obtained
on the SemEval 2014 task 4 and show that our method is effective with four
classical classifiers. The proposed method outperforms two general
semi-supervised methods and achieves state-of-the-art performance.
| 2,019 | Computation and Language |
Multi-Multi-View Learning: Multilingual and Multi-Representation Entity
Typing | Knowledge bases (KBs) are paramount in NLP. We employ multiview learning for
increasing accuracy and coverage of entity type information in KBs. We rely on
two metaviews: language and representation. For language, we consider
high-resource and low-resource languages from Wikipedia. For representation, we
consider representations based on the context distribution of the entity (i.e.,
on its embedding), on the entity's name (i.e., on its surface form) and on its
description in Wikipedia. The two metaviews language and representation can be
freely combined: each pair of language and representation (e.g., German
embedding, English description, Spanish name) is a distinct view. Our
experiments on entity typing with fine-grained classes demonstrate the
effectiveness of multiview learning. We release MVET, a large multiview - and,
in particular, multilingual - entity typing dataset we created. Mono- and
multilingual fine-grained entity typing systems can be evaluated on this
dataset.
| 2,018 | Computation and Language |
Clinical Concept Extraction with Contextual Word Embedding | Automatic extraction of clinical concepts is an essential step for turning
the unstructured data within a clinical note into structured and actionable
information. In this work, we propose a clinical concept extraction model for
automatic annotation of clinical problems, treatments, and tests in clinical
notes utilizing domain-specific contextual word embedding. A contextual word
embedding model is first trained on a corpus with a mixture of clinical reports
and relevant Wikipedia pages in the clinical domain. Next, a bidirectional
LSTM-CRF model is trained for clinical concept extraction using the contextual
word embedding model. We tested our proposed model on the I2B2 2010 challenge
dataset. Our proposed model achieved the best performance among reported
baseline models and outperformed the state-of-the-art models by 3.4% in terms
of F1-score.
| 2,018 | Computation and Language |
A Multilingual Study of Compressive Cross-Language Text Summarization | Cross-Language Text Summarization (CLTS) generates summaries in a language
different from the language of the source documents. Recent methods use
information from both languages to generate summaries with the most informative
sentences. However, the performance of these methods can vary across
languages, which can reduce the quality of summaries. In this paper, we propose
a compressive framework to generate cross-language summaries. In order to
analyze performance and especially stability, we tested our system and
extractive baselines on a dataset available in four languages (English, French,
Portuguese, and Spanish) to generate English and French summaries. An automatic
evaluation showed that our method outperformed extractive state-of-the-art CLTS
methods with better and more stable ROUGE scores for all languages.
| 2,018 | Computation and Language |
Predicting the Semantic Textual Similarity with Siamese CNN and LSTM | Semantic Textual Similarity (STS) is the basis of many applications in
Natural Language Processing (NLP). Our system combines convolution and
recurrent neural networks to measure the semantic similarity of sentences. It
uses a convolution network to take account of the local context of words and an
LSTM to consider the global context of sentences. This combination of networks
helps to preserve the relevant information of sentences and improves the
calculation of the similarity between sentences. Our model has achieved good
results and is competitive with the best state-of-the-art systems.
| 2,018 | Computation and Language |
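A rough sketch of a siamese CNN+LSTM sentence encoder of the kind described above, written in PyTorch; the layer sizes and the use of the final LSTM state are illustrative assumptions, not the paper's configuration.

```python
# Rough sketch of a siamese sentence encoder that combines a convolution over
# word embeddings (local context) with an LSTM (global context), then compares
# two sentences by cosine similarity. Dimensions are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseCnnLstm(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=100, conv_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, conv_dim, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_dim, hidden, batch_first=True)

    def encode(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.emb(token_ids)                   # (batch, seq_len, emb_dim)
        x = F.relu(self.conv(x.transpose(1, 2)))  # (batch, conv_dim, seq_len)
        _, (h_n, _) = self.lstm(x.transpose(1, 2))
        return h_n[-1]                             # final hidden state per sentence

    def forward(self, sent_a, sent_b):
        return F.cosine_similarity(self.encode(sent_a), self.encode(sent_b))

model = SiameseCnnLstm()
a = torch.randint(0, 10000, (2, 12))
b = torch.randint(0, 10000, (2, 15))
print(model(a, b))  # one similarity score in [-1, 1] per sentence pair
```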
Multi-level Memory for Task Oriented Dialogs | Recent end-to-end task oriented dialog systems use memory architectures to
incorporate external knowledge in their dialogs. Current work makes simplifying
assumptions about the structure of the knowledge base, such as the use of
triples to represent knowledge, and combines dialog utterances (context) as
well as knowledge base (KB) results as part of the same memory. This causes an
explosion in the memory size, and makes the reasoning over memory harder. In
addition, such a memory design forces hierarchical properties of the data to be
fit into a triple structure of memory. This requires the memory reader to infer
relationships across otherwise connected attributes. In this paper we relax the
strong assumptions made by existing architectures and separate memories used
for modeling dialog context and KB results. Instead of using triples to store
KB results, we introduce a novel multi-level memory architecture consisting of
cells for each query and their corresponding results. The multi-level memory
first addresses queries, followed by results and finally each key-value pair
within a result. We conduct detailed experiments on three publicly available
task oriented dialog data sets and we find that our method conclusively
outperforms current state-of-the-art models. We report a 15-25% increase in
both entity F1 and BLEU scores.
| 2,020 | Computation and Language |
Word Embedding based Edit Distance | Text similarity calculation is a fundamental problem in natural language
processing and related fields. In recent years, deep neural networks have been
developed to perform the task and high performances have been achieved. The
neural networks are usually trained with labeled data in supervised learning,
and creation of labeled data is usually very costly. In this short paper, we
address unsupervised learning for text similarity calculation. We propose a new
method called Word Embedding based Edit Distance (WED), which incorporates word
embedding into edit distance. Experiments on three benchmark datasets show WED
outperforms state-of-the-art unsupervised methods including edit distance,
TF-IDF based cosine, word embedding based cosine, Jaccard index, etc.
| 2,018 | Computation and Language |
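One plausible instantiation of the idea above: Levenshtein dynamic programming in which the substitution cost is derived from word-vector similarity. This illustrates the general approach rather than the exact WED formulation; `vectors` is a hypothetical word-to-vector lookup.

```python
# Embedding-aware edit distance between two token sequences: standard
# Levenshtein dynamic programming, but the substitution cost comes from cosine
# similarity of word vectors instead of a fixed 0/1 cost. Illustrative only.
import numpy as np

def sub_cost(w1, w2, vectors):
    if w1 == w2:
        return 0.0
    v1, v2 = vectors.get(w1), vectors.get(w2)
    if v1 is None or v2 is None:
        return 1.0
    cos = float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9))
    return 1.0 - max(0.0, cos)  # similar words are cheap to substitute

def embedding_edit_distance(a, b, vectors):
    m, n = len(a), len(b)
    d = np.zeros((m + 1, n + 1))
    d[:, 0] = np.arange(m + 1)   # deletions
    d[0, :] = np.arange(n + 1)   # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i, j] = min(d[i - 1, j] + 1,
                          d[i, j - 1] + 1,
                          d[i - 1, j - 1] + sub_cost(a[i - 1], b[j - 1], vectors))
    return d[m, n]

toy = {"car": np.array([1.0, 0.0]), "automobile": np.array([0.9, 0.1])}
print(embedding_edit_distance(["a", "car"], ["an", "automobile"], toy))
```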
The Logoscope: a Semi-Automatic Tool for Detecting and Documenting
French New Words | In this article we present the design and implementation of the Logoscope,
the first tool especially developed to detect new words of the French language,
to document them and allow a public access through a web interface.
This semi-automatic tool collects new words daily by browsing the online
versions of well-known French newspapers such as Le Monde, Le Figaro, L'Équipe,
Libération, La Croix, and Les Échos. In contrast to other existing tools
essentially dedicated to dictionary development, the Logoscope attempts to give
a more complete account of the context in which the new words occur. In
addition to the commonly given morpho-syntactic information it also provides
information about the textual and discursive contexts of the word creation; in
particular, it automatically determines the (journalistic) topics of the text
containing the new word.
In this article we first give a general overview of the developed tool. We
then describe the approach taken, discuss the linguistic background which
guided our design decisions, and present the computational methods we used to
implement it.
| 2,018 | Computation and Language |
Tackling Sequence to Sequence Mapping Problems with Neural Networks | In Natural Language Processing (NLP), it is important to detect the
relationship between two sequences or to generate a sequence of tokens given
another observed sequence. We call the type of problems on modelling sequence
pairs as sequence to sequence (seq2seq) mapping problems. A lot of research has
been devoted to finding ways of tackling these problems, with traditional
approaches relying on a combination of hand-crafted features, alignment models,
segmentation heuristics, and external linguistic resources. Although great
progress has been made, these traditional approaches suffer from various
drawbacks, such as complicated pipelines, laborious feature engineering, and
difficulty with domain adaptation. Recently, neural networks emerged as a
promising solution to many problems in NLP, speech recognition, and computer
vision. Neural models are powerful because they can be trained end to end,
generalise well to unseen examples, and the same framework can be easily
adapted to a new domain.
The aim of this thesis is to advance the state-of-the-art in seq2seq mapping
problems with neural networks. We explore solutions from three major aspects:
investigating neural models for representing sequences, modelling interactions
between sequences, and using unpaired data to boost the performance of neural
models. For each aspect, we propose novel models and evaluate their efficacy on
various tasks of seq2seq mapping.
| 2,018 | Computation and Language |
Dynamic Oracles for Top-Down and In-Order Shift-Reduce Constituent
Parsing | We introduce novel dynamic oracles for training two of the most accurate
known shift-reduce algorithms for constituent parsing: the top-down and
in-order transition-based parsers. In both cases, the dynamic oracles manage to
notably increase their accuracy, in comparison to that obtained by performing
classic static training. In addition, by improving the performance of the
state-of-the-art in-order shift-reduce parser, we achieve the best accuracy to
date (92.0 F1) obtained by a fully-supervised single-model greedy shift-reduce
constituent parser on the WSJ benchmark.
| 2,018 | Computation and Language |
Bayesian Compression for Natural Language Processing | In natural language processing, a lot of the tasks are successfully solved
with recurrent neural networks, but such models have a huge number of
parameters. The majority of these parameters are often concentrated in the
embedding layer, whose size grows in proportion to the vocabulary size. We
propose a Bayesian sparsification technique for RNNs which allows compressing
the RNN dozens or hundreds of times without time-consuming hyperparameter
tuning. We also generalize the model for vocabulary sparsification to filter
out unnecessary words and compress the RNN even further. We show that the
choice of the kept words is interpretable. Code is available on github:
https://github.com/tipt0p/SparseBayesianRNN
| 2,018 | Computation and Language |
Understanding the Role of Two-Sided Argumentation in Online Consumer
Reviews: A Language-Based Perspective | This paper examines the effect of two-sided argumentation on the perceived
helpfulness of online consumer reviews. In contrast to previous works, our
analysis thereby sheds light on the reception of reviews from a language-based
perspective. For this purpose, we propose an intriguing text analysis approach
based on distributed text representations and multi-instance learning to
operationalize the two-sidedness of argumentation in review texts. A subsequent
empirical analysis using a large corpus of Amazon reviews suggests that
two-sided argumentation in reviews significantly increases their helpfulness.
We find this effect to be stronger for positive reviews than for negative
reviews, whereas a higher degree of emotional language weakens the effect. Our
findings have immediate implications for retailer platforms, which can utilize
our results to optimize their customer feedback system and to present more
useful product reviews.
| 2,018 | Computation and Language |
Learning Emotion from 100 Observations: Unexpected Robustness of Deep
Learning under Strong Data Limitations | One of the major downsides of Deep Learning is its supposed need for vast
amounts of training data. As such, these techniques appear ill-suited for NLP
areas where annotated data is limited, such as less-resourced languages or
emotion analysis, with its many nuanced and hard-to-acquire annotation formats.
We conduct a questionnaire study indicating that indeed the vast majority of
researchers in emotion analysis deems neural models inferior to traditional
machine learning when training data is limited. In stark contrast to those
survey results, we provide empirical evidence for English, Polish, and
Portuguese that commonly used neural architectures can be trained on
surprisingly few observations, outperforming $n$-gram based ridge regression on
only 100 data points. Our analysis suggests that high-quality, pre-trained word
embeddings are a main factor for achieving those results.
| 2,020 | Computation and Language |
Teaching Syntax by Adversarial Distraction | Existing entailment datasets mainly pose problems which can be answered
without attention to grammar or word order. Learning syntax requires comparing
examples where different grammar and word order change the desired
classification. We introduce several datasets based on synthetic
transformations of natural entailment examples in SNLI or FEVER, to teach
aspects of grammar and word order. We show that without retraining, popular
entailment models are unaware that these syntactic differences change meaning.
With retraining, some but not all popular entailment models can learn to
compare the syntax properly.
| 2,018 | Computation and Language |
UniMorph 2.0: Universal Morphology | The Universal Morphology UniMorph project is a collaborative effort to
improve how NLP handles complex morphology across the world's languages. The
project releases annotated morphological data using a universal tagset, the
UniMorph schema. Each inflected form is associated with a lemma, which
typically carries its underlying lexical meaning, and a bundle of morphological
features from our schema. Additional supporting data and tools are also
released on a per-language basis when available. UniMorph is based at the
Center for Language and Speech Processing (CLSP) at Johns Hopkins University in
Baltimore, Maryland and is sponsored by the DARPA LORELEI program. This paper
details advances made to the collection, annotation, and dissemination of
project resources since the initial UniMorph release described at LREC 2016.
| 2,020 | Computation and Language |
A Large-Scale Corpus for Conversation Disentanglement | Disentangling conversations mixed together in a single stream of messages is
a difficult task, made harder by the lack of large manually annotated datasets.
We created a new dataset of 77,563 messages manually annotated with
reply-structure graphs that both disentangle conversations and define internal
conversation structure. Our dataset is 16 times larger than all previously
released datasets combined, the first to include adjudication of annotation
disagreements, and the first to include context. We use our data to re-examine
prior work, in particular, finding that 80% of conversations in a widely used
dialogue corpus are either missing messages or contain extra messages. Our
manually-annotated data presents an opportunity to develop robust data-driven
methods for conversation disentanglement, which will help advance dialogue
research.
| 2,019 | Computation and Language |
Magnitude: A Fast, Efficient Universal Vector Embedding Utility Package | Vector space embedding models like word2vec, GloVe, fastText, and ELMo are
extremely popular representations in natural language processing (NLP)
applications. We present Magnitude, a fast, lightweight tool for utilizing and
processing embeddings. Magnitude is an open source Python package with a
compact vector storage file format that allows for efficient manipulation of
huge numbers of embeddings. Magnitude performs common operations up to 60 to
6,000 times faster than Gensim. Magnitude introduces several novel features for
improved robustness like out-of-vocabulary lookups.
| 2,018 | Computation and Language |
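A rough usage sketch of Magnitude as described above; the file path is hypothetical and the exact method names should be checked against the package's documentation.

```python
# Rough usage sketch of the Magnitude package; treat method names and the
# .magnitude file path as assumptions to verify against the project docs.
from pymagnitude import Magnitude

vectors = Magnitude("GoogleNews-vectors-negative300.magnitude")

print(len(vectors))                     # vocabulary size
print(vectors.dim)                      # embedding dimensionality
print(vectors.query("language").shape)  # dense vector for an in-vocabulary word
# Out-of-vocabulary lookups still return a usable vector, one of the
# robustness features mentioned in the abstract.
print(vectors.query("languaage")[:5])
print(vectors.similarity("cat", "dog"))
```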
Integrating Transformer and Paraphrase Rules for Sentence Simplification | Sentence simplification aims to reduce the complexity of a sentence while
retaining its original meaning. Current models for sentence simplification
adopted ideas from machine translation studies and implicitly learned
simplification mapping rules from normal-simple sentence pairs. In this paper,
we explore a novel model based on a multi-layer and multi-head attention
architecture and we propose two innovative approaches to integrate the Simple
PPDB (A Paraphrase Database for Simplification), an external paraphrase
knowledge base for simplification that covers a wide range of real-world
simplification rules. The experiments show that the integration provides two
major benefits: (1) the integrated model outperforms multiple state-of-the-art
baseline models for sentence simplification in the literature, and (2) through
analysis of the rule utilization, the model seeks to select more accurate
simplification rules. The code and models used in the paper are available at
https://github.com/Sanqiang/text_simplification.
| 2,018 | Computation and Language |
Static and Dynamic Vector Semantics for Lambda Calculus Models of
Natural Language | Vector models of language are based on the contextual aspects of language,
the distributions of words and how they co-occur in text. Truth conditional
models focus on the logical aspects of language, compositional properties of
words and how they compose to form sentences. In the truth conditional
approach, the denotation of a sentence determines its truth conditions, which
can be taken to be a truth value, a set of possible worlds, a context change
potential, or similar. In the vector models, the degree of co-occurrence of
words in context determines how similar the meanings of words are. In this
paper, we put these two models together and develop a vector semantics for
language based on the simply typed lambda calculus models of natural language.
We provide two types of vector semantics: a static one that uses techniques
familiar from the truth conditional tradition and a dynamic one based on a form
of dynamic interpretation inspired by Heim's context change potentials. We show
how the dynamic model can be applied to entailment between a corpus and a
sentence and we provide examples.
| 2,018 | Computation and Language |
LAMVI-2: A Visual Tool for Comparing and Tuning Word Embedding Models | Tuning machine learning models, particularly deep learning architectures, is
a complex process. Automated hyperparameter tuning algorithms often depend on
specific optimization metrics. However, in many situations, a developer trades
one metric against another: accuracy versus overfitting, precision versus
recall, model size versus accuracy, etc. With deep learning, not only are the
model's representations opaque, the model's behavior when parameter "knobs"
are changed may also be unpredictable. Thus, picking the "best" model often
requires time-consuming model comparison. In this work, we introduce LAMVI-2, a
visual analytics system to support a developer in comparing hyperparameter
settings and outcomes. By focusing on word-embedding models ("deep learning for
text") we integrate views to compare both high-level statistics as well as
internal model behaviors (e.g., comparing word 'distances'). We demonstrate how
developers can work with LAMVI-2 to more quickly and accurately narrow down an
appropriate and effective application-specific model.
| 2,018 | Computation and Language |
Named Person Coreference in English News | People are often entities of interest in tasks such as search and information
extraction. In these tasks, the goal is to find as much information as possible
about people specified by their name. However in text, some of the references
to people are by pronouns (she, his) or generic descriptions (the professor,
the German chancellor). It is therefore important that coreference resolution
systems are able to link these different types of mentions to the correct
person name. Here, we evaluate two state of the art coreference resolution
systems on the subtask of Named Person Coreference, in which we are interested
in identifying a person mentioned by name, along with all other mentions of the
person, by pronoun or generic noun phrase. Our analysis reveals that standard
coreference metrics do not reflect adequately the requirements in this task:
they do not penalize systems for not identifying any mentions by name and they
reward systems that correctly find mentions of the same entity even if they
fail to link them to a proper name (she--the student--no name). We introduce
new metrics for evaluating named person coreference that address these
discrepancies. We present a simple rule-based named entity recognition driven
system, which outperforms the current state-of-the-art systems on these
task-specific metrics and performs on par with them on traditional coreference
evaluations. Finally, we present similar evaluation for coreference resolution
of other named entities and show that the rule-based approach is effective only
for named person coreference, not for other named entity types.
| 2,018 | Computation and Language |
Can Entropy Explain Successor Surprisal Effects in Reading? | Human reading behavior is sensitive to surprisal: more predictable words tend
to be read faster. Unexpectedly, this applies not only to the surprisal of the
word that is currently being read, but also to the surprisal of upcoming
(successor) words that have not been fixated yet. This finding has been
interpreted as evidence that readers can extract lexical information
parafoveally. Calling this interpretation into question, Angele et al. (2015)
showed that successor effects appear even in contexts in which those successor
words are not yet visible. They hypothesized that successor surprisal predicts
reading time because it approximates the reader's uncertainty about upcoming
words. We test this hypothesis on a reading time corpus using an LSTM language
model, and find that successor surprisal and entropy are independent predictors
of reading time. This independence suggests that entropy alone is unlikely to
be the full explanation for successor surprisal effects.
| 2,018 | Computation and Language |
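The two predictors contrasted above, surprisal and entropy, can be computed directly from a language model's next-word distribution; a small self-contained example with a toy distribution follows.

```python
# Surprisal depends on which word actually occurs; entropy summarizes the
# model's uncertainty before seeing it. The probability vector below is a toy
# stand-in for an LSTM language model's softmax output.
import numpy as np

def surprisal(next_word_probs: np.ndarray, observed_word_id: int) -> float:
    # -log2 P(w_t | context): high when the observed word was unexpected.
    return float(-np.log2(next_word_probs[observed_word_id]))

def entropy(next_word_probs: np.ndarray) -> float:
    # H = -sum_w P(w | context) log2 P(w | context): uncertainty about what
    # comes next, independent of the word that is eventually read.
    p = next_word_probs[next_word_probs > 0]
    return float(-np.sum(p * np.log2(p)))

probs = np.array([0.6, 0.3, 0.05, 0.05])     # toy distribution over a 4-word vocab
print(surprisal(probs, observed_word_id=1))  # ~1.74 bits
print(entropy(probs))                        # ~1.40 bits
```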
Parsing Coordination for Spoken Language Understanding | Typical spoken language understanding systems provide narrow semantic parses
using a domain-specific ontology. The parses contain intents and slots that are
directly consumed by downstream domain applications. In this work we discuss
expanding such systems to handle compound entities and intents by introducing a
domain-agnostic shallow parser that handles linguistic coordination. We show
that our model for parsing coordination learns domain-independent and
slot-independent features and is able to segment conjunct boundaries of many
different phrasal categories. We also show that using adversarial training can
be effective for improving generalization across different slot types for
coordination parsing.
| 2,018 | Computation and Language |
Handling Imbalanced Dataset in Multi-label Text Categorization using
Bagging and Adaptive Boosting | Imbalanced datasets occur due to the uneven distribution of data available
in the real world, such as the disposition of complaints to government offices in
Bandung. Consequently, multi-label text categorization algorithms may not
produce the best performance because classifiers tend to be weighed down by the
majority of the data and ignore the minority. In this paper, Bagging and
Adaptive Boosting algorithms are employed to handle the issue and improve the
performance of text categorization. The result is evaluated with four
evaluation metrics such as hamming loss, subset accuracy, example-based
accuracy and micro-averaged f-measure. Bagging ML-LP with SMO weak classifier
is the best performer in terms of subset accuracy and example-based accuracy.
Bagging ML-BR with SMO weak classifier has the best micro-averaged f-measure
among all. On the other hand, AdaBoost.MH with a J48 weak classifier has the lowest
hamming loss value. Thus, both algorithms have high potential in boosting the
performance of text categorization, but only for certain weak classifiers.
However, bagging has more potential than adaptive boosting in increasing the
accuracy of minority labels.
| 2,019 | Computation and Language |
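A rough scikit-learn analogue of the setup described above, using binary relevance with a bagged base learner; the paper's experiments use WEKA/MEKA-style ML-BR and ML-LP with SMO and J48 weak classifiers, so the decision trees here are a stand-in for illustration only.

```python
# Binary-relevance multi-label classification with a bagged base learner,
# evaluated with hamming loss, subset accuracy, and micro-averaged F1.
# Synthetic data and decision-tree base learners are illustrative substitutes
# for the paper's complaint dataset and SMO/J48 weak classifiers.
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, f1_score, hamming_loss
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.tree import DecisionTreeClassifier

X, Y = make_multilabel_classification(n_samples=500, n_classes=5, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

clf = OneVsRestClassifier(BaggingClassifier(DecisionTreeClassifier(), n_estimators=25))
clf.fit(X_tr, Y_tr)
pred = clf.predict(X_te)

print("hamming loss:   ", hamming_loss(Y_te, pred))
print("subset accuracy:", accuracy_score(Y_te, pred))   # exact-match ratio
print("micro-avg F1:   ", f1_score(Y_te, pred, average="micro"))
```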
Suspicious News Detection Using Micro Blog Text | We present a new task, suspicious news detection using micro blog text. This
task aims to support human experts to detect suspicious news articles to be
verified, which is costly but a crucial step before verifying the truthfulness
of the articles. Specifically, in this task, given a set of posts on SNS
referring to a news article, the goal is to judge whether the article is to be
verified or not. For this task, we create a publicly available dataset in
Japanese and provide benchmark results by using several basic machine learning
techniques. Experimental results show that our models can reduce the cost of
manual fact-checking process.
| 2,018 | Computation and Language |
Middle-Out Decoding | Despite being virtually ubiquitous, sequence-to-sequence models are
challenged by their lack of diversity and inability to be externally
controlled. In this paper, we speculate that a fundamental shortcoming of
sequence generation models is that the decoding is done strictly from
left-to-right, meaning that output values generated earlier have a profound
effect on those generated later. To address this issue, we propose a novel
middle-out decoder architecture that begins from an initial middle-word and
simultaneously expands the sequence in both directions. To facilitate
information flow and maintain consistent decoding, we introduce a dual
self-attention mechanism that allows us to model complex dependencies between
the outputs. We illustrate the performance of our model on the task of video
captioning, as well as a synthetic sequence de-noising task. Our middle-out
decoder achieves significant improvements on de-noising and competitive
performance in the task of video captioning, while quantifiably improving the
caption diversity. Furthermore, we perform a qualitative analysis that
demonstrates our ability to effectively control the generation process of our
decoder.
| 2,018 | Computation and Language |
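A toy illustration of the middle-out decoding order described above; the scoring function is a hard-coded stub standing in for the trained decoder with dual self-attention, so only the expansion mechanics are shown.

```python
# Greedy middle-out expansion: start from an initial middle word and extend
# the sequence on both sides, querying a model for the best word to prepend or
# append. `score_next` is a hypothetical stub, not the paper's decoder.
def middle_out_decode(middle_word, score_next, max_len=7, eos="<eos>", bos="<bos>"):
    seq = [middle_word]
    left_done = right_done = False
    while len(seq) < max_len and not (left_done and right_done):
        if not left_done:
            w = score_next(seq, side="left")     # best word to prepend
            if w == bos:
                left_done = True
            else:
                seq.insert(0, w)
        if not right_done:
            w = score_next(seq, side="right")    # best word to append
            if w == eos:
                right_done = True
            else:
                seq.append(w)
    return seq

# Hard-coded stub standing in for a neural scorer, just to show the mechanics.
script = {"left": iter(["man", "a", "<bos>"]), "right": iter(["a", "dog", "<eos>"])}
stub = lambda seq, side: next(script[side], "<eos>" if side == "right" else "<bos>")
print(" ".join(middle_out_decode("walks", stub)))  # -> "a man walks a dog"
```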
Robots Learning to Say `No': Prohibition and Rejective Mechanisms in
Acquisition of Linguistic Negation | `No' belongs to the first ten words used by children and embodies the first
active form of linguistic negation. Despite its early occurrence the details of
its acquisition process remain largely unknown. The circumstance that `no'
cannot be construed as a label for perceptible objects or events puts it
outside of the scope of most modern accounts of language acquisition. Moreover,
most symbol grounding architectures will struggle to ground the word due to its
non-referential character. In an experimental study involving the child-like
humanoid robot iCub that was designed to illuminate the acquisition process of
negation words, the robot is deployed in several rounds of speech-wise
unconstrained interaction with naïve participants acting as its language
teachers. The results corroborate the hypothesis that affect or volition plays
a pivotal role in the socially distributed acquisition process. Negation words
are prosodically salient within prohibitive utterances and negative intent
interpretations such that they can be easily isolated from the teacher's speech
signal. These words subsequently may be grounded in negative affective states.
However, observations of the nature of prohibitive acts and the temporal
relationships between its linguistic and extra-linguistic components raise
serious questions over the suitability of Hebbian-type algorithms for language
grounding.
| 2,023 | Computation and Language |
Unsupervised Evaluation Metrics and Learning Criteria for Non-Parallel
Textual Transfer | We consider the problem of automatically generating textual paraphrases with
modified attributes or properties, focusing on the setting without parallel
data (Hu et al., 2017; Shen et al., 2017). This setting poses challenges for
evaluation. We show that the metric of post-transfer classification accuracy is
insufficient on its own, and propose additional metrics based on semantic
preservation and fluency as well as a way to combine them into a single overall
score. We contribute new loss functions and training strategies to address the
different metrics. Semantic preservation is addressed by adding a cyclic
consistency loss and a loss based on paraphrase pairs, while fluency is
improved by integrating losses based on style-specific language models. We
experiment with a Yelp sentiment dataset and a new literature dataset that we
propose, using multiple models that extend prior work (Shen et al., 2017). We
demonstrate that our metrics correlate well with human judgments, at both the
sentence-level and system-level. Automatic and manual evaluation also show
large improvements over the baseline method of Shen et al. (2017). We hope that
our proposed metrics can speed up system development for new textual transfer
tasks while also encouraging the community to address our three complementary
aspects of transfer quality.
| 2,019 | Computation and Language |
Language Modeling for Code-Switching: Evaluation, Integration of
Monolingual Data, and Discriminative Training | We focus on the problem of language modeling for code-switched language, in
the context of automatic speech recognition (ASR). Language modeling for
code-switched language is challenging for (at least) three reasons: (1) lack of
available large-scale code-switched data for training; (2) lack of a replicable
evaluation setup that is ASR directed yet isolates language modeling
performance from the other intricacies of the ASR system; and (3) the reliance
on generative modeling. We tackle these three issues: we propose an
ASR-motivated evaluation setup which is decoupled from an ASR system and the
choice of vocabulary, and provide an evaluation dataset for English-Spanish
code-switching. This setup lends itself to a discriminative training approach,
which we demonstrate to work better than generative language modeling. Finally,
we explore a variety of training protocols and verify the effectiveness of
training with large amounts of monolingual data followed by fine-tuning with
small amounts of code-switched data, for both the generative and discriminative
cases.
| 2,019 | Computation and Language |
A Knowledge-Grounded Multimodal Search-Based Conversational Agent | Multimodal search-based dialogue is a challenging new task: It extends
visually grounded question answering systems into multi-turn conversations with
access to an external database. We address this new challenge by learning a
neural response generation system from the recently released Multimodal
Dialogue (MMD) dataset (Saha et al., 2017). We introduce a knowledge-grounded
multimodal conversational model where an encoded knowledge base (KB)
representation is appended to the decoder input. Our model substantially
outperforms strong baselines in terms of text-based similarity measures (over 9
BLEU points, 3 of which are solely due to the use of additional information
from the KB).
| 2,018 | Computation and Language |
Improving Context Modelling in Multimodal Dialogue Generation | In this work, we investigate the task of textual response generation in a
multimodal task-oriented dialogue system. Our work is based on the recently
released Multimodal Dialogue (MMD) dataset (Saha et al., 2017) in the fashion
domain. We introduce a multimodal extension to the Hierarchical Recurrent
Encoder-Decoder (HRED) model and show that this extension outperforms strong
baselines in terms of text-based similarity metrics. We also showcase the
shortcomings of current vision and language models by performing an error
analysis on our system's output.
| 2,018 | Computation and Language |
Ruuh: A Deep Learning Based Conversational Social Agent | Dialogue systems and conversational agents are becoming increasingly popular
in the modern society but building an agent capable of holding intelligent
conversation with its users is a challenging problem for artificial
intelligence. In this demo, we demonstrate a deep learning based conversational
social agent called "Ruuh" (facebook.com/Ruuh) designed by a team at Microsoft
India to converse on a wide range of topics. Ruuh needs to think beyond the
utilitarian notion of merely generating "relevant" responses and meet a wider
range of user social needs, like expressing happiness when the user's favorite
team wins, sharing a cute comment when shown pictures of the user's pet, and so
on. The agent also needs to detect and respond to abusive language, sensitive
topics and trolling behavior of the users. Many of these problems pose
significant research challenges which will be demonstrated in our demo. Our
agent has interacted with over 2 million real-world users to date, which has
generated over 150 million user conversations.
| 2,018 | Computation and Language |
ReviewQA: a relational aspect-based opinion reading dataset | Deep reading models for question-answering have demonstrated promising
performance over the last couple of years. However, current systems tend to
learn how to cleverly extract a span of the source document based on its
similarity to the question, instead of seeking the appropriate answer.
Indeed, a reading machine should be able to detect relevant passages in a
document regarding a question, but more importantly, it should be able to
reason over the important pieces of the document in order to produce an answer
when it is required. To motivate this purpose, we present ReviewQA, a
question-answering dataset based on hotel reviews. The questions of this
dataset are linked to a set of relational understanding competencies that we
expect a model to master. Indeed, each question comes with an associated type
that characterizes the required competency. With this framework, it is possible
to benchmark the main families of models and to get an overview of the
strengths and weaknesses of a given model on the set of tasks evaluated in
this dataset. Our corpus contains more than 500,000 questions in natural
language over 100,000 hotel reviews. Our setup is projective: the answer to a
question does not need to be extracted from a document, like in most of the
recent datasets, but selected among a set of candidates that contains all the
possible answers to the questions of the dataset. Finally, we present several
baselines over this dataset.
| 2,018 | Computation and Language |
Learning Comment Generation by Leveraging User-Generated Data | Existing models on open-domain comment generation are difficult to train, and
they produce repetitive and uninteresting responses. The problem is caused by
the multiple and contradictory responses to a single article and by the rigidity
of retrieval methods. To solve this problem, we propose a combined approach to
retrieval and generation methods. We propose an attentive scorer to retrieve
informative and relevant comments by leveraging user-generated data. Then, we
use such comments, together with the article, as input for a
sequence-to-sequence model with copy mechanism. We show the robustness of our
model and how it can alleviate the aforementioned issue by using a large scale
comment generation dataset. The result shows that the proposed generative model
significantly outperforms strong baselines such as Seq2Seq with attention and
Information Retrieval models by around 27 and 30 BLEU-1 points respectively.
| 2,019 | Computation and Language |
Content Selection in Deep Learning Models of Summarization | We carry out experiments with deep learning models of summarization across
the domains of news, personal stories, meetings, and medical articles in order
to understand how content selection is performed. We find that many
sophisticated features of state of the art extractive summarizers do not
improve performance over simpler models. These results suggest that it is
easier to create a summarizer for a new domain than previous work suggests and
bring into question the benefit of deep learning models for summarization for
those domains that do have massive datasets (i.e., news). At the same time,
they suggest important questions for new research in summarization; namely, new
forms of sentence representations or external knowledge sources are needed that
are better suited to the summarization task.
| 2,019 | Computation and Language |
Multi-label Multi-task Deep Learning for Behavioral Coding | We propose a methodology for estimating human behaviors in psychotherapy
sessions using multi-label and multi-task learning paradigms. We discuss the
problem of behavioral coding, in which data from human interactions is
annotated with labels that describe relevant human behaviors of interest. We
describe two related, yet distinct, corpora consisting of therapist-client
interactions in psychotherapy sessions. We experimentally compare the proposed
learning approaches for estimating behaviors of interest in these datasets.
Specifically, we compare single and multiple label learning approaches, single
and multiple task learning approaches, and evaluate the performance of these
approaches when incorporating turn context. We demonstrate the prediction
performance gains which can be achieved by using the proposed paradigms and
discuss the insights these models provide into these complex interactions.
| 2,020 | Computation and Language |
A Pragmatic Guide to Geoparsing Evaluation | Empirical methods in geoparsing have thus far lacked a standard evaluation
framework describing the task, metrics and data used to compare
state-of-the-art systems. Evaluation is further made inconsistent, even
unrepresentative of real-world usage by the lack of distinction between the
different types of toponyms, which necessitates new guidelines, a consolidation
of metrics and a detailed toponym taxonomy with implications for Named Entity
Recognition (NER) and beyond. To address these deficiencies, our manuscript
introduces a new framework in three parts. Part 1) Task Definition: clarified
via corpus linguistic analysis proposing a fine-grained Pragmatic Taxonomy of
Toponyms. Part 2) Metrics: discussed and reviewed for a rigorous evaluation
including recommendations for NER/Geoparsing practitioners. Part 3) Evaluation
Data: shared via a new dataset called GeoWebNews to provide test/train examples
and enable immediate use of our contributions. In addition to fine-grained
Geotagging and Toponym Resolution (Geocoding), this dataset is also suitable
for prototyping and evaluating machine learning NLP models.
| 2,019 | Computation and Language |
Language Modeling with Sparse Product of Sememe Experts | Most language modeling methods rely on large-scale data to statistically
learn the sequential patterns of words. In this paper, we argue that words are
atomic language units but not necessarily atomic semantic units. Inspired by
HowNet, we use sememes, the minimum semantic units in human languages, to
represent the implicit semantics behind words for language modeling, named
Sememe-Driven Language Model (SDLM). More specifically, to predict the next
word, SDLM first estimates the sememe distribution given the textual context.
Afterward, it regards each sememe as a distinct semantic expert, and these
experts jointly identify the most probable senses and the corresponding word.
In this way, SDLM enables language models to work beyond word-level
manipulation to fine-grained sememe-level semantics and offers us more powerful
tools to fine-tune language models and improve the interpretability as well as
the robustness of language models. Experiments on language modeling and the
downstream application of headline generation demonstrate the significant
effect of SDLM. Source code and data used in the experiments can be accessed at
https://github.com/thunlp/SDLM-pytorch.
| 2,018 | Computation and Language |
Parallel Attention Mechanisms in Neural Machine Translation | Recent papers in neural machine translation have proposed the strict use of
attention mechanisms over previous standards such as recurrent and
convolutional neural networks (RNNs and CNNs). We propose that by running
the traditionally stacked encoding branches of encoder-decoder attention-focused
architectures in parallel, even more sequential operations can be removed
from the model, thereby decreasing training time. In particular, we modify the
recently published attention-based architecture called Transformer by Google,
by replacing sequential attention modules with parallel ones, reducing the
amount of training time and substantially improving BLEU scores at the same
time. Experiments over the English to German and English to French translation
tasks show that our model establishes a new state of the art.
| 2,018 | Computation and Language |
Learning Better Internal Structure of Words for Sequence Labeling | Character-based neural models have recently proven very useful for many NLP
tasks. However, there is a gap of sophistication between methods for learning
representations of sentences and words. While most character models for
learning representations of sentences are deep and complex, models for learning
representations of words are shallow and simple. Also, in spite of considerable
research on learning character embeddings, it is still not clear which kind of
architecture is the best for capturing character-to-word representations. To
address these questions, we first investigate the gaps between methods for
learning word and sentence representations. We conduct detailed experiments and
comparisons of different state-of-the-art convolutional models, and also
investigate the advantages and disadvantages of their constituents.
Furthermore, we propose IntNet, a funnel-shaped wide convolutional neural
architecture with no down-sampling for learning representations of the internal
structure of words by composing their characters from limited, supervised
training corpora. We evaluate our proposed model on six sequence labeling
datasets, including named entity recognition, part-of-speech tagging, and
syntactic chunking. Our in-depth analysis shows that IntNet significantly
outperforms other character embedding models and obtains new state-of-the-art
performance without relying on any external knowledge or resources.
| 2,018 | Computation and Language |
Simplifying Neural Machine Translation with Addition-Subtraction
Twin-Gated Recurrent Networks | In this paper, we propose an additionsubtraction twin-gated recurrent network
(ATR) to simplify neural machine translation. The recurrent units of ATR are
heavily simplified to have the smallest number of weight matrices among units
of all existing gated RNNs. With the simple addition and subtraction operation,
we introduce a twin-gated mechanism to build input and forget gates which are
highly correlated. Despite this simplification, the essential non-linearities
and capability of modeling long-distance dependencies are preserved.
Additionally, the proposed ATR is more transparent than LSTM/GRU due to the
simplification. Forward self-attention can be easily established in ATR, which
makes the proposed network interpretable. Experiments on WMT14 translation
tasks demonstrate that ATR-based neural machine translation can yield
competitive performance on English-German and English-French language pairs in
terms of both translation quality and speed. Further experiments on NIST
Chinese-English translation, natural language inference and Chinese word
segmentation verify the generality and applicability of ATR on different
natural language processing tasks.
| 2,018 | Computation and Language |
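One plausible reading of the twin-gated recurrent unit sketched above, written in PyTorch: the same two projections are combined by addition to form the input gate and by subtraction to form the forget gate, so the two gates are tightly coupled. This is an illustration of the idea, not a verified reproduction of ATR.

```python
# Sketch of a twin-gated recurrent cell: one projection of the input and one of
# the previous state; their sum gates the input, their difference gates the
# previous state. Only two weight matrices are used, in the spirit of the
# simplification the abstract describes. Illustrative, not the paper's code.
import torch
import torch.nn as nn

class TwinGatedCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.W_x = nn.Linear(input_size, hidden_size)
        self.W_h = nn.Linear(hidden_size, hidden_size, bias=False)

    def forward(self, x_t, h_prev):
        p = self.W_x(x_t)            # projected input
        q = self.W_h(h_prev)         # projected previous state
        i = torch.sigmoid(p + q)     # input gate  (addition)
        f = torch.sigmoid(p - q)     # forget gate (subtraction), twin of i
        return i * p + f * h_prev    # new hidden state

cell = TwinGatedCell(8, 16)
h = torch.zeros(4, 16)
for t in range(5):                   # unroll over a toy sequence of length 5
    h = cell(torch.randn(4, 8), h)
print(h.shape)  # torch.Size([4, 16])
```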
Machine Translation between Vietnamese and English: an Empirical Study | Machine translation is shifting to an end-to-end approach based on deep
neural networks. The state of the art achieves impressive results for popular
language pairs such as English - French or English - Chinese. However for
English - Vietnamese the shortage of parallel corpora and expensive
hyper-parameter search present practical challenges to neural-based approaches.
This paper highlights our efforts on improving English-Vietnamese translations
in two directions: (1) Building the largest open Vietnamese - English corpus to
date, and (2) Extensive experiments with the latest neural models to achieve
the highest BLEU scores. Our experiments provide practical examples of
effectively employing different neural machine translation models with
low-resource language pairs.
| 2,018 | Computation and Language |
Almost-unsupervised Speech Recognition with Close-to-zero Resource Based
on Phonetic Structures Learned from Very Small Unpaired Speech and Text Data | Producing a large amount of annotated speech data for training ASR systems
remains difficult for the more than 95% of the world's languages that are
low-resourced. However, we note that human babies start to learn language from the
sounds of a small number of exemplar words without hearing a large amount of
data. We initiate some preliminary work in this direction in this paper. Audio
Word2Vec is used to obtain embeddings of spoken words which carry phonetic
information extracted from the signals. An autoencoder is used to generate
embeddings of text words based on the articulatory features for the phoneme
sequences. Both sets of embeddings for spoken and text words describe similar
phonetic structures among words in their respective latent spaces. A mapping
relation from the audio embeddings to text embeddings actually gives the
word-level ASR. This can be learned by aligning a small number of spoken words
and the corresponding text words in the embedding spaces. In the initial
experiments only 200 annotated spoken words and one hour of speech data without
annotation gave a word accuracy of 27.5%, which is low but a good starting
point.
| 2,018 | Computation and Language |
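A generic numpy sketch of the word-level mapping idea described above: fit a linear map from the audio-embedding space to the text-embedding space on a small set of paired words, then recognize a spoken word as the nearest text embedding. The embeddings here are random toys rather than Audio Word2Vec or articulatory-feature autoencoder outputs.

```python
# Learn an audio-to-text embedding mapping from a small paired set by least
# squares, then do word-level "ASR" by nearest-neighbor lookup in the text
# embedding space. All embeddings are synthetic toys for illustration.
import numpy as np

rng = np.random.default_rng(0)
d_audio, d_text, n_pairs, vocab = 20, 16, 200, 1000

audio_emb = rng.normal(size=(n_pairs, d_audio))           # paired spoken words
true_map = rng.normal(size=(d_audio, d_text))
text_emb = audio_emb @ true_map + 0.01 * rng.normal(size=(n_pairs, d_text))

# Least-squares fit of the audio-to-text mapping from the small paired set.
W, *_ = np.linalg.lstsq(audio_emb, text_emb, rcond=None)

# Recognize a new spoken word: map its audio embedding, take the nearest text word.
text_vocab = rng.normal(size=(vocab, d_text))
text_vocab[42] = audio_emb[0] @ true_map                   # plant the right answer
predicted = np.argmax(text_vocab @ (audio_emb[0] @ W))
print(predicted)  # expected: 42
```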
Exploring Neural Methods for Parsing Discourse Representation Structures | Neural methods have had several recent successes in semantic parsing, though
they have yet to face the challenge of producing meaning representations based
on formal semantics. We present a sequence-to-sequence neural semantic parser
that is able to produce Discourse Representation Structures (DRSs) for English
sentences with high accuracy, outperforming traditional DRS parsers. To
facilitate the learning of the output, we represent DRSs as a sequence of flat
clauses and introduce a method to verify that produced DRSs are well-formed and
interpretable. We compare models using characters and words as input and see
(somewhat surprisingly) that the former performs better than the latter. We
show that eliminating variable names from the output using De Bruijn-indices
increases parser performance. Adding silver training data boosts performance
even further.
| 2,018 | Computation and Language |
Subword Encoding in Lattice LSTM for Chinese Word Segmentation | We investigate a lattice LSTM network for Chinese word segmentation (CWS) to
utilize words or subwords. It integrates the character sequence features with
all subsequences information matched from a lexicon. The matched subsequences
serve as information shortcut tunnels which link their start and end characters
directly. Gated units are used to control the contribution of multiple input
links. Through formula derivation and comparison, we show that the lattice LSTM
is an extension of the standard LSTM with the ability to take multiple inputs.
The previous lattice LSTM model takes word embeddings as the lexicon input; we
show that subword encoding can give comparable performance and has the
benefit of not relying on any external segmenter. The contribution of lattice
LSTM comes from both the lexicon and pretrained embedding information; we find
that the lexicon information contributes more than the pretrained embedding
information through controlled experiments. Our experiments show that the
lattice structure with subword encoding gives competitive or better results
with previous state-of-the-art methods on four segmentation benchmarks.
Detailed analyses are conducted to compare the performance of word encoding and
subword encoding in lattice LSTM. We also investigate the performance of
lattice LSTM structure under different circumstances and when this model works
or fails.
| 2,018 | Computation and Language |
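A minimal NumPy sketch of the gating idea described above: when several matched subwords end at the same character, each candidate cell state gets its own gate, the gates are normalised, and the merged cell state is their weighted sum. This shows only the multi-input combination; the full lattice LSTM has additional gates and parameters.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def combine_lattice_inputs(char_cell, subword_cells, gate_logits):
    """char_cell: (d,) cell from the character LSTM; subword_cells: (k, d) cells carried
    over matched subword links; gate_logits: (k + 1,) raw gate scores."""
    candidates = np.vstack([char_cell[None, :], subword_cells])
    gates = softmax(gate_logits)[:, None]       # one normalised gate per input link
    return (gates * candidates).sum(axis=0)     # merged cell state for this character

d = 8
merged = combine_lattice_inputs(np.random.randn(d), np.random.randn(3, d), np.random.randn(4))
print(merged.shape)   # (8,)
```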
Towards End-to-end Automatic Code-Switching Speech Recognition | Speech recognition for mixed-language speech is difficult to fit into the
end-to-end framework due to the lack of data and overlapping phone sets, for
example in words such as "one" in English and "w\`an" in Chinese. We propose a
CTC-based end-to-end automatic speech recognition model for intra-sentential
English-Mandarin code-switching. The model is trained jointly on monolingual
datasets and then fine-tuned on the mixed-language corpus. During decoding, we
apply a beam search that combines CTC predictions with a language model score.
The proposed method is effective in leveraging monolingual corpora and detecting
language transitions, and it improves the CER by 5%.
| 2,018 | Computation and Language |
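A minimal sketch of the decoding step described above: a beam-search hypothesis is ranked by its CTC score plus a weighted language-model score (shallow fusion). The weight, length bonus and toy hypotheses are illustrative, not the paper's exact setup.

```python
def hypothesis_score(ctc_log_prob, lm_log_prob, num_tokens,
                     lm_weight=0.3, length_bonus=0.1):
    """Rank a partial hypothesis: acoustic (CTC) score + weighted LM score + length bonus."""
    return ctc_log_prob + lm_weight * lm_log_prob + length_bonus * num_tokens

# Example: two competing beam entries for a code-switched utterance (toy scores).
beam = [
    {"tokens": ["da", "dian", "hua", "to", "John"], "ctc": -4.2, "lm": -6.1},
    {"tokens": ["da", "dian", "hua", "two", "John"], "ctc": -4.0, "lm": -9.5},
]
best = max(beam, key=lambda h: hypothesis_score(h["ctc"], h["lm"], len(h["tokens"])))
print(" ".join(best["tokens"]))
```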
Prosodic entrainment in dialog acts | We examined prosodic entrainment in spoken dialogs separately for several
dialog acts in cooperative and competitive games. Entrainment was measured for
intonation features derived from a superpositional intonation stylization as
well as for rhythm features. The observed differences can be related to the
cooperative or competitive nature of the game, as well as to dialog act
properties such as intrinsic authority, supportiveness, and distributional
characteristics. In cooperative games, dialog acts with high authority given
by knowledge and with high frequency showed the most entrainment. The results
are discussed, among other things, with respect to the degree of active
entrainment control in cooperative behavior.
| 2,018 | Computation and Language |
Evaluating Text GANs as Language Models | Generative Adversarial Networks (GANs) are a promising approach for text
generation that, unlike traditional language models (LM), does not suffer from
the problem of ``exposure bias''. However, a major hurdle for understanding the
potential of GANs for text generation is the lack of a clear evaluation metric.
In this work, we propose to approximate the distribution of text generated by a
GAN, which permits evaluating them with traditional probability-based LM
metrics. We apply our approximation procedure on several GAN-based models and
show that they currently perform substantially worse than state-of-the-art LMs.
Our evaluation procedure promotes better understanding of the relation between
GANs and LMs, and can accelerate progress in GAN-based text generation.
| 2,019 | Computation and Language |
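One simple way to realise the idea in the abstract above is to fit a language model to a large sample of GAN-generated text and then score held-out real text with perplexity. The add-one-smoothed bigram model below is an illustrative simplification, not the authors' approximation procedure.

```python
import math
from collections import Counter

def fit_bigram_lm(sentences):
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])
        bigrams.update(zip(toks[:-1], toks[1:]))
    return unigrams, bigrams, len(vocab)

def perplexity(sentences, unigrams, bigrams, vocab_size):
    log_prob, count = 0.0, 0
    for sent in sentences:
        toks = ["<s>"] + sent.split() + ["</s>"]
        for prev, cur in zip(toks[:-1], toks[1:]):
            p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)  # add-one smoothing
            log_prob += math.log(p)
            count += 1
    return math.exp(-log_prob / count)

gan_samples = ["the cat sat on the mat", "the dog sat on the rug"]   # GAN output (toy)
real_text = ["the cat sat on the rug"]                                # held-out real data
print(perplexity(real_text, *fit_bigram_lm(gan_samples)))
```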
Unsupervised Neural Machine Translation Initialized by Unsupervised
Statistical Machine Translation | Recent work achieved remarkable results in training neural machine
translation (NMT) systems in a fully unsupervised way, with new and dedicated
architectures that rely on monolingual corpora only. In this work, we propose
to define unsupervised NMT (UNMT) as NMT trained with the supervision of
synthetic bilingual data. Our approach straightforwardly enables the use of
state-of-the-art architectures proposed for supervised NMT by replacing
human-made bilingual data with synthetic bilingual data for training. We
propose to initialize the training of UNMT with synthetic bilingual data
generated by unsupervised statistical machine translation (USMT). The UNMT
system is then incrementally improved using back-translation. Our preliminary
experiments show that our approach achieves a new state-of-the-art for
unsupervised machine translation on the WMT16 German--English news translation
task, for both translation directions.
| 2,018 | Computation and Language |
Spoken Language Understanding on the Edge | We consider the problem of performing Spoken Language Understanding (SLU) on
small devices typical of IoT applications. Our contributions are twofold.
First, we outline the design of an embedded, private-by-design SLU system and
show that it has performance on par with cloud-based commercial solutions.
Second, we release the datasets used in our experiments in the interest of
reproducibility and in the hope that they can prove useful to the SLU
community.
| 2,019 | Computation and Language |
Advancing PICO Element Detection in Biomedical Text via Deep Neural
Networks | In evidence-based medicine (EBM), defining a clinical question in terms of
the specific patient problem helps physicians efficiently identify
appropriate resources and search for the best available evidence for medical
treatment. In order to formulate a well-defined, focused clinical question, a
framework called PICO is widely used, which identifies the sentences in a given
medical text that belong to the four components typically reported in clinical
trials: Participants/Problem (P), Intervention (I), Comparison (C) and Outcome
(O). In this work, we propose a novel deep learning model for recognizing PICO
elements in biomedical abstracts. Based on the previous state-of-the-art
bidirectional long short-term memory (biLSTM) plus conditional random field
(CRF) architecture, we add another layer of biLSTM upon the sentence
representation vectors so that the contextual information from surrounding
sentences can be gathered to help infer the interpretation of the current one.
In addition, we propose two methods to further generalize and improve the
model: adversarial training and unsupervised pre-training over large corpora.
We tested our proposed approach on two benchmark datasets. One is the
PubMed-PICO dataset, where our best results outperform the previous best by
5.5%, 7.9%, and 5.8% for P, I, and O elements in terms of F1 score,
respectively. For the other dataset, NICTA-PIBOSO, the improvements
for P/I/O elements are 2.4%, 13.6%, and 1.0% in F1 score, respectively.
Overall, our proposed deep learning model can obtain unprecedented PICO element
detection accuracy while avoiding the need for any manual feature selection.
| 2,019 | Computation and Language |
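A minimal PyTorch sketch of the architectural addition described above: a sentence-level biLSTM stacked on top of per-sentence representation vectors, so each sentence's PICO label can use the surrounding context. Dimensions are illustrative, and the CRF layer is omitted.

```python
import torch
import torch.nn as nn

class SentenceContextEncoder(nn.Module):
    def __init__(self, sent_dim=256, hidden=128, num_labels=5):
        super().__init__()
        # biLSTM over the sequence of sentence vectors within one abstract
        self.context_lstm = nn.LSTM(sent_dim, hidden, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, num_labels)   # emissions for a CRF, or plain logits

    def forward(self, sent_vecs):
        # sent_vecs: (batch, num_sentences, sent_dim), e.g. from a token-level biLSTM
        context, _ = self.context_lstm(sent_vecs)
        return self.classifier(context)                        # (batch, num_sentences, num_labels)

logits = SentenceContextEncoder()(torch.randn(2, 10, 256))
print(logits.shape)   # torch.Size([2, 10, 5])
```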
Learning Cross-Lingual Sentence Representations via a Multi-task
Dual-Encoder Model | A significant roadblock in multilingual neural language modeling is the lack
of labeled non-English data. One potential method for overcoming this issue is
learning cross-lingual text representations that can be used to transfer the
performance from training on English tasks to non-English tasks, despite little
to no task-specific non-English data. In this paper, we explore a natural setup
for learning cross-lingual sentence representations: the dual-encoder. We
provide a comprehensive evaluation of our cross-lingual representations on a
number of monolingual, cross-lingual, and zero-shot/few-shot learning tasks,
and also give an analysis of different learned cross-lingual embedding spaces.
| 2,019 | Computation and Language |
ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading
Comprehension | We present a large-scale dataset, ReCoRD, for machine reading comprehension
requiring commonsense reasoning. Experiments on this dataset demonstrate that
the performance of state-of-the-art MRC systems falls far behind human
performance. ReCoRD represents a challenge for future research to bridge the
gap between human and machine commonsense reading comprehension. ReCoRD is
available at http://nlp.jhu.edu/record.
| 2,018 | Computation and Language |
Topic-Specific Sentiment Analysis Can Help Identify Political Ideology | Ideological leanings of an individual can often be gauged by the sentiment
one expresses about different issues. We propose a simple framework that
represents a political ideology as a distribution of sentiment polarities
towards a set of topics. This representation can then be used to detect
ideological leanings of documents (speeches, news articles, etc.) based on the
sentiments expressed towards different topics. Experiments performed using a
widely used dataset show the promise of our proposed approach that achieves
comparable performance to other methods despite being much simpler and more
interpretable.
| 2,018 | Computation and Language |
Combining Distant and Direct Supervision for Neural Relation Extraction | In relation extraction with distant supervision, noisy labels make it
difficult to train quality models. Previous neural models addressed this
problem using an attention mechanism that attends to sentences that are likely
to express the relations. We improve such models by combining the distant
supervision data with additional directly supervised data, which we use as
supervision for the attention weights. We find that joint training on both
types of supervision leads to a better model because it improves the model's
ability to identify noisy sentences. In addition, we find that sigmoidal
attention weights with max pooling achieve better performance than the
commonly used weighted-average attention in this setup. Our proposed method
achieves a new state-of-the-art result on the widely used FB-NYT dataset.
| 2,019 | Computation and Language |
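A minimal PyTorch sketch of the attention variant mentioned above: independent sigmoid weights per sentence followed by max pooling across the bag, instead of a softmax-weighted average. Shapes and the scoring layer are illustrative.

```python
import torch
import torch.nn as nn

class SigmoidMaxAttention(nn.Module):
    def __init__(self, dim=200):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # per-sentence attention score (supervisable)

    def forward(self, sent_reprs):
        # sent_reprs: (num_sentences_in_bag, dim) for one entity pair
        weights = torch.sigmoid(self.score(sent_reprs))   # (n, 1), not normalised across the bag
        weighted = weights * sent_reprs                    # down-weight likely-noisy sentences
        bag_repr, _ = weighted.max(dim=0)                  # max pool over sentences
        return bag_repr, weights.squeeze(-1)

bag, w = SigmoidMaxAttention()(torch.randn(7, 200))
print(bag.shape, w.shape)   # torch.Size([200]) torch.Size([7])
```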
Stress-Testing Neural Models of Natural Language Inference with
Multiply-Quantified Sentences | Standard evaluations of deep learning models for semantics using naturalistic
corpora are limited in what they can tell us about the fidelity of the learned
representations, because the corpora rarely come with good measures of semantic
complexity. To overcome this limitation, we present a method for generating
data sets of multiply-quantified natural language inference (NLI) examples in
which semantic complexity can be precisely characterized, and we use this
method to show that a variety of common architectures for NLI inevitably fail
to encode crucial information; only a model with forced lexical alignments
avoids this damaging information loss.
| 2,018 | Computation and Language |
GraphIE: A Graph-Based Framework for Information Extraction | Most modern Information Extraction (IE) systems are implemented as sequential
taggers and only model local dependencies. Non-local and non-sequential context
is, however, a valuable source of information to improve predictions. In this
paper, we introduce GraphIE, a framework that operates over a graph
representing a broad set of dependencies between textual units (i.e. words or
sentences). The algorithm propagates information between connected nodes
through graph convolutions, generating a richer representation that can be
exploited to improve word-level predictions. Evaluation on three different
tasks --- namely textual, social media and visual information extraction ---
shows that GraphIE consistently outperforms the state-of-the-art sequence
tagging model by a significant margin.
| 2,019 | Computation and Language |
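A minimal NumPy sketch of the graph-convolution step GraphIE relies on: each textual unit's representation is updated from its neighbours through a normalised adjacency matrix. The symmetric normalisation and single layer are standard GCN choices, not necessarily the paper's exact formulation.

```python
import numpy as np

def gcn_layer(H, A, W):
    """H: (n, d) node features, A: (n, n) adjacency, W: (d, d_out) weights."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt        # symmetric normalisation
    return np.maximum(0.0, A_norm @ H @ W)          # ReLU(A_norm H W)

H = np.random.randn(5, 16)       # 5 textual units (words or sentences)
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
W = np.random.randn(16, 16)
print(gcn_layer(H, A, W).shape)  # (5, 16)
```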
Attention-based sequence-to-sequence model for speech recognition:
development of state-of-the-art system on LibriSpeech and its application to
non-native English | Recent research has shown that attention-based sequence-to-sequence models
such as Listen, Attend, and Spell (LAS) yield comparable results to
state-of-the-art ASR systems on various tasks. In this paper, we describe the
development of such a system and demonstrate its performance on two tasks:
first, we achieve a new state-of-the-art word error rate of 3.43% on the
test-clean subset of LibriSpeech English data; second, on non-native English speech,
including both read speech and spontaneous speech, we obtain very competitive
results compared to a conventional system built with the most updated Kaldi
recipe.
| 2,018 | Computation and Language |
Towards End-to-End Code-Switching Speech Recognition | Code-switching speech recognition has attracted an increasing interest
recently, but the need for expert linguistic knowledge has always been a big
issue. End-to-end automatic speech recognition (ASR) simplifies the building of
ASR systems considerably by predicting graphemes or characters directly from
acoustic input. At the same time, the need for expert linguistic knowledge is
also eliminated, which makes it an attractive choice for code-switching ASR.
This paper presents a hybrid CTC-Attention based end-to-end Mandarin-English
code-switching (CS) speech recognition system and studies the effect of hybrid
CTC-Attention based models, different modeling units, the inclusion of language
identification and different decoding strategies on the task of code-switching
ASR. On the SEAME corpus, our system achieves a mixed error rate (MER) of
34.24%.
| 2,018 | Computation and Language |
Attentive Neural Network for Named Entity Recognition in Vietnamese | We propose an attentive neural network for the task of named entity
recognition in Vietnamese. The proposed attentive neural model makes use of
character-based language models and word embeddings to encode words as vector
representations. A neural network architecture of encoder, attention, and
decoder layers is then utilized to encode knowledge of input sentences and to
label entity tags. The experimental results show that the proposed attentive
neural network achieves state-of-the-art results on the benchmark named
entity recognition datasets in Vietnamese in comparison to both hand-crafted
features based models and neural models.
| 2,019 | Computation and Language |
End-to-End Feedback Loss in Speech Chain Framework via Straight-Through
Estimator | The speech chain mechanism integrates automatic speech recognition (ASR) and
text-to-speech synthesis (TTS) modules into a single cycle during training. In
our previous work, we applied the speech chain mechanism as a semi-supervised
learning method. It provides the ability for ASR and TTS to assist each other
when they receive unpaired data, letting them infer the missing pair and
optimize the model with a reconstruction loss. If we only have speech without transcription,
ASR generates the most likely transcription from the speech data, and then TTS
uses the generated transcription to reconstruct the original speech features.
However, in previous papers, we limited back-propagation to the closest module,
the TTS part. One reason is that back-propagating the error through the ASR is
challenging because the output of the ASR consists of discrete tokens, creating
non-differentiability between the TTS and ASR. In this paper,
we address this problem and describe how to thoroughly train a speech chain
end-to-end for reconstruction loss using a straight-through estimator (ST).
Experimental results revealed that, with sampling from ST-Gumbel-Softmax, we
were able to update ASR parameters and improve the ASR performances by 11\%
relative CER reduction compared to the baseline.
| 2,018 | Computation and Language |
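A minimal PyTorch sketch of the straight-through Gumbel-Softmax trick used above to back-propagate through the ASR's discrete token outputs: the forward pass emits a hard one-hot sample, while the backward pass uses the soft distribution's gradient. The vocabulary size and temperature are illustrative.

```python
import torch
import torch.nn.functional as F

def st_gumbel_softmax(logits, temperature=1.0):
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    y_soft = F.softmax((logits + gumbel) / temperature, dim=-1)
    index = y_soft.argmax(dim=-1, keepdim=True)
    y_hard = torch.zeros_like(y_soft).scatter_(-1, index, 1.0)
    # straight-through: hard one-hot sample in the forward pass, soft gradient in the backward pass
    return (y_hard - y_soft).detach() + y_soft

logits = torch.randn(4, 30, requires_grad=True)    # 4 timesteps, 30-token vocabulary (toy)
tokens = st_gumbel_softmax(logits, temperature=0.8)
tokens.sum().backward()                             # gradients flow back to `logits`
print(tokens.shape, logits.grad is not None)
```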
Giving Space to Your Message: Assistive Word Segmentation for the
Electronic Typing of Digital Minorities | For readability and disambiguation, appropriate word segmentation is
recommended for written documents, and the same holds for digitized texts. If
the language is agglutinative yet far from scriptio continua, as in Korean, the
problem becomes more significant. However, some device users find it challenging
to communicate via keystrokes, whether because of a handicap or simply a lack of
typing skill. In this study, we propose a real-time assistive technology that
performs automatic word segmentation, designed for digital minorities who are
not familiar with electronic typing. We propose a data-driven system trained on
a spoken Korean corpus containing various non-canonical expressions and
dialects, so that contextual information is taken into account. Through
quantitative and qualitative comparisons with other text processing toolkits, we
show the reliability of the proposed system and its fit with colloquial and
non-normalized texts, fulfilling its aim as an assistive technology.
| 2,021 | Computation and Language |
WikiConv: A Corpus of the Complete Conversational History of a Large
Online Collaborative Community | We present a corpus that encompasses the complete history of conversations
between contributors to Wikipedia, one of the largest online collaborative
communities. By recording the intermediate states of conversations---including
not only comments and replies, but also their modifications, deletions and
restorations---this data offers an unprecedented view of online conversation.
This level of detail supports new research questions pertaining to the process
(and challenges) of large-scale online collaboration. We illustrate the corpus'
potential with two case studies that highlight new perspectives on earlier
work. First, we explore how a person's conversational behavior depends on how
they relate to the discussion's venue. Second, we show that community
moderation of toxic behavior happens at a higher rate than previously
estimated. Finally, the reconstruction framework is designed to be language
agnostic, and we show that it can extract high quality conversational data in
both Chinese and English.
| 2,018 | Computation and Language |
SURFACE: Semantically Rich Fact Validation with Explanations | Judging the veracity of a sentence making one or more claims is an important
and challenging problem with many dimensions. The recent FEVER task asked
participants to classify input sentences as either SUPPORTED, REFUTED or
NotEnoughInfo using Wikipedia as a source of true facts. SURFACE does this task
and explains its decision through a selection of sentences from the trusted
source. Our multi-task neural approach uses semantic lexical frames from
FrameNet to jointly (i) find relevant evidential sentences in the trusted
source and (ii) use them to classify the input sentence's veracity. An
evaluation of our efficient three-parameter model on the FEVER dataset showed
an improvement of 90% over the state-of-the-art baseline on retrieving relevant
sentences and a 70% relative improvement in classification.
| 2,018 | Computation and Language |
Convolutional Self-Attention Network | Self-attention network (SAN) has recently attracted increasing interest due
to its fully parallelized computation and flexibility in modeling dependencies.
It can be further enhanced with multi-headed attention mechanism by allowing
the model to jointly attend to information from different representation
subspaces at different positions (Vaswani et al., 2017). In this work, we
propose a novel convolutional self-attention network (CSAN), which offers SAN
the abilities to 1) capture neighboring dependencies, and 2) model the
interaction between multiple attention heads. Experimental results on WMT14
English-to-German translation task demonstrate that the proposed approach
outperforms both the strong Transformer baseline and other existing works on
enhancing the locality of SAN. Compared with previous work, our model does not
introduce any new parameters.
| 2,019 | Computation and Language |
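A minimal PyTorch sketch of one of the two ideas above: restricting self-attention to a local window around each position with an additive mask. The window size and dimensions are illustrative, and the head-interaction component of CSAN is not shown.

```python
import torch
import torch.nn.functional as F

def local_self_attention(Q, K, V, window=2):
    n, d = Q.shape
    scores = Q @ K.t() / d ** 0.5                        # (n, n) attention logits
    pos = torch.arange(n)
    mask = (pos[None, :] - pos[:, None]).abs() > window  # True outside the local window
    scores = scores.masked_fill(mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ V

x = torch.randn(8, 64)                                   # 8 tokens, 64-dim representations
out = local_self_attention(x, x, x, window=2)
print(out.shape)                                         # torch.Size([8, 64])
```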
Cross-Lingual Transfer Learning for Multilingual Task Oriented Dialog | One of the first steps in the utterance interpretation pipeline of many
task-oriented conversational AI systems is to identify user intents and the
corresponding slots. Since data collection for machine learning models for this
task is time-consuming, it is desirable to make use of existing data in a
high-resource language to train models in low-resource languages. However,
development of such models has largely been hindered by the lack of
multilingual training data. In this paper, we present a new data set of 57k
annotated utterances in English (43k), Spanish (8.6k) and Thai (5k) across the
domains weather, alarm, and reminder. We use this data set to evaluate three
different cross-lingual transfer methods: (1) translating the training data,
(2) using cross-lingual pre-trained embeddings, and (3) a novel method of using
a multilingual machine translation encoder as contextual word representations.
We find that given several hundred training examples in the target
language, the latter two methods outperform translating the training data.
Further, in very low-resource settings, multilingual contextual word
representations give better results than using cross-lingual static embeddings.
We also compare the cross-lingual methods to using monolingual resources in the
form of contextual ELMo representations and find that given just small amounts
of target language data, this method outperforms all cross-lingual methods,
which highlights the need for more sophisticated cross-lingual methods.
| 2,019 | Computation and Language |
Picking Apart Story Salads | During natural disasters and conflicts, information about what happened is
often confusing, messy, and distributed across many sources. We would like to
be able to automatically identify relevant information and assemble it into
coherent narratives of what happened. To make this task accessible to neural
models, we introduce Story Salads, mixtures of multiple documents that can be
generated at scale. By exploiting the Wikipedia hierarchy, we can generate
salads that exhibit challenging inference problems. Story salads give rise to a
novel, challenging clustering task, where the objective is to group sentences
from the same narratives. We demonstrate that simple bag-of-words similarity
clustering falls short on this task and that it is necessary to take into
account global context and coherence.
| 2,018 | Computation and Language |
You May Not Need Attention | In NMT, how far can we get without attention and without separate encoding
and decoding? To answer that question, we introduce a recurrent neural
translation model that does not use attention and does not have a separate
encoder and decoder. Our eager translation model is low-latency, writing target
tokens as soon as it reads the first source token, and uses constant memory
during decoding. It performs on par with the standard attention-based model of
Bahdanau et al. (2014), and better on long sentences.
| 2,018 | Computation and Language |
Extracting Linguistic Resources from the Web for Concept-to-Text
Generation | Many concept-to-text generation systems require domain-specific linguistic
resources to produce high quality texts, but manually constructing these
resources can be tedious and costly. Focusing on NaturalOWL, a publicly
available state-of-the-art natural language generator for OWL ontologies, we
propose methods to extract from the Web sentence plans and natural language
names, two of the most important types of domain-specific linguistic resources
used by the generator. Experiments show that texts generated using linguistic
resources extracted by our methods in a semi-automatic manner, with minimal
human involvement, are perceived as being almost as good as texts generated
using manually authored linguistic resources, and much better than texts
produced by using linguistic resources extracted from the relation and entity
identifiers of the ontology.
| 2,018 | Computation and Language |
Improving Machine Reading Comprehension with General Reading Strategies | Reading strategies have been shown to improve comprehension levels,
especially for readers lacking adequate prior knowledge. Just as the process of
knowledge accumulation is time-consuming for human readers, it is
resource-demanding to impart rich general domain knowledge into a deep language
model via pre-training. Inspired by reading strategies identified in cognitive
science, and given limited computational resources -- just a pre-trained model
and a fixed number of training instances -- we propose three general strategies
aimed to improve non-extractive machine reading comprehension (MRC): (i) BACK
AND FORTH READING that considers both the original and reverse order of an
input sequence, (ii) HIGHLIGHTING, which adds a trainable embedding to the text
embedding of tokens that are relevant to the question and candidate answers,
and (iii) SELF-ASSESSMENT that generates practice questions and candidate
answers directly from the text in an unsupervised manner.
By fine-tuning a pre-trained language model (Radford et al., 2018) with our
proposed strategies on the largest general domain multiple-choice MRC dataset
RACE, we obtain a 5.8% absolute increase in accuracy over the previous best
result achieved by the same pre-trained model fine-tuned on RACE without the
use of strategies. We further fine-tune the resulting model on a target MRC
task, leading to an absolute improvement of 6.2% in average accuracy over
previous state-of-the-art approaches on six representative non-extractive MRC
datasets from different domains (i.e., ARC, OpenBookQA, MCTest, SemEval-2018
Task 11, ROCStories, and MultiRC). These results demonstrate the effectiveness
of our proposed strategies and the versatility and general applicability of our
fine-tuned models that incorporate these strategies. Core code is available at
https://github.com/nlpdata/strategy/.
| 2,019 | Computation and Language |
Generating Texts with Integer Linear Programming | Concept-to-text generation typically employs a pipeline architecture, which
often leads to suboptimal texts. Content selection, for example, may greedily
select the most important facts, which may require, however, too many words to
express, and this may be undesirable when space is limited or expensive.
Selecting other facts, possibly only slightly less important, may allow the
lexicalization stage to use much fewer words, or to report more facts in the
same space. Decisions made during content selection and lexicalization may also
lead to more or fewer sentence aggregation opportunities, affecting the length
and readability of the resulting texts. Building upon a publicly available
state-of-the-art natural language generator for Semantic Web ontologies, this
article presents an Integer Linear Programming model that, unlike pipeline
architectures, jointly considers choices available in content selection,
lexicalization, and sentence aggregation to avoid greedy local decisions and
produce more compact texts, i.e., texts that report more facts per word.
Compact texts are desirable, for example, when generating advertisements to be
included in Web search results, or when summarizing structured information in
limited space. An extended version of the proposed model also considers a
limited form of referring expression generation and avoids redundant sentences.
An approximation of the two models can be used when longer texts need to be
generated. Experiments with three ontologies confirm that the proposed models
lead to more compact texts, compared to pipeline systems, with no deterioration
or with improvements in the perceived quality of the generated texts.
| 2,018 | Computation and Language |
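A toy sketch, using the PuLP solver, of the kind of joint decision the model above makes: choose which facts to express so as to maximise importance within a word budget. The real model also covers lexicalization and sentence aggregation; the facts, weights and word costs below are invented for illustration.

```python
import pulp

facts = {                 # fact id -> (importance, words needed to express it)
    "f1": (5, 9),
    "f2": (4, 3),
    "f3": (3, 4),
    "f4": (2, 2),
}
budget = 10               # maximum number of words in the generated text

prob = pulp.LpProblem("content_selection", pulp.LpMaximize)
x = {f: pulp.LpVariable(f, cat="Binary") for f in facts}
prob += pulp.lpSum(facts[f][0] * x[f] for f in facts)             # total importance (objective)
prob += pulp.lpSum(facts[f][1] * x[f] for f in facts) <= budget   # word budget constraint
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([f for f in facts if x[f].value() == 1])                    # e.g. ['f2', 'f3', 'f4']
```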
Aligning Very Small Parallel Corpora Using Cross-Lingual Word Embeddings
and a Monogamy Objective | Count-based word alignment methods, such as the IBM models or fast-align,
struggle on very small parallel corpora. We therefore present an alternative
approach based on cross-lingual word embeddings (CLWEs), which are trained on
purely monolingual data. Our main contribution is an unsupervised objective to
adapt CLWEs to parallel corpora. In experiments on between 25 and 500
sentences, our method outperforms fast-align. We also show that our fine-tuning
objective consistently improves a CLWE-only baseline.
| 2,018 | Computation and Language |
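A minimal NumPy sketch of the alignment step described above: score every source/target word pair by cosine similarity of their cross-lingual embeddings and greedily pick one-to-one links. The adaptation (monogamy) objective itself is not shown, and the embeddings are random stand-ins.

```python
import numpy as np

def greedy_align(src_emb, tgt_emb):
    sims = (src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)) @ \
           (tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)).T
    links, used_src, used_tgt = [], set(), set()
    # repeatedly take the highest-similarity pair whose words are still unaligned
    for i, j in sorted(np.ndindex(*sims.shape), key=lambda ij: -sims[ij]):
        if i not in used_src and j not in used_tgt:
            links.append((i, j))
            used_src.add(i)
            used_tgt.add(j)
    return links

src = np.random.randn(4, 50)    # 4 source-sentence tokens
tgt = np.random.randn(5, 50)    # 5 target-sentence tokens
print(greedy_align(src, tgt))
```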
Effective Feature Representation for Clinical Text Concept Extraction | Crucial information about the practice of healthcare is recorded only in
free-form text, which creates an enormous opportunity for high-impact NLP.
However, annotated healthcare datasets tend to be small and expensive to
obtain, which raises the question of how to make maximally efficient use of
the available data. To this end, we develop an LSTM-CRF model for combining
unsupervised word representations and hand-built feature representations
derived from publicly available healthcare ontologies. We show that this
combined model yields superior performance on five datasets of diverse kinds of
healthcare text (clinical, social, scientific, commercial). Each involves the
labeling of complex, multi-word spans that pick out different healthcare
concepts. We also introduce a new labeled dataset for identifying the treatment
relations between drugs and diseases.
| 2,019 | Computation and Language |
A task in a suit and a tie: paraphrase generation with semantic
augmentation | Paraphrasing is rooted in semantics. We show the effectiveness of
transformers (Vaswani et al. 2017) for paraphrase generation and further
improvements by incorporating PropBank labels via a multi-encoder. Evaluating
on MSCOCO and WikiAnswers, we find that transformers are fast and effective,
and that semantic augmentation for both transformers and LSTMs leads to sizable
2-3 point gains in BLEU, METEOR and TER. More importantly, we find surprisingly
large gains on human evaluations compared to previous models. Nevertheless,
manual inspection of generated paraphrases reveals ample room for improvement:
even our best model produces human-acceptable paraphrases for only 28% of
captions from the CHIA dataset (Sharma et al. 2018), and it fails spectacularly
on sentences from Wikipedia. Overall, these results point to the potential for
incorporating semantics in the task while highlighting the need for stronger
evaluation.
| 2,019 | Computation and Language |
Measuring Issue Ownership using Word Embeddings | Sentiment and topic analysis are common methods used for social media
monitoring. Essentially, these methods answer questions such as "what is
being talked about, regarding X" and "what do people feel, regarding X". In
this paper, we investigate another avenue for social media monitoring, namely
issue ownership and agenda setting, which are concepts from political science
that have been used to explain voter choice and electoral outcomes. We argue
that issue alignment and agenda setting can be seen as a kind of semantic
source similarity of the kind "how similar is source A to issue owner P, when
talking about issue X", and as such can be measured using word/document
embedding techniques. We present work in progress towards measuring that kind
of conditioned similarity, and introduce a new notion of similarity for
predictive embeddings. We then test this method by measuring the similarity
between politically aligned media and political parties, conditioned on
bloc-specific issues.
| 2,018 | Computation and Language |
Dirichlet Variational Autoencoder for Text Modeling | We introduce an improved variational autoencoder (VAE) for text modeling with
topic information explicitly modeled as a Dirichlet latent variable. By
providing the proposed model with topic awareness, it becomes better at
reconstructing input texts. Furthermore, due to the inherent interactions
between the newly introduced Dirichlet variable and the conventional
multivariate Gaussian variable, the model is less prone to KL divergence
vanishing. We derive the variational lower bound for the new model and conduct
experiments on four different data sets. The results show that the proposed
model is superior at text reconstruction across the latent space and
classifications on learned representations have higher test accuracies.
| 2,018 | Computation and Language |
ATOMIC: An Atlas of Machine Commonsense for If-Then Reasoning | We present ATOMIC, an atlas of everyday commonsense reasoning, organized
through 877k textual descriptions of inferential knowledge. Compared to
existing resources that center around taxonomic knowledge, ATOMIC focuses on
inferential knowledge organized as typed if-then relations with variables
(e.g., "if X pays Y a compliment, then Y will likely return the compliment").
We propose nine if-then relation types to distinguish causes vs. effects,
agents vs. themes, voluntary vs. involuntary events, and actions vs. mental
states. By generatively training on the rich inferential knowledge described in
ATOMIC, we show that neural models can acquire simple commonsense capabilities
and reason about previously unseen events. Experimental results demonstrate
that multitask models that incorporate the hierarchical structure of if-then
relation types lead to more accurate inference compared to models trained in
isolation, as measured by both automatic and human evaluation.
| 2,019 | Computation and Language |
DOLORES: Deep Contextualized Knowledge Graph Embeddings | We introduce a new method DOLORES for learning knowledge graph embeddings
that effectively captures contextual cues and dependencies among entities and
relations. First, we note that short paths on knowledge graphs consisting of
chains of entities and relations can encode valuable information regarding
their contextual usage. We operationalize this notion by representing knowledge
graphs not as a collection of triples but as a collection of entity-relation
chains, and learn embeddings for entities and relations using deep neural
models that capture such contextual usage. In particular, our model is based on
Bi-Directional LSTMs and learns deep representations of entities and relations
from constructed entity-relation chains. We show that these representations can
very easily be incorporated into existing models to significantly advance the
state of the art on several knowledge graph prediction tasks like link
prediction, triple classification, and missing relation type prediction (in
some cases by at least 9.5%).
| 2,020 | Computation and Language |
Dial2Desc: End-to-end Dialogue Description Generation | We first propose a new task named Dialogue Description (Dial2Desc). Unlike
other existing dialogue summarization tasks such as meeting summarization, we
do not maintain the natural flow of a conversation but describe an object or an
action of what people are talking about. The Dial2Desc system takes a dialogue
text as input, then outputs a concise description of the object or the action
involved in this conversation. After reading this short description, one can
quickly extract the main topic of a conversation and build a clear picture in
their mind, without reading or listening to the whole conversation. Based on the
existing dialogue dataset, we build a new dataset, which has more than one
hundred thousand dialogue-description pairs. As a step forward, we demonstrate
that one can get more accurate and descriptive results using a new neural
attentive model that exploits the interaction between utterances from different
speakers, compared with other baselines.
| 2,018 | Computation and Language |
Towards Explainable NLP: A Generative Explanation Framework for Text
Classification | Building explainable systems is a critical problem in the field of Natural
Language Processing (NLP), since most machine learning models provide no
explanations for the predictions. Existing approaches for explainable machine
learning systems tend to focus on interpreting the outputs or the connections
between inputs and outputs. However, the fine-grained information is often
ignored, and the systems do not explicitly generate the human-readable
explanations. To alleviate this problem, we propose a novel generative
explanation framework that learns to make classification decisions and generate
fine-grained explanations at the same time. More specifically, we introduce the
explainable factor and the minimum risk training approach that learn to
generate more reasonable explanations. We construct two new datasets that
contain summaries, rating scores, and fine-grained reasons. We conduct
experiments on both datasets, comparing with several strong neural network
baseline systems. Experimental results show that our method surpasses all
baselines on both datasets, and is able to generate concise explanations at the
same time.
| 2,019 | Computation and Language |
MOHONE: Modeling Higher Order Network Effects in KnowledgeGraphs via
Network Infused Embeddings | Many knowledge graph embedding methods operate on triples and are therefore
implicitly limited by a very local view of the entire knowledge graph. We
present a new framework MOHONE to effectively model higher order network
effects in knowledge-graphs, thus enabling one to capture varying degrees of
network connectivity (from the local to the global). Our framework is generic,
explicitly models the network scale, and captures two different aspects of
similarity in networks: (a) shared local neighborhood and (b) structural
role-based similarity. First, we introduce methods that learn network
representations of entities in the knowledge graph capturing these varied
aspects of similarity. We then propose a fast, efficient method to incorporate
the information captured by these network representations into existing
knowledge graph embeddings. We show that our method consistently and
significantly improves the performance on link prediction of several different
knowledge-graph embedding methods including TRANSE, TRANSD, DISTMULT, and
COMPLEX (by at least 4 points, or 17% in some cases).
| 2,018 | Computation and Language |
Towards Empathetic Open-domain Conversation Models: a New Benchmark and
Dataset | One challenge for dialogue agents is recognizing feelings in the conversation
partner and replying accordingly, a key communicative skill. While it is
straightforward for humans to recognize and acknowledge others' feelings in a
conversation, this is a significant challenge for AI systems due to the paucity
of suitable publicly-available datasets for training and evaluation. This work
proposes a new benchmark for empathetic dialogue generation and
EmpatheticDialogues, a novel dataset of 25k conversations grounded in emotional
situations. Our experiments indicate that dialogue models that use our dataset
are perceived to be more empathetic by human evaluators, compared to models
merely trained on large-scale Internet conversation data. We also present
empirical comparisons of dialogue model adaptations for empathetic responding,
leveraging existing models or datasets without requiring lengthy re-training of
the full model.
| 2,019 | Computation and Language |
Understanding Learning Dynamics Of Language Models with SVCCA | Research has shown that neural models implicitly encode linguistic features,
but there has been no research showing \emph{how} these encodings arise as the
models are trained. We present the first study on the learning dynamics of
neural language models, using a simple and flexible analysis method called
Singular Vector Canonical Correlation Analysis (SVCCA), which enables us to
compare learned representations across time and across models, without the need
to evaluate directly on annotated data. We probe the evolution of syntactic,
semantic, and topic representations and find that part-of-speech is learned
earlier than topic; that recurrent layers become more similar to those of a
tagger during training; and that embedding layers become less similar. Our results and
methods could inform better learning algorithms for NLP models, possibly to
incorporate linguistic information more effectively.
| 2,020 | Computation and Language |
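A rough sketch of the SVCCA comparison used above: reduce two activation matrices with SVD, then measure how correlated the resulting subspaces are with CCA (scikit-learn's implementation here). Dimensions, the number of kept components and the random activations are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def svcca_similarity(acts_a, acts_b, keep=20):
    # acts_*: (num_datapoints, num_neurons) activations from two checkpoints or models
    _, _, Vta = np.linalg.svd(acts_a - acts_a.mean(0), full_matrices=False)
    _, _, Vtb = np.linalg.svd(acts_b - acts_b.mean(0), full_matrices=False)
    A = acts_a @ Vta[:keep].T                    # SVD-reduced views of each representation
    B = acts_b @ Vtb[:keep].T
    cca = CCA(n_components=keep, max_iter=2000)
    A_c, B_c = cca.fit_transform(A, B)
    corrs = [np.corrcoef(A_c[:, i], B_c[:, i])[0, 1] for i in range(keep)]
    return float(np.mean(corrs))                 # mean canonical correlation

acts_epoch1 = np.random.randn(500, 100)
acts_epoch5 = np.random.randn(500, 100)
print(svcca_similarity(acts_epoch1, acts_epoch5))
```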
Textbook Question Answering with Multi-modal Context Graph Understanding
and Self-supervised Open-set Comprehension | In this work, we introduce a novel algorithm for solving the textbook
question answering (TQA) task which describes more realistic QA problems
compared to other recent tasks. We mainly focus on two related issues with
analysis of the TQA dataset. First, solving the TQA problems requires
comprehending multi-modal contexts in complicated input data. To tackle this issue
of extracting knowledge features from long text lessons and merging them with
visual features, we establish a context graph from texts and images, and
propose a new module f-GCN based on graph convolutional networks (GCN). Second,
scientific terms are not spread across the chapters, and subjects are split in the
TQA dataset. To overcome this so-called "out-of-domain" issue, before learning
QA problems, we introduce a novel self-supervised open-set learning process
without any annotations. The experimental results show that our model
significantly outperforms prior state-of-the-art methods. Moreover, ablation
studies validate that both methods of incorporating f-GCN for extracting
knowledge from multi-modal contexts and our newly proposed self-supervised
learning process are effective for TQA problems.
| 2,019 | Computation and Language |
Spelling Error Correction Using a Nested RNN Model and Pseudo Training
Data | We propose a nested recurrent neural network (nested RNN) model for English
spelling error correction and generate pseudo data based on phonetic similarity
to train it. The model fuses orthographic information and context as a whole
and is trained in an end-to-end fashion. This avoids feature engineering and
does not rely on a noisy channel model as in traditional methods. Experiments
show that the proposed method is superior to existing systems in correcting
spelling errors.
| 2,018 | Computation and Language |
Progressive Memory Banks for Incremental Domain Adaptation | This paper addresses the problem of incremental domain adaptation (IDA) in
natural language processing (NLP). We assume each domain comes one after
another, and that we can only access data in the current domain. The goal of
IDA is to build a unified model performing well on all the domains that we have
encountered. We adopt the recurrent neural network (RNN) widely used in NLP,
but augment it with a directly parameterized memory bank, which is retrieved by
an attention mechanism at each step of RNN transition. The memory bank provides
a natural way of IDA: when adapting our model to a new domain, we progressively
add new slots to the memory bank, which increases the number of parameters, and
thus the model capacity. We learn the new memory slots and fine-tune existing
parameters by back-propagation. Experimental results show that our approach
achieves significantly better performance than fine-tuning alone. Compared with
expanding hidden states, our approach is more robust for old domains, shown by
both empirical and theoretical results. Our model also outperforms previous
work of IDA including elastic weight consolidation and progressive neural
networks in the experiments.
| 2,020 | Computation and Language |
GlobalTrait: Personality Alignment of Multilingual Word Embeddings | We propose a multilingual model to recognize Big Five Personality traits from
text data in four different languages: English, Spanish, Dutch and Italian. Our
analysis shows that words having a similar semantic meaning in different
languages do not necessarily correspond to the same personality traits.
Therefore, we propose a personality alignment method, GlobalTrait, which has a
mapping for each trait from the source language to the target language
(English), such that words that correlate positively to each trait are close
together in the multilingual vector space. Using these aligned embeddings for
training, we can transfer personality related training features from
high-resource languages such as English to other low-resource languages, and
get better multilingual results, when compared to using simple monolingual and
unaligned multilingual embeddings. We achieve an average F-score increase
(across the three languages other than English) from 65 to 73.4 (+8.4) when
comparing our monolingual model to the multilingual one using a CNN with
personality-aligned embeddings. We also show relatively good performance in the regression
tasks, and better classification results when evaluating our model on a
separate Chinese dataset.
| 2,018 | Computation and Language |
On the End-to-End Solution to Mandarin-English Code-switching Speech
Recognition | Code-switching (CS) refers to a linguistic phenomenon where a speaker uses
different languages in an utterance or between alternating utterances. In this
work, we study end-to-end (E2E) approaches to the Mandarin-English
code-switching speech recognition (CSSR) task. We first examine the
effectiveness of using data augmentation and byte-pair encoding (BPE) subword
units. More importantly, we propose a multitask learning recipe, where a
language identification task is explicitly learned in addition to the E2E
speech recognition task. Furthermore, we introduce an efficient word vocabulary
expansion method for language modeling to alleviate data sparsity issues under
the code-switching scenario. Experimental results on the SEAME data, a
Mandarin-English CS corpus, demonstrate the effectiveness of the proposed
methods.
| 2,019 | Computation and Language |
Hybrid Self-Attention Network for Machine Translation | The encoder-decoder is the typical framework for Neural Machine Translation
(NMT), and different structures have been developed for improving the
translation performance. Transformer is one of the most promising structures,
which can leverage the self-attention mechanism to capture the semantic
dependency from global view. However, it cannot distinguish the relative
position of different tokens very well, such as the tokens located at the left
or right of the current token, and cannot focus on the local information around
the current token either. To alleviate these problems, we propose a novel
attention mechanism named Hybrid Self-Attention Network (HySAN) which
accommodates several specifically designed masks for the self-attention network
to extract various kinds of semantic information, such as global/local
information and left/right context. Finally, a squeeze gate is introduced to combine different kinds of
SANs for fusion. Experimental results on three machine translation tasks show
that our proposed framework outperforms the Transformer baseline significantly
and achieves superior results over state-of-the-art NMT systems.
| 2,018 | Computation and Language |
Language-Independent Representor for Neural Machine Translation | Current Neural Machine Translation (NMT) employs a language-specific encoder
to represent the source sentence and adopts a language-specific decoder to
generate target translation. This language-dependent design leads to
large-scale network parameters and makes the duality of the parallel data
underutilized. To address the problem, we propose in this paper a
language-independent representor to replace the encoder and decoder by using
weight sharing. This shared representor not only reduces a large portion of the
network parameters, but also allows us to fully explore the language
duality by jointly training source-to-target, target-to-source, left-to-right
and right-to-left translations within a multi-task learning framework.
Experiments show that our proposed framework can obtain significant
improvements over conventional NMT models on resource-rich and low-resource
translation tasks with only a quarter of parameters.
| 2,018 | Computation and Language |
Learning to Describe Phrases with Local and Global Contexts | When reading a text, it is common to become stuck on unfamiliar words and
phrases, such as polysemous words with novel senses, rarely used idioms,
internet slang, or emerging entities. If we humans cannot figure out the
meaning of those expressions from the immediate local context, we consult
dictionaries for definitions or search documents or the web to find other
global context to help in interpretation. Can machines help us do this work?
Which type of context is more important for machines to solve the problem? To
answer these questions, we undertake a task of describing a given phrase in
natural language based on its local and global contexts. To solve this task, we
propose a neural description model that consists of two context encoders and a
description decoder. In contrast to the existing methods for non-standard
English explanation [Ni+ 2017] and definition generation [Noraset+ 2017;
Gadetsky+ 2018], our model appropriately takes important clues from both local
and global contexts. Experimental results on three existing datasets (including
WordNet, Oxford and Urban Dictionaries) and a dataset newly created from
Wikipedia demonstrate the effectiveness of our method over previous work.
| 2,019 | Computation and Language |
Learning Unsupervised Word Mapping by Maximizing Mean Discrepancy | Cross-lingual word embeddings aim to capture common linguistic regularities
of different languages, which benefit various downstream tasks ranging from
machine translation to transfer learning. Recently, it has been shown that
these embeddings can be effectively learned by aligning two disjoint
monolingual vector spaces through a linear transformation (word mapping). In
this work, we focus on learning such a word mapping without any supervision
signal. Most previous work on this task adopts parametric metrics to measure
distribution differences, which typically requires a sophisticated alternate
optimization process, either in the form of \emph{minmax game} or intermediate
\emph{density estimation}. This alternate optimization process is relatively
hard and unstable. In order to avoid such sophisticated alternate optimization,
we propose to learn unsupervised word mapping by directly maximizing the mean
discrepancy between the distribution of transferred embedding and target
embedding. Extensive experimental results show that our proposed model
outperforms competitive baselines by a large margin.
| 2,018 | Computation and Language |
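A minimal NumPy sketch of the training signal described above: a Gaussian-kernel estimate of the mean discrepancy between the mapped source embeddings and the target embeddings, against which the mapping W is optimised. The single fixed-bandwidth kernel is a simplification of typical multi-kernel MMD setups, and the embeddings are random stand-ins.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2 * sigma ** 2))

def mmd(X_mapped, Y, sigma=1.0):
    k_xx = gaussian_kernel(X_mapped, X_mapped, sigma).mean()
    k_yy = gaussian_kernel(Y, Y, sigma).mean()
    k_xy = gaussian_kernel(X_mapped, Y, sigma).mean()
    return k_xx + k_yy - 2 * k_xy    # kernel estimate of the discrepancy between the two spaces

X = np.random.randn(100, 32)         # source word embeddings
Y = np.random.randn(120, 32)         # target word embeddings
W = np.eye(32)                       # the word mapping being learned
print(mmd(X @ W, Y))
```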
Towards Linear Time Neural Machine Translation with Capsule Networks | In this study, we first investigate a novel capsule network with dynamic
routing for linear time Neural Machine Translation (NMT), referred to as
\textsc{CapsNMT}. \textsc{CapsNMT} uses an aggregation mechanism to map the
source sentence into a matrix of pre-determined size, and then applies a deep
LSTM network to decode the target sequence from the source representation.
Unlike the previous work \cite{sutskever2014sequence}, which stores the source
sentence in a passive, bottom-up way, the dynamic routing policy encodes the
source sentence with an iterative process to decide the credit attribution
between nodes from lower and higher layers. \textsc{CapsNMT} has two core
properties: it runs in time that is linear in the length of the sequences, and
it provides a more flexible way to select, represent and aggregate the
part-whole information of the source sentence. On the WMT14 English-German task and a larger
WMT14 English-French task, \textsc{CapsNMT} achieves comparable results with
the state-of-the-art NMT systems. To the best of our knowledge, this is the
first work in which capsule networks have been empirically investigated for
sequence-to-sequence problems.
| 2,020 | Computation and Language |