Titles | Abstracts | Years | Categories |
---|---|---|---|
Look-up and Adapt: A One-shot Semantic Parser | Computing devices have recently become capable of interacting with their end
users via natural language. However, they can only operate within a limited
"supported" domain of discourse and fail drastically when faced with an
out-of-domain utterance, mainly due to the limitations of their semantic
parser. In this paper, we propose a semantic parser that generalizes to
out-of-domain examples by learning a general strategy for parsing an unseen
utterance through adapting the logical forms of seen utterances, instead of
learning to generate a logical form from scratch. Our parser maintains a memory
consisting of a representative subset of the seen utterances paired with their
logical forms. Given an unseen utterance, our parser works by looking up a
similar utterance from the memory and adapting its logical form until it fits
the unseen utterance. Moreover, we present a data generation strategy for
constructing utterance-logical form pairs from different domains. Our results
show an improvement of up to 68.8% on one-shot parsing under two different
evaluation settings compared to the baselines.
| 2019 | Computation and Language |
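The "look-up" step described in the abstract above can be pictured with a small retrieval sketch: find the most similar seen utterance and return its paired logical form as the starting point for adaptation. The memory contents below are made-up placeholders, and the learned adaptation step itself is not shown.

```python
# Minimal sketch of the look-up step: retrieve the most similar seen utterance
# from memory via TF-IDF cosine similarity. The utterances and logical forms
# are hypothetical examples, not the paper's data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

memory = [
    ("show flights from boston to denver", "(flights (from boston) (to denver))"),
    ("list restaurants in seattle", "(restaurants (in seattle))"),
]

def look_up(utterance: str):
    utterances = [u for u, _ in memory]
    vectorizer = TfidfVectorizer().fit(utterances + [utterance])
    vectors = vectorizer.transform(utterances)
    query = vectorizer.transform([utterance])
    best = cosine_similarity(query, vectors).argmax()
    return memory[best]  # (similar utterance, logical form to adapt)

print(look_up("show flights from denver to chicago"))
```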
Do Sentence Interactions Matter? Leveraging Sentence Level
Representations for Fake News Classification | The rapid growth of fake news and misleading information spread through online
media outlets demands an automatic method for detecting such news articles. Of
the few existing works that differentiate trusted news from other types of
news articles (satire, propaganda, hoax), none model sentence
interactions within a document. We observe an interesting pattern in the way
sentences interact with each other across different kinds of news articles. To
capture this kind of information for long news articles, we propose a graph
neural network-based model that does away with the need for feature engineering
for fine-grained fake news classification. Through experiments, we show that
our proposed method beats strong neural baselines and achieves state-of-the-art
accuracy on existing datasets. Moreover, we establish the generalizability of
our model by evaluating its performance in out-of-domain scenarios. Code is
available at https://github.com/MysteryVaibhav/fake_news_semantics
| 2019 | Computation and Language |
Memeify: A Large-Scale Meme Generation System | Interest in the research areas related to meme propagation and generation has
been increasing rapidly in the last couple of years. Meme datasets available
online are either specific to a context or contain no class information. Here,
we prepare a large-scale dataset of memes with captions and class labels. The
dataset consists of 1.1 million meme captions from 128 classes. We also provide
reasoning for the existence of broad categories, called "themes" across the
meme dataset; each theme consists of multiple meme classes. Our generation
system uses a trained state-of-the-art transformer-based model for caption
generation by employing an encoder-decoder architecture. We develop a web
interface, called Memeify, for users to generate memes of their choice, and
explain in detail the working of the individual components of the system. We also
perform a qualitative evaluation of the generated memes by conducting a user
study. A link to the demonstration of the Memeify system is
https://youtu.be/P_Tfs0X-czs.
| 2019 | Computation and Language |
Induced Inflection-Set Keyword Search in Speech | We investigate the problem of searching for a lexeme-set in speech by
searching for its inflectional variants. Experimental results indicate how
lexeme-set search performance changes with the number of hypothesized
inflections, while ablation experiments highlight the relative importance of
different components in the lexeme-set search pipeline and the value of using
curated inflectional paradigms. We provide a recipe and evaluation set for the
community to use as an extrinsic measure of the performance of inflection
generation approaches.
| 2020 | Computation and Language |
Task-Oriented Language Grounding for Language Input with Multiple
Sub-Goals of Non-Linear Order | In this work, we analyze the performance of general deep reinforcement
learning algorithms for a task-oriented language grounding problem, where
language input contains multiple sub-goals and their order of execution is
non-linear.
We generate a simple instructional language for the GridWorld environment that is built around three language elements (order connectors) defining the order of execution: one linear connector ("comma") and two non-linear connectors ("but first", "but before"). We apply a standard deep reinforcement learning baseline, Double DQN with frame stacking, and ablate several extensions such as Prioritized Experience Replay and the Gated-Attention architecture.
Our results show that the introduction of non-linear order connectors improves the success rate on instructions with a higher number of sub-goals by a factor of 2-3, although it still does not exceed 20%. We also observe that Gated-Attention provides no competitive advantage over simple concatenation in this setting. Source code and experimental results are available at
https://github.com/vkurenkov/language-grounding-multigoal
| 2019 | Computation and Language |
Thieves on Sesame Street! Model Extraction of BERT-based APIs | We study the problem of model extraction in natural language processing, in
which an adversary with only query access to a victim model attempts to
reconstruct a local copy of that model. Assuming that both the adversary and
victim model fine-tune a large pretrained language model such as BERT (Devlin
et al. 2019), we show that the adversary does not need any real training data
to successfully mount the attack. In fact, the attacker need not even use
grammatical or semantically meaningful queries: we show that random sequences
of words coupled with task-specific heuristics form effective queries for model
extraction on a diverse set of NLP tasks, including natural language inference
and question answering. Our work thus highlights an exploit only made feasible
by the shift towards transfer learning methods within the NLP community: for a
query budget of a few hundred dollars, an attacker can extract a model that
performs only slightly worse than the victim model. Finally, we study two
defense strategies against model extraction---membership classification and API
watermarking---which, while successful against naive adversaries, are
ineffective against more sophisticated ones.
| 2020 | Computation and Language |
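The query construction described in the abstract above can be illustrated with a short sketch: build nonsensical queries from random words plus a simple task heuristic, then label them by calling the victim model. The word list, the `victim_predict` stand-in, and the heuristic below are hypothetical placeholders, not the paper's exact setup.

```python
# Sketch of extraction-query generation: random word sequences with a simple
# QA-style heuristic, labeled by (a stand-in for) the victim model's API.
import random

WORDLIST = ["cat", "quantum", "running", "blue", "however", "paris", "seven"]

def random_query(min_len=5, max_len=12):
    return " ".join(random.choices(WORDLIST, k=random.randint(min_len, max_len)))

def make_qa_query():
    # Task-specific heuristic: questions usually start with a wh-word.
    return random.choice(["what", "who", "when", "where"]) + " " + random_query()

def victim_predict(text):
    # Stand-in for a call to the victim model's prediction API; in the real
    # attack this would return the victim's output for the query.
    return "placeholder-label"

extraction_set = []
for _ in range(100):
    query = make_qa_query()
    extraction_set.append((query, victim_predict(query)))  # label via the API
# A local BERT copy would then be fine-tuned on `extraction_set`.
```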
Training ASR models by Generation of Contextual Information | Supervised ASR models have reached unprecedented levels of accuracy, thanks
in part to ever-increasing amounts of labelled training data. However, in many
applications and locales, only moderate amounts of data are available, which
has led to a surge in semi- and weakly-supervised learning research. In this
paper, we conduct a large-scale study evaluating the effectiveness of
weakly-supervised learning for speech recognition by using loosely related
contextual information as a surrogate for ground-truth labels. For weakly
supervised training, we use 50k hours of public English social media videos
along with their respective titles and post text to train an encoder-decoder
transformer model. Our best encoder-decoder models achieve an average of 20.8%
WER reduction over a 1000-hour supervised baseline, and an average of 13.4% WER reduction when using only the weakly supervised encoder for CTC fine-tuning. Our results show that our weak-supervision setup improved both the encoder's acoustic representations and the decoder's language generation abilities.
| 2020 | Computation and Language |
Multitask Learning For Different Subword Segmentations In Neural Machine
Translation | In Neural Machine Translation (NMT) the usage of subwords and characters as
source and target units offers a simple and flexible solution for translation
of rare and unseen words. However, selecting the optimal subword segmentation
involves a trade-off between expressiveness and flexibility, and is language
and dataset-dependent. We present Block Multitask Learning (BMTL), a novel NMT
architecture that predicts multiple targets of different granularities
simultaneously, removing the need to search for the optimal segmentation
strategy. Our multi-task model exhibits improvements of up to 1.7 BLEU points
on each decoder over single-task baseline models with the same number of
parameters on datasets from two language pairs of IWSLT15 and one from IWSLT19.
The multiple hypotheses generated at different granularities can be combined as
a post-processing step to give better translations, which improves over
hypothesis combination from baseline models while using substantially fewer
parameters.
| 2019 | Computation and Language |
What does BERT Learn from Multiple-Choice Reading Comprehension
Datasets? | Multiple-Choice Reading Comprehension (MCRC) requires the model to read the
passage and question, and select the correct answer among the given options.
Recent state-of-the-art models have achieved impressive performance on multiple
MCRC datasets. However, such performance may not reflect the model's true
ability of language understanding and reasoning. In this work, we adopt two
approaches to investigate what BERT learns from MCRC datasets: 1) an
un-readable data attack, in which we add keywords to confuse BERT, leading to a
significant performance drop; and 2) an un-answerable data training, in which
we train BERT on partial or shuffled input. Under un-answerable data training,
BERT achieves unexpectedly high performance. Based on our experiments on the 5
key MCRC datasets - RACE, MCTest, MCScript, MCScript2.0, DREAM - we observe
that 1) fine-tuned BERT mainly learns how keywords lead to correct predictions, rather than semantic understanding and reasoning; 2) BERT does not need correct syntactic information to solve the task; and 3) there exist artifacts in these datasets such that they can be solved even without the full context.
| 2019 | Computation and Language |
Attention-Gated Graph Convolutions for Extracting Drug Interaction
Information from Drug Labels | Preventable adverse events as a result of medical errors present a growing
concern in the healthcare system. As drug-drug interactions (DDIs) may lead to
preventable adverse events, being able to extract DDIs from drug labels into a
machine-processable form is an important step toward effective dissemination of
drug safety information. In this study, we tackle the problem of jointly
extracting drugs and their interactions, including interaction outcome, from
drug labels. Our deep learning approach entails composing various intermediate
representations including sequence and graph based context, where the latter is
derived using graph convolutions (GC) with a novel attention-based gating
mechanism (holistically called GCA). These representations are then composed in
meaningful ways to handle all subtasks jointly. To overcome scarcity in
training data, we additionally propose transfer learning by pre-training on
related DDI data. Our model is trained and evaluated on the 2018 TAC DDI
corpus. Our GCA model in conjunction with transfer learning performs at 39.20%
F1 and 26.09% F1 on entity recognition (ER) and relation extraction (RE)
respectively on the first official test set and at 45.30% F1 and 27.87% F1 on
ER and RE respectively on the second official test set corresponding to an
improvement over our prior best results by up to 6 absolute F1 points. After
controlling for available training data, our model exhibits state-of-the-art
performance by improving over the next comparable best outcome by roughly three
F1 points in ER and 1.5 F1 points in RE evaluation across two official test
sets.
| 2019 | Computation and Language |
Multi-Module System for Open Domain Chinese Question Answering over
Knowledge Base | For the task of open domain Knowledge Based Question Answering in CCKS2019,
we propose a method combining information retrieval and semantic parsing. This
multi-module system extracts the topic entity and the most related relation
predicate from a question and transforms the question into a SPARQL query. Our
method obtained an F1 score of 70.45% on the test data.
| 2019 | Computation and Language |
RPM-Oriented Query Rewriting Framework for E-commerce Keyword-Based
Sponsored Search | Sponsored search optimizes revenue and relevance, which is estimated by
Revenue Per Mille (RPM). Existing sponsored search models are all based on
traditional statistical models, which have poor RPM performance when queries
follow a heavy-tailed distribution. Here, we propose an RPM-oriented Query
Rewriting Framework (RQRF) which outputs related bid keywords that can yield
high RPM. RQRF embeds both queries and bid keywords to vectors in the same
implicit space, converting the rewriting probability between each query and
keyword to the distance between the two vectors. For label construction, we
propose an RPM-oriented sample construction method, labeling keywords based on
whether or not they can lead to high RPM. Extensive experiments are conducted to evaluate the performance of RQRF. In one month of large-scale real-world traffic from an e-commerce sponsored search system, the proposed model significantly outperforms the traditional baseline.
| 2020 | Computation and Language |
Modeling Inter-Speaker Relationship in XLNet for Contextual Spoken
Language Understanding | We propose two methods to capture relevant history information in a
multi-turn dialogue by modeling inter-speaker relationship for spoken language
understanding (SLU). Our methods are tailored for and therefore compatible with
XLNet, a state-of-the-art pretrained model, so we verify our models built on top of XLNet. In our experiments, all models achieved higher accuracy than state-of-the-art contextual SLU models on two benchmark datasets. Analysis of the results demonstrates that the proposed methods effectively improve the SLU accuracy of XLNet. These methods for identifying important dialogue history will be useful for alleviating ambiguity in SLU of the current utterance.
| 2019 | Computation and Language |
Exploring Kernel Functions in the Softmax Layer for Contextual Word
Classification | Prominently used in support vector machines and logistic regressions, kernel
functions (kernels) can implicitly map data points into high dimensional spaces
and make it easier to learn complex decision boundaries. In this work, by
replacing the inner product function in the softmax layer, we explore the use
of kernels for contextual word classification. In order to compare the
individual kernels, experiments are conducted on standard language modeling and
machine translation tasks. We observe a wide range of performances across
different kernel settings. Extending the results, we look at the gradient
properties, investigate various mixture strategies and examine the
disambiguation abilities.
| 2019 | Computation and Language |
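The core idea in the abstract above, replacing the inner product in the softmax output layer with a kernel score, can be sketched in a few lines. The Gaussian (RBF) kernel, dimensions, and gamma below are illustrative choices, not necessarily the paper's exact formulation.

```python
# Illustrative kernel-softmax output layer: class scores come from an RBF
# kernel between the context vector and each class embedding instead of an
# inner product. Softmax over the log-kernel normalizes the kernel values.
import torch
import torch.nn as nn

class RBFKernelSoftmax(nn.Module):
    def __init__(self, hidden_dim=256, vocab_size=10000, gamma=1.0):
        super().__init__()
        self.class_embeddings = nn.Parameter(torch.randn(vocab_size, hidden_dim))
        self.gamma = gamma

    def forward(self, h):                                  # h: (batch, hidden_dim)
        dists = torch.cdist(h, self.class_embeddings) ** 2  # squared distances, (batch, vocab)
        logits = -self.gamma * dists                       # log of the RBF kernel
        return torch.log_softmax(logits, dim=-1)

layer = RBFKernelSoftmax()
log_probs = layer(torch.randn(4, 256))                     # (4, 10000)
```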
Textual Data for Time Series Forecasting | While ubiquitous, textual sources of information such as company reports,
social media posts, etc. are hardly included in prediction algorithms for time
series, despite the relevant information they may contain. In this work, openly
accessible daily weather reports from France and the United Kingdom are leveraged to predict time series of national electricity consumption, average temperature, and wind speed with a single pipeline. Two methods of numerical representation of text are considered, namely the traditional Term Frequency-Inverse Document Frequency (TF-IDF) representation and our own neural word embeddings. Using exclusively text, we are able to predict the aforementioned time series with sufficient accuracy for them to be used to replace missing data. Furthermore, the proposed word embeddings display geometric properties relating to the behavior
of the time series and context similarity between words.
| 2019 | Computation and Language |
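A text-only forecasting pipeline of the kind described in the abstract above can be sketched with TF-IDF features feeding a linear model. The reports and target values below are fictional placeholders; the regressor choice (ridge regression) is an assumption, not the paper's model.

```python
# Minimal text-to-time-series sketch: TF-IDF features from daily weather
# reports predicting a daily numeric target such as electricity consumption.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

reports = [
    "cold and cloudy with strong northerly winds",
    "mild temperatures, light rain in the afternoon",
    "heatwave continues, clear skies across the country",
]
consumption = [71.2, 58.4, 49.9]  # fictional daily national load values

model = make_pipeline(TfidfVectorizer(), Ridge(alpha=1.0))
model.fit(reports, consumption)
print(model.predict(["cold morning with gusty winds and overcast skies"]))
```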
Is it a Fruit, an Apple or a Granny Smith? Predicting the Basic Level in
a Concept Hierarchy | The "basic level", according to experiments in cognitive psychology, is the
level of abstraction in a hierarchy of concepts at which humans perform tasks
quicker and with greater accuracy than at other levels. We argue that
applications that use concept hierarchies - such as knowledge graphs,
ontologies or taxonomies - could significantly improve their user interfaces if
they `knew' which concepts are the basic level concepts. This paper examines to
what extent the basic level can be learned from data. We test the utility of three types of concept features that were inspired by basic level theory: lexical features, structural features, and frequency features. We evaluate our approach on WordNet and create a training set of manually labelled examples that includes concepts from different domains. We find that basic level concepts can be accurately identified within one domain. Concepts
that are difficult to label for humans are also harder to classify
automatically. Our experiments provide insight into how classification
performance across domains could be improved, which is necessary for
identification of basic level concepts on a larger scale.
| 2019 | Computation and Language |
Trouble with the Curve: Predicting Future MLB Players Using Scouting
Reports | In baseball, a scouting report profiles a player's characteristics and
traits, usually intended for use in player valuation. This work presents a
first-of-its-kind dataset of almost 10,000 scouting reports for minor league,
international, and draft prospects. Compiled from articles posted to MLB.com
and Fangraphs.com, each report consists of a written description of the player,
numerical grades for several skills, and unique IDs to reference their profiles
on popular resources like MLB.com, FanGraphs, and Baseball-Reference. With this
dataset, we employ several deep neural networks to predict if minor league
players will make the MLB given their scouting report. We open-source this data
to share with the community, and present a web application demonstrating
language variations in the reports of successful and unsuccessful prospects.
| 2019 | Computation and Language |
HUBERT Untangles BERT to Improve Transfer across NLP Tasks | We introduce HUBERT, which combines the structured-representational power of
Tensor-Product Representations (TPRs) and BERT, a pre-trained bidirectional
Transformer language model. We show that there is shared structure between
different NLP datasets that HUBERT, but not BERT, is able to learn and
leverage. We validate the effectiveness of our model on the GLUE benchmark and
HANS dataset. Our experimental results show that untangling data-specific
semantics from general language structure is key for better transfer among NLP
tasks.
| 2021 | Computation and Language |
A Comparison of Neural Network Training Methods for Text Classification | We study the impact of neural networks in text classification. Our focus is
on training deep neural networks with proper weight initialization and greedy
layer-wise pretraining. Results are compared with 1-layer neural networks and
Support Vector Machines. We work with a dataset of labeled messages from the
Twitter microblogging service and aim to predict weather conditions. A feature
extraction procedure specific for the task is proposed, which applies
dimensionality reduction using Latent Semantic Analysis. Our results show that
neural networks outperform Support Vector Machines with Gaussian kernels, with further performance gains from introducing additional hidden layers with nonlinearities. The impact of using Nesterov's Accelerated Gradient in
backpropagation is also studied. We conclude that deep neural networks are a
reasonable approach for text classification and propose further ideas to
improve performance.
| 2019 | Computation and Language |
Adaptive Ensembling: Unsupervised Domain Adaptation for Political
Document Analysis | Insightful findings in political science often require researchers to analyze
documents of a certain subject or type, yet these documents are usually
contained in large corpora that do not distinguish between pertinent and
non-pertinent documents. In contrast, we can find corpora that label relevant
documents but have limitations (e.g., from a single source or era), preventing
their use for political science research. To bridge this gap, we present
adaptive ensembling, an unsupervised domain adaptation framework,
equipped with a novel text classification model and time-aware training to
ensure our methods work well with diachronic corpora. Experiments on an
expert-annotated dataset show that our framework outperforms strong benchmarks.
Further analysis indicates that our methods are more stable, learn better
representations, and extract cleaner corpora for fine-grained analysis.
| 2019 | Computation and Language |
Adversarial Multitask Learning for Joint Multi-Feature and Multi-Dialect
Morphological Modeling | Morphological tagging is challenging for morphologically rich languages due
to the large target space and the need for more training data to minimize model
sparsity. Dialectal variants of morphologically rich languages suffer more, as
they tend to be noisier and have fewer resources. In this paper we explore
the use of multitask learning and adversarial training to address morphological
richness and dialectal variations in the context of full morphological tagging.
We use multitask learning for joint morphological modeling for the features
within two dialects, and as a knowledge-transfer scheme for cross-dialectal
modeling. We use adversarial training to learn dialect invariant features that
can help the knowledge-transfer scheme from the high to low-resource variants.
We work with two dialectal variants: Modern Standard Arabic (high-resource
"dialect") and Egyptian Arabic (low-resource dialect) as a case study. Our
models achieve state-of-the-art results for both. Furthermore, adversarial
training provides larger improvements when the training datasets are smaller.
| 2019 | Computation and Language |
Evaluating Lottery Tickets Under Distributional Shifts | The Lottery Ticket Hypothesis suggests large, over-parameterized neural
networks consist of small, sparse subnetworks that can be trained in isolation
to reach a similar (or better) test accuracy. However, the initialization and
generalizability of the obtained sparse subnetworks have been recently called
into question. Our work focuses on evaluating the initialization of sparse
subnetworks under distributional shifts. Specifically, we investigate the
extent to which a sparse subnetwork obtained in a source domain can be
re-trained in isolation in a dissimilar, target domain. In addition, we examine
the effects of different initialization strategies at transfer-time. Our
experiments show that sparse subnetworks obtained through lottery ticket
training do not simply overfit to particular domains, but rather reflect an
inductive bias of deep neural networks that can be exploited in multiple
domains.
| 2019 | Computation and Language |
Towards Unsupervised Speech Recognition and Synthesis with Quantized
Speech Representation Learning | In this paper we propose a Sequential Representation Quantization AutoEncoder
(SeqRQ-AE) to learn from primarily unpaired audio data and produce sequences of
representations very close to phoneme sequences of speech utterances. This is
achieved by proper temporal segmentation to make the representations phoneme-synchronized, and proper phonetic clustering to keep the total number of distinct representations close to the number of phonemes. The mapping between the distinct representations and phonemes is learned from a small amount of annotated paired data. Preliminary experiments on LJSpeech demonstrated that the learned representations for vowels have relative locations in latent space that closely parallel those in the IPA vowel chart defined by linguistics experts. With less than 20 minutes of annotated speech, our method outperformed
existing methods on phoneme recognition and is able to synthesize intelligible
speech that beats our baseline model.
| 2020 | Computation and Language |
Sequence-to-sequence Automatic Speech Recognition with Word Embedding
Regularization and Fused Decoding | In this paper, we investigate the benefit that off-the-shelf word embeddings can bring to sequence-to-sequence (seq-to-seq) automatic speech recognition (ASR). We first introduce word embedding regularization, which maximizes the cosine similarity between a transformed decoder feature and the target word embedding. Based on the regularized decoder, we further propose a fused decoding mechanism. This allows the decoder to consider semantic consistency during decoding by absorbing the information carried by the transformed decoder feature, which is learned to be close to the target word embedding. Initial results on LibriSpeech demonstrated that pre-trained word embeddings can significantly lower ASR error at a negligible cost, and that the choice of word embedding algorithm among Skip-gram, CBOW and BERT is important.
| 2020 | Computation and Language |
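The regularization idea in the abstract above, pulling a projected decoder feature toward the target word's pretrained embedding via cosine similarity, can be written as a small auxiliary loss. The projection layer, shapes, and weight `lambda_reg` below are assumed for illustration.

```python
# Sketch of a cosine-similarity embedding regularizer to be added to the
# usual cross-entropy loss of a seq-to-seq ASR decoder.
import torch
import torch.nn as nn
import torch.nn.functional as F

hidden_dim, embed_dim, vocab_size = 512, 300, 5000
project = nn.Linear(hidden_dim, embed_dim)          # transforms the decoder feature
pretrained = nn.Embedding(vocab_size, embed_dim)    # frozen pretrained word embeddings
pretrained.weight.requires_grad_(False)

def embedding_regularization(decoder_hidden, target_ids, lambda_reg=0.1):
    transformed = project(decoder_hidden)            # (batch, steps, embed_dim)
    targets = pretrained(target_ids)                 # (batch, steps, embed_dim)
    cos = F.cosine_similarity(transformed, targets, dim=-1)
    return lambda_reg * (1.0 - cos).mean()           # added to the CE loss

loss = embedding_regularization(torch.randn(2, 7, hidden_dim),
                                torch.randint(0, vocab_size, (2, 7)))
```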
Evaluating the Factual Consistency of Abstractive Text Summarization | Currently used metrics for assessing summarization algorithms do not account
for whether summaries are factually consistent with source documents. We
propose a weakly-supervised, model-based approach for verifying factual
consistency and identifying conflicts between source documents and a generated
summary. Training data is generated by applying a series of rule-based
transformations to the sentences of source documents. The factual consistency
model is then trained jointly for three tasks: 1) identify whether sentences
remain factually consistent after transformation, 2) extract a span in the
source documents to support the consistency prediction, and 3) extract a span in the summary sentence that is inconsistent if one exists. Transferring this model to summaries generated by several state-of-the-art models reveals that
this highly scalable approach substantially outperforms previous models,
including those trained with strong supervision using standard datasets for
natural language inference and fact checking. Additionally, human evaluation
shows that the auxiliary span extraction tasks provide useful assistance in the
process of verifying factual consistency.
| 2019 | Computation and Language |
Cross-Domain Ambiguity Detection using Linear Transformation of Word
Embedding Spaces | The requirements engineering process is a crucial stage of the software
development life cycle. It involves various stakeholders from different
professional backgrounds, particularly in the requirements elicitation phase.
Each stakeholder carries distinct domain knowledge, causing them to differently
interpret certain words, leading to cross-domain ambiguity. This can result in
misunderstanding amongst them and jeopardize the entire project. This paper
proposes a natural language processing approach to find potentially ambiguous
words for a given set of domains. The idea is to apply linear transformations
on word embedding models trained on different domain corpora, to bring them
into a unified embedding space. The approach then finds words with divergent
embeddings as they signify a variation in the meaning across the domains. It
can help a requirements analyst in preventing misunderstandings during
elicitation interviews and meetings by defining a set of potentially ambiguous
terms in advance. The paper also discusses certain problems with the existing approaches and shows how the proposed approach resolves them.
| 2020 | Computation and Language |
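The core step in the abstract above, mapping domain-specific embedding spaces into a common space and flagging words whose vectors diverge, can be sketched with an orthogonal Procrustes alignment. The random matrices below stand in for real domain embeddings, and the divergence score is one reasonable choice rather than the paper's exact criterion.

```python
# Sketch: align two domain embedding spaces with orthogonal Procrustes, then
# rank shared words by how much their aligned vectors still disagree.
import numpy as np

rng = np.random.default_rng(0)
dim, n_shared = 100, 500
emb_a = rng.normal(size=(n_shared, dim))   # domain A vectors for shared words
emb_b = rng.normal(size=(n_shared, dim))   # domain B vectors for the same words

# Orthogonal Procrustes: W = argmin ||emb_a @ W - emb_b||_F with W orthogonal.
u, _, vt = np.linalg.svd(emb_a.T @ emb_b)
W = u @ vt

mapped = emb_a @ W
cos = np.sum(mapped * emb_b, axis=1) / (
    np.linalg.norm(mapped, axis=1) * np.linalg.norm(emb_b, axis=1))
ambiguous = np.argsort(cos)[:10]           # indices of the most divergent words
print(ambiguous)
```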
A Simple but Effective BERT Model for Dialog State Tracking on
Resource-Limited Systems | In a task-oriented dialog system, the goal of dialog state tracking (DST) is
to monitor the state of the conversation from the dialog history. Recently,
many deep learning based methods have been proposed for the task. Despite their
impressive performance, current neural architectures for DST are typically
heavily-engineered and conceptually complex, making it difficult to implement,
debug, and maintain them in a production setting. In this work, we propose a
simple but effective DST model based on BERT. In addition to its simplicity,
our approach also has a number of other advantages: (a) the number of
parameters does not grow with the ontology size, and (b) the model can operate in
situations where the domain ontology may change dynamically. Experimental
results demonstrate that our BERT-based model outperforms previous methods by a
large margin, achieving new state-of-the-art results on the standard WoZ 2.0
dataset. Finally, to make the model small and fast enough for
resource-restricted systems, we apply the knowledge distillation method to
compress our model. The final compressed model achieves comparable results with
the original model while being 8x smaller and 7x faster.
| 2020 | Computation and Language |
Sketch-Fill-A-R: A Persona-Grounded Chit-Chat Generation Framework | Human-like chit-chat conversation requires agents to generate responses that
are fluent, engaging and consistent. We propose Sketch-Fill-A-R, a framework
that uses a persona-memory to generate chit-chat responses in three phases.
First, it generates dynamic sketch responses with open slots. Second, it
generates candidate responses by filling slots with parts of its stored persona
traits. Lastly, it ranks and selects the final response via a language model
score. Sketch-Fill-A-R outperforms a state-of-the-art baseline both
quantitatively (10-point lower perplexity) and qualitatively (preferred by 55%
heads-up in single-turn and 20% higher in consistency in multi-turn user
studies) on the Persona-Chat dataset. Finally, we extensively analyze
Sketch-Fill-A-R's responses and human feedback, and show it is more consistent
and engaging by using more relevant responses and questions.
| 2019 | Computation and Language |
Big Bidirectional Insertion Representations for Documents | The Insertion Transformer is well suited for long form text generation due to
its parallel generation capabilities, requiring $O(\log_2 n)$ generation steps
to generate $n$ tokens. However, modeling long sequences is difficult, as there
is more ambiguity captured in the attention mechanism. This work proposes the
Big Bidirectional Insertion Representations for Documents (Big BIRD), an
insertion-based model for document-level translation tasks. We scale up the
insertion-based models to long form documents. Our key contribution is
introducing sentence alignment via sentence-positional embeddings between the
source and target document. We show an improvement of +4.3 BLEU on the WMT'19
English$\rightarrow$German document-level translation task compared with the
Insertion Transformer baseline.
| 2019 | Computation and Language |
JarKA: Modeling Attribute Interactions for Cross-lingual Knowledge
Alignment | Cross-lingual knowledge alignment is the cornerstone of building a
comprehensive knowledge graph (KG), which can benefit various knowledge-driven
applications. As the structures of KGs are usually sparse, attributes of
entities may play an important role in aligning the entities. However, the
heterogeneity of the attributes across KGs prevents entities from being accurately embedded and compared. To deal with this issue, we propose to model the
interactions between attributes, instead of globally embedding an entity with
all the attributes. We further propose a joint framework to merge the
alignments inferred from the attributes and the structures. Experimental
results show that the proposed model outperforms the state-of-the-art baselines by
up to 38.48% HitRatio@1. The results also demonstrate that our model can infer
the alignments between attributes, relationships and values, in addition to
entities.
| 2020 | Computation and Language |
Incorporating Interlocutor-Aware Context into Response Generation on
Multi-Party Chatbots | Conventional chatbots focus on two-party response generation, which
simplifies the real dialogue scene. In this paper, we strive toward a novel
task of Response Generation on Multi-Party Chatbot (RGMPC), where the generated
responses heavily rely on the interlocutors' roles (e.g., speaker and
addressee) and their utterances. Unfortunately, complex interactions among the
interlocutors' roles make it challenging to precisely capture conversational
contexts and interlocutors' information. Facing this challenge, we present a
response generation model which incorporates Interlocutor-aware Contexts into
Recurrent Encoder-Decoder frameworks (ICRED) for RGMPC. Specifically, we employ
interactive representations to capture dialogue contexts for different
interlocutors. Moreover, we leverage an addressee memory to enhance contextual
interlocutor information for the target addressee. Finally, we construct a
corpus for RGMPC based on an existing open-access dataset. Automatic and manual
evaluations demonstrate that the ICRED remarkably outperforms strong baselines.
| 2019 | Computation and Language |
Generating Questions for Knowledge Bases via Incorporating Diversified
Contexts and Answer-Aware Loss | We tackle the task of question generation over knowledge bases. Conventional
methods for this task neglect two crucial research issues: 1) the given
predicate needs to be expressed; 2) the answer to the generated question needs
to be definitive. In this paper, we strive toward the above two issues via
incorporating diversified contexts and answer-aware loss. Specifically, we
propose a neural encoder-decoder model with multi-level copy mechanisms to
generate such questions. Furthermore, an answer-aware loss is introduced to make the generated questions correspond to more definitive answers. Experiments demonstrate that our model achieves state-of-the-art performance. Meanwhile, the generated questions can express the given predicate and correspond to a definitive answer.
| 2019 | Computation and Language |
Contrastive Attention Mechanism for Abstractive Sentence Summarization | We propose a contrastive attention mechanism to extend the
sequence-to-sequence framework for abstractive sentence summarization task,
which aims to generate a brief summary of a given source sentence. The proposed
contrastive attention mechanism accommodates two categories of attention: one
is the conventional attention that attends to relevant parts of the source
sentence, the other is the opponent attention that attends to irrelevant or
less relevant parts of the source sentence. Both attentions are trained in an
opposite way so that the contribution from the conventional attention is
encouraged and the contribution from the opponent attention is discouraged
through a novel softmax and softmin functionality. Experiments on benchmark
datasets show that the proposed contrastive attention mechanism is more
focused on the relevant parts for the summary than the conventional attention
mechanism, and greatly advances the state-of-the-art performance on the
abstractive sentence summarization task. We release the code at
https://github.com/travel-go/Abstractive-Text-Summarization
| 2019 | Computation and Language |
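The two attention heads in the abstract above can be pictured with a toy rendering: conventional attention uses a softmax over alignment scores, while the opponent attention uses a softmin (softmax of negated scores) so that it concentrates on the least relevant source positions. The shapes and dot-product scoring below are assumptions, not the authors' exact architecture.

```python
# Toy contrastive attention: one head via softmax, an "opponent" head via
# softmin; during training the opponent context's contribution is discouraged.
import torch
import torch.nn.functional as F

def contrastive_attention(decoder_state, encoder_states):
    # decoder_state: (batch, dim); encoder_states: (batch, src_len, dim)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(-1)).squeeze(-1)
    attn = F.softmax(scores, dim=-1)          # attends to relevant positions
    opponent = F.softmax(-scores, dim=-1)     # softmin: least relevant positions
    context = torch.bmm(attn.unsqueeze(1), encoder_states).squeeze(1)
    opp_context = torch.bmm(opponent.unsqueeze(1), encoder_states).squeeze(1)
    return context, opp_context

ctx, opp = contrastive_attention(torch.randn(2, 64), torch.randn(2, 10, 64))
```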
An Efficient Model for Sentiment Analysis of Electronic Product Reviews
in Vietnamese | In the past few years, the growth of e-commerce and digital marketing in
Vietnam has generated a huge volume of opinionated data. Analyzing those data
would provide enterprises with insight for better business decisions. In this
work, as part of the Advosights project, we study sentiment analysis of product
reviews in Vietnamese. The final solution is based on self-attention neural networks, a flexible architecture for text classification tasks, achieving about 90.16% accuracy with a very fast inference time of 0.0124 seconds.
| 2019 | Computation and Language |
Transformer-based Cascaded Multimodal Speech Translation | This paper describes the cascaded multimodal speech translation systems
developed by Imperial College London for the IWSLT 2019 evaluation campaign.
The architecture consists of an automatic speech recognition (ASR) system
followed by a Transformer-based multimodal machine translation (MMT) system.
While the ASR component is identical across the experiments, the MMT model
varies in terms of the way of integrating the visual context (simple
conditioning vs. attention), the type of visual features exploited (pooled,
convolutional, action categories) and the underlying architecture. For the
latter, we explore both the canonical transformer and its deliberation version
with additive and cascade variants which differ in how they integrate the
textual attention. Upon conducting extensive experiments, we found that (i) the
explored visual integration schemes often harm the translation performance for
the transformer and additive deliberation, but considerably improve the cascade
deliberation; (ii) the transformer and cascade deliberation integrate the
visual modality better than the additive deliberation, as shown by the
incongruence analysis.
| 2019 | Computation and Language |
BPE-Dropout: Simple and Effective Subword Regularization | Subword segmentation is widely used to address the open vocabulary problem in
machine translation. The dominant approach to subword segmentation is Byte Pair
Encoding (BPE), which keeps the most frequent words intact while splitting the
rare ones into multiple tokens. While multiple segmentations are possible even
with the same vocabulary, BPE splits words into unique sequences; this may
prevent a model from better learning the compositionality of words and being
robust to segmentation errors. So far, the only way to overcome this BPE
imperfection, its deterministic nature, was to create another subword
segmentation algorithm (Kudo, 2018). In contrast, we show that BPE itself
incorporates the ability to produce multiple segmentations of the same word. We
introduce BPE-dropout - a simple and effective subword regularization method
based on and compatible with conventional BPE. It stochastically corrupts the
segmentation procedure of BPE, which leads to producing multiple segmentations
within the same fixed BPE framework. Using BPE-dropout during training and the
standard BPE during inference improves translation quality by up to 3 BLEU compared to BPE and by up to 0.9 BLEU compared to the previous subword regularization.
| 2020 | Computation and Language |
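The stochastic corruption described in the abstract above can be shown with a toy segmenter: each applicable merge is skipped with some probability, so the same word can be split differently across training passes. The tiny merge table and dropout handling below are illustrative, not a learned BPE vocabulary or the paper's exact procedure.

```python
# Toy BPE segmentation with merge dropout: merges are applied greedily by
# rank, but each candidate merge may be skipped with probability `dropout`.
import random

merges = [("l", "o"), ("lo", "w"), ("e", "s"), ("es", "t"), ("low", "est")]
merge_rank = {pair: i for i, pair in enumerate(merges)}

def bpe_dropout_segment(word, dropout=0.3):
    symbols = list(word)
    while True:
        # Highest-priority adjacent pair that is in the merge table and
        # survives dropout this step.
        candidates = [
            (merge_rank[(a, b)], i)
            for i, (a, b) in enumerate(zip(symbols, symbols[1:]))
            if (a, b) in merge_rank and random.random() >= dropout
        ]
        if not candidates:
            return symbols
        _, i = min(candidates)
        symbols[i:i + 2] = [symbols[i] + symbols[i + 1]]

random.seed(0)
for _ in range(3):
    print(bpe_dropout_segment("lowest"))   # segmentation varies across calls
```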
Sentence Embeddings for Russian NLU | We investigate the performance of sentence embeddings models on several tasks
for the Russian language. In our comparison, we include such tasks as multiple
choice question answering, next sentence prediction, and paraphrase
identification. We employ FastText embeddings as a baseline and compare them to
ELMo and BERT embeddings. We conduct two series of experiments, using both
unsupervised (i.e., based on similarity measure only) and supervised approaches
for the tasks. Finally, we present datasets for multiple choice question
answering and next sentence prediction in Russian.
| 2019 | Computation and Language |
Rethinking Cooperative Rationalization: Introspective Extraction and
Complement Control | Selective rationalization has become a common mechanism to ensure that
predictive models reveal how they use any available features. The selection may
be soft or hard, and identifies a subset of input features relevant for
prediction. The setup can be viewed as a co-operative game between the selector
(aka rationale generator) and the predictor making use of only the selected
features. The co-operative setting may, however, be compromised for two
reasons. First, the generator typically has no direct access to the outcome it
aims to justify, resulting in poor performance. Second, there's typically no
control exerted on the information left outside the selection. We revise the
overall co-operative framework to address these challenges. We introduce an
introspective model which explicitly predicts and incorporates the outcome into
the selection process. Moreover, we explicitly control the rationale complement
via an adversary so as not to leave any useful information out of the
selection. We show that the two complementary mechanisms lead to both high predictive accuracy and comprehensive rationales.
| 2019 | Computation and Language |
Findings of the Third Workshop on Neural Generation and Translation | This document describes the findings of the Third Workshop on Neural
Generation and Translation, held in concert with the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the two shared tasks: 1) efficient neural machine
translation (NMT) where participants were tasked with creating NMT systems that
are both accurate and efficient, and 2) document-level generation and
translation (DGT) where participants were tasked with developing systems that
generate summaries from structured data, potentially with assistance from text
in another language.
| 2019 | Computation and Language |
Scalable Evaluation and Improvement of Document Set Expansion via Neural
Positive-Unlabeled Learning | We consider the situation in which a user has collected a small set of
documents on a cohesive topic, and they want to retrieve additional documents
on this topic from a large collection. Information Retrieval (IR) solutions
treat the document set as a query, and look for similar documents in the
collection. We propose to extend the IR approach by treating the problem as an
instance of positive-unlabeled (PU) learning -- i.e., learning binary
classifiers from only positive and unlabeled data, where the positive data
corresponds to the query documents, and the unlabeled data is the results
returned by the IR engine. Utilizing PU learning for text with big neural
networks is a largely unexplored field. We discuss various challenges in
applying PU learning to the setting, including an unknown class prior,
extremely imbalanced data and large-scale accurate evaluation of models, and we
propose solutions and empirically validate them. We demonstrate the
effectiveness of the method using a series of experiments of retrieving PubMed
abstracts adhering to fine-grained topics. We demonstrate improvements over the
base IR solution and other baselines.
| 2021 | Computation and Language |
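One common way to instantiate the positive-unlabeled setting described in the abstract above is a non-negative PU risk estimator in the style of Kiryo et al. (2017). The abstract does not pin down the exact estimator, and the class prior `pi` below is an assumed value, so this is only a sketch of the general technique.

```python
# Non-negative PU risk sketch: the negative risk is estimated from unlabeled
# data minus the positive contribution, and clipped at zero.
import torch
import torch.nn.functional as F

def nnpu_loss(scores_pos, scores_unl, pi=0.05):
    # scores_*: raw classifier logits for positive / unlabeled documents.
    loss_pos = F.binary_cross_entropy_with_logits(
        scores_pos, torch.ones_like(scores_pos))        # positives labeled +1
    loss_pos_as_neg = F.binary_cross_entropy_with_logits(
        scores_pos, torch.zeros_like(scores_pos))       # positives labeled -1
    loss_unl_as_neg = F.binary_cross_entropy_with_logits(
        scores_unl, torch.zeros_like(scores_unl))       # unlabeled labeled -1
    negative_risk = loss_unl_as_neg - pi * loss_pos_as_neg
    return pi * loss_pos + torch.clamp(negative_risk, min=0.0)

loss = nnpu_loss(torch.randn(8), torch.randn(64))
```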
An Empirical Study of Generation Order for Machine Translation | In this work, we present an empirical study of generation order for machine
translation. Building on recent advances in insertion-based modeling, we first
introduce a soft order-reward framework that enables us to train models to
follow arbitrary oracle generation policies. We then make use of this framework
to explore a large variety of generation orders, including uninformed orders,
location-based orders, frequency-based orders, content-based orders, and
model-based orders. Curiously, we find that for the WMT'14 English $\to$ German
translation task, order does not have a substantial impact on output quality,
with unintuitive orderings such as alphabetical and shortest-first matching the
performance of a standard Transformer. This demonstrates that traditional
left-to-right generation is not strictly necessary to achieve high performance.
On the other hand, results on the WMT'18 English $\to$ Chinese task tend to
vary more widely, suggesting that translation for less well-aligned language
pairs may be more sensitive to generation order.
| 2019 | Computation and Language |
BART: Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension | We present BART, a denoising autoencoder for pretraining sequence-to-sequence
models. BART is trained by (1) corrupting text with an arbitrary noising
function, and (2) learning a model to reconstruct the original text. It uses a
standard Transformer-based neural machine translation architecture which,
despite its simplicity, can be seen as generalizing BERT (due to the
bidirectional encoder), GPT (with the left-to-right decoder), and many other
more recent pretraining schemes. We evaluate a number of noising approaches,
finding the best performance by both randomly shuffling the order of the
original sentences and using a novel in-filling scheme, where spans of text are
replaced with a single mask token. BART is particularly effective when fine-tuned for text generation but also works well for comprehension tasks. It
matches the performance of RoBERTa with comparable training resources on GLUE
and SQuAD, achieves new state-of-the-art results on a range of abstractive
dialogue, question answering, and summarization tasks, with gains of up to 6
ROUGE. BART also provides a 1.1 BLEU increase over a back-translation system
for machine translation, with only target language pretraining. We also report
ablation experiments that replicate other pretraining schemes within the BART
framework, to better measure which factors most influence end-task performance.
| 2019 | Computation and Language |
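Two of the noising transformations named in the abstract above, sentence permutation and span infilling with a single mask token, can be sketched as toy functions. The mask ratio, Poisson rate, and `<mask>` symbol below are illustrative assumptions, not BART's exact configuration.

```python
# Toy BART-style noising: shuffle sentence order, and replace token spans
# (lengths drawn from a Poisson distribution) with a single <mask> token.
import random
import numpy as np

def sentence_permutation(sentences):
    shuffled = sentences[:]
    random.shuffle(shuffled)
    return shuffled

def text_infilling(tokens, mask_ratio=0.3, poisson_lam=3.0, mask="<mask>"):
    tokens = tokens[:]
    n_to_mask = int(len(tokens) * mask_ratio)
    while n_to_mask > 0 and tokens:
        span = max(1, int(np.random.poisson(poisson_lam)))
        span = min(span, n_to_mask, len(tokens))
        start = random.randrange(0, len(tokens) - span + 1)
        tokens[start:start + span] = [mask]    # whole span becomes one mask token
        n_to_mask -= span
    return tokens

random.seed(0)
print(sentence_permutation(["A first sentence.", "A second one.", "A third."]))
print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```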
Quantifying the Semantic Core of Gender Systems | Many of the world's languages employ grammatical gender on the lexeme. For
example, in Spanish, the word for 'house' (casa) is feminine, whereas the word
for 'paper' (papel) is masculine. To a speaker of a genderless language, this
assignment seems to exist with neither rhyme nor reason. But is the assignment
of inanimate nouns to grammatical genders truly arbitrary? We present the first
large-scale investigation of the arbitrariness of noun-gender assignments. To
that end, we use canonical correlation analysis to correlate the grammatical
gender of inanimate nouns with an externally grounded definition of their
lexical semantics. We find that 18 languages exhibit a significant correlation
between grammatical gender and lexical semantics.
| 2019 | Computation and Language |
An Augmented Transformer Architecture for Natural Language Generation
Tasks | Transformer-based neural networks have shown significant advantages on most evaluations of various natural language processing and other sequence-to-sequence tasks, owing to the inherent strengths of the architecture. Although the main architecture of the Transformer has been continuously explored, little attention has been paid to the positional encoding module. In this paper, we enhance the sinusoidal positional encoding algorithm by maximizing the variances between encoded consecutive positions to obtain an additional improvement. Furthermore, we propose an augmented Transformer architecture encoded with additional linguistic knowledge, such as Part-of-Speech (POS) tags, to boost performance on natural language generation tasks, e.g., automatic translation and summarization. Experiments show that the proposed architecture attains consistently superior results compared to the vanilla Transformer.
| 2019 | Computation and Language |
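For reference, the standard sinusoidal positional encoding that the abstract above builds on looks as follows; the paper's variance-maximizing modification is not shown here.

```python
# Standard sinusoidal positional encoding: sine on even dimensions, cosine on
# odd dimensions, with geometrically spaced frequencies.
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    positions = np.arange(max_len)[:, None]                 # (max_len, 1)
    dims = np.arange(d_model)[None, :]                      # (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates
    encoding = np.zeros((max_len, d_model))
    encoding[:, 0::2] = np.sin(angles[:, 0::2])             # even dims: sine
    encoding[:, 1::2] = np.cos(angles[:, 1::2])             # odd dims: cosine
    return encoding

pe = sinusoidal_positional_encoding(max_len=50, d_model=16)
print(pe.shape)   # (50, 16)
```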
Phenotyping of Clinical Notes with Improved Document Classification
Models Using Contextualized Neural Language Models | Clinical notes contain an extensive record of a patient's health status, such
as smoking status or the presence of heart conditions. However, this detail is
not replicated within the structured data of electronic health systems.
Phenotyping, the extraction of patient conditions from free clinical text, is a
critical task which supports a variety of downstream applications such as decision support and secondary use of medical records. Previous work has resulted in systems which are high performing but require hand engineering, often of rules. Recent work on pretrained contextualized language models has enabled advances in representing text for a variety of tasks. We therefore explore several architectures for modeling phenotyping that rely solely on
BERT representations of the clinical note, removing the need for manual
engineering. We find these architectures are competitive with or outperform
existing state of the art methods on two phenotyping tasks.
| 2020 | Computation and Language |
ON-TRAC Consortium End-to-End Speech Translation Systems for the IWSLT
2019 Shared Task | This paper describes the ON-TRAC Consortium translation systems developed for
the end-to-end model task of IWSLT Evaluation 2019 for the
English-to-Portuguese language pair. ON-TRAC Consortium is composed of
researchers from three French academic laboratories: LIA (Avignon Université), LIG (Université Grenoble Alpes), and LIUM (Le Mans Université). A single end-to-end model built as a neural encoder-decoder
architecture with attention mechanism was used for two primary submissions
corresponding to the two EN-PT evaluation sets: (1) TED (MuST-C) and (2) How2. In this paper, we notably investigate the impact of pooling heterogeneous corpora for training, the impact of target tokenization (characters or BPEs), and the impact of speech input segmentation, and we compare our best end-to-end model (BLEU
of 26.91 on MuST-C and 43.82 on How2 validation sets) to a pipeline (ASR+MT)
approach.
| 2019 | Computation and Language |
LSTM Easy-first Dependency Parsing with Pre-trained Word Embeddings and
Character-level Word Embeddings in Vietnamese | Several methods have been proposed for Vietnamese dependency parsing. Dependency parsers that use deep neural network models have been reported to achieve state-of-the-art results. In this paper, we propose a new method that applies LSTM easy-first dependency parsing with pre-trained word embeddings and character-level word embeddings. Our method achieves an unlabeled attachment score of 80.91% and a labeled attachment score of 72.98% on the Vietnamese Dependency Treebank (VnDT).
| 2018 | Computation and Language |
Time to Take Emoji Seriously: They Vastly Improve Casual Conversational
Models | Graphical emoji are ubiquitous in modern-day online conversations; a single thumbs-up emoji, for example, can signify agreement without any words. We
argue that the current state-of-the-art systems are ill-equipped to correctly
interpret these emoji, especially in a conversational context. However, in a
casual context, the benefits might be high: a better understanding of users'
utterances and more natural, emoji-rich responses.
With this in mind, we modify BERT to fully support emoji, both from the
Unicode Standard and custom emoji. This modified BERT is then trained on a
corpus of question-answer (QA) tuples with a high number of emoji, where we're
able to increase the 1-of-100 accuracy from 12.7% for the current
state-of-the-art to 17.8% for our model with emoji support.
| 2019 | Computation and Language |
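One plausible way (not necessarily the authors' approach) to give a BERT model explicit emoji tokens is to extend the tokenizer vocabulary and resize the embedding matrix with the Hugging Face transformers library; the emoji list below is an example.

```python
# Add emoji (including a custom-emoji placeholder string) to a BERT tokenizer
# and grow the model's embedding table; the new rows are randomly initialized
# and would become meaningful after fine-tuning on emoji-rich conversations.
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

emoji_tokens = ["👍", "😂", "🙏", "[custom_emoji_party_parrot]"]  # example tokens
num_added = tokenizer.add_tokens(emoji_tokens)
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.tokenize("sounds good 👍"))   # the emoji is now kept as one token
```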
Let Me Know What to Ask: Interrogative-Word-Aware Question Generation | Question Generation (QG) is a Natural Language Processing (NLP) task that
aids advances in Question Answering (QA) and conversational assistants.
Existing models focus on generating a question based on a text and possibly the
answer to the generated question. They need to determine the type of
interrogative word to be generated while having to pay attention to the grammar
and vocabulary of the question. In this work, we propose
Interrogative-Word-Aware Question Generation (IWAQG), a pipelined system
composed of two modules: an interrogative word classifier and a QG model. The
first module predicts the interrogative word that is provided to the second
module to create the question. Owing to an increased recall of deciding the
interrogative words to be used for the generated questions, the proposed model
achieves new state-of-the-art results on the task of QG in SQuAD, improving
from 46.58 to 47.69 in BLEU-1, 17.55 to 18.53 in BLEU-4, 21.24 to 22.33 in
METEOR, and from 44.53 to 46.94 in ROUGE-L.
| 2019 | Computation and Language |
A Framework for Building Closed-Domain Chat Dialogue Systems | This paper presents HRIChat, a framework for developing closed-domain chat
dialogue systems. Being able to engage in chat dialogues has been found
effective for improving communication between humans and dialogue systems. This
paper focuses on closed-domain systems because they would be useful when
combined with task-oriented dialogue systems in the same domain. HRIChat
enables domain-dependent language understanding so that it can deal well with
domain-specific utterances. In addition, HRIChat makes it possible to integrate
state transition network-based dialogue management and reaction-based dialogue
management. FoodChatbot, which is an application in the food and restaurant
domain, has been developed and evaluated through a user study. Its results
suggest that reasonably good systems can be developed with HRIChat. This paper
also reports lessons learned from the development and evaluation of
FoodChatbot.
| 2020 | Computation and Language |
A Latent Morphology Model for Open-Vocabulary Neural Machine Translation | Translation into morphologically-rich languages challenges neural machine
translation (NMT) models with extremely sparse vocabularies where atomic
treatment of surface forms is unrealistic. This problem is typically addressed
by either pre-processing words into subword units or performing translation
directly at the level of characters. The former is based on word segmentation
algorithms optimized using corpus-level statistics with no regard to the
translation task. The latter learns directly from translation data but requires
rather deep architectures. In this paper, we propose to translate words by
modeling word formation through a hierarchical latent variable model which
mimics the process of morphological inflection. Our model generates words one
character at a time by composing two latent representations: a continuous one,
aimed at capturing the lexical semantics, and a set of (approximately) discrete
features, aimed at capturing the morphosyntactic function, which are shared
among different surface forms. Our model achieves better accuracy in
translation into three morphologically-rich languages than conventional
open-vocabulary NMT methods, while also demonstrating a better generalization
capacity under low to mid-resource settings.
| 2020 | Computation and Language |
Toward Gender-Inclusive Coreference Resolution | Correctly resolving textual mentions of people fundamentally entails making
inferences about those people. Such inferences raise the risk of systemic
biases in coreference resolution systems, including biases that can harm binary
and non-binary trans and cis stakeholders. To better understand such biases, we
foreground nuanced conceptualizations of gender from sociology and
sociolinguistics, and develop two new datasets for interrogating bias in crowd
annotations and in existing coreference resolution systems. Through these
studies, conducted on English text, we confirm that without acknowledging and
building systems that recognize the complexity of gender, we build systems that
lead to many potential harms.
| 2020 | Computation and Language |
Lightweight and Efficient End-to-End Speech Recognition Using Low-Rank
Transformer | Highly performing deep neural networks come at the cost of computational
complexity that limits their practicality for deployment on portable devices.
We propose the low-rank transformer (LRT), a memory-efficient and fast neural
architecture that significantly reduces the parameters and boosts the speed of
training and inference for end-to-end speech recognition. Our approach reduces
the number of parameters of the network by more than 50% and speeds up the
inference time by around 1.35x compared to the baseline transformer model. The
experiments show that our LRT model generalizes better and yields lower error
rates on both validation and test sets compared to an uncompressed transformer
model. The LRT model outperforms those from existing works on several datasets
in an end-to-end setting without using an external language model or acoustic
data.
| 2020 | Computation and Language |
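The parameter savings described in the abstract above come from low-rank factorization of the transformer's dense projections, which can be sketched as two smaller linear layers through a rank-r bottleneck. The dimensions and rank below are arbitrary examples, not the paper's configuration.

```python
# Low-rank linear layer: replace a dense d x d projection with d -> r -> d,
# cutting parameters from roughly d*d to 2*d*r.
import torch
import torch.nn as nn

class LowRankLinear(nn.Module):
    def __init__(self, dim=512, rank=64):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)   # d -> r
        self.up = nn.Linear(rank, dim, bias=True)      # r -> d

    def forward(self, x):
        return self.up(self.down(x))

dense = nn.Linear(512, 512)
low_rank = LowRankLinear(512, 64)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(low_rank))   # 262656 vs 66048 parameters
```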
Let's FACE it. Finnish Poetry Generation with Aesthetics and Framing | We present a creative poem generator for the morphologically rich Finnish
language. Our method falls into the master-apprentice paradigm, where a
computationally creative genetic algorithm teaches a BRNN model to generate
poetry. We model several parts of poetic aesthetics in the fitness function of
the genetic algorithm, such as sonic features, semantic coherence, imagery and
metaphor. Furthermore, we justify the creativity of our method based on the
FACE theory on computational creativity and take additional care in evaluating
our system by automatic metrics for concepts together with human evaluation for
aesthetics, framing and expressions.
| 2,019 | Computation and Language |
Adapting Multilingual Neural Machine Translation to Unseen Languages | Multilingual Neural Machine Translation (MNMT) for low-resource languages
(LRL) can be enhanced by the presence of related high-resource languages (HRL),
but the relatedness of HRL usually relies on predefined linguistic assumptions
about language similarity. Recently, adapting MNMT to an LRL has been shown to
greatly improve performance. In this work, we explore the problem of adapting
an MNMT model to an unseen LRL using data selection and model adaptation. In
order to improve NMT for LRL, we employ perplexity to select HRL data that are
most similar to the LRL on the basis of language distance. We extensively
explore data selection in popular multilingual NMT settings, namely in
(zero-shot) translation, and in adaptation from a multilingual pre-trained
model, for both directions (LRL-en). We further show that dynamic adaptation of
the model's vocabulary results in a more favourable segmentation for the LRL in
comparison with direct adaptation. Experiments show reductions in training time
and significant performance gains over LRL baselines, even with zero LRL data
(+13.0 BLEU), up to +17.0 BLEU for pre-trained multilingual model dynamic
adaptation with related data selection. Our method outperforms current
approaches, such as massively multilingual models and data augmentation, on
four LRL.
| 2,019 | Computation and Language |
Fill in the Blanks: Imputing Missing Sentences for Larger-Context Neural
Machine Translation | Most neural machine translation systems still translate sentences in
isolation. To make further progress, a promising line of research additionally
considers the surrounding context in order to provide the model potentially
missing source-side information, as well as to maintain a coherent output. One
difficulty in training such larger-context (i.e. document-level) machine
translation systems is that context may be missing from many parallel examples.
To circumvent this issue, two-stage approaches, in which sentence-level
translations are post-edited in context, have recently been proposed. In this
paper, we instead consider the viability of filling in the missing context. In
particular, we consider three distinct approaches to generate the missing
context: using random contexts, applying a copy heuristic or generating it with
a language model. We find that the copy heuristic significantly helps with
lexical coherence, while using completely random contexts hurts performance on
many long-distance linguistic phenomena. We also validate the usefulness of
tagged back-translation. In addition to improving BLEU scores as expected,
using back-translated data helps larger-context machine translation systems to
better capture long-range phenomena.
| 2,019 | Computation and Language |
A Neural Topic-Attention Model for Medical Term Abbreviation
Disambiguation | Automated analysis of clinical notes is attracting increasing attention.
However, there has not been much work on medical term abbreviation
disambiguation. Such abbreviations are abundant, and highly ambiguous, in
clinical documents. One of the main obstacles is the lack of large-scale, balanced labeled data sets. To address this issue, we propose a few-shot learning
approach to take advantage of limited labeled data. Specifically, a neural
topic-attention model is applied to learn improved contextualized sentence
representations for medical term abbreviation disambiguation. Another vital
issue is that the existing annotations are scarce, noisy, and incomplete. We
re-examine and correct an existing dataset for training and collect a test set
to evaluate the models fairly especially for rare senses. We train our model on
the training set which contains 30 abbreviation terms as categories (on
average, 479 samples and 3.24 classes in each term) selected from a public
abbreviation disambiguation dataset, and then test on a manually-created
balanced dataset (each class in each term has 15 samples). We show that
enhancing the sentence representation with topic information improves the
performance on small-scale unbalanced training datasets by a large margin,
compared to a number of baseline models.
| 2,019 | Computation and Language |
Contextual Text Denoising with Masked Language Models | Recently, with the help of deep learning models, significant advances have
been made in different Natural Language Processing (NLP) tasks. Unfortunately,
state-of-the-art models are vulnerable to noisy texts. We propose a new
contextual text denoising algorithm based on the ready-to-use masked language
model. The proposed algorithm does not require retraining of the model and can
be integrated into any NLP system without additional training on paired noisy-clean data. We evaluate our method under synthetic and natural noise and show that the proposed algorithm can use context information to correct noisy text and improve performance on noisy inputs in several
downstream tasks.
| 2,024 | Computation and Language |
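A hedged sketch of the general recipe described above, using the Hugging Face transformers API: suspicious tokens are masked one at a time and replaced by the masked language model's top prediction in context. The is_suspicious heuristic and the word-level splitting are illustrative placeholders, not the paper's actual noise detection.

```python
# Minimal sketch of masked-LM text denoising (not the paper's exact algorithm):
# tokens flagged as suspicious are masked and replaced with the masked language
# model's top prediction given the surrounding context.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def is_suspicious(word):
    # Placeholder heuristic: treat words the tokenizer splits into many pieces as noisy.
    return len(tok.tokenize(word)) > 2

def denoise(sentence):
    words = sentence.split()
    for i, w in enumerate(words):
        if not is_suspicious(w):
            continue
        masked = " ".join(words[:i] + [tok.mask_token] + words[i + 1:])
        inputs = tok(masked, return_tensors="pt")
        with torch.no_grad():
            logits = mlm(**inputs).logits
        pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero()[0, 0]
        words[i] = tok.decode([logits[0, pos].argmax().item()]).strip()
    return " ".join(words)

print(denoise("the resturant servd delicious food"))
```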
Building an Application Independent Natural Language Interface | Traditional approaches to building natural language (NL) interfaces typically
use a semantic parser to parse the user command and convert it to a logical
form, which is then translated to an executable action in an application.
However, it is still challenging for a semantic parser to correctly parse
natural language. For a different domain, the parser may need to be retrained
or tuned, and a new translator also needs to be written to convert the logical
forms to executable actions. In this work, we propose a novel and application
independent approach to building NL interfaces that does not need a semantic
parser or a translator. It is based on natural language to natural language
matching and learning, where the representations of each action and each user command are both in natural language. To perform a user-intended action,
system only needs to match the user command with the correct action
representation, and then execute the corresponding action. The system also
interactively learns new (paraphrased) commands for actions to expand the
action representations over time. Our experimental results show the
effectiveness of the proposed approach.
| 2,021 | Computation and Language |
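A minimal sketch of the matching-based idea described above, assuming TF-IDF cosine similarity as the matcher (the actual system may use a learned matcher): each action carries natural-language descriptions, a user command is matched to the closest one, and confirmed paraphrases are appended as new representations.

```python
# Minimal sketch of natural-language-to-natural-language matching (not the
# authors' system): match a command against plain-English action descriptions
# and, when confident, learn the command as a new paraphrase of that action.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

actions = {
    "turn_on_lights": ["turn on the lights", "switch the lights on"],
    "play_music": ["play some music", "start the music player"],
}

def match(command):
    phrases, labels = [], []
    for name, paraphrases in actions.items():
        for p in paraphrases:
            phrases.append(p)
            labels.append(name)
    vec = TfidfVectorizer().fit(phrases + [command])
    sims = cosine_similarity(vec.transform([command]), vec.transform(phrases))[0]
    best = sims.argmax()
    return labels[best], sims[best]

command = "please turn the lights on"
name, score = match(command)
print(name, round(float(score), 2))
if score > 0.3:                       # illustrative confidence threshold
    actions[name].append(command)     # learn the paraphrase for future matching
```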
Towards Generalizable Neuro-Symbolic Systems for Commonsense Question
Answering | Non-extractive commonsense QA remains a challenging AI task, as it requires
systems to reason about, synthesize, and gather disparate pieces of
information, in order to generate responses to queries. Recent approaches on
such tasks show increased performance, only when models are either pre-trained
with additional information or when domain-specific heuristics are used,
without any special consideration regarding the knowledge resource type. In
this paper, we perform a survey of recent commonsense QA methods and we provide
a systematic analysis of popular knowledge resources and knowledge-integration
methods, across benchmarks from multiple commonsense datasets. Our results and
analysis show that attention-based injection seems to be a preferable choice
for knowledge integration and that the degree of domain overlap, between
knowledge bases and datasets, plays a crucial role in determining model
success.
| 2,019 | Computation and Language |
Discourse-Aware Neural Extractive Text Summarization | Recently BERT has been adopted for document encoding in state-of-the-art text
summarization models. However, sentence-based extractive models often result in
redundant or uninformative phrases in the extracted summaries. Also, long-range
dependencies throughout a document are not well captured by BERT, which is
pre-trained on sentence pairs instead of documents. To address these issues, we
present a discourse-aware neural summarization model - DiscoBert. DiscoBert
extracts sub-sentential discourse units (instead of sentences) as candidates
for extractive selection on a finer granularity. To capture the long-range
dependencies among discourse units, structural discourse graphs are constructed
based on RST trees and coreference mentions, encoded with Graph Convolutional
Networks. Experiments show that the proposed model outperforms other BERT-based state-of-the-art methods by a significant margin on popular summarization benchmarks.
| 2,020 | Computation and Language |
How does Grammatical Gender Affect Noun Representations in
Gender-Marking Languages? | Many natural languages also assign grammatical gender to inanimate nouns. In such languages, words that relate to the gender-marked nouns
are inflected to agree with the noun's gender. We show that this affects the
word representations of inanimate nouns, resulting in nouns with the same
gender being closer to each other than nouns with different gender. While
"embedding debiasing" methods fail to remove the effect, we demonstrate that a
careful application of methods that neutralize grammatical gender signals from
the words' context when training word embeddings is effective in removing it.
Fixing the grammatical gender bias yields a positive effect on the quality of
the resulting word embeddings, both in monolingual and cross-lingual settings.
We note that successfully removing gender signals, while achievable, is not trivial, and that a language-specific morphological analyzer, used carefully, is essential for achieving good results.
| 2,019 | Computation and Language |
Predicting Discourse Structure using Distant Supervision from Sentiment | Discourse parsing has not yet been able to take full advantage of the neural NLP
revolution, mostly due to the lack of annotated datasets. We propose a novel
approach that uses distant supervision on an auxiliary task (sentiment
classification), to generate abundant data for RST-style discourse structure
prediction. Our approach combines a neural variant of multiple-instance
learning, using document-level supervision, with an optimal CKY-style tree
generation algorithm. In a series of experiments, we train a discourse parser
(for only structure prediction) on our automatically generated dataset and
compare it with parsers trained on human-annotated corpora (news domain RST-DT
and Instructional domain). Results indicate that while our parser does not yet
match the performance of a parser trained and tested on the same dataset
(intra-domain), it does perform remarkably well on the much more difficult and
arguably more useful task of inter-domain discourse structure prediction, where
the parser is trained on one domain and tested/applied on another one.
| 2,019 | Computation and Language |
Transferable End-to-End Aspect-based Sentiment Analysis with Selective
Adversarial Learning | Joint extraction of aspects and sentiments can be effectively formulated as a
sequence labeling problem. However, such formulation hinders the effectiveness
of supervised methods due to the lack of annotated sequence data in many
domains. To address this issue, we first explore an unsupervised domain
adaptation setting for this task. Prior work can only use common syntactic
relations between aspect and opinion words to bridge the domain gaps, which
relies heavily on external linguistic resources. To address this limitation, we propose a
novel Selective Adversarial Learning (SAL) method to align the inferred
correlation vectors that automatically capture their latent relations. The SAL
method can dynamically learn an alignment weight for each word such that more
important words can possess higher alignment weights to achieve fine-grained
(word-level) adaptation. Empirically, extensive experiments demonstrate the
effectiveness of the proposed SAL method.
| 2,019 | Computation and Language |
Cascaded LSTMs based Deep Reinforcement Learning for Goal-driven
Dialogue | This paper proposes a deep neural network model for jointly modeling Natural
Language Understanding (NLU) and Dialogue Management (DM) in goal-driven
dialogue systems. There are three parts in this model. A Long Short-Term Memory
(LSTM) at the bottom of the network encodes utterances in each dialogue turn
into a turn embedding. Dialogue embeddings are learned by an LSTM in the middle of the network and are updated as the turn embeddings are fed in. The top part
is a forward Deep Neural Network which converts dialogue embeddings into the
Q-values of different dialogue actions. The cascaded-LSTM-based reinforcement learning network is jointly optimized using the rewards received at each dialogue turn as the only supervision signal. There are no explicit NLU outputs or dialogue states in the network. Experimental results show that our model outperforms both a traditional Markov Decision Process (MDP) model and a single LSTM with a Deep Q-Network on meeting room booking tasks. Visualization of
dialogue embeddings illustrates that the model can learn the representation of
dialogue states.
| 2,019 | Computation and Language |
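A minimal sketch of the cascaded architecture described above, written in PyTorch under illustrative dimensions: a word-level LSTM produces turn embeddings, a dialogue-level LSTM produces dialogue embeddings, and a feed-forward head emits Q-values over dialogue actions. It is not the paper's implementation, and the replay buffer and Q-learning loop are omitted.

```python
# Minimal sketch of the cascaded LSTM Q-network: turn encoder -> dialogue
# encoder -> feed-forward Q-value head. Dimensions are illustrative.
import torch
import torch.nn as nn

class CascadedLSTMQNet(nn.Module):
    def __init__(self, vocab_size, n_actions, emb=64, turn_dim=128, dial_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.turn_lstm = nn.LSTM(emb, turn_dim, batch_first=True)
        self.dial_lstm = nn.LSTM(turn_dim, dial_dim, batch_first=True)
        self.q_head = nn.Sequential(nn.Linear(dial_dim, 128), nn.ReLU(),
                                    nn.Linear(128, n_actions))

    def forward(self, dialogue):
        # dialogue: (B, n_turns, n_tokens) token ids
        b, t, n = dialogue.shape
        words = self.emb(dialogue.view(b * t, n))
        _, (turn_h, _) = self.turn_lstm(words)        # final hidden state per turn
        turns = turn_h.squeeze(0).view(b, t, -1)      # turn embeddings
        dial_out, _ = self.dial_lstm(turns)           # dialogue embeddings over turns
        return self.q_head(dial_out[:, -1])           # Q-values after the latest turn

net = CascadedLSTMQNet(vocab_size=1000, n_actions=6)
print(net(torch.randint(0, 1000, (2, 4, 10))).shape)  # torch.Size([2, 6])
```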
DiaNet: BERT and Hierarchical Attention Multi-Task Learning of
Fine-Grained Dialect | Prediction of language varieties and dialects is an important language
processing task, with a wide range of applications. For Arabic, the native
tongue of ~ 300 million people, most varieties remain unsupported. To ease this
bottleneck, we present a very large scale dataset covering 319 cities from all
21 Arab countries. We introduce a hierarchical attention multi-task learning
(HA-MTL) approach for dialect identification exploiting our data at the city,
state, and country levels. We also evaluate use of BERT on the three tasks,
comparing it to the MTL approach. We benchmark and release our data and models.
| 2,019 | Computation and Language |
Harnessing the linguistic signal to predict scalar inferences | Pragmatic inferences often subtly depend on the presence or absence of
linguistic features. For example, the presence of a partitive construction (of
the) increases the strength of a so-called scalar inference: listeners perceive
the inference that Chris did not eat all of the cookies to be stronger after
hearing "Chris ate some of the cookies" than after hearing the same utterance
without a partitive, "Chris ate some cookies." In this work, we explore to what
extent neural network sentence encoders can learn to predict the strength of
scalar inferences. We first show that an LSTM-based sentence encoder trained on
an English dataset of human inference strength ratings is able to predict
ratings with high accuracy (r=0.78). We then probe the model's behavior using
manually constructed minimal sentence pairs and corpus data. We find that the
model inferred previously established associations between linguistic features
and inference strength, suggesting that the model learns to use linguistic
features to predict pragmatic inferences.
| 2,020 | Computation and Language |
A neural document language modeling framework for spoken document
retrieval | Recent developments in deep learning have led to significant innovations in various classic and practical areas, including speech recognition, computer vision, question answering, and information retrieval. In natural language processing (NLP), language representation models have achieved great success on many downstream tasks and have become a major stream of research. Because enormous amounts of multimedia data containing speech now pervade daily life, spoken document retrieval (SDR) has become an important research subject over the past decades. Aiming to enhance SDR performance, this paper proposes a neural retrieval framework that combines the merits of the language modeling (LM) mechanism for SDR with the abstractive information learned by language representation models. To our knowledge, this is a pioneering study on supervised training of a neural LM-based SDR framework, especially in combination with pretrained language representation methods.
| 2,019 | Computation and Language |
LIMIT-BERT : Linguistic Informed Multi-Task BERT | In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT)
for learning language representations across multiple linguistic tasks by
Multi-Task Learning (MTL). LIMIT-BERT includes five key linguistic syntax and
semantics tasks: Part-Of-Speech (POS) tags, constituent and dependency
syntactic parsing, and span and dependency semantic role labeling (SRL). In addition, LIMIT-BERT adopts a linguistically informed masking strategy, Syntactic and Semantic Phrase Masking, which masks all of the tokens corresponding to a syntactic or semantic phrase. Different from recent Multi-Task Deep Neural Networks (MT-DNN) (Liu et al., 2019), LIMIT-BERT is linguistically motivated and trained in a semi-supervised manner that provides large amounts of linguistic-task data, comparable in size to the BERT training corpus. As a result, LIMIT-BERT not only improves
linguistic tasks performance but also benefits from a regularization effect and
linguistic information that leads to more general representations to help adapt
to new tasks and domains. LIMIT-BERT obtains new state-of-the-art or
competitive results on both span and dependency semantic parsing on Propbank
benchmarks and both dependency and constituent syntactic parsing on Penn
Treebank.
| 2,020 | Computation and Language |
Learning to Customize Model Structures for Few-shot Dialogue Generation
Tasks | Training generative models with a minimal corpus is one of the critical
challenges for building open-domain dialogue systems. Existing methods tend to
use the meta-learning framework which pre-trains the parameters on all
non-target tasks then fine-tunes on the target task. However, fine-tuning
distinguishes tasks from the parameter perspective but ignores the
model-structure perspective, resulting in similar dialogue models for different
tasks. In this paper, we propose an algorithm that can customize a unique
dialogue model for each task in the few-shot setting. In our approach, each
dialogue model consists of a shared module, a gating module, and a private
module. The first two modules are shared among all the tasks, while the third
one will differentiate into different network structures to better capture the
characteristics of the corresponding task. The extensive experiments on two
datasets show that our method outperforms all the baselines in terms of task
consistency, response quality, and diversity.
| 2,020 | Computation and Language |
Transfer Learning from Transformers to Fake News Challenge Stance
Detection (FNC-1) Task | In this paper, we report improved results of the Fake News Challenge Stage 1
(FNC-1) stance detection task. This gain in performance is due to the generalization power of large Transformer-based language models that have been invented, trained, and publicly released over the last two years. Specifically, (1) we improved the best-performing FNC-1 model by adding BERT sentence embeddings of input sequences as a model feature, and (2) we fine-tuned BERT, XLNet, and RoBERTa transformers on the extended FNC-1 dataset and obtained state-of-the-art results on the FNC-1 task.
| 2,019 | Computation and Language |
Great New Design: How Do We Talk about Media Architecture in Social
Media | In social media, we communicate through pictures, videos, short codes, links, and partial phrases. It is a rich and digitally documented communication channel
that relies on a multitude of media and forms. These channels are sorted by
algorithms as organizers of discourse, mostly with the goal of channeling
attention. In this research, we used Twitter to study the way Media
Architecture is discussed within the community of architects, designers,
researchers and policy makers. We look at the way they spontaneously share
opinions on their engagement with digital infrastructures, networked places and
hybrid public spaces. What can we do with all those opinions? We propose here
the use of text-mining and machine learning techniques to identify important
concepts and patterns in this prolific communication stream. We discuss how
such techniques could inform the practice and emergence of future trends.
| 2,019 | Computation and Language |
Multi-scale Octave Convolutions for Robust Speech Recognition | We propose a multi-scale octave convolution layer to learn robust speech
representations efficiently. Octave convolutions were introduced by Chen et al.
[1] in the computer vision field to reduce the spatial redundancy of the
feature maps by decomposing the output of a convolutional layer into feature
maps at two different spatial resolutions, one octave apart. This approach
improved the efficiency as well as the accuracy of the CNN models. The accuracy
gain was attributed to the enlargement of the receptive field in the original
input space. We argue that octave convolutions likewise improve the robustness
of learned representations due to the use of average pooling in the lower
resolution group, acting as a low-pass filter. We test this hypothesis by
evaluating on two noisy speech corpora - Aurora-4 and AMI. We extend the octave
convolution concept to multiple resolution groups and multiple octaves. To
evaluate the robustness of the inferred representations, we report the
similarity between clean and noisy encodings using an affine projection loss as
a proxy robustness measure. The results show that the proposed method reduces the
WER by up to 6.6% relative for Aurora-4 and 3.6% for AMI, while improving the
computational efficiency of the CNN acoustic models.
| 2,019 | Computation and Language |
What Question Answering can Learn from Trivia Nerds | In addition to the traditional task of getting machines to answer questions,
a major research question in question answering is to create interesting,
challenging questions that can help systems learn how to answer questions and
also reveal which systems are the best at answering questions. We argue that
creating a question answering dataset -- and the ubiquitous leaderboard that
goes with it -- closely resembles running a trivia tournament: you write
questions, have agents (either humans or machines) answer the questions, and
declare a winner. However, the research community has ignored the decades of hard-learned lessons from the trivia community's experience creating vibrant,
fair, and effective question answering competitions. After detailing problems
with existing QA datasets, we outline the key lessons -- removing ambiguity,
discriminating skill, and adjudicating disputes -- that can transfer to QA
research and how they might be implemented for the QA community.
| 2,020 | Computation and Language |
Probabilistic Bias Mitigation in Word Embeddings | It has been shown that word embeddings derived from large corpora tend to
incorporate biases present in their training data. Various methods for
mitigating these biases have been proposed, but recent work has demonstrated
that these methods hide but fail to truly remove the biases, which can still be
observed in word nearest-neighbor statistics. In this work we propose a
probabilistic view of word embedding bias. We leverage this framework to
present a novel method for mitigating bias which relies on probabilistic
observations to yield a more robust bias mitigation algorithm. We demonstrate
that this method effectively reduces bias according to three separate measures
of bias while maintaining embedding quality across various popular benchmark
semantic tasks.
| 2,023 | Computation and Language |
Do Multi-hop Readers Dream of Reasoning Chains? | General Question Answering (QA) systems over texts require the multi-hop
reasoning capability, i.e. the ability to reason with information collected
from multiple passages to derive the answer. In this paper we conduct a
systematic analysis to assess such an ability of various existing models
proposed for multi-hop QA tasks. Specifically, our analysis investigates that
whether providing the full reasoning chain of multiple passages, instead of
just one final passage where the answer appears, could improve the performance
of the existing QA models. Surprisingly, when using the additional evidence
passages, the improvements of all the existing multi-hop reading approaches are
rather limited, with the highest error reduction of 5.8% on F1 (corresponding
to 1.3% absolute improvement) from the BERT model.
To better understand whether the reasoning chains could indeed help find
correct answers, we further develop a co-matching-based method that leads to
13.1% error reduction with passage chains when applied to two of our base
readers (including BERT). Our results demonstrate the existence of the
potential improvement using explicit multi-hop reasoning and the necessity to
develop models with better reasoning abilities.
| 2,019 | Computation and Language |
Document-level Neural Machine Translation with Associated Memory Network | Standard neural machine translation (NMT) builds on the assumption that sentences are independent of their document-level context. Most existing document-level NMT approaches capture only a coarse sense of global document-level information, while this work focuses on exploiting detailed document-level context through a memory network. The memory network's capacity to retrieve the parts of memory most relevant to the current sentence offers a natural way to model rich document-level context. In this work, the
proposed document-aware memory network is implemented to enhance the
Transformer NMT baseline. Experiments on several tasks show that the proposed
method significantly improves the NMT performance over strong Transformer
baselines and other related studies.
| 2,021 | Computation and Language |
Attention Is All You Need for Chinese Word Segmentation | Taking the greedy decoding algorithm as given, this work focuses on further strengthening the model itself for Chinese word segmentation (CWS), which results in an even faster and more accurate CWS model. Our model consists of an attention-only stacked encoder and a sufficiently light decoder for greedy segmentation, plus two highway connections for smoother training, in
which the encoder is composed of a newly proposed Transformer variant,
Gaussian-masked Directional (GD) Transformer, and a biaffine attention scorer.
With the effective encoder design, our model only needs to take unigram
features for scoring. Our model is evaluated on SIGHAN Bakeoff benchmark
datasets. The experimental results show that with the highest segmentation
speed, the proposed model achieves new state-of-the-art or comparable
performance against strong baselines in terms of strict closed test setting.
| 2,020 | Computation and Language |
Naver Labs Europe's Systems for the Document-Level Generation and
Translation Task at WNGT 2019 | Recently, neural models have led to significant improvements in both machine translation (MT) and natural language generation (NLG) tasks. However,
generation of long descriptive summaries conditioned on structured data remains
an open challenge. Likewise, MT that goes beyond sentence-level context is
still an open issue (e.g., document-level MT or MT with metadata). To address
these challenges, we propose to leverage data from both tasks and do transfer
learning between MT, NLG, and MT with source-side metadata (MT+NLG). First, we
train document-based MT systems with large amounts of parallel data. Then, we
adapt these models to pure NLG and MT+NLG tasks by fine-tuning with smaller
amounts of domain-specific data. This end-to-end NLG approach, without data
selection and planning, outperforms the previous state of the art on the
Rotowire NLG task. We participated in the "Document Generation and Translation"
task at WNGT 2019, and ranked first in all tracks.
| 2,019 | Computation and Language |
Positional Attention-based Frame Identification with BERT: A Deep
Learning Approach to Target Disambiguation and Semantic Frame Selection | Semantic parsing is the task of transforming sentences from natural language
into formal representations of predicate-argument structures. Under this
research area, frame-semantic parsing has attracted much interest. This parsing
approach leverages the lexical information defined in FrameNet to associate
marked predicates or targets with semantic frames, thereby assigning semantic
roles to sentence components based on pre-specified frame elements in FrameNet.
In this paper, a deep neural network architecture known as Positional
Attention-based Frame Identification with BERT (PAFIBERT) is presented as a
solution to the frame identification subtask in frame-semantic parsing.
Although the importance of this subtask is well-established, prior research has
yet to find a robust solution that works satisfactorily for both in-domain and
out-of-domain data. This study thus set out to improve frame identification in
light of recent advancements of language modeling and transfer learning in
natural language processing. The proposed method is partially empowered by
BERT, a pre-trained language model that excels at capturing contextual
information in texts. By combining the language representation power of BERT
with a position-based attention mechanism, PAFIBERT is able to attend to
target-specific contexts in sentences for disambiguating targets and
associating them with the most suitable semantic frames. Under various
experimental settings, PAFIBERT outperformed existing solutions by a
significant margin, achieving new state-of-the-art results for both in-domain
and out-of-domain benchmark test sets.
| 2,019 | Computation and Language |
Machine Translation of Restaurant Reviews: New Corpus for Domain
Adaptation and Robustness | We share a French-English parallel corpus of Foursquare restaurant reviews
(https://europe.naverlabs.com/research/natural-language-processing/machine-translation-of-restaurant-reviews),
and define a new task to encourage research on Neural Machine Translation
robustness and domain adaptation, in a real-world scenario where better-quality
MT would be greatly beneficial. We discuss the challenges of such
user-generated content, and train good baseline models that build upon the
latest techniques for MT robustness. We also perform an extensive evaluation
(automatic and human) that shows significant improvements over existing online
systems. Finally, we propose task-specific metrics based on sentiment analysis
or translation accuracy of domain-specific polysemous words.
| 2,019 | Computation and Language |
Adversarial NLI: A New Benchmark for Natural Language Understanding | We introduce a new large-scale NLI benchmark dataset, collected via an
iterative, adversarial human-and-model-in-the-loop procedure. We show that
training models on this new dataset leads to state-of-the-art performance on a
variety of popular NLI benchmarks, while posing a more difficult challenge with
its new test set. Our analysis sheds light on the shortcomings of current
state-of-the-art models, and shows that non-expert annotators are successful at
finding their weaknesses. The data collection method can be applied in a
never-ending learning scenario, becoming a moving target for NLU, rather than a
static benchmark that will quickly saturate.
| 2,020 | Computation and Language |
Can adversarial training learn image captioning ? | Recently, generative adversarial networks (GANs) have attracted a lot of
interest. Their efficiency in generating unseen samples of high quality,
especially images, has improved over the years. In the field of Natural
Language Generation (NLG), the use of the adversarial setting to generate
meaningful sentences has proven difficult for two reasons: the lack of
existing architectures to produce realistic sentences and the lack of
evaluation tools. In this paper, we propose an adversarial architecture related
to the conditional GAN (cGAN) that generates sentences according to a given
image (also called image captioning). This attempt is the first that uses neither pre-training nor reinforcement methods. We also explain why our experimental
settings can be safely evaluated and interpreted for further works.
| 2,019 | Computation and Language |
Masked Language Model Scoring | Pretrained masked language models (MLMs) require finetuning for most NLP
tasks. Instead, we evaluate MLMs out of the box via their pseudo-log-likelihood
scores (PLLs), which are computed by masking tokens one by one. We show that
PLLs outperform scores from autoregressive language models like GPT-2 in a
variety of tasks. By rescoring ASR and NMT hypotheses, RoBERTa reduces an
end-to-end LibriSpeech model's WER by 30% relative and adds up to +1.7 BLEU on
state-of-the-art baselines for low-resource translation pairs, with further
gains from domain adaptation. We attribute this success to PLL's unsupervised
expression of linguistic acceptability without a left-to-right bias, greatly
improving on scores from GPT-2 (+10 points on island effects, NPI licensing in
BLiMP). One can finetune MLMs to give scores without masking, enabling
computation in a single inference pass. In all, PLLs and their associated
pseudo-perplexities (PPPLs) enable plug-and-play use of the growing number of
pretrained MLMs; e.g., we use a single cross-lingual model to rescore
translations in multiple languages. We release our library for language model
scoring at https://github.com/awslabs/mlm-scoring.
| 2,020 | Computation and Language |
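A minimal sketch of PLL computation as described above, using the Hugging Face transformers API rather than the released mlm-scoring library: each token is masked in turn and the log-probability of the true token is accumulated.

```python
# Minimal sketch of pseudo-log-likelihood (PLL) scoring with a masked LM:
# mask each token one at a time and sum the log-probabilities of the originals.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def pseudo_log_likelihood(sentence):
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

# A higher (less negative) PLL should indicate a more acceptable sentence.
print(pseudo_log_likelihood("The cat sat on the mat."))
print(pseudo_log_likelihood("The cat sat on the the."))
```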
Neural Cross-Lingual Relation Extraction Based on Bilingual Word
Embedding Mapping | Relation extraction (RE) seeks to detect and classify semantic relationships
between entities, which provides useful information for many NLP applications.
Since the state-of-the-art RE models require large amounts of manually
annotated data and language-specific resources to achieve high accuracy, it is
very challenging to transfer an RE model of a resource-rich language to a
resource-poor language. In this paper, we propose a new approach for
cross-lingual RE model transfer based on bilingual word embedding mapping. It
projects word embeddings from a target language to a source language, so that a
well-trained source-language neural network RE model can be directly applied to
the target language. Experiment results show that the proposed approach
achieves very good performance for a number of target languages on both
in-house and open datasets, using a small bilingual dictionary with only 1K
word pairs.
| 2,019 | Computation and Language |
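A minimal sketch of the embedding-mapping step, assuming the orthogonal Procrustes solution over a small bilingual dictionary (the paper may learn the mapping differently): target-language vectors are projected into the source space so a source-trained model can consume them.

```python
# Minimal sketch of bilingual embedding mapping: given aligned (target, source)
# word vectors from a small dictionary, learn an orthogonal projection W via
# the Procrustes solution and use it to map target-language inputs into the
# source embedding space.
import numpy as np

def learn_mapping(tgt_vecs, src_vecs):
    # tgt_vecs, src_vecs: (n_pairs, dim) aligned embeddings from the dictionary
    u, _, vt = np.linalg.svd(tgt_vecs.T @ src_vecs)
    return u @ vt                                   # orthogonal W, dim x dim

rng = np.random.default_rng(0)
dim, n_pairs = 300, 1000                            # e.g. a 1K-word dictionary
true_rotation = np.linalg.qr(rng.normal(size=(dim, dim)))[0]
tgt = rng.normal(size=(n_pairs, dim))
src = tgt @ true_rotation + 0.01 * rng.normal(size=(n_pairs, dim))

W = learn_mapping(tgt, src)
projected = tgt @ W                                 # target words in source space
print(np.abs(projected - src).mean())               # small reconstruction error
```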
Dreaddit: A Reddit Dataset for Stress Analysis in Social Media | Stress is a nigh-universal human experience, particularly in the online
world. While stress can be a motivator, too much stress is associated with many
negative health outcomes, making its identification useful across a range of
domains. However, existing computational research typically only studies stress
in domains such as speech, or in short genres such as Twitter. We present
Dreaddit, a new text corpus of lengthy multi-domain social media data for the
identification of stress. Our dataset consists of 190K posts from five
different categories of Reddit communities; we additionally label 3.5K total
segments taken from 3K posts using Amazon Mechanical Turk. We present
preliminary supervised learning methods for identifying stress, both neural and
traditional, and analyze the complexity and diversity of the data and
characteristics of each category.
| 2,019 | Computation and Language |
Implementation of an Index Optimize Technology for Highly Specialized
Terms based on the Phonetic Algorithm Metaphone | When compiling databases, for example to meet the needs of healthcare establishments, a common problem arises in entering and further processing the names and last names of doctors and patients, which are highly specialized in both pronunciation and spelling. This is because people's names and last names are not unique, their spelling does not follow any phonetic rules, and their length may differ across languages. With the advent of the Internet, this situation has become critical and can lead to multiple copies of e-mails being sent to one address. The problem can be addressed by using phonetic word-comparison algorithms such as Daitch-Mokotoff, Soundex, NYSIIS, Polyphone, and Metaphone, as well as the Levenshtein and Jaro algorithms and Q-gram-based algorithms, which make it possible to compute distances between words. The most widespread among them are the Soundex and Metaphone algorithms, which are designed to index words based on their sound while taking the rules of pronunciation into consideration. By applying the Metaphone algorithm, we attempt to optimize phonetic search for fuzzy matching tasks, for example in data deduplication across various databases and registries, in order to reduce the number of errors caused by incorrectly entered last names. An analysis of the most common last names reveals that some of them are of Ukrainian or Russian origin. At the same time, the rules by which names are pronounced and written, for example in Ukrainian, differ radically from the basic algorithms designed for English and differ quite significantly for Russian. That is why a phonetic algorithm should first of all take into consideration the peculiarities of how Ukrainian last names are formed, which is of special relevance now.
| 2,019 | Computation and Language |
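A minimal sketch of phonetic indexing for fuzzy name deduplication. For brevity it uses a simplified Soundex-style code rather than Metaphone, which the article actually applies (a production system might call a library implementation of Metaphone instead); the indexing idea is the same: names that sound alike receive the same key and can be grouped for review.

```python
# Minimal sketch of phonetic indexing for near-duplicate name detection,
# using a simplified Soundex-style code (not the full Metaphone algorithm).
from collections import defaultdict

CODES = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
         **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}

def phonetic_key(name, length=4):
    name = name.lower()
    key, prev = name[0].upper(), CODES.get(name[0], "")
    for ch in name[1:]:
        code = CODES.get(ch, "")
        if code and code != prev:
            key += code
        prev = code
    return (key + "000")[:length]

names = ["Shevchenko", "Shevchenco", "Schevchenko", "Kovalenko"]
index = defaultdict(list)
for n in names:
    index[phonetic_key(n)].append(n)
print(dict(index))   # near-duplicates share a key; distinct names do not
```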
Generalization through Memorization: Nearest Neighbor Language Models | We introduce $k$NN-LMs, which extend a pre-trained neural language model (LM)
by linearly interpolating it with a $k$-nearest neighbors ($k$NN) model. The
nearest neighbors are computed according to distance in the pre-trained LM
embedding space, and can be drawn from any text collection, including the
original LM training data. Applying this augmentation to a strong Wikitext-103
LM, with neighbors drawn from the original training set, our $k$NN-LM achieves
a new state-of-the-art perplexity of 15.79 - a 2.9 point improvement with no
additional training. We also show that this approach has implications for
efficiently scaling up to larger training sets and allows for effective domain
adaptation, by simply varying the nearest neighbor datastore, again without
further training. Qualitatively, the model is particularly helpful in
predicting rare patterns, such as factual knowledge. Together, these results
strongly suggest that learning similarity between sequences of text is easier
than predicting the next word, and that nearest neighbor search is an effective
approach for language modeling in the long tail.
| 2,020 | Computation and Language |
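A minimal sketch of the kNN-LM interpolation described above (not the released implementation): the base LM's next-token distribution is mixed with a distribution built from the nearest stored (context vector, next token) pairs. The datastore here is random toy data, and k, the temperature, and the interpolation weight lam are illustrative.

```python
# Minimal sketch of kNN-LM: interpolate a base LM distribution with a
# distribution built from the k nearest (context vector, next token) entries.
import numpy as np

def knn_lm_probs(p_lm, query, datastore_keys, datastore_tokens,
                 vocab_size, k=4, temperature=1.0, lam=0.25):
    # p_lm: (vocab_size,) base LM distribution for the current context
    # query: (dim,) hidden state of the current context
    dists = np.linalg.norm(datastore_keys - query, axis=1)   # L2 distances
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] / temperature)
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size)
    for w, token in zip(weights, datastore_tokens[nearest]):
        p_knn[token] += w                                    # aggregate by token
    return lam * p_knn + (1 - lam) * p_lm                    # linear interpolation

rng = np.random.default_rng(0)
vocab, dim, n_entries = 50, 16, 200
keys = rng.normal(size=(n_entries, dim))                     # stored context vectors
tokens = rng.integers(0, vocab, size=n_entries)              # their observed next tokens
p_lm = np.full(vocab, 1.0 / vocab)                           # uniform base LM for the demo
mixed = knn_lm_probs(p_lm, rng.normal(size=dim), keys, tokens, vocab)
print(mixed.sum())                                           # still a distribution (~1.0)
```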
Sequence Modeling with Unconstrained Generation Order | The dominant approach to sequence generation is to produce a sequence in some
predefined order, e.g. left to right. In contrast, we propose a more general
model that can generate the output sequence by inserting tokens in any
arbitrary order. Our model learns decoding order as a result of its training
procedure. Our experiments show that this model is superior to fixed order
models on a number of sequence generation tasks, such as Machine Translation,
Image-to-LaTeX and Image Captioning.
| 2,019 | Computation and Language |
Forget Me Not: Reducing Catastrophic Forgetting for Domain Adaptation in
Reading Comprehension | The creation of large-scale open domain reading comprehension data sets in
recent years has enabled the development of end-to-end neural comprehension
models with promising results. To use these models for domains with limited
training data, one of the most effective approaches is to first pretrain them on large out-of-domain source data and then fine-tune them with the limited target data. The caveat is that, after fine-tuning, the comprehension models tend to perform poorly in the source domain, a phenomenon known as catastrophic
forgetting. In this paper, we explore methods that overcome catastrophic
forgetting during fine-tuning without assuming access to data from the source
domain. We introduce new auxiliary penalty terms and observe the best
performance when a combination of auxiliary penalty terms is used to regularise
the fine-tuning process for adapting comprehension models. To test our methods,
we develop and release 6 narrow domain data sets that could potentially be used
as reading comprehension benchmarks.
| 2,020 | Computation and Language |
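The paper introduces auxiliary penalty terms; the sketch below illustrates one simple member of that family, an L2 penalty that keeps fine-tuned parameters close to the source-pretrained weights. It is not necessarily the paper's exact penalty, and the toy model and hyper-parameters are illustrative.

```python
# Minimal sketch of an auxiliary penalty for reducing catastrophic forgetting:
# an L2 term pulling the fine-tuned parameters toward the source-domain weights.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))  # toy "reader"
source_weights = [p.detach().clone() for p in model.parameters()]      # snapshot after pretraining
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
lam = 0.1                                                               # penalty strength

def fine_tune_step(x, y):
    task_loss = nn.functional.cross_entropy(model(x), y)
    penalty = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(model.parameters(), source_weights))
    loss = task_loss + lam * penalty        # keep parameters near the source model
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
print(fine_tune_step(x, y))
```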
Improving Generalization of Transformer for Speech Recognition with
Parallel Schedule Sampling and Relative Positional Embedding | Transformer has shown promising results in many sequence to sequence
transformation tasks recently. It utilizes a number of feed-forward
self-attention layers to replace the recurrent neural networks (RNN) in
attention-based encoder decoder (AED) architecture. Self-attention layer learns
temporal dependence by incorporating sinusoidal positional embedding of tokens
in a sequence for parallel computing. This yields quicker training iterations than the sequential operation of RNNs. The deeper layers of the transformer also make it perform better than RNN-based AED. However, this parallelization
ability is lost when applying scheduled sampling training. Self-attention with
sinusoidal positional embedding may cause performance degradations for longer
sequences that have similar acoustic or semantic information at different
positions as well. To address these problems, we propose to use parallel
scheduled sampling (PSS) and relative positional embedding (RPE) to help the
transformer generalize to unseen data. Our proposed methods achieve a 7%
relative improvement for short utterances and a 70% relative gain for long
utterances on a 10,000-hour Mandarin ASR task.
| 2,020 | Computation and Language |
When Choosing Plausible Alternatives, Clever Hans can be Clever | Pretrained language models, such as BERT and RoBERTa, have shown large
improvements in the commonsense reasoning benchmark COPA. However, recent work
found that many improvements in benchmarks of natural language understanding
are not due to models learning the task, but due to their increasing ability to
exploit superficial cues, such as tokens that occur more often in the correct
answer than the wrong one. Are BERT's and RoBERTa's good performance on COPA
also caused by this? We find superficial cues in COPA, as well as evidence that
BERT exploits these cues. To remedy this problem, we introduce Balanced COPA,
an extension of COPA that does not suffer from easy-to-exploit single token
cues. We analyze BERT's and RoBERTa's performance on original and Balanced
COPA, finding that BERT relies on superficial cues when they are present, but
still achieves comparable performance once they are made ineffective,
suggesting that BERT learns the task to a certain degree when forced to. In
contrast, RoBERTa does not appear to rely on superficial cues.
| 2,019 | Computation and Language |
Generating Justifications for Norm-Related Agent Decisions | We present an approach to generating natural language justifications of
decisions derived from norm-based reasoning. Assuming an agent which maximally
satisfies a set of rules specified in an object-oriented temporal logic, the
user can ask factual questions (about the agent's rules, actions, and the
extent to which the agent violated the rules) as well as "why" questions that
require the agent to compare actual behavior to counterfactual trajectories with
respect to these rules. To produce natural-sounding explanations, we focus on
the subproblem of producing natural language clauses from statements in a
fragment of temporal logic, and then describe how to embed these clauses into
explanatory sentences. We use a human judgment evaluation on a testbed task to
compare our approach to variants in terms of intelligibility, mental model and
perceived trust.
| 2,019 | Computation and Language |
Engaging in Dialogue about an Agent's Norms and Behaviors | We present a set of capabilities allowing an agent planning with moral and
social norms represented in temporal logic to respond to queries about its
norms and behaviors in natural language, and for the human user to add and
remove norms directly in natural language. The user may also pose hypothetical
modifications to the agent's norms and inquire about their effects.
| 2,019 | Computation and Language |
A Robust Data-Driven Approach for Dialogue State Tracking of Unseen Slot
Values | A Dialogue State Tracker is a key component in dialogue systems which
estimates the beliefs of possible user goals at each dialogue turn. Deep
learning approaches using recurrent neural networks have shown state-of-the-art
performance for the task of dialogue state tracking. Generally, these
approaches assume a predefined candidate list and struggle to predict any new
dialogue state values that are not seen during training. This makes extending
the candidate list for a slot without model retraining infeasible and also has limitations in modelling for low-resource domains where training data for slot
values are expensive. In this paper, we propose a novel dialogue state tracker
based on copying mechanism that can effectively track such unseen slot values
without compromising performance on slot values seen during training. The
proposed model is also flexible in extending the candidate list without
requiring any retraining or change in the model. We evaluate the proposed model
on various benchmark datasets (DSTC2, DSTC3 and WoZ2.0) and show that our
approach outperforms other end-to-end data-driven approaches in tracking unseen
slot values and also provides significant advantages in modelling for DST.
| 2,019 | Computation and Language |
Kernelized Bayesian Softmax for Text Generation | Neural models for text generation require a softmax layer with proper token
embeddings during the decoding phase. Most existing approaches adopt single
point embedding for each token. However, a word may have multiple senses
depending on its context, some of which might be distinct. In this paper,
we propose KerBS, a novel approach for learning better embeddings for text
generation. KerBS embodies two advantages: (a) it employs a Bayesian
composition of embeddings for words with multiple senses; (b) it is adaptive to
semantic variances of words and robust to rare sentence context by imposing
learned kernels to capture the closeness of words (senses) in the embedding
space. Empirical studies show that KerBS significantly boosts the performance
of several text generation tasks.
| 2,019 | Computation and Language |
Efficient Feature Selection techniques for Sentiment Analysis | Sentiment analysis is a domain of study that focuses on identifying and
classifying the ideas expressed in the form of text into positive, negative and
neutral polarities. Feature selection is a crucial process in machine learning.
In this paper, we aim to study the performance of different feature selection
techniques for sentiment analysis. Term Frequency Inverse Document Frequency
(TF-IDF) is used as the feature extraction technique for creating feature
vocabulary. Various Feature Selection (FS) techniques are evaluated to select the best set of features from the feature vocabulary. Different machine learning classifiers are then trained on the selected features: Logistic Regression (LR), Support Vector Machines (SVM), Decision Tree (DT) and Naive Bayes (NB). The ensemble techniques Bagging and Random Subspace are applied to the classifiers to enhance performance on sentiment analysis. We show that the best FS techniques, when combined with ensemble methods, achieve remarkable results on sentiment analysis. We also compare the performance of FS methods trained using Bagging and Random Subspace with that of various neural network architectures. We show that FS techniques trained using ensemble classifiers outperform neural networks while requiring significantly less training time and fewer parameters, thereby eliminating the need for extensive hyper-parameter tuning.
| 2,020 | Computation and Language |
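A minimal sketch of the described pipeline in scikit-learn: TF-IDF features, chi-squared feature selection, and a bagged logistic-regression classifier. The tiny inline dataset, k, and n_estimators are purely illustrative.

```python
# Minimal sketch of the pipeline: TF-IDF -> chi-squared feature selection ->
# bagged logistic regression for sentiment classification.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression

texts = ["great movie, loved it", "terrible plot and awful acting",
         "wonderful performance", "boring and disappointing",
         "really enjoyable film", "worst film I have seen"]
labels = [1, 0, 1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=10)),     # keep the 10 highest-scoring features
    ("bagging", BaggingClassifier(LogisticRegression(), n_estimators=10, random_state=0)),
])
clf.fit(texts, labels)
print(clf.predict(["an enjoyable and wonderful movie", "awful, boring film"]))
```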
On the Linguistic Representational Power of Neural Machine Translation
Models | Despite the recent success of deep neural networks in natural language
processing (NLP), their interpretability remains a challenge. We analyze the
representations learned by neural machine translation models at various levels
of granularity and evaluate their quality through relevant extrinsic
properties. In particular, we seek answers to the following questions: (i) How
accurately is word-structure captured within the learned representations, an
important aspect in translating morphologically-rich languages? (ii) Do the
representations capture long-range dependencies, and effectively handle
syntactically divergent languages? (iii) Do the representations capture lexical
semantics? We conduct a thorough investigation along several parameters: (i)
Which layers in the architecture capture each of these linguistic phenomena;
(ii) How does the choice of translation unit (word, character, or subword unit)
impact the linguistic properties captured by the underlying representations?
(iii) Do the encoder and decoder learn differently and independently? (iv) Do
the representations learned by multilingual NMT models capture the same amount
of linguistic information as their bilingual counterparts? Our data-driven,
quantitative evaluation illuminates important aspects in NMT models and their
ability to capture various linguistic phenomena. We show that deep NMT models
learn a non-trivial amount of linguistic information. Notable findings include:
(i) Word morphology and part-of-speech information are captured at the lower
layers of the model; (ii) In contrast, lexical semantics or non-local syntactic
and semantic dependencies are better represented at the higher layers; (iii)
Representations learned using characters are more informed about word morphology
compared to those learned using subword units; and (iv) Representations learned
by multilingual models are richer compared to bilingual models.
| 2,019 | Computation and Language |
Ensembling Strategies for Answering Natural Questions | Many of the top question answering systems today utilize ensembling to
improve their performance on tasks such as the Stanford Question Answering
Dataset (SQuAD) and Natural Questions (NQ) challenges. Unfortunately most of
these systems do not publish their ensembling strategies used in their
leaderboard submissions. In this work, we investigate a number of ensembling
techniques and demonstrate a strategy which improves our F1 score for short
answers on the dev set for NQ by 2.3 F1 points over our single model (which
outperforms the previous SOTA by 1.9 F1 points).
| 2,019 | Computation and Language |
CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data | Pre-training text representations have led to significant improvements in
many areas of natural language processing. The quality of these models benefits
greatly from the size of the pretraining corpora as long as their quality is
preserved. In this paper, we describe an automatic pipeline to extract massive
high-quality monolingual datasets from Common Crawl for a variety of languages.
Our pipeline follows the data processing introduced in fastText (Mikolov et
al., 2017; Grave et al., 2018), that deduplicates documents and identifies
their language. We augment this pipeline with a filtering step to select
documents that are close to high quality corpora like Wikipedia.
| 2,019 | Computation and Language |
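A minimal sketch of the first stages of such a pipeline (not the released CCNet code): paragraph-level deduplication via normalised hashes, with placeholder hooks where a language identifier and an LM-based quality filter against Wikipedia-like corpora would be plugged in.

```python
# Minimal sketch of a crawl-cleaning pipeline: hash-based paragraph
# deduplication plus placeholder hooks for language ID and quality filtering.
import hashlib
import re

def normalise(paragraph):
    return re.sub(r"\W+", " ", paragraph.lower()).strip()

def dedup(paragraphs):
    seen, kept = set(), []
    for p in paragraphs:
        digest = hashlib.sha1(normalise(p).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            kept.append(p)
    return kept

def keep(paragraph, lang_id=lambda p: "en", quality_score=lambda p: 1.0):
    # Placeholders: a real pipeline would call a language identifier and an
    # LM-based quality score (e.g. perplexity under a Wikipedia-trained LM) here.
    return lang_id(paragraph) == "en" and quality_score(paragraph) > 0.5

crawl = ["Hello world!", "hello   WORLD ", "A genuinely new paragraph."]
print([p for p in dedup(crawl) if keep(p)])
```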