Titles | Abstracts | Years | Categories |
---|---|---|---|
Phrase-Level Class based Language Model for Mandarin Smart Speaker Query
Recognition | The success of speech assistants requires precise recognition of a number of
entities in particular contexts. A common solution is to train a class-based
n-gram language model and then expand the classes into specific words or
phrases. However, when a class has a huge list, e.g., more than 20 million
songs, a full expansion causes a memory explosion. Worse still, the list
items in the class need to be updated frequently, which requires a dynamic
model updating technique. In this work, we propose to train pruned language
models for the word classes to replace the slots in the root n-gram. We further
propose to use a novel technique, named Difference Language Model (DLM), to
correct the bias from the pruned language models. Once the decoding graph is
built, we only need to recalculate the DLM when the entities in word classes
are updated. Results show that the proposed method consistently and
significantly outperforms the conventional approaches on all datasets, especially for
large lists, which the conventional approaches cannot handle.
| 2,019 | Computation and Language |
Enhancing Context Modeling with a Query-Guided Capsule Network for
Document-level Translation | Context modeling is essential for generating coherent and consistent translations
in document-level neural machine translation. The widely used method for
document-level translation usually compresses the context information into a
representation via hierarchical attention networks. However, this method
neither considers the relationship between context words nor distinguishes the
roles of context words. To address this problem, we propose a query-guided
capsule network to cluster context information into different perspectives
relevant to the target translation. Experimental results show that our
method can significantly outperform strong baselines on multiple data sets of
different domains.
| 2,019 | Computation and Language |
A Sketch-Based System for Semantic Parsing | This paper presents our semantic parsing system for the evaluation task of
open domain semantic parsing in NLPCC 2019. Many previous works formulate
semantic parsing as a sequence-to-sequence (seq2seq) problem. Instead, we treat
the task as a sketch-based problem in a coarse-to-fine (coarse2fine) fashion.
The sketch is a high-level structure of the logical form exclusive of low-level
details such as entities and predicates. In this way, we are able to optimize
each part individually. Specifically, we decompose the process into three
stages: the sketch classification determines the high-level structure while the
entity labeling and the matching network fill in missing details. Moreover, we
adopt the seq2seq method to evaluate logical form candidates from an overall
perspective. The co-occurrence relationship between predicates and entities
contributes to the reranking as well. Our submitted system achieves an exact-match
accuracy of 82.53% on the full test set and 47.83% on the hard test subset,
which placed 3rd in NLPCC 2019 Shared Task 2. After optimizing the
parameters, network structure, and sampling, the accuracy reaches 84.47% on the full
test set and 63.08% on the hard test subset (our code and data are available at
https://github.com/zechagl/NLPCC2019-Semantic-Parsing).
| 2,019 | Computation and Language |
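The coarse-to-fine decomposition described in this abstract (sketch classification, entity labeling, predicate matching) lends itself to a small illustrative sketch. Everything below, the template inventory and the rule-based stand-ins for the learned classifier, labeler, and matching network, is hypothetical and only meant to show how the three stages compose.

```python
# Minimal illustration of coarse-to-fine (sketch-based) semantic parsing.
# The sketch inventory and the rule-based stand-ins for the learned
# classifier, entity labeler, and matching network are all hypothetical.

SKETCHES = {
    # high-level logical-form templates with typed slots
    "unary": "( lambda ?x ( {predicate} ?x {entity} ) )",
    "count": "( count ( lambda ?x ( {predicate} ?x {entity} ) ) )",
}

def classify_sketch(question):
    """Stage 1: choose the high-level structure (stub for the sketch classifier)."""
    return "count" if question.lower().startswith("how many") else "unary"

def label_entity(question, entities):
    """Stage 2: choose the entity mention (stub for the entity-labeling model)."""
    return max(entities, key=lambda e: question.lower().count(e.lower()))

def match_predicate(question, predicates):
    """Stage 3: choose the predicate (stub for the matching network)."""
    words = set(question.lower().split())
    return max(predicates, key=lambda p: len(words & set(p.split("_"))))

def parse(question, entities, predicates):
    sketch = SKETCHES[classify_sketch(question)]
    return sketch.format(predicate=match_predicate(question, predicates),
                         entity=label_entity(question, entities))

print(parse("how many rivers run through texas",
            entities=["texas"], predicates=["traverses", "located_in"]))
```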
SumQE: a BERT-based Summary Quality Estimation Model | We propose SumQE, a novel Quality Estimation model for summarization based on
BERT. The model addresses linguistic quality aspects that are only indirectly
captured by content-based approaches to summary evaluation, without involving
comparison with human references. SumQE achieves very high correlations with
human ratings, outperforming simpler models addressing these linguistic
aspects. Predictions of the SumQE model can be used for system development, and
to inform users of the quality of automatically produced summaries and other
types of generated text.
| 2,019 | Computation and Language |
Answering questions by learning to rank -- Learning to rank by answering
questions | Answering multiple-choice questions in a setting in which no supporting
documents are explicitly provided continues to stand as a core problem in
natural language processing. The contribution of this article is two-fold.
First, it describes a method which can be used to semantically rank documents
extracted from Wikipedia or similar natural language corpora. Second, we
propose a model employing the semantic ranking that holds the first place in
two of the most popular leaderboards for answering multiple-choice questions:
ARC Easy and Challenge. To achieve this, we introduce a self-attention based
neural network that latently learns to rank documents by their importance
related to a given question, whilst optimizing the objective of predicting the
correct answer. These documents are considered relevant contexts for the
underlying question. We have published the ranked documents so that they can be
used off-the-shelf to improve downstream decision models.
| 2,019 | Computation and Language |
Subword Language Model for Query Auto-Completion | Current neural query auto-completion (QAC) systems rely on character-level
language models, but they slow down when queries are long. We show how to
utilize subword language models for the fast and accurate generation of query
completion candidates. Representing queries with subwords shortens the decoding
length significantly. To deal with issues arising from introducing a subword
language model, we develop a retrace algorithm and a reranking method based on
approximate marginalization. As a result, our model is up to 2.5 times
faster while maintaining a similar quality of generated results compared to the
character-level baseline. Also, we propose a new evaluation metric, mean
recoverable length (MRL), measuring how many upcoming characters the model
could complete correctly. It provides more explicit meaning and eliminates the
need for the prefix length sampling required by existing rank-based metrics. Moreover, we
performed a comprehensive analysis with an ablation study to determine the
importance of each component.
| 2,019 | Computation and Language |
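The mean recoverable length (MRL) metric mentioned above can be illustrated with a short sketch. This is one plausible reading of the metric (count how many upcoming characters the model's top completion reproduces correctly, averaged over queries); the exact protocol in the paper may differ, and `complete` is a hypothetical model hook.

```python
# Hedged sketch of a "recoverable length" style measurement: for each query,
# count how many upcoming characters the model's top completion reproduces
# correctly from a given prefix, then average over queries. The exact MRL
# protocol in the paper may differ; `complete` is a hypothetical model hook.

def recoverable_length(full_query, prefix_len, complete):
    prefix = full_query[:prefix_len]
    prediction = complete(prefix)            # model's top completion for the prefix
    target = full_query[prefix_len:]
    n = 0
    for p, t in zip(prediction, target):     # count matching leading characters
        if p != t:
            break
        n += 1
    return n

def mean_recoverable_length(queries, prefix_len, complete):
    return sum(recoverable_length(q, prefix_len, complete) for q in queries) / len(queries)

# Toy "model" that always suggests the same suffix, just to make this runnable.
toy_complete = lambda prefix: "ather forecast tomorrow"
print(mean_recoverable_length(["weather forecast today"], prefix_len=2, complete=toy_complete))
```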
Enriching Medical Terminology Knowledge Bases via Pre-trained Language
Model and Graph Convolutional Network | Enriching existing medical terminology knowledge bases (KBs) is an important
and never-ending task for clinical research, because new terminology aliases may
be continually added and standard terminologies may be renamed. In this
paper, we propose a novel automatic terminology enriching approach to
supplement a set of terminologies to KBs. Specifically, terminology and entity
characters are first fed into a pre-trained language model to obtain semantic
embeddings. The pre-trained model is used again to initialize the terminology
and entity representations, which are then further embedded through a graph
convolutional network to obtain structure embeddings. Afterwards, both semantic
and structure embeddings are combined to measure the relevancy between the
terminology and the entity. Finally, the optimal alignment is achieved based on
the order of relevancy between the terminology and all the entities in the KB.
Experimental results on a clinical indicator terminology KB, collected from 38
top-class hospitals of the Shanghai Hospital Development Center, show that our
proposed approach outperforms baseline methods and can effectively enrich the
KB.
| 2,019 | Computation and Language |
Story-oriented Image Selection and Placement | Multimodal contents have become commonplace on the Internet today, manifested
as news articles, social media posts, and personal or business blog posts.
Among the various kinds of media (images, videos, graphics, icons, audio) used
in such multimodal stories, images are the most popular. The selection of
images from a collection - either author's personal photo album, or web
repositories - and their meticulous placement within a text, builds a succinct
multimodal commentary for digital consumption. In this paper we present a
system that automates the process of selecting relevant images for a story and
placing them at contextual paragraphs within the story for a multimodal
narration. We leverage automatic object recognition, user-provided tags, and
commonsense knowledge, and use unsupervised combinatorial optimization to
solve the selection and placement problems seamlessly as a single unit.
| 2,019 | Computation and Language |
Minimally Supervised Learning of Affective Events Using Discourse
Relations | Recognizing affective events that trigger positive or negative sentiment has
a wide range of natural language processing applications but remains a
challenging problem mainly because the polarity of an event is not necessarily
predictable from its constituent words. In this paper, we propose to propagate
affective polarity using discourse relations. Our method is simple and only
requires a very small seed lexicon and a large raw corpus. Our experiments
using Japanese data show that our method learns affective events effectively
without manually labeled data. It also improves supervised learning results
when labeled data are scarce.
| 2,020 | Computation and Language |
Sentence-Level Content Planning and Style Specification for Neural Text
Generation | Building effective text generation systems requires three critical
components: content selection, text planning, and surface realization, and
traditionally they are tackled as separate problems. Recent all-in-one style
neural generation models have made impressive progress, yet they often produce
outputs that are incoherent and unfaithful to the input. To address these
issues, we present an end-to-end trained two-step generation model, where a
sentence-level content planner first decides on the keyphrases to cover as well
as a desired language style, followed by a surface realization decoder that
generates relevant and coherent text. For experiments, we consider three tasks
from domains with diverse topics and varying language styles: persuasive
argument construction from Reddit, paragraph generation for normal and simple
versions of Wikipedia, and abstract generation for scientific articles.
Automatic evaluation shows that our system can significantly outperform
competitive comparisons. Human judges further rate our system's generated text as
more fluent and correct, compared to the generations of its variants that do
not consider language style.
| 2,019 | Computation and Language |
The CL-SciSumm Shared Task 2018: Results and Key Insights | This overview describes the official results of the CL-SciSumm Shared Task
2018 -- the first medium-scale shared task on scientific document summarization
in the computational linguistics (CL) domain. This year, the dataset comprised
60 annotated sets of citing and reference papers from the open access research
papers in the CL domain. The Shared Task was organized as a part of the 41st
Annual Conference of the Special Interest Group on Information Retrieval
(SIGIR), held in Ann Arbor, USA in July 2018. We compare the participating
systems in terms of two evaluation metrics. The annotated dataset and
evaluation scripts can be accessed and used by the community from:
\url{https://github.com/WING-NUS/scisumm-corpus}.
| 2,019 | Computation and Language |
Editing-Based SQL Query Generation for Cross-Domain Context-Dependent
Questions | We focus on the cross-domain context-dependent text-to-SQL generation task.
Based on the observation that adjacent natural language questions are often
linguistically dependent and their corresponding SQL queries tend to overlap,
we utilize the interaction history by editing the previous predicted query to
improve the generation quality. Our editing mechanism views SQL as sequences
and reuses generation results at the token level in a simple manner. It is
flexible to change individual tokens and robust to error propagation.
Furthermore, to deal with complex table structures in different domains, we
employ an utterance-table encoder and a table-aware decoder to incorporate the
context of the user utterance and the table schema. We evaluate our approach on
the SParC dataset and demonstrate the benefit of editing compared with the
state-of-the-art baselines which generate SQL from scratch. Our code is
available at https://github.com/ryanzhumich/sparc_atis_pytorch.
| 2,019 | Computation and Language |
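The motivation for token-level editing, that adjacent SQL queries overlap heavily, can be checked with a quick sketch. Here `difflib` stands in for the learned copy/generate decisions of the paper's decoder; it only shows how much of a follow-up query is reusable, not the model itself.

```python
# Quick check of the motivating observation: adjacent queries overlap heavily,
# so most tokens of the new SQL can be copied from the previous prediction.
# difflib stands in for the learned copy/generate decisions of the decoder.

import difflib

prev_sql = "SELECT name FROM students WHERE major = 'CS'".split()
next_sql = "SELECT name FROM students WHERE major = 'CS' ORDER BY gpa DESC".split()

matcher = difflib.SequenceMatcher(a=prev_sql, b=next_sql)
copied = sum(j2 - j1 for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag == "equal")
print(f"{copied}/{len(next_sql)} tokens of the follow-up query can be copied")
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    print(tag, prev_sql[i1:i2], "->", next_sql[j1:j2])
```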
Investigating the Relationship between Multi-Party Linguistic
Entrainment, Team Characteristics, and the Perception of Team Social Outcomes | Multi-party linguistic entrainment refers to the phenomenon that speakers
tend to speak more similarly during conversation. We first developed new
measures of multi-party entrainment on features describing linguistic style,
and then examined the relationship between entrainment and team characteristics
in terms of gender composition, team size, and diversity. Next, we predicted
the perception of team social outcomes using multi-party linguistic entrainment
and team characteristics with a hierarchical regression model. We found that
teams with greater gender diversity had higher minimum convergence than teams
with less gender diversity. Entrainment contributed significantly to predicting
perceived team social outcomes both alone and when controlling for team
characteristics.
| 2,019 | Computation and Language |
It's All in the Name: Mitigating Gender Bias with Name-Based
Counterfactual Data Substitution | This paper treats gender bias latent in word embeddings. Previous mitigation
attempts rely on the operationalisation of gender bias as a projection over a
linear subspace. An alternative approach is Counterfactual Data Augmentation
(CDA), in which a corpus is duplicated and augmented to remove bias, e.g. by
swapping all inherently-gendered words in the copy. We perform an empirical
comparison of these approaches on the English Gigaword and Wikipedia, and find
that whilst both successfully reduce direct bias and perform well in tasks
which quantify embedding quality, CDA variants outperform projection-based
methods at the task of drawing non-biased gender analogies by an average of 19%
across both corpora. We propose two improvements to CDA: Counterfactual Data
Substitution (CDS), a variant of CDA in which potentially biased text is
randomly substituted to avoid duplication, and the Names Intervention, a novel
name-pairing technique that vastly increases the number of words being treated.
CDA/S with the Names Intervention is the only approach which is able to
mitigate indirect gender bias: following debiasing, previously biased words are
significantly less clustered according to gender (cluster purity is reduced by
49%), thus improving on the state-of-the-art for bias mitigation.
| 2,020 | Computation and Language |
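A minimal sketch of the kind of intervention CDA/CDS performs is shown below. The gendered word pairs and name pairs are tiny illustrative samples rather than the resources used in the paper, and the probabilistic substitution mirrors the CDS idea of substituting instead of duplicating the corpus.

```python
# Minimal sketch of counterfactual substitution over gendered words and names.
# The pair lists are tiny illustrative samples, not the resources used in the
# paper; CDS substitutes with probability 0.5 instead of duplicating the corpus.

import random

GENDER_PAIRS = {"he": "she", "she": "he", "his": "her", "her": "his",
                "man": "woman", "woman": "man"}
NAME_PAIRS = {"john": "mary", "mary": "john"}   # stand-in for the name-pairing intervention

def counterfactual(tokens):
    """Swap every gendered word or paired name in a token list."""
    swap = {**GENDER_PAIRS, **NAME_PAIRS}
    return [swap.get(t.lower(), t) for t in tokens]

def substitute_corpus(sentences, p=0.5, seed=0):
    """CDS-style: randomly replace a sentence by its counterfactual instead of duplicating."""
    rng = random.Random(seed)
    return [counterfactual(s) if rng.random() < p else s for s in sentences]

corpus = [["john", "said", "he", "is", "a", "doctor"],
          ["mary", "said", "she", "is", "an", "engineer"]]
for sentence in substitute_corpus(corpus):
    print(" ".join(sentence))
```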
Identifying Personality Traits Using Overlap Dynamics in Multiparty
Dialogue | Research on human spoken language has shown that speech plays an important
role in identifying speaker personality traits. In this work, we propose an
approach for identifying speaker personality traits using overlap dynamics in
multiparty spoken dialogues. We first define a set of novel features
representing the overlap dynamics of each speaker. We then investigate the
impact of speaker personality traits on these features using ANOVA tests. We
find that features of overlap dynamics significantly vary for speakers with
different levels of both Extraversion and Conscientiousness. Finally, we find
that classifiers using only overlap dynamics features outperform random
guessing in identifying Extraversion and Agreeableness, and that the
improvements are statistically significant.
| 2,019 | Computation and Language |
Attributed Rhetorical Structure Grammar for Domain Text Summarization | This paper presents a new approach of automatic text summarization which
combines domain oriented text analysis (DoTA) and rhetorical structure theory
(RST) in a grammar form: the attributed rhetorical structure grammar (ARSG),
where the non-terminal symbols are domain keywords, called domain relations,
while the rhetorical relations serve as attributes. We developed machine
learning algorithms for learning such a grammar from a corpus of sample domain
texts, as well as parsing algorithms for the learned grammar, together with
adjustable text summarization algorithms for generating domain specific
summaries. Our practical experiments have shown that, with the support of domain
knowledge, the drawback of missing a very large training data set can be
effectively compensated for. We have also shown that the knowledge-based approach
can be made more powerful by introducing grammar parsing and RST as an inference
engine. To check the feasibility of model transfer, we introduced a
technique for mapping a grammar from one domain to others with acceptable cost.
We have also made a comprehensive comparison of our approach with some others.
| 2,019 | Computation and Language |
Adversarial Bootstrapping for Dialogue Model Training | Open domain neural dialogue models, despite their successes, are known to
produce responses that lack relevance, diversity, and in many cases coherence.
These shortcomings stem from the limited ability of common training objectives
to directly express these properties as well as their interplay with training
datasets and model architectures. Toward addressing these problems, this paper
proposes bootstrapping a dialogue response generator with an adversarially
trained discriminator. The method involves training a neural generator in both
autoregressive and traditional teacher-forcing modes, with the maximum
likelihood loss of the auto-regressive outputs weighted by the score from a
metric-based discriminator model. The discriminator input is a mixture of
ground truth labels, the teacher-forcing outputs of the generator, and
distractors sampled from the dataset, thereby allowing for richer feedback on
the autoregressive outputs of the generator. To improve the calibration of the
discriminator output, we also bootstrap the discriminator with the matching of
the intermediate features of the ground truth and the generator's
autoregressive output. We explore different sampling and adversarial policy
optimization strategies during training in order to understand how to encourage
response diversity without sacrificing relevance. Our experiments show that
adversarial bootstrapping is effective at addressing exposure bias, leading to
improvement in response relevance and coherence. The improvement is
demonstrated with state-of-the-art results on the Movie and Ubuntu dialogue
datasets with respect to human evaluations and BLEU, ROUGE, and distinct n-gram
scores.
| 2,019 | Computation and Language |
Combining Spans into Entities: A Neural Two-Stage Approach for
Recognizing Discontiguous Entities | In medical documents, it is possible that an entity of interest not only
contains a discontiguous sequence of words but also overlaps with another
entity. Entities of such structures are intrinsically hard to recognize due to
the large space of possible entity combinations. In this work, we propose a
neural two-stage approach to recognize discontiguous and overlapping entities
by decomposing this problem into two subtasks: 1) it first detects all the
overlapping spans that either form entities on their own or present as segments
of discontiguous entities, based on the representation of segmental hypergraph,
2) next it learns to combine these segments into discontiguous entities with a
classifier, which filters out other incorrect combinations of segments. Two
neural components are designed for these subtasks respectively and they are
learned jointly using a shared encoder for text. Our model achieves
state-of-the-art performance on a standard dataset, even in the absence of
external features that previous methods used.
| 2,019 | Computation and Language |
Transfer Fine-Tuning: A BERT Case Study | A semantic equivalence assessment is defined as a task that assesses semantic
equivalence in a sentence pair by binary judgment (i.e., paraphrase
identification) or grading (i.e., semantic textual similarity measurement). It
constitutes a set of tasks crucial for research on natural language
understanding. Recently, BERT realized a breakthrough in sentence
representation learning (Devlin et al., 2019), which is broadly transferable to
various NLP tasks. While BERT's performance improves by increasing its model
size, the required computational power is an obstacle preventing practical
applications from adopting the technology. Herein, we propose to inject phrasal
paraphrase relations into BERT in order to generate suitable representations
for semantic equivalence assessment instead of increasing the model size.
Experiments on standard natural language understanding tasks confirm that our
method effectively improves a smaller BERT model while maintaining the model
size. The generated model exhibits superior performance compared to a larger
BERT model on semantic equivalence assessment tasks. Furthermore, it achieves
larger performance gains on tasks with limited training datasets for
fine-tuning, which is a property desirable for transfer learning.
| 2,022 | Computation and Language |
"Can you say more about the location?" The Development of a Pedagogical
Reference Resolution Agent | In an increasingly globalized world, geographic literacy is crucial. In this
paper, we present a collaborative two-player game to improve people's ability
to locate countries on the world map. We discuss two implementations of the
game: First, we created a web-based version which can be played with the
remote-controlled agent Nellie. With the knowledge we gained from a large
online data collection, we re-implemented the game so it can be played
face-to-face with the Furhat robot Neil. Our analysis shows that participants
not only found the game engaging to play, but also believe they gained lasting
knowledge about the world map.
| 2,019 | Computation and Language |
Unicoder: A Universal Language Encoder by Pre-training with Multiple
Cross-lingual Tasks | We present Unicoder, a universal language encoder that is insensitive to
different languages. Given an arbitrary NLP task, a model can be trained with
Unicoder using training data in one language and directly applied to inputs of
the same task in other languages. Compared with similar efforts such as
Multilingual BERT and XLM, three new cross-lingual pre-training tasks are
proposed, including cross-lingual word recovery, cross-lingual paraphrase
classification and cross-lingual masked language model. These tasks help
Unicoder learn the mappings among different languages from more perspectives.
We also find that doing fine-tuning on multiple languages together can bring
further improvement. Experiments are performed on two tasks: cross-lingual
natural language inference (XNLI) and cross-lingual question answering (XQA),
where XLM is our baseline. On XNLI, 1.8% averaged accuracy improvement (on 15
languages) is obtained. On XQA, which is a new cross-lingual dataset built by
us, 5.5% averaged accuracy improvement (on French and German) is obtained.
| 2,019 | Computation and Language |
Certified Robustness to Adversarial Word Substitutions | State-of-the-art NLP models can often be fooled by adversaries that apply
seemingly innocuous label-preserving transformations (e.g., paraphrasing) to
input text. The number of possible transformations scales exponentially with
text length, so data augmentation cannot cover all transformations of an input.
This paper considers one exponentially large family of label-preserving
transformations, in which every word in the input can be replaced with a
similar word. We train the first models that are provably robust to all word
substitutions in this family. Our training procedure uses Interval Bound
Propagation (IBP) to minimize an upper bound on the worst-case loss that any
combination of word substitutions can induce. To evaluate models' robustness to
these transformations, we measure accuracy on adversarially chosen word
substitutions applied to test examples. Our IBP-trained models attain $75\%$
adversarial accuracy on both sentiment analysis on IMDB and natural language
inference on SNLI. In comparison, on IMDB, models trained normally and ones
trained with data augmentation achieve adversarial accuracy of only $8\%$ and
$35\%$, respectively.
| 2,019 | Computation and Language |
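The Interval Bound Propagation step used in training can be illustrated on a single affine layer followed by a ReLU: given elementwise lower and upper bounds on the input (for example, bounds induced by the allowed word substitutions), the output bounds are computed in closed form. The numpy sketch below shows only this bounding step, not the paper's full training procedure.

```python
# Numpy sketch of Interval Bound Propagation through one affine layer and a
# ReLU: given elementwise lower/upper bounds on the input (e.g. induced by the
# set of allowed word substitutions), compute bounds on the layer's output.
# This shows only the bounding step, not the paper's training procedure.

import numpy as np

def interval_affine(lower, upper, W, b):
    """Bounds of W @ x + b when lower <= x <= upper holds elementwise."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius           # worst case is attained using |W|
    return out_center - out_radius, out_center + out_radius

def interval_relu(lower, upper):
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy example: a 3-dimensional input box propagated through a random 2x3 layer.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(2, 3)), rng.normal(size=2)
lo, hi = np.array([-0.1, 0.2, 0.0]), np.array([0.1, 0.4, 0.3])
lo, hi = interval_relu(*interval_affine(lo, hi, W, b))
print("output lower bound:", lo)
print("output upper bound:", hi)
```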
Automatic Argument Quality Assessment -- New Datasets and Methods | We explore the task of automatic assessment of argument quality. To that end,
we actively collected 6.3k arguments, more than five times the amount of
previously examined data. Each argument was explicitly and carefully annotated
for its quality. In addition, 14k pairs of arguments were annotated
independently, identifying the higher quality argument in each pair. In spite
of the inherent subjective nature of the task, both annotation schemes led to
surprisingly consistent results. We release the labeled datasets to the
community. Furthermore, we suggest neural methods based on a recently released
language model, for argument ranking as well as for argument-pair
classification. In the former task, our results are comparable to
state-of-the-art; in the latter task our results significantly outperform
earlier methods.
| 2,019 | Computation and Language |
Duality Regularization for Unsupervised Bilingual Lexicon Induction | Unsupervised bilingual lexicon induction naturally exhibits duality, which
results from symmetry in back-translation. For example, EN-IT and IT-EN
induction can be mutually primal and dual problems. Current state-of-the-art
methods, however, consider the two tasks independently. In this paper, we
propose to train primal and dual models jointly, using regularizers to
encourage consistency in back translation cycles. Experiments across 6 language
pairs show that the proposed method significantly outperforms competitive
baselines, obtaining the best-published results on a standard benchmark.
| 2,022 | Computation and Language |
Towards Making a Dependency Parser See | We explore whether it is possible to leverage eye-tracking data in an RNN
dependency parser (for English) when such information is only available during
training, i.e., no aggregated or token-level gaze features are used at
inference time. To do so, we train a multitask learning model that parses
sentences as sequence labeling and leverages gaze features as auxiliary tasks.
Our method also learns to train from disjoint datasets, i.e. it can be used to
test whether already collected gaze features are useful to improve the
performance on new non-gazed annotated treebanks. Accuracy gains are modest but
positive, showing the feasibility of the approach. It can serve as a first step
towards architectures that can better leverage eye-tracking data or other
complementary information available only for training sentences, possibly
leading to improvements in syntactic parsing.
| 2,019 | Computation and Language |
Attention-based Pairwise Multi-Perspective Convolutional Neural Network
for Answer Selection in Question Answering | Over the past few years, question answering and information retrieval systems
have become widely used. These systems attempt to find the answers to the asked
questions from raw text sources. A component of these systems is answer
selection, which selects the most relevant of the candidate answers. Syntactic
similarities were mostly used to compute the similarity, but in recent works,
deep neural networks have been used, making a significant improvement in this
field. In this research, a model is proposed to select the most relevant
answers to the factoid question from the candidate answers. The proposed model
ranks the candidate answers in terms of semantic and syntactic similarity to
the question, using convolutional neural networks. In this research, an attention
mechanism and sparse feature vectors are used to capture the context-sensitive
interactions between the question and the answer sentence. Wide convolution increases the
importance of the interrogative word. Pairwise ranking is used to learn
differentiable representations to distinguish positive and negative answers.
Our model shows strong performance on the raw TrecQA dataset, beating previous
state-of-the-art systems by 1.4% in MAP and 1.1% in MRR, while requiring no
additional syntactic parsers or external tools. The results
show that using context-sensitive interactions between question and answer
sentences can help to find the correct answer more accurately.
| 2,019 | Computation and Language |
A Smart Sliding Chinese Pinyin Input Method Editor on Touchscreen | This paper presents a smart sliding Chinese pinyin Input Method Editor (IME)
for touchscreen devices, which allows the user to slide a finger from one key to
another on the touchscreen instead of tapping keys one by one, while the target
Chinese character sequence is predicted during the sliding process to help
users input Chinese characters efficiently. Moreover, the layout of the virtual
keyboard of our IME adapts to user sliding for more efficient input. The
layout adaptation process is driven by Recurrent Neural Networks (RNNs) and
deep reinforcement learning. The pinyin-to-character converter is implemented
with a sequence-to-sequence (Seq2Seq) model to predict the target Chinese
sequence. A sliding simulator is built to automatically produce sliding samples
for model training and virtual keyboard testing. The key advantage of our proposed
IME is that nearly all of its built-in tactics can be optimized automatically with
deep learning algorithms by following user behavior alone. Empirical studies verify
the effectiveness of the proposed model and show better user input
efficiency.
| 2,019 | Computation and Language |
Modeling Named Entity Embedding Distribution into Hypersphere | This work models the named entity distribution by visualizing the
topological structure of the embedding space, based on the assumption that
most, if not all, named entities (NEs) of a language tend to aggregate
together within a specific hypersphere in embedding space. Thus,
we present a novel open definition for NEs, which alleviates the obvious drawback
of the previous closed NE definition with its limited NE dictionary. We then show
two applications of the proposed named entity hypersphere model.
First, we use a generative adversarial neural network to learn a transformation
matrix between two embedding spaces, which allows a convenient determination of the
named entity distribution in the target language, indicating the potential of
fast named entity discovery using only the isomorphic relation between embedding
spaces. Second, the named entity hypersphere model is directly integrated with
various named entity recognition models over sentences to achieve
state-of-the-art results. Assuming only that embeddings are available, we show
a prior-knowledge-free approach to effectively depicting the named entity
distribution.
| 2,019 | Computation and Language |
Language Models as Knowledge Bases? | Recent progress in pretraining language models on large textual corpora led
to a surge of improvements for downstream NLP tasks. Whilst learning linguistic
knowledge, these models may also be storing relational knowledge present in the
training data, and may be able to answer queries structured as
"fill-in-the-blank" cloze statements. Language models have many advantages over
structured knowledge bases: they require no schema engineering, allow
practitioners to query about an open class of relations, are easy to extend to
more data, and require no human supervision to train. We present an in-depth
analysis of the relational knowledge already present (without fine-tuning) in a
wide range of state-of-the-art pretrained language models. We find that (i)
without fine-tuning, BERT contains relational knowledge competitive with
traditional NLP methods that have some access to oracle knowledge, (ii) BERT
also does remarkably well on open-domain question answering against a
supervised baseline, and (iii) certain types of factual knowledge are learned
much more readily than others by standard language model pretraining
approaches. The surprisingly strong ability of these models to recall factual
knowledge without any fine-tuning demonstrates their potential as unsupervised
open-domain QA systems. The code to reproduce our analysis is available at
https://github.com/facebookresearch/LAMA.
| 2,019 | Computation and Language |
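The cloze-style probing described above can be tried with an off-the-shelf masked language model via the Hugging Face fill-mask pipeline. The snippet below is a generic illustration of the querying setup; the paper's actual evaluation code lives in the linked LAMA repository.

```python
# Cloze-style probing of a pretrained masked LM with the Hugging Face
# fill-mask pipeline. This is a generic illustration of the querying setup;
# the paper's evaluation code is in the LAMA repository linked above.

from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

for statement in ["The capital of France is [MASK].",
                  "Dante was born in [MASK]."]:
    print(statement)
    for prediction in unmasker(statement, top_k=3):
        print(f"  {prediction['token_str']:>10}  (score {prediction['score']:.3f})")
```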
Multi-agent Learning for Neural Machine Translation | Conventional Neural Machine Translation (NMT) models benefit from
training with an additional agent, e.g., dual learning, and bidirectional
decoding with one agent decoding from left to right and the other decoding in
the opposite direction. In this paper, we extend the training framework to the
multi-agent scenario by introducing diverse agents in an interactive updating
process. At training time, each agent learns advanced knowledge from others,
and they work together to improve translation quality. Experimental results on
NIST Chinese-English, IWSLT 2014 German-English, WMT 2014 English-German and
large-scale Chinese-English translation tasks indicate that our approach
achieves absolute improvements over the strong baseline systems and shows
competitive performance on all tasks.
| 2,019 | Computation and Language |
Pre-training A Neural Language Model Improves The Sample Efficiency of
an Emergency Room Classification Model | To build a French national electronic injury surveillance system based on
emergency room visits, we aim to develop a coding system to classify their
causes from clinical notes in free-text. Supervised learning techniques have
shown good results in this area but require a large amount of expert annotated
dataset which is time consuming and costly to obtain. We hypothesize that the
Natural Language Processing Transformer model incorporating a generative
self-supervised pre-training step can significantly reduce the required number
of annotated samples for supervised fine-tuning. In this preliminary study, we
test our hypothesis in the simplified problem of predicting whether a visit is
the consequence of a traumatic event or not from free-text clinical notes.
Using fully re-trained GPT-2 models (without OpenAI pre-trained weights), we
assess the gain of applying a self-supervised pre-training phase with unlabeled
notes prior to the supervised learning task. Results show that the amount of
data required to achieve a given level of performance (AUC>0.95) was reduced by
a factor of 10 when applying pre-training. Namely, with 16 times more data, the
fully supervised model achieved an improvement of <1% in AUC. To conclude, it is
possible to adapt a multi-purpose neural language model such as the GPT-2 to
create a powerful tool for classification of free-text notes with only a small
number of labeled samples.
| 2,020 | Computation and Language |
Bilingual is At Least Monolingual (BALM): A Novel Translation Algorithm
that Encodes Monolingual Priors | State-of-the-art machine translation (MT) models do not use knowledge of any
single language's structure; this is the equivalent of asking someone to
translate from English to German while knowing neither language. BALM is a
framework that incorporates monolingual priors into an MT pipeline; by casting input
and output languages into embedded space using BERT, we can solve machine
translation with much simpler models. We find that English-to-German
translation on the Multi30k dataset can be solved with a simple feedforward
network under the BALM framework with near-SOTA BLEU scores.
| 2,019 | Computation and Language |
Encode, Tag, Realize: High-Precision Text Editing | We propose LaserTagger - a sequence tagging approach that casts text
generation as a text editing task. Target texts are reconstructed from the
inputs using three main edit operations: keeping a token, deleting it, and
adding a phrase before the token. To predict the edit operations, we propose a
novel model, which combines a BERT encoder with an autoregressive Transformer
decoder. This approach is evaluated on English text on four tasks: sentence
fusion, sentence splitting, abstractive summarization, and grammar correction.
LaserTagger achieves new state-of-the-art results on three of these tasks,
performs comparably to a set of strong seq2seq baselines with a large number of
training examples, and outperforms them when the number of examples is limited.
Furthermore, we show that at inference time tagging can be more than two orders
of magnitude faster than comparable seq2seq models, making it more attractive
for running in a live environment.
| 2,019 | Computation and Language |
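The three edit operations (keep a token, delete it, add a phrase before it) can be made concrete by applying a tag sequence to a source sentence. The tag encoding below is a simplified illustration of the idea, not the exact tag vocabulary of the released LaserTagger model.

```python
# Applying LaserTagger-style edit tags to a source token sequence. Each tag is
# KEEP or DELETE, optionally carrying a phrase to insert before the token. The
# tag format below is a simplified illustration, not the released tag vocabulary.

def realize(tokens, tags):
    out = []
    for token, (added_phrase, keep) in zip(tokens, tags):
        if added_phrase:                 # phrase inserted before the current token
            out.extend(added_phrase.split())
        if keep:                         # KEEP vs DELETE
            out.append(token)
    return " ".join(out)

# Sentence fusion: merge two sentences by dropping the repeated subject and
# inserting a connective before the second clause.
tokens = "Turing was born in 1912 . Turing died in 1954 .".split()
tags = [("", True)] * 5 + [("", False),          # drop the first period
        ("and he", False),                       # insert connective, drop "Turing"
        ("", True), ("", True), ("", True), ("", True)]
print(realize(tokens, tags))                     # Turing was born in 1912 and he died in 1954 .
```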
Better Rewards Yield Better Summaries: Learning to Summarise Without
References | Reinforcement Learning (RL) based document summarisation systems yield
state-of-the-art performance in terms of ROUGE scores, because they directly
use ROUGE as the rewards during training. However, summaries with high ROUGE
scores often receive low human judgement. To find a better reward function that
can guide RL to generate human-appealing summaries, we learn a reward function
from human ratings on 2,500 summaries. Our reward function only takes the
document and system summary as input. Hence, once trained, it can be used to
train RL-based summarisation systems without using any reference summaries. We
show that our learned rewards have significantly higher correlation with human
ratings than previous approaches. Human evaluation experiments show that,
compared to the state-of-the-art supervised-learning systems and
ROUGE-as-rewards RL summarisation systems, the RL systems using our learned
rewards during training generate summaries with higher human ratings. The
learned reward function and our source code are available at
https://github.com/yg211/summary-reward-no-reference.
| 2,019 | Computation and Language |
Introducing RONEC -- the Romanian Named Entity Corpus | We present RONEC - the Named Entity Corpus for the Romanian language. The
corpus contains over 26000 entities in ~5000 annotated sentences, belonging to
16 distinct classes. The sentences have been extracted from a copyright-free
newspaper, covering several styles. This corpus represents the first initiative
in the Romanian language space specifically targeted for named entity
recognition. It is available in BRAT and CoNLL-U Plus formats, and it is free
to use and extend at github.com/dumitrescustefan/ronec .
| 2,020 | Computation and Language |
Neural Attentive Bag-of-Entities Model for Text Classification | This study proposes a Neural Attentive Bag-of-Entities model, which is a
neural network model that performs text classification using entities in a
knowledge base. Entities provide unambiguous and relevant semantic signals that
are beneficial for capturing semantics in texts. We combine simple high-recall
entity detection based on a dictionary, to detect entities in a document, with
a novel neural attention mechanism that enables the model to focus on a small
number of unambiguous and relevant entities. We tested the effectiveness of our
model using two standard text classification datasets (i.e., the 20 Newsgroups
and R8 datasets) and a popular factoid question answering dataset based on a
trivia quiz game. As a result, our model achieved state-of-the-art results on
all datasets. The source code of the proposed model is available online at
https://github.com/wikipedia2vec/wikipedia2vec.
| 2,019 | Computation and Language |
Aspect Detection using Word and Char Embeddings with (Bi)LSTM and CRF | We propose a new, accurate aspect extraction method that makes use of both
word- and character-based embeddings. We have conducted experiments with various
models of aspect extraction using LSTM and BiLSTM, including CRF enhancement, on
five different pre-trained word embeddings extended with character embeddings.
The results revealed that BiLSTM outperforms regular LSTM, and also that word
embedding coverage in the train and test sets profoundly impacts aspect detection
performance. Moreover, the additional CRF layer consistently improves the
results across different models and text embeddings. Summing up, we obtained
state-of-the-art F-score results for SemEval Restaurants (85%) and Laptops
(80%).
| 2,019 | Computation and Language |
PolyResponse: A Rank-based Approach to Task-Oriented Dialogue with
Application in Restaurant Search and Booking | We present PolyResponse, a conversational search engine that supports
task-oriented dialogue. It is a retrieval-based approach that bypasses the
complex multi-component design of traditional task-oriented dialogue systems
and the use of explicit semantics in the form of task-specific ontologies. The
PolyResponse engine is trained on hundreds of millions of examples extracted
from real conversations: it learns what responses are appropriate in different
conversational contexts. It then ranks a large index of text and visual
responses according to their similarity to the given context, and narrows down
the list of relevant entities during the multi-turn conversation. We introduce
a restaurant search and booking system powered by the PolyResponse engine,
currently available in 8 different languages.
| 2,019 | Computation and Language |
CMU GetGoing: An Understandable and Memorable Dialog System for Seniors | Voice-based technologies are typically developed for the average user, and
thus generally not tailored to the specific needs of any subgroup of the
population, like seniors. This paper presents CMU GetGoing, an accessible trip
planning dialog system designed for senior users. The GetGoing system design is
described in detail, with particular attention to the senior-tailored features.
A user study is presented, demonstrating that the senior-tailored features
significantly improve comprehension and retention of information.
| 2,019 | Computation and Language |
The Woman Worked as a Babysitter: On Biases in Language Generation | We present a systematic study of biases in natural language generation (NLG)
by analyzing text generated from prompts that contain mentions of different
demographic groups. In this work, we introduce the notion of the regard towards
a demographic, use the varying levels of regard towards different demographics
as a defining metric for bias in NLG, and analyze the extent to which sentiment
scores are a relevant proxy metric for regard. To this end, we collect
strategically-generated text from language models and manually annotate the
text with both sentiment and regard scores. Additionally, we build an automatic
regard classifier through transfer learning, so that we can analyze biases in
unseen text. Together, these methods reveal the extent of the biased nature of
language model generations. Our analysis provides a study of biases in NLG,
bias metrics and correlated human judgments, and empirical evidence on the
usefulness of our annotated dataset.
| 2,019 | Computation and Language |
Trouble on the Horizon: Forecasting the Derailment of Online
Conversations as they Develop | Online discussions often derail into toxic exchanges between participants.
Recent efforts mostly focused on detecting antisocial behavior after the fact,
by analyzing single comments in isolation. To provide more timely notice to
human moderators, a system needs to preemptively detect that a conversation is
heading towards derailment before it actually turns toxic. This means modeling
derailment as an emerging property of a conversation rather than as an isolated
utterance-level event.
Forecasting emerging conversational properties, however, poses several
inherent modeling challenges. First, since conversations are dynamic, a
forecasting model needs to capture the flow of the discussion, rather than
properties of individual comments. Second, real conversations have an unknown
horizon: they can end or derail at any time; thus a practical forecasting model
needs to assess the risk in an online fashion, as the conversation develops. In
this work we introduce a conversational forecasting model that learns an
unsupervised representation of conversational dynamics and exploits it to
predict future derailment as the conversation develops. By applying this model
to two new diverse datasets of online conversations with labels for antisocial
events, we show that it outperforms state-of-the-art systems at forecasting
derailment.
| 2,019 | Computation and Language |
The Bottom-up Evolution of Representations in the Transformer: A Study
with Machine Translation and Language Modeling Objectives | We seek to understand how the representations of individual tokens and the
structure of the learned feature space evolve between layers in deep neural
networks under different learning objectives. We focus on the Transformers for
our analysis as they have been shown effective on various tasks, including
machine translation (MT), standard left-to-right language models (LM) and
masked language modeling (MLM). Previous work used black-box probing tasks to
show that the representations learned by the Transformer differ significantly
depending on the objective. In this work, we use canonical correlation analysis
and mutual information estimators to study how information flows across
Transformer layers and how this process depends on the choice of learning
objective. For example, as we go from bottom to top layers, information about
the past in left-to-right language models vanishes and predictions about
the future are formed. In contrast, for MLM, representations initially acquire
information about the context around the token, partially forgetting the token
identity and producing a more generalized token representation. The token
identity then gets recreated at the top MLM layers.
| 2,019 | Computation and Language |
Context-Aware Monolingual Repair for Neural Machine Translation | Modern sentence-level NMT systems often produce plausible translations of
isolated sentences. However, when put in context, these translations may end up
being inconsistent with each other. We propose a monolingual DocRepair model to
correct inconsistencies between sentence-level translations. DocRepair performs
automatic post-editing on a sequence of sentence-level translations, refining
translations of sentences in the context of each other. For training, the DocRepair
model requires only monolingual document-level data in the target language. It
is trained as a monolingual sequence-to-sequence model that maps inconsistent
groups of sentences into consistent ones. The consistent groups come from the
original training data; the inconsistent groups are obtained by sampling
round-trip translations for each isolated sentence. We show that this approach
successfully imitates inconsistencies we aim to fix: using contrastive
evaluation, we show large improvements in the translation of several contextual
phenomena in an English-Russian translation task, as well as improvements in
the BLEU score. We also conduct a human evaluation and show a strong preference
of the annotators to corrected translations over the baseline ones. Moreover,
we analyze which discourse phenomena are hard to capture using monolingual data
only.
| 2,019 | Computation and Language |
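The training-data construction sketched in this abstract (consistent groups from monolingual documents, inconsistent groups from per-sentence round-trip translation) can be outlined as follows. The `translate_to_source` and `translate_to_target` arguments are hypothetical placeholders for whatever sentence-level NMT systems are used; the toy stand-ins only make the sketch runnable.

```python
# Sketch of DocRepair-style training-pair construction from monolingual
# target-language documents: the consistent group is the original document,
# the inconsistent group comes from round-trip translating each sentence in
# isolation. `translate_to_source` / `translate_to_target` are hypothetical
# placeholders for sentence-level NMT systems.

def round_trip(sentence, translate_to_source, translate_to_target):
    """Translate a target-language sentence out and back, one sentence at a time."""
    return translate_to_target(translate_to_source(sentence))

def make_training_pair(document, translate_to_source, translate_to_target):
    consistent = document                               # original target-language document
    inconsistent = [round_trip(s, translate_to_source, translate_to_target)
                    for s in document]                  # context-agnostic round trips
    return inconsistent, consistent                     # model maps inconsistent -> consistent

# Toy stand-ins so the sketch runs: the "round trip" lowercases the first
# character, mimicking a small context-independent inconsistency.
fake_out = lambda s: s
fake_back = lambda s: s[:1].lower() + s[1:]
document = ["She bought a car.", "The car is red."]
print(make_training_pair(document, fake_out, fake_back))
```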
How to Build User Simulators to Train RL-based Dialog Systems | User simulators are essential for training reinforcement learning (RL) based
dialog models. The performance of the simulator directly impacts the RL policy.
However, building a good user simulator that models real user behaviors is
challenging. We propose a method of standardizing user simulator building that
can be used by the community to compare dialog system quality using the same
set of user simulators fairly. We present implementations of six user
simulators trained with different dialog planning and generation methods. We
then calculate a set of automatic metrics to evaluate the quality of these
simulators both directly and indirectly. We also ask human users to assess the
simulators directly and indirectly by rating the simulated dialogs and
interacting with the trained systems. This paper presents a comprehensive
evaluation framework for user simulator study and provides a better
understanding of the pros and cons of different user simulators, as well as
their impacts on the trained systems.
| 2,019 | Computation and Language |
CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Everyone makes mistakes. So do human annotators when curating labels for
named entity recognition (NER). Such label mistakes might hurt model training
and interfere with model comparison. In this study, we dive deep into one of the
widely adopted NER benchmark datasets, CoNLL03 NER. We are able to identify
label mistakes in about 5.38% of test sentences, which is a significant ratio
considering that the state-of-the-art test F1 score is already around 93%.
Therefore, we manually correct these label mistakes and form a cleaner test
set. Our re-evaluation of popular models on this corrected test set leads to
more accurate assessments, compared to those on the original test set. More
importantly, we propose a simple yet effective framework, CrossWeigh, to handle
label mistakes during NER model training. Specifically, it partitions the
training data into several folds and trains independent NER models to identify
potential mistakes in each fold. Then it adjusts the weights of the training data
accordingly to train the final NER model. Extensive experiments demonstrate
significant improvements from plugging various NER models into our proposed
framework on three datasets. All implementations and the corrected test set are
available at our Github repo: https://github.com/ZihanWangKi/CrossWeigh.
| 2,019 | Computation and Language |
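A simplified sketch of the fold-based weighting idea is given below: split the training data into folds, use models trained on the remaining folds to flag sentences whose gold labels look suspicious, and downweight those sentences when training the final model. The training and prediction functions are stubs and the weighting constant is illustrative; the released implementation in the repository above is more involved.

```python
# Simplified sketch of a CrossWeigh-style weighting loop: split the training
# data into folds, use models trained on the remaining folds to flag sentences
# whose gold labels look suspicious, and downweight them for final training.
# train_fn/predict_fn are stubs and epsilon is illustrative.

import random

def crossweigh_weights(examples, train_fn, predict_fn, k=5, epsilon=0.7, seed=0):
    rng = random.Random(seed)
    order = list(range(len(examples)))
    rng.shuffle(order)
    folds = [order[i::k] for i in range(k)]
    weights = [1.0] * len(examples)
    for held_out in folds:
        held = set(held_out)
        model = train_fn([examples[i] for i in range(len(examples)) if i not in held])
        for i in held_out:
            sentence, gold_labels = examples[i]
            if predict_fn(model, sentence) != gold_labels:   # potential annotation mistake
                weights[i] *= epsilon                         # downweight for final training
    return weights

# Toy stand-ins so the sketch runs; real usage would plug in an NER trainer and tagger.
examples = [(["John", "lives", "here"], ["B-PER", "O", "O"]),
            (["Paris", "is", "nice"], ["O", "O", "O"])]       # second annotation looks wrong
train_fn = lambda data: None
predict_fn = lambda model, sent: ["B-PER" if t[0].isupper() else "O" for t in sent]
print(crossweigh_weights(examples, train_fn, predict_fn, k=2))
```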
Interpretable Word Embeddings via Informative Priors | Word embeddings have demonstrated strong performance on NLP tasks. However,
lack of interpretability and the unsupervised nature of word embeddings have
limited their use within computational social science and digital humanities.
We propose the use of informative priors to create interpretable and
domain-informed dimensions for probabilistic word embeddings. Experimental
results show that sensible priors can capture latent semantic concepts better
than or on-par with the current state of the art, while retaining the
simplicity and generalizability of using priors.
| 2,019 | Computation and Language |
Predicting Specificity in Classroom Discussion | High quality classroom discussion is important to student development,
enhancing abilities to express claims, reason about other students' claims, and
retain information for longer periods of time. Previous small-scale studies
have shown that one indicator of classroom discussion quality is specificity.
In this paper we tackle the problem of predicting specificity for classroom
discussions. We propose several methods and feature sets capable of
outperforming the state of the art in specificity prediction. Additionally, we
provide a set of meaningful, interpretable features that can be used to analyze
classroom discussions at a pedagogical level.
| 2,017 | Computation and Language |
Target Language-Aware Constrained Inference for Cross-lingual Dependency
Parsing | Prior work on cross-lingual dependency parsing often focuses on capturing the
commonalities between source and target languages and overlooks the potential
of leveraging linguistic properties of the languages to facilitate the
transfer. In this paper, we show that weak supervisions of linguistic knowledge
for the target languages can improve a cross-lingual graph-based dependency
parser substantially. Specifically, we explore several types of corpus
linguistic statistics and compile them into corpus-wise constraints to guide
the inference process at test time. We adapt two techniques, Lagrangian
relaxation and posterior regularization, to conduct inference with
corpus-statistics constraints. Experiments show that the Lagrangian relaxation
and posterior regularization inference improve performance on 15 and 17
out of 19 target languages, respectively. The improvements are especially
significant for target languages that have different word order features from
the source language.
| 2,019 | Computation and Language |
Achieving Verified Robustness to Symbol Substitutions via Interval Bound
Propagation | Neural networks are part of many contemporary NLP systems, yet their
empirical successes come at the price of vulnerability to adversarial attacks.
Previous work has used adversarial training and data augmentation to partially
mitigate such brittleness, but these are unlikely to find worst-case
adversaries due to the complexity of the search space arising from discrete
text perturbations. In this work, we approach the problem from the opposite
direction: to formally verify a system's robustness against a predefined class
of adversarial attacks. We study text classification under synonym replacements
or character flip perturbations. We propose modeling these input perturbations
as a simplex and then using Interval Bound Propagation -- a formal model
verification method. We modify the conventional log-likelihood training
objective to train models that can be efficiently verified, which would
otherwise come with exponential search complexity. The resulting models show
little difference in terms of nominal accuracy, but have much improved
verified accuracy under perturbations and come with an efficiently computable
formal guarantee on worst case adversaries.
| 2,019 | Computation and Language |
Neural Linguistic Steganography | Whereas traditional cryptography encrypts a secret message into an
unintelligible form, steganography conceals that communication is taking place
by encoding a secret message into a cover signal. Language is a particularly
pragmatic cover signal due to its benign occurrence and independence from any
one medium. Traditionally, linguistic steganography systems encode secret
messages in existing text via synonym substitution or word order
rearrangements. Advances in neural language models enable previously
impractical generation-based techniques. We propose a steganography technique
based on arithmetic coding with large-scale neural language models. We find
that our approach can generate realistic looking cover sentences as evaluated
by humans, while at the same time preserving security by matching the cover
message distribution with the language model distribution.
| 2,019 | Computation and Language |
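A much simpler scheme than the paper's arithmetic coder conveys the generation-based idea: each secret bit selects between the language model's two most likely next tokens, so anyone who can rerun the same model can recover the bits. The sketch below implements this toy one-bit-per-token variant, not the arithmetic-coding construction, and `next_token_probs` is a hypothetical LM hook.

```python
# Toy illustration of generation-based linguistic steganography: each secret
# bit selects between the language model's two most likely next tokens, so the
# cover text is decodable by anyone who can rerun the same model. This is a
# one-bit-per-token simplification, NOT the arithmetic-coding construction in
# the paper, and `next_token_probs` is a hypothetical LM hook.

def encode(bits, prefix, next_token_probs, length):
    tokens = list(prefix)
    for i in range(length):
        top2 = sorted(next_token_probs(tokens), key=lambda kv: -kv[1])[:2]
        choice = bits[i] if i < len(bits) else 0   # pad with 0 once the message ends
        tokens.append(top2[choice][0])
    return tokens

def decode(tokens, prefix_len, next_token_probs):
    bits = []
    for i in range(prefix_len, len(tokens)):
        top2 = sorted(next_token_probs(tokens[:i]), key=lambda kv: -kv[1])[:2]
        bits.append(0 if tokens[i] == top2[0][0] else 1)
    return bits

# Deterministic toy "LM" so the sketch runs end to end.
def toy_lm(context):
    vocab = ["the", "a", "cat", "dog", "sat", "ran"]
    return [(w, 1.0 / (j + 1)) for j, w in enumerate(vocab)]

cover = encode([1, 0, 1], ["<s>"], toy_lm, length=4)
print(cover, decode(cover, prefix_len=1, next_token_probs=toy_lm))
```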
Meta Relational Learning for Few-Shot Link Prediction in Knowledge
Graphs | Link prediction is an important way to complete knowledge graphs (KGs), while
embedding-based methods, effective for link prediction in KGs, perform poorly
on relations that only have a few associative triples. In this work, we propose
a Meta Relational Learning (MetaR) framework to do the common but challenging
few-shot link prediction in KGs, namely predicting new triples about a relation
by only observing a few associative triples. We solve few-shot link prediction
by focusing on transferring relation-specific meta information to make the model
learn the most important knowledge and learn faster, corresponding to relation
meta and gradient meta, respectively, in MetaR. Empirically, our model achieves
state-of-the-art results on few-shot link prediction KG benchmarks.
| 2,019 | Computation and Language |
Towards Realistic Practices In Low-Resource Natural Language Processing:
The Development Set | Development sets are impractical to obtain for real low-resource languages,
since using all available data for training is often more effective. However,
development sets are widely used in research papers that purport to deal with
low-resource natural language processing (NLP). Here, we aim to answer the
following questions: Does using a development set for early stopping in the
low-resource setting influence results as compared to a more realistic
alternative, where the number of training epochs is tuned on development
languages? And does it lead to overestimation or underestimation of
performance? We repeat multiple experiments from recent work on neural models
for low-resource NLP and compare results for models obtained by training with
and without development sets. On average over languages, absolute accuracy
differs by up to 1.4%. However, for some languages and tasks, differences are
as big as 18.0% accuracy. Our results highlight the importance of realistic
experimental setups in the publication of low-resource NLP research results.
| 2,019 | Computation and Language |
Referring Expression Generation Using Entity Profiles | Referring Expression Generation (REG) is the task of generating contextually
appropriate references to entities. A limitation of existing REG systems is
that they rely on entity-specific supervised training, which means that they
cannot handle entities not seen during training. In this study, we address this
in two ways. First, we propose task setups in which we specifically test a REG
system's ability to generalize to entities not seen during training. Second, we
propose a profile-based deep neural network model, ProfileREG, which encodes
both the local context and an external profile of the entity to generate
reference realizations. Our model generates tokens by learning to choose
between generating pronouns, generating from a fixed vocabulary, or copying a
word from the profile. We evaluate our model on three different splits of the
WebNLG dataset, and show that it outperforms competitive baselines in all
settings according to automatic and human evaluations.
| 2,019 | Computation and Language |
Simpler and Faster Learning of Adaptive Policies for Simultaneous
Translation | Simultaneous translation is widely useful but remains challenging. Previous
work falls into two main categories: (a) fixed-latency policies such as Ma et
al. (2019) and (b) adaptive policies such as Gu et al. (2017). The former are
simple and effective, but have to aggressively predict future content due to
diverging source-target word order; the latter do not anticipate, but suffer
from unstable and inefficient training. To combine the merits of both
approaches, we propose a simple supervised-learning framework to learn an
adaptive policy from oracle READ/WRITE sequences generated from parallel text.
At each step, such an oracle sequence chooses to WRITE the next target word if
the available source sentence context provides enough information to do so,
otherwise READ the next source word. Experiments on German<->English show that
our method, without retraining the underlying NMT model, can learn flexible
policies with better BLEU scores and similar latencies compared to previous
work.
| 2,019 | Computation and Language |
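A small sketch of how oracle READ/WRITE action sequences like those described in the abstract above could be derived from word-aligned parallel text: WRITE a target word once every source word it aligns to has been read, otherwise READ. The alignment format and the example sentence pair are illustrative assumptions, not the paper's data.

```python
# Derive an oracle READ/WRITE action sequence from word alignments: a target word
# is written as soon as all source words it aligns to have been read.

def oracle_actions(src_len, tgt_len, alignments):
    """alignments: set of (src_idx, tgt_idx) pairs, 0-indexed."""
    actions = []
    read = 0  # number of source words read so far
    for t in range(tgt_len):
        # last source position this target word depends on (0 if unaligned)
        needed = max([s + 1 for s, tt in alignments if tt == t], default=0)
        while read < needed:
            actions.append("READ")
            read += 1
        actions.append("WRITE")
    # read any remaining source words at the end
    actions.extend(["READ"] * (src_len - read))
    return actions

if __name__ == "__main__":
    # hypothetical 4-word source, 3-word target with a crossing alignment
    aligns = {(0, 0), (2, 1), (1, 2), (3, 2)}
    print(oracle_actions(src_len=4, tgt_len=3, alignments=aligns))
    # ['READ', 'WRITE', 'READ', 'READ', 'WRITE', 'READ', 'WRITE']
```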
Towards Better Modeling Hierarchical Structure for Self-Attention with
Ordered Neurons | Recent studies have shown that a hybrid of self-attention networks (SANs) and
recurrent neural networks (RNNs) outperforms both individual architectures,
while not much is known about why the hybrid models work. With the belief that
modeling hierarchical structure is an essential complement between SANs and
RNNs, we propose to further enhance the strength of hybrid models with an
advanced variant of RNNs - Ordered Neurons LSTM (ON-LSTM), which introduces a
syntax-oriented inductive bias to perform tree-like composition. Experimental
results on the benchmark machine translation task show that the proposed
approach outperforms both individual architectures and a standard hybrid model.
Further analyses on targeted linguistic evaluation and logical inference tasks
demonstrate that the proposed approach indeed benefits from a better modeling
of hierarchical structure.
| 2,019 | Computation and Language |
AMR Normalization for Fairer Evaluation | Abstract Meaning Representation (AMR; Banarescu et al., 2013) encodes the meaning of
sentences as a directed graph and Smatch (Cai and Knight, 2013) is the primary
metric for evaluating AMR graphs. Smatch, however, is unaware of some
meaning-equivalent variations in graph structure allowed by the AMR
Specification and gives different scores for AMRs exhibiting these variations.
In this paper I propose four normalization methods for helping to ensure that
conceptually equivalent AMRs are evaluated as equivalent. Equivalent AMRs with
and without normalization can look quite different---comparing a gold corpus to
itself with relation reification alone yields a difference of 25 Smatch points,
suggesting that the outputs of two systems may not be directly comparable
without normalization. The algorithms described in this paper are implemented
on top of an existing open-source Python toolkit for AMR and will be released
under the same license.
| 2,019 | Computation and Language |
Discovering Hypernymy in Text-Rich Heterogeneous Information Network by
Exploiting Context Granularity | Text-rich heterogeneous information networks (text-rich HINs) are ubiquitous
in real-world applications. Hypernymy, also known as is-a relation or
subclass-of relation, lays in the core of many knowledge graphs and benefits
many downstream applications. Existing methods of hypernymy discovery either
leverage textual patterns to extract explicitly mentioned hypernym-hyponym
pairs, or learn a distributional representation for each term of interest based on
its context. These approaches rely on statistical signals from the textual
corpus, and their effectiveness would therefore be hindered when the signals
from the corpus are not sufficient for all terms of interest. In this work, we
propose to discover hypernymy in text-rich HINs, which can introduce additional
high-quality signals. We develop a new framework, named HyperMine, that
exploits multi-granular contexts and combines signals from both text and
network without human labeled data. HyperMine extends the definition of context
to the scenario of text-rich HIN. For example, we can define typed nodes and
communities as contexts. These contexts encode signals of different
granularities and we feed them into a hypernymy inference model. HyperMine
learns this model using weak supervision acquired based on high-precision
textual patterns. Extensive experiments on two large real-world datasets
demonstrate the effectiveness of HyperMine and the utility of modeling context
granularity. We further present a case study in which a high-quality taxonomy is
generated solely based on the hypernymy discovered by HyperMine.
| 2,019 | Computation and Language |
Answers Unite! Unsupervised Metrics for Reinforced Summarization Models | Abstractive summarization approaches based on Reinforcement Learning (RL)
have recently been proposed to overcome classical likelihood maximization. RL
makes it possible to consider complex, possibly non-differentiable, metrics that globally
assess the quality and relevance of the generated outputs. ROUGE, the most used
summarization metric, is known to suffer from bias towards lexical similarity
as well as from suboptimal accounting for fluency and readability of the
generated abstracts. We thus explore and propose alternative evaluation
measures: the reported human-evaluation analysis shows that the proposed
metrics, based on Question Answering, compare favorably to ROUGE -- with the
additional property of not requiring reference summaries. Training an RL-based
model on these metrics leads to improvements (in terms of both human and
automated metrics) over current approaches that use ROUGE as a reward.
| 2,019 | Computation and Language |
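A rough sketch of the Question-Answering-based evaluation idea from the abstract above: ask the same questions against the source document and against the generated summary and count answer agreement. It assumes the Hugging Face `transformers` question-answering pipeline as the QA component; the example questions are hypothetical and would normally be generated automatically, and a real metric would use a softer answer comparison than exact string match.

```python
# Sketch of a QA-based summary score: answer the same questions from the source and
# from the summary, and report the fraction of matching answers.
from transformers import pipeline

qa = pipeline("question-answering")  # default extractive QA model

def qa_agreement(source, summary, questions):
    agree = 0
    for q in questions:
        a_src = qa(question=q, context=source)["answer"].strip().lower()
        a_sum = qa(question=q, context=summary)["answer"].strip().lower()
        agree += int(a_src == a_sum)   # crude exact match; real metrics soften this
    return agree / len(questions)

if __name__ == "__main__":
    source = "The city council approved the new park budget of 2 million dollars on Monday."
    summary = "The council approved a 2 million dollar park budget."
    questions = ["How much is the park budget?", "Who approved the budget?"]
    print(f"QA agreement: {qa_agreement(source, summary, questions):.2f}")
```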
Do We Really Need Fully Unsupervised Cross-Lingual Embeddings? | Recent efforts in cross-lingual word embedding (CLWE) learning have
predominantly focused on fully unsupervised approaches that project monolingual
embeddings into a shared cross-lingual space without any cross-lingual signal.
The lack of any supervision makes such approaches conceptually attractive. Yet,
their only core difference from (weakly) supervised projection-based CLWE
methods is in the way they obtain a seed dictionary used to initialize an
iterative self-learning procedure. The fully unsupervised methods have arguably
become more robust, and their primary use case is CLWE induction for pairs of
resource-poor and distant languages. In this paper, we question the ability of
even the most robust unsupervised CLWE approaches to induce meaningful CLWEs in
these more challenging settings. A series of bilingual lexicon induction (BLI)
experiments with 15 diverse languages (210 language pairs) show that fully
unsupervised CLWE methods still fail for a large number of language pairs
(e.g., they yield zero BLI performance for 87/210 pairs). Even when they
succeed, they never surpass the performance of weakly supervised methods
(seeded with 500-1,000 translation pairs) using the same self-learning
procedure in any BLI setup, and the gaps are often substantial. These findings
call for revisiting the main motivations behind fully unsupervised CLWE
methods.
| 2,019 | Computation and Language |
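A minimal sketch of the bilingual lexicon induction (BLI) evaluation used throughout the study above: for each source word, retrieve the nearest target word by cosine similarity in the shared space and score precision@1 against a gold dictionary. The embeddings and word lists below are random stand-ins for projected cross-lingual embeddings.

```python
# BLI precision@1 with cosine nearest-neighbour retrieval over a shared embedding space.
import numpy as np

def bli_precision_at_1(src_emb, tgt_emb, gold_pairs):
    """src_emb, tgt_emb: dict word -> vector; gold_pairs: list of (src, tgt)."""
    tgt_words = list(tgt_emb)
    T = np.stack([tgt_emb[w] for w in tgt_words])
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    correct = 0
    for s, t in gold_pairs:
        v = src_emb[s] / np.linalg.norm(src_emb[s])
        nearest = tgt_words[int(np.argmax(T @ v))]
        correct += int(nearest == t)
    return correct / len(gold_pairs)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = {w: rng.normal(size=50) for w in ["hund", "katze", "haus"]}
    # make the correct translations close to their sources for illustration
    tgt = {"dog": src["hund"] + 0.01 * rng.normal(size=50),
           "cat": src["katze"] + 0.01 * rng.normal(size=50),
           "house": src["haus"] + 0.01 * rng.normal(size=50)}
    print(bli_precision_at_1(src, tgt, [("hund", "dog"), ("katze", "cat"), ("haus", "house")]))
```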
ParaQG: A System for Generating Questions and Answers from Paragraphs | Generating syntactically and semantically valid and relevant questions from
paragraphs has many useful applications. Manual generation is a
labour-intensive task, as it requires the reading, parsing and understanding of
long passages of text. A number of question generation models based on
sequence-to-sequence techniques have recently been proposed. Most of them
generate questions from sentences only, and none of them is publicly available
as an easy-to-use service. In this paper, we demonstrate ParaQG, a Web-based
system for generating questions from sentences and paragraphs. ParaQG
incorporates a number of novel functionalities to make the question generation
process user-friendly. It provides an interactive interface for a user to
select answers with visual insights on generation of questions. It also employs
various faceted views to group similar questions as well as filtering
techniques to eliminate unanswerable questions.
| 2,019 | Computation and Language |
DurIAN: Duration Informed Attention Network For Multimodal Synthesis | In this paper, we present a generic and robust multimodal synthesis system
that produces highly natural speech and facial expression simultaneously. The
key component of this system is the Duration Informed Attention Network
(DurIAN), an autoregressive model in which the alignments between the input
text and the output acoustic features are inferred from a duration model. This
differs from the end-to-end attention mechanism used in existing end-to-end speech
synthesis systems such as Tacotron, which accounts for various unavoidable
artifacts in those systems. Furthermore, DurIAN can be used to generate high-quality
facial expressions that can be synchronized with the generated speech, with or without
parallel speech and face data. To improve the efficiency of speech generation,
we also propose a multi-band parallel generation strategy on top of the WaveRNN
model. The proposed Multi-band WaveRNN effectively reduces the total
computational complexity from 9.8 to 5.5 GFLOPS, and is able to generate audio
that is 6 times faster than real time on a single CPU core. We show that DurIAN
can generate highly natural speech that is on par with current state-of-the-art
end-to-end systems, while at the same time avoiding the word skipping/repeating
errors seen in those systems. Finally, a simple yet effective approach for
fine-grained control of expressiveness of speech and facial expression is
introduced.
| 2,019 | Computation and Language |
SAO WMT19 Test Suite: Machine Translation of Audit Reports | This paper describes a machine translation test set of documents from the
auditing domain and its use as one of the "test suites" in the WMT19 News
Translation Task for translation directions involving Czech, English and
German.
Our evaluation suggests that current MT systems optimized for the general
news domain can perform quite well even in the particular domain of audit
reports. The detailed manual evaluation however indicates that deep factual
knowledge of the domain is necessary. To the naked eye of a non-expert,
translations by many systems seem almost perfect and automatic MT evaluation
with one reference is practically useless for considering these details.
Furthermore, we show on a sample document from the domain of agreements that
even the best systems completely fail in preserving the semantics of the
agreement, namely the identity of the parties.
| 2,019 | Computation and Language |
ScisummNet: A Large Annotated Corpus and Content-Impact Models for
Scientific Paper Summarization with Citation Networks | Scientific article summarization is challenging: large, annotated corpora are
not available, and the summary should ideally include the article's impact on
the research community. This paper provides novel solutions to these two
challenges. We 1) develop and release the first large-scale manually-annotated
corpus for scientific papers (on computational linguistics) by enabling faster
annotation, and 2) propose summarization methods that integrate the authors'
original highlights (abstract) and the article's actual impacts on the
community (citations), to create comprehensive, hybrid summaries. We conduct
experiments to demonstrate the efficacy of our corpus in training data-driven
models for scientific paper summarization and the advantage of our hybrid
summaries over abstracts and traditional citation-based summaries. Our large
annotated corpus and hybrid methods provide a new framework for scientific
paper summarization research.
| 2,019 | Computation and Language |
Different Absorption from the Same Sharing: Sifted Multi-task Learning
for Fake News Detection | Recently, neural networks based on multi-task learning, which focus on learning
shared features among tasks as complementary features to serve different tasks,
have achieved promising performance on fake news detection.
However, in most of the existing approaches, the shared features are completely
assigned to different tasks without selection, which may lead to some useless
and even adverse features integrated into specific tasks. In this paper, we
design a sifted multi-task learning method with a selected sharing layer for
fake news detection. The selected sharing layer adopts gate mechanism and
attention mechanism to filter and select shared feature flows between tasks.
Experiments on two public and widely used competition datasets, i.e. RumourEval
and PHEME, demonstrate that our proposed method achieves the state-of-the-art
performance and boosts the F1-score by more than 0.87% and 1.31%, respectively.
| 2,019 | Computation and Language |
Single Training Dimension Selection for Word Embedding with PCA | In this paper, we present a fast and reliable method based on PCA to select
the number of dimensions for word embeddings. First, we train one embedding
with a generous upper bound (e.g. 1,000) of dimensions. Then we transform the
embeddings using PCA and incrementally remove the lesser dimensions one at a
time while recording the embeddings' performance on language tasks. Lastly, we
select the number of dimensions while balancing model size and accuracy.
Experiments using various datasets and language tasks demonstrate that we are
able to train 10 times fewer sets of embeddings while retaining optimal
performance. Researchers interested in training the best-performing embeddings
for downstream tasks, such as sentiment analysis, question answering and
hypernym extraction, as well as those interested in embedding compression
should find the method helpful.
| 2,019 | Computation and Language |
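A minimal sketch of the PCA-based dimension selection procedure from the abstract above: fit PCA once on a generously sized embedding matrix, score progressively truncated versions, and keep the smallest size whose score stays within a tolerance of the best. The embedding matrix and the evaluation function are stand-ins; a real setup would score downstream language tasks such as word similarity or analogy.

```python
# PCA-based dimension selection: transform once, then evaluate truncated copies.
import numpy as np
from sklearn.decomposition import PCA

def select_dimension(embeddings, evaluate, tolerance=0.01, step=25):
    pca = PCA(n_components=embeddings.shape[1])
    transformed = pca.fit_transform(embeddings)
    scores = {}
    for k in range(embeddings.shape[1], 0, -step):
        scores[k] = evaluate(transformed[:, :k])
    best = max(scores.values())
    # smallest k whose score is within `tolerance` of the best
    return min(k for k, s in scores.items() if s >= best - tolerance)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 300))          # stand-in "upper bound" embedding matrix
    total_var = emb.var(axis=0).sum()
    # toy evaluation: fraction of variance retained (a real one would be task accuracy)
    evaluate = lambda x: x.var(axis=0).sum() / total_var
    print("selected dimension:", select_dimension(emb, evaluate))
```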
Mogrifier LSTM | Many advances in Natural Language Processing have been based upon more
expressive models for how inputs interact with the context in which they occur.
Recurrent networks, which have enjoyed a modicum of success, still lack the
generalization and systematicity ultimately required for modelling language. In
this work, we propose an extension to the venerable Long Short-Term Memory in
the form of mutual gating of the current input and the previous output. This
mechanism affords the modelling of a richer space of interactions between
inputs and their context. Equivalently, our model can be viewed as making the
transition function given by the LSTM context-dependent. Experiments
demonstrate markedly improved generalization on language modelling in the range
of 3-4 perplexity points on Penn Treebank and Wikitext-2, and 0.01-0.05 bpc on
four character-based datasets. We establish a new state of the art on all
datasets with the exception of Enwik8, where we close a large gap between the
LSTM and Transformer models.
| 2,020 | Computation and Language |
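A small PyTorch sketch of the mutual gating described in the abstract above: before each LSTM step, the input and the previous hidden state repeatedly gate each other, and a standard LSTMCell then runs on the modulated pair. The number of rounds and the layer sizes are illustrative assumptions.

```python
# Mogrifier-style mutual gating followed by a standard LSTMCell.
import torch
import torch.nn as nn

class MogrifierLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, rounds=5):
        super().__init__()
        self.rounds = rounds
        self.lstm = nn.LSTMCell(input_size, hidden_size)
        # odd rounds update x from h, even rounds update h from x
        self.q = nn.ModuleList(nn.Linear(hidden_size, input_size, bias=False)
                               for _ in range((rounds + 1) // 2))
        self.r = nn.ModuleList(nn.Linear(input_size, hidden_size, bias=False)
                               for _ in range(rounds // 2))

    def mogrify(self, x, h):
        for i in range(1, self.rounds + 1):
            if i % 2 == 1:
                x = 2 * torch.sigmoid(self.q[i // 2](h)) * x
            else:
                h = 2 * torch.sigmoid(self.r[i // 2 - 1](x)) * h
        return x, h

    def forward(self, x, state):
        h, c = state
        x, h = self.mogrify(x, h)
        return self.lstm(x, (h, c))

if __name__ == "__main__":
    cell = MogrifierLSTMCell(input_size=32, hidden_size=64)
    x = torch.randn(8, 32)
    h, c = torch.zeros(8, 64), torch.zeros(8, 64)
    h, c = cell(x, (h, c))
    print(h.shape)  # torch.Size([8, 64])
```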
Extracting Aspects Hierarchies using Rhetorical Structure Theory | We propose a novel approach to generate aspect hierarchies that proved to be
consistently correct compared with human-generated hierarchies. We present an
unsupervised technique using Rhetorical Structure Theory and graph analysis. We
evaluated our approach based on 100,000 reviews from Amazon and achieved an
astonishing 80% coverage compared with human-generated hierarchies coded in
ConceptNet. The method could be easily extended with a sentiment analysis model
and used to describe sentiment on different levels of aspect granularity.
Hence, besides the flat aspect structure, we can differentiate between aspects
and describe whether, for example, the charging aspect relates to the battery or the price.
| 2,018 | Computation and Language |
ICDM 2019 Knowledge Graph Contest: Team UWA | We present an overview of our triple extraction system for the ICDM 2019
Knowledge Graph Contest. Our system uses a pipeline-based approach to extract a
set of triples from a given document. It offers a simple and effective solution
to the challenge of knowledge graph construction from domain-specific text. It
also provides the facility to visualise useful information about each triple
such as the degree, betweenness, structured relation type(s), and named entity
types.
| 2,019 | Computation and Language |
Empirical Study of Diachronic Word Embeddings for Scarce Data | Word meaning change can be inferred from drifts of time-varying word
embeddings. However, temporal data may be too sparse to build robust word
embeddings and to discriminate significant drifts from noise. In this paper, we
compare three models to learn diachronic word embeddings on scarce data:
incremental updating of a Skip-Gram from Kim et al. (2014), dynamic filtering
from Bamler and Mandt (2017), and dynamic Bernoulli embeddings from Rudolph and
Blei (2018). In particular, we study the performance of different
initialisation schemes and emphasise which characteristics of each model are
better suited to data scarcity, relying on the distribution of detected drifts.
Finally, we regularise the loss of these models to better adapt to scarce data.
| 2,019 | Computation and Language |
Mixture Content Selection for Diverse Sequence Generation | Generating diverse sequences is important in many NLP applications such as
question generation or summarization that exhibit semantically one-to-many
relationships between the source and target sequences. We present a method to
explicitly separate diversification from generation using a general
plug-and-play module (called SELECTOR) that wraps around and guides an existing
encoder-decoder model. The diversification stage uses a mixture of experts to
sample different binary masks on the source sequence for diverse content
selection. The generation stage uses a standard encoder-decoder model given
each selected content from the source sequence. Due to the non-differentiable
nature of discrete sampling and the lack of ground-truth labels for the binary
masks, we leverage a proxy for the ground-truth masks and adopt stochastic hard-EM
for training. In question generation (SQuAD) and abstractive summarization
(CNN-DM), our method demonstrates significant improvements in accuracy,
diversity and training efficiency, including state-of-the-art top-1 accuracy in
both datasets, 6% gain in top-5 accuracy, and 3.7 times faster training over a
state of the art model. Our code is publicly available at
https://github.com/clovaai/FocusSeq2Seq.
| 2,019 | Computation and Language |
From 'F' to 'A' on the N.Y. Regents Science Exams: An Overview of the
Aristo Project | AI has achieved remarkable mastery over games such as Chess, Go, and Poker,
and even Jeopardy, but the rich variety of standardized exams has remained a
landmark challenge. Even in 2016, the best AI system achieved merely 59.3% on
an 8th Grade science exam challenge. This paper reports unprecedented success
on the Grade 8 New York Regents Science Exam, where for the first time a system
scores more than 90% on the exam's non-diagram, multiple choice (NDMC)
questions. In addition, our Aristo system, building upon the success of recent
language models, exceeded 83% on the corresponding Grade 12 Science Exam NDMC
questions. The results, on unseen test questions, are robust across different
test years and different variations of this kind of test. They demonstrate that
modern NLP methods can result in mastery on this task. While not a full
solution to general question-answering (the questions are multiple choice, and
the domain is restricted to 8th Grade science), it represents a significant
milestone for the field.
| 2,021 | Computation and Language |
An Evaluation Dataset for Intent Classification and Out-of-Scope
Prediction | Task-oriented dialog systems need to know when a query falls outside their
range of supported intents, but current text classification corpora only define
label sets that cover every example. We introduce a new dataset that includes
queries that are out-of-scope---i.e., queries that do not fall into any of the
system's supported intents. This poses a new challenge because models cannot
assume that every query at inference time belongs to a system-supported intent
class. Our dataset also covers 150 intent classes over 10 domains, capturing
the breadth that a production task-oriented agent must handle. We evaluate a
range of benchmark classifiers on our dataset along with several different
out-of-scope identification schemes. We find that while the classifiers perform
well on in-scope intent classification, they struggle to identify out-of-scope
queries. Our dataset and evaluation fill an important gap in the field,
offering a way of more rigorously and realistically benchmarking text
classification in task-driven dialog systems.
| 2,019 | Computation and Language |
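A minimal sketch of one of the simplest out-of-scope identification schemes the abstract above alludes to: predict an in-scope intent only when the classifier's maximum softmax probability clears a threshold, and label the query out-of-scope otherwise. The logits and intent names below are random stand-ins for a trained classifier.

```python
# Confidence-threshold out-of-scope detection on top of an intent classifier.
import numpy as np

def predict_with_oos(logits, intent_names, threshold=0.7):
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    preds = []
    for p in probs:
        if p.max() < threshold:
            preds.append("out_of_scope")      # low confidence -> reject as OOS
        else:
            preds.append(intent_names[int(p.argmax())])
    return preds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    intents = ["book_flight", "play_music", "set_alarm"]
    logits = rng.normal(size=(4, len(intents)))   # 4 hypothetical queries
    print(predict_with_oos(logits, intents))
```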
TIGEr: Text-to-Image Grounding for Image Caption Evaluation | This paper presents a new metric called TIGEr for the automatic evaluation of
image captioning systems. Popular metrics, such as BLEU and CIDEr, are based
solely on text matching between reference captions and machine-generated
captions, potentially leading to biased evaluations because references may not
fully cover the image content and natural language is inherently ambiguous.
Building upon a machine-learned text-image grounding model, TIGEr evaluates
caption quality not only based on how well a caption represents image
content, but also on how well machine-generated captions match human-generated
captions. Our empirical tests show that TIGEr has a higher consistency with
human judgments than alternative existing metrics. We also comprehensively
assess the metric's effectiveness in caption evaluation by measuring the
correlation between human judgments and metric scores.
| 2,019 | Computation and Language |
An Entity-Driven Framework for Abstractive Summarization | Abstractive summarization systems aim to produce more coherent and concise
summaries than their extractive counterparts. Popular neural models have
achieved impressive results for single-document summarization, yet their
outputs are often incoherent and unfaithful to the input. In this paper, we
introduce SENECA, a novel System for ENtity-drivEn Coherent Abstractive
summarization framework that leverages entity information to generate
informative and coherent abstracts. Our framework takes a two-step approach:
(1) an entity-aware content selection module first identifies salient sentences
from the input, then (2) an abstract generation module conducts cross-sentence
information compression and abstraction to generate the final summary, which is
trained with rewards to promote coherence, conciseness, and clarity. The two
components are further connected using reinforcement learning. Automatic
evaluation shows that our model significantly outperforms previous
state-of-the-art on ROUGE and our proposed coherence measures on New York Times
and CNN/Daily Mail datasets. Human judges further rate our system summaries as
more informative and coherent than those by popular summarization models.
| 2,019 | Computation and Language |
Distributionally Robust Language Modeling | Language models are generally trained on data spanning a wide range of topics
(e.g., news, reviews, fiction), but they might be applied to an a priori
unknown target distribution (e.g., restaurant reviews). In this paper, we first
show that training on text outside the test distribution can degrade test
performance when using standard maximum likelihood (MLE) training. To remedy
this without the knowledge of the test distribution, we propose an approach
which trains a model that performs well over a wide range of potential test
distributions. In particular, we derive a new distributionally robust
optimization (DRO) procedure which minimizes the loss of the model over the
worst-case mixture of topics with sufficient overlap with the training
distribution. Our approach, called topic conditional value at risk (topic
CVaR), obtains a 5.5 point perplexity reduction over MLE when the language
models are trained on a mixture of Yelp reviews and news and tested only on
reviews.
| 2,019 | Computation and Language |
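A simplified sketch in the spirit of the distributionally robust objective described above: instead of minimizing the average loss over all topics, minimize the average loss over the worst-performing fraction of topics, a CVaR-like surrogate. The per-example losses and topic assignments below are random stand-ins for a language model's training batch, and this is not the paper's exact formulation.

```python
# CVaR-style robust objective over topics: average the losses of the worst topics only.
import math
import torch

def topic_cvar_loss(per_example_loss, topic_ids, num_topics, alpha=0.5):
    """Average loss over the worst ceil(alpha * num_topics) topics present in the batch."""
    topic_losses = []
    for t in range(num_topics):
        mask = topic_ids == t
        if mask.any():
            topic_losses.append(per_example_loss[mask].mean())
    topic_losses = torch.stack(topic_losses)
    k = max(1, math.ceil(alpha * len(topic_losses)))
    worst, _ = torch.topk(topic_losses, k)        # worst-alpha fraction of topics
    return worst.mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    losses = torch.rand(32, requires_grad=True)   # stand-in per-example LM losses
    topics = torch.randint(0, 4, (32,))           # stand-in topic assignments
    loss = topic_cvar_loss(losses, topics, num_topics=4, alpha=0.5)
    loss.backward()                               # gradients flow only through worst topics
    print(float(loss))
```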
Jointly Learning to Align and Translate with Transformer Models | The state of the art in machine translation (MT) is governed by neural
approaches, which typically provide superior translation accuracy over
statistical approaches. However, on the closely related task of word alignment,
traditional statistical word alignment models often remain the go-to solution.
In this paper, we present an approach to train a Transformer model to produce
both accurate translations and alignments. We extract discrete alignments from
the attention probabilities learnt during regular neural machine translation
model training and leverage them in a multi-task framework to optimize towards
translation and alignment objectives. We demonstrate that our approach produces
competitive results compared to GIZA++ trained IBM alignment models without
sacrificing translation accuracy and outperforms previous attempts on
Transformer model based word alignment. Finally, by incorporating IBM model
alignments into our multi-task training, we report significantly better
alignment accuracies compared to GIZA++ on three publicly available data sets.
| 2,019 | Computation and Language |
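A small sketch of the basic operation behind attention-based alignment extraction mentioned in the abstract above: for each target word, align it to the source position with the highest attention weight. The attention matrix here is random; in practice it would come from a trained Transformer's attention probabilities.

```python
# Extract discrete word alignments from a (target x source) attention matrix by argmax.
import numpy as np

def alignments_from_attention(attention, src_tokens, tgt_tokens):
    """attention: array of shape (len(tgt_tokens), len(src_tokens))."""
    pairs = []
    for t, row in enumerate(attention):
        s = int(row.argmax())                     # most-attended source position
        pairs.append((src_tokens[s], tgt_tokens[t]))
    return pairs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = ["das", "haus", "ist", "klein"]
    tgt = ["the", "house", "is", "small"]
    attn = rng.dirichlet(np.ones(len(src)), size=len(tgt))  # each row sums to 1
    print(alignments_from_attention(attn, src, tgt))
```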
Decoupled Box Proposal and Featurization with Ultrafine-Grained Semantic
Labels Improve Image Captioning and Visual Question Answering | Object detection plays an important role in current solutions to vision and
language tasks like image captioning and visual question answering. However,
popular models like Faster R-CNN rely on a costly process of annotating
ground-truths for both the bounding boxes and their corresponding semantic
labels, making it less amenable as a primitive task for transfer learning. In
this paper, we examine the effect of decoupling box proposal and featurization
for down-stream tasks. The key insight is that this allows us to leverage a
large amount of labeled annotations that were previously unavailable for
standard object detection benchmarks. Empirically, we demonstrate that this
leads to effective transfer learning and improved image captioning and visual
question answering models, as measured on publicly available benchmarks.
| 2,019 | Computation and Language |
Learning Dynamic Context Augmentation for Global Entity Linking | Despite the recent success of collective entity linking (EL) methods,
these "global" inference methods may yield sub-optimal results when the
"all-mention coherence" assumption breaks, and often suffer from high
computational cost at the inference stage, due to the complex search space. In
this paper, we propose a simple yet effective solution, called Dynamic Context
Augmentation (DCA), for collective EL, which requires only one pass through the
mentions in a document. DCA sequentially accumulates context information to
make efficient, collective inference, and can cope with different local EL
models as a plug-and-enhance module. We explore both supervised and
reinforcement learning strategies for learning the DCA model. Extensive
experiments show the effectiveness of our model with different learning
settings, base models, decision orders and attention mechanisms.
| 2,019 | Computation and Language |
Reporting the Unreported: Event Extraction for Analyzing the Local
Representation of Hate Crimes | Hate crimes in the US are under-reported in official statistics relative to the
actual number of such incidents. Further, despite statistical approximations,
there are no official reports from a large number of US cities regarding
incidents of hate. Here, we first demonstrate that event extraction and
multi-instance learning, applied to a corpus of local news articles, can be
used to predict instances of hate crime. We then use the trained model to
detect incidents of hate in cities for which the FBI lacks statistics. Lastly,
we train models on predicting homicide and kidnapping, compare the predictions
to FBI reports, and establish that incidents of hate are indeed under-reported,
compared to other types of crimes, in local press.
| 2,019 | Computation and Language |
PaLM: A Hybrid Parser and Language Model | We present PaLM, a hybrid parser and neural language model. Building on an
RNN language model, PaLM adds an attention layer over text spans in the left
context. An unsupervised constituency parser can be derived from its attention
weights, using a greedy decoding algorithm. We evaluate PaLM on language
modeling, and empirically show that it outperforms strong baselines. If
syntactic annotations are available, the attention component can be trained in
a supervised manner, providing syntactically-informed representations of the
context, and further improving language modeling performance.
| 2,019 | Computation and Language |
KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning | Commonsense reasoning aims to empower machines with the human ability to make
presumptions about ordinary situations in our daily life. In this paper, we
propose a textual inference framework for answering commonsense questions,
which effectively utilizes external, structured commonsense knowledge graphs to
perform explainable inferences. The framework first grounds a question-answer
pair from the semantic space to the knowledge-based symbolic space as a schema
graph, a related sub-graph of external knowledge graphs. It represents schema
graphs with a novel knowledge-aware graph network module named KagNet, and
finally scores answers with graph representations. Our model is based on graph
convolutional networks and LSTMs, with a hierarchical path-based attention
mechanism. The intermediate attention scores make the model transparent and
interpretable, thus producing trustworthy inferences. Using ConceptNet as
the only external resource for BERT-based models, we achieved state-of-the-art
performance on CommonsenseQA, a large-scale dataset for commonsense
reasoning.
| 2,019 | Computation and Language |
TabFact: A Large-scale Dataset for Table-based Fact Verification | The problem of verifying whether a textual hypothesis holds based on the
given evidence, also known as fact verification, plays an important role in the
study of natural language understanding and semantic representation. However,
existing studies are mainly restricted to dealing with unstructured evidence
(e.g., natural language sentences and documents, news, etc), while verification
under structured evidence, such as tables, graphs, and databases, remains
under-explored. This paper specifically aims to study the fact verification
given semi-structured data as evidence. To this end, we construct a large-scale
dataset called TabFact with 16k Wikipedia tables as the evidence for 118k
human-annotated natural language statements, which are labeled as either
ENTAILED or REFUTED. TabFact is challenging since it involves both soft
linguistic reasoning and hard symbolic reasoning. To address these reasoning
challenges, we design two different models: Table-BERT and Latent Program
Algorithm (LPA). Table-BERT leverages the state-of-the-art pre-trained language
model to encode the linearized tables and statements into continuous vectors
for verification. LPA parses statements into programs and executes them against
the tables to obtain the returned binary value for verification. Both methods
achieve similar accuracy but still lag far behind human performance. We also
perform a comprehensive analysis to demonstrate great future opportunities. The
data and code of the dataset are provided in
\url{https://github.com/wenhuchen/Table-Fact-Checking}.
| 2,020 | Computation and Language |
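A minimal sketch of the kind of table linearization that Table-BERT-style models use to feed semi-structured evidence to a pretrained encoder: serialize each row as "column is value" clauses and pair the result with the statement to verify. The template below is an illustrative assumption, not necessarily the paper's exact serialization.

```python
# Linearize a table so it can be paired with a statement for a BERT-style verifier.
def linearize_table(header, rows):
    sentences = []
    for i, row in enumerate(rows, start=1):
        clauses = [f"{col} is {val}" for col, val in zip(header, row)]
        sentences.append(f"row {i}: " + " ; ".join(clauses) + " .")
    return " ".join(sentences)

if __name__ == "__main__":
    header = ["team", "wins", "losses"]
    rows = [["eagles", "10", "6"], ["giants", "4", "12"]]
    statement = "the eagles won more games than the giants"
    print(linearize_table(header, rows))
    # the (statement, linearized table) pair could then be encoded jointly, e.g.
    # tokenizer(statement, linearize_table(header, rows), return_tensors="pt")
```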
NERO: A Neural Rule Grounding Framework for Label-Efficient Relation
Extraction | Deep neural models for relation extraction tend to be less reliable when
perfectly labeled data is limited, despite their success in label-sufficient
scenarios. Instead of seeking more instance-level labels from human annotators,
here we propose to annotate frequent surface patterns to form labeling rules.
These rules can be automatically mined from large text corpora and generalized
via a soft rule matching mechanism. Prior works use labeling rules in an exact
matching fashion, which inherently limits the coverage of sentence matching and
results in the low-recall issue. In this paper, we present a neural approach to
ground rules for RE, named NERO, which jointly learns a relation extraction
module and a soft matching module. One can employ any neural relation
extraction models as the instantiation for the RE module. The soft matching
module learns to match rules with semantically similar sentences such that raw
corpora can be automatically labeled and leveraged by the RE module (in a much
better coverage) as augmented supervision, in addition to the exactly matched
sentences. Extensive experiments and analysis on two public and widely-used
datasets demonstrate the effectiveness of the proposed NERO framework,
comparing with both rule-based and semi-supervised methods. Through user
studies, we find that the time it takes a human to annotate rules and
sentences is similar (0.30 vs. 0.35 min per label). In particular, NERO's
performance using 270 rules is comparable to the models trained using 3,000
labeled sentences, yielding a 9.5x speedup. Moreover, NERO can predict for
unseen relations at test time and provide interpretable predictions. We release
our code to the community for future research.
| 2,020 | Computation and Language |
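A rough sketch of the soft rule-matching idea for weak supervision described above: embed labeling rules and unlabeled sentences, and assign a rule's relation label to a sentence when their similarity clears a threshold. Averaged random word vectors stand in for the learned matching module, and the rules and sentences are hypothetical.

```python
# Soft rule matching: label sentences that are semantically close to a labeling rule.
import numpy as np

def embed(text, word_vectors, dim=50):
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def soft_match(sentences, rules, word_vectors, threshold=0.5):
    """rules: list of (pattern_text, relation_label)."""
    labeled = []
    for sent in sentences:
        v = embed(sent, word_vectors)
        best_label, best_sim = None, threshold
        for pattern, label in rules:
            r = embed(pattern, word_vectors)
            denom = np.linalg.norm(v) * np.linalg.norm(r)
            sim = float(v @ r / denom) if denom > 0 else 0.0
            if sim > best_sim:                    # keep the closest rule above threshold
                best_label, best_sim = label, sim
        if best_label is not None:
            labeled.append((sent, best_label))
    return labeled

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = "subj was born in obj the city of lives works at".split()
    word_vectors = {w: rng.normal(size=50) for w in vocab}
    rules = [("subj was born in obj", "place_of_birth")]
    sentences = ["SUBJ was born in the city of OBJ", "SUBJ works at OBJ"]
    print(soft_match(sentences, rules, word_vectors))
```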
A Stack-Propagation Framework with Token-Level Intent Detection for
Spoken Language Understanding | Intent detection and slot filling are two main tasks for building a spoken
language understanding (SLU) system. The two tasks are closely tied and the
slots often highly depend on the intent. In this paper, we propose a novel
framework for SLU to better incorporate the intent information, which further
guides the slot filling. In our framework, we adopt a joint model with
Stack-Propagation which can directly use the intent information as input for
slot filling, thereby capturing the intent semantic knowledge. In addition, to
further alleviate the error propagation, we perform the token-level intent
detection for the Stack-Propagation framework. Experiments on two publicly
available datasets show that our model achieves the state-of-the-art performance and
outperforms other previous methods by a large margin. Finally, we use the
Bidirectional Encoder Representation from Transformer (BERT) model in our
framework, which further boosts our performance on the SLU task.
| 2,019 | Computation and Language |
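A minimal sketch of the token-level intent detection idea from the abstract above: predict an intent at every token, obtain the utterance-level intent by voting, and keep the per-token predictions as extra features for the slot-filling decoder. The token-level logits below are random stand-ins for an encoder's outputs.

```python
# Token-level intent detection with utterance-level voting.
import numpy as np

def utterance_intent(token_logits, intent_names):
    """token_logits: array of shape (num_tokens, num_intents)."""
    token_preds = token_logits.argmax(axis=-1)              # per-token intent ids
    votes = np.bincount(token_preds, minlength=len(intent_names))
    return intent_names[int(votes.argmax())], token_preds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    intents = ["play_music", "get_weather", "book_restaurant"]
    logits = rng.normal(size=(6, len(intents)))              # 6 tokens in the utterance
    intent, per_token = utterance_intent(logits, intents)
    print(intent, per_token)   # per-token ids could be appended to the slot-filler inputs
```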
Automated Let's Play Commentary | Let's Plays of video games represent a relatively unexplored area for
experimental AI in games. In this short paper, we discuss an approach to
generate automated commentary for Let's Play videos, drawing on convolutional
deep neural networks. We focus on Let's Plays of the popular game Minecraft. We
compare our approach and a prior approach and demonstrate the generation of
automated, artificial commentary.
| 2,019 | Computation and Language |
Investigating Multilingual NMT Representations at Scale | Multilingual Neural Machine Translation (NMT) models have yielded large
empirical success in transfer learning settings. However, these black-box
representations are poorly understood, and their mode of transfer remains
elusive. In this work, we attempt to understand massively multilingual NMT
representations (with 103 languages) using Singular Value Canonical Correlation
Analysis (SVCCA), a representation similarity framework that allows us to
compare representations across different languages, layers and models. Our
analysis validates several empirical results and long-standing intuitions, and
unveils new observations regarding how representations evolve in a multilingual
translation model. We draw three major conclusions from our analysis, with
implications on cross-lingual transfer learning: (i) Encoder representations of
different languages cluster based on linguistic similarity, (ii)
Representations of a source language learned by the encoder are dependent on
the target language, and vice-versa, and (iii) Representations of high resource
and/or linguistically similar languages are more robust when fine-tuning on an
arbitrary language pair, which is critical to determining how much
cross-lingual transfer can be expected in a zero or few-shot setting. We
further connect our findings with existing empirical observations in
multilingual NMT and transfer learning.
| 2,019 | Computation and Language |
Semantics-aware BERT for Language Understanding | The latest work on language representations carefully integrates
contextualized features into language model training, which has enabled a series of
successes, especially in various machine reading comprehension and natural
language inference tasks. However, the existing language representation models
including ELMo, GPT and BERT only exploit plain context-sensitive features such
as character or word embeddings. They rarely consider incorporating structured
semantic information which can provide rich semantics for language
representation. To promote natural language understanding, we propose to
incorporate explicit contextual semantics from pre-trained semantic role
labeling, and introduce an improved language representation model,
Semantics-aware BERT (SemBERT), which is capable of explicitly absorbing
contextual semantics over a BERT backbone. SemBERT keeps the convenient
usability of its BERT precursor, requiring only light fine-tuning and no substantial
task-specific modifications. Compared with BERT, semantics-aware BERT is just as
simple in concept but more powerful. It obtains new state-of-the-art or
substantially improves results on ten reading comprehension and language
inference tasks.
| 2,020 | Computation and Language |
REO-Relevance, Extraness, Omission: A Fine-grained Evaluation for Image
Captioning | Popular metrics used for evaluating image captioning systems, such as BLEU
and CIDEr, provide a single score to gauge the system's overall effectiveness.
This score is often not informative enough to indicate what specific errors are
made by a given system. In this study, we present a fine-grained evaluation
method REO for automatically measuring the performance of image captioning
systems. REO assesses the quality of captions from three perspectives: 1)
Relevance to the ground truth, 2) Extraness of the content that is irrelevant
to the ground truth, and 3) Omission of the elements in the images and human
references. Experiments on three benchmark datasets demonstrate that our method
achieves a higher consistency with human judgments and provides more intuitive
evaluation results than alternative metrics.
| 2,019 | Computation and Language |
Multi-Granularity Self-Attention for Neural Machine Translation | Current state-of-the-art neural machine translation (NMT) uses a deep
multi-head self-attention network with no explicit phrase information. However,
prior work on statistical machine translation has shown that extending the
basic translation unit from words to phrases has produced substantial
improvements, suggesting the possibility of improving NMT performance from
explicit modeling of phrases. In this work, we present multi-granularity
self-attention (Mg-Sa): a neural network that combines multi-head
self-attention and phrase modeling. Specifically, we train several attention
heads to attend to phrases in either n-gram or syntactic formalism. Moreover,
we exploit interactions among phrases to enhance the strength of structure
modeling - a commonly-cited weakness of self-attention. Experimental results on
WMT14 English-to-German and NIST Chinese-to-English translation tasks show the
proposed approach consistently improves performance. Targeted linguistic
analysis reveals that Mg-Sa indeed captures useful phrase information at
various levels of granularity.
| 2,019 | Computation and Language |
Examining Gender Bias in Languages with Grammatical Gender | Recent studies have shown that word embeddings exhibit gender bias inherited
from the training corpora. However, most studies to date have focused on
quantifying and mitigating such bias only in English. These analyses cannot be
directly extended to languages that exhibit morphological agreement on gender,
such as Spanish and French. In this paper, we propose new metrics for
evaluating gender bias in word embeddings of these languages and further
demonstrate evidence of gender bias in bilingual embeddings which align these
languages with English. Finally, we extend an existing approach to mitigate
gender bias in word embeddings under both monolingual and bilingual settings.
Experiments on modified Word Embedding Association Test, word similarity, word
translation, and word pair translation tasks show that the proposed approaches
effectively reduce the gender bias while preserving the utility of the
embeddings.
| 2,019 | Computation and Language |
Cross-Lingual Dependency Parsing Using Code-Mixed TreeBank | Treebank translation is a promising method for cross-lingual transfer of
syntactic dependency knowledge. The basic idea is to map dependency arcs from a
source treebank to its target translation according to word alignments. This
method, however, can suffer from imperfect alignment between source and target
words. To address this problem, we investigate syntactic transfer by code
mixing, translating only confident words in a source treebank. Cross-lingual
word embeddings are leveraged for transferring syntactic knowledge to the
target from the resulting code-mixed treebank. Experiments on Universal
Dependencies treebanks show that code-mixed treebanks are more effective than
translated treebanks, giving highly competitive performance among
cross-lingual parsing methods.
| 2,019 | Computation and Language |
Robust Navigation with Language Pretraining and Stochastic Sampling | Core to the vision-and-language navigation (VLN) challenge is building robust
instruction representations and action decoding schemes, which can generalize
well to previously unseen instructions and environments. In this paper, we
report two simple but highly effective methods to address these challenges and
lead to a new state-of-the-art performance. First, we adapt large-scale
pretrained language models to learn text representations that generalize better
to previously unseen instructions. Second, we propose a stochastic sampling
scheme to reduce the considerable gap between the expert actions in training
and sampled actions in test, so that the agent can learn to correct its own
mistakes during long sequential action decoding. Combining the two techniques,
we achieve a new state of the art on the Room-to-Room benchmark with 6%
absolute gain over the previous best result (47% -> 53%) on the Success Rate
weighted by Path Length metric.
| 2,019 | Computation and Language |
Nested Named Entity Recognition via Second-best Sequence Learning and
Decoding | When an entity name contains other names within it, the identification of all
combinations of names can become difficult and expensive. We propose a new
method to recognize not only outermost named entities but also inner nested
ones. We design an objective function for training a neural model that treats
the tag sequence for nested entities as the second best path within the span of
their parent entity. In addition, we provide the decoding method for inference
that extracts entities iteratively from outermost ones to inner ones in an
outside-to-inside way. Our method introduces no additional hyperparameters beyond
those of the conditional random field based model widely used for flat named entity
recognition tasks. Experiments demonstrate that our method performs better than
or at least as well as existing methods capable of handling nested entities,
achieving the F1-scores of 85.82%, 84.34%, and 77.36% on ACE-2004, ACE-2005,
and GENIA datasets, respectively.
| 2,020 | Computation and Language |
Towards Task-Oriented Dialogue in Mixed Domains | This work investigates the task-oriented dialogue problem in mixed-domain
settings. We study the effect of alternating between different domains in
sequences of dialogue turns using two related state-of-the-art dialogue
systems. We first show that a specialized state tracking component in multiple
domains plays an important role and gives better results than an end-to-end
task-oriented dialogue system. We then propose a hybrid system which is able to
improve the belief tracking accuracy by about 28% absolute on average on
a standard multi-domain dialogue dataset. These experimental results give some
useful insights for improving our commercial chatbot platform FPT.AI, which is
currently deployed for many practical chatbot applications.
| 2,019 | Computation and Language |
Source Dependency-Aware Transformer with Supervised Self-Attention | Recently, Transformer has achieved the state-of-the-art performance on many
machine translation tasks. However, without syntax knowledge explicitly
considered in the encoder, incorrect context information that violates the
syntax structure may be integrated into source hidden states, leading to
erroneous translations. In this paper, we propose a novel method to incorporate
source dependencies into the Transformer. Specifically, we adopt the source
dependency tree and define two matrices to represent the dependency relations.
Based on the matrices, two heads in the multi-head self-attention module are
trained in a supervised manner and two extra cross entropy losses are
introduced into the training objective function. Under this training objective,
the model is trained to learn the source dependency relations directly. Without
requiring pre-parsed input during inference, our model can generate better
translations with the dependency-aware context information. Experiments on
bi-directional Chinese-to-English, English-to-Japanese and English-to-German
translation tasks show that our proposed method can significantly improve the
Transformer baseline.
| 2,019 | Computation and Language |
Accelerating Transformer Decoding via a Hybrid of Self-attention and
Recurrent Neural Network | Due to its highly parallelizable architecture, the Transformer is faster to train
than RNN-based models and is widely used in machine translation tasks. However,
at inference time, each output word requires all the hidden states of the
previously generated words, which limits the parallelization capability, and
makes it much slower than RNN-based ones. In this paper, we systematically
analyze the time cost of different components of both the Transformer and
RNN-based model. Based on it, we propose a hybrid network of self-attention and
RNN structures, in which, the highly parallelizable self-attention is utilized
as the encoder, and the simpler RNN structure is used as the decoder. Our
hybrid network can decode 4-times faster than the Transformer. In addition,
with the help of knowledge distillation, our hybrid network achieves comparable
translation quality to the original Transformer.
| 2,019 | Computation and Language |
Table-to-Text Generation with Effective Hierarchical Encoder on Three
Dimensions (Row, Column and Time) | Although Seq2Seq models for table-to-text generation have achieved remarkable
progress, modeling table representation in one dimension is inadequate. This is
because (1) the table consists of multiple rows and columns, which means that
encoding a table should not depend only on one dimensional sequence or set of
records and (2) most of the tables are time series data (e.g. NBA game data,
stock market data), which means that the description of the current table may
be affected by its historical data. To address aforementioned problems, not
only do we model each table cell considering other records in the same row, we
also enrich the table's representation by modeling each table cell in the context of
other cells in the same column or with historical (time dimension) data
respectively. In addition, we develop a table cell fusion gate to combine
representations from row, column and time dimension into one dense vector
according to the saliency of each dimension's representation. We evaluated our
methods on ROTOWIRE, a benchmark dataset of NBA basketball games. Both
automatic and human evaluation results demonstrate the effectiveness of our
model, which improves over the strong baseline by 2.66 BLEU and
outperforms the state-of-the-art model.
| 2,019 | Computation and Language |
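A small PyTorch sketch of a cell fusion gate in the spirit of the abstract above: learn a saliency weight for each dimension's representation of a table cell (row, column, time) and combine them into one dense vector. The hidden sizes and the softmax gating form are illustrative assumptions, not necessarily the paper's exact parameterization.

```python
# Fusion gate combining row, column, and time representations of a table cell.
import torch
import torch.nn as nn

class CellFusionGate(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Linear(3 * hidden_size, 3)   # one saliency score per dimension

    def forward(self, row_rep, col_rep, time_rep):
        stacked = torch.stack([row_rep, col_rep, time_rep], dim=1)        # (batch, 3, hidden)
        weights = torch.softmax(
            self.score(torch.cat([row_rep, col_rep, time_rep], dim=-1)), dim=-1
        )                                                                  # (batch, 3)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)               # (batch, hidden)

if __name__ == "__main__":
    gate = CellFusionGate(hidden_size=128)
    row, col, time = torch.randn(4, 128), torch.randn(4, 128), torch.randn(4, 128)
    print(gate(row, col, time).shape)   # torch.Size([4, 128])
```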
Fusing Vector Space Models for Domain-Specific Applications | We address the problem of tuning word embeddings for specific use cases and
domains. We propose a new method that automatically combines multiple
domain-specific embeddings, selected from a wide range of pre-trained
domain-specific embeddings, to improve their combined expressive power. Our
approach relies on two key components: 1) a ranking function, based on a new
embedding similarity measure, that selects the most relevant embeddings to use
given a domain and 2) a dimensionality reduction method that combines the
selected embeddings to produce a more compact and efficient encoding that
preserves the expressiveness. We empirically show that our method produces
effective domain-specific embeddings that consistently improve the performance
of state-of-the-art machine learning algorithms on multiple tasks, compared to
generic embeddings trained on large text corpora.
| 2,019 | Computation and Language |
Informative and Controllable Opinion Summarization | Opinion summarization is the task of automatically generating summaries for a
set of reviews about a specific target (e.g., a movie or a product). Since the
number of reviews for each target can be prohibitively large, neural
network-based methods follow a two-stage approach where an extractive step
first pre-selects a subset of salient opinions and an abstractive step creates
the summary while conditioning on the extracted subset. However, the extractive
model leads to loss of information which may be useful depending on user needs.
In this paper we propose a summarization framework that eliminates the need to
rely only on pre-selected content and waste possibly useful information,
especially when customizing summaries. The framework enables the use of all
input reviews by first condensing them into multiple dense vectors which serve
as input to an abstractive model. We showcase an effective instantiation of our
framework which produces more informative summaries and also allows user
preferences to be taken into account using our zero-shot customization technique.
Experimental results demonstrate that our model improves the state of the art
on the Rotten Tomatoes dataset and generates customized summaries effectively.
| 2,021 | Computation and Language |
Specializing Unsupervised Pretraining Models for Word-Level Semantic
Similarity | Unsupervised pretraining models have been shown to facilitate a wide range of
downstream NLP applications. These models, however, retain some of the
limitations of traditional static word embeddings. In particular, they encode
only the distributional knowledge available in raw text corpora, incorporated
through language modeling objectives. In this work, we complement such
distributional knowledge with external lexical knowledge, that is, we integrate
the discrete knowledge on word-level semantic similarity into pretraining. To
this end, we generalize the standard BERT model to a multi-task learning
setting where we couple BERT's masked language modeling and next sentence
prediction objectives with an auxiliary task of binary word relation
classification. Our experiments suggest that our "Lexically Informed" BERT
(LIBERT), specialized for the word-level semantic similarity, yields better
performance than the lexically blind "vanilla" BERT on several language
understanding tasks. Concretely, LIBERT outperforms BERT in 9 out of 10 tasks
of the GLUE benchmark and is on a par with BERT in the remaining one. Moreover,
we show consistent gains on 3 benchmarks for lexical simplification, a task
where knowledge about word-level semantic similarity is paramount.
| 2,020 | Computation and Language |