Titles | Abstracts | Years | Categories |
---|---|---|---|
Autoencoding Pixies: Amortised Variational Inference with Graph
Convolutions for Functional Distributional Semantics | Functional Distributional Semantics provides a linguistically interpretable
framework for distributional semantics, by representing the meaning of a word
as a function (a binary classifier), instead of a vector. However, the large
number of latent variables means that inference is computationally expensive,
and training a model is therefore slow to converge. In this paper, I introduce
the Pixie Autoencoder, which augments the generative model of Functional
Distributional Semantics with a graph-convolutional neural network to perform
amortised variational inference. This allows the model to be trained more
effectively, achieving better results on two tasks (semantic similarity in
context and semantic composition), and outperforming BERT, a large pre-trained
language model.
| 2020 | Computation and Language |
Evaluating text coherence based on the graph of the consistency of
phrases to identify symptoms of schizophrenia | Different state-of-the-art methods for detecting symptoms of schizophrenia
based on estimates of text coherence are analyzed. Analysis of a text at the
level of phrases is suggested. A method based on a graph of the consistency of
phrases is proposed to evaluate the semantic coherence and cohesion of a text.
Semantic coherence, cohesion, and other linguistic features (lexical diversity,
lexical density) are taken into account to form feature vectors for training a
classifier model. The classifier is trained on a set of English-language
interviews. Based on the results obtained, the impact of each feature on the
model output is analyzed. The results indicate that the proposed method based
on the graph of the consistency of phrases may be used in various tasks for
detecting mental illness.
| 2020 | Computation and Language |
Extracting Headless MWEs from Dependency Parse Trees: Parsing, Tagging,
and Joint Modeling Approaches | An interesting and frequent type of multi-word expression (MWE) is the
headless MWE, for which there are no true internal syntactic dominance
relations; examples include many named entities ("Wells Fargo") and dates
("July 5, 2020") as well as certain productive constructions ("blow for blow",
"day after day"). Despite their special status and prevalence, current
dependency-annotation schemes require treating such flat structures as if they
had internal syntactic heads, and most current parsers handle them in the same
fashion as headed constructions. Meanwhile, outside the context of parsing,
taggers are typically used for identifying MWEs, but taggers might benefit from
structural information. We empirically compare these two common
strategies--parsing and tagging--for predicting flat MWEs. Additionally, we
propose an efficient joint decoding algorithm that combines scores from both
strategies. Experimental results on the MWE-Aware English Dependency Corpus and
on six non-English dependency treebanks with frequent flat structures show
that: (1) tagging is more accurate than parsing for identifying flat-structure
MWEs, (2) our joint decoder reconciles the two different views and, for
non-BERT features, leads to higher accuracies, and (3) most of the gains result
from feature sharing between the parsers and taggers.
| 2020 | Computation and Language |
Weakly-Supervised Neural Response Selection from an Ensemble of
Task-Specialised Dialogue Agents | Dialogue engines that incorporate different types of agents to converse with
humans are popular.
However, conversations are dynamic in the sense that a selected response will
change the conversation on-the-fly, influencing the subsequent utterances in
the conversation, which makes response selection a challenging problem.
We model the problem of selecting the best response from a set of responses
generated by a heterogeneous set of dialogue agents by taking into account the
conversational history, and propose a \emph{Neural Response Selection} method.
The proposed method is trained to predict a coherent set of responses within
a single conversation, considering its own predictions via a curriculum
training mechanism.
Our experimental results show that the proposed method can accurately select
the most appropriate responses, thereby significantly improving the user
experience in dialogue systems.
| 2020 | Computation and Language |
Categorical Vector Space Semantics for Lambek Calculus with a Relevant
Modality | We develop a categorical compositional distributional semantics for Lambek
Calculus with a Relevant Modality !L*, which has a limited edition of the
contraction and permutation rules. The categorical part of the semantics is a
monoidal biclosed category with a coalgebra modality, very similar to the
structure of a Differential Category. We instantiate this category to finite
dimensional vector spaces and linear maps via "quantisation" functors and work
with three concrete interpretations of the coalgebra modality. We apply the
model to construct categorical and concrete semantic interpretations for the
motivating example of !L*: the derivation of a phrase with a parasitic gap. The
effectiveness of the concrete interpretations is evaluated via a
disambiguation task, on an extension of a sentence disambiguation dataset to
parasitic gap phrases, using BERT, Word2Vec, and FastText vectors and
Relational tensors.
| 2023 | Computation and Language |
Diagnosing the Environment Bias in Vision-and-Language Navigation | Vision-and-Language Navigation (VLN) requires an agent to follow
natural-language instructions, explore the given environments, and reach the
desired target locations. These step-by-step navigational instructions are
crucial when the agent is navigating new environments about which it has no
prior knowledge. Most recent works that study VLN observe a significant
performance drop when tested on unseen environments (i.e., environments not
used in training), indicating that the neural agent models are highly biased
towards training environments. Although this issue is considered one of the
major challenges in VLN research, it is still under-studied and needs a clearer
explanation. In this work, we design novel diagnostic experiments via
environment re-splitting and feature replacement, looking into possible reasons
for this environment bias. We observe that it is not the language or the
underlying navigational graph, but the low-level visual appearance conveyed by
ResNet features, that directly affects the agent model and contributes to this
environment bias in results. Based on this observation, we explore several
kinds of semantic representations that contain less low-level visual
information, so that an agent trained with these features can generalize better
to unseen testing environments. Without modifying the baseline agent model and
its training method, our explored semantic features significantly decrease the
performance gaps between seen and unseen environments on multiple datasets
(i.e., R2R, R4R, and CVDN) and achieve unseen results competitive with
previous state-of-the-art models. Our code and features are available at:
https://github.com/zhangybzbo/EnvBiasVLN
| 2020 | Computation and Language |
Unsupervised Multimodal Neural Machine Translation with Pseudo Visual
Pivoting | Unsupervised machine translation (MT) has recently achieved impressive
results with monolingual corpora only. However, it is still challenging to
associate source-target sentences in the latent space. As people speaking
different languages biologically share similar visual systems, the potential
for achieving better alignment through visual content is promising yet
under-explored in unsupervised multimodal MT (MMT). In this paper, we
investigate how to utilize visual content for disambiguation and promoting
latent space alignment in unsupervised MMT. Our model employs multimodal
back-translation and features pseudo visual pivoting in which we learn a shared
multilingual visual-semantic embedding space and incorporate visually-pivoted
captioning as additional weak supervision. The experimental results on the
widely used Multi30K dataset show that the proposed model significantly
improves over the state-of-the-art methods and generalizes well when images
are not available at test time.
| 2020 | Computation and Language |
Fact-based Dialogue Generation with Convergent and Divergent Decoding | Fact-based dialogue generation is a task of generating a human-like response
based on both dialogue context and factual texts. Various methods have been
proposed to focus on generating informative words that convey facts effectively.
However, previous works implicitly assume that the topic of a dialogue stays
fixed and that systems converse passively, so they have difficulty generating
diverse responses that proactively provide meaningful information. This paper
proposes an end-to-end fact-based dialogue system augmented with the ability of
convergent and divergent thinking over both context and facts, which can
converse about the current topic or introduce a new topic. Specifically, our
model incorporates a novel convergent and divergent decoding that can generate
informative and diverse responses considering not only given inputs (context
and facts) but also input-related topics. Both automatic and human evaluation
results on the DSTC7 dataset show that our model significantly outperforms
state-of-the-art baselines, indicating that our model can generate more
appropriate, informative, and diverse responses.
| 2020 | Computation and Language |
Quda: Natural Language Queries for Visual Data Analytics | The identification of analytic tasks from free text is critical for
visualization-oriented natural language interfaces (V-NLIs) to suggest
effective visualizations. However, it is challenging due to the ambiguous and
complex nature of human language. To address this challenge, we present a
new dataset, called Quda, that aims to help V-NLIs recognize analytic tasks
from free-form natural language by training and evaluating cutting-edge
multi-label classification models. Our dataset contains $14,035$ diverse user
queries, and each is annotated with one or multiple analytic tasks. We achieve
this goal by first gathering seed queries with data analysts and then employing
extensive crowdsourcing for paraphrase generation and validation. We demonstrate
the usefulness of Quda through three applications. This work is the first
attempt to construct a large-scale corpus for recognizing analytic tasks. With
the release of Quda, we hope it will boost the research and development of
V-NLIs in data analysis and visualization.
| 2020 | Computation and Language |
Nakdan: Professional Hebrew Diacritizer | We present a system for automatic diacritization of Hebrew text. The system
combines modern neural models with carefully curated declarative linguistic
knowledge and comprehensive manually constructed tables and dictionaries.
Besides providing state-of-the-art diacritization accuracy, the system also
supports an interface for manual editing and correction of the automatic
output, and has several features which make it particularly useful for
preparation of scientific editions of Hebrew texts. The system supports Modern
Hebrew, Rabbinic Hebrew and Poetic Hebrew. The system is freely accessible for
all use at http://nakdanpro.dicta.org.il.
| 2020 | Computation and Language |
DramaQA: Character-Centered Video Story Understanding with Hierarchical
QA | Despite recent progress in computer vision and natural language processing,
developing a machine that can understand video stories remains difficult due to
their intrinsic complexity. Moreover, research on how to evaluate the degree of
video understanding based on human cognitive processes has not yet progressed.
In this paper, we propose a novel video question answering (Video QA) task,
DramaQA, for a comprehensive understanding of the video story. DramaQA focuses
on two perspectives: 1) Hierarchical QAs as an
evaluation metric based on the cognitive developmental stages of human
intelligence. 2) Character-centered video annotations to model local coherence
of the story. Our dataset is built upon the TV drama "Another Miss Oh" and it
contains 17,983 QA pairs from 23,928 video clips of varying length, with each QA
pair belonging to one of four difficulty levels. We provide 217,308 annotated
images with rich character-centered annotations, including visual bounding
boxes, behaviors and emotions of main characters, and coreference resolved
scripts. Additionally, we propose a Multi-level Context Matching model which
hierarchically understands character-centered representations of video to
answer questions. We release our dataset and model publicly for research
purposes, and we expect our work to provide a new perspective on video story
understanding research.
| 2020 | Computation and Language |
JASS: Japanese-specific Sequence to Sequence Pre-training for Neural
Machine Translation | Neural machine translation (NMT) needs large parallel corpora for
state-of-the-art translation quality. Low-resource NMT is typically addressed
by transfer learning which leverages large monolingual or parallel corpora for
pre-training. Monolingual pre-training approaches such as MASS (MAsked Sequence
to Sequence) are extremely effective in boosting NMT quality for languages with
small parallel corpora. However, they do not account for linguistic information
obtained using syntactic analyzers, which is known to be invaluable for several
Natural Language Processing (NLP) tasks. To this end, we propose JASS,
Japanese-specific Sequence to Sequence, as a novel pre-training alternative to
MASS for NMT involving Japanese as the source or target language. JASS is joint
BMASS (Bunsetsu MASS) and BRSS (Bunsetsu Reordering Sequence to Sequence)
pre-training which focuses on Japanese linguistic units called bunsetsus. In
our experiments on ASPEC Japanese--English and News Commentary
Japanese--Russian translation, we show that JASS can give results that are
competitive with, if not better than, those given by MASS. Furthermore, we show
for the first time that joint MASS and JASS pre-training gives results that
significantly surpass the individual methods, indicating their complementary
nature. We will release our code, pre-trained models and bunsetsu annotated
data as resources for researchers to use in their own NLP tasks.
| 2020 | Computation and Language |
2kenize: Tying Subword Sequences for Chinese Script Conversion | Simplified Chinese to Traditional Chinese character conversion is a common
preprocessing step in Chinese NLP. Despite this, current approaches have poor
performance because they do not take into account that a simplified Chinese
character can correspond to multiple traditional characters. Here, we propose a
model that can disambiguate between mappings and convert between the two
scripts. The model is based on subword segmentation, two language models, as
well as a method for mapping between subword sequences. We further construct
benchmark datasets for topic classification and script conversion. Our proposed
method outperforms previous Chinese character conversion approaches by 6 points
in accuracy. These results are further confirmed in a downstream application,
where 2kenize is used to convert a pretraining dataset for topic classification.
An error analysis reveals that our method's particular strengths are in dealing
with code-mixing and named entities.
| 2020 | Computation and Language |
Does Multi-Encoder Help? A Case Study on Context-Aware Neural Machine
Translation | In encoder-decoder neural models, multiple encoders are in general used to
represent the contextual information in addition to the individual sentence. In
this paper, we investigate multi-encoder approaches in document-level neural
machine translation (NMT). Surprisingly, we find that the context encoder not
only encodes the surrounding sentences but also behaves as a noise generator.
This makes us rethink the real benefits of multi-encoder approaches in
context-aware translation: some of the improvements come from robust training.
We compare several methods that introduce noise and/or well-tuned dropout setup
into the training of these encoders. Experimental results show that noisy
training plays an important role in multi-encoder-based NMT, especially when
the training data is small. Also, we establish a new state of the art on the
IWSLT Fr-En task by careful use of noise generation and dropout methods.
| 2020 | Computation and Language |
The Perceptimatic English Benchmark for Speech Perception Models | We present the Perceptimatic English Benchmark, an open experimental
benchmark for evaluating quantitative models of speech perception in English.
The benchmark consists of ABX stimuli along with the responses of 91 American
English-speaking listeners. The stimuli test discrimination of a large number
of English and French phonemic contrasts. They are extracted directly from
corpora of read speech, making them appropriate for evaluating statistical
acoustic models (such as those used in automatic speech recognition) trained on
typical speech data sets. We show that phone discrimination is correlated with
several types of models, and give recommendations for researchers seeking
easily calculated norms of acoustic distance on experimental stimuli. We show
that DeepSpeech, a standard English speech recognizer, is more specialized on
English phoneme discrimination than English listeners, and is poorly correlated
with their behaviour, even though it yields a low error on the decision task
given to humans.
| 2020 | Computation and Language |
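The ABX protocol behind the benchmark in the row above reduces to a simple comparison: a model discriminates a contrast correctly when its acoustic distance from the probe X to the same-category item A is smaller than to the other-category item B. The sketch below is purely illustrative; the time-averaged MFCC cosine distance and the file handling are assumptions, not the benchmark's own evaluation code.

```python
# Minimal sketch: scoring ABX discrimination from an acoustic distance measure.
# The MFCC-based distance is only an illustrative stand-in for the "easily
# calculated norms of acoustic distance" discussed in the abstract above.
import numpy as np
import librosa

def mfcc_distance(path_a, path_b, sr=16000, n_mfcc=13):
    """Cosine distance between time-averaged MFCC vectors of two audio files."""
    a, _ = librosa.load(path_a, sr=sr)
    b, _ = librosa.load(path_b, sr=sr)
    va = librosa.feature.mfcc(y=a, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    vb = librosa.feature.mfcc(y=b, sr=sr, n_mfcc=n_mfcc).mean(axis=1)
    return 1.0 - np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))

def abx_accuracy(triples, distance=mfcc_distance):
    """triples: iterable of (A, B, X) audio paths, where X belongs to A's category.
    A triple is discriminated correctly when d(A, X) < d(B, X)."""
    correct = [distance(a, x) < distance(b, x) for a, b, x in triples]
    return float(np.mean(correct))
```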
Fine-Grained Analysis of Cross-Linguistic Syntactic Divergences | The patterns in which the syntax of different languages converges and
diverges are often used to inform work on cross-lingual transfer. Nevertheless,
little empirical work has been done on quantifying the prevalence of different
syntactic divergences across language pairs. We propose a framework for
extracting divergence patterns for any language pair from a parallel corpus,
building on Universal Dependencies. We show that our framework provides a
detailed picture of cross-language divergences, generalizes previous
approaches, and lends itself to full automation. We further present a novel
dataset, a manually word-aligned subset of the Parallel UD corpus in five
languages, and use it to perform a detailed corpus study. We demonstrate the
usefulness of the resulting analysis by showing that it can help account for
performance patterns of a cross-lingual parser.
| 2020 | Computation and Language |
Reference and Document Aware Semantic Evaluation Methods for Korean
Language Summarization | Text summarization refers to the process that generates a shorter form of
text from the source document preserving salient information. Many existing
works for text summarization are generally evaluated by using recall-oriented
understudy for gisting evaluation (ROUGE) scores. However, as ROUGE scores are
computed based on n-gram overlap, they do not reflect semantic meaning
correspondences between generated and reference summaries. Because Korean is an
agglutinative language that combines various morphemes into a word expressing
several meanings, ROUGE is not suitable for Korean summarization. In this
paper, we propose evaluation metrics that reflect the semantic meaning of both
the reference summary and the original document: the Reference and Document
Aware Semantic Score (RDASS). We then propose a method for improving the correlation
of the metrics with human judgment. Evaluation results show that the
correlation with human judgment is significantly higher for our evaluation
metrics than for ROUGE scores.
| 2020 | Computation and Language |
Practical Perspectives on Quality Estimation for Machine Translation | Sentence level quality estimation (QE) for machine translation (MT) attempts
to predict the translation edit rate (TER) cost of post-editing work required
to correct MT output. We describe our view on sentence-level QE as dictated by
several practical setups encountered in the industry. We find consumers of MT
output---whether human or algorithmic ones---to be primarily interested in a
binary quality metric: is the translated sentence adequate as-is or does it
need post-editing? Motivated by this we propose a quality classification (QC)
view on sentence-level QE whereby we focus on maximizing recall at precision
above a given threshold. We demonstrate that, while classical QE regression
models fare poorly on this task, they can be re-purposed by replacing the
output regression layer with a binary classification one, achieving 50-60\%
recall at 90\% precision. For a high-quality MT system producing 75-80\%
correct translations, this promises a significant reduction in post-editing
work indeed.
| 2020 | Computation and Language |
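The quality-classification view in the row above amounts to choosing the operating point that maximizes recall subject to a precision floor. A minimal sketch of that computation, assuming per-sentence adequacy labels and classifier scores are already available (the data here is synthetic and purely illustrative, not from the paper):

```python
# Illustrative sketch: given per-sentence quality scores from a binary classifier,
# find the highest recall attainable at precision >= 90%.
import numpy as np
from sklearn.metrics import precision_recall_curve

def recall_at_precision(y_true, scores, min_precision=0.90):
    """y_true: 1 = translation adequate as-is, 0 = needs post-editing.
    scores: classifier probability that the translation is adequate."""
    precision, recall, _ = precision_recall_curve(y_true, scores)
    feasible = precision >= min_precision
    return recall[feasible].max() if feasible.any() else 0.0

# Synthetic example just to show the call signature.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
s = np.clip(y * 0.6 + rng.normal(0.2, 0.3, size=1000), 0, 1)
print(recall_at_precision(y, s))
```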
The Danish Gigaword Project | Danish language technology has been hindered by a lack of broad-coverage
corpora at the scale modern NLP prefers. This paper describes the Danish
Gigaword Corpus, the result of a focused effort to provide a diverse and
freely-available one billion word corpus of Danish text. The Danish Gigaword
corpus covers a wide array of time periods, domains, speakers' socio-economic
status, and Danish dialects.
| 2021 | Computation and Language |
MISA: Modality-Invariant and -Specific Representations for Multimodal
Sentiment Analysis | Multimodal Sentiment Analysis is an active area of research that leverages
multimodal signals for affective understanding of user-generated videos. The
predominant approach to this task has been to develop sophisticated
fusion techniques. However, the heterogeneous nature of the signals creates
distributional modality gaps that pose significant challenges. In this paper,
we aim to learn effective modality representations to aid the process of
fusion. We propose a novel framework, MISA, which projects each modality to two
distinct subspaces. The first subspace is modality-invariant, where the
representations across modalities learn their commonalities and reduce the
modality gap. The second subspace is modality-specific, which is private to
each modality and captures their characteristic features. These representations
provide a holistic view of the multimodal data, which is used for fusion that
leads to task predictions. Our experiments on popular sentiment analysis
benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art
models. We also consider the task of Multimodal Humor Detection and experiment
on the recently proposed UR_FUNNY dataset. Here too, our model fares better
than strong baselines, establishing MISA as a useful multimodal framework.
| 2020 | Computation and Language |
Learning Implicit Text Generation via Feature Matching | Generative feature matching network (GFMN) is an approach for training
implicit generative models for images by performing moment matching on features
from pre-trained neural networks. In this paper, we present new GFMN
formulations that are effective for sequential data. Our experimental results
show the effectiveness of the proposed method, SeqGFMN, for three distinct
generation tasks in English: unconditional text generation, class-conditional
text generation, and unsupervised text style transfer. SeqGFMN is stable to
train and outperforms various adversarial approaches for text generation and
text style transfer.
| 2020 | Computation and Language |
A Tale of Two Perplexities: Sensitivity of Neural Language Models to
Lexical Retrieval Deficits in Dementia of the Alzheimer's Type | In recent years there has been a burgeoning interest in the use of
computational methods to distinguish between elicited speech samples produced
by patients with dementia, and those from healthy controls. The difference
between perplexity estimates from two neural language models (LMs) - one
trained on transcripts of speech produced by healthy participants and the other
trained on transcripts from patients with dementia - as a single feature for
diagnostic classification of unseen transcripts has been shown to produce
state-of-the-art performance. However, little is known about why this approach
is effective, and on account of the lack of case/control matching in the most
widely-used evaluation set of transcripts (DementiaBank), it is unclear if
these approaches are truly diagnostic, or are sensitive to other variables. In
this paper, we interrogate neural LMs trained on participants with and without
dementia using synthetic narratives previously developed to simulate
progressive semantic dementia by manipulating lexical frequency. We find that
perplexity of neural LMs is strongly and differentially associated with lexical
frequency, and that a mixture model resulting from interpolating control and
dementia LMs improves upon the current state-of-the-art for models trained on
transcript text exclusively.
| 2020 | Computation and Language |
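The paired-perplexity feature described in the row above is straightforward to compute once the two language models exist. Below is a minimal sketch using GPT-2-style models from the transformers library; the paper trained its own control and dementia LMs, so the checkpoints here are placeholders only.

```python
# Sketch of the paired-perplexity feature: perplexity under a control-trained LM
# minus perplexity under a dementia-trained LM, used as a single feature.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
control_lm = GPT2LMHeadModel.from_pretrained("gpt2")   # placeholder for a control-trained LM
dementia_lm = GPT2LMHeadModel.from_pretrained("gpt2")  # placeholder for a dementia-trained LM

def perplexity(model, text):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def paired_perplexity_feature(transcript):
    # Positive values mean the control LM is more "surprised" than the dementia LM.
    return perplexity(control_lm, transcript) - perplexity(dementia_lm, transcript)
```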
Learning Robust Models for e-Commerce Product Search | Showing items that do not match search query intent degrades customer
experience in e-commerce. These mismatches result from counterfactual biases of
the ranking algorithms toward noisy behavioral signals such as clicks and
purchases in the search logs. Mitigating the problem requires a large labeled
dataset, which is expensive and time-consuming to obtain. In this paper, we
develop a deep, end-to-end model that learns to effectively classify mismatches
and to generate hard mismatched examples to improve the classifier. We train
the model end-to-end by introducing a latent variable into the cross-entropy
loss that alternates between using the real and generated samples. This not
only makes the classifier more robust but also boosts the overall ranking
performance. Our model achieves a relative gain over baselines of more than
26% in F-score and over 17% in area under the PR curve. On live search traffic,
our model gains significant improvement in multiple countries.
| 2020 | Computation and Language |
Where is Linked Data in Question Answering over Linked Data? | We argue that "Question Answering with Knowledge Base" and "Question
Answering over Linked Data" are currently two instances of the same problem,
even though one of them explicitly claims to deal with Linked Data. We point out
the lack of existing methods to evaluate question answering on datasets that exploit
external links to the rest of the cloud or share a common schema. To this end, we
propose the creation of new evaluation settings to leverage the advantages of
the Semantic Web to achieve AI-complete question answering.
| 2020 | Computation and Language |
On Exposure Bias, Hallucination and Domain Shift in Neural Machine
Translation | The standard training algorithm in neural machine translation (NMT) suffers
from exposure bias, and alternative algorithms have been proposed to mitigate
this. However, the practical impact of exposure bias is under debate. In this
paper, we link exposure bias to another well-known problem in NMT, namely the
tendency to generate hallucinations under domain shift. In experiments on three
datasets with multiple test domains, we show that exposure bias is partially to
blame for hallucinations, and that training with Minimum Risk Training, which
avoids exposure bias, can mitigate this. Our analysis explains why exposure
bias is more problematic under domain shift, and also links exposure bias to
the beam search problem, i.e. performance deterioration with increasing beam
size. Our results provide a new justification for methods that reduce exposure
bias: even if they do not increase performance on in-domain test sets, they can
increase model robustness to domain shift.
| 2020 | Computation and Language |
Learning to Segment Actions from Observation and Narration | We apply a generative segmental model of task structure, guided by narration,
to action segmentation in video. We focus on unsupervised and weakly-supervised
settings where no action labels are known during training. Despite its
simplicity, our model performs competitively with previous work on a dataset of
naturalistic instructional videos. Our model allows us to vary the sources of
supervision used in training, and we find that both task structure and
narrative language provide large benefits in segmentation quality.
| 2020 | Computation and Language |
A Systematic Assessment of Syntactic Generalization in Neural Language
Models | While state-of-the-art neural network models continue to achieve lower
perplexity scores on language modeling benchmarks, it remains unknown whether
optimizing for broad-coverage predictive performance leads to human-like
syntactic knowledge. Furthermore, existing work has not provided a clear
picture about the model properties required to produce proper syntactic
generalizations. We present a systematic evaluation of the syntactic knowledge
of neural language models, testing 20 combinations of model types and data
sizes on a set of 34 English-language syntactic test suites. We find
substantial differences in syntactic generalization performance by model
architecture, with sequential models underperforming other architectures.
Factorially manipulating model architecture and training dataset size (1M--40M
words), we find that variability in syntactic generalization performance is
substantially greater by architecture than by dataset size for the corpora
tested in our experiments. Our results also reveal a dissociation between
perplexity and syntactic generalization performance.
| 2020 | Computation and Language |
LIIR at SemEval-2020 Task 12: A Cross-Lingual Augmentation Approach for
Multilingual Offensive Language Identification | This paper presents our system entitled `LIIR' for SemEval-2020 Task 12 on
Multilingual Offensive Language Identification in Social Media (OffensEval 2).
We have participated in sub-task A for English, Danish, Greek, Arabic, and
Turkish languages. We adapt and fine-tune the BERT and Multilingual BERT models
made available by Google AI for English and non-English languages respectively.
For the English language, we use a combination of two fine-tuned BERT models.
For the other languages, we propose a cross-lingual augmentation approach in order
to enrich training data and we use Multilingual BERT to obtain sentence
representations. LIIR achieved rank 14/38, 18/47, 24/86, 24/54, and 25/40 in
Greek, Turkish, English, Arabic, and Danish languages, respectively.
| 2020 | Computation and Language |
SUPERT: Towards New Frontiers in Unsupervised Evaluation Metrics for
Multi-Document Summarization | We study unsupervised multi-document summarization evaluation metrics, which
require neither human-written reference summaries nor human annotations (e.g.
preferences, ratings, etc.). We propose SUPERT, which rates the quality of a
summary by measuring its semantic similarity with a pseudo reference summary,
i.e. selected salient sentences from the source documents, using contextualized
embeddings and soft token alignment techniques. Compared to the
state-of-the-art unsupervised evaluation metrics, SUPERT correlates better with
human ratings by 18-39%. Furthermore, we use SUPERT as rewards to guide a
neural-based reinforcement learning summarizer, yielding favorable performance
compared to the state-of-the-art unsupervised summarizers. All source code is
available at https://github.com/yg211/acl20-ref-free-eval.
| 2020 | Computation and Language |
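A much-simplified sketch of the reference-free idea in the row above: build a pseudo reference from salient (here, simply leading) sentences of the source documents and score a candidate summary by embedding similarity. SUPERT's soft token alignment is omitted, the lead-sentence heuristic and the sentence-transformers encoder name are assumptions, and this is not the released implementation.

```python
# Simplified, reference-free scoring in the spirit of SUPERT (not its implementation).
from nltk import sent_tokenize                      # assumes the nltk punkt data is installed
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder choice

def pseudo_reference(source_docs, lead_k=3):
    """Use the first lead_k sentences of each source document as pseudo-reference text."""
    sents = []
    for doc in source_docs:
        sents.extend(sent_tokenize(doc)[:lead_k])
    return " ".join(sents)

def reference_free_score(summary, source_docs):
    ref = pseudo_reference(source_docs)
    emb = encoder.encode([summary, ref], convert_to_tensor=True)
    return util.cos_sim(emb[0], emb[1]).item()
```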
FEQA: A Question Answering Evaluation Framework for Faithfulness
Assessment in Abstractive Summarization | Neural abstractive summarization models are prone to generate content
inconsistent with the source document, i.e. unfaithful. Existing automatic
metrics do not capture such mistakes effectively. We tackle the problem of
evaluating faithfulness of a generated summary given its source document. We
first collected human annotations of faithfulness for outputs from numerous
models on two datasets. We find that current models exhibit a trade-off between
abstractiveness and faithfulness: outputs with less word overlap with the
source document are more likely to be unfaithful. Next, we propose an automatic
question answering (QA) based metric for faithfulness, FEQA, which leverages
recent advances in reading comprehension. Given question-answer pairs generated
from the summary, a QA model extracts answers from the document; non-matched
answers indicate unfaithful information in the summary. Among metrics based on
word overlap, embedding similarity, and learned language understanding models,
our QA-based metric has significantly higher correlation with human
faithfulness scores, especially on highly abstractive summaries.
| 2020 | Computation and Language |
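A hedged sketch of the QA-based faithfulness check in the row above, assuming the question-generation step has already produced (question, answer-from-summary) pairs; the extractive QA checkpoint is an arbitrary public SQuAD model, not the metric's official one, and the scoring is standard SQuAD-style token F1.

```python
# Sketch: answer questions derived from the summary against the source document
# and measure answer overlap; low overlap suggests unfaithful summary content.
from collections import Counter
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def token_f1(pred, gold):
    p, g = pred.lower().split(), gold.lower().split()
    common = Counter(p) & Counter(g)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

def faithfulness_score(document, qa_pairs):
    """qa_pairs: list of (question, answer) pairs derived from the summary."""
    scores = [token_f1(qa(question=q, context=document)["answer"], a) for q, a in qa_pairs]
    return sum(scores) / len(scores)
```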
Mapping Natural Language Instructions to Mobile UI Action Sequences | We present a new problem: grounding natural language instructions to mobile
user interface actions, and create three new datasets for it. For full task
evaluation, we create PIXELHELP, a corpus that pairs English instructions with
actions performed by people on a mobile UI emulator. To scale training, we
decouple the language and action data by (a) annotating action phrase spans in
HowTo instructions and (b) synthesizing grounded descriptions of actions for
mobile user interfaces. We use a Transformer to extract action phrase tuples
from long-range natural language instructions. A grounding Transformer then
contextually represents UI objects using both their content and screen position
and connects them to object descriptions. Given a starting screen and
instruction, our model achieves 70.59% accuracy on predicting complete
ground-truth action sequences in PIXELHELP.
| 2020 | Computation and Language |
Comparative Analysis of Word Embeddings for Capturing Word Similarities | Distributed language representation has become the most widely used technique
for language representation in various natural language processing tasks. Most
of the natural language processing models that are based on deep learning
techniques use already pre-trained distributed word representations, commonly
called word embeddings. Determining the highest-quality word embeddings is of
crucial importance for such models. However, selecting the appropriate word
embeddings is a perplexing task since the projected embedding space is not
intuitive to humans. In this paper, we explore different approaches for
creating distributed word representations. We perform an intrinsic evaluation
of several state-of-the-art word embedding methods. Their performance on
capturing word similarities is analysed with existing benchmark datasets for
word pairs similarities. The research in this paper conducts a correlation
analysis between ground truth word similarities and similarities obtained by
different word embedding methods.
| 2020 | Computation and Language |
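The correlation analysis in the row above reduces to comparing human word-pair similarity ratings against embedding cosine similarities. A minimal sketch, assuming a word2vec-format embedding file and a tiny stand-in benchmark (both placeholders, not the paper's data):

```python
# Intrinsic evaluation sketch: Spearman correlation between human similarity
# ratings and cosine similarities from a pre-trained word embedding.
from gensim.models import KeyedVectors
from scipy.stats import spearmanr

vectors = KeyedVectors.load_word2vec_format("embeddings.txt", binary=False)  # placeholder path

benchmark = [("car", "automobile", 9.2), ("cup", "coffee", 6.6), ("stock", "phone", 1.6)]

def embedding_benchmark_correlation(kv, pairs):
    human, model = [], []
    for w1, w2, rating in pairs:
        if w1 in kv and w2 in kv:                 # skip out-of-vocabulary pairs
            human.append(rating)
            model.append(kv.similarity(w1, w2))   # cosine similarity
    return spearmanr(human, model).correlation

print(embedding_benchmark_correlation(vectors, benchmark))
```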
Distilling Knowledge from Pre-trained Language Models via Text Smoothing | This paper studies compressing pre-trained language models, like BERT (Devlin
et al.,2019), via teacher-student knowledge distillation. Previous works
usually force the student model to strictly mimic the smoothed labels predicted
by the teacher BERT. As an alternative, we propose a new method for BERT
distillation, i.e., asking the teacher to generate smoothed word ids, rather
than labels, for teaching the student model in knowledge distillation. We call
this kind of method Text Smoothing. Practically, we use the softmax prediction of
the Masked Language Model (MLM) in BERT to generate word distributions for given
texts and smooth those input texts using the predicted soft word ids. We
assume that both the smoothed labels and the smoothed texts can implicitly
augment the input corpus, while text smoothing is intuitively more efficient
since it can generate more instances in one neural network forward
step. Experimental results on GLUE and SQuAD demonstrate that our solution can
achieve competitive results compared with existing BERT distillation methods.
| 2020 | Computation and Language |
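One plausible reading of the text-smoothing idea in the row above, sketched below and not taken from the authors' code: use the teacher's MLM softmax to turn each input position into a soft word distribution and feed the expected embedding to the student instead of a hard token embedding.

```python
# Sketch (an assumption about the mechanism, not the paper's implementation):
# smoothed inputs as the expected embedding under BERT's MLM word distribution.
import torch
from transformers import BertTokenizerFast, BertForMaskedLM

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
teacher = BertForMaskedLM.from_pretrained("bert-base-uncased")

def smoothed_input_embeddings(text, temperature=1.0):
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = teacher(**enc).logits                    # (1, seq_len, vocab_size)
    word_dists = torch.softmax(logits / temperature, dim=-1)
    embed_matrix = teacher.get_input_embeddings().weight  # (vocab_size, hidden_size)
    # Expected embedding at each position under the teacher's soft word distribution.
    return word_dists @ embed_matrix                      # (1, seq_len, hidden_size)
```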
Detecting East Asian Prejudice on Social Media | The outbreak of COVID-19 has transformed societies across the world as
governments tackle the health, economic and social costs of the pandemic. It
has also raised concerns about the spread of hateful language and prejudice
online, especially hostility directed against East Asia. In this paper we
report on the creation of a classifier that detects and categorizes social
media posts from Twitter into four classes: Hostility against East Asia,
Criticism of East Asia, Meta-discussions of East Asian prejudice and a neutral
class. The classifier achieves an F1 score of 0.83 across all four classes. We
provide our final model (coded in Python), as well as a new 20,000 tweet
training dataset used to make the classifier, two analyses of hashtags
associated with East Asian prejudice and the annotation codebook. The
classifier can be implemented by other researchers, assisting with both online
content moderation processes and further research into the dynamics, prevalence
and impact of East Asian prejudice online during this global pandemic.
| 2020 | Computation and Language |
Context-Sensitive Generation Network for Handing Unknown Slot Values in
Dialogue State Tracking | As a key component in a dialogue system, dialogue state tracking plays an
important role. It is very important for dialogue state tracking to deal with
the problem of unknown slot values. As far as we know, almost all existing
approaches depend on pointer networks to solve the unknown slot value problem.
These pointer network-based methods usually carry a hidden assumption that there
is at most one out-of-vocabulary word in an unknown slot value, because of the
nature of a pointer network. However, an unknown slot value often contains multiple
out-of-vocabulary words, which makes the existing
methods perform poorly. To tackle this problem, in this paper, we propose a novel
Context-Sensitive Generation network (CSG) which can facilitate the
representation of out-of-vocabulary words when generating the unknown slot
value. Extensive experiments show that our proposed method performs better than
the state-of-the-art baselines.
| 2020 | Computation and Language |
Learning to Detect Unacceptable Machine Translations for Downstream
Tasks | The field of machine translation has progressed tremendously in recent years.
Even though the translation quality has improved significantly, current systems
are still unable to produce uniformly acceptable machine translations for the
variety of possible use cases. In this work, we put machine translation in a
cross-lingual pipeline and introduce downstream tasks to define task-specific
acceptability of machine translations. This allows us to leverage parallel data
to automatically generate acceptability annotations on a large scale, which in
turn help to learn acceptability detectors for the downstream tasks. We conduct
experiments to demonstrate the effectiveness of our framework for a range of
downstream tasks and translation models.
| 2020 | Computation and Language |
Towards Conversational Recommendation over Multi-Type Dialogs | We propose a new task of conversational recommendation over multi-type
dialogs, where the bots can proactively and naturally lead a conversation from
a non-recommendation dialog (e.g., QA) to a recommendation dialog, taking into
account the user's interests and feedback. To facilitate the study of this task, we
create a human-to-human Chinese dialog dataset \emph{DuRecDial} (about 10k
dialogs, 156k utterances), which contains multiple sequential dialogs for every
pair of a recommendation seeker (user) and a recommender (bot). In each dialog,
the recommender proactively leads a multi-type dialog to approach
recommendation targets and then makes multiple recommendations with rich
interaction behavior. This dataset allows us to systematically investigate
different parts of the overall problem, e.g., how to naturally lead a dialog,
how to interact with users for recommendation. Finally, we establish baseline
results on DuRecDial for future studies. Dataset and codes are publicly
available at
https://github.com/PaddlePaddle/models/tree/develop/PaddleNLP/Research/ACL2020-DuRecDial.
| 2020 | Computation and Language |
CAiRE-COVID: A Question Answering and Query-focused Multi-Document
Summarization System for COVID-19 Scholarly Information Management | We present CAiRE-COVID, a real-time question answering (QA) and
multi-document summarization system, which won one of the 10 tasks in the
Kaggle COVID-19 Open Research Dataset Challenge, judged by medical experts. Our
system aims to tackle the recent challenge of mining the numerous scientific
articles being published on COVID-19 by answering high priority questions from
the community and summarizing salient question-related information. It combines
information extraction with state-of-the-art QA and query-focused
multi-document summarization techniques, selecting and highlighting evidence
snippets from existing literature given a query. We also propose query-focused
abstractive and extractive multi-document summarization methods, to provide
more relevant information related to the question. We further conduct
quantitative experiments that show consistent improvements on various metrics
for each module. We have launched our website CAiRE-COVID for broader use by
the medical community, and have open-sourced the code for our system, to
bootstrap further study by other researchers.
| 2020 | Computation and Language |
Sentiment Analysis Using Simplified Long Short-term Memory Recurrent
Neural Networks | LSTM, or Long Short-Term Memory, is a specific type of Recurrent
Neural Network (RNN) that is very effective in dealing with long sequence data
and learning long term dependencies. In this work, we perform sentiment
analysis on a GOP Debate Twitter dataset. To speed up training and reduce the
computational cost and time, six different parameter reduced slim versions of
the LSTM model (slim LSTM) are proposed. We evaluate two of these models on the
dataset. The performance of these two LSTM models along with the standard LSTM
model is compared. The effect of Bidirectional LSTM Layers is also studied. The
work also includes a study to choose the best architecture, apart from
establishing the best set of hyperparameters for the different LSTM models.
| 2020 | Computation and Language |
Literature Triage on Genomic Variation Publications by
Knowledge-enhanced Multi-channel CNN | Background: To investigate the correlation between genomic variation and
certain diseases or phenotypes, the fundamental task is to screen out the
concerning publications from massive literature, which is called literature
triage. Some knowledge bases, including UniProtKB/Swiss-Prot and NHGRI-EBI GWAS
Catalog are created for collecting concerning publications. These publications
are manually curated by experts, which is time-consuming. Moreover, the manual
curation of information from literature is not scalable due to the rapidly
increasing amount of publications. In order to cut down the cost of literature
triage, machine-learning models were adopted to automatically identify
biomedical publications. Methods: Compared to previous studies utilizing
machine-learning models for literature triage, we adopt a multi-channel
convolutional network to utilize rich textual information and meanwhile bridge
the semantic gaps between different corpora. In addition, knowledge embeddings
learned from UMLS are also used to provide extra medical knowledge beyond
textual features in the process of triage. Results: We demonstrate that our
model outperforms the state-of-the-art models over 5 datasets with the help of
knowledge embedding and multiple channels. Our model improves the accuracy of
biomedical literature triage results. Conclusions: Multiple channels and
knowledge embeddings enhance the performance of the CNN model in the task of
biomedical literature triage. Keywords: Literature Triage; Knowledge Embedding;
Multi-channel Convolutional Network
| 2020 | Computation and Language |
SentiBERT: A Transferable Transformer-Based Architecture for
Compositional Sentiment Semantics | We propose SentiBERT, a variant of BERT that effectively captures
compositional sentiment semantics. The model incorporates contextualized
representations with a binary constituency parse tree to capture semantic
composition. Comprehensive experiments demonstrate that SentiBERT achieves
competitive performance on phrase-level sentiment classification. We further
demonstrate that the sentiment composition learned from the phrase-level
annotations on SST can be transferred to other sentiment analysis tasks as well
as related tasks, such as emotion classification tasks. Moreover, we conduct
ablation studies and design visualization methods to understand SentiBERT. We
show that SentiBERT is better than baseline approaches at capturing negation
and the contrastive relation and at modeling compositional sentiment semantics.
| 2020 | Computation and Language |
Beyond Accuracy: Behavioral Testing of NLP models with CheckList | Although measuring held-out accuracy has been the primary approach to
evaluate generalization, it often overestimates the performance of NLP models,
while alternative approaches for evaluating models either focus on individual
tasks or on specific behaviors. Inspired by principles of behavioral testing in
software engineering, we introduce CheckList, a task-agnostic methodology for
testing NLP models. CheckList includes a matrix of general linguistic
capabilities and test types that facilitate comprehensive test ideation, as
well as a software tool to generate a large and diverse number of test cases
quickly. We illustrate the utility of CheckList with tests for three tasks,
identifying critical failures in both commercial and state-of-the-art models. In a
user study, a team responsible for a commercial sentiment analysis model found
new and actionable bugs in an extensively tested model. In another user study,
NLP practitioners with CheckList created twice as many tests, and found almost
three times as many bugs as users without it.
| 2020 | Computation and Language |
Quantum Natural Language Processing on Near-Term Quantum Computers | In this work, we describe a full-stack pipeline for natural language
processing on near-term quantum computers, aka QNLP. The language-modelling
framework we employ is that of compositional distributional semantics
(DisCoCat), which extends and complements the compositional structure of
pregroup grammars. Within this model, the grammatical reduction of a sentence
is interpreted as a diagram, encoding a specific interaction of words according
to the grammar. It is this interaction which, together with a specific choice
of word embedding, realises the meaning (or "semantics") of a sentence.
Building on the formal quantum-like nature of such interactions, we present a
method for mapping DisCoCat diagrams to quantum circuits. Our methodology is
compatible both with NISQ devices and with established Quantum Machine Learning
techniques, paving the way to near-term applications of quantum technology to
natural language processing.
| 2021 | Computation and Language |
Evidence Inference 2.0: More Data, Better Models | How do we most effectively treat a disease or condition? Ideally, we could
consult a database of evidence gleaned from clinical trials to answer such
questions. Unfortunately, no such database exists; clinical trial results are
instead disseminated primarily via lengthy natural language articles. Perusing
all such articles would be prohibitively time-consuming for healthcare
practitioners; they instead tend to depend on manually compiled systematic
reviews of medical literature to inform care.
NLP may speed this process up, and eventually facilitate immediate consult of
published evidence. The Evidence Inference dataset was recently released to
facilitate research toward this end. This task entails inferring the
comparative performance of two treatments, with respect to a given outcome,
from a particular article (describing a clinical trial) and identifying
supporting evidence. For instance: Does this article report that chemotherapy
performed better than surgery for five-year survival rates of operable cancers?
In this paper, we collect additional annotations to expand the Evidence
Inference dataset by 25\%, provide stronger baseline models, systematically
inspect the errors that these make, and probe dataset quality. We also release
an abstract-only (as opposed to full-text) version of the task for rapid model
prototyping. The updated corpus, documentation, and code for new baselines and
evaluations are available at http://evidence-inference.ebm-nlp.com/.
| 2020 | Computation and Language |
Text-Based Ideal Points | Ideal point models analyze lawmakers' votes to quantify their political
positions, or ideal points. But votes are not the only way to express a
political position. Lawmakers also give speeches, release press statements, and
post tweets. In this paper, we introduce the text-based ideal point model
(TBIP), an unsupervised probabilistic topic model that analyzes texts to
quantify the political positions of their authors. We demonstrate the TBIP with
two types of politicized text data: U.S. Senate speeches and senator tweets.
Though the model does not analyze their votes or political affiliations, the
TBIP separates lawmakers by party, learns interpretable politicized topics, and
infers ideal points close to the classical vote-based ideal points. One benefit
of analyzing texts, as opposed to votes, is that the TBIP can estimate ideal
points of anyone who authors political texts, including non-voting actors. To
this end, we use it to study tweets from the 2020 Democratic presidential
candidates. Using only the texts of their tweets, it identifies them along an
interpretable progressive-to-moderate spectrum.
| 2020 | Computation and Language |
Balancing Objectives in Counseling Conversations: Advancing Forwards or
Looking Backwards | Throughout a conversation, participants make choices that can orient the flow
of the interaction. Such choices are particularly salient in the consequential
domain of crisis counseling, where a difficulty for counselors is balancing
between two key objectives: advancing the conversation towards a resolution,
and empathetically addressing the crisis situation.
In this work, we develop an unsupervised methodology to quantify how
counselors manage this balance. Our main intuition is that if an utterance can
only receive a narrow range of appropriate replies, then its likely aim is to
advance the conversation forwards, towards a target within that range.
Likewise, an utterance that can only appropriately follow a narrow range of
possible utterances is likely aimed backwards at addressing a specific
situation within that range. By applying this intuition, we can map each
utterance to a continuous orientation axis that captures the degree to which it
is intended to direct the flow of the conversation forwards or backwards.
This unsupervised method allows us to characterize counselor behaviors in a
large dataset of crisis counseling conversations, where we show that known
counseling strategies intuitively align with this axis. We also illustrate how
our measure can be indicative of a conversation's progress, as well as its
effectiveness.
| 2020 | Computation and Language |
ConvoKit: A Toolkit for the Analysis of Conversations | This paper describes the design and functionality of ConvoKit, an open-source
toolkit for analyzing conversations and the social interactions embedded
within. ConvoKit provides a unified framework for representing and
manipulating conversational data, as well as a large and diverse collection of
conversational datasets. By providing an intuitive interface for exploring and
interacting with conversational data, this toolkit lowers the technical
barriers for the broad adoption of computational methods for conversational
analysis.
| 2020 | Computation and Language |
Adversarial Learning for Supervised and Semi-supervised Relation
Extraction in Biomedical Literature | Adversarial training is a technique for improving model performance by
involving adversarial examples in the training process. In this paper, we
investigate adversarial training with multiple adversarial examples to benefit
the relation extraction task. We also apply adversarial training technique in
semi-supervised scenarios to utilize unlabeled data. The evaluation results on
protein-protein interaction and protein subcellular localization task
illustrate that adversarial training improves the supervised model,
and is also effective at incorporating unlabeled data in the semi-supervised
training case. In addition, our method achieves state-of-the-art performance on
two benchmarking datasets.
| 2020 | Computation and Language |
Temporal Common Sense Acquisition with Minimal Supervision | Temporal common sense (e.g., duration and frequency of events) is crucial for
understanding natural language. However, its acquisition is challenging, partly
because such information is often not expressed explicitly in text, and human
annotation on such concepts is costly. This work proposes a novel sequence
modeling approach that exploits explicit and implicit mentions of temporal
common sense, extracted from a large corpus, to build TACOLM, a temporal common
sense language model. Our method is shown to give quality predictions of
various dimensions of temporal common sense (on UDST and a newly collected
dataset from RealNews). It also produces representations of events for relevant
tasks such as duration comparison, parent-child relations, event coreference
and temporal QA (on TimeBank, HiEVE and MCTACO) that are better than using the
standard BERT. Thus, it will be an important component of temporal NLP.
| 2020 | Computation and Language |
Probing Linguistic Systematicity | Recently, there has been much interest in the question of whether deep
natural language understanding models exhibit systematicity; generalizing such
that units like words make consistent contributions to the meaning of the
sentences in which they appear. There is accumulating evidence that neural
models often generalize non-systematically. We examined the notion of
systematicity from a linguistic perspective, defining a set of probes and a set
of metrics to measure systematic behaviour. We also identified ways in which
network architectures can generalize non-systematically, and discuss why such
forms of generalization may be unsatisfying. As a case study, we performed a
series of experiments in the setting of natural language inference (NLI),
demonstrating that some NLU systems achieve high overall performance despite
being non-systematic.
| 2020 | Computation and Language |
LinCE: A Centralized Benchmark for Linguistic Code-switching Evaluation | Recent trends in NLP research have raised an interest in linguistic
code-switching (CS); modern approaches have been proposed to solve a wide range
of NLP tasks on multiple language pairs. Unfortunately, these proposed methods
are hardly generalizable to different code-switched languages. In addition, it
is unclear whether a model architecture is applicable for a different task
while still being compatible with the code-switching setting. This is mainly
because of the lack of a centralized benchmark and the sparse corpora that
researchers employ based on their specific needs and interests. To facilitate
research in this direction, we propose a centralized benchmark for Linguistic
Code-switching Evaluation (LinCE) that combines ten corpora covering four
different code-switched language pairs (i.e., Spanish-English, Nepali-English,
Hindi-English, and Modern Standard Arabic-Egyptian Arabic) and four tasks
(i.e., language identification, named entity recognition, part-of-speech
tagging, and sentiment analysis). As part of the benchmark centralization
effort, we provide an online platform at ritual.uh.edu/lince, where researchers
can submit their results while comparing with others in real-time. In addition,
we provide the scores of different popular models, including LSTM, ELMo, and
multilingual BERT so that the NLP community can compare against
state-of-the-art systems. LinCE is a continuous effort, and we will expand it
with more low-resource languages and tasks.
| 2,020 | Computation and Language |
Generalizing Outside the Training Set: When Can Neural Networks Learn
Identity Effects? | Often in language and other areas of cognition, whether two components of an
object are identical or not determines whether it is well formed. We call such
constraints identity effects. When developing a system to learn well-formedness
from examples, it is easy enough to build in an identity effect. But can
identity effects be learned from the data without explicit guidance? We provide
a simple framework in which we can rigorously prove that algorithms satisfying
simple criteria cannot make the correct inference. We then show that a broad
class of algorithms including deep neural networks with standard architecture
and training with backpropagation satisfy our criteria, dependent on the
encoding of inputs. Finally, we demonstrate our theory with computational
experiments in which we explore the effect of different input encodings on the
ability of algorithms to generalize to novel inputs.
| 2,020 | Computation and Language |
Diversifying Dialogue Generation with Non-Conversational Text | Neural network-based sequence-to-sequence (seq2seq) models strongly suffer
from the low-diversity problem when it comes to open-domain dialogue
generation. As bland and generic utterances usually dominate the frequency
distribution in our daily chitchat, avoiding them in order to generate more interesting
responses requires complex data filtering, sampling techniques, or modifying the
training objective. In this paper, we propose a new perspective to diversify
dialogue generation by leveraging non-conversational text. Compared with
bilateral conversations, non-conversational text is easier to obtain, more
diverse, and covers a much broader range of topics. We collect a large-scale
non-conversational corpus from multiple sources including forum comments, idioms,
and book snippets. We further present a training paradigm to effectively
incorporate this text via iterative back-translation. The resulting model is
tested on two conversational datasets and is shown to produce significantly
more diverse responses without sacrificing the relevance with context.
| 2,020 | Computation and Language |
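The training paradigm sketched in the abstract above (folding non-conversational text into dialogue training via iterative back-translation) can be illustrated with a minimal, hedged sketch; the `Seq2SeqModel` class and its `train`/`generate` methods are hypothetical placeholders rather than the paper's implementation:

```python
# Minimal sketch of iterative back-translation for folding non-conversational
# text into dialogue training. Seq2SeqModel and its train()/generate() methods
# are hypothetical placeholders, not the paper's actual implementation.

class Seq2SeqModel:
    """Stand-in for any trainable sequence-to-sequence dialogue model."""
    def train(self, pairs):            # pairs: list of (source, target) strings
        self.pairs = pairs             # real parameter fitting is omitted here
    def generate(self, source):
        return "placeholder output for: " + source

def iterative_back_translation(dialogue_pairs, nonconv_sentences, rounds=3):
    forward = Seq2SeqModel()    # maps a dialogue context to a response
    backward = Seq2SeqModel()   # maps a response back to a plausible context
    backward.train([(resp, ctx) for ctx, resp in dialogue_pairs])
    for _ in range(rounds):
        # Back-translate each non-conversational sentence into a pseudo-context,
        # turning it into an ordinary (context, response) training pair.
        pseudo_pairs = [(backward.generate(s), s) for s in nonconv_sentences]
        forward.train(dialogue_pairs + pseudo_pairs)
        # Refresh the backward model with responses produced by the forward
        # model, so both directions can improve over the rounds.
        synthetic = [(forward.generate(ctx), ctx) for ctx, _ in dialogue_pairs]
        backward.train([(resp, ctx) for ctx, resp in dialogue_pairs] + synthetic)
    return forward

dialogues = [("how was your day", "pretty good, thanks for asking")]
nonconv = ["a journey of a thousand miles begins with a single step"]
model = iterative_back_translation(dialogues, nonconv, rounds=1)
```

The key step is the back-translation of each non-conversational sentence into a pseudo-context so that it can be consumed as a normal (context, response) pair.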
It's Morphin' Time! Combating Linguistic Discrimination with
Inflectional Perturbations | Training on only perfect Standard English corpora predisposes pre-trained
neural networks to discriminate against minorities from non-standard linguistic
backgrounds (e.g., African American Vernacular English, Colloquial Singapore
English, etc.). We perturb the inflectional morphology of words to craft
plausible and semantically similar adversarial examples that expose these
biases in popular NLP models, e.g., BERT and Transformer, and show that
adversarially fine-tuning them for a single epoch significantly improves
robustness without sacrificing performance on clean data.
| 2,021 | Computation and Language |
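A toy sketch of the inflectional-perturbation idea described above: swap a word's surface form for another inflection of the same lemma to obtain a plausible, meaning-preserving variant. The tiny inflection table and word list are illustrative assumptions; a real system would rely on a full morphological inflector and part-of-speech constraints:

```python
import random

# Toy inflectional perturbation: replace a word with another inflection of the
# same lemma. The tiny table below is purely illustrative.
INFLECTIONS = {
    "walk": ["walk", "walks", "walked", "walking"],
    "dog":  ["dog", "dogs"],
    "eat":  ["eat", "eats", "ate", "eating"],
}
LEMMA_OF = {form: lemma for lemma, forms in INFLECTIONS.items() for form in forms}

def perturb(sentence, rng=random):
    out = []
    for tok in sentence.split():
        lemma = LEMMA_OF.get(tok.lower())
        if lemma is not None:
            # Pick any other inflection of the same lemma.
            choices = [f for f in INFLECTIONS[lemma] if f != tok.lower()]
            out.append(rng.choice(choices))
        else:
            out.append(tok)
    return " ".join(out)

print(perturb("the dogs walked home"))   # e.g. "the dog walking home"
```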
Semi-Supervised Dialogue Policy Learning via Stochastic Reward
Estimation | Dialogue policy optimization often does not obtain feedback until task completion in
task-oriented dialogue systems. This is insufficient for training intermediate
dialogue turns since supervision signals (or rewards) are only provided at the
end of dialogues. To address this issue, reward learning has been introduced to
learn from state-action pairs of an optimal policy to provide turn-by-turn
rewards. This approach requires complete state-action annotations of
human-to-human dialogues (i.e., expert demonstrations), which is labor
intensive. To overcome this limitation, we propose a novel reward learning
approach for semi-supervised policy learning. The proposed approach learns a
dynamics model as the reward function which models dialogue progress (i.e.,
state-action sequences) based on expert demonstrations, either with or without
annotations. The dynamics model computes rewards by predicting whether the
dialogue progress is consistent with expert demonstrations. We further propose
to learn action embeddings for a better generalization of the reward function.
The proposed approach outperforms competitive policy learning baselines on
MultiWOZ, a benchmark multi-domain dataset.
| 2,020 | Computation and Language |
Generating Pertinent and Diversified Comments with Topic-aware
Pointer-Generator Networks | Comment generation, a new and challenging task in Natural Language Generation
(NLG), has attracted a lot of attention in recent years. However, comments generated
by previous work tend to lack pertinence and diversity. In this paper, we
propose a novel generation model based on Topic-aware Pointer-Generator
Networks (TPGN), which can utilize the topic information hidden in the articles
to guide the generation of pertinent and diversified comments. Firstly, we
design a keyword-level and topic-level encoder attention mechanism to capture
topic information in the articles. Next, we integrate the topic information
into pointer-generator networks to guide comment generation. Experiments on a
large-scale comment generation dataset show that our model produces
valuable comments and significantly outperforms competitive baseline models.
| 2,020 | Computation and Language |
The Structured Weighted Violations MIRA | We present the Structured Weighted Violation MIRA (SWVM), a new structured
prediction algorithm that is based on a hybridization between MIRA (Crammer
and Singer, 2003) and the structured weighted violations perceptron (SWVP)
(Dror and Reichart, 2016). We demonstrate that the concepts developed in (Dror
and Reichart, 2016) combined with a powerful structured prediction algorithm
can improve performance on sequence labeling tasks. In experiments with
syntactic chunking and named entity recognition (NER), the new algorithm
substantially outperforms the original MIRA as well as the original structured
perceptron and SWVP. Our code is available at
https://github.com/dorringel/SWVM.
| 2,020 | Computation and Language |
Empowering Active Learning to Jointly Optimize System and User Demands | Existing approaches to active learning maximize the system performance by
sampling unlabeled instances for annotation that yield the most efficient
training. However, when active learning is integrated with an end-user
application, this can lead to frustration for participating users, as they
spend time labeling instances that they would not otherwise be interested in
reading. In this paper, we propose a new active learning approach that jointly
optimizes the seemingly counteracting objectives of the active learning system
(training efficiently) and the user (receiving useful instances). We study our
approach in an educational application, which particularly benefits from this
technique as the system needs to rapidly learn to predict the appropriateness
of an exercise to a particular user, while the users should receive only
exercises that match their skills. We evaluate multiple learning strategies and
user types with data from real users and find that our joint approach better
satisfies both objectives when alternative methods lead to many unsuitable
exercises for end users.
| 2,020 | Computation and Language |
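One way to read the joint objective described above is as a single acquisition score that mixes informativeness for the system with suitability for the user. The sketch below, with an assumed entropy-based informativeness term, a Gaussian-shaped suitability term, and an `alpha` weight, is only an illustration of the idea, not the paper's exact formulation:

```python
import math

def entropy(probs):
    # Predictive entropy as a simple informativeness proxy.
    return -sum(p * math.log(p + 1e-12) for p in probs)

def suitability(difficulty, user_skill, tolerance=0.15):
    # Highest when the item's difficulty matches the user's skill level.
    return math.exp(-((difficulty - user_skill) ** 2) / (2 * tolerance ** 2))

def joint_score(probs, difficulty, user_skill, alpha=0.5):
    info = entropy(probs) / math.log(len(probs))   # normalise to [0, 1]
    return alpha * info + (1 - alpha) * suitability(difficulty, user_skill)

pool = [
    {"id": "ex1", "probs": [0.5, 0.5], "difficulty": 0.4},
    {"id": "ex2", "probs": [0.9, 0.1], "difficulty": 0.6},
    {"id": "ex3", "probs": [0.6, 0.4], "difficulty": 0.9},
]
best = max(pool, key=lambda x: joint_score(x["probs"], x["difficulty"], user_skill=0.5))
print(best["id"])
```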
Finding Universal Grammatical Relations in Multilingual BERT | Recent work has found evidence that Multilingual BERT (mBERT), a
transformer-based multilingual masked language model, is capable of zero-shot
cross-lingual transfer, suggesting that some aspects of its representations are
shared cross-lingually. To better understand this overlap, we extend recent
work on finding syntactic trees in neural networks' internal representations to
the multilingual setting. We show that subspaces of mBERT representations
recover syntactic tree distances in languages other than English, and that
these subspaces are approximately shared across languages. Motivated by these
results, we present an unsupervised analysis method that provides evidence
mBERT learns representations of syntactic dependency labels, in the form of
clusters which largely agree with the Universal Dependencies taxonomy. This
evidence suggests that even without explicit supervision, multilingual masked
language models learn certain linguistic universals.
| 2,020 | Computation and Language |
What Was Written vs. Who Read It: News Media Profiling Using Text
Analysis and Social Media Context | Predicting the political bias and the factuality of reporting of entire news
outlets are critical elements of media profiling, which is an understudied but
increasingly important research direction. The present level of
proliferation of fake, biased, and propagandistic content online has made it
impossible to fact-check every single suspicious claim, either manually or
automatically. Alternatively, we can profile entire news outlets and look for
those that are likely to publish fake or biased content. This approach makes it
possible to detect likely "fake news" the moment they are published, by simply
checking the reliability of their source. From a practical perspective,
political bias and factuality of reporting have a linguistic aspect but also a
social context. Here, we study the impact of both, namely (i) what was written
(i.e., what was published by the target medium, and how it describes itself on
Twitter) vs. (ii) who read it (i.e., analyzing the readers of the target medium
on Facebook, Twitter, and YouTube). We further study (iii) what was written
about the target medium on Wikipedia. The evaluation results show that what was
written matters most, and that putting all information sources together yields
huge improvements over the current state-of-the-art.
| 2,020 | Computation and Language |
Article citation study: Context enhanced citation sentiment detection | Citation sentiment analysis is one of the less-studied tasks in
scientometric analysis. For citation analysis, we developed eight datasets
comprising citation sentences, which are manually annotated by us into three
sentiment polarities viz. positive, negative, and neutral. Among eight
datasets, three were developed by considering the whole context of citations.
Furthermore, we proposed an ensembled feature engineering method comprising
word embeddings obtained for texts, parts-of-speech tags, and dependency
relationships together. Ensembled features were considered as input to deep
learning-based approaches for citation sentiment classification, which are in
turn compared with a Bag-of-Words approach. Experimental results demonstrate that
deep learning is useful for larger numbers of samples, whereas a support vector
machine performs best for smaller numbers of samples. Moreover, context-based
samples prove to be more effective than context-less samples for citation
sentiment analysis.
| 2,020 | Computation and Language |
Posterior Control of Blackbox Generation | Text generation often requires high-precision output that obeys task-specific
rules. This fine-grained control is difficult to enforce with off-the-shelf
deep learning models. In this work, we consider augmenting neural generation
models with discrete control states learned through a structured
latent-variable approach. Under this formulation, task-specific knowledge can
be encoded through a range of rich, posterior constraints that are effectively
trained into the model. This approach allows users to ground internal model
decisions based on prior knowledge, without sacrificing the representational
power of neural generative models. Experiments consider applications of this
approach for text generation. We find that this method improves over standard
benchmarks, while also providing fine-grained control.
| 2,020 | Computation and Language |
How Context Affects Language Models' Factual Predictions | When pre-trained on large unsupervised textual corpora, language models are
able to store and retrieve factual knowledge to some extent, making it possible
to use them directly for zero-shot cloze-style question answering. However,
storing factual knowledge in a fixed number of weights of a language model
clearly has limitations. Previous approaches have successfully provided access
to information outside the model weights using supervised architectures that
combine an information retrieval system with a machine reading component. In
this paper, we go a step further and integrate information from a retrieval
system with a pre-trained language model in a purely unsupervised way. We
report that augmenting pre-trained language models in this way dramatically
improves performance and that the resulting system, despite being unsupervised,
is competitive with a supervised machine reading baseline. Furthermore,
processing query and context with different segment tokens allows BERT to
utilize its Next Sentence Prediction pre-trained classifier to determine
whether the context is relevant or not, substantially improving BERT's
zero-shot cloze-style question-answering performance and making its predictions
robust to noisy contexts.
| 2,020 | Computation and Language |
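The relevance-filtering trick mentioned above (using BERT's Next Sentence Prediction head to decide whether a retrieved context belongs with the query) can be sketched with the HuggingFace `transformers` library; the checkpoint name and the example strings are assumptions, and the snippet is a minimal illustration rather than the authors' pipeline:

```python
import torch
from transformers import BertForNextSentencePrediction, BertTokenizer

# Minimal sketch: score how likely a retrieved context is a sensible
# continuation of the query using BERT's Next Sentence Prediction head.
# Label 0 of the NSP head corresponds to "sentence B follows sentence A".
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")
model.eval()

query = "The theory of relativity was developed by [MASK]."
context = "Albert Einstein published the theory of general relativity in 1915."

inputs = tokenizer(query, context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # shape: (1, 2)
relevance = torch.softmax(logits, dim=-1)[0, 0].item()
print(f"P(context is relevant) ~ {relevance:.3f}")
```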
From Standard Summarization to New Tasks and Beyond: Summarization with
Manifold Information | Text summarization is the research area aiming at creating a short and
condensed version of the original document, which conveys the main idea of the
document in a few words. This research topic has started to attract the
attention of a large community of researchers, and it is nowadays counted as
one of the most promising research areas. In general, text summarization
algorithms aim to take a plain text document as input and then output a
summary. However, in real-world applications, most of the data is not in a
plain text format. Instead, there is much manifold information to be
summarized, such as the summary for a web page based on a query in the search
engine, extremely long documents (e.g., academic papers), dialog history, and so on.
In this paper, we focus on surveying these new summarization tasks and
approaches as they arise in real-world applications.
| 2,020 | Computation and Language |
Non-Autoregressive Image Captioning with Counterfactuals-Critical
Multi-Agent Learning | Most image captioning models are autoregressive, i.e. they generate each word
by conditioning on previously generated words, which leads to heavy latency
during inference. Recently, non-autoregressive decoding has been proposed in
machine translation to speed up the inference time by generating all words in
parallel. Typically, these models use the word-level cross-entropy loss to
optimize each word independently. However, such a learning process fails to
consider the sentence-level consistency, thus resulting in inferior generation
quality of these non-autoregressive models. In this paper, we propose a
Non-Autoregressive Image Captioning (NAIC) model with a novel training
paradigm: Counterfactuals-critical Multi-Agent Learning (CMAL). CMAL formulates
NAIC as a multi-agent reinforcement learning system where positions in the
target sequence are viewed as agents that learn to cooperatively maximize a
sentence-level reward. Besides, we propose to utilize massive unlabeled images
to boost captioning performance. Extensive experiments on MSCOCO image
captioning benchmark show that our NAIC model achieves a performance comparable
to state-of-the-art autoregressive models, while bringing a 13.9x decoding speedup.
| 2,020 | Computation and Language |
CTC-synchronous Training for Monotonic Attention Model | Monotonic chunkwise attention (MoChA) has been studied for the online
streaming automatic speech recognition (ASR) based on a sequence-to-sequence
framework. In contrast to connectionist temporal classification (CTC), backward
probabilities cannot be leveraged in the alignment marginalization process
during training due to left-to-right dependency in the decoder. This results in
the error propagation of alignments to subsequent token generation. To address
this problem, we propose CTC-synchronous training (CTC-ST), in which MoChA uses
CTC alignments to learn optimal monotonic alignments. Reference CTC alignments
are extracted from a CTC branch sharing the same encoder with the decoder. The
entire model is jointly optimized so that the expected boundaries from MoChA
are synchronized with the alignments. Experimental evaluations on the TEDLIUM
release-2 and Librispeech corpora show that the proposed method significantly
improves recognition accuracy, especially for long utterances. We also show that CTC-ST
can bring out the full potential of SpecAugment for MoChA.
| 2,020 | Computation and Language |
Towards Robustifying NLI Models Against Lexical Dataset Biases | While deep learning models are making fast progress on the task of Natural
Language Inference, recent studies have also shown that these models achieve
high accuracy by exploiting several dataset biases, and without deep
understanding of the language semantics. Using contradiction-word bias and
word-overlapping bias as our two bias examples, this paper explores both
data-level and model-level debiasing methods to robustify models against
lexical dataset biases. First, we debias the dataset through data augmentation
and enhancement, but show that the model bias cannot be fully removed via this
method. Next, we also compare two ways of directly debiasing the model without
knowing what the dataset biases are in advance. The first approach aims to
remove the label bias at the embedding level. The second approach employs a
bag-of-words sub-model to capture the features that are likely to exploit the
bias and prevents the original model from learning these biased features by
forcing orthogonality between these two sub-models. We performed evaluations on
new balanced datasets extracted from the original MNLI dataset as well as the
NLI stress tests, and show that the orthogonality approach is better at
debiasing the model while maintaining competitive overall accuracy. Our code
and data are available at: https://github.com/owenzx/LexicalDebias-ACL2020
| 2,020 | Computation and Language |
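The orthogonality-based debiasing described above can be sketched as an auxiliary loss that penalises overlap between the main encoder's representation and the bag-of-words sub-model's representation. The tensor shapes, the cosine-based penalty, and the 0.1 weight below are assumptions used for illustration only:

```python
import torch
import torch.nn.functional as F

# Sketch of orthogonality-based debiasing: a bag-of-words sub-model is meant to
# absorb surface-level (biased) features, while the main encoder's features are
# penalised for overlapping with it.

def orthogonality_penalty(h_main, h_bow):
    # Squared cosine similarity between the two representations; pushing it to
    # zero forces the main features to be orthogonal to the BoW features.
    return F.cosine_similarity(h_main, h_bow, dim=-1).pow(2).mean()

h_main = torch.randn(8, 256, requires_grad=True)   # main encoder features (toy)
h_bow = torch.randn(8, 256)                        # bag-of-words sub-model features (toy)
task_loss = torch.tensor(0.7)                      # placeholder NLI task loss
loss = task_loss + 0.1 * orthogonality_penalty(h_main, h_bow)
loss.backward()
```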
A SentiWordNet Strategy for Curriculum Learning in Sentiment Analysis | Curriculum Learning (CL) is the idea that learning on a training set
sequenced or ordered so that samples range from easy to difficult
results in an improvement in performance over otherwise random ordering. The idea
parallels cognitive science's theory of how human brains learn, and that
learning a difficult task can be made easier by phrasing it as a sequence of
easy to difficult tasks. This idea has gained considerable traction in machine
learning and image processing, and more recently in Natural Language
Processing (NLP). In this paper, we apply the ideas of curriculum learning,
driven by SentiWordNet in a sentiment analysis setting. In this setting, given
a text segment, our aim is to extract its sentiment or polarity. SentiWordNet
is a lexical resource with sentiment polarity annotations. By comparing
performance with other curriculum strategies and with no curriculum, the
effectiveness of the proposed strategy is presented. Convolutional, Recurrent,
and Attention-based architectures are employed to assess this improvement. The
models are evaluated on a standard sentiment dataset, Stanford Sentiment
Treebank.
| 2,020 | Computation and Language |
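A minimal sketch of a lexicon-driven curriculum in the spirit of the abstract above: examples whose polarity is clearly signalled by lexicon entries count as easy and are presented first. The toy lexicon and the coverage-based difficulty heuristic are stand-ins for SentiWordNet scores, not the paper's exact recipe:

```python
# Lexicon-driven curriculum sketch: examples with strong, unambiguous lexical
# polarity evidence are treated as "easy" and served first during training.
LEXICON = {"great": 0.8, "love": 0.7, "terrible": -0.8, "boring": -0.6, "fine": 0.2}

def difficulty(sentence):
    scores = [LEXICON[w] for w in sentence.lower().split() if w in LEXICON]
    if not scores:
        return 1.0                     # no lexical evidence: hardest
    strength = abs(sum(scores)) / len(scores)
    return 1.0 - strength              # weak or mixed polarity: harder

train = [
    "the plot was boring and the acting terrible",
    "i love this film, the soundtrack is great",
    "it was fine i guess",
    "the cinematography defies easy description",
]
curriculum = sorted(train, key=difficulty)   # easy -> hard ordering
for s in curriculum:
    print(round(difficulty(s), 2), s)
```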
Leveraging Monolingual Data with Self-Supervision for Multilingual
Neural Machine Translation | Over the last few years two promising research directions in low-resource
neural machine translation (NMT) have emerged. The first focuses on utilizing
high-resource languages to improve the quality of low-resource languages via
multilingual NMT. The second direction employs monolingual data with
self-supervision to pre-train translation models, followed by fine-tuning on
small amounts of supervised data. In this work, we join these two lines of
research and demonstrate the efficacy of monolingual data with self-supervision
in multilingual NMT. We offer three major results: (i) Using monolingual data
significantly boosts the translation quality of low-resource languages in
multilingual models. (ii) Self-supervision improves zero-shot translation
quality in multilingual models. (iii) Leveraging monolingual data with
self-supervision provides a viable path towards adding new languages to
multilingual models, getting up to 33 BLEU on ro-en translation without any
parallel data or back-translation.
| 2,020 | Computation and Language |
Towards logical negation for compositional distributional semantics | The categorical compositional distributional model of meaning gives the
composition of words into phrases and sentences pride of place. However, it has
so far lacked a model of logical negation. This paper gives some steps towards
providing this operator, modelling it as a version of projection onto the
subspace orthogonal to a word. We give a small demonstration of the operator's
performance in a sentence entailment task.
| 2,020 | Computation and Language |
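The proposed negation operator can be made concrete with a small numpy sketch: "not w" is modelled as projection onto the subspace orthogonal to the vector for w. The toy three-dimensional vectors are assumptions chosen only to make the effect visible:

```python
import numpy as np

# Negation as projection onto the orthogonal complement of a word vector:
# applying "not w" to a vector v keeps only the component of v orthogonal to w.
def negate(w):
    w = w / np.linalg.norm(w)
    return np.eye(len(w)) - np.outer(w, w)

rich = np.array([0.9, 0.1, 0.2])      # toy vectors, for illustration only
happy = np.array([0.1, 0.8, 0.3])
not_rich = negate(rich)

print(np.dot(not_rich @ rich, rich))    # ~0: the "rich" direction is removed
print(np.dot(not_rich @ happy, happy))  # mostly preserved: "happy" is barely affected
```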
A Deep Learning Approach for Automatic Detection of Fake News | Fake news detection is a very prominent and essential task in the field of
journalism. So far, this challenging problem has been studied mainly in the field of politics,
but it could be even more challenging when it is to be determined on a
multi-domain platform. In this paper, we propose two effective models based on
deep learning for solving fake news detection problem in online news contents
of multiple domains. We evaluate our techniques on the two recently released
datasets, namely FakeNews AMT and Celebrity for fake news detection. The
proposed systems yield encouraging performance, outperforming the current
state-of-the-art system based on handcrafted feature engineering by
significant margins of 3.08% and 9.3% for the two models, respectively. In order
to exploit the datasets available for the related tasks, we perform
cross-domain analysis (i.e. model trained on FakeNews AMT and tested on
Celebrity and vice versa) to explore the applicability of our systems across
the domains.
| 2,019 | Computation and Language |
Evaluating Sparse Interpretable Word Embeddings for Biomedical Domain | Word embeddings have found their way into a wide range of natural language
processing tasks including those in the biomedical domain. While these vector
representations successfully capture semantic and syntactic word relations,
hidden patterns and trends in the data, they fail to offer interpretability.
Interpretability is a key means to justification, which is an integral requirement when
it comes to biomedical applications. We present an inclusive study on
interpretability of word embeddings in the medical domain, focusing on the role
of sparse methods. Qualitative and quantitative measurements and metrics for
interpretability of word vector representations are provided. For the
quantitative evaluation, we introduce an extensive categorized dataset that can
be used to quantify interpretability based on category theory. Intrinsic and
extrinsic evaluations of the studied methods are also presented. As for the
latter, we propose datasets which can be utilized for effective extrinsic
evaluation of word vectors in the biomedical domain. Based on our experiments,
it is seen that sparse word vectors show far more interpretability while
preserving the performance of their original vectors in downstream tasks.
| 2,020 | Computation and Language |
A Self-Training Method for Machine Reading Comprehension with Soft
Evidence Extraction | Neural models have achieved great success on machine reading comprehension
(MRC), many of which typically consist of two components: an evidence extractor
and an answer predictor. The former seeks the most relevant information from a
reference text, while the latter is to locate or generate answers from the
extracted evidence. Despite the importance of evidence labels for training the
evidence extractor, they are not cheaply accessible, particularly in many
non-extractive MRC tasks such as YES/NO question answering and multi-choice
MRC.
To address this problem, we present a Self-Training method (STM), which
supervises the evidence extractor with auto-generated evidence labels in an
iterative process. At each iteration, a base MRC model is trained with golden
answers and noisy evidence labels. The trained model will predict pseudo
evidence labels as extra supervision in the next iteration. We evaluate STM on
seven datasets over three MRC tasks. Experimental results demonstrate the
improvement over existing MRC models, and we also analyze how and why such a
self-training method works in MRC. The source code can be obtained from
https://github.com/SparkJiao/Self-Training-MRC
| 2,020 | Computation and Language |
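The iterative pseudo-labelling loop described above can be written schematically as follows; the `MRCModel` interface, the confidence threshold, and the fallback behaviour are hypothetical placeholders rather than the released implementation:

```python
# Schematic sketch of the self-training loop: at each iteration the MRC model is
# trained with gold answers plus (noisy) evidence labels, and its own confident
# evidence predictions become the labels for the next round.

class MRCModel:
    def fit(self, examples, evidence_labels):
        pass                                   # train on answers + evidence (omitted)
    def predict_evidence(self, example):
        return [], 0.0                         # (evidence sentence indices, confidence)

def self_training(examples, initial_labels, iterations=3, threshold=0.9):
    labels = dict(initial_labels)              # example id -> evidence sentence ids
    model = None
    for _ in range(iterations):
        model = MRCModel()
        model.fit(examples, labels)
        # Re-label: keep only confident pseudo evidence for the next round.
        new_labels = {}
        for ex in examples:
            evidence, conf = model.predict_evidence(ex)
            if conf >= threshold:
                new_labels[ex["id"]] = evidence
        labels = new_labels or labels          # fall back if nothing is confident
    return model

examples = [{"id": "q1", "question": "placeholder", "passage": "placeholder"}]
model = self_training(examples, initial_labels={"q1": [0]}, iterations=2)
```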
Toward Better Storylines with Sentence-Level Language Models | We propose a sentence-level language model which selects the next sentence in
a story from a finite set of fluent alternatives. Since it does not need to
model fluency, the sentence-level language model can focus on longer range
dependencies, which are crucial for multi-sentence coherence. Rather than
dealing with individual words, our method treats the story so far as a list of
pre-trained sentence embeddings and predicts an embedding for the next
sentence, which is more efficient than predicting word embeddings. Notably this
allows us to consider a large number of candidates for the next sentence during
training. We demonstrate the effectiveness of our approach with
state-of-the-art accuracy on the unsupervised Story Cloze task and with
promising results on larger-scale next sentence prediction tasks.
| 2,020 | Computation and Language |
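A small sketch of the candidate-ranking step implied by the abstract above: predict an embedding for the next sentence from the embeddings of the story so far, then rank candidate sentences by similarity to it. The random embeddings and the mean-pooling "predictor" are toy assumptions; the actual model is trained for this mapping:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_next_embedding(context_embeddings):
    # Placeholder predictor: a real model would be trained to map the context
    # to the embedding of the true next sentence.
    return np.mean(context_embeddings, axis=0)

rng = np.random.default_rng(0)
context = rng.normal(size=(4, 64))            # embeddings of the story so far (toy)
candidates = rng.normal(size=(100, 64))       # embeddings of candidate sentences (toy)

target = predict_next_embedding(context)
scores = [cosine(target, c) for c in candidates]
best = int(np.argmax(scores))
print("selected candidate:", best)
```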
Reinforced Rewards Framework for Text Style Transfer | Style transfer deals with the algorithms to transfer the stylistic properties
of a piece of text into that of another while ensuring that the core content is
preserved. There has been a lot of interest in the field of text style transfer
due to its wide application to tailored text generation. Existing works
evaluate the style transfer models based on content preservation and transfer
strength. In this work, we propose a reinforcement learning based framework
that directly rewards the framework on these target metrics yielding a better
transfer of the target style. We show the improved performance of our proposed
framework based on automatic and human evaluation on three independent tasks,
wherein we transfer the style of text from formal to informal, from high excitement
to low excitement, and from modern English to Shakespearean English, and vice versa in
all three cases. Improved performance of the proposed framework over
existing state-of-the-art frameworks indicates the viability of the approach.
| 2,020 | Computation and Language |
A Dataset for Statutory Reasoning in Tax Law Entailment and Question
Answering | Legislation can be viewed as a body of prescriptive rules expressed in
natural language. The application of legislation to facts of a case we refer to
as statutory reasoning, where those facts are also expressed in natural
language. Computational statutory reasoning is distinct from most existing work
in machine reading, in that much of the information needed for deciding a case
is declared exactly once (a law), while the information needed in much of
machine reading tends to be learned through distributional language statistics.
To investigate the performance of natural language understanding approaches on
statutory reasoning, we introduce a dataset, together with a legal-domain text
corpus. Straightforward application of machine reading models exhibits low
out-of-the-box performance on our questions, whether or not they have been
fine-tuned to the legal domain. We contrast this with a hand-constructed
Prolog-based system, designed to fully solve the task. These experiments
support a discussion of the challenges facing statutory reasoning moving
forward, which we argue is an interesting real-world task that can motivate the
development of models able to utilize prescriptive rules specified in natural
language.
| 2,020 | Computation and Language |
Multidirectional Associative Optimization of Function-Specific Word
Representations | We present a neural framework for learning associations between interrelated
groups of words such as the ones found in Subject-Verb-Object (SVO) structures.
Our model induces a joint function-specific word vector space, where vectors of
e.g. plausible SVO compositions lie close together. The model retains
information about word group membership even in the joint space, and can
thereby effectively be applied to a number of tasks reasoning over the SVO
structure. We show the robustness and versatility of the proposed framework by
reporting state-of-the-art results on the tasks of estimating selectional
preference and event similarity. The results indicate that the combinations of
representations learned with our task-independent model outperform
task-specific architectures from prior work, while reducing the number of
parameters by up to 95%.
| 2,020 | Computation and Language |
SOLOIST: Building Task Bots at Scale with Transfer Learning and Machine
Teaching | We present a new method SOLOIST that uses transfer learning and machine
teaching to build task bots at scale. We parameterize classical modular
task-oriented dialog systems using a Transformer-based auto-regressive language
model, which subsumes different dialog modules into a single neural model. We
pre-train, on heterogeneous dialog corpora, a task-grounded response generation
model, which can generate dialog responses grounded in user goals and
real-world knowledge for task completion. The pre-trained model can be
efficiently adapted to accomplish new tasks with a handful of task-specific
dialogs via machine teaching, where training samples are generated by human
teachers interacting with the system. Experiments show that (i) SOLOIST creates
new state-of-the-art on well-studied task-oriented dialog benchmarks, including
CamRest676 and MultiWOZ; (ii) in the few-shot fine-tuning settings, SOLOIST
significantly outperforms existing methods, and (iii) the use of machine
teaching substantially reduces the labeling cost of fine-tuning. The
pre-trained models and codes are available at https://aka.ms/soloist.
| 2,021 | Computation and Language |
Enabling Language Models to Fill in the Blanks | We present a simple approach for text infilling, the task of predicting
missing spans of text at any position in a document. While infilling could
enable rich functionality especially for writing assistance tools, more
attention has been devoted to language modeling---a special case of infilling
where text is predicted at the end of a document. In this paper, we aim to
extend the capabilities of language models (LMs) to the more general task of
infilling. To this end, we train (or fine-tune) off-the-shelf LMs on sequences
containing the concatenation of artificially-masked text and the text which was
masked. We show that this approach, which we call infilling by language
modeling, can enable LMs to infill entire sentences effectively on three
different domains: short stories, scientific abstracts, and lyrics.
Furthermore, we show that humans have difficulty identifying sentences infilled
by our approach as machine-generated in the domain of short stories.
| 2,020 | Computation and Language |
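The data construction behind "infilling by language modeling" can be sketched in a few lines: mask spans, then concatenate the masked text with the original spans so that a standard left-to-right LM can be trained on the result. The special token strings used below ([blank], [sep], [answer]) are illustrative assumptions rather than the paper's exact vocabulary:

```python
import random

# Build an infilling training example: replace spans with [blank] tokens and
# append the original spans after a separator, so an ordinary LM can learn to
# generate the missing text. This is a toy sketch; spans may occasionally
# overlap an earlier blank.
def make_infilling_example(tokens, n_blanks=1, rng=random):
    tokens = list(tokens)
    answers = []
    for _ in range(n_blanks):
        start = rng.randrange(len(tokens))
        length = rng.randint(1, min(3, len(tokens) - start))
        answers.append(tokens[start:start + length])
        tokens[start:start + length] = ["[blank]"]
    target = " [answer] ".join(" ".join(a) for a in answers)
    return " ".join(tokens) + " [sep] " + target + " [answer]"

random.seed(1)
doc = "she ate leftover pasta for lunch and went back to work".split()
print(make_infilling_example(doc, n_blanks=2))
```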
MART: Memory-Augmented Recurrent Transformer for Coherent Video
Paragraph Captioning | Generating multi-sentence descriptions for videos is one of the most
challenging captioning tasks due to its high requirements for not only visual
relevance but also discourse-based coherence across the sentences in the
paragraph. Towards this goal, we propose a new approach called Memory-Augmented
Recurrent Transformer (MART), which uses a memory module to augment the
transformer architecture. The memory module generates a highly summarized
memory state from the video segments and the sentence history so as to help
better prediction of the next sentence (w.r.t. coreference and repetition
aspects), thus encouraging coherent paragraph generation. Extensive
experiments, human evaluations, and qualitative analyses on two popular
datasets ActivityNet Captions and YouCookII show that MART generates more
coherent and less repetitive paragraph captions than baseline methods, while
maintaining relevance to the input video events. All code is available
open-source at: https://github.com/jayleicn/recurrent-transformer
| 2,020 | Computation and Language |
Segmenting Scientific Abstracts into Discourse Categories: A Deep
Learning-Based Approach for Sparse Labeled Data | The abstract of a scientific paper distills the contents of the paper into a
short paragraph. In the biomedical literature, it is customary to structure an
abstract into discourse categories like BACKGROUND, OBJECTIVE, METHOD, RESULT,
and CONCLUSION, but this segmentation is uncommon in other fields like computer
science. Explicit categories could be helpful for more granular, that is,
discourse-level search and recommendation. The sparsity of labeled data makes
it challenging to construct supervised machine learning solutions for automatic
discourse-level segmentation of abstracts in non-bio domains. In this paper, we
address this problem using transfer learning. In particular, we define three
discourse categories -- BACKGROUND, TECHNIQUE, and OBSERVATION -- for an abstract because
these three categories are the most common. We train a deep neural network on
structured abstracts from PubMed, then fine-tune it on a small hand-labeled
corpus of computer science papers. We observe an accuracy of 75% on the test
corpus. We perform an ablation study to highlight the roles of the different
parts of the model. Our method appears to be a promising solution to the
automatic segmentation of abstracts, where the labeled data is sparse.
| 2,020 | Computation and Language |
On the Generation of Medical Dialogues for COVID-19 | Under the pandemic of COVID-19, people experiencing COVID19-related symptoms
or exposed to risk factors have a pressing need to consult doctors. Due to
hospital closures, a lot of consulting services have been moved online. Because
of the shortage of medical professionals, many people cannot receive online
consultations timely. To address this problem, we aim to develop a medical
dialogue system that can provide COVID19-related consultations. We collected
two dialogue datasets -- CovidDialog -- (in English and Chinese respectively)
containing conversations between doctors and patients about COVID-19. On these
two datasets, we train several dialogue generation models based on Transformer,
GPT, and BERT-GPT. Since the two COVID-19 dialogue datasets are small in size
and bear a high risk of overfitting, we leverage transfer learning to mitigate
data deficiency. Specifically, we take the pretrained models of Transformer,
GPT, and BERT-GPT on dialog datasets and other large-scale texts, then finetune
them on our CovidDialog tasks. We perform both automatic and human evaluation
of responses generated by these models. The results show that the generated
responses are promising in being doctor-like, relevant to the conversation
history, and clinically informative. The data and code are available at
https://github.com/UCSD-AI4H/COVID-Dialogue.
| 2,020 | Computation and Language |
Luganda Text-to-Speech Machine | In Uganda, Luganda is the most spoken native language. It is used for
communication in informal as well as formal business transactions. The
development of technology startups globally related to TTS has mainly been with
languages like English, French, etc. These are added in TTS engines by Google,
Microsoft among others, allowing developers in these regions to innovate TTS
products. Luganda is not supported because the language is not built and
trained on these engines. In this study, we analyzed the Luganda language
structure and constructions and then proposed and developed a Luganda TTS. The
system was built and trained using locally sourced Luganda language text and
audio. The engine is now able to take text input and read it aloud. We tested the
accuracy using MRT and MOS. Both test results are quite good, with MRT
scoring better; the overall score was 71%. This study will
enhance previous solutions to NLP gaps in Uganda, as well as provide raw data
such that other research in this area can take place.
| 2,020 | Computation and Language |
Neural Polysynthetic Language Modelling | Research in natural language processing commonly assumes that approaches that
work well for English and other widely-used languages are "language
agnostic". In high-resource languages, especially those that are analytic, a
common approach is to treat morphologically-distinct variants of a common root
as completely independent word types. This assumes that there are limited
morphological inflections per root, and that the majority will appear in a
large enough corpus, so that the model can adequately learn statistics about
each form. Approaches like stemming, lemmatization, or subword segmentation are
often used when either of those assumptions does not hold, particularly in the
case of synthetic languages like Spanish or Russian that have more inflection
than English.
In the literature, languages like Finnish or Turkish are held up as extreme
examples of complexity that challenge common modelling assumptions. Yet, when
considering all of the world's languages, Finnish and Turkish are closer to the
average case. When we consider polysynthetic languages (those at the extreme of
morphological complexity), approaches like stemming, lemmatization, or subword
modelling may not suffice. These languages have very high numbers of hapax
legomena, showing the need for appropriate morphological handling of words,
without which it is not possible for a model to capture enough word statistics.
We examine the current state-of-the-art in language modelling, machine
translation, and text prediction for four polysynthetic languages: Guaran\'i,
St. Lawrence Island Yupik, Central Alaskan Yupik, and Inuktitut. We then
propose a novel framework for language modelling that combines knowledge
representations from finite-state morphological analyzers with Tensor Product
Representations in order to enable neural language models capable of handling
the full range of typologically variant languages.
| 2,020 | Computation and Language |
Schema-Guided Natural Language Generation | Neural network based approaches to data-to-text natural language generation
(NLG) have gained popularity in recent years, with the goal of generating a
natural language prompt that accurately realizes an input meaning
representation. To facilitate the training of neural network models,
researchers created large datasets of paired utterances and their meaning
representations. However, the creation of such datasets is an arduous task and
they mostly consist of simple meaning representations composed of slot and
value tokens to be realized. These representations do not include any
contextual information that an NLG system can use when trying to generalize,
such as domain information and descriptions of slots and values. In this paper,
we present the novel task of Schema-Guided Natural Language Generation
(SG-NLG). Here, the goal is still to generate a natural language prompt, but in
SG-NLG, the input MRs are paired with rich schemata providing contextual
information. To generate a dataset for SG-NLG we re-purpose an existing dataset
for another task: dialog state tracking, which includes a large and rich schema
spanning multiple different attributes, including information about the domain,
user intent, and slot descriptions. We train different state-of-the-art models
for neural natural language generation on this dataset and show that in many
cases, including rich schema information allows our models to produce higher
quality outputs both in terms of semantics and diversity. We also conduct
experiments comparing model performance on seen versus unseen domains, and
present a human evaluation demonstrating high ratings for overall output
quality.
| 2,020 | Computation and Language |
Exploring TTS without T Using Biologically/Psychologically Motivated
Neural Network Modules (ZeroSpeech 2020) | In this study, we report our exploration of Text-To-Speech without Text
(TTS without T) in the Zero Resource Speech Challenge 2020, in which
participants proposed an end-to-end, unsupervised system that learned speech
recognition and TTS together. We addressed the challenge using
biologically/psychologically motivated modules of Artificial Neural Networks
(ANN), with a particular interest in unsupervised learning of human language as
a biological/psychological problem. The system first processes Mel Frequency
Cepstral Coefficient (MFCC) frames with an Echo-State Network (ESN), and
simulates computations in cortical microcircuits. The outcome is discretized by
our original Variational Autoencoder (VAE) that implements the Dirichlet-based
Bayesian clustering widely accepted in computational linguistics and cognitive
science. The discretized signal is then reverted into sound waveform via a
neural-network implementation of the source-filter model for speech production.
| 2,020 | Computation and Language |
A Framework for Hierarchical Multilingual Machine Translation | Multilingual machine translation has recently been in vogue given its
potential for improving machine translation performance for low-resource
languages via transfer learning. Empirical examinations demonstrating the
success of existing multilingual machine translation strategies, however, are
limited to experiments in specific language groups. In this paper, we present a
hierarchical framework for building multilingual machine translation strategies
that takes advantage of a typological language family tree for enabling
transfer among similar languages while avoiding the negative effects that
result from incorporating languages that are too different from each other.
Exhaustive experimentation on a dataset with 41 languages demonstrates the
validity of the proposed framework, especially when it comes to improving the
performance of low-resource languages via the use of typologically related
families for which richer sets of resources are available.
| 2,020 | Computation and Language |
Psychometric Analysis and Coupling of Emotions Between State Bulletins
and Twitter in India during COVID-19 Infodemic | COVID-19 infodemic has been spreading faster than the pandemic itself. The
misinformation riding upon the infodemic wave poses a major threat to people's
health and governance systems. Since social media is the largest source of
information, managing the infodemic requires not only the mitigation of
misinformation but also an early understanding of psychological patterns
resulting from it. During the COVID-19 crisis, Twitter alone has seen a sharp
45% increase in the usage of its curated events page, and a 30% increase in its
direct messaging usage, since March 6th 2020. In this study, we analyze the
psychometric impact and coupling of the COVID-19 infodemic with the official
bulletins related to COVID-19 at the national and state level in India. We look
at these two sources with a psycho-linguistic lens of emotions and quantify
the extent and coupling between the two. We modified path, a deep skip-gram
based open-sourced lexicon builder for effective capture of health-related
emotions. We were then able to capture the time-evolution of health-related
emotions in social media and official bulletins. An analysis of lead-lag
relationships between the time series of extracted emotions from official
bulletins and social media using Granger's causality showed that state
bulletins were leading the social media for some emotions such as Medical
Emergency. Further insights that are potentially relevant for the policymaker
and the communicators actively engaged in mitigating misinformation are also
discussed. Our paper also introduces CoronaIndiaDataset2, the first social
media based COVID-19 dataset at national and state levels from India with over
5.6 million national and 2.6 million state-level tweets. Finally, we present
our findings as COVibes, an interactive web application presenting psychometric
insights derived from the CoronaIndiaDataset, at both the national and state
levels.
| 2,020 | Computation and Language |
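The lead-lag analysis mentioned above can be reproduced in outline with statsmodels' Granger causality test on two emotion time series; the synthetic data below (a bulletin series that leads the social-media series by two steps) is purely illustrative:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Sketch of the lead-lag analysis: test whether an emotion time series from
# official bulletins Granger-causes the corresponding social-media series.
rng = np.random.default_rng(0)
bulletins = rng.normal(size=120)
social = np.roll(bulletins, 2) + 0.3 * rng.normal(size=120)   # lagged copy + noise

# Column order matters: the test asks whether the second column helps predict
# the first, i.e. whether "bulletins" Granger-causes "social".
data = np.column_stack([social, bulletins])
results = grangercausalitytests(data, maxlag=4, verbose=False)
for lag, (tests, _) in results.items():
    print(lag, round(tests["ssr_ftest"][1], 4))   # p-value of the F-test per lag
```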
DiscreTalk: Text-to-Speech as a Machine Translation Problem | This paper proposes a new end-to-end text-to-speech (E2E-TTS) model based on
neural machine translation (NMT). The proposed model consists of two
components: a non-autoregressive vector quantized variational autoencoder
(VQ-VAE) model and an autoregressive Transformer-NMT model. The VQ-VAE model
learns a mapping function from a speech waveform into a sequence of discrete
symbols, and then the Transformer-NMT model is trained to estimate this
discrete symbol sequence from a given input text. Since the VQ-VAE model can
learn such a mapping in a fully-data-driven manner, we do not need to consider
hyperparameters of the feature extraction required in the conventional E2E-TTS
models. Thanks to the use of discrete symbols, we can use various techniques
developed in NMT and automatic speech recognition (ASR) such as beam search,
subword units, and fusions with a language model. Furthermore, we can avoid an
over-smoothing problem of predicted features, which is one of the common issues
in TTS. The experimental evaluation with the JSUT corpus shows that the
proposed method outperforms the conventional Transformer-TTS model with a
non-autoregressive neural vocoder in naturalness, achieving the performance
comparable to the reconstruction of the VQ-VAE model.
| 2,020 | Computation and Language |
Simultaneous paraphrasing and translation by fine-tuning Transformer
models | This paper describes the third place submission to the shared task on
simultaneous translation and paraphrasing for language education at the 4th
workshop on Neural Generation and Translation (WNGT) for ACL 2020. The final
system leverages pre-trained translation models and uses a Transformer
architecture combined with an oversampling strategy to achieve a competitive
performance. This system significantly outperforms the baseline for Hungarian
(27% absolute improvement in Weighted Macro F1 score) and Portuguese (33%
absolute improvement).
| 2,020 | Computation and Language |
Neighborhood Matching Network for Entity Alignment | Structural heterogeneity between knowledge graphs is an outstanding challenge
for entity alignment. This paper presents Neighborhood Matching Network (NMN),
a novel entity alignment framework for tackling the structural heterogeneity
challenge. NMN estimates the similarities between entities to capture both the
topological structure and the neighborhood difference. It provides two
innovative components for better learning representations for entity alignment.
It first uses a novel graph sampling method to distill a discriminative
neighborhood for each entity. It then adopts a cross-graph neighborhood
matching module to jointly encode the neighborhood difference for a given
entity pair. Such strategies allow NMN to effectively construct
matching-oriented entity representations while ignoring noisy neighbors that
have a negative impact on the alignment task. Extensive experiments performed
on three entity alignment datasets show that NMN can estimate the
neighborhood similarity well even in harder cases and significantly outperforms 12
previous state-of-the-art methods.
| 2,020 | Computation and Language |
SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis | Recently, sentiment analysis has seen remarkable advance with the help of
pre-training approaches. However, sentiment knowledge, such as sentiment words
and aspect-sentiment pairs, is ignored in the process of pre-training, despite
the fact that they are widely used in traditional sentiment analysis
approaches. In this paper, we introduce Sentiment Knowledge Enhanced
Pre-training (SKEP) in order to learn a unified sentiment representation for
multiple sentiment analysis tasks. With the help of automatically-mined
knowledge, SKEP conducts sentiment masking and constructs three sentiment
knowledge prediction objectives, so as to embed sentiment information at the
word, polarity and aspect level into pre-trained sentiment representation. In
particular, the prediction of aspect-sentiment pairs is converted into
multi-label classification, aiming to capture the dependency between words in a
pair. Experiments on three kinds of sentiment tasks show that SKEP
significantly outperforms a strong pre-training baseline, and achieves new
state-of-the-art results on most of the test datasets. We release our code at
https://github.com/baidu/Senta.
| 2,020 | Computation and Language |
A Frobenius Algebraic Analysis for Parasitic Gaps | The interpretation of parasitic gaps is an ostensible case of non-linearity
in natural language composition. Existing categorial analyses, both in the
typelogical and in the combinatory traditions, rely on explicit forms of
syntactic copying. We identify two types of parasitic gapping where the
duplication of semantic content can be confined to the lexicon. Parasitic gaps
in adjuncts are analysed as forms of generalized coordination with a
polymorphic type schema for the head of the adjunct phrase. For parasitic gaps
affecting arguments of the same predicate, the polymorphism is associated with
the lexical item that introduces the primary gap. Our analysis is formulated in
terms of Lambek calculus extended with structural control modalities. A
compositional translation relates syntactic types and derivations to the
interpreting compact closed category of finite dimensional vector spaces and
linear maps with Frobenius algebras over it. When interpreted over the
necessary semantic spaces, the Frobenius algebras provide the tools to model
the proposed instances of lexical polymorphism.
| 2,020 | Computation and Language |
Learning and Evaluating Emotion Lexicons for 91 Languages | Emotion lexicons describe the affective meaning of words and thus constitute
a centerpiece for advanced sentiment and emotion analysis. Yet, manually
curated lexicons are only available for a handful of languages, leaving most
languages of the world without such a precious resource for downstream
applications. Even worse, their coverage is often limited both in terms of the
lexical units they contain and the emotional variables they feature. In order
to break this bottleneck, we here introduce a methodology for creating almost
arbitrarily large emotion lexicons for any target language. Our approach
requires nothing but a source language emotion lexicon, a bilingual word
translation model, and a target language embedding model. Fulfilling these
requirements for 91 languages, we are able to generate representationally rich
high-coverage lexicons comprising eight emotional variables with more than 100k
lexical entries each. We evaluated the automatically generated lexicons against
human judgment from 26 datasets, spanning 12 typologically diverse languages,
and found that our approach produces results in line with state-of-the-art
monolingual approaches to lexicon creation and even surpasses human reliability
for some languages and variables. Code and data are available at
https://github.com/JULIELab/MEmoLon archived under DOI
https://doi.org/10.5281/zenodo.3779901.
| 2,020 | Computation and Language |
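The generation pipeline described above (source lexicon + bilingual translation + target-language embeddings) can be sketched as follows; the toy dictionary, random embeddings, and the choice of ridge regression as the transfer model are assumptions made for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Sketch of lexicon expansion: translate source-lexicon entries into the target
# language, then fit a regression from target-language word embeddings to the
# emotion values so that every embedded word can be scored.
source_lexicon = {"joy": 0.9, "grief": -0.8, "calm": 0.3}       # word -> valence
bilingual_dict = {"joy": "alegria", "grief": "dor", "calm": "calma"}

rng = np.random.default_rng(0)
target_vocab = ["alegria", "dor", "calma", "festa", "tristeza"]
target_emb = {w: rng.normal(size=50) for w in target_vocab}      # toy embeddings

# Seed entries: translated words paired with the source emotion values.
seed_words = [bilingual_dict[w] for w in source_lexicon]
X = np.stack([target_emb[w] for w in seed_words])
y = np.array(list(source_lexicon.values()))

reg = Ridge(alpha=1.0).fit(X, y)

# Score the full target vocabulary, including words never seen in the seed set.
expanded = {w: float(reg.predict(target_emb[w][None, :])[0]) for w in target_vocab}
print(expanded)
```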
On the Robustness of Language Encoders against Grammatical Errors | We conduct a thorough study to diagnose the behaviors of pre-trained language
encoders (ELMo, BERT, and RoBERTa) when confronted with natural grammatical
errors. Specifically, we collect real grammatical errors from non-native
speakers and conduct adversarial attacks to simulate these errors on clean text
data. We use this approach to facilitate debugging models on downstream
applications. Results confirm that the performance of all tested models is
affected but the degree of impact varies. To interpret model behaviors, we
further design a linguistic acceptability task to reveal their abilities in
identifying ungrammatical sentences and the position of errors. We find that
fixed contextual encoders with a simple classifier trained on the prediction of
sentence correctness are able to locate error positions. We also design a cloze
test for BERT and discover that BERT captures the interaction between errors
and specific tokens in context. Our results shed light on understanding the
robustness and behaviors of language encoders against grammatical errors.
| 2,020 | Computation and Language |
Detecting Multiword Expression Type Helps Lexical Complexity Assessment | Multiword expressions (MWEs) represent lexemes that should be treated as
single lexical units due to their idiosyncratic nature. Multiple NLP
applications have been shown to benefit from MWE identification; however, the
research on lexical complexity of MWEs is still an under-explored area. In this
work, we re-annotate the Complex Word Identification Shared Task 2018 dataset
of Yimam et al. (2017), which provides complexity scores for a range of
lexemes, with the types of MWEs. We release the MWE-annotated dataset with this
paper, and we believe this dataset represents a valuable resource for the text
simplification community. In addition, we investigate which types of
expressions are most problematic for native and non-native readers. Finally, we
show that a lexical complexity assessment system benefits from the information
about MWE types.
| 2,020 | Computation and Language |
Dynamic Memory Induction Networks for Few-Shot Text Classification | This paper proposes Dynamic Memory Induction Networks (DMIN) for few-shot
text classification. The model utilizes dynamic routing to provide more
flexibility to memory-based few-shot learning in order to better adapt the
support sets, which is a critical capacity of few-shot classification models.
Based on that, we further develop induction models with query information,
aiming to enhance the generalization ability of meta-learning. The proposed
model achieves new state-of-the-art results on the miniRCV1 and ODIC datasets,
improving the best performance (accuracy) by 2-4%. Detailed analysis is further
performed to show the effectiveness of each component.
| 2,020 | Computation and Language |
Reassessing Claims of Human Parity and Super-Human Performance in
Machine Translation at WMT 2019 | We reassess the claims of human parity and super-human performance made at
the news shared task of WMT 2019 for three translation directions:
English-to-German, English-to-Russian and German-to-English. First we identify
three potential issues in the human evaluation of that shared task: (i) the
limited amount of intersentential context available, (ii) the limited
translation proficiency of the evaluators and (iii) the use of a reference
translation. We then conduct a modified evaluation taking these issues into
account. Our results indicate that all the claims of human parity and
super-human performance made at WMT 2019 should be refuted, except the claim of
human parity for English-to-German. Based on our findings, we put forward a set
of recommendations and open questions for future assessments of human parity in
machine translation.
| 2,020 | Computation and Language |
Document Modeling with Graph Attention Networks for Multi-grained
Machine Reading Comprehension | Natural Questions is a new challenging machine reading comprehension
benchmark with two-grained answers, which are a long answer (typically a
paragraph) and a short answer (one or more entities inside the long answer).
Despite the effectiveness of existing methods on this benchmark, they treat
these two sub-tasks individually during training while ignoring their
dependencies. To address this issue, we present a novel multi-grained machine
reading comprehension framework that focuses on modeling the hierarchical
nature of documents at different levels of granularity: documents,
paragraphs, sentences, and tokens. We utilize graph attention networks to
obtain different levels of representations so that they can be learned
simultaneously. The long and short answers can be extracted from
paragraph-level representation and token-level representation, respectively. In
this way, we can model the dependencies between the two-grained answers to
provide evidence for each other. We jointly train the two sub-tasks, and our
experiments show that our approach significantly outperforms previous systems
at both long and short answer criteria.
| 2,020 | Computation and Language |
A Report on the 2020 Sarcasm Detection Shared Task | Detecting sarcasm and verbal irony is critical for understanding people's
actual sentiments and beliefs. Thus, the field of sarcasm analysis has become a
popular research problem in natural language processing. As the community
working on computational approaches for sarcasm detection is growing, it is
imperative to conduct benchmarking studies to analyze the current
state-of-the-art, facilitating progress in this area. We report on the shared
task on sarcasm detection we conducted as a part of the 2nd Workshop on
Figurative Language Processing (FigLang 2020) at ACL 2020.
| 2,020 | Computation and Language |