Titles | Abstracts | Years | Categories
---|---|---|---|
Sound Natural: Content Rephrasing in Dialog Systems
|
We introduce a new task of rephrasing for a more natural virtual assistant.
Currently, virtual assistants work in the paradigm of intent slot tagging and
the slot values are directly passed as-is to the execution engine. However,
this setup fails in some scenarios such as messaging when the query given by
the user needs to be changed before repeating it or sending it to another user.
For example, for queries like 'ask my wife if she can pick up the kids' or
'remind me to take my pills', we need to rephrase the content to 'can you pick
up the kids' and 'take your pills'. In this paper, we study the problem of
rephrasing with messaging as a use case and release a dataset of 3000 pairs of
original query and rephrased query. We show that BART, a pre-trained
transformers-based masked language model with auto-regressive decoding, is a
strong baseline for the task, and show improvements by adding a copy-pointer
and copy loss to it. We analyze different tradeoffs of BART-based and
LSTM-based seq2seq models, and propose a distilled LSTM-based seq2seq as the
best practical model.
| 2,020 |
Computation and Language
|
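As a rough illustration of the copy-pointer idea mentioned in the entry above, the sketch below mixes a decoder's generation distribution with an attention-based copy distribution over source tokens. The function name, tensor shapes, and the final comment are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def copy_pointer_distribution(gen_logits, attn_weights, src_ids, p_gen):
    """Mix the generation distribution with a copy distribution over source tokens.

    gen_logits:   (batch, vocab)   decoder output scores
    attn_weights: (batch, src_len) attention over source tokens
    src_ids:      (batch, src_len) source token ids in the target vocabulary
    p_gen:        (batch, 1)       probability of generating rather than copying
    """
    gen_dist = p_gen * F.softmax(gen_logits, dim=-1)
    copy_dist = torch.zeros_like(gen_dist)
    # Scatter the attention mass of each source position onto its vocabulary id.
    copy_dist.scatter_add_(1, src_ids, (1.0 - p_gen) * attn_weights)
    # A copy loss could additionally reward probability mass placed on source ids.
    return gen_dist + copy_dist
```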
SimulMT to SimulST: Adapting Simultaneous Text Translation to End-to-End
Simultaneous Speech Translation
|
Simultaneous text translation and end-to-end speech translation have recently
made great progress, but little work has combined these tasks. We
investigate how to adapt simultaneous text translation methods such as wait-k
and monotonic multihead attention to end-to-end simultaneous speech translation
by introducing a pre-decision module. A detailed analysis is provided on the
latency-quality trade-offs of combining fixed and flexible pre-decision with
fixed and flexible policies. We also design a novel computation-aware latency
metric, adapted from Average Lagging.
| 2,020 |
Computation and Language
|
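For readers unfamiliar with wait-k, here is a minimal, purely illustrative sketch of the policy referenced in the entry above; `predict_next_token` and the streaming interface are hypothetical placeholders, and the speech pre-decision module is not shown.

```python
def wait_k_translate(source_stream, predict_next_token, k, eos="</s>", max_len=200):
    """Wait-k policy: READ k source units first, then alternate WRITE/READ;
    once the source is exhausted, keep writing until end-of-sequence."""
    source, target = [], []
    for unit in source_stream:
        source.append(unit)                                      # READ
        if len(source) >= k and (not target or target[-1] != eos):
            target.append(predict_next_token(source, target))    # WRITE
    while (not target or target[-1] != eos) and len(target) < max_len:
        target.append(predict_next_token(source, target))        # finish freely
    return target
```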
Generating Synthetic Data for Task-Oriented Semantic Parsing with
Hierarchical Representations
|
Modern conversational AI systems support natural language understanding for a
wide variety of capabilities. While a majority of these tasks can be
accomplished using a simple and flat representation of intents and slots, more
sophisticated capabilities require complex hierarchical representations
supported by semantic parsing. State-of-the-art semantic parsers are trained
using supervised learning with data labeled according to a hierarchical schema
which might be costly to obtain or not readily available for a new domain. In
this work, we explore the possibility of generating synthetic data for neural
semantic parsing using a pretrained denoising sequence-to-sequence model (i.e.,
BART). Specifically, we first extract masked templates from the existing
labeled utterances, and then fine-tune BART to generate synthetic utterances
conditioning on the extracted templates. Finally, we use an auxiliary parser
(AP) to filter the generated utterances. The AP guarantees the quality of the
generated data. We show the potential of our approach when evaluating on the
Facebook TOP dataset for the navigation domain.
| 2,020 |
Computation and Language
|
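A minimal sketch of the template-then-filter pipeline described above, assuming slot spans are given as token indices; the function names, the `aux_parse` callable, and the exact mask token are illustrative assumptions rather than the authors' code.

```python
def extract_masked_template(tokens, slot_spans, mask_token="<mask>"):
    """Replace labeled slot spans with mask tokens so a denoising seq2seq model
    (e.g., BART) can later be fine-tuned to fill them with new slot values."""
    template, cursor = [], 0
    for start, end in sorted(slot_spans):          # spans assumed non-overlapping
        template.extend(tokens[cursor:start])
        template.append(mask_token)
        cursor = end
    template.extend(tokens[cursor:])
    return template

def filter_with_auxiliary_parser(candidates, target_frame, aux_parse):
    """Keep only generated utterances whose parse matches the intended frame."""
    return [utt for utt in candidates if aux_parse(utt) == target_frame]
```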
Treebanking User-Generated Content: a UD Based Overview of Guidelines,
Corpora and Unified Recommendations
|
This article presents a discussion on the main linguistic phenomena which
cause difficulties in the analysis of user-generated texts found on the web and
in social media, and proposes a set of annotation guidelines for their
treatment within the Universal Dependencies (UD) framework of syntactic
analysis. Given on the one hand the increasing number of treebanks featuring
user-generated content, and its somewhat inconsistent treatment in these
resources on the other, the aim of this article is twofold: (1) to provide a
condensed, though comprehensive, overview of such treebanks -- based on
available literature -- along with their main features and a comparative
analysis of their annotation criteria, and (2) to propose a set of tentative
UD-based annotation guidelines, to promote consistent treatment of the
particular phenomena found in these types of texts. The overarching goal of
this article is to provide a common framework for researchers interested in
developing similar resources in UD, thus promoting cross-linguistic
consistency, which is a principle that has always been central to the spirit of
UD.
| 2,020 |
Computation and Language
|
Exhaustive Entity Recognition for Coptic: Challenges and Solutions
|
Entity recognition provides semantic access to ancient materials in the
Digital Humanities: it exposes people and places of interest in texts that
cannot be read exhaustively, facilitates linking resources and can provide a
window into text contents, even for texts with no translations. In this paper we
present entity recognition for Coptic, the language of Hellenistic era Egypt.
We evaluate NLP approaches to the task and lay out difficulties in applying them
to a low-resource, morphologically complex language. We present solutions for
named and non-named nested entity recognition and semi-automatic entity
linking to Wikipedia, relying on robust dependency parsing, feature-based CRF
models, and hand-crafted knowledge base resources, enabling high-accuracy NER
with orders of magnitude less data than those used for high resource
languages. The results suggest avenues for research on other languages in
similar settings.
| 2,020 |
Computation and Language
|
Probing Multilingual BERT for Genetic and Typological Signals
|
We probe the layers in multilingual BERT (mBERT) for phylogenetic and
geographic language signals across 100 languages and compute language distances
based on the mBERT representations. We 1) employ the language distances to
infer and evaluate language trees, finding that they are close to the reference
family tree in terms of quartet tree distance, 2) perform distance matrix
regression analysis, finding that the language distances can be best explained
by phylogenetic and worst by structural factors and 3) present a novel measure
for measuring diachronic meaning stability (based on cross-lingual
representation variability) which correlates significantly with published
ranked lists based on linguistic approaches. Our results contribute to the
nascent field of typological interpretability of cross-lingual text
representations.
| 2,020 |
Computation and Language
|
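The language-distance computation described above can be approximated as follows; mean-pooled per-language mBERT vectors and cosine distance are assumptions for illustration (the paper's exact representation and distance measure may differ).

```python
import numpy as np

def language_distance_matrix(lang_vectors):
    """lang_vectors: dict mapping language code -> mean-pooled mBERT vector.
    Returns the language list and a pairwise cosine-distance matrix, which can
    then be fed to tree inference or distance-matrix regression."""
    langs = sorted(lang_vectors)
    X = np.stack([lang_vectors[l] for l in langs]).astype(float)
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    return langs, 1.0 - X @ X.T
```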
Chinese Grammatical Correction Using BERT-based Pre-trained Model
|
In recent years, pre-trained models have been extensively studied, and
several downstream tasks have benefited from their utilization. In this study,
we verify the effectiveness of two methods that incorporate a BERT-based
pre-trained model developed by Cui et al. (2020) into an encoder-decoder model
on Chinese grammatical error correction tasks. We also analyze the error types
and conclude that sentence-level errors are yet to be addressed.
| 2,020 |
Computation and Language
|
Augmenting Images for ASR and TTS through Single-loop and Dual-loop
Multimodal Chain Framework
|
Previous research has proposed a machine speech chain to enable automatic
speech recognition (ASR) and text-to-speech synthesis (TTS) to assist each
other in semi-supervised learning and to avoid the need for a large amount of
paired speech and text data. However, that framework still requires a large
amount of unpaired (speech or text) data. A prototype multimodal machine chain
was then explored to further reduce the need for a large amount of unpaired
data, which could improve ASR or TTS even when no more speech or text data were
available. Unfortunately, this framework relied on the image retrieval (IR)
model, and thus it was limited to handling only those images that were already
known during training. Furthermore, the performance of this framework was only
investigated with single-speaker artificial speech data. In this study, we
revamp the multimodal machine chain framework with image generation (IG) and
investigate the possibility of augmenting image data for ASR and TTS using
single-loop and dual-loop architectures on multispeaker natural speech data.
Experimental results revealed that both single-loop and dual-loop multimodal
chain frameworks enabled ASR and TTS to improve their performance using an
image-only dataset.
| 2,020 |
Computation and Language
|
PheMT: A Phenomenon-wise Dataset for Machine Translation Robustness on
User-Generated Contents
|
Neural Machine Translation (NMT) has shown drastic improvement in its quality
when translating clean input, such as text from the news domain. However,
existing studies suggest that NMT still struggles with certain kinds of input
with considerable noise, such as User-Generated Contents (UGC) on the Internet.
To make better use of NMT for cross-cultural communication, one of the most
promising directions is to develop a model that correctly handles these
expressions. Though its importance has been recognized, it is still unclear
what creates the large gap in performance between the translation of
clean input and that of UGC. To answer the question, we present a new dataset,
PheMT, for evaluating the robustness of MT systems against specific linguistic
phenomena in Japanese-English translation. Our experiments with the created
dataset revealed that not only our in-house models but even widely used
off-the-shelf systems are greatly disturbed by the presence of certain
phenomena.
| 2,020 |
Computation and Language
|
Incremental Machine Speech Chain Towards Enabling Listening while
Speaking in Real-time
|
Inspired by a human speech chain mechanism, a machine speech chain framework
based on deep learning was recently proposed for the semi-supervised
development of automatic speech recognition (ASR) and text-to-speech synthesis
(TTS) systems. However, the mechanism to listen while speaking can be done only
after receiving entire input sequences. Thus, there is a significant delay when
encountering long utterances. By contrast, humans can listen to what they speak
in real-time, and if there is a delay in hearing, they will not be able to
continue speaking. In this work, we propose an incremental machine speech chain
towards enabling a machine to listen while speaking in real-time. Specifically,
we construct incremental ASR (ISR) and incremental TTS (ITTS) by letting both
systems improve together through a short-term loop. Our experimental results
reveal that our proposed framework is able to reduce delays due to long
utterances while keeping a comparable performance to the non-incremental basic
machine speech chain.
| 2,020 |
Computation and Language
|
Sequence-to-Sequence Learning via Attention Transfer for Incremental
Speech Recognition
|
Attention-based sequence-to-sequence automatic speech recognition (ASR)
requires a significant delay to recognize long utterances because the output is
generated after receiving entire input sequences. Although several studies have
recently proposed sequence mechanisms for incremental speech recognition (ISR),
these rely on different frameworks and learning algorithms that are more
complicated than the standard ASR model. One main reason is that the model needs to decide the
incremental steps and learn the transcription that aligns with the current
short speech segment. In this work, we investigate whether it is possible to
employ the original architecture of attention-based ASR for ISR tasks by
treating a full-utterance ASR as the teacher model and the ISR as the student
model. We design an alternative student network that, instead of using a
thinner or a shallower model, keeps the original architecture of the teacher
model but with shorter sequences (fewer encoder and decoder states). Using
attention transfer, the student network learns to mimic the same alignment
between the current input short speech segments and the transcription. Our
experiments show that by delaying the start of the recognition process by
about 1.7 seconds, we can achieve performance comparable to one that needs to wait
until the end.
| 2,020 |
Computation and Language
|
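A hedged sketch of the attention-transfer objective described above: the student is trained on the usual transcription loss plus a term that pulls its attention toward the teacher's alignment. The MSE form, the weighting `alpha`, and the tensor shapes are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def isr_student_loss(student_logits, targets, student_attn, teacher_attn,
                     alpha=0.5, pad_id=0):
    """student_logits: (batch, tgt_len, vocab); targets: (batch, tgt_len)
    student_attn / teacher_attn: (batch, tgt_len, src_len) attention matrices
    over the same short input segment."""
    ce = F.cross_entropy(student_logits.transpose(1, 2), targets,
                         ignore_index=pad_id)          # transcription loss
    transfer = F.mse_loss(student_attn, teacher_attn)  # mimic teacher alignment
    return ce + alpha * transfer
```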
Cross-Lingual Machine Speech Chain for Javanese, Sundanese, Balinese,
and Bataks Speech Recognition and Synthesis
|
Even though over seven hundred ethnic languages are spoken in Indonesia, the
technology available to support communication within indigenous communities, as
well as with people outside the villages, remains limited. As a
result, indigenous communities still face isolation due to cultural barriers;
languages continue to disappear. To accelerate communication, speech-to-speech
translation (S2ST) technology is one approach that can overcome language
barriers. However, S2ST systems require machine translation (MT), speech
recognition (ASR), and synthesis (TTS) that rely heavily on supervised training
and a broad set of language resources that can be difficult to collect from
ethnic communities. Recently, a machine speech chain mechanism was proposed to
enable ASR and TTS to assist each other in semi-supervised learning. The
framework was initially implemented only for monolingual languages. In this
study, we focus on developing speech recognition and synthesis for these
Indonesian ethnic languages: Javanese, Sundanese, Balinese, and Bataks. We
first separately train ASR and TTS of standard Indonesian in supervised
training. We then develop ASR and TTS of ethnic languages by utilizing
Indonesian ASR and TTS in a cross-lingual machine speech chain framework with
only text or only speech data, removing the need for paired speech-text data of
those ethnic languages.
| 2,020 |
Computation and Language
|
Conditioned Text Generation with Transfer for Closed-Domain Dialogue
Systems
|
Scarcity of training data for task-oriented dialogue systems is a well known
problem that is usually tackled with costly and time-consuming manual data
annotation. An alternative solution is to rely on automatic text generation
which, although less accurate than human supervision, has the advantage of
being cheap and fast. Our contribution is twofold. First we show how to
optimally train and control the generation of intent-specific sentences using a
conditional variational autoencoder. Then we introduce a new protocol called
query transfer that makes it possible to leverage a large unlabelled dataset, possibly
containing irrelevant queries, to extract relevant information. Comparison with
two different baselines shows that this method, in the appropriate regime,
consistently improves the diversity of the generated queries without
compromising their quality. We also demonstrate the effectiveness of our
generation method as a data augmentation technique for language modelling
tasks.
| 2,020 |
Computation and Language
|
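As background for the conditional variational autoencoder mentioned above, a minimal sketch of its training objective follows; the beta weighting and the per-sentence log-likelihood interface are assumptions for illustration.

```python
import torch

def cvae_objective(recon_log_prob, mu, logvar, beta=1.0):
    """recon_log_prob: (batch,) summed token log-likelihood of each sentence,
    decoded conditioned on the intent label and a latent sample z ~ q(z|x, intent).
    mu, logvar: (batch, latent_dim) parameters of the approximate posterior."""
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
    return (-recon_log_prob + beta * kl).mean()   # minimize the negative ELBO
```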
Data Augmentation for End-to-end Code-switching Speech Recognition
|
Training a code-switching end-to-end automatic speech recognition (ASR) model
normally requires a large amount of data, while code-switching data is often
limited. In this paper, three novel approaches are proposed for code-switching
data augmentation. Specifically, they are audio splicing with the existing
code-switching data, and TTS with new code-switching texts generated by word
translation or word insertion. Our experiments on a 200-hour Mandarin-English
code-switching dataset show that all three proposed approaches individually yield
significant improvements on code-switching ASR. Moreover, all the proposed
approaches can be combined with the recently popular SpecAugment, and an
additional gain can be obtained. The WER is reduced by a relative 24.0% compared
to the system without any data augmentation, and by a relative 13.0% compared to
the system with only SpecAugment.
| 2,021 |
Computation and Language
|
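The word-translation variant of the augmentation described above can be sketched as below; the `lexicon` dictionary, the replacement probability, and the function name are illustrative assumptions (the paper's audio splicing and word insertion variants are not shown).

```python
import random

def augment_by_word_translation(tokens, lexicon, p=0.15, seed=None):
    """Create a synthetic code-switched sentence by translating a random subset
    of words with a bilingual lexicon; the text can then be fed to TTS to
    produce extra code-switching audio for ASR training."""
    rng = random.Random(seed)
    return [lexicon[tok] if tok in lexicon and rng.random() < p else tok
            for tok in tokens]
```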
Neural text normalization leveraging similarities of strings and sounds
|
We propose neural models that can normalize text by considering the
similarities of word strings and sounds. We experimentally compared a model
that considers the similarities of both word strings and sounds, a model that
considers only the similarity of word strings or of sounds, and a model without
the similarities as a baseline. Results showed that leveraging the word string
similarity succeeded in dealing with misspellings and abbreviations, and taking
into account the sound similarity succeeded in dealing with phonetic
substitutions and emphasized characters. As a result, the proposed models achieved
higher F$_1$ scores than the baseline.
| 2,020 |
Computation and Language
|
Extracting Chemical-Protein Interactions via Calibrated Deep Neural
Network and Self-training
|
The extraction of interactions between chemicals and proteins from biomedical
articles is important in many fields of biomedical research such as
drug development and prediction of drug side effects. Several natural language
processing methods, including deep neural network (DNN) models, have been
applied to address this problem. However, these methods were trained with
hard-labeled data, which tends to make them over-confident, degrading
model reliability. To estimate the data uncertainty and improve the
reliability, "calibration" techniques have been applied to deep learning
models. In this study, to extract chemical--protein interactions, we propose a
DNN-based approach incorporating uncertainty information and calibration
techniques. Our model first encodes the input sequence using a pre-trained
language-understanding model, following which it is trained using two
calibration methods: mixup training and addition of a confidence penalty loss.
Finally, the model is re-trained with augmented data that are extracted using
the estimated uncertainties. Our approach has achieved state-of-the-art
performance on the BioCreative VI ChemProt task, while preserving
higher calibration abilities than those of previous approaches. Furthermore,
our approach also presents the possibilities of using uncertainty estimation
for performance improvement.
| 2,020 |
Computation and Language
|
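One of the calibration methods named above, the confidence penalty, can be sketched as follows; the penalty weight and function signature are assumptions, and the mixup component is omitted.

```python
import torch
import torch.nn.functional as F

def confidence_penalty_loss(logits, labels, beta=0.1):
    """Cross-entropy plus a penalty on low prediction entropy, discouraging
    over-confident relation predictions. logits: (batch, n_classes)."""
    ce = F.cross_entropy(logits, labels)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
    return ce - beta * entropy   # rewarding entropy penalizes over-confidence
```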
Hybrid Supervised Reinforced Model for Dialogue Systems
|
This paper presents a recurrent hybrid model and training procedure for
task-oriented dialogue systems based on Deep Recurrent Q-Networks (DRQN). The
model copes with both tasks required for Dialogue Management: State Tracking
and Decision Making. It is based on modeling Human-Machine interaction into a
latent representation embedding an interaction context to guide the discussion.
The model achieves greater performance, learning speed and robustness than a
non-recurrent baseline. Moreover, results allow interpreting and validating the
policy evolution and the latent representations information-wise.
| 2,020 |
Computation and Language
|
Optimizing Transformer for Low-Resource Neural Machine Translation
|
Language pairs with limited amounts of parallel data, also known as
low-resource languages, remain a challenge for neural machine translation.
While the Transformer model has achieved significant improvements for many
language pairs and has become the de facto mainstream architecture, its
capability under low-resource conditions has not been fully investigated yet.
Our experiments on different subsets of the IWSLT14 training data show that the
effectiveness of Transformer under low-resource conditions is highly dependent
on the hyper-parameter settings. Our experiments show that using an optimized
Transformer for low-resource conditions improves the translation quality up to
7.3 BLEU points compared to using the Transformer default settings.
| 2,020 |
Computation and Language
|
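To make the kind of hyper-parameter changes discussed above concrete, here is an illustrative configuration; every value is an assumption chosen to show the typical direction of the changes (smaller capacity, stronger regularization), not the paper's reported optimum.

```python
# Illustrative Transformer settings for low-resource NMT (values are assumptions).
base_transformer = {
    "layers": 6, "embed_dim": 512, "ffn_dim": 2048,
    "attention_heads": 8, "dropout": 0.1, "label_smoothing": 0.1,
}
low_resource_transformer = {
    "layers": 5, "embed_dim": 512, "ffn_dim": 1024,
    "attention_heads": 2, "dropout": 0.3, "label_smoothing": 0.2,
    "bpe_merges": 10_000,   # smaller subword vocabulary
}
```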
Indic-Transformers: An Analysis of Transformer Language Models for
Indian Languages
|
Language models based on the Transformer architecture have achieved
state-of-the-art performance on a wide range of NLP tasks such as text
classification, question-answering, and token classification. However, this
performance is usually tested and reported on high-resource languages, like
English, French, Spanish, and German. Indian languages, on the other hand, are
underrepresented in such benchmarks. Despite some Indian languages being
included in training multilingual Transformer models, they have not been the
primary focus of such work. In order to evaluate the performance on Indian
languages specifically, we analyze these language models through extensive
experiments on multiple downstream tasks in Hindi, Bengali, and Telugu.
Here, we compare the efficacy of fine-tuning model parameters of
pre-trained models against that of training a language model from scratch.
Moreover, we empirically argue against the strict dependency between the
dataset size and model performance, but rather encourage task-specific model
and method selection. We achieve state-of-the-art performance for the text
classification task in Hindi and Bengali. Finally, we present effective
strategies for modeling Indian languages and we release our
model checkpoints for the community:
https://huggingface.co/neuralspace-reverie.
| 2,020 |
Computation and Language
|
A BERT-based Dual Embedding Model for Chinese Idiom Prediction
|
Chinese idioms are special fixed phrases usually derived from ancient
stories, whose meanings are oftentimes highly idiomatic and non-compositional.
The Chinese idiom prediction task is to select the correct idiom from a set of
candidate idioms given a context with a blank. We propose a BERT-based dual
embedding model to encode the contextual words as well as to learn dual
embeddings of the idioms. Specifically, we first match the embedding of each
candidate idiom with the hidden representation corresponding to the blank in
the context. We then match the embedding of each candidate idiom with the
hidden representations of all the tokens in the context through context
pooling. We further propose to use two separate idiom embeddings for the two
kinds of matching. Experiments on a recently released Chinese idiom cloze test
dataset show that our proposed method performs better than the existing state
of the art. Ablation experiments also show that both context pooling and dual
embedding contribute to the improvement of performance.
| 2,020 |
Computation and Language
|
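A hedged sketch of the dual-embedding scoring described above; mean pooling stands in for the paper's context pooling, and all names and shapes are assumptions.

```python
import torch
import torch.nn as nn

def score_idiom_candidates(blank_hidden, context_hidden,
                           blank_emb: nn.Embedding, context_emb: nn.Embedding,
                           candidate_ids):
    """blank_hidden:   (batch, d)      hidden state at the blank position
    context_hidden: (batch, seq, d)    hidden states of all context tokens
    candidate_ids:  (batch, n_cand)    ids of candidate idioms
    Each candidate is matched once against the blank and once against a pooled
    context vector, using two separate idiom embedding tables."""
    cand_blank = blank_emb(candidate_ids)          # (batch, n_cand, d)
    cand_ctx = context_emb(candidate_ids)          # (batch, n_cand, d)
    pooled = context_hidden.mean(dim=1)            # simple stand-in for context pooling
    score_blank = torch.einsum("bd,bnd->bn", blank_hidden, cand_blank)
    score_ctx = torch.einsum("bd,bnd->bn", pooled, cand_ctx)
    return score_blank + score_ctx                 # (batch, n_cand)
```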
Investigating Novel Verb Learning in BERT: Selectional Preference
Classes and Alternation-Based Syntactic Generalization
|
Previous studies investigating the syntactic abilities of deep learning
models have not targeted the relationship between the strength of the
grammatical generalization and the amount of evidence to which the model is
exposed during training. We address this issue by deploying a novel
word-learning paradigm to test BERT's few-shot learning capabilities for two
aspects of English verbs: alternations and classes of selectional preferences.
For the former, we fine-tune BERT on a single frame in a verbal-alternation
pair and ask whether the model expects the novel verb to occur in its sister
frame. For the latter, we fine-tune BERT on an incomplete selectional network
of verbal objects and ask whether it expects unattested but plausible
verb/object pairs. We find that BERT makes robust grammatical generalizations
after just one or two instances of a novel word in fine-tuning. For the verbal
alternation tests, we find that the model displays behavior that is consistent
with a transitivity bias: verbs seen few times are expected to take direct
objects, but verbs seen with direct objects are not expected to occur
intransitively.
| 2,020 |
Computation and Language
|
Offline Reinforcement Learning from Human Feedback in Real-World
Sequence-to-Sequence Tasks
|
Large volumes of interaction logs can be collected from NLP systems that are
deployed in the real world. How can this wealth of information be leveraged?
Using such interaction logs in an offline reinforcement learning (RL) setting
is a promising approach. However, due to the nature of NLP tasks and the
constraints of production systems, a series of challenges arise. We present a
concise overview of these challenges and discuss possible solutions.
| 2,021 |
Computation and Language
|
MTLB-STRUCT @PARSEME 2020: Capturing Unseen Multiword Expressions Using
Multi-task Learning and Pre-trained Masked Language Models
|
This paper describes a semi-supervised system that jointly learns verbal
multiword expressions (VMWEs) and dependency parse trees as an auxiliary task.
The model benefits from pre-trained multilingual BERT. BERT hidden layers are
shared among the two tasks and we introduce an additional linear layer to
retrieve VMWE tags. The dependency parse tree prediction is modelled by a
linear layer and a bilinear one plus a tree CRF on top of BERT. The system
participated in the open track of the PARSEME shared task 2020 and ranked first
in terms of F1-score in identifying unseen VMWEs as well as VMWEs in general,
averaged across all 14 languages.
| 2,020 |
Computation and Language
|
MK-SQuIT: Synthesizing Questions using Iterative Template-filling
|
The aim of this work is to create a framework for synthetically generating
question/query pairs with as little human input as possible. These datasets can
be used to train machine translation systems to convert natural language
questions into queries, a useful tool that could allow for more natural access
to database information. Existing methods of dataset generation require human
input that scales linearly with the size of the dataset, resulting in small
datasets. Aside from a short initial configuration task, no human input is
required during the query generation process of our system. We leverage
WikiData, a knowledge base of RDF triples, as a source for generating the main
content of questions and queries. Using multiple layers of question templating
we are able to sidestep some of the most challenging parts of query generation
that have been handled by humans in previous methods; humans never have to
modify, aggregate, inspect, annotate, or generate any questions or queries at
any step in the process. Our system is easily configurable to multiple domains
and can be modified to generate queries in natural languages other than
English. We also present an example dataset of 110,000 question/query pairs
across four WikiData domains. We then present a baseline model that we train
using the dataset which shows promise in a commercial QA setting.
| 2,020 |
Computation and Language
|
Detecting Hallucinated Content in Conditional Neural Sequence Generation
|
Neural sequence models can generate highly fluent sentences, but recent
studies have also shown that they are prone to hallucinate additional
content not supported by the input. This variety of fluent but wrong outputs
is particularly problematic, as users will not be able to tell that they
are being presented with incorrect content. To detect these errors, we propose a
task to predict whether each token in the output sequence is hallucinated (not
contained in the input) and collect new manually annotated evaluation sets for
this task. We also introduce a method for learning to detect hallucinations
using pretrained language models fine-tuned on synthetic data that includes
automatically inserted hallucinations. Experiments on machine translation (MT)
and abstractive summarization demonstrate that our proposed approach
consistently outperforms strong baselines on all benchmark datasets. We further
demonstrate how to use the token-level hallucination labels to define a
fine-grained loss over the target sequence in low-resource MT and achieve
significant improvements over strong baseline methods. We also apply our method
to word-level quality estimation for MT and show its effectiveness in both
supervised and unsupervised settings. Codes and data available at
https://github.com/violet-zct/fairseq-detect-hallucination.
| 2,021 |
Computation and Language
|
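The synthetic-data step mentioned above can be caricatured as below: some target tokens are replaced and labeled as hallucinated. The paper relies on a pretrained language model to insert plausible hallucinations; random replacement from a noise vocabulary is a simplifying assumption here.

```python
import random

def make_synthetic_hallucination_labels(target_tokens, noise_vocab, p=0.1, seed=0):
    """Return a noised target sequence and token-level labels
    (1 = hallucinated, i.e., not supported by the source)."""
    rng = random.Random(seed)
    noised, labels = [], []
    for tok in target_tokens:
        if rng.random() < p:
            noised.append(rng.choice(noise_vocab))
            labels.append(1)
        else:
            noised.append(tok)
            labels.append(0)
    return noised, labels
```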
Improving Event Duration Prediction via Time-aware Pre-training
|
End-to-end models in NLP rarely encode external world knowledge about length
of time. We introduce two effective models for duration prediction, which
incorporate external knowledge by reading temporal-related news sentences
(time-aware pre-training). Specifically, one model predicts the range/unit
in which the duration value falls (R-pred), and the other predicts the exact
duration value (E-pred). Our best model, E-pred, substantially outperforms
previous work, and captures duration information more accurately than R-pred.
We also demonstrate our models are capable of duration prediction in the
unsupervised setting, outperforming the baselines.
| 2,020 |
Computation and Language
|
Adversarial Context Aware Network Embeddings for Textual Networks
|
Representation learning of textual networks poses a significant challenge as
it involves capturing amalgamated information from two modalities: (i)
underlying network structure, and (ii) node textual attributes. For this, most
existing approaches learn embeddings of text and network structure by enforcing
embeddings of connected nodes to be similar. Then, to achieve modality fusion,
they use the similarities between the text embedding of a node and the structure
embedding of its connected nodes, and vice versa. This implies that these
approaches require edge information for learning embeddings and that they cannot
learn embeddings of unseen nodes. In this paper we propose an approach that
achieves both modality fusion and the capability to learn embeddings of unseen
nodes. The main feature of our model is that it uses an adversarial mechanism
between a text-embedding-based discriminator and a structure-embedding-based
generator to learn efficient representations. Then, for learning embeddings of
unseen nodes, we use the supervision provided by the text-embedding-based
discriminator. In addition to this, we propose a novel architecture for learning
text embeddings that combines both mutual attention and topological attention
mechanisms, which gives more flexible text embeddings.
Through extensive experiments on real-world datasets, we demonstrate that our
model makes substantial gains over several state-of-the-art benchmarks. In
comparison with the previous state-of-the-art, it gives up to a 7% improvement in
predicting links among nodes seen during training and up to a 12% improvement in
predicting links involving nodes not seen during training. Further, in the node
classification task, it gives up to a 2% improvement in performance.
| 2,020 |
Computation and Language
|
Investigating Societal Biases in a Poetry Composition System
|
There is a growing collection of work analyzing and mitigating societal
biases in language understanding, generation, and retrieval tasks, though
examining biases in creative tasks remains underexplored. Creative language
applications are meant for direct interaction with users, so it is important to
quantify and mitigate societal biases in these applications. We introduce a
novel study on a pipeline to mitigate societal biases when retrieving next
verse suggestions in a poetry composition system. Our results suggest that data
augmentation through sentiment style transfer has potential for mitigating
societal biases.
| 2,020 |
Computation and Language
|
Context-Aware Answer Extraction in Question Answering
|
Extractive QA models have shown very promising performance in predicting the
correct answer to a question for a given passage. However, they sometimes
result in predicting the correct answer text but in a context irrelevant to the
given question. This discrepancy becomes especially important as the number of
occurrences of the answer text in a passage increases. To resolve this issue,
we propose \textbf{BLANC} (\textbf{BL}ock \textbf{A}ttentio\textbf{N} for
\textbf{C}ontext prediction) based on two main ideas: context prediction as an
auxiliary task in multi-task learning manner, and a block attention method that
learns the context prediction task. With experiments on reading comprehension,
we show that BLANC outperforms the state-of-the-art QA models, and the
performance gap increases as the number of answer text occurrences increases.
We also conduct an experiment of training the models using SQuAD and predicting
the supporting facts on HotpotQA and show that BLANC outperforms all baseline
models in this zero-shot setting.
| 2,020 |
Computation and Language
|
Entity Linking in 100 Languages
|
We propose a new formulation for multilingual entity linking, where
language-specific mentions resolve to a language-agnostic Knowledge Base. We
train a dual encoder in this new setting, building on prior work with improved
feature representation, negative mining, and an auxiliary entity-pairing task,
to obtain a single entity retrieval model that covers 100+ languages and 20
million entities. The model outperforms state-of-the-art results from a far
more limited cross-lingual linking task. Rare entities and low-resource
languages pose challenges at this large scale, so we advocate for an increased
focus on zero- and few-shot evaluation. To this end, we provide Mewsli-9, a
large new multilingual dataset (http://goo.gle/mewsli-dataset) matched to our
setting, and show how frequency-based analysis provided key insights for our
model and training enhancements.
| 2,020 |
Computation and Language
|
Improving Commonsense Question Answering by Graph-based Iterative
Retrieval over Multiple Knowledge Sources
|
In order to facilitate natural language understanding, the key is to engage
commonsense or background knowledge. However, how to engage commonsense
effectively in question answering systems is still under exploration in both
academia and industry. In this paper, we propose a novel
question-answering method by integrating multiple knowledge sources, i.e.
ConceptNet, Wikipedia, and the Cambridge Dictionary, to boost the performance.
More concretely, we first introduce a novel graph-based iterative knowledge
retrieval module, which iteratively retrieves concepts and entities related to
the given question and its choices from multiple knowledge sources. Afterward,
we use a pre-trained language model to encode the question, retrieved knowledge
and choices, and propose an answer choice-aware attention mechanism to fuse all
hidden representations of the previous modules. Finally, the linear classifier
for specific tasks is used to predict the answer. Experimental results on the
CommonsenseQA dataset show that our method significantly outperforms other
competitive methods and achieves the new state-of-the-art. In addition, further
ablation studies demonstrate the effectiveness of our graph-based iterative
knowledge retrieval module and the answer choice-aware attention module in
retrieving and synthesizing background knowledge from multiple knowledge
sources.
| 2,020 |
Computation and Language
|
NUAA-QMUL at SemEval-2020 Task 8: Utilizing BERT and DenseNet for
Internet Meme Emotion Analysis
|
This paper describes our contribution to SemEval 2020 Task 8: Memotion
Analysis. Our system learns multi-modal embeddings from text and images in
order to classify Internet memes by sentiment. Our model learns text embeddings
using BERT and extracts features from images with DenseNet, subsequently
combining both features through concatenation. We also compare our results with
those produced by DenseNet, ResNet, BERT, and BERT-ResNet. Our results show
that image classification models have the potential to help classify memes,
with DenseNet outperforming ResNet. Adding text features is however not always
helpful for Memotion Analysis.
| 2,020 |
Computation and Language
|
Data Augmentation and Terminology Integration for Domain-Specific
Sinhala-English-Tamil Statistical Machine Translation
|
Out of vocabulary (OOV) is a problem in the context of Machine Translation
(MT) in low-resourced languages. When source and/or target languages are
morphologically rich, it becomes even worse. Bilingual list integration is an
approach to address the OOV problem. This allows more words to be translated
than are in the training data. However, since bilingual lists contain words in
the base form, this approach will not translate inflected forms for morphologically
rich languages such as Sinhala and Tamil. This paper focuses on data augmentation
techniques where bilingual lexicon terms are expanded based on case-markers
with the objective of generating new words, to be used in Statistical Machine
Translation (SMT). This data augmentation technique for dictionary terms shows
improved BLEU scores for Sinhala-English SMT.
| 2,021 |
Computation and Language
|
Imagining Grounded Conceptual Representations from Perceptual
Information in Situated Guessing Games
|
In visual guessing games, a Guesser has to identify a target object in a
scene by asking questions to an Oracle. An effective strategy for the players
is to learn conceptual representations of objects that are both discriminative
and expressive enough to ask questions and guess correctly. However, as shown
by Suglia et al. (2020), existing models fail to learn truly multi-modal
representations, relying instead on gold category labels for objects in the
scene both at training and inference time. This provides an unnatural
performance advantage when categories at inference time match those at training
time, and it causes models to fail in more realistic "zero-shot" scenarios
where out-of-domain object categories are involved. To overcome this issue, we
introduce a novel "imagination" module based on Regularized Auto-Encoders, that
learns context-aware and category-aware latent embeddings without relying on
category labels at inference time. Our imagination module outperforms
state-of-the-art competitors by 8.26% gameplay accuracy in the CompGuessWhat?!
zero-shot scenario (Suglia et al., 2020), and it improves the Oracle and
Guesser accuracy by 2.08% and 12.86% in the GuessWhat?! benchmark, when no gold
categories are available at inference time. The imagination module also boosts
reasoning about object properties and attributes.
| 2,020 |
Computation and Language
|
Paralinguistic Privacy Protection at the Edge
|
Voice user interfaces and digital assistants are rapidly entering our lives
and becoming singular touch points spanning our devices. These always-on
services capture and transmit our audio data to powerful cloud services for
further processing and subsequent actions. Our voices and raw audio signals
collected through these devices contain a host of sensitive paralinguistic
information that is transmitted to service providers regardless of deliberate
or false triggers. As our emotional patterns and sensitive attributes, like our
identity, gender, and well-being, are easily inferred using deep acoustic models,
we encounter a new generation of privacy risks by using these services. One
approach to mitigate the risk of paralinguistic-based privacy breaches is to
exploit a combination of cloud-based processing with privacy-preserving,
on-device paralinguistic information learning and filtering before transmitting
voice data. In this paper we introduce EDGY, a configurable, lightweight,
disentangled representation learning framework that transforms and filters
high-dimensional voice data to identify and contain sensitive attributes at the
edge prior to offloading to the cloud. We evaluate EDGY's on-device performance
and explore optimization techniques, including model quantization and knowledge
distillation, to enable private, accurate and efficient representation learning
on resource-constrained devices. Our results show that EDGY runs in tens of
milliseconds with 0.2% relative improvement in "zero-shot" ABX score or minimal
performance penalties of approximately 5.95% word error rate (WER) in learning
linguistic representations from raw voice signals, using a CPU and a
single-core ARM processor without specialized hardware.
| 2,022 |
Computation and Language
|
QMUL-SDS @ DIACR-Ita: Evaluating Unsupervised Diachronic Lexical
Semantics Classification in Italian
|
In this paper, we present the results and main findings of our system for the
DIACR-ITA 2020 Task. Our system focuses on using variations of training sets
and different semantic detection methods. The task involves training, aligning
and predicting a word's vector change from two diachronic Italian corpora. We
demonstrate that using Temporal Word Embeddings with a Compass C-BOW model is
more effective, in terms of accuracy, than alternative approaches including
Logistic Regression and a Feed-Forward Neural Network. Our model ranked 3rd with an
accuracy of 83.3%.
| 2,020 |
Computation and Language
|
Learning Efficient Task-Specific Meta-Embeddings with Word Prisms
|
Word embeddings are trained to predict word cooccurrence statistics, which
leads them to possess different lexical properties (syntactic, semantic, etc.)
depending on the notion of context defined at training time. These properties
manifest when querying the embedding space for the most similar vectors, and
when used at the input layer of deep neural networks trained to solve
downstream NLP problems. Meta-embeddings combine multiple sets of differently
trained word embeddings, and have been shown to successfully improve intrinsic
and extrinsic performance over equivalent models which use just one set of
source embeddings. We introduce word prisms: a simple and efficient
meta-embedding method that learns to combine source embeddings according to the
task at hand. Word prisms learn orthogonal transformations to linearly combine
the input source embeddings, which allows them to be very efficient at
inference time. We evaluate word prisms in comparison to other meta-embedding
methods on six extrinsic evaluations and observe that word prisms offer
improvements in performance on all tasks.
| 2,020 |
Computation and Language
|
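At inference time, the word-prism combination described above can be sketched as follows; concatenating the rotated views is an assumption about how the facets are combined, and the orthogonal maps are taken as already learned.

```python
import numpy as np

def word_prism_vector(word, source_embeddings, orthogonal_maps):
    """source_embeddings: list of dicts (word -> vector), one per source space.
    orthogonal_maps: list of learned orthogonal matrices, one per source space.
    Each source vector is rotated by its orthogonal map and the views are
    concatenated into the task-specific meta-embedding."""
    views = [Q @ emb[word] for emb, Q in zip(source_embeddings, orthogonal_maps)]
    return np.concatenate(views)
```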
CODER: Knowledge infused cross-lingual medical term embedding for term
normalization
|
This paper proposes CODER: contrastive learning on knowledge graphs for
cross-lingual medical term representation. CODER is designed for medical term
normalization by providing close vector representations for different terms
that represent the same or similar medical concepts with cross-lingual support.
We train CODER via contrastive learning on a medical knowledge graph (KG) named
the Unified Medical Language System, where similarities are calculated
utilizing both terms and relation triplets from KG. Training with relations
injects medical knowledge into embeddings and aims to provide potentially
better machine learning features. We evaluate CODER in zero-shot term
normalization, semantic similarity, and relation classification benchmarks,
which show that CODER outperforms various state-of-the-art biomedical word
embeddings, concept embeddings, and contextual embeddings. Our code and models
are available at https://github.com/GanjinZero/CODER.
| 2,021 |
Computation and Language
|
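A minimal sketch of a term-level contrastive objective in the spirit of the training described above; the in-batch InfoNCE form and temperature are assumptions, and the relation-triplet similarities are not shown.

```python
import torch
import torch.nn.functional as F

def term_contrastive_loss(anchor_vecs, positive_vecs, temperature=0.05):
    """anchor_vecs / positive_vecs: (batch, d) embeddings of two different terms
    (possibly in different languages) naming the same UMLS concept; other
    in-batch terms act as negatives."""
    a = F.normalize(anchor_vecs, dim=-1)
    p = F.normalize(positive_vecs, dim=-1)
    logits = a @ p.t() / temperature          # (batch, batch) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)
```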
Competence-Level Prediction and Resume & Job Description Matching Using
Context-Aware Transformer Models
|
This paper presents a comprehensive study on resume classification to significantly
reduce the time and labor needed to screen an overwhelming number of applications,
while improving the selection of suitable candidates. A total of
6,492 resumes are extracted from 24,933 job applications for 252 positions
designated into four levels of experience for Clinical Research Coordinators
(CRC). Each resume is manually annotated to its most appropriate CRC position
by experts through several rounds of triple annotation to establish guidelines.
As a result, a high Kappa score of 61% is achieved for inter-annotator
agreement. Given this dataset, novel transformer-based classification models
are developed for two tasks: the first task takes a resume and classifies it to
a CRC level (T1), and the second task takes both a resume and a job description
to apply for, and predicts whether the application is suited to the job (T2). Our best
models, using section encoding and multi-head attention decoding, give results of
73.3% for T1 and 79.2% for T2. Our analysis shows that the prediction errors are
mostly made among adjacent CRC levels, which are hard for even experts to
distinguish, implying the practical value of our models in real HR platforms.
| 2,020 |
Computation and Language
|
MEGA RST Discourse Treebanks with Structure and Nuclearity from Scalable
Distant Sentiment Supervision
|
The lack of large and diverse discourse treebanks hinders the application of
data-driven approaches, such as deep-learning, to RST-style discourse parsing.
In this work, we present a novel scalable methodology to automatically generate
discourse treebanks using distant supervision from sentiment-annotated
datasets, creating and publishing MEGA-DT, a new large-scale
discourse-annotated corpus. Our approach generates discourse trees
incorporating structure and nuclearity for documents of arbitrary length by
relying on an efficient heuristic beam-search strategy, extended with a
stochastic component. Experiments on multiple datasets indicate that a
discourse parser trained on our MEGA-DT treebank delivers promising
inter-domain performance gains when compared to parsers trained on
human-annotated discourse corpora.
| 2,020 |
Computation and Language
|
Quantifying Intimacy in Language
|
Intimacy is a fundamental aspect of how we relate to others in social
settings. Language encodes the social information of intimacy through both
topics and other more subtle cues (such as linguistic hedging and swearing).
Here, we introduce a new computational framework for studying expressions of
intimacy in language, with an accompanying dataset and deep learning model
for accurately predicting the intimacy level of questions (Pearson's r=0.87).
Through analyzing a dataset of 80.5M questions across social media, books, and
films, we show that individuals employ interpersonal pragmatic moves in their
language to align their intimacy with social settings. Then, in three studies,
we further demonstrate how individuals modulate their intimacy to match social
norms around gender, social distance, and audience, each validating key
findings from studies in social psychology. Our work demonstrates that intimacy
is a pervasive and impactful social dimension of language.
| 2,020 |
Computation and Language
|
From Sentiment Annotations to Sentiment Prediction through Discourse
Augmentation
|
Sentiment analysis, especially for long documents, plausibly requires methods
that capture complex linguistic structures. To accommodate this, we propose a
novel framework to exploit task-related discourse for the task of sentiment
analysis. More specifically, we combine the large-scale,
sentiment-dependent MEGA-DT treebank with a novel neural architecture for
sentiment prediction, based on a hybrid TreeLSTM hierarchical attention model.
Experiments show that our framework using sentiment-related discourse
augmentations for sentiment prediction enhances the overall performance for
long documents, even beyond previous approaches using well-established
discourse parsers trained on human annotated data. We show that a simple
ensemble approach can further enhance performance by selectively using
discourse, depending on the document length.
| 2,020 |
Computation and Language
|
Language Model is All You Need: Natural Language Understanding as
Question Answering
|
Different flavors of transfer learning have shown tremendous impact in
advancing research and applications of machine learning. In this work we study
the use of a specific family of transfer learning, where the target domain is
mapped to the source domain. Specifically, we map Natural Language Understanding
(NLU) problems to Question Answering (QA) problems and we show that in low-data
regimes this approach offers significant improvements compared to other
approaches to NLU. Moreover, we show that these gains can be increased through
sequential transfer learning across NLU problems from different domains. We
show that our approach could reduce the amount of required data for the same
performance by up to a factor of 10.
| 2,020 |
Computation and Language
|
Alignment Restricted Streaming Recurrent Neural Network Transducer
|
There is a growing interest in the speech community in developing Recurrent
Neural Network Transducer (RNN-T) models for automatic speech recognition (ASR)
applications. RNN-T is trained with a loss function that does not enforce
temporal alignment of the training transcripts and audio. As a result, RNN-T
models built with uni-directional long short term memory (LSTM) encoders tend
to wait for longer spans of input audio, before streaming already decoded ASR
tokens. In this work, we propose a modification to the RNN-T loss function and
develop Alignment Restricted RNN-T (Ar-RNN-T) models, which utilize audio-text
alignment information to guide the loss computation. We compare the proposed
method with existing works, such as monotonic RNN-T, on LibriSpeech and
in-house datasets. We show that the Ar-RNN-T loss provides a refined control to
navigate the trade-offs between the token emission delays and the Word Error
Rate (WER). The Ar-RNN-T models also improve downstream applications such as
the ASR End-pointing by guaranteeing token emissions within any given range of
latency. Moreover, the Ar-RNN-T loss allows for bigger batch sizes and 4 times
higher throughput for our LSTM model architecture, enabling faster training and
convergence on GPUs.
| 2,020 |
Computation and Language
|
EXAMS: A Multi-Subject High School Examinations Dataset for
Cross-Lingual and Multilingual Question Answering
|
We propose EXAMS -- a new benchmark dataset for cross-lingual and
multilingual question answering for high school examinations. We collected more
than 24,000 high-quality high school exam questions in 16 languages, covering 8
language families and 24 school subjects from Natural Sciences and Social
Sciences, among others.
EXAMS offers a fine-grained evaluation framework across multiple languages
and subjects, which allows precise analysis and comparison of various models.
We perform various experiments with existing top-performing multilingual
pre-trained models and we show that EXAMS offers multiple challenges that
require multilingual knowledge and reasoning in multiple domains. We hope that
EXAMS will enable researchers to explore challenging reasoning and knowledge
transfer methods and pre-trained models for school question answering in
various languages which was not possible before. The data, code, pre-trained
models, and evaluation are available at https://github.com/mhardalov/exams-qa.
| 2,020 |
Computation and Language
|
HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
|
We introduce HoVer (HOppy VERification), a dataset for many-hop evidence
extraction and fact verification. It challenges models to extract facts from
several Wikipedia articles that are relevant to a claim and classify whether
the claim is Supported or Not-Supported by the facts. In HoVer, the claims
require evidence to be extracted from as many as four English Wikipedia
articles and embody reasoning graphs of diverse shapes. Moreover, most of the
3/4-hop claims are written in multiple sentences, which adds to the complexity
of understanding long-range dependency relations such as coreference. We show
that the performance of an existing state-of-the-art semantic-matching model
degrades significantly on our dataset as the number of reasoning hops
increases, hence demonstrating the necessity of many-hop reasoning to achieve
strong results. We hope that the introduction of this challenging dataset and
the accompanying evaluation task will encourage research in many-hop fact
retrieval and information verification. We make the HoVer dataset publicly
available at https://hover-nlp.github.io
| 2,020 |
Computation and Language
|
Machine Generation and Detection of Arabic Manipulated and Fake News
|
Fake news and deceptive machine-generated text are serious problems
threatening modern societies, including in the Arab world. This motivates work
on detecting false and manipulated stories online. However, a bottleneck for
this research is lack of sufficient data to train detection models. We present
a novel method for automatically generating Arabic manipulated (and potentially
fake) news stories. Our method is simple and only depends on the availability of
true stories, which are abundant online, and a part-of-speech (POS) tagger. To
facilitate future work, we dispense with both of these requirements altogether
by providing AraNews, a novel and large POS-tagged news dataset that can be
used off-the-shelf. Using stories generated based on AraNews, we carry out a
human annotation study that casts light on the effects of machine manipulation
on text veracity. The study also measures human ability to detect Arabic
machine manipulated text generated by our method. Finally, we develop the first
models for detecting manipulated Arabic news and achieve state-of-the-art
results on Arabic fake news detection (macro F1=70.06). Our models and data are
publicly available.
| 2,020 |
Computation and Language
|
Explain by Evidence: An Explainable Memory-based Neural Network for
Question Answering
|
Interpretability and explainability of deep neural networks are challenging
due to their scale, complexity, and the agreeable notions on which the
explaining process rests. Previous work, in particular, has focused on
representing internal components of neural networks through human-friendly
visuals and concepts. On the other hand, in real life, when making a decision,
humans tend to rely on similar situations and/or associations from the past.
Hence arguably, a promising approach to make the model transparent is to design
it in a way such that the model explicitly connects the current sample with the
seen ones, and bases its decision on these samples. Grounded on that principle,
we propose in this paper an explainable, evidence-based memory network
architecture, which learns to summarize the dataset and extract supporting
evidence to make its decision. Our model achieves state-of-the-art performance
on two popular question answering datasets (i.e. TrecQA and WikiQA). Via
further analysis, we show that this model can reliably trace the errors it has
made in the validation step to the training instances that might have caused
these errors. We believe that this error-tracing capability provides
significant benefit in improving dataset quality in many applications.
| 2,020 |
Computation and Language
|
Improving RNN Transducer Based ASR with Auxiliary Tasks
|
End-to-end automatic speech recognition (ASR) models with a single neural
network have recently demonstrated state-of-the-art results compared to
conventional hybrid speech recognizers. Specifically, recurrent neural network
transducer (RNN-T) has shown competitive ASR performance on various benchmarks.
In this work, we examine ways in which RNN-T can achieve better ASR accuracy
via performing auxiliary tasks. We propose (i) using the same auxiliary task as the
primary RNN-T ASR task, and (ii) performing context-dependent graphemic state
prediction as in conventional hybrid modeling. In transcribing social media
videos with varying training data size, we first evaluate the streaming ASR
performance on three languages: Romanian, Turkish and German. We find that both
proposed methods provide consistent improvements. Next, we observe that both
auxiliary tasks demonstrate efficacy in learning deep transformer encoders for
RNN-T criterion, thus achieving competitive results - 2.0%/4.2% WER on
LibriSpeech test-clean/other - as compared to prior top performing models.
| 2,020 |
Computation and Language
|
Semi-supervised URL Segmentation with Recurrent Neural Networks
Pre-trained on Knowledge Graph Entities
|
Breaking domain names such as openresearch into component words open and
research is important for applications like Text-to-Speech synthesis and web
search. We link this problem to the classic problem of Chinese word
segmentation and show the effectiveness of a tagging model based on Recurrent
Neural Networks (RNNs) using characters as input. To compensate for the lack of
training data, we propose a pre-training method on concatenated entity names in
a large knowledge database. Pre-training improves the model by 33% and brings
the sequence accuracy to 85%.
| 2,020 |
Computation and Language
|
What's New? Summarizing Contributions in Scientific Literature
|
With thousands of academic articles shared on a daily basis, it has become
increasingly difficult to keep up with the latest scientific findings. To
overcome this problem, we introduce a new task of disentangled paper
summarization, which seeks to generate separate summaries for the paper
contributions and the context of the work, making it easier to identify the key
findings shared in articles. For this purpose, we extend the S2ORC corpus of
academic articles, which spans a diverse set of domains ranging from economics
to psychology, by adding disentangled "contribution" and "context" reference
labels. Together with the dataset, we introduce and analyze three baseline
approaches: 1) a unified model controlled by input code prefixes, 2) a model
with separate generation heads specialized in generating the disentangled
outputs, and 3) a training strategy that guides the model using additional
supervision coming from inbound and outbound citations. We also propose a
comprehensive automatic evaluation protocol which reports the relevance,
novelty, and disentanglement of generated outputs. Through a human study
involving expert annotators, we show that in 79% of cases our new task is
considered more helpful than traditional scientific paper summarization.
| 2,020 |
Computation and Language
|
Unleashing the Power of Neural Discourse Parsers -- A Context and
Structure Aware Approach Using Large Scale Pretraining
|
RST-based discourse parsing is an important NLP task with numerous downstream
applications, such as summarization, machine translation and opinion mining. In
this paper, we demonstrate a simple, yet highly accurate discourse parser,
incorporating recent contextual language models. Our parser establishes the new
state-of-the-art (SOTA) performance for predicting structure and nuclearity on
two key RST datasets, RST-DT and Instr-DT. We further demonstrate that
pretraining our parser on the recently available large-scale "silver-standard"
discourse treebank MEGA-DT provides even larger performance benefits,
suggesting a novel and promising research direction in the field of discourse
analysis.
| 2,020 |
Computation and Language
|
From Dataset Recycling to Multi-Property Extraction and Beyond
|
This paper investigates various Transformer architectures on the WikiReading
Information Extraction and Machine Reading Comprehension dataset. The proposed
dual-source model outperforms the current state-of-the-art by a large margin.
Next, we introduce WikiReading Recycled, a newly developed public dataset, and
the task of multiple property extraction. It uses the same data as WikiReading
but does not inherit its predecessor's identified disadvantages. In addition,
we provide a human-annotated test set with diagnostic subsets for a detailed
analysis of model performance.
| 2,020 |
Computation and Language
|
OP-IMS @ DIACR-Ita: Back to the Roots: SGNS+OP+CD still rocks Semantic
Change Detection
|
We present the results of our participation in the DIACR-Ita shared task on
lexical semantic change detection for Italian. We exploit one of the earliest
and most influential semantic change detection models based on Skip-Gram with
Negative Sampling, Orthogonal Procrustes alignment and Cosine Distance and
obtain the winning submission of the shared task with a near-perfect accuracy
of 0.94. Our results once more indicate that, within the present task setup in
lexical semantic change detection, the traditional type-based approaches yield
excellent performance.
| 2,020 |
Computation and Language
|
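The SGNS+OP+CD pipeline named in the abstract above can be sketched in a few lines: train Skip-Gram with Negative Sampling embeddings per time period (assumed already done here), align the two spaces with Orthogonal Procrustes, and rank target words by cosine distance between their aligned vectors. The embeddings below are random placeholders over a toy shared vocabulary.

```python
# Orthogonal Procrustes alignment + cosine distance for semantic change detection,
# the SGNS+OP+CD pipeline named in the abstract above. The SGNS embeddings for the
# two time periods are assumed to have been trained already (e.g. with gensim);
# here they are random placeholders over a shared vocabulary.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["casa", "banca", "rete", "sito", "penna"]
emb_t1 = rng.normal(size=(len(vocab), 100))   # embeddings from corpus of period 1
emb_t2 = rng.normal(size=(len(vocab), 100))   # embeddings from corpus of period 2

def procrustes_align(A, B):
    """Find the orthogonal matrix W minimising ||A W - B||_F (rows = shared words)."""
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

def cosine_distance(u, v):
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

W = procrustes_align(emb_t1, emb_t2)
aligned_t1 = emb_t1 @ W

# Rank words by how far they moved between the two aligned spaces;
# the most distant words are the semantic-change candidates.
scores = {w: cosine_distance(aligned_t1[i], emb_t2[i]) for i, w in enumerate(vocab)}
for w, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{w}\t{s:.3f}")
```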
Alquist 2.0: Alexa Prize Socialbot Based on Sub-Dialogue Models
|
This paper presents the second version of the dialogue system named Alquist
competing in the Amazon Alexa Prize 2018. We introduce a system leveraging
ontology-based topic structure called topic nodes. Each of the nodes consists
of several sub-dialogues, and each sub-dialogue has its own LSTM-based model
for dialogue management. The sub-dialogues can be triggered according to the
topic hierarchy or a user intent which allows the bot to create a unique
experience during each session.
| 2,020 |
Computation and Language
|
Alquist 3.0: Alexa Prize Bot Using Conversational Knowledge Graph
|
The third version of the open-domain dialogue system Alquist developed within
the Alexa Prize 2020 competition is designed to conduct coherent and engaging
conversations on popular topics. The main novel contribution is the
introduction of a system leveraging an innovative approach based on a
conversational knowledge graph and adjacency pairs. The conversational
knowledge graph allows the system to utilize knowledge expressed during the
dialogue in subsequent turns and across conversations. Dialogue adjacency pairs
divide the conversation into small conversational structures, which can be
combined and allow the system to react to a wide range of user inputs flexibly.
We discuss and describe Alquist's pipeline, data acquisition and processing,
dialogue manager, NLG, knowledge aggregation, and a hierarchy of adjacency
pairs. We present the experimental results of the individual parts of the
system.
| 2,020 |
Computation and Language
|
Corpora Compared: The Case of the Swedish Gigaword & Wikipedia Corpora
|
In this work, we show that the difference in performance of embeddings from
differently sourced data for a given language can be due to other factors
besides data size. Natural language processing (NLP) tasks usually perform
better with embeddings from bigger corpora. However, broadness of covered
domain and noise can play important roles. We evaluate embeddings based on two
Swedish corpora: The Gigaword and Wikipedia, in analogy (intrinsic) tests and
discover that the embeddings from the Wikipedia corpus generally outperform
those from the Gigaword corpus, which is a bigger corpus. Downstream tests will
be required for a definitive evaluation.
| 2,020 |
Computation and Language
|
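A hedged sketch of how the intrinsic analogy tests mentioned above might be run with gensim's built-in analogy evaluator; the vector files and the Swedish analogy file are placeholder names, not the actual resources used in the paper.

```python
# Intrinsic (analogy) evaluation of word embeddings from two corpora, as in the
# comparison above. File names are placeholders; the vectors are assumed to be in
# word2vec text format and the analogy file in the standard "questions-words" format.
from gensim.models import KeyedVectors

for name, path in [("Gigaword", "sv_gigaword.vec"), ("Wikipedia", "sv_wikipedia.vec")]:
    kv = KeyedVectors.load_word2vec_format(path)
    # evaluate_word_analogies returns an overall accuracy and per-section results
    accuracy, sections = kv.evaluate_word_analogies("swedish_analogies.txt")
    print(f"{name}: analogy accuracy = {accuracy:.3f}")
```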
Semi-Supervised Low-Resource Style Transfer of Indonesian Informal to
Formal Language with Iterative Forward-Translation
|
In its daily use, the Indonesian language is riddled with informality, that
is, deviations from the standard in terms of vocabulary, spelling, and word
order. On the other hand, current available Indonesian NLP models are typically
developed with the standard Indonesian in mind. In this work, we address a
style-transfer from informal to formal Indonesian as a low-resource machine
translation problem. We build a new dataset of parallel sentences of informal
Indonesian and its formal counterpart. We benchmark several strategies to
perform style transfer from informal to formal Indonesian. We also explore
augmenting the training set with artificial forward-translated data. Since we
are dealing with an extremely low-resource setting, we find that a phrase-based
machine translation approach outperforms the Transformer-based approach.
Alternatively, a pre-trained GPT-2 fine-tuned on this task performed equally
well but requires more computational resources. Our findings show a promising
step towards leveraging machine translation models for style transfer. Our code
and data are available at https://github.com/haryoa/stif-indonesia
| 2,020 |
Computation and Language
|
The ApposCorpus: A new multilingual, multi-domain dataset for factual
appositive generation
|
News articles, image captions, product reviews and many other texts mention
people and organizations whose name recognition could vary for different
audiences. In such cases, background information about the named entities could
be provided in the form of an appositive noun phrase, either written by a human
or generated automatically. We expand on the previous work in appositive
generation with a new, more realistic, end-to-end definition of the task,
instantiated by a dataset that spans four languages (English, Spanish, German
and Polish), two entity types (person and organization) and two domains
(Wikipedia and News). We carry out an extensive analysis of the data and the
task, pointing to the various modeling challenges it poses. The results we
obtain with standard language generation methods show that the task is indeed
non-trivial, and leaves plenty of room for improvement.
| 2,020 |
Computation and Language
|
Improving Machine Reading Comprehension with Single-choice Decision and
Transfer Learning
|
Multi-choice Machine Reading Comprehension (MMRC) aims to select the correct
answer from a set of options based on a given passage and question. Because of
the task-specific format of MMRC, it is non-trivial to transfer knowledge from
other MRC tasks such as SQuAD and DREAM. In this paper, we recast multi-choice
as single-choice by training a binary classifier to decide whether a given
answer is correct, and then select the option with the highest confidence
score. We build our model upon ALBERT-xxlarge and evaluate it on the RACE
dataset. During training, we adopt an AutoML strategy to tune the
hyperparameters. Experimental results show that the single-choice formulation
outperforms multi-choice. In addition, by transferring knowledge from other
kinds of MRC tasks, our model achieves new state-of-the-art results in both
single and ensemble settings.
| 2,020 |
Computation and Language
|
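A minimal sketch of the single-choice reconstruction described above: each option is scored independently by a binary (correct/incorrect) classifier, and the option with the highest confidence is selected. The checkpoint, the input formatting, and the freshly initialized (untrained) classification head are illustrative assumptions; the paper builds on ALBERT-xxlarge and fine-tunes on RACE.

```python
# Recasting multi-choice reading comprehension as per-option binary classification,
# as described in the abstract above. The checkpoint, input formatting, and the
# untrained classification head are illustrative; the paper builds on ALBERT-xxlarge.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "albert-base-v2"  # stand-in for albert-xxlarge-v2 to keep the sketch light
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)
model.eval()

def pick_answer(passage, question, options):
    """Score every option independently as correct/incorrect, return the best one."""
    scores = []
    for option in options:
        inputs = tokenizer(passage + " " + question, option,
                           truncation=True, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits
        # probability that this single option is the correct answer
        scores.append(torch.softmax(logits, dim=-1)[0, 1].item())
    return max(range(len(options)), key=lambda i: scores[i]), scores

passage = "The library opens at nine and closes at five on weekdays."
question = "When does the library open on weekdays?"
options = ["At five", "At nine", "At noon", "It never opens"]
best, scores = pick_answer(passage, question, options)
print(options[best], scores)  # scores are meaningless until the head is fine-tuned
```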
Learning to Respond with Your Favorite Stickers: A Framework of Unifying
Multi-Modality and User Preference in Multi-Turn Dialog
|
Stickers with vivid and engaging expressions are becoming increasingly
popular in online messaging apps, and some works automatically select a sticker
response by matching the sticker image with previous utterances. However,
existing methods usually focus on measuring the matching degree between the
dialog context and the sticker image, and ignore the user's preference for
stickers. Hence, in this paper, we propose to recommend an appropriate sticker
to the user based on the multi-turn dialog context and the user's sticker usage
history. Two main challenges are confronted in this task. One is to model the
user's sticker preference based on their previous sticker selections. The other
is to jointly fuse the user preference and the matching between dialog context
and candidate sticker into the final prediction. To tackle these challenges, we
propose a \emph{Preference Enhanced Sticker Response Selector} (PESRS) model.
Specifically, PESRS first employs a convolution-based sticker image encoder and
a self-attention based multi-turn dialog encoder to obtain the representations
of stickers and utterances. Next, a deep interaction network is proposed to
conduct deep matching between the
sticker and each utterance. Then, we model the user preference by using the
recently selected stickers as input, and use a key-value memory network to
store the preference representation. PESRS then learns the short-term and
long-term dependency between all interaction results by a fusion network, and
dynamically fuse the user preference representation into the final sticker
selection prediction. Extensive experiments conducted on a large-scale
real-world dialog dataset show that our model achieves the state-of-the-art
performance for all commonly-used metrics. Experiments also verify the
effectiveness of each component of PESRS.
| 2,020 |
Computation and Language
|
Fighting an Infodemic: COVID-19 Fake News Dataset
|
Along with COVID-19 pandemic we are also fighting an `infodemic'. Fake news
and rumors are rampant on social media. Believing in rumors can cause
significant harm. This is further exacerbated at the time of a pandemic. To
tackle this, we curate and release a manually annotated dataset of 10,700
social media posts and articles of real and fake news on COVID-19. We benchmark
the annotated dataset with four machine learning baselines - Decision Tree,
Logistic Regression, Gradient Boost, and Support Vector Machine (SVM). We
obtain the best performance of 93.46% F1-score with SVM. The data and code are
available at: https://github.com/parthpatwa/covid19-fake-news-dectection
| 2,021 |
Computation and Language
|
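One of the reported baselines is an SVM; a minimal TF-IDF plus linear SVM pipeline in scikit-learn might look like the sketch below. The CSV file name and column names are assumptions about the data layout, not guaranteed to match the released files.

```python
# A TF-IDF + linear SVM baseline in the spirit of the benchmark above.
# The CSV file name and column names ("tweet", "label") are assumptions about
# the released data layout, not guaranteed to match it exactly.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

df = pd.read_csv("covid19_fake_news.csv")          # columns assumed: tweet, label
X_train, X_test, y_train, y_test = train_test_split(
    df["tweet"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2, sublinear_tf=True),
    LinearSVC(C=1.0),
)
clf.fit(X_train, y_train)
print("F1:", f1_score(y_test, clf.predict(X_test), average="weighted"))
```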
Answer Span Correction in Machine Reading Comprehension
|
Answer validation in machine reading comprehension (MRC) consists of
verifying an extracted answer against an input context and question pair.
Previous work has looked at re-assessing the "answerability" of the question
given the extracted answer. Here we address a different problem: the tendency
of existing MRC systems to produce partially correct answers when presented
with answerable questions. We explore the nature of such errors and propose a
post-processing correction method that yields statistically significant
performance improvements over state-of-the-art MRC systems in both monolingual
and multilingual evaluation.
| 2,020 |
Computation and Language
|
Understanding Pure Character-Based Neural Machine Translation: The Case
of Translating Finnish into English
|
Recent work has shown that deeper character-based neural machine translation
(NMT) models can outperform subword-based models. However, it is still unclear
what makes deeper character-based models successful. In this paper, we conduct
an investigation into pure character-based models in the case of translating
Finnish into English, including exploring the ability to learn word senses and
morphological inflections and the attention mechanism. We demonstrate that
word-level information is distributed over the entire character sequence rather
than over a single character, and characters at different positions play
different roles in learning linguistic knowledge. In addition, character-based
models need more layers to encode word senses, which explains why only deeper
models outperform subword-based models. The attention distribution pattern
shows that separators attract a lot of attention, and we explore sparse
word-level attention to encourage character hidden states to capture the full
word-level information. Experimental results show that the word-level attention
with a single head results in 1.2 BLEU points drop.
| 2,020 |
Computation and Language
|
Practical and Ethical Considerations in the Effective use of Emotion and
Sentiment Lexicons
|
Lexicons of word-emotion associations are widely used in research and
real-world applications. As part of my research, I have created several such
lexicons (e.g., the NRC Emotion Lexicon). This paper outlines some practical
and ethical considerations involved in the effective use of these lexical
resources.
| 2,020 |
Computation and Language
|
An Unsupervised method for OCR Post-Correction and Spelling
Normalisation for Finnish
|
Historical corpora are known to contain errors introduced by OCR (optical
character recognition) methods used in the digitization process, which are
often said to degrade the performance of NLP systems. Correcting these errors
manually is a time-consuming process, and most automatic approaches rely on
rules or supervised machine learning. We build on previous work
on fully automatic unsupervised extraction of parallel data to train a
character-based sequence-to-sequence NMT (neural machine translation) model to
conduct OCR error correction designed for English, and adapt it to Finnish by
proposing solutions that take the rich morphology of the language into account.
Our new method shows increased performance while remaining fully unsupervised,
with the added benefit of spelling normalisation. The source code and models
are available on GitHub and Zenodo.
| 2,020 |
Computation and Language
|
Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis
|
We describe a sequence-to-sequence neural network which directly generates
speech waveforms from text inputs. The architecture extends the Tacotron model
by incorporating a normalizing flow into the autoregressive decoder loop.
Output waveforms are modeled as a sequence of non-overlapping fixed-length
blocks, each one containing hundreds of samples. The interdependencies of
waveform samples within each block are modeled using the normalizing flow,
enabling parallel training and synthesis. Longer-term dependencies are handled
autoregressively by conditioning each flow on preceding blocks. This model can
be optimized directly with maximum likelihood, without using intermediate
hand-designed features or additional loss terms. Contemporary state-of-the-art
text-to-speech (TTS) systems use a cascade of separately learned models: one
(such as Tacotron) which generates intermediate features (such as spectrograms)
from text, followed by a vocoder (such as WaveRNN) which generates waveform
samples from the intermediate features. The proposed system, in contrast, does
not use a fixed intermediate representation, and learns all parameters
end-to-end. Experiments show that the proposed model generates speech with
quality approaching a state-of-the-art neural TTS system, with significantly
improved generation speed.
| 2,021 |
Computation and Language
|
Hostility Detection Dataset in Hindi
|
In this paper, we present a novel hostility detection dataset in Hindi
language. We collect and manually annotate ~8200 online posts. The annotated
dataset covers four hostility dimensions: fake news, hate speech, offensive,
and defamation posts, along with a non-hostile label. The hostile posts are
also considered for multi-label tags due to a significant overlap among the
hostile classes. We release this dataset as part of the CONSTRAINT-2021 shared
task on hostile post detection.
| 2,020 |
Computation and Language
|
Acoustics Based Intent Recognition Using Discovered Phonetic Units for
Low Resource Languages
|
With recent advancements in language technologies, humans are now speaking to
devices. Increasing the reach of spoken language technologies requires building
systems in local languages. A major bottleneck here are the underlying
data-intensive parts that make up such systems, including automatic speech
recognition (ASR) systems that require large amounts of labelled data. With the
aim of aiding development of spoken dialog systems in low resourced languages,
we propose a novel acoustics based intent recognition system that uses
discovered phonetic units for intent classification. The system is made up of
two blocks - the first block is a universal phone recognition system that
generates a transcript of discovered phonetic units for the input audio, and
the second block performs intent classification from the generated phonetic
transcripts. We propose a CNN+LSTM based architecture and present results for
two language families - Indic languages and Romance languages - for two
different intent recognition tasks. We also perform multilingual training of
our intent classifier and show improved cross-lingual transfer and zero-shot
performance on an unknown language within the same language family.
| 2,021 |
Computation and Language
|
Naturalization of Text by the Insertion of Pauses and Filler Words
|
In this article, we introduce a set of methods to naturalize text based on
natural human speech. Voice-based interactions provide a natural way of
interfacing with electronic systems and have seen widespread adoption of
late. These computerized voices can be naturalized to some degree by inserting
pauses and filler words at appropriate positions. The first proposed text
transformation method uses the frequency of bigrams in the training data to
make appropriate insertions in the input sentence. It uses a probability
distribution to choose the insertions from a set of all possible insertions.
This method is fast and can be included before a Text-To-Speech module. The
second method uses a Recurrent Neural Network to predict the next word to be
inserted. It confirms the insertions given by the bigram method. Additionally,
the degree of naturalization can be controlled in both these methods. Based on
a blind survey, we conclude that the output of these text transformation
methods is comparable to natural speech.
| 2,020 |
Computation and Language
|
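A rough sketch of the first, bigram-frequency method described above: count how often each filler follows a given word in transcribed speech, then insert fillers after words by sampling from that distribution, scaled by a naturalization degree. The filler inventory, toy transcripts, and scaling rule are illustrative assumptions.

```python
# Bigram-frequency pause/filler insertion, a sketch of the first method in the
# abstract above. The tiny training corpus, filler inventory, and the way the
# naturalization degree scales insertion probability are illustrative assumptions.
import random
from collections import Counter

FILLERS = {"um", "uh", "like", "well"}

def train_bigram_filler_model(transcripts):
    """Count how often each filler follows each word in natural speech transcripts."""
    counts, word_totals = Counter(), Counter()
    for line in transcripts:
        words = line.lower().split()
        for prev, nxt in zip(words, words[1:]):
            if prev not in FILLERS:
                word_totals[prev] += 1
                if nxt in FILLERS:
                    counts[(prev, nxt)] += 1
    return counts, word_totals

def naturalize(sentence, counts, word_totals, degree=1.0, seed=None):
    """Insert fillers after words, sampled from the bigram distribution."""
    rng = random.Random(seed)
    out = []
    for word in sentence.lower().split():
        out.append(word)
        options = [(f, counts[(word, f)]) for f in FILLERS if counts[(word, f)] > 0]
        total = word_totals.get(word, 0)
        if options and total:
            p_insert = degree * sum(c for _, c in options) / total
            if rng.random() < min(p_insert, 1.0):
                fillers, weights = zip(*options)
                out.append(rng.choices(fillers, weights=weights, k=1)[0])
    return " ".join(out)

transcripts = [
    "so um I was like thinking about it",
    "it was uh really nice",
    "well I was um not sure about that",
]
counts, totals = train_bigram_filler_model(transcripts)
print(naturalize("I was thinking it was really nice", counts, totals, degree=1.5, seed=3))
```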
NLP-CIC @ DIACR-Ita: POS and Neighbor Based Distributional Models for
Lexical Semantic Change in Diachronic Italian Corpora
|
We present our systems and findings on unsupervised lexical semantic change
for the Italian language in the DIACR-Ita shared-task at EVALITA 2020. The task
is to determine whether a target word has evolved its meaning with time, only
relying on raw-text from two time-specific datasets. We propose two models
representing the target words across the periods to predict the changing words
using threshold and voting schemes. Our first model solely relies on
part-of-speech usage and an ensemble of distance measures. The second model
uses word embedding representations to extract the neighbors' relative
distances across spaces and proposes "the average of absolute differences" to
estimate lexical semantic change. Our models achieved competitive results, ranking third
in the DIACR-Ita competition. Furthermore, we experiment with the k_neighbor
parameter of our second model to compare the impact of using "the average of
absolute differences" versus the cosine distance used in Hamilton et al.
(2016).
| 2,020 |
Computation and Language
|
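A sketch of the neighbor-based "average of absolute differences" measure from the second model described above: for a target word, compare its distances to a set of nearest neighbors in the two time-specific spaces and average the absolute differences. How the neighbor set and threshold are chosen in the actual system may differ; the embeddings below are random placeholders.

```python
# A sketch of the neighbor-based "average of absolute differences" measure for
# lexical semantic change, as described in the abstract above. How the neighbor
# set is chosen and thresholded in the actual system may differ; embeddings here
# are random placeholders indexed by a shared vocabulary.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["pc", "rete", "campo", "sito", "pagina", "carta", "foglio", "disco"]
word2id = {w: i for i, w in enumerate(vocab)}
space_t1 = rng.normal(size=(len(vocab), 50))   # embedding space for period 1
space_t2 = rng.normal(size=(len(vocab), 50))   # embedding space for period 2

def cosine(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def avg_abs_diff(target, k=3):
    """Average |d_t1(target, n) - d_t2(target, n)| over the target's k nearest
    neighbors (union of neighbors found in either period)."""
    t = word2id[target]
    def neighbors(space):
        dists = {w: 1 - cosine(space[t], space[word2id[w]]) for w in vocab if w != target}
        return sorted(dists, key=dists.get)[:k], dists
    n1, d1 = neighbors(space_t1)
    n2, d2 = neighbors(space_t2)
    shared = set(n1) | set(n2)
    return np.mean([abs(d1[w] - d2[w]) for w in shared])

# Higher scores suggest the word's neighborhood (and hence its meaning) has shifted.
for w in ["rete", "carta", "disco"]:
    print(w, round(avg_abs_diff(w), 3))
```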
NLP-CIC @ PRELEARN: Mastering prerequisites relations, from handcrafted
features to embeddings
|
We present our systems and findings for the prerequisite relation learning
task (PRELEARN) at EVALITA 2020. The task aims to classify whether a pair of
concepts hold a prerequisite relation or not. We model the problem using
handcrafted features and embedding representations for in-domain and
cross-domain scenarios. Our submissions ranked first in both scenarios,
with average F1 scores of 0.887 and 0.690, respectively, across domains on the
test sets. We have made our code freely available.
| 2,020 |
Computation and Language
|
Know What You Don't Need: Single-Shot Meta-Pruning for Attention Heads
|
Deep pre-trained Transformer models have achieved state-of-the-art results
over a variety of natural language processing (NLP) tasks. By learning rich
language knowledge with millions of parameters, these models are usually
overparameterized and significantly increase the computational overhead in
applications. It is intuitive to address this issue by model compression. In
this work, we propose a method, called Single-Shot Meta-Pruning, to compress
deep pre-trained Transformers before fine-tuning. Specifically, we focus on
pruning unnecessary attention heads adaptively for different downstream tasks.
To measure the informativeness of attention heads, we train our Single-Shot
Meta-Pruner (SMP) with a meta-learning paradigm aiming to maintain the
distribution of text representations after pruning. Compared with existing
compression methods for pre-trained models, our method can reduce the overhead
of both fine-tuning and inference. Experimental results show that our pruner
can selectively prune 50% of attention heads with little impact on the
performance on downstream tasks and even provide better text representations.
The source code will be released in the future.
| 2,020 |
Computation and Language
|
AlphaMWE: Construction of Multilingual Parallel Corpora with MWE
Annotations
|
In this work, we present the construction of multilingual parallel corpora
with annotation of multiword expressions (MWEs). MWEs include verbal MWEs
(vMWEs) defined in the PARSEME shared task that have a verb as the head of the
studied terms. The annotated vMWEs are also bilingually and multilingually
aligned manually. The languages covered include English, Chinese, Polish, and
German. Our original English corpus is taken from the PARSEME shared task in
2018. We performed machine translation of this source corpus followed by human
post editing and annotation of target MWEs. Strict quality control was applied
to limit errors: each MT output sentence first received manual post editing and
annotation, followed by a second round of manual quality checking. One of our
findings during corpora preparation is that accurate translation of MWEs
presents challenges to MT systems. To facilitate further MT research, we
present a categorisation of the error types encountered by MT systems in
performing MWE related translation. To acquire a broader view of MT issues, we
selected four popular state-of-the-art MT models for comparisons namely:
Microsoft Bing Translator, GoogleMT, Baidu Fanyi and DeepL MT. Because of the
noise removal, translation post editing and MWE annotation by human
professionals, we believe our AlphaMWE dataset will be an asset for
cross-lingual and multilingual research, such as MT and information extraction.
Our multilingual corpora are available as open access at
github.com/poethan/AlphaMWE.
| 2,020 |
Computation and Language
|
PairRE: Knowledge Graph Embeddings via Paired Relation Vectors
|
Distance based knowledge graph embedding methods show promising results on
link prediction task, on which two topics have been widely studied: one is the
ability to handle complex relations, such as N-to-1, 1-to-N and N-to-N, the
other is to encode various relation patterns, such as symmetry/antisymmetry.
However, the existing methods fail to solve these two problems at the same
time, which leads to unsatisfactory results. To mitigate this problem, we
propose PairRE, a model with paired vectors for each relation representation.
The paired vectors enable an adaptive adjustment of the margin in the loss
function to fit complex relations. Besides, PairRE is capable of encoding three
important relation patterns: symmetry/antisymmetry, inverse and composition.
Given simple constraints on relation representations, PairRE can further encode
subrelations. Experiments on link prediction benchmarks demonstrate the
proposed key capabilities of PairRE. Moreover, we set a new state-of-the-art on
two knowledge graph datasets of the challenging Open Graph Benchmark.
| 2,021 |
Computation and Language
|
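A sketch of the paired-relation-vector scoring idea described above: each relation has a head vector and a tail vector, and a triple is scored by how close the element-wise products h * r_head and t * r_tail are. Normalization, norm choice, and loss details here are simplifications that may not match the authors' implementation.

```python
# A sketch of the PairRE scoring idea described above: each relation r has two
# vectors (r_head, r_tail), and a triple (h, r, t) is scored by how close the
# element-wise products h * r_head and t * r_tail are. Normalization and norm
# choice are simplifications and may differ from the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairRE(nn.Module):
    def __init__(self, n_entities, n_relations, dim=200):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel_head = nn.Embedding(n_relations, dim)
        self.rel_tail = nn.Embedding(n_relations, dim)
        for emb in (self.ent, self.rel_head, self.rel_tail):
            nn.init.uniform_(emb.weight, -0.05, 0.05)

    def score(self, h, r, t):
        # Entities are L2-normalized; the paired relation vectors rescale head and
        # tail separately, which lets the effective margin adapt per relation.
        h_e = F.normalize(self.ent(h), p=2, dim=-1)
        t_e = F.normalize(self.ent(t), p=2, dim=-1)
        return -(h_e * self.rel_head(r) - t_e * self.rel_tail(r)).norm(p=1, dim=-1)

model = PairRE(n_entities=1000, n_relations=20)
h = torch.tensor([3, 7]); r = torch.tensor([0, 5]); t = torch.tensor([42, 9])
print(model.score(h, r, t))   # higher (less negative) = more plausible triple
```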
Rethinking the Value of Transformer Components
|
The Transformer has become the state-of-the-art translation model, yet how each
intermediate component contributes to model performance is not well studied,
which poses significant challenges for designing optimal architectures. In this
work, we bridge this gap by evaluating the impact of individual components
(sub-layers) in trained Transformer models from different perspectives.
Experimental results across language pairs, training strategies, and model
capacities show that certain components are consistently more important than
others. We also report a number of interesting findings that might help humans
better analyze, understand, and improve Transformer models. Based on these
observations, we further propose a new training strategy that improves
translation performance by distinguishing the unimportant components during
training.
| 2,020 |
Computation and Language
|
Knowledge-driven Data Construction for Zero-shot Evaluation in
Commonsense Question Answering
|
Recent developments in pre-trained neural language modeling have led to leaps
in accuracy on commonsense question-answering benchmarks. However, there is
increasing concern that models overfit to specific tasks, without learning to
utilize external knowledge or perform general semantic reasoning. In contrast,
zero-shot evaluations have shown promise as a more robust measure of a model's
general reasoning abilities. In this paper, we propose a novel neuro-symbolic
framework for zero-shot question answering across commonsense tasks. Guided by
a set of hypotheses, the framework studies how to transform various
pre-existing knowledge resources into a form that is most effective for
pre-training models. We vary the set of language models, training regimes,
knowledge sources, and data generation strategies, and measure their impact
across tasks. Extending prior work, we devise and compare four constrained
distractor-sampling strategies. We provide empirical results across five
commonsense question-answering tasks with data generated from five external
knowledge resources. We show that, while an individual knowledge graph is
better suited for specific tasks, a global knowledge graph brings consistent
gains across different tasks. In addition, both preserving the structure of the
task as well as generating fair and informative questions help language models
learn more effectively.
| 2,020 |
Computation and Language
|
Explainable Automated Fact-Checking: A Survey
|
A number of exciting advances have been made in automated fact-checking
thanks to increasingly larger datasets and more powerful systems, leading to
improvements in the complexity of claims which can be accurately fact-checked.
However, despite these advances, there are still desirable functionalities
missing from the fact-checking pipeline. In this survey, we focus on the
explanation functionality -- that is, fact-checking systems providing reasons
for their predictions. We summarize existing methods for explaining the
predictions of fact-checking systems and we explore trends in this topic.
Further, we consider what makes for good explanations in this specific domain
through a comparative analysis of existing fact-checking explanations against
some desirable properties. Finally, we propose further research directions for
generating fact-checking explanations, and describe how these may lead to
improvements in the research area.
| 2,020 |
Computation and Language
|
Best Practices for Data-Efficient Modeling in NLG: How to Train
Production-Ready Neural Models with Less Data
|
Natural language generation (NLG) is a critical component in conversational
systems, owing to its role of formulating a correct and natural text response.
Traditionally, NLG components have been deployed using template-based
solutions. Although neural network solutions recently developed in the research
community have been shown to provide several benefits, deployment of such
model-based solutions has been challenging due to high latency, correctness
issues, and high data needs. In this paper, we present approaches that have
helped us deploy data-efficient neural solutions for NLG in conversational
systems to production. We describe a family of sampling and modeling techniques
to attain production quality with light-weight neural network models using only
a fraction of the data that would be necessary otherwise, and present a thorough
comparison between them. Our results show that domain complexity dictates the
appropriate approach to achieve high data efficiency. Finally, we distill the
lessons from our experimental findings into a list of best practices for
production-level NLG model development, and present them in a brief runbook.
Importantly, the end products of all of the techniques are small
sequence-to-sequence models (2Mb) that we can reliably deploy in production.
| 2,020 |
Computation and Language
|
Denoising Relation Extraction from Document-level Distant Supervision
|
Distant supervision (DS) has been widely used to generate auto-labeled data
for sentence-level relation extraction (RE), which improves RE performance.
However, the existing success of DS cannot be directly transferred to the more
challenging document-level relation extraction (DocRE), since the inherent
noise in DS may be amplified at the document level and significantly harm the
performance of RE. To address this challenge, we propose a novel pre-trained
model for DocRE, which denoises the document-level DS data via multiple
pre-training tasks. Experimental results on the large-scale DocRE benchmark
show that our model can capture useful information from noisy DS data and
achieve promising results.
| 2,020 |
Computation and Language
|
On the Practical Ability of Recurrent Neural Networks to Recognize
Hierarchical Languages
|
While recurrent models have been effective in NLP tasks, their performance on
context-free languages (CFLs) has been found to be quite weak. Given that CFLs
are believed to capture important phenomena such as hierarchical structure in
natural languages, this discrepancy in performance calls for an explanation. We
study the performance of recurrent models on Dyck-n languages, a particularly
important and well-studied class of CFLs. We find that while recurrent models
generalize nearly perfectly if the lengths of the training and test strings are
from the same range, they perform poorly if the test strings are longer. At the
same time, we observe that recurrent models are expressive enough to recognize
Dyck words of arbitrary lengths in finite precision if their depths are
bounded. Hence, we evaluate our models on samples generated from Dyck languages
with bounded depth and find that they are indeed able to generalize to much
higher lengths. Since natural language datasets have nested dependencies of
bounded depth, this may help explain why they perform well in modeling
hierarchical dependencies in natural language data despite prior works
indicating poor generalization performance on Dyck languages. We perform
probing studies to support our results and provide comparisons with
Transformers.
| 2,020 |
Computation and Language
|
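A small generator for bounded-depth Dyck-n strings, the kind of data used in the bounded-depth experiments described above; the sampling probabilities and length handling are illustrative choices rather than the paper's exact generation scheme.

```python
# Generating Dyck-n strings with bounded nesting depth, the kind of data used in
# the bounded-depth experiments described above. The sampling probabilities are an
# illustrative choice, not the exact generation scheme from the paper.
import random

BRACKETS = [("(", ")"), ("[", "]"), ("{", "}")]   # Dyck-3

def sample_dyck(max_len=40, max_depth=4, p_open=0.5, seed=None):
    rng = random.Random(seed)
    out, stack = [], []
    while len(out) < max_len:
        can_open = len(stack) < max_depth
        if stack and (not can_open or rng.random() > p_open):
            out.append(stack.pop())               # close the most recent bracket
        elif can_open:
            o, c = rng.choice(BRACKETS)
            out.append(o); stack.append(c)        # open a new bracket
        else:
            break
    out.extend(reversed(stack))                   # close whatever is still open
    return "".join(out)

def is_valid_dyck(s):
    closers = {c: o for o, c in BRACKETS}
    stack = []
    for ch in s:
        if ch in closers:
            if not stack or stack.pop() != closers[ch]:
                return False
        else:
            stack.append(ch)
    return not stack

for i in range(3):
    s = sample_dyck(max_len=30, max_depth=3, seed=i)
    print(s, is_valid_dyck(s))
```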
Detecting Emerging Symptoms of COVID-19 using Context-based Twitter
Embeddings
|
In this paper, we present an iterative graph-based approach for the detection
of symptoms of COVID-19, the pathology of which seems to be evolving. More
generally, the method can be applied to finding context-specific words and
texts (e.g. symptom mentions) in large imbalanced corpora (e.g. all tweets
mentioning #COVID-19). Given the novelty of COVID-19, we also test if the
proposed approach generalizes to the problem of detecting Adverse Drug Reaction
(ADR). We find that the approach applied to Twitter data can detect symptom
mentions substantially before being reported by the Centers for Disease Control
(CDC).
| 2,020 |
Computation and Language
|
A Gold Standard Methodology for Evaluating Accuracy in Data-To-Text
Systems
|
Most Natural Language Generation systems need to produce accurate texts. We
propose a methodology for high-quality human evaluation of the accuracy of
generated texts, which is intended to serve as a gold-standard for accuracy
evaluations of data-to-text systems. We use our methodology to evaluate the
accuracy of computer generated basketball summaries. We then show how our gold
standard evaluation can be used to validate automated metrics.
| 2,020 |
Computation and Language
|
Adapting a Language Model for Controlled Affective Text Generation
|
Humans use language not just to convey information but also to express their
inner feelings and mental states. In this work, we adapt the state-of-the-art
language generation models to generate affective (emotional) text. We posit a
model capable of generating affect-driven and topic-focused sentences without
losing grammatical correctness as the affect intensity increases. We propose to
incorporate emotion as a prior for probabilistic state-of-the-art text
generation models such as GPT-2. The model gives the user the flexibility to
control the category and intensity of emotion as well as the topic of the
generated text. Previous attempts at modelling fine-grained emotions lose
grammatical correctness at extreme intensities, but our model is resilient
to this and delivers robust results at all intensities. We conduct automated
evaluations and human studies to test the performance of our model and provide
a detailed comparison of the results with other models. In all evaluations, our
model outperforms existing affective text generation models.
| 2,020 |
Computation and Language
|
Stochastic Attention Head Removal: A simple and effective method for
improving Transformer Based ASR Models
|
Recently, Transformer based models have shown competitive automatic speech
recognition (ASR) performance. One key factor in the success of these models is
the multi-head attention mechanism. However, for trained models, we have
previously observed that many attention matrices are close to diagonal,
indicating the redundancy of the corresponding attention heads. We have also
found that some architectures with reduced numbers of attention heads have
better performance. Since the search for the best structure is time
prohibitive, we propose to randomly remove attention heads during training and
keep all attention heads at test time, thus the final model is an ensemble of
models with different architectures. The proposed method also forces each head
to independently learn the most useful patterns. We apply the proposed method to
train Transformer based and Convolution-augmented Transformer (Conformer) based
ASR models. Our method gives consistent performance gains over strong baselines
on the Wall Street Journal, AISHELL, Switchboard and AMI datasets. To the best
of our knowledge, we have achieved state-of-the-art end-to-end Transformer
based model performance on Switchboard and AMI.
| 2,021 |
Computation and Language
|
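A sketch of the stochastic head removal idea described above: during training, whole attention heads are randomly zeroed out, while at evaluation time all heads are active. Whether and how the surviving heads are rescaled is left out here and is an assumption of this sketch.

```python
# Stochastic attention head removal, a sketch of the training-time trick described
# above: whole heads are randomly zeroed during training, all heads are kept at
# test time. No rescaling is applied here; that choice is an assumption.
import math
import torch
import torch.nn as nn

class HeadDropSelfAttention(nn.Module):
    """Self-attention where whole heads are randomly removed during training
    and all heads are used at evaluation time."""
    def __init__(self, d_model=256, n_heads=4, p_remove=0.25):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d_k, self.p_remove = n_heads, d_model // n_heads, p_remove
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x):
        B, T, D = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # reshape each projection to (B, heads, T, d_k)
        q, k, v = (t.view(B, T, self.h, self.d_k).transpose(1, 2) for t in (q, k, v))
        att = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        heads = att @ v                                   # (B, h, T, d_k)
        if self.training and self.p_remove > 0:
            keep = (torch.rand(B, self.h, 1, 1, device=x.device) > self.p_remove).float()
            heads = heads * keep                          # drop whole heads
        return self.out(heads.transpose(1, 2).reshape(B, T, D))

layer = HeadDropSelfAttention()
x = torch.randn(2, 10, 256)
layer.train(); print(layer(x).shape)   # some heads randomly zeroed
layer.eval();  print(layer(x).shape)   # all heads active
```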
Exploring End-to-End Differentiable Natural Logic Modeling
|
We explore end-to-end trained differentiable models that integrate natural
logic with neural networks, aiming to keep the backbone of natural language
reasoning based on the natural logic formalism while introducing subsymbolic
vector representations and neural components. The proposed model adapts module
networks to model natural logic operations and is enhanced with a memory
component to model contextual information. Experiments show that the proposed
framework can effectively model monotonicity-based reasoning, compared to the
baseline neural network models without built-in inductive bias for
monotonicity-based reasoning. Our proposed model proves to be robust when
transferred from upward to downward inference. We perform further analyses of
the proposed model's performance on aggregation, showing the effectiveness of
the proposed subcomponents in helping achieve better intermediate aggregation
performance.
| 2,020 |
Computation and Language
|
Metrics also Disagree in the Low Scoring Range: Revisiting Summarization
Evaluation Metrics
|
In text summarization, evaluating the efficacy of automatic metrics without
human judgments has recently become popular. One exemplar work concludes that
automatic metrics strongly disagree when ranking high-scoring summaries. In
this paper, we revisit their experiments and find that their observations stem
from the fact that metrics disagree in ranking summaries from any narrow
scoring range. We hypothesize that this may be because summaries are similar to
each other in a narrow scoring range and are thus, difficult to rank. Apart
from the width of the scoring range of summaries, we analyze three other
properties that impact inter-metric agreement - Ease of Summarization,
Abstractiveness, and Coverage. To encourage reproducible research, we make all
our analysis code and data publicly available.
| 2,020 |
Computation and Language
|
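The narrow-range disagreement effect discussed above can be illustrated by computing inter-metric rank correlation (e.g. Kendall's tau) over summaries restricted to a narrow band of scores; the metric values below are synthetic placeholders standing in for real metrics such as ROUGE or BERTScore.

```python
# Measuring how much two automatic metrics agree when ranking only summaries that
# fall in a narrow scoring range, the analysis setting discussed above. Metric
# scores here are synthetic placeholders standing in for e.g. ROUGE and BERTScore.
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
n = 2000
quality = rng.uniform(0, 1, n)                    # latent summary quality
metric_a = quality + rng.normal(0, 0.05, n)       # two noisy metrics of that quality
metric_b = quality + rng.normal(0, 0.05, n)

def agreement_in_range(lo, hi):
    mask = (metric_a >= lo) & (metric_a < hi)     # keep a narrow band of one metric
    tau, _ = kendalltau(metric_a[mask], metric_b[mask])
    return tau, mask.sum()

print("full range :", kendalltau(metric_a, metric_b)[0])
for lo, hi in [(0.0, 0.2), (0.4, 0.6), (0.8, 1.0)]:
    tau, count = agreement_in_range(lo, hi)
    print(f"range [{lo}, {hi}) over {count} summaries: tau = {tau:.3f}")
```

Within each narrow band the latent quality varies little, so metric noise dominates and the rank correlation drops well below its full-range value, which mirrors the effect the abstract describes.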
What time is it? Temporal Analysis of Novels
|
Recognizing the flow of time in a story is a crucial aspect of understanding
it. Prior work related to time has primarily focused on identifying temporal
expressions or relative sequencing of events, but here we propose
computationally annotating each line of a book with wall clock times, even in
the absence of explicit time-descriptive phrases. To do so, we construct a data
set of hourly time phrases from 52,183 fictional books. We then construct a
time-of-day classification model that achieves an average error of 2.27 hours.
Furthermore, we show that by analyzing a book in whole using dynamic
programming of breakpoints, we can roughly partition a book into segments that
each correspond to a particular time-of-day. This approach improves upon
baselines by over two hours. Finally, we apply our model to a corpus of
literature categorized by different periods in history, to show interesting
trends of hourly activity throughout the past. Among several observations, we
find that the fraction of events taking place after 10 P.M. jumps after 1880,
coincident with the advent of the electric light bulb and city lights.
| 2,020 |
Computation and Language
|
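A sketch of the breakpoint dynamic programming mentioned above: given per-line log-probabilities over hours, partition the lines into contiguous segments that each receive a single time-of-day, trading off segment fit against a per-breakpoint penalty. The scores and the penalty are illustrative assumptions, not the authors' exact formulation.

```python
# Partitioning a sequence of lines into contiguous segments that each get a single
# time-of-day label, via dynamic programming over breakpoints, in the spirit of the
# analysis described above. The per-line hour log-probabilities and the breakpoint
# penalty are illustrative assumptions, not the paper's exact formulation.
import numpy as np

def segment_by_hour(log_probs, penalty=2.0):
    """log_probs: (n_lines, 24) array of per-line log P(hour | line).
    Returns a list of (start, end_exclusive, hour) segments."""
    n, H = log_probs.shape
    prefix = np.vstack([np.zeros(H), np.cumsum(log_probs, axis=0)])  # (n+1, H)
    best = np.full(n + 1, -np.inf); best[0] = 0.0
    back = [None] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            seg = prefix[i] - prefix[j]          # summed log-probs for lines j..i-1
            h = int(seg.argmax())
            score = best[j] + seg[h] - penalty   # one penalty per segment
            if score > best[i]:
                best[i], back[i] = score, (j, h)
    # Recover the segmentation by walking the backpointers
    segments, i = [], n
    while i > 0:
        j, h = back[i]
        segments.append((j, i, h))
        i = j
    return segments[::-1]

rng = np.random.default_rng(0)
# Toy book: 30 lines set around 9 a.m., then 40 lines set around 10 p.m.
true_hours = [9] * 30 + [22] * 40
log_probs = np.full((70, 24), np.log(0.01))
for line, h in enumerate(true_hours):
    log_probs[line, h] = np.log(0.4) + rng.normal(0, 0.1)
print(segment_by_hour(log_probs))
```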
Automatic Summarization of Open-Domain Podcast Episodes
|
We present implementation details of our abstractive summarizers that achieve
competitive results on the Podcast Summarization task of TREC 2020. A concise
textual summary that captures important information is crucial for users to
decide whether to listen to the podcast. Prior work focuses primarily on
learning contextualized representations. Instead, we investigate several
less-studied aspects of neural abstractive summarization, including (i) the
importance of selecting important segments from transcripts to serve as input
to the summarizer; (ii) striking a balance between the amount and quality of
training instances; (iii) the appropriate summary length and start/end points.
We highlight the design considerations behind our system and offer key insights
into the strengths and weaknesses of neural abstractive systems. Our results
suggest that identifying important segments from transcripts to use as input to
an abstractive summarizer is advantageous for summarizing long documents. Our
best system achieves a quality rating of 1.559 judged by NIST evaluators---an
absolute increase of 0.268 (+21%) over the creator descriptions.
| 2,020 |
Computation and Language
|
CxGBERT: BERT meets Construction Grammar
|
While lexico-semantic elements no doubt capture a large amount of linguistic
information, it has been argued that they do not capture all information
contained in text. This assumption is central to constructionist approaches to
language which argue that language consists of constructions, learned pairings
of a form and a function or meaning that are either frequent or have a meaning
that cannot be predicted from their component parts. BERT's training objectives
give it access to a tremendous amount of lexico-semantic information, and while
BERTology has shown that BERT captures certain important linguistic dimensions,
there have been no studies exploring the extent to which BERT might have access
to constructional information. In this work we design several probes and
conduct extensive experiments to answer this question. Our results allow us to
conclude that BERT does indeed have access to a significant amount of
information, much of which linguists typically call constructional information.
The impact of this observation is potentially far-reaching as it provides
insights into what deep learning methods learn from text, while also showing
that information contained in constructions is redundantly encoded in
lexico-semantics.
| 2,020 |
Computation and Language
|
"What Do You Mean by That?" A Parser-Independent Interactive Approach
for Enhancing Text-to-SQL
|
In Natural Language Interfaces to Databases systems, the text-to-SQL
technique allows users to query databases by using natural language questions.
Though significant progress in this area has been made recently, most parsers
may fall short when they are deployed in real systems. One main reason stems
from the difficulty of fully understanding the users' natural language
questions. In this paper, we include human in the loop and present a novel
parser-independent interactive approach (PIIA) that interacts with users using
multi-choice questions and can easily work with arbitrary parsers. Experiments
were conducted on two cross-domain datasets, the WikiSQL and the more complex
Spider, with five state-of-the-art parsers. Using both simulation and human
evaluation, these experiments demonstrated that PIIA is capable of enhancing
text-to-SQL performance within a limited number of interaction turns.
| 2,020 |
Computation and Language
|
Chapter Captor: Text Segmentation in Novels
|
Books are typically segmented into chapters and sections, representing
coherent subnarratives and topics. We investigate the task of predicting
chapter boundaries, as a proxy for the general task of segmenting long texts.
We build a Project Gutenberg chapter segmentation data set of 9,126 English
novels, using a hybrid approach combining neural inference and rule matching to
recognize chapter title headers in books, achieving an F1-score of 0.77 on this
task. Using this annotated data as ground truth after removing structural cues,
we present cut-based and neural methods for chapter segmentation, achieving an
F1-score of 0.453 on the challenging task of exact break prediction over
book-length documents. Finally, we reveal interesting historical trends in the
chapter structure of novels.
| 2,020 |
Computation and Language
|
Text Classification through Glyph-aware Disentangled Character Embedding
and Semantic Sub-character Augmentation
|
We propose a new character-based text classification framework for
non-alphabetic languages, such as Chinese and Japanese. Our framework consists
of a variational character encoder (VCE) and character-level text classifier.
The VCE is composed of a $\beta$-variational auto-encoder ($\beta$-VAE) that
learns the proposed glyph-aware disentangled character embedding (GDCE). Since
our GDCE provides zero-mean unit-variance character embeddings that are
dimensionally independent, it is applicable for our interpretable data
augmentation, namely, semantic sub-character augmentation (SSA). In this paper,
we evaluated our framework using Japanese text classification tasks at the
document- and sentence-level. We confirmed that our GDCE and SSA not only
provided embedding interpretability but also improved the classification
performance. Our proposal achieved a result competitive with the state-of-the-art
model while also providing model interpretability. Our code is available at
https://github.com/IyatomiLab/GDCE-SSA
| 2,020 |
Computation and Language
|
Efficient End-to-End Speech Recognition Using Performers in Conformers
|
On-device end-to-end speech recognition poses a high requirement on model
efficiency. Most prior works improve the efficiency by reducing model sizes. We
propose to reduce the complexity of model architectures in addition to model
sizes. More specifically, we reduce the floating-point operations in conformer
by replacing the transformer module with a performer. The proposed
attention-based efficient end-to-end speech recognition model yields
competitive performance on the LibriSpeech corpus with 10 million
parameters and linear computational complexity. The proposed model also
outperforms previous lightweight end-to-end models by about 20% relatively in
word error rate.
| 2,020 |
Computation and Language
|
Pointing to Subwords for Generating Function Names in Source Code
|
We tackle the task of automatically generating a function name from source
code. Existing generators face difficulties in generating low-frequency or
out-of-vocabulary subwords. In this paper, we propose two strategies for
copying low-frequency or out-of-vocabulary subwords in inputs. Our best
performing model showed an improvement over the conventional method in terms of
our modified F1 and accuracy on the Java-small and Java-large datasets.
| 2,020 |
Computation and Language
|
AI Stories: An Interactive Narrative System for Children
|
AI Stories is a proposed interactive dialogue system, that lets children
co-create narrative worlds through conversation. Over the next three years this
system will be developed and tested within pediatric wards, where it offers a
useful resource between the gap of education and play. Telling and making
stories is a fundamental part of language play, and its chatty and nonsensical
qualities are important; therefore, the prolonged usage an automated system
offers is a benefit to children. In this paper I present the current state
of this project, in its more experimental and general guise. Conceptually,
story-telling through dialogue relates to the pre-print interpretation of
story, beyond the static and linear medium, in which stories were performative,
temporal, and social.
| 2,020 |
Computation and Language
|
CapWAP: Captioning with a Purpose
|
The traditional image captioning task uses generic reference captions to
provide textual information about images. Different user populations, however,
will care about different visual aspects of images. In this paper, we propose a
new task, Captioning with a Purpose (CapWAP). Our goal is to develop systems
that can be tailored to be useful for the information needs of an intended
population, rather than merely provide generic information about an image. In
this task, we use question-answer (QA) pairs---a natural expression of
information need---from users, instead of reference captions, for both training
and post-inference evaluation. We show that it is possible to use reinforcement
learning to directly optimize for the intended information need, by rewarding
outputs that allow a question answering model to provide correct answers to
sampled user questions. We convert several visual question answering datasets
into CapWAP datasets, and demonstrate that under a variety of scenarios our
purposeful captioning system learns to anticipate and fulfill specific
information needs better than its generic counterparts, as measured by QA
performance on user questions from unseen images, when using the caption alone
as context.
| 2,020 |
Computation and Language
|
BERT-JAM: Boosting BERT-Enhanced Neural Machine Translation with Joint
Attention
|
BERT-enhanced neural machine translation (NMT) aims at leveraging
BERT-encoded representations for translation tasks. A recently proposed
approach uses attention mechanisms to fuse Transformer's encoder and decoder
layers with BERT's last-layer representation and shows enhanced performance.
However, their method does not allow for a flexible distribution of attention
between the BERT representation and the encoder/decoder representation. In this
work, we propose a novel BERT-enhanced NMT model called BERT-JAM which improves
upon existing models from two aspects: 1) BERT-JAM uses joint-attention modules
to allow the encoder/decoder layers to dynamically allocate attention between
different representations, and 2) BERT-JAM allows the encoder/decoder layers to
make use of BERT's intermediate representations by composing them using a gated
linear unit (GLU). We train BERT-JAM with a novel three-phase optimization
strategy that progressively unfreezes different components of BERT-JAM. Our
experiments show that BERT-JAM achieves SOTA BLEU scores on multiple
translation tasks.
| 2,020 |
Computation and Language
|
Character-level Representations Improve DRS-based Semantic Parsing Even
in the Age of BERT
|
We combine character-level and contextual language model representations to
improve performance on Discourse Representation Structure parsing. Character
representations can easily be added in a sequence-to-sequence model in either
one encoder or as a fully separate encoder, with improvements that are robust
to different language models, languages and data sets. For English, these
improvements are larger than adding individual sources of linguistic
information or adding non-contextual embeddings. A new method of analysis based
on semantic tags demonstrates that the character-level representations improve
performance across a subset of selected semantic phenomena.
| 2,020 |
Computation and Language
|
Low-Resource Adaptation of Neural NLP Models
|
Real-world applications of natural language processing (NLP) are challenging.
NLP models rely heavily on supervised machine learning and require large
amounts of annotated data. These resources are often based on language data
available in large quantities, such as English newswire. However, in real-world
applications of NLP, the textual resources vary across several dimensions, such
as language, dialect, topic, and genre. It is challenging to find annotated
data of sufficient amount and quality. The objective of this thesis is to
investigate methods for dealing with such low-resource scenarios in information
extraction and natural language understanding. To this end, we study distant
supervision and sequential transfer learning in various low-resource settings.
We develop and adapt neural NLP models to explore a number of research
questions concerning NLP tasks with minimal or no training data.
| 2,020 |
Computation and Language
|