Titles | Abstracts | Years | Categories
---|---|---|---|
Improving Clinical Document Understanding on COVID-19 Research with
Spark NLP
|
Following the global COVID-19 pandemic, the number of scientific papers
studying the virus has grown massively, leading to increased interest in
automated literature review. We present a clinical text mining system that
improves on previous efforts in three ways. First, it can recognize over 100
different entity types including social determinants of health, anatomy, risk
factors, and adverse events in addition to other commonly used clinical and
biomedical entities. Second, the text processing pipeline includes assertion
status detection, to distinguish between clinical facts that are present,
absent, conditional, or about someone other than the patient. Third, the deep
learning models used are more accurate than previously available, leveraging an
integrated pipeline of state-of-the-art pretrained named entity recognition
models, and improving on the previous best performing benchmarks for assertion
status detection. We illustrate extracting trends and insights, e.g. most
frequent disorders and symptoms, and most common vital signs and EKG findings,
from the COVID-19 Open Research Dataset (CORD-19). The system is built using
the Spark NLP library which natively supports scaling to use distributed
clusters, leveraging GPUs, configurable and reusable NLP pipelines, healthcare
specific embeddings, and the ability to train models to support new entity
types or human languages with no code changes.
| 2,020 |
Computation and Language
|
Semantics Altering Modifications for Evaluating Comprehension in Machine
Reading
|
Advances in NLP have yielded impressive results for the task of machine
reading comprehension (MRC), with approaches having been reported to achieve
performance comparable to that of humans. In this paper, we investigate whether
state-of-the-art MRC models are able to correctly process Semantics Altering
Modifications (SAM): linguistically-motivated phenomena that alter the
semantics of a sentence while preserving most of its lexical surface form. We
present a method to automatically generate and align challenge sets featuring
original and altered examples. We further propose a novel evaluation
methodology to correctly assess the capability of MRC systems to process these
examples independent of the data they were optimised on, by discounting for
effects introduced by domain shift. In a large-scale empirical study, we apply
the methodology in order to evaluate extractive MRC models with regard to their
capability to correctly process SAM-enriched data. We comprehensively cover 12
different state-of-the-art neural architecture configurations and four training
datasets and find that -- despite their well-known remarkable performance --
optimised models consistently struggle to correctly process semantically
altered data.
| 2,021 |
Computation and Language
|
A Taxonomy of Empathetic Response Intents in Human Social Conversations
|
Open-domain conversational agents or chatbots are becoming increasingly
popular in the natural language processing community. One of the challenges is
enabling them to converse in an empathetic manner. Current neural response
generation methods rely solely on end-to-end learning from large scale
conversation data to generate dialogues. This approach can produce socially
unacceptable responses due to the lack of large-scale quality data used to
train the neural models. However, recent work has shown the promise of
combining dialogue act/intent modelling and neural response generation. This
hybrid method improves the response quality of chatbots and makes them more
controllable and interpretable. A key element in dialog intent modelling is the
development of a taxonomy. Inspired by this idea, we have manually labeled 500
response intents using a subset of a sizeable empathetic dialogue dataset (25K
dialogues). Our goal is to produce a large-scale taxonomy for empathetic
response intents. Furthermore, using lexical and machine learning methods, we
automatically analysed both speaker and listener utterances of the entire
dataset with identified response intents and 32 emotion categories. Finally, we
use information visualization methods to summarize emotional dialogue exchange
patterns and their temporal progression. These results reveal novel and
important empathy patterns in human-human open-domain conversations and can
serve as heuristics for hybrid approaches.
| 2,020 |
Computation and Language
|
Frame-level SpecAugment for Deep Convolutional Neural Networks in Hybrid
ASR Systems
|
Inspired by SpecAugment, a data augmentation method for end-to-end ASR
systems, we propose a frame-level SpecAugment method (f-SpecAugment) to improve
the performance of deep convolutional neural networks (CNN) for hybrid HMM
based ASR systems. Similar to the utterance level SpecAugment, f-SpecAugment
performs three transformations: time warping, frequency masking, and time
masking. Instead of applying the transformations at the utterance level,
f-SpecAugment applies them to each convolution window independently during
training. We demonstrate that f-SpecAugment is more effective than the
utterance level SpecAugment for deep CNN based hybrid models. We evaluate the
proposed f-SpecAugment on 50-layer Self-Normalizing Deep CNN (SNDCNN) acoustic
models trained with up to 25000 hours of training data. We observe
f-SpecAugment reduces WER by 0.5-4.5% relatively across different ASR tasks for
four languages. As the benefits of augmentation techniques tend to diminish as
training data size increases, the large scale training reported is important in
understanding the effectiveness of f-SpecAugment. Our experiments demonstrate
that even with 25k hours of training data, f-SpecAugment is still effective. We also
demonstrate that f-SpecAugment has benefits approximately equivalent to
doubling the amount of training data for deep CNNs.
| 2,020 |
Computation and Language
|
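The f-SpecAugment abstract above describes three per-window transformations (time warping, frequency masking, time masking). Below is a minimal NumPy sketch of the masking part applied to each convolution window of a log-mel feature matrix; time warping is omitted, and the window size, mask widths, and function names are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def mask_window(window, num_freq_masks=1, num_time_masks=1, max_f=8, max_t=5, rng=None):
    """Apply SpecAugment-style frequency and time masking to one (T, F) feature window."""
    rng = rng or np.random.default_rng()
    w = window.copy()
    T, F = w.shape
    for _ in range(num_freq_masks):
        f = int(rng.integers(0, max_f + 1))
        f0 = int(rng.integers(0, max(1, F - f)))
        w[:, f0:f0 + f] = 0.0          # zero out a band of frequency bins
    for _ in range(num_time_masks):
        t = int(rng.integers(0, max_t + 1))
        t0 = int(rng.integers(0, max(1, T - t)))
        w[t0:t0 + t, :] = 0.0          # zero out a span of frames
    return w

def f_specaugment(features, window_size=32):
    """Mask each (non-overlapping) convolution window independently during training."""
    out = features.copy()
    for start in range(0, features.shape[0] - window_size + 1, window_size):
        out[start:start + window_size] = mask_window(out[start:start + window_size])
    return out

# Example: augment a random 200-frame, 80-dim log-mel "utterance".
augmented = f_specaugment(np.random.randn(200, 80))
```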
Using multiple ASR hypotheses to boost i18n NLU performance
|
Current voice assistants typically use the best hypothesis yielded by their
Automatic Speech Recognition (ASR) module as input to their Natural Language
Understanding (NLU) module, thereby losing helpful information that might be
stored in lower-ranked ASR hypotheses. We explore the change in performance of
NLU-associated tasks when utilizing the five-best ASR hypotheses, compared to the
status quo, for two language datasets, German and Portuguese. To harvest
information from the ASR five-best, we leverage extractive summarization and
joint extractive-abstractive summarization models for Domain Classification
(DC) experiments while using a sequence-to-sequence model with a pointer
generator network for Intent Classification (IC) and Named Entity Recognition
(NER) multi-task experiments. For the DC full test set, we observe significant
improvements of up to 7.2% and 15.5% in micro-averaged F1 scores, for German
and Portuguese, respectively. In cases where the best ASR hypothesis was not an
exact match to the transcribed utterance (mismatched test set), we see
improvements of up to 6.7% and 8.8% micro-averaged F1 scores, for German and
Portuguese, respectively. For IC and NER multi-task experiments, when
evaluating on the mismatched test set, we see improvements across all domains
in German and in 17 out of 19 domains in Portuguese (improvements based on
change in SeMER scores). Our results suggest that the use of multiple ASR
hypotheses, as opposed to one, can lead to significant performance improvements
in the DC task for these non-English datasets. In addition, it could lead to
significant improvement in the performance of IC and NER tasks in cases where
the ASR model makes mistakes.
| 2,020 |
Computation and Language
|
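The paper above fuses the ASR five-best list with summarization models; as a much simpler point of reference, the sketch below just concatenates the top hypotheses into one NLU input string. The separator token and the `merge_nbest` helper are hypothetical and are not part of the paper.

```python
from typing import List

SEP = " [SEP] "  # hypothetical separator; the paper instead fuses the 5-best with summarization models

def merge_nbest(hypotheses: List[str], n: int = 5) -> str:
    """Concatenate the top-n ASR hypotheses into a single NLU input string (naive baseline)."""
    return SEP.join(h.strip() for h in hypotheses[:n] if h.strip())

nbest = [
    "play some music by queen",
    "play some music by queens",
    "play some music bike queen",
]
print(merge_nbest(nbest))
```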
Improving Human-Labeled Data through Dynamic Automatic Conflict
Resolution
|
This paper develops and implements a scalable methodology for (a) estimating
the noisiness of labels produced by a typical crowdsourcing semantic annotation
task, and (b) reducing the resulting error of the labeling process by as much
as 20-30% in comparison to other common labeling strategies. Importantly, this
new approach to the labeling process, which we name Dynamic Automatic Conflict
Resolution (DACR), does not require a ground truth dataset and is instead based
on inter-project annotation inconsistencies. This makes DACR not only more
accurate but also applicable to a broad range of labeling tasks. In what follows
we present results from a text classification task performed at scale for a
commercial personal assistant, and evaluate the inherent ambiguity uncovered by
this annotation strategy as compared to other common labeling strategies.
| 2,020 |
Computation and Language
|
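The DACR abstract does not spell out the resolution procedure, so the sketch below only illustrates the general idea of dynamic conflict resolution: keep requesting annotations for an item while the collected labels conflict, then take the majority. The vote threshold, agreement fraction, and helper names are assumptions.

```python
from collections import Counter

def needs_more_annotations(labels, min_votes=3, agreement=2 / 3):
    """Hypothetical rule: collect labels until at least `min_votes` exist
    and the majority label reaches the `agreement` fraction."""
    if len(labels) < min_votes:
        return True
    _, top_count = Counter(labels).most_common(1)[0]
    return top_count / len(labels) < agreement

def resolve(labels):
    """Return the majority label once collection stops."""
    return Counter(labels).most_common(1)[0][0]

# Example: two conflicting labels trigger a request for a third opinion.
collected = ["weather", "music"]
while needs_more_annotations(collected):
    collected.append("weather")   # stand-in for querying another annotator
print(resolve(collected))         # -> "weather"
```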
Unsupervised Label Refinement Improves Dataless Text Classification
|
Dataless text classification is capable of classifying documents into
previously unseen labels by assigning a score to any document paired with a
label description. While promising, it crucially relies on accurate
descriptions of the label set for each downstream task. This reliance causes
dataless classifiers to be highly sensitive to the choice of label descriptions
and hinders the broader application of dataless classification in practice. In
this paper, we ask the following question: how can we improve dataless text
classification using the inputs of the downstream task dataset? Our primary
solution is a clustering based approach. Given a dataless classifier, our
approach refines its set of predictions using k-means clustering. We
demonstrate the broad applicability of our approach by improving the
performance of two widely used classifier architectures, one that encodes
text-category pairs with two independent encoders and one with a single joint
encoder. Experiments show that our approach consistently improves dataless
classification across different datasets and makes the classifier more robust
to the choice of label descriptions.
| 2,020 |
Computation and Language
|
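One plausible reading of the clustering-based refinement described above: embed the unlabeled downstream inputs, cluster them with k-means, and smooth the dataless classifier's predictions within each cluster. The sketch below follows that reading with scikit-learn; the paper's exact refinement procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def refine_predictions(doc_embeddings, initial_preds, n_labels):
    """Refine dataless predictions by majority vote within k-means clusters.

    doc_embeddings: (num_docs, dim) array of document representations.
    initial_preds:  (num_docs,) integer labels from the dataless classifier.
    """
    clusters = KMeans(n_clusters=n_labels, n_init=10, random_state=0).fit_predict(doc_embeddings)
    refined = initial_preds.copy()
    for c in range(n_labels):
        members = np.where(clusters == c)[0]
        if len(members) == 0:
            continue
        # Reassign every document in the cluster to the cluster's most common initial label.
        counts = np.bincount(initial_preds[members], minlength=n_labels)
        refined[members] = counts.argmax()
    return refined

# Toy usage with random embeddings and noisy initial predictions.
emb = np.random.randn(100, 32)
preds = np.random.randint(0, 3, size=100)
print(refine_predictions(emb, preds, n_labels=3)[:10])
```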
A Topological Method for Comparing Document Semantics
|
Comparing document semantics is one of the toughest tasks in both Natural
Language Processing and Information Retrieval. To date, on one hand, the tools
for this task are still rare. On the other hand, most relevant methods are
devised from the statistical or the vector space model perspectives but nearly
none from a topological perspective. In this paper, we offer a different
perspective: we propose a novel algorithm based on topological persistence for
comparing the semantic similarity between two documents. Our experiments are
conducted on a document dataset with human judges' results. A collection of
state-of-the-art methods are selected for comparison. The experimental results
show that our algorithm can produce highly human-consistent results and that it
outperforms most state-of-the-art methods, though it ties with NLTK.
| 2,020 |
Computation and Language
|
Early Detection of Fake News by Utilizing the Credibility of News,
Publishers, and Users Based on Weakly Supervised Learning
|
The dissemination of fake news significantly affects personal reputation and
public trust. Recently, fake news detection has attracted tremendous attention,
and previous studies mainly focused on finding clues from news content or
diffusion path. However, the required features of previous models are often
unavailable or insufficient in early detection scenarios, resulting in poor
performance. Thus, early fake news detection remains a tough challenge.
Intuitively, the news from trusted and authoritative sources or shared by many
users with a good reputation is more reliable than other news. Using the
credibility of publishers and users as prior weakly supervised information, we
can quickly locate fake news among massive amounts of news and detect it in the
early stages of dissemination.
In this paper, we propose a novel Structure-aware Multi-head Attention
Network (SMAN), which combines the news content, publishing, and reposting
relations of publishers and users, to jointly optimize the fake news detection
and credibility prediction tasks. In this way, we can explicitly exploit the
credibility of publishers and users for early fake news detection. We conducted
experiments on three real-world datasets, and the results show that SMAN can
detect fake news in 4 hours with an accuracy of over 91%, which is much faster
than the state-of-the-art models.
| 2,020 |
Computation and Language
|
Revisiting Iterative Back-Translation from the Perspective of
Compositional Generalization
|
Human intelligence exhibits compositional generalization (i.e., the capacity
to understand and produce unseen combinations of seen components), but current
neural seq2seq models lack such ability. In this paper, we revisit iterative
back-translation, a simple yet effective semi-supervised method, to investigate
whether and how it can improve compositional generalization. In this work: (1)
We first empirically show that iterative back-translation substantially
improves the performance on compositional generalization benchmarks (CFQ and
SCAN). (2) To understand why iterative back-translation is useful, we carefully
examine the performance gains and find that iterative back-translation can
increasingly correct errors in pseudo-parallel data. (3) To further encourage
this mechanism, we propose curriculum iterative back-translation, which better
improves the quality of pseudo-parallel data, thus further improving the
performance.
| 2,020 |
Computation and Language
|
CTRLsum: Towards Generic Controllable Text Summarization
|
Current summarization systems yield generic summaries that are disconnected
from users' preferences and expectations. To address this limitation, we
present CTRLsum, a novel framework for controllable summarization. Our approach
enables users to control multiple aspects of generated summaries by interacting
with the summarization system through textual input in the form of a set of
keywords or descriptive prompts. Using a single unified model, CTRLsum is able
to achieve a broad scope of summary manipulation at inference time without
requiring additional human annotations or pre-defining a set of control aspects
during training. We quantitatively demonstrate the effectiveness of our
approach on three domains of summarization datasets and five control aspects:
1) entity-centric and 2) length-controllable summarization, 3) contribution
summarization on scientific papers, 4) invention purpose summarization on
patent filings, and 5) question-guided summarization on news articles in a
reading comprehension setting. Moreover, when used in a standard, uncontrolled
summarization setting, CTRLsum achieves state-of-the-art results on the
CNN/DailyMail dataset. Code and model checkpoints are available at
https://github.com/salesforce/ctrl-sum
| 2,020 |
Computation and Language
|
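To make the keyword-based control concrete, here is a hedged sketch of the general recipe of prepending control keywords to the source before running a pretrained summarizer. The separator string and the use of the generic `facebook/bart-large-cnn` checkpoint are illustrative stand-ins only; the released CTRLsum checkpoints and their exact input format are in the linked repository.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Generic pretrained summarizer used only for illustration.
name = "facebook/bart-large-cnn"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

def keyword_guided_summary(keywords, document, sep=" => "):
    """Prepend control keywords to the source before summarizing (assumed format)."""
    inputs = tok(sep.join([" ".join(keywords), document]),
                 return_tensors="pt", truncation=True, max_length=1024)
    ids = model.generate(**inputs, max_length=80, num_beams=4)
    return tok.decode(ids[0], skip_special_tokens=True)

print(keyword_guided_summary(["vaccine", "trial"], "Some long news article text ..."))
```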
Cross-lingual Transfer of Abstractive Summarizer to Less-resource
Language
|
Automatic text summarization extracts important information from texts and
presents the information in the form of a summary. Abstractive summarization
approaches progressed significantly by switching to deep neural networks, but
results are not yet satisfactory, especially for languages where large training
sets do not exist. In several natural language processing tasks, a
cross-lingual model transfer is successfully applied in less-resource
languages. For summarization, the cross-lingual model transfer was not
attempted due to a non-reusable decoder side of neural models that cannot
correct target language generation. In our work, we use a pre-trained English
summarization model based on deep neural networks and sequence-to-sequence
architecture to summarize Slovene news articles. We address the problem of the
inadequate decoder by using an additional language model to evaluate the
generated text in the target language. We test several cross-lingual
summarization models with different amounts of target data for fine-tuning. We
assess the models with automatic evaluation measures and conduct a small-scale
human evaluation. Automatic evaluation shows that the summaries of our best
cross-lingual model are useful and of quality similar to the model trained only
in the target language. Human evaluation shows that our best model generates
summaries with high accuracy and acceptable readability. However, similar to
other abstractive models, our models are not perfect and may occasionally
produce misleading or absurd content.
| 2,021 |
Computation and Language
|
Facts2Story: Controlling Text Generation by Key Facts
|
Recent advancements in self-attention neural network architectures have
raised the bar for open-ended text generation. Yet, while current methods are
capable of producing a coherent text which is several hundred words long,
attaining control over the content that is being generated -- as well as
evaluating it -- are still open questions. We propose a controlled generation
task which is based on expanding a sequence of facts, expressed in natural
language, into a longer narrative. We introduce human-based evaluation metrics
for this task, as well as a method for deriving a large training dataset. We
evaluate three methods on this task, based on fine-tuning pre-trained models.
We show that while auto-regressive, unidirectional Language Models such as GPT2
produce better fluency, they struggle to adhere to the requested facts. We
propose a plan-and-cloze model (using fine-tuned XLNet) which produces
competitive fluency while adhering to the requested content.
| 2,020 |
Computation and Language
|
From Bag of Sentences to Document: Distantly Supervised Relation
Extraction via Machine Reading Comprehension
|
Distant supervision (DS) is a promising approach for relation extraction but
often suffers from the noisy label problem. Traditional DS methods usually
represent an entity pair as a bag of sentences and denoise labels using
multi-instance learning techniques. The bag-based paradigm, however, fails to
leverage the inter-sentence-level and the entity-level evidence for relation
extraction, and its denoising algorithms are often specialized and
complicated. In this paper, we propose a new DS paradigm--document-based
distant supervision, which models relation extraction as a document-based
machine reading comprehension (MRC) task. By re-organizing all sentences about
an entity as a document and extracting relations via querying the document with
relation-specific questions, the document-based DS paradigm can simultaneously
encode and exploit all sentence-level, inter-sentence-level, and entity-level
evidence. Furthermore, we design a new loss function--DSLoss (distant
supervision loss), which can effectively train MRC models using only
$\langle$document, question, answer$\rangle$ tuples, so that the noisy label
problem can be inherently resolved. Experiments show that our method achieves
new state-of-the-art DS performance.
| 2,020 |
Computation and Language
|
CrossNER: Evaluating Cross-Domain Named Entity Recognition
|
Cross-domain named entity recognition (NER) models are able to cope with the
scarcity issue of NER samples in target domains. However, most of the existing
NER benchmarks lack domain-specialized entity types or do not focus on a
certain domain, leading to a less effective cross-domain evaluation. To address
these obstacles, we introduce a cross-domain NER dataset (CrossNER), a
fully-labeled collection of NER data spanning over five diverse domains with
specialized entity categories for different domains. Additionally, we provide a
domain-related corpus, since using it to continue pre-training language models
(domain-adaptive pre-training) is effective for domain adaptation. We then
conduct comprehensive experiments to explore the
effectiveness of leveraging different levels of the domain corpus and
pre-training strategies to do domain-adaptive pre-training for the cross-domain
task. Results show that focusing on the fractional corpus containing
domain-specialized entities and utilizing a more challenging pre-training
strategy in domain-adaptive pre-training are beneficial for the NER domain
adaptation, and our proposed method can consistently outperform existing
cross-domain NER baselines. Nevertheless, experiments also illustrate the
challenge of this cross-domain NER task. We hope that our dataset and baselines
will catalyze research in the NER domain adaptation area. The code and data are
available at https://github.com/zliucr/CrossNER.
| 2,020 |
Computation and Language
|
Combining Machine Learning and Human Experts to Predict Match Outcomes
in Football: A Baseline Model
|
In this paper, we present a new application-focused benchmark dataset and
results from a set of baseline Natural Language Processing and Machine Learning
models for prediction of match outcomes for games of football (soccer). By
doing so we give a baseline for the prediction accuracy that can be achieved
exploiting both statistical match data and contextual articles from human
sports journalists. Our dataset focuses on a representative time period spanning
6 seasons of the English Premier League, and includes newspaper match previews
from The Guardian. The models presented in this paper achieve an accuracy of
63.18%, showing a 6.9% boost over traditional statistical methods.
| 2,020 |
Computation and Language
|
End-to-End Chinese Parsing Exploiting Lexicons
|
Chinese parsing has traditionally been solved with a pipeline of three modules:
word segmentation, part-of-speech tagging, and dependency parsing.
In this paper, we propose an end-to-end Chinese parsing model based on
character inputs which jointly learns to output word segmentation,
part-of-speech tags and dependency structures. In particular, our parsing model
relies on word-char graph attention networks, which can enrich the character
inputs with external word knowledge. Experiments on three Chinese parsing
benchmark datasets show the effectiveness of our models, achieving the
state-of-the-art results on end-to-end Chinese parsing.
| 2,020 |
Computation and Language
|
Extractive Opinion Summarization in Quantized Transformer Spaces
|
We present the Quantized Transformer (QT), an unsupervised system for
extractive opinion summarization. QT is inspired by Vector-Quantized
Variational Autoencoders, which we repurpose for popularity-driven
summarization. It uses a clustering interpretation of the quantized space and a
novel extraction algorithm to discover popular opinions among hundreds of
reviews, a significant step towards opinion summarization of practical scope.
In addition, QT enables controllable summarization without further training, by
utilizing properties of the quantized space to extract aspect-specific
summaries. We also make publicly available SPACE, a large-scale evaluation
benchmark for opinion summarizers, comprising general and aspect-specific
summaries for 50 hotels. Experiments demonstrate the promise of our approach,
which is validated by human studies where judges showed clear preference for
our method over competitive baselines.
| 2,020 |
Computation and Language
|
Big Green at WNUT 2020 Shared Task-1: Relation Extraction as
Contextualized Sequence Classification
|
Relation and event extraction is an important task in natural language
processing. We introduce a system which uses contextualized knowledge graph
completion to classify relations and events between known entities in a noisy
text environment. We report results which show that our system is able to
effectively extract relations and events from a dataset of wet lab protocols.
| 2,020 |
Computation and Language
|
Dartmouth CS at WNUT-2020 Task 2: Informative COVID-19 Tweet
Classification Using BERT
|
We describe the systems developed for the WNUT-2020 shared task 2,
identification of informative COVID-19 English Tweets. BERT is a highly
performant model for Natural Language Processing tasks. We increased BERT's
performance in this classification task by fine-tuning BERT and concatenating
its embeddings with Tweet-specific features and training a Support Vector
Machine (SVM) for classification (henceforth called BERT+). We compared its
performance to a suite of machine learning models. We used a Twitter specific
data cleaning pipeline and word-level TF-IDF to extract features for the
non-BERT models. BERT+ was the top performing model with an F1-score of 0.8713.
| 2,020 |
Computation and Language
|
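A minimal sketch of the BERT+ idea described above: concatenate sentence embeddings with handcrafted tweet features and train an SVM. The random embeddings stand in for fine-tuned BERT outputs, and the feature list is an assumption, since the abstract does not enumerate the Tweet-specific features.

```python
import numpy as np
from sklearn.svm import SVC

def tweet_features(text):
    """A few hypothetical tweet-level features (not the paper's exact feature set)."""
    return np.array([
        len(text.split()),              # token count
        text.count("#"),                # hashtags
        text.count("@"),                # mentions
        int("http" in text),            # contains a URL
    ], dtype=float)

def build_inputs(embeddings, texts):
    """Concatenate (fine-tuned) BERT embeddings with handcrafted features."""
    handcrafted = np.stack([tweet_features(t) for t in texts])
    return np.hstack([embeddings, handcrafted])

# Toy usage with random stand-ins for BERT embeddings.
texts = ["COVID-19 cases rise to 1,000 http://t.co/x", "stay safe everyone #covid"]
X = build_inputs(np.random.randn(len(texts), 768), texts)
y = np.array([1, 0])                    # 1 = informative, 0 = uninformative
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X))
```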
Improvements and Extensions on Metaphor Detection
|
Metaphors are ubiquitous in human language. The metaphor detection task (MD)
aims at detecting and interpreting metaphors from written language, which is
crucial in natural language understanding (NLU) research. In this paper, we
first introduce a pre-trained Transformer-based model into MD. Our model outperforms
the previous state-of-the-art models by large margins in our evaluations, with
relative improvements in F1 score ranging from 5.33% to 28.39%. Second, we extend
MD to a classification task over the metaphoricity of an entire piece of text,
making MD applicable to more general NLU settings. Finally, we clean up the
improper or outdated annotations in one of the MD benchmark datasets and
re-benchmark it with our Transformer-based model. This approach could be
applied to other existing MD datasets as well, since the metaphoricity
annotations in these benchmark datasets may be outdated. Future research
efforts are also necessary to build an up-to-date and well-annotated dataset
consisting of longer and more complex texts.
| 2,021 |
Computation and Language
|
The Role of Interpretable Patterns in Deep Learning for Morphology
|
We examine the role of character patterns in three tasks: morphological
analysis, lemmatization and copy. We use a modified version of the standard
sequence-to-sequence model, where the encoder is a pattern matching network.
Each pattern scores all possible N character long subwords (substrings) on the
source side, and the highest scoring subword's score is used to initialize the
decoder as well as the input to the attention mechanism. This method allows
learning which subwords of the input are important for generating the output.
By training the models on the same source but different target, we can compare
what subwords are important for different tasks and how they relate to each
other. We define a similarity metric, a generalized form of the Jaccard
similarity, and assign a similarity score to each pair of the three tasks that
work on the same source but may differ in target. We examine how these three
tasks are related to each other in 12 languages. Our code is publicly
available.
| 2,020 |
Computation and Language
|
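The abstract above mentions a generalized form of the Jaccard similarity for comparing which subwords matter across tasks but does not define it; the sketch below uses the standard weighted Jaccard (sum of minima over sum of maxima) over per-subword importance scores as one plausible instantiation.

```python
def weighted_jaccard(scores_a, scores_b):
    """Weighted Jaccard similarity between two {subword: importance} dicts.

    One plausible generalization: sum of element-wise minima divided by the
    sum of element-wise maxima over the union of subwords.
    """
    keys = set(scores_a) | set(scores_b)
    num = sum(min(scores_a.get(k, 0.0), scores_b.get(k, 0.0)) for k in keys)
    den = sum(max(scores_a.get(k, 0.0), scores_b.get(k, 0.0)) for k in keys)
    return num / den if den > 0 else 0.0

# Toy per-subword importance scores for two tasks on the same source word.
lemmatization = {"ung": 0.9, "en": 0.4, "spie": 0.1}
morph_analysis = {"ung": 0.7, "en": 0.6, "te": 0.3}
print(weighted_jaccard(lemmatization, morph_analysis))  # similarity of the two tasks
```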
Distilling Knowledge from Reader to Retriever for Question Answering
|
The task of information retrieval is an important component of many natural
language processing systems, such as open domain question answering. While
traditional methods were based on hand-crafted features, continuous
representations based on neural networks recently obtained competitive results.
A challenge of using such methods is to obtain supervised data to train the
retriever model, corresponding to pairs of query and support documents. In this
paper, we propose a technique to learn retriever models for downstream tasks,
inspired by knowledge distillation, and which does not require annotated pairs
of query and documents. Our approach leverages attention scores of a reader
model, used to solve the task based on retrieved documents, to obtain synthetic
labels for the retriever. We evaluate our method on question answering,
obtaining state-of-the-art results.
| 2,022 |
Computation and Language
|
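A hedged PyTorch sketch of the distillation signal described above: aggregated reader attention over the retrieved passages is normalized into a soft target, and the retriever's relevance scores are trained towards it with a KL-divergence loss. The aggregation and temperature are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(retriever_scores, reader_attention, temperature=1.0):
    """KL divergence between retriever relevance scores and reader attention.

    retriever_scores: (batch, n_passages) raw similarity scores from the retriever.
    reader_attention: (batch, n_passages) non-negative attention mass the reader
                      puts on each passage, used as a synthetic soft label.
    """
    target = reader_attention / reader_attention.sum(dim=-1, keepdim=True)
    log_pred = F.log_softmax(retriever_scores / temperature, dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")

scores = torch.randn(4, 10, requires_grad=True)   # retriever outputs
attn = torch.rand(4, 10)                          # aggregated reader attention
loss = distillation_loss(scores, attn)
loss.backward()
print(float(loss))
```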
Discourse Parsing of Contentious, Non-Convergent Online Discussions
|
Online discourse is often perceived as polarized and unproductive. While some
conversational discourse parsing frameworks are available, they do not
naturally lend themselves to the analysis of contentious and polarizing
discussions. Inspired by the Bakhtinian theory of Dialogism, we propose a novel
theoretical and computational framework, better suited for non-convergent
discussions. We redefine the measure of a successful discussion, and develop a
novel discourse annotation schema which reflects a hierarchy of discursive
strategies. We consider an array of classification models -- from Logistic
Regression to BERT. We also consider various feature types and representations,
e.g., LIWC categories, standard embeddings, conversational sequences, and
non-conversational discourse markers learnt separately. Given the 31 labels in
the tagset, an average F-Score of 0.61 is achieved if we allow a different
model for each tag, and 0.526 with a single model. The promising results
achieved in annotating discussions according to the proposed schema pave the
way for a number of downstream tasks and applications such as early detection
of discussion trajectories, active moderation of open discussions, and
teacher-assistive bots. Finally, we share the first labeled dataset of
contentious non-convergent online discussions.
| 2,020 |
Computation and Language
|
Globetrotter: Connecting Languages by Connecting Images
|
Machine translation between many languages at once is highly challenging,
since training with ground truth requires supervision between all language
pairs, which is difficult to obtain. Our key insight is that, while languages
may vary drastically, the underlying visual appearance of the world remains
consistent. We introduce a method that uses visual observations to bridge the
gap between languages, rather than relying on parallel corpora or topological
properties of the representations. We train a model that aligns segments of
text from different languages if and only if the images associated with them
are similar and each image in turn is well-aligned with its textual
description. We train our model from scratch on a new dataset of text in over
fifty languages with accompanying images. Experiments show that our method
outperforms previous work on unsupervised word and sentence translation using
retrieval. Code, models and data are available on globetrotter.cs.columbia.edu.
| 2,022 |
Computation and Language
|
Transformer Query-Target Knowledge Discovery (TEND): Drug Discovery from
CORD-19
|
Previous work established that skip-gram word2vec models could be used to mine
knowledge in the materials science literature for the discovery of
thermoelectrics. Recent transformer architectures have shown great progress in
language modeling and associated fine-tuned tasks, but they have yet to be
adapted for drug discovery. We present a RoBERTa transformer-based method that
extends the masked language token prediction using query-target conditioning to
treat the specificity challenge. The transformer discovery method entails
several benefits over the word2vec method including domain-specific (antiviral)
analogy performance, negation handling, and flexible query analysis (specific)
and is demonstrated on influenza drug discovery. To stimulate COVID-19
research, we release an influenza clinical trials and antiviral analogies
dataset used in conjunction with the COVID-19 Open Research Dataset Challenge
(CORD-19) literature dataset in the study. We examine k-shot fine-tuning to
improve the downstream analogies performance as well as to mine analogies for
model explainability. Further, the query-target analysis is verified in a
forward chaining analysis against the influenza drug clinical trials dataset,
before being adapted for COVID-19 drugs (combinations and side effects) and ongoing
clinical trials. In consideration of the present topic, we release the model,
dataset, and code.
| 2,020 |
Computation and Language
|
Diluted Near-Optimal Expert Demonstrations for Guiding Dialogue
Stochastic Policy Optimisation
|
A learning dialogue agent can infer its behaviour from interactions with the
users. These interactions can be taken from either human-to-human or
human-machine conversations. However, human interactions are scarce and costly,
making learning from few interactions essential. One solution to speed up the
learning process is to guide the agent's exploration with the help of an
expert. We present in this paper several imitation learning strategies for
dialogue policy where the guiding expert is a near-optimal handcrafted policy.
We incorporate these strategies with state-of-the-art reinforcement learning
methods based on Q-learning and actor-critic. We notably propose a randomised
exploration policy which allows for a seamless hybridisation of the learned
policy and the expert. Our experiments show that our hybridisation strategy
outperforms several baselines, and that it can accelerate the learning when
facing real humans.
| 2,020 |
Computation and Language
|
Generate Your Counterfactuals: Towards Controlled Counterfactual
Generation for Text
|
Machine Learning has seen tremendous growth recently, which has led to wider
adoption of ML systems for educational assessments, credit risk, healthcare,
employment, and criminal justice, to name a few. The trustworthiness of ML and NLP
systems is a crucial aspect and requires a guarantee that the decisions they
make are fair and robust. Aligned with this, we propose a framework GYC, to
generate a set of counterfactual text samples, which are crucial for testing
these ML systems. Our main contributions include a) We introduce GYC, a
framework to generate counterfactual samples such that the generation is
plausible, diverse, goal-oriented, and effective; b) We generate counterfactual
samples that can direct the generation towards a corresponding condition such
as named-entity tag, semantic role label, or sentiment. Our experimental
results on various domains show that GYC generates counterfactual text samples
exhibiting the above four properties. GYC generates counterfactuals that can
act as test cases to evaluate a model and any text debiasing algorithm.
| 2,021 |
Computation and Language
|
Edited Media Understanding: Reasoning About Implications of Manipulated
Images
|
Multimodal disinformation, from `deepfakes' to simple edits that deceive, is
an important societal problem. Yet at the same time, the vast majority of media
edits are harmless -- such as a filtered vacation photo. The difference between
this example, and harmful edits that spread disinformation, is one of intent.
Recognizing and describing this intent is a major challenge for today's AI
systems.
We present the task of Edited Media Understanding, requiring models to answer
open-ended questions that capture the intent and implications of an image edit.
We introduce a dataset for our task, EMU, with 48k question-answer pairs
written in rich natural language. We evaluate a wide variety of
vision-and-language models for our task, and introduce a new model PELICAN,
which builds upon recent progress in pretrained multimodal representations. Our
model obtains promising results on our dataset, with humans rating its answers
as accurate 40.35% of the time. At the same time, there is still much work to
be done -- humans prefer human-annotated captions 93.56% of the time -- and we
provide analysis that highlights areas for further progress.
| 2,020 |
Computation and Language
|
Fact-Enhanced Synthetic News Generation
|
Advanced text generation methods have achieved great success in text
summarization, language translation, and synthetic news generation. However,
these techniques can be abused to generate disinformation and fake news. To
better understand the potential threats of synthetic news, we develop a new
generation method FactGen to generate high-quality news content. The existing
text generation methods either afford limited supplementary information or lose
consistency between the input and output, which makes the synthetic news less
trustworthy. To address these issues, FactGen retrieves external facts to
enrich the output and reconstructs the input claim from the generated content
to improve the consistency between the input and the output. Experimental results
on real-world datasets show that the generated news contents of FactGen are
consistent and contain rich facts. We also discuss possible defense methods
to identify these synthetic news pieces if FactGen is used to generate
synthetic news.
| 2,020 |
Computation and Language
|
Open Knowledge Graphs Canonicalization using Variational Autoencoders
|
Noun phrases and Relation phrases in open knowledge graphs are not
canonicalized, leading to an explosion of redundant and ambiguous
subject-relation-object triples. Existing approaches to solve this problem take
a two-step approach. First, they generate embedding representations for both
noun and relation phrases, then a clustering algorithm is used to group them
using the embeddings as features. In this work, we propose Canonicalizing Using
Variational Autoencoders (CUVA), a joint model to learn both embeddings and
cluster assignments in an end-to-end approach, which leads to a better vector
representation for the noun and relation phrases. Our evaluation over multiple
benchmarks shows that CUVA outperforms the existing state-of-the-art
approaches. Moreover, we introduce CanonicNell, a novel dataset to evaluate
entity canonicalization systems.
| 2,021 |
Computation and Language
|
On an Unknown Ancestor of Burrows' Delta Measure
|
This article points out some surprising similarities between a 1944 study by
Georgy Udny Yule and modern approaches to authorship attribution.
| 2,020 |
Computation and Language
|
Fusing Context Into Knowledge Graph for Commonsense Question Answering
|
Commonsense question answering (QA) requires a model to grasp commonsense and
factual knowledge to answer questions about world events. Many prior methods
couple language modeling with knowledge graphs (KG). However, although a KG
contains rich structural information, it lacks the context to provide a more
precise understanding of the concepts. This creates a gap when fusing knowledge
graphs into language modeling, especially when there is insufficient labeled
data. Thus, we propose to employ external entity descriptions to provide
contextual information for knowledge understanding. We retrieve descriptions of
related concepts from Wiktionary and feed them as additional input to
pre-trained language models. The resulting model achieves state-of-the-art
results on the CommonsenseQA dataset and the best result among non-generative
models on OpenBookQA.
| 2,021 |
Computation and Language
|
Improving Relation Extraction by Leveraging Knowledge Graph Link
Prediction
|
Relation extraction (RE) aims to predict a relation between a subject and an
object in a sentence, while knowledge graph link prediction (KGLP) aims to
predict a set of objects, O, given a subject and a relation from a knowledge
graph. These two problems are closely related as their respective objectives
are intertwined: given a sentence containing a subject and an object o, a RE
model predicts a relation that can then be used by a KGLP model together with
the subject, to predict a set of objects O. Thus, we expect object o to be in
set O. In this paper, we leverage this insight by proposing a multi-task
learning approach that improves the performance of RE models by jointly
training on RE and KGLP tasks. We illustrate the generality of our approach by
applying it on several existing RE models and empirically demonstrate how it
helps them achieve consistent performance gains.
| 2,020 |
Computation and Language
|
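A minimal sketch of a generic multi-task objective in the spirit of the approach above: the relation extraction cross-entropy loss is combined with a link-prediction loss over candidate objects, weighted by a hyperparameter. How the two models share parameters in the paper is not shown here.

```python
import torch
import torch.nn.functional as F

def multitask_loss(re_logits, re_labels, kglp_scores, kglp_targets, alpha=0.5):
    """Jointly optimize relation extraction (RE) and link prediction (KGLP).

    re_logits:    (batch, n_relations) relation scores for each sentence.
    re_labels:    (batch,) gold relation ids.
    kglp_scores:  (batch, n_entities) scores over candidate objects.
    kglp_targets: (batch, n_entities) multi-hot vector of gold objects O.
    alpha:        weight of the auxiliary KGLP term (an assumed hyperparameter).
    """
    re_loss = F.cross_entropy(re_logits, re_labels)
    kglp_loss = F.binary_cross_entropy_with_logits(kglp_scores, kglp_targets)
    return re_loss + alpha * kglp_loss

loss = multitask_loss(
    torch.randn(8, 40), torch.randint(0, 40, (8,)),
    torch.randn(8, 1000), torch.randint(0, 2, (8, 1000)).float(),
)
print(float(loss))
```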
Complex Relation Extraction: Challenges and Opportunities
|
Relation extraction aims to identify the target relations of entities in
texts and is very important for knowledge base construction
and text understanding. Traditional binary relation extraction, including
supervised, semi-supervised and distantly supervised variants, has been extensively
studied, and significant results have been achieved. In recent years, many complex
relation extraction tasks, i.e., the variants of simple binary relation
extraction, have been proposed to meet complex applications in practice. However,
no literature has so far fully investigated and summarized these complex
relation extraction works. In this paper, we first report the recent
progress in traditional simple binary relation extraction. Then we summarize
the existing complex relation extraction tasks and present the definition,
recent progress, challenges and opportunities for each task.
| 2,020 |
Computation and Language
|
Emotional Conversation Generation with Heterogeneous Graph Neural
Network
|
A successful emotional conversation system depends on sufficient perception
and appropriate expression of emotions. In a real-life conversation, humans
first instinctively perceive emotions from multi-source information,
including the emotion flow hidden in dialogue history, facial expressions,
audio, and personalities of speakers. Then, they convey suitable emotions
according to their personalities, but these multiple types of information are
insufficiently exploited in the field of emotional conversation. To address this
issue, in this paper, we propose a heterogeneous graph-based model for
emotional conversation generation. Firstly, we design a Heterogeneous
Graph-Based Encoder to represent the conversation content (i.e., the dialogue
history, its emotion flow, facial expressions, audio, and speakers'
personalities) with a heterogeneous graph neural network, and then predict
suitable emotions for feedback. Secondly, we employ an
Emotion-Personality-Aware Decoder to generate a response relevant to the
conversation context as well as with appropriate emotions, through taking the
encoded graph representations, the predicted emotions by the encoder and the
personality of the current speaker as inputs. Experiments on both automatic and
human evaluation show that our method can effectively perceive emotions from
multi-source knowledge and generate a satisfactory response. Furthermore, based
on the up-to-date text generator BART, our model still can achieve consistent
improvement, which significantly outperforms some existing state-of-the-art
models.
| 2,022 |
Computation and Language
|
Generating semantic maps through multidimensional scaling: linguistic
applications and theory
|
This paper reports on the state-of-the-art in application of multidimensional
scaling (MDS) techniques to create semantic maps in linguistic research. MDS
refers to a statistical technique that represents objects (lexical items,
linguistic contexts, languages, etc.) as points in a space so that close
similarity between the objects corresponds to close distances between the
corresponding points in the representation. We focus on the use of MDS in
combination with parallel corpus data as used in research on cross-linguistic
variation.
We first introduce the mathematical foundations of MDS and then give an
exhaustive overview of past research that employs MDS techniques in combination
with parallel corpus data. We propose a set of terminology to succinctly
describe the key parameters of a particular MDS application. We then show that
this computational methodology is theory-neutral, i.e. it can be employed to
answer research questions in a variety of linguistic theoretical frameworks.
Finally, we show how this leads to two lines of future developments for MDS
research in linguistics.
| 2,022 |
Computation and Language
|
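A minimal scikit-learn sketch of the core MDS step discussed above: a pairwise dissimilarity matrix over linguistic items is projected into two dimensions to produce a semantic map. The items and dissimilarities below are invented toy values.

```python
import numpy as np
from sklearn.manifold import MDS

items = ["dative", "locative", "allative", "instrumental"]
# Toy pairwise dissimilarities (e.g., 1 - proportion of shared parallel-corpus contexts).
D = np.array([
    [0.0, 0.3, 0.4, 0.8],
    [0.3, 0.0, 0.2, 0.7],
    [0.4, 0.2, 0.0, 0.6],
    [0.8, 0.7, 0.6, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)           # 2-D coordinates of the semantic map
for item, (x, y) in zip(items, coords):
    print(f"{item:12s} {x:+.3f} {y:+.3f}")
```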
Breeding Gender-aware Direct Speech Translation Systems
|
In automatic speech translation (ST), traditional cascade approaches
involving separate transcription and translation steps are giving ground to
increasingly competitive and more robust direct solutions. In particular, by
translating speech audio data without intermediate transcription, direct ST
models are able to leverage and preserve essential information present in the
input (e.g. speaker's vocal characteristics) that is otherwise lost in the
cascade framework. Although such ability proved to be useful for gender
translation, direct ST is nonetheless affected by gender bias just like its
cascade counterpart, as well as machine translation and numerous other natural
language processing applications. Moreover, direct ST systems that exclusively
rely on vocal biometric features as a gender cue can be unsuitable and
potentially harmful for certain users. Going beyond speech signals, in this
paper we compare different approaches to inform direct ST models about the
speaker's gender and test their ability to handle gender translation from
English into Italian and French. To this aim, we manually annotated large
datasets with speakers' gender information and used them for experiments
reflecting different possible real-world scenarios. Our results show that
gender-aware direct ST solutions can significantly outperform strong - but
gender-unaware - direct ST models. In particular, the translation of
gender-marked words can increase up to 30 points in accuracy while preserving
overall translation quality.
| 2,020 |
Computation and Language
|
On Knowledge Distillation for Direct Speech Translation
|
Direct speech translation (ST) has been shown to be a complex task requiring
knowledge transfer from its sub-tasks: automatic speech recognition (ASR) and
machine translation (MT). For MT, one of the most promising techniques to
transfer knowledge is knowledge distillation. In this paper, we compare the
different solutions to distill knowledge in a sequence-to-sequence task like
ST. Moreover, we analyze possible drawbacks of this approach and how to
alleviate them while maintaining the benefits in terms of translation quality.
| 2,020 |
Computation and Language
|
Label Confusion Learning to Enhance Text Classification Models
|
Representing a true label as a one-hot vector is a common practice in
training text classification models. However, the one-hot representation may
not adequately reflect the relation between the instances and labels, as labels
are often not completely independent and instances may relate to multiple
labels in practice. The inadequate one-hot representations tend to train the
model to be over-confident, which may result in arbitrary prediction and model
overfitting, especially for confused datasets (datasets with very similar
labels) or noisy datasets (datasets with labeling errors). While training
models with label smoothing (LS) can ease this problem to some degree, it still
fails to capture the realistic relation among labels. In this paper, we propose
a novel Label Confusion Model (LCM) as an enhancement component to current
popular text classification models. LCM can learn label confusion to capture
semantic overlap among labels by calculating the similarity between instances
and labels during training, and generate a better label distribution to replace
the original one-hot label vector, thus improving the final classification
performance. Extensive experiments on five text classification benchmark
datasets reveal the effectiveness of LCM for several widely used deep learning
classification models. Further experiments also verify that LCM is especially
helpful for confused or noisy datasets and superior to the label smoothing
method.
| 2,020 |
Computation and Language
|
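A hedged PyTorch sketch of the Label Confusion Model idea described above: similarities between the instance representation and learnable label embeddings are turned into a confusion distribution, mixed with the one-hot label, and used as a soft target under a KL-divergence loss. The mixing scheme and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def simulated_label_distribution(instance_repr, label_emb, gold, alpha=4.0):
    """Mix the one-hot label with an instance-label similarity distribution.

    instance_repr: (batch, dim) encoded instances.
    label_emb:     (n_labels, dim) learnable label embeddings.
    gold:          (batch,) gold label ids.
    alpha:         weight of the one-hot component (assumed hyperparameter).
    """
    sim = instance_repr @ label_emb.t()                  # (batch, n_labels)
    confusion = F.softmax(sim, dim=-1)
    one_hot = F.one_hot(gold, label_emb.size(0)).float()
    return F.softmax(alpha * one_hot + confusion, dim=-1)

def lcm_loss(logits, instance_repr, label_emb, gold):
    target = simulated_label_distribution(instance_repr, label_emb, gold)
    return F.kl_div(F.log_softmax(logits, dim=-1), target, reduction="batchmean")

batch, dim, n_labels = 16, 128, 5
loss = lcm_loss(torch.randn(batch, n_labels),
                torch.randn(batch, dim),
                torch.randn(n_labels, dim, requires_grad=True),
                torch.randint(0, n_labels, (batch,)))
print(float(loss))
```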
Tracking Interaction States for Multi-Turn Text-to-SQL Semantic Parsing
|
The task of multi-turn text-to-SQL semantic parsing aims to translate natural
language utterances in an interaction into SQL queries in order to answer them
using a database which normally contains multiple table schemas. Previous
studies on this task usually utilized contextual information to enrich
utterance representations and to further influence the decoding process. However,
they neglected to describe and track the interaction states, which are determined
by the history of SQL queries and are related to the intent of the current utterance. In
this paper, two kinds of interaction states are defined based on schema items
and SQL keywords separately. A relational graph neural network and a non-linear
layer are designed to update the representations of these two states
respectively. The dynamic schema-state and SQL-state representations are then
utilized to decode the SQL query corresponding to the current utterance.
Experimental results on the challenging CoSQL dataset demonstrate the
effectiveness of our proposed method, which achieves better performance than
other published methods on the task leaderboard.
| 2,020 |
Computation and Language
|
Intrinsically Motivated Compositional Language Emergence
|
Recently, there has been a great deal of research in emergent communication
on artificial agents interacting in simulated environments. Recent studies have
revealed that, in general, emergent languages do not follow the
compositionality patterns of natural language. To deal with this, existing
works have proposed a limited channel capacity as an important constraint for
learning highly compositional languages. In this paper, we show that this is
not a sufficient condition and propose an intrinsic reward framework for
improving compositionality in emergent communication. We use a reinforcement
learning setting with two agents -- a \textit{task-aware} Speaker and a
\textit{state-aware} Listener that are required to communicate to perform a set
of tasks. Through our experiments on three different referential game setups,
including a novel environment gComm, we show intrinsic rewards improve
compositionality scores by $\approx \mathbf{1.5-2}$ times that of existing
frameworks that use limited channel capacity.
| 2,023 |
Computation and Language
|
Towards Zero-shot Cross-lingual Image Retrieval
|
There has been a recent spike in interest in multi-modal Language and Vision
problems. On the language side, most of these models primarily focus on English
since most multi-modal datasets are monolingual. We try to bridge this gap with
a zero-shot approach for learning multi-modal representations using
cross-lingual pre-training on the text side. We present a simple yet practical
approach for building a cross-lingual image retrieval model which trains on a
monolingual training dataset but can be used in a zero-shot cross-lingual
fashion during inference. We also introduce a new objective function which
tightens the text embedding clusters by pushing dissimilar texts away from each
other. Finally, we introduce a new 1K multi-lingual MSCOCO2014 caption test
dataset (XTD10) in 7 languages that we collected using a crowdsourcing
platform. We use this as the test set for evaluating zero-shot model
performance across languages. XTD10 dataset is made publicly available here:
https://github.com/adobe-research/Cross-lingual-Test-Dataset-XTD10
| 2,020 |
Computation and Language
|
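A minimal sketch of an objective in the spirit of the one described above: a symmetric InfoNCE loss over a batch of matching image-text pairs, which tightens text clusters by pushing non-matching (dissimilar) texts away. This is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of matching (image, text) pairs.

    Matching pairs sit on the diagonal; all other texts in the batch act as
    negatives, so dissimilar texts are pushed away from each other.
    """
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature      # (batch, batch)
    targets = torch.arange(logits.size(0))
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_loss(torch.randn(32, 256), torch.randn(32, 256))
print(float(loss))
```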
Cross-lingual Word Sense Disambiguation using mBERT Embeddings with
Syntactic Dependencies
|
Cross-lingual word sense disambiguation (WSD) tackles the challenge of
disambiguating ambiguous words across languages given context. The pre-trained
BERT embedding model has been proven to be effective in extracting contextual
information of words, and has been incorporated as features into many
state-of-the-art WSD systems. In order to investigate how syntactic information
can be added into the BERT embeddings to result in both semantics- and
syntax-incorporated word embeddings, this project proposes concatenated
embeddings obtained by producing dependency parse trees and encoding the relative
relationships of words into the input embeddings. Two methods are also proposed
to reduce the size of the concatenated embeddings. The experimental results
show that the high dimensionality of the syntax-incorporated embeddings
constitutes an obstacle for the classification task, which needs to be further
addressed in future studies.
| 2,020 |
Computation and Language
|
Generative Adversarial Networks for Annotated Data Augmentation in Data
Sparse NLU
|
Data sparsity is one of the key challenges associated with model development
in Natural Language Understanding (NLU) for conversational agents. The
challenge is made more complex by the demand for high quality annotated
utterances commonly required for supervised learning, usually resulting in
weeks of manual labor and high cost. In this paper, we present our results on
boosting NLU model performance through training data augmentation using a
sequential generative adversarial network (GAN). We explore data generation in
the context of two tasks, the bootstrapping of a new language and the handling
of low resource features. For both tasks we explore three sequential GAN
architectures, one with a token-level reward function, another with our own
implementation of a token-level Monte Carlo rollout reward, and a third with
sentence-level reward. We evaluate the performance of these feedback models
across several sampling methodologies and compare our results to upsampling the
original data to the same scale. We further improve the GAN model performance
through the transfer learning of pretrained embeddings. Our experiments
reveal that synthetic data generated using the sequential generative adversarial
network provides significant performance boosts across multiple metrics and can
be a major benefit to the NLU tasks.
| 2,020 |
Computation and Language
|
Normalization of Different Swedish Dialects Spoken in Finland
|
Our study presents a dialect normalization method for different Finland
Swedish dialects covering six regions. We tested 5 different models, and the
best model improved the word error rate from 76.45 to 28.58. Contrary to
results reported in earlier research on Finnish dialects, we found that
training the model with one word at a time gave the best results. We believe this
is due to the size of the training data available for the model. Our models are
accessible as a Python package. The study provides important information about
the adaptability of these methods in different contexts, and gives important
baselines for further study.
| 2,020 |
Computation and Language
|
Speech Recognition for Endangered and Extinct Samoyedic languages
|
Our study presents a series of experiments on speech recognition with
endangered and extinct Samoyedic languages, spoken in Northern and Southern
Siberia. To the best of our knowledge, this is the first time a functional ASR
system has been built for an extinct language. With the Kamas language, we achieve a Label
Error Rate of 15\%, and conclude through careful error analysis that this
quality is already very useful as a starting point for refined human
transcriptions. Our results with the related Nganasan language are more modest,
with the best model having an error rate of 33\%. We show, however, through
experiments where Kamas training data is enlarged incrementally, that Nganasan
results are in line with what is expected under low-resource circumstances of
the language. Based on this, we provide recommendations for scenarios in which
further language documentation or archive processing activities could benefit
from modern ASR technology. All training data and processing scripts have been
published on Zenodo with clear licences to enable further work on this
important topic.
| 2,020 |
Computation and Language
|
Infusing Finetuning with Semantic Dependencies
|
For natural language processing systems, two kinds of evidence support the
use of text representations from neural language models "pretrained" on large
unannotated corpora: performance on application-inspired benchmarks (Peters et
al., 2018, inter alia), and the emergence of syntactic abstractions in those
representations (Tenney et al., 2019, inter alia). On the other hand, the lack
of grounded supervision calls into question how well these representations can
ever capture meaning (Bender and Koller, 2020). We apply novel probes to recent
language models -- specifically focusing on predicate-argument structure as
operationalized by semantic dependencies (Ivanova et al., 2012) -- and find
that, unlike syntax, semantics is not brought to the surface by today's
pretrained models. We then use convolutional graph encoders to explicitly
incorporate semantic parses into task-specific finetuning, yielding benefits to
natural language understanding (NLU) tasks in the GLUE benchmark. This approach
demonstrates the potential for general-purpose (rather than task-specific)
linguistic supervision, above and beyond conventional pretraining and
finetuning. Several diagnostics help to localize the benefits of our approach.
| 2,021 |
Computation and Language
|
Rewriter-Evaluator Architecture for Neural Machine Translation
|
Encoder-decoder has been widely used in neural machine translation (NMT). A
few methods have been proposed to improve it with multiple passes of decoding.
However, their full potential is limited by a lack of appropriate termination
policies. To address this issue, we present a novel architecture,
Rewriter-Evaluator. It consists of a rewriter and an evaluator. Translating a
source sentence involves multiple passes. At every pass, the rewriter produces
a new translation to improve the past translation and the evaluator estimates
the translation quality to decide whether to terminate the rewriting process.
We also propose prioritized gradient descent (PGD) that facilitates training
the rewriter and the evaluator jointly. Though it incurs multiple passes of
decoding, Rewriter-Evaluator with the proposed PGD method can be trained in a
time similar to that of training encoder-decoder models. We apply the proposed
architecture to improve general NMT models (e.g., Transformer). We conduct
extensive experiments on two translation tasks, Chinese-English and
English-German, and show that the proposed architecture notably improves the
performances of NMT models and significantly outperforms previous baselines.
| 2,021 |
Computation and Language
|
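A minimal Python sketch of the multi-pass rewrite loop described in the abstract above. The `rewriter` and `evaluator` callables, the pass limit, and the quality threshold are placeholders for illustration, not the authors' implementation.

def translate_multipass(source, rewriter, evaluator, max_passes=5, threshold=0.9):
    # First pass: produce an initial translation from scratch.
    translation = rewriter(source, previous=None)
    for _ in range(max_passes - 1):
        # The evaluator estimates translation quality, assumed here to be in [0, 1].
        if evaluator(source, translation) >= threshold:
            break  # good enough: terminate the rewriting process
        # Otherwise, rewrite the past translation into an improved one.
        translation = rewriter(source, previous=translation)
    return translation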
Segmenting Natural Language Sentences via Lexical Unit Analysis
|
In this work, we present Lexical Unit Analysis (LUA), a framework for general
sequence segmentation tasks. Given a natural language sentence, LUA scores all
the valid segmentation candidates and utilizes dynamic programming (DP) to
extract the maximum scoring one. LUA enjoys a number of appealing properties
such as inherently guaranteeing the predicted segmentation to be valid and
facilitating globally optimal training and inference. Moreover, the practical
time complexity of LUA can be reduced to linear time, making it very efficient.
We have conducted extensive experiments on 5 tasks, including syntactic
chunking, named entity recognition (NER), slot filling, Chinese word
segmentation, and Chinese part-of-speech (POS) tagging, across 15 datasets. Our
models have achieved state-of-the-art performance on 13 of them. The
results also show that the F1 score of identifying long-length segments is
notably improved.
| 2,021 |
Computation and Language
|
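A small dynamic-programming sketch of the maximum-scoring segmentation idea in the LUA abstract above. The segment scorer, the maximum segment length, and the backpointer recovery are illustrative assumptions rather than the paper's exact formulation.

def best_segmentation(tokens, scorer, max_len=10):
    # scorer(tokens, i, j) is assumed to return a score for the segment tokens[i:j].
    n = len(tokens)
    best = [float("-inf")] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            score = best[i] + scorer(tokens, i, j)
            if score > best[j]:
                best[j], back[j] = score, i
    # Recover segment boundaries by walking the backpointers.
    segments, j = [], n
    while j > 0:
        segments.append((back[j], j))
        j = back[j]
    return best[n], segments[::-1]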
Empirical Analysis of Unlabeled Entity Problem in Named Entity
Recognition
|
In many scenarios, named entity recognition (NER) models severely suffer from
the unlabeled entity problem, where the entities of a sentence may not be fully
annotated. Through empirical studies performed on synthetic datasets, we find
two causes of performance degradation. One is the reduction of annotated
entities and the other is treating unlabeled entities as negative instances.
The first cause has less impact than the second one and can be mitigated by
adopting pretraining language models. The second cause seriously misguides a
model in training and greatly affects its performance. Based on the above
observations, we propose a general approach, which can almost eliminate the
misguidance brought by unlabeled entities. The key idea is to use negative
sampling that, to a large extent, avoids training NER models with unlabeled
entities. Experiments on synthetic datasets and real-world datasets show that
our model is robust to the unlabeled entity problem and surpasses prior baselines.
On well-annotated datasets, our model is competitive with the state-of-the-art
method.
| 2,021 |
Computation and Language
|
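An illustrative sketch of span-level negative sampling in the spirit of the abstract above: only a random subset of unlabeled spans is treated as negative during training, so unannotated true entities are less likely to mislead the model. The sampling ratio and maximum span length are assumptions, not the paper's exact settings.

import random

def sample_negative_spans(sentence_len, labeled_spans, ratio=0.35, max_span_len=10):
    # Candidate negatives: all spans up to max_span_len that are not annotated entities.
    labeled = set(labeled_spans)
    candidates = [(i, j) for i in range(sentence_len)
                  for j in range(i + 1, min(sentence_len, i + max_span_len) + 1)
                  if (i, j) not in labeled]
    # Sample only a fraction of them, proportional to sentence length.
    k = max(1, int(ratio * sentence_len))
    return random.sample(candidates, min(k, len(candidates)))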
A Framework for Generating Annotated Social Media Corpora with
Demographics, Stance, Civility, and Topicality
|
In this paper we introduce a framework for annotating social media text
corpora for various categories. Since social media data is generated by
individuals, it is important to annotate the text for the individuals'
demographic attributes to enable a socio-technical analysis of the corpora.
Furthermore, when analyzing a large dataset we can often annotate a small
sample of data and then train a prediction model using this sample to annotate
the full data for the relevant categories. We use a case study of a Facebook
comment corpus on student loan discussion which was annotated for gender,
military affiliation, age group, political leaning, race, stance, topicality,
neoliberalistic views and civility of the comment. We release three datasets of
Facebook comments for further research at:
https://github.com/socialmediaie/StudentDebtFbComments
| 2,020 |
Computation and Language
|
Causal BERT : Language models for causality detection between events
expressed in text
|
Causality understanding between events is a critical natural language
processing task that is helpful in many areas, including health care, business
risk management and finance. On close examination, one can find a huge amount
of textual content, both in formal documents and in content arising from social
media like Twitter, dedicated to communicating and exploring various types of
causality in the real world. Recognizing these "Cause-Effect"
relationships between natural language events remains a challenge simply
because they are often expressed implicitly. Implicit causality is hard to
detect through most of the techniques employed in the literature and can also,
at times, be perceived as ambiguous or vague. Also, although well-known datasets
do exist for this problem, the examples in them are limited in the range and
complexity of the causal relationships they depict, especially for implicit
relationships. Most of the contemporary methods are either based on
lexico-semantic pattern matching or are feature-driven supervised methods.
Therefore, as expected, these methods are more geared towards handling explicit
causal relationships, leading to limited coverage of implicit relationships,
and are hard to generalize. In this paper, we investigate language models'
capabilities for causal association among events expressed in natural language
text using sentence context combined with event information, and by leveraging
masked event context with in-domain and out-of-domain data distribution. Our
proposed methods achieve state-of-the-art performance in three different data
distributions and can be leveraged for extraction of a causal diagram and/or
building a chain of events from unstructured text.
| 2,021 |
Computation and Language
|
An Event Correlation Filtering Method for Fake News Detection
|
Nowadays, social network platforms have been the prime source for people to
experience news and events due to their capacities to spread information
rapidly, which inevitably provides a fertile ground for the dissemination of
fake news. Thus, it is important to detect fake news; otherwise it could mislead
the public and cause panic. Existing deep learning models have achieved great
progress to tackle the problem of fake news detection. However, training an
effective deep learning model usually requires a large amount of labeled news,
while it is expensive and time-consuming to provide sufficient labeled news in
actual applications. To improve the detection performance of fake news, we take
advantage of the event correlations of news and propose an event correlation
filtering method (ECFM) for fake news detection, mainly consisting of the news
characterizer, the pseudo label annotator, the event credibility updater, and
the news entropy selector. The news characterizer is responsible for extracting
textual features from news, which cooperates with the pseudo label annotator to
assign pseudo labels for unlabeled news by fully exploiting the event
correlations of news. In addition, the event credibility updater employs
adaptive Kalman filter to weaken the credibility fluctuations of events. To
further improve the detection performance, the news entropy selector
automatically discovers high-quality samples from pseudo labeled news by
quantifying their news entropy. Finally, ECFM is proposed to integrate them to
detect fake news in an event correlation filtering manner. Extensive
experiments prove that the explainable introduction of the event correlations
of news is beneficial to improve the detection performance of fake news.
| 2,020 |
Computation and Language
|
Quantitative Approaches to the Analysis of Predictions in Neural Machine
Translation (NMT)
|
As part of a larger project on optimal learning conditions in neural machine
translation, we investigate characteristic training phases of translation
engines. All our experiments are carried out using OpenNMT-Py: the
pre-processing step is implemented using the Europarl training corpus and the
INTERSECT corpus is used for validation. Longitudinal analyses of training
phases suggest that the progression of translations is not always linear.
Following the results of textometric explorations, we identify the importance
of the phenomena related to chronological progression, in order to map
different processes at work in neural machine translation (NMT).
| 2,020 |
Computation and Language
|
As Good as New. How to Successfully Recycle English GPT-2 to Make Models
for Other Languages
|
Large generative language models have been very successful for English, but
other languages lag behind, in part due to data and computational limitations.
We propose a method that may overcome these problems by adapting existing
pre-trained models to new languages. Specifically, we describe the adaptation
of English GPT-2 to Italian and Dutch by retraining lexical embeddings without
tuning the Transformer layers. As a result, we obtain lexical embeddings for
Italian and Dutch that are aligned with the original English lexical
embeddings. Additionally, we scale up complexity by transforming relearned
lexical embeddings of GPT-2 small to the GPT-2 medium embedding space. This
method minimises the amount of training and prevents information learned by
GPT-2 from being lost during adaptation. English GPT-2 models with relearned
lexical embeddings can generate realistic sentences in Italian and Dutch.
Though on average these sentences are still identifiable as artificial by
humans, they are assessed on par with sentences generated by a GPT-2 model
fully trained from scratch.
| 2,021 |
Computation and Language
|
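A rough sketch, using the Hugging Face transformers library, of the "relearn only the lexical embeddings" recipe from the abstract above: freeze the Transformer blocks and train a re-initialised token embedding matrix on target-language text. The new vocabulary size, learning rate, and the target-language tokenizer and training loop are assumptions and are omitted here.

import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
for param in model.parameters():
    param.requires_grad = False                   # freeze all pretrained weights

new_vocab_size = 40000                            # hypothetical Italian/Dutch vocab size
model.resize_token_embeddings(new_vocab_size)     # fresh lexical embeddings (lm_head is tied)
model.get_input_embeddings().weight.requires_grad = True

# Only the embedding matrix receives gradient updates.
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
# (fine-tuning loop on target-language text omitted)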
Direct multimodal few-shot learning of speech and images
|
We propose direct multimodal few-shot models that learn a shared embedding
space of spoken words and images from only a few paired examples. Imagine an
agent is shown an image along with a spoken word describing the object in the
picture, e.g. pen, book and eraser. After observing a few paired examples of
each class, the model is asked to identify the "book" in a set of unseen
pictures. Previous work used a two-step indirect approach relying on learned
unimodal representations: speech-speech and image-image comparisons are
performed across the support set of given speech-image pairs. We propose two
direct models which instead learn a single multimodal space where inputs from
different modalities are directly comparable: a multimodal triplet network
(MTriplet) and a multimodal correspondence autoencoder (MCAE). To train these
direct models, we mine speech-image pairs: the support set is used to pair up
unlabelled in-domain speech and images. In a speech-to-image digit matching
task, direct models outperform indirect models, with the MTriplet achieving the
best multimodal five-shot accuracy. We show that the improvements are due to
the combination of unsupervised and transfer learning in the direct models, and
the absence of two-step compounding errors.
| 2,021 |
Computation and Language
|
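A hedged sketch of a multimodal triplet objective of the kind the MTriplet abstract above describes: a spoken-word embedding is pulled towards the image embedding it was (automatically) paired with and pushed away from an image of another class. The margin, the cosine distance, and the encoders producing the embeddings are assumptions.

import torch
import torch.nn.functional as F

def multimodal_triplet_loss(speech_emb, pos_image_emb, neg_image_emb, margin=0.2):
    # Cosine distances between the speech anchor and the two image embeddings.
    d_pos = 1.0 - F.cosine_similarity(speech_emb, pos_image_emb, dim=-1)
    d_neg = 1.0 - F.cosine_similarity(speech_emb, neg_image_emb, dim=-1)
    # Standard margin-based triplet loss, averaged over the batch.
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()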
Towards Coinductive Models for Natural Language Understanding. Bringing
together Deep Learning and Deep Semantics
|
This article contains a proposal to add coinduction to the computational
apparatus of natural language understanding. This, we argue, will provide a
basis for more realistic, computationally sound, and scalable models of natural
language dialogue, syntax and semantics. Given that the bottom up, inductively
constructed, semantic and syntactic structures are brittle, and seemingly
incapable of adequately representing the meaning of longer sentences or
realistic dialogues, natural language understanding is in need of a new
foundation. Coinduction, which uses top down constraints, has been successfully
used in the design of operating systems and programming languages. Moreover,
implicitly it has been present in text mining, machine translation, and in some
attempts to model intensionality and modalities, which provides evidence that
it works. This article shows high-level formalizations of some such uses.
Since coinduction and induction can coexist, they can provide a common
language and a conceptual model for research in natural language understanding.
In particular, such an opportunity seems to be emerging in research on
compositionality. This article shows several examples of the joint appearance
of induction and coinduction in natural language processing. We argue that the
known individual limitations of induction and coinduction can be overcome in
empirical settings by a combination of the two methods. We see an open
problem in providing a theory of their joint use.
| 2,020 |
Computation and Language
|
Longitudinal Citation Prediction using Temporal Graph Neural Networks
|
Citation count prediction is the task of predicting the number of citations a
paper has gained after a period of time. Prior work viewed this as a static
prediction task. As papers and their citations evolve over time, considering
the dynamics of the number of citations a paper will receive would seem
logical. Here, we introduce the task of sequence citation prediction. The goal
is to accurately predict the trajectory of the number of citations a scholarly
work receives over time. We propose to view papers as a structured network of
citations, allowing us to use topological information as a learning signal.
Additionally, we learn how this dynamic citation network changes over time and
the impact of paper meta-data such as authors, venues and abstracts. To
approach the new task, we derive a dynamic citation network from Semantic
Scholar spanning over 42 years. We present a model which exploits topological
and temporal information using graph convolution networks paired with sequence
prediction, and compare it against multiple baselines, testing the importance
of topological and temporal information and analyzing model performance. Our
experiments show that leveraging both the temporal and topological information
greatly increases the performance of predicting citation counts over time.
| 2,021 |
Computation and Language
|
Multi-Sense Language Modelling
|
The effectiveness of a language model is influenced by its token
representations, which must encode contextual information and handle the same
word form having a plurality of meanings (polysemy). Currently, none of the
common language modelling architectures explicitly model polysemy. We propose a
language model which not only predicts the next word, but also its sense in
context. We argue that this higher prediction granularity may be useful for end
tasks such as assistive writing, and allow for a more precise linking of
language models with knowledge bases. We find that multi-sense language
modelling requires architectures that go beyond standard language models, and
here propose a structured prediction framework that decomposes the task into a
word prediction task followed by a sense prediction task. To aid sense
prediction, we utilise a
Graph Attention Network, which encodes definitions and example uses of word
senses. Overall, we find that multi-sense language modelling is a highly
challenging task, and suggest that future work focus on the creation of more
annotated training datasets.
| 2,022 |
Computation and Language
|
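A toy PyTorch sketch of the word-then-sense factorisation mentioned in the abstract above: a shared context vector first scores the next word, and a sense head then conditions on both the context and the predicted word distribution. The layer sizes and the fixed per-word sense inventory are made-up simplifications, not the paper's architecture.

import torch
import torch.nn as nn

class WordSenseHead(nn.Module):
    def __init__(self, hidden=768, vocab=20000, senses_per_word=8):
        super().__init__()
        self.word_head = nn.Linear(hidden, vocab)
        self.sense_head = nn.Linear(hidden + vocab, senses_per_word)

    def forward(self, context):
        # Step 1: predict the next word from the context vector.
        word_logits = self.word_head(context)
        word_probs = torch.softmax(word_logits, dim=-1)
        # Step 2: predict the sense, conditioned on context and word distribution.
        sense_logits = self.sense_head(torch.cat([context, word_probs], dim=-1))
        return word_logits, sense_logits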
Exploring Pair-Wise NMT for Indian Languages
|
In this paper, we address the task of improving pair-wise machine translation
for specific low resource Indian languages. Multilingual NMT models have
demonstrated a reasonable amount of effectiveness on resource-poor languages.
In this work, we show that the performance of these models can be significantly
improved through a filtered back-translation process and subsequent fine-tuning
on the limited pair-wise language corpora.
The analysis in this paper suggests that this method can significantly improve
a multilingual model's performance over its baseline, yielding state-of-the-art
results for various Indian languages.
| 2,020 |
Computation and Language
|
Automatic Standardization of Colloquial Persian
|
The Iranian Persian language has two varieties: standard and colloquial. Most
natural language processing tools for Persian assume that the text is in
standard form: this assumption is wrong in many real applications especially
web content. This paper describes a simple and effective standardization
approach based on sequence-to-sequence translation. We design an algorithm for
generating artificial parallel colloquial-to-standard data for learning a
sequence-to-sequence model. Moreover, we annotate a publicly available
evaluation dataset consisting of 1912 sentences from a diverse set of domains.
In our intrinsic evaluation, our model achieves a BLEU score of 62.8, compared
to 61.7 for an off-the-shelf rule-based standardization model and 46.4 for the
original text. We also show that our model improves
English-to-Persian machine translation in scenarios for which the training data
is from colloquial Persian, with a 1.4 absolute BLEU score improvement on the
development data and 0.8 on the test data.
| 2,020 |
Computation and Language
|
Multilingual Transfer Learning for QA Using Translation as Data
Augmentation
|
Prior work on multilingual question answering has mostly focused on using
large multilingual pre-trained language models (LM) to perform zero-shot
language-wise learning: train a QA model on English and test on other
languages. In this work, we explore strategies that improve cross-lingual
transfer by bringing the multilingual embeddings closer in the semantic space.
Our first strategy augments the original English training data with machine
translation-generated data. This results in a corpus of multilingual
silver-labeled QA pairs that is 14 times larger than the original training set.
In addition, we propose two novel strategies, language adversarial training and
language arbitration framework, which significantly improve the (zero-resource)
cross-lingual transfer performance and result in LM embeddings that are less
language-variant. Empirically, we show that the proposed models outperform the
previous zero-shot baseline on the recently introduced multilingual MLQA and
TyDiQA datasets.
| 2,021 |
Computation and Language
|
Towards Neural Programming Interfaces
|
It is notoriously difficult to control the behavior of artificial neural
networks such as generative neural language models. We recast the problem of
controlling natural language generation as that of learning to interface with a
pretrained language model, just as Application Programming Interfaces (APIs)
control the behavior of programs by altering hyperparameters. In this new
paradigm, a specialized neural network (called a Neural Programming Interface
or NPI) learns to interface with a pretrained language model by manipulating
the hidden activations of the pretrained model to produce desired outputs.
Importantly, no permanent changes are made to the weights of the original
model, allowing us to re-purpose pretrained models for new tasks without
overwriting any aspect of the language model. We also contribute a new data set
construction algorithm and GAN-inspired loss function that allows us to train
NPI models to control outputs of autoregressive transformers. In experiments
against other state-of-the-art approaches, we demonstrate the efficacy of our
methods using OpenAI's GPT-2 model, successfully controlling noun selection,
topic aversion, offensive speech filtering, and other aspects of language while
largely maintaining the controlled model's fluency under deterministic
settings.
| 2,020 |
Computation and Language
|
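A conceptual sketch of the NPI idea from the abstract above: a small controller network reads hidden activations of a frozen pretrained language model and adds offsets to steer generation, leaving the original weights untouched. The layer sizes and the additive form of the intervention are assumptions, and the GAN-inspired training objective is omitted.

import torch
import torch.nn as nn

class ActivationController(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        # Small network producing an offset for each hidden state.
        self.net = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, hidden))

    def forward(self, hidden_states):
        # The pretrained model's weights stay frozen; only the activations are perturbed.
        return hidden_states + self.net(hidden_states)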
Exploring Deep Neural Networks and Transfer Learning for Analyzing
Emotions in Tweets
|
In this paper, we present an experiment on using deep learning and transfer
learning techniques for emotion analysis in tweets and suggest a method to
interpret our deep learning models. The proposed approach for emotion analysis
combines a Long Short Term Memory (LSTM) network with a Convolutional Neural
Network (CNN). Then we extend this approach to emotion intensity prediction
using a transfer learning technique. Furthermore, we propose a technique to
visualize the importance of each word in a tweet to get a better understanding
of the model. Experimentally, we show in our analysis that the proposed models
outperform the state-of-the-art in emotion classification while maintaining
competitive results in predicting emotion intensity.
| 2,020 |
Computation and Language
|
Reinforced Multi-Teacher Selection for Knowledge Distillation
|
In natural language processing (NLP) tasks, slow inference speed and huge
footprints in GPU usage remain the bottleneck of applying pre-trained deep
models in production. As a popular method for model compression, knowledge
distillation transfers knowledge from one or multiple large (teacher) models to
a small (student) model. When multiple teacher models are available in
distillation, the state-of-the-art methods assign a fixed weight to a teacher
model in the whole distillation. Furthermore, most of the existing methods
allocate an equal weight to every teacher model. In this paper, we observe
that, due to the complexity of training examples and the differences in student
model capability, learning differentially from teacher models can lead to
better performance of the distilled student models. We systematically develop a
reinforced method to dynamically assign weights to teacher models for different
training instances and optimize the performance of the student model. Our extensive
experimental results on several NLP tasks clearly verify the feasibility and
effectiveness of our approach.
| 2,020 |
Computation and Language
|
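A simplified sketch of a per-instance weighted multi-teacher distillation loss in the spirit of the abstract above. The instance-specific teacher weights would come from a learned (e.g. reinforced) policy; here they are simply an input tensor of shape [batch, num_teachers], and the temperature is an assumption.

import torch
import torch.nn.functional as F

def weighted_kd_loss(student_logits, teacher_logits_list, weights, T=2.0):
    losses = []
    for t, teacher_logits in enumerate(teacher_logits_list):
        # Per-example KL divergence between softened teacher and student distributions.
        kd = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="none").sum(-1)
        # Weight each example's loss by that teacher's instance-specific weight.
        losses.append(weights[:, t] * kd)
    return torch.stack(losses, dim=0).sum(0).mean() * T * T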
EQG-RACE: Examination-Type Question Generation
|
Question Generation (QG) is an essential component of automatic intelligent
tutoring systems, which aims to generate high-quality questions for
facilitating reading practice and assessments. However, existing QG
technologies encounter several key issues concerning the biased and unnatural
language sources of datasets which are mainly obtained from the Web (e.g.
SQuAD). In this paper, we propose an innovative Examination-type Question
Generation approach (EQG-RACE) to generate exam-like questions based on a
dataset extracted from RACE. Two main strategies are employed in EQG-RACE for
dealing with discrete answer information and reasoning among long contexts. A
Rough Answer and Key Sentence Tagging scheme is utilized to enhance the
representations of input. An Answer-guided Graph Convolutional Network (AG-GCN)
is designed to capture structure information in revealing the inter-sentences
and intra-sentence relations. Experimental results show state-of-the-art
performance of EQG-RACE, which is clearly superior to the baselines. In
addition, our work has established a new QG prototype with a reshaped dataset
and QG method, which provides an important benchmark for related research in
future work. We will make our data and code publicly available for further
research.
| 2,020 |
Computation and Language
|
Document-aligned Japanese-English Conversation Parallel Corpus
|
Sentence-level (SL) machine translation (MT) has reached acceptable quality
for many high-resourced languages, but not document-level (DL) MT, which is
difficult to 1) train, given the small amount of DL data, and 2) evaluate, as the
main methods and data sets focus on SL evaluation. To address the first issue,
we present a document-aligned Japanese-English conversation corpus, including
balanced, high-quality business conversation data for tuning and testing. As
for the second issue, we manually identify the main areas where SL MT fails to
produce adequate translations for lack of context. We then create an evaluation
set where these phenomena are annotated to facilitate automatic evaluation of DL
systems. We train MT models using our corpus to demonstrate how using context
leads to improvements.
| 2,020 |
Computation and Language
|
Improving Task-Agnostic BERT Distillation with Layer Mapping Search
|
Knowledge distillation (KD) which transfers the knowledge from a large
teacher model to a small student model, has been widely used to compress the
BERT model recently. Besides the supervision on the output in the original KD,
recent works show that layer-level supervision is crucial to the performance of
the student BERT model. However, previous works designed the layer mapping
strategy heuristically (e.g., uniform or last-layer), which can lead to
inferior performance. In this paper, we propose to use the genetic algorithm
(GA) to search for the optimal layer mapping automatically. To accelerate the
search process, we further propose a proxy setting where a small portion of the
training corpus is sampled for distillation, and three representative tasks
are chosen for evaluation. After obtaining the optimal layer mapping, we
perform the task-agnostic BERT distillation with it on the whole corpus to
build a compact student model, which can be directly fine-tuned on downstream
tasks. Comprehensive experiments on the evaluation benchmarks demonstrate that
1) layer mapping strategy has a significant effect on task-agnostic BERT
distillation and different layer mappings can result in quite different
performances; 2) the optimal layer mapping strategy from the proposed search
process consistently outperforms the other heuristic ones; 3) with the optimal
layer mapping, our student model achieves state-of-the-art performance on the
GLUE tasks.
| 2,020 |
Computation and Language
|
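A tiny genetic-algorithm sketch for layer-mapping search, loosely following the abstract above: an individual assigns each student layer to a teacher layer, and `fitness` is assumed to run proxy distillation plus evaluation on a few tasks. The population size, mutation rate, and mapping encoding are illustrative choices.

import random

def search_layer_mapping(fitness, student_layers=4, teacher_layers=12,
                         pop_size=20, generations=10, mutate_p=0.2):
    def random_individual():
        return sorted(random.randrange(teacher_layers) for _ in range(student_layers))

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]                 # selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, student_layers)
            child = a[:cut] + b[cut:]                         # crossover
            if random.random() < mutate_p:                    # mutation
                child[random.randrange(student_layers)] = random.randrange(teacher_layers)
            children.append(sorted(child))
        population = parents + children
    return max(population, key=fitness)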
ParsiNLU: A Suite of Language Understanding Challenges for Persian
|
Despite the progress made in recent years in addressing natural language
understanding (NLU) challenges, the majority of this progress remains to be
concentrated on resource-rich languages like English. This work focuses on the
Persian language, one of the most widely spoken languages in the world, for
which there are nevertheless few NLU datasets available. The availability
of high-quality evaluation datasets is a necessity for reliable assessment of
the progress on different NLU tasks and domains. We introduce ParsiNLU, the
first benchmark for the Persian language that includes a range of high-level tasks
-- Reading Comprehension, Textual Entailment, etc. These datasets are collected
in a multitude of ways, often involving manual annotations by native speakers.
This results in over 14.5$k$ new instances across 6 distinct NLU tasks.
Besides, we present the first results on state-of-the-art monolingual and
multi-lingual pre-trained language-models on this benchmark and compare them
with human performance, which provides valuable insights into our ability to
tackle natural language understanding challenges in Persian. We hope ParsiNLU
fosters further research and advances in Persian language understanding.
| 2,021 |
Computation and Language
|
Improving Zero Shot Learning Baselines with Commonsense Knowledge
|
Zero shot learning -- the problem of training and testing on a completely
disjoint set of classes -- relies greatly on its ability to transfer knowledge
from train classes to test classes. Traditionally, semantic embeddings
consisting of human-defined attributes (HA) or distributed word embeddings
(DWE) are used to facilitate this transfer by improving the association between
visual and semantic embeddings. In this paper, we take advantage of explicit
relations between nodes defined in ConceptNet, a commonsense knowledge graph,
to generate commonsense embeddings of the class labels by using a graph
convolution network-based autoencoder. Our experiments performed on three
standard benchmark datasets surpass the strong baselines when we fuse our
commonsense embeddings with existing semantic embeddings, i.e. HA and DWE.
| 2,020 |
Computation and Language
|
Improved Robustness to Disfluencies in RNN-Transducer Based Speech
Recognition
|
Automatic Speech Recognition (ASR) based on Recurrent Neural Network
Transducers (RNN-T) is gaining interest in the speech community. We investigate
data selection and preparation choices aiming for improved robustness of RNN-T
ASR to speech disfluencies with a focus on partial words. For evaluation we use
clean data, data with disfluencies and a separate dataset with speech affected
by stuttering. We show that after including a small amount of data with
disfluencies in the training set, the recognition accuracy on the tests with
disfluencies and stuttering improves. Increasing the amount of training data
with disfluencies gives additional gains without degradation on the clean data.
We also show that replacing partial words with a dedicated token helps to get
even better accuracy on utterances with disfluencies and stuttering. The
evaluation of our best model shows 22.5% and 16.4% relative WER reduction on
those two evaluation sets.
| 2,020 |
Computation and Language
|
Morphology Matters: A Multilingual Language Modeling Analysis
|
Prior studies in multilingual language modeling (e.g., Cotterell et al.,
2018; Mielke et al., 2019) disagree on whether or not inflectional morphology
makes languages harder to model. We attempt to resolve the disagreement and
extend those studies. We compile a larger corpus of 145 Bible translations in
92 languages and a larger number of typological features. We fill in missing
typological data for several languages and consider corpus-based measures of
morphological complexity in addition to expert-produced typological features.
We find that several morphological measures are significantly associated with
higher surprisal when LSTM models are trained with BPE-segmented data. We also
investigate linguistically-motivated subword segmentation strategies like
Morfessor and Finite-State Transducers (FSTs) and find that these segmentation
strategies yield better performance and reduce the impact of a language's
morphology on language modeling.
| 2,021 |
Computation and Language
|
Discriminating Between Similar Nordic Languages
|
Automatic language identification is a challenging problem. Discriminating
between closely related languages is especially difficult. This paper presents
a machine learning approach for automatic language identification for the
Nordic languages, which often suffer miscategorisation by existing
state-of-the-art tools. Concretely, we focus on discrimination between six
Nordic languages: Danish, Swedish, Norwegian (Nynorsk), Norwegian (Bokm{\aa}l),
Faroese and Icelandic.
| 2,023 |
Computation and Language
|
Orthogonal Language and Task Adapters in Zero-Shot Cross-Lingual
Transfer
|
Adapter modules, additional trainable parameters that enable efficient
fine-tuning of pretrained transformers, have recently been used for language
specialization of multilingual transformers, improving downstream zero-shot
cross-lingual transfer. In this work, we propose orthogonal language and task
adapters (dubbed orthoadapters) for cross-lingual transfer. They are trained to
encode language- and task-specific information that is complementary (i.e.,
orthogonal) to the knowledge already stored in the pretrained transformer's
parameters. Our zero-shot cross-lingual transfer experiments, involving three
tasks (POS-tagging, NER, NLI) and a set of 10 diverse languages, 1) point to
the usefulness of orthoadapters in cross-lingual transfer, especially for the
most complex NLI task, but also 2) indicate that the optimal adapter
configuration highly depends on the task and the target language. We hope that
our work will motivate a wider investigation of the usefulness of orthogonality
constraints in language- and task-specific fine-tuning of pretrained
transformers.
| 2,020 |
Computation and Language
|
TF-CR: Weighting Embeddings for Text Classification
|
Text classification, the task of assigning categories to textual instances, is
a very common task in information science. Methods
learning distributed representations of words, such as word embeddings, have
become popular in recent years as the features to use for text classification
tasks. Despite the increasing use of word embeddings for text classification,
these are generally used in an unsupervised manner, i.e. information derived
from class labels in the training data is not exploited. While word embeddings
inherently capture the distributional characteristics of words, and the contexts
observed around them in a large dataset, they are not optimised to consider the
distributions of words across categories in the classification dataset at hand.
To optimise text representations based on word embeddings by incorporating
class distributions in the training data, we propose the use of weighting
schemes that assign a weight to embeddings of each word based on its saliency
in each class. To achieve this, we introduce a novel weighting scheme, Term
Frequency-Category Ratio (TF-CR), which can weight high-frequency,
category-exclusive words higher when computing word embeddings. Our experiments
on 16 classification datasets show the effectiveness of TF-CR, leading to
improved performance scores over existing weighting schemes, with a performance
gap that increases as the size of the training data grows.
| 2,020 |
Computation and Language
|
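A hedged implementation sketch of a TF-CR-style weight, following one plausible reading of the abstract above: the score of a word w for a class c combines how frequent w is within c (term frequency) with how exclusive w is to c (its share of all occurrences that fall in c). The exact normalisation may differ from the paper; a document vector would then be a weighted average of word embeddings using these scores.

from collections import Counter

def tf_cr_weights(docs, labels):
    # docs: list of token lists; labels: class label per document.
    per_class = {}
    total = Counter()
    for tokens, label in zip(docs, labels):
        per_class.setdefault(label, Counter()).update(tokens)
        total.update(tokens)
    weights = {}
    for label, counts in per_class.items():
        n_tokens = sum(counts.values())
        # TF = c / n_tokens (frequency within class), CR = c / total[w] (category ratio).
        weights[label] = {w: (c / n_tokens) * (c / total[w]) for w, c in counts.items()}
    return weights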
Yelp Review Rating Prediction: Machine Learning and Deep Learning Models
|
We predict restaurant ratings from Yelp reviews based on Yelp Open Dataset.
The data distribution is presented, and a balanced training dataset is built. Two
vectorizers are experimented with for feature engineering. Four machine learning
models including Naive Bayes, Logistic Regression, Random Forest, and Linear
Support Vector Machine are implemented. Four transformer-based models
containing BERT, DistilBERT, RoBERTa, and XLNet are also applied. Accuracy,
weighted F1 score, and confusion matrix are used for model evaluation. XLNet
achieves 70% accuracy for 5-star classification, compared with 64% for Logistic
Regression.
| 2,020 |
Computation and Language
|
Mapping the Timescale Organization of Neural Language Models
|
In the human brain, sequences of language input are processed within a
distributed and hierarchical architecture, in which higher stages of processing
encode contextual information over longer timescales. In contrast, in recurrent
neural networks which perform natural language processing, we know little about
how the multiple timescales of contextual information are functionally
organized. Therefore, we applied tools developed in neuroscience to map the
"processing timescales" of individual units within a word-level LSTM language
model. This timescale-mapping method assigned long timescales to units
previously found to track long-range syntactic dependencies. Additionally, the
mapping revealed a small subset of the network (less than 15% of units) with
long timescales and whose function had not previously been explored. We next
probed the functional organization of the network by examining the relationship
between the processing timescale of units and their network connectivity. We
identified two classes of long-timescale units: "controller" units composed a
densely interconnected subnetwork and strongly projected to the rest of the
network, while "integrator" units showed the longest timescales in the network,
and expressed projection profiles closer to the mean projection profile.
Ablating integrator and controller units affected model performance at
different positions within a sentence, suggesting distinctive functions of
these two sets of units. Finally, we tested the generalization of these results
to a character-level LSTM model and models with different architectures. In
summary, we demonstrated a model-free technique for mapping the timescale
organization in recurrent neural networks, and we applied this method to reveal
the timescale and functional organization of neural language models.
| 2,021 |
Computation and Language
|
Less Is More: Improved RNN-T Decoding Using Limited Label Context and
Path Merging
|
End-to-end models that condition the output label sequence on all previously
predicted labels have emerged as popular alternatives to conventional systems
for automatic speech recognition (ASR). Since unique label histories correspond
to distinct model states, such models are decoded using an approximate
beam-search process which produces a tree of hypotheses.
In this work, we study the influence of the amount of label context on the
model's accuracy, and its impact on the efficiency of the decoding process. We
find that we can limit the context of the recurrent neural network transducer
(RNN-T) during training to just four previous word-piece labels, without
degrading word error rate (WER) relative to the full-context baseline. Limiting
context also provides opportunities to improve the efficiency of the
beam-search process during decoding by removing redundant paths from the active
beam, and instead retaining them in the final lattice. This path-merging scheme
can also be applied when decoding the baseline full-context model through an
approximation. Overall, we find that the proposed path-merging scheme is
extremely effective, allowing us to improve oracle WERs by up to 36% over the
baseline, while simultaneously reducing the number of model evaluations by up
to 5.3% without any degradation in WER.
| 2,020 |
Computation and Language
|
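A schematic sketch of beam merging under limited label context, as discussed in the abstract above: hypotheses whose last k output labels agree are indistinguishable to a k-label-context model, so only the better-scoring one needs to stay on the active beam (the paper instead retains merged paths in the final lattice). Hypotheses are represented here as (label_sequence, log_prob) pairs; this is not the production decoder.

def merge_hypotheses(beam, context_size=4):
    merged = {}
    for labels, log_prob in beam:
        # Hypotheses sharing the same trailing context are redundant going forward.
        key = tuple(labels[-context_size:])
        if key not in merged or log_prob > merged[key][1]:
            merged[key] = (labels, log_prob)
    return list(merged.values())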
SenSeNet: Neural Keyphrase Generation with Document Structure
|
Keyphrase Generation (KG) is the task of generating central topics from a
given document or literary work, which captures the crucial information
necessary to understand the content. Documents such as scientific literature
contain rich meta-sentence information, which represents the logical-semantic
structure of the documents. However, previous approaches ignore the constraints
of document logical structure, and hence they mistakenly generate keyphrases
from unimportant sentences. To address this problem, we propose a new method
called Sentence Selective Network (SenSeNet) to incorporate the meta-sentence
inductive bias into KG. In SenSeNet, we use a straight-through estimator for
end-to-end training and incorporate weak supervision in the training of the
sentence selection module. Experimental results show that SenSeNet can
consistently improve the performance of major KG models based on the seq2seq
framework, which demonstrates the effectiveness of capturing structural
information and distinguishing the significance of sentences in the KG task.
| 2,020 |
Computation and Language
|
GDPNet: Refining Latent Multi-View Graph for Relation Extraction
|
Relation Extraction (RE) is to predict the relation type of two entities that
are mentioned in a piece of text, e.g., a sentence or a dialogue. When the
given text is long, it is challenging to identify indicative words for the
relation prediction. Recent advances on RE task are from BERT-based sequence
modeling and graph-based modeling of relationships among the tokens in the
sequence. In this paper, we propose to construct a latent multi-view graph to
capture various possible relationships among tokens. We then refine this graph
to select important words for relation prediction. Finally, the representation
of the refined graph and the BERT-based sequence representation are
concatenated for relation extraction. Specifically, in our proposed GDPNet
(Gaussian Dynamic Time Warping Pooling Net), we utilize Gaussian Graph
Generator (GGG) to generate edges of the multi-view graph. The graph is then
refined by Dynamic Time Warping Pooling (DTWPool). On DialogRE and TACRED, we
show that GDPNet achieves the best performance on dialogue-level RE, and
comparable performance with the state of the art on sentence-level RE.
| 2,023 |
Computation and Language
|
AffectON: Incorporating Affect Into Dialog Generation
|
Due to its expressivity, natural language is paramount for explicit and
implicit affective state communication among humans. The same linguistic
inquiry (e.g., How are you?) might induce responses with different affects
depending on the affective state of the conversational partner(s) and the
context of the conversation. Yet, most dialog systems do not consider affect as
a constitutive aspect of response generation. In this paper, we introduce
AffectON, an approach for generating affective responses during inference. For
generating language in a targeted affect, our approach leverages a
probabilistic language model and an affective space. AffectON is language model
agnostic, since it can work with probabilities generated by any language model
(e.g., sequence-to-sequence models, neural language models, n-grams). Hence, it
can be employed for both affective dialog and affective language generation. We
experimented with affective dialog generation and evaluated the generated text
objectively and subjectively. For the subjective part of the evaluation, we
designed a custom user interface for rating and provided recommendations for
the design of such interfaces. The results, both subjective and objective,
demonstrate that our approach is successful in pulling the generated language
toward the targeted affect, with little sacrifice in syntactic coherence.
| 2,020 |
Computation and Language
|
Discriminative Pre-training for Low Resource Title Compression in
Conversational Grocery
|
The ubiquity of smart voice assistants has made conversational shopping
commonplace. This is especially true for low consideration segments like
grocery. A central problem in conversational grocery is the automatic
generation of short product titles that can be read out fast during a
conversation. Several supervised models have been proposed in the literature
that leverage manually labeled datasets and additional product features to
generate short titles automatically. However, obtaining large amounts of
labeled data is expensive and most grocery item pages are not as feature-rich
as other categories. To address this problem we propose a pre-training based
solution that makes use of unlabeled data to learn contextual product
representations which can then be fine-tuned to obtain better title compression
even in a low resource setting. We use a self-attentive BiLSTM encoder network
with a time distributed softmax layer for the title compression task. We
overcome the vocabulary mismatch problem by using a hybrid embedding layer that
combines pre-trained word embeddings with trainable character level
convolutions. We pre-train this network as a discriminator on a replaced-token
detection task over a large number of unlabeled grocery product titles.
Finally, we fine tune this network, without any modifications, with a small
labeled dataset for the title compression task. Experiments on Walmart's online
grocery catalog show our model achieves performance comparable to
state-of-the-art models like BERT and XLNet. When fine tuned on all of the
available training data our model attains an F1 score of 0.8558 which lags the
best performing model, BERT-Base, by only 2.78% and XLNet by 0.28%, while using
55 times fewer parameters than both. Further, when allowed to fine-tune on 5%
of the training data only, our model outperforms BERT-Base by 24.3% in F1
score.
| 2,020 |
Computation and Language
|
Syntactic representation learning for neural network based TTS with
syntactic parse tree traversal
|
The syntactic structure of a sentence is correlated with the prosodic structure
of the speech, which is crucial for improving the prosody and naturalness of a
text-to-speech (TTS) system. Nowadays TTS systems usually try
to incorporate syntactic structure information with manually designed features
based on expert knowledge. In this paper, we propose a syntactic representation
learning method based on syntactic parse tree traversal to automatically
utilize the syntactic structure information. Two constituent label sequences
are linearized through left-first and right-first traversals of the constituent
parse tree. Syntactic representations are then extracted at word level from
each constituent label sequence by a corresponding uni-directional gated
recurrent unit (GRU) network. Meanwhile, nuclear-norm maximization loss is
introduced to enhance the discriminability and diversity of the embeddings of
constituent labels. Upsampled syntactic representations and phoneme embeddings
are concatenated to serve as the encoder input of Tacotron2. Experimental
results demonstrate the effectiveness of our proposed approach, with mean
opinion score (MOS) increasing from 3.70 to 3.82 and ABX preference exceeding
by 17% compared with the baseline. In addition, for sentences with multiple
syntactic parse trees, prosodic differences can be clearly perceived from the
synthesized speech.
| 2,020 |
Computation and Language
|
C2C-GenDA: Cluster-to-Cluster Generation for Data Augmentation of Slot
Filling
|
Slot filling, a fundamental module of spoken language understanding, often
suffers from insufficient quantity and diversity of training data. To remedy
this, we propose a novel Cluster-to-Cluster generation framework for Data
Augmentation (DA), named C2C-GenDA. It enlarges the training set by
reconstructing existing utterances into alternative expressions while keeping
the semantics. Different from previous DA works that reconstruct utterances one by
one independently, C2C-GenDA jointly encodes multiple existing utterances of
the same semantics and simultaneously decodes multiple unseen expressions.
Jointly generating multiple new utterances makes it possible to consider the relations
between generated instances and encourages diversity. Besides, encoding
multiple existing utterances endows C2C with a wider view of existing
expressions, helping to reduce generation that duplicates existing data.
Experiments on ATIS and Snips datasets show that instances augmented by
C2C-GenDA improve slot filling by 7.99 (11.9%) and 5.76 (13.6%) F-scores
respectively, when there are only hundreds of training utterances.
| 2,020 |
Computation and Language
|
Context-Enhanced Entity and Relation Embedding for Knowledge Graph
Completion
|
Most research on knowledge graph completion learns representations of
entities and relations to predict missing links in incomplete knowledge graphs.
However, these methods fail to take full advantage of the contextual
information of both entities and relations. Here, we extract contexts of entities and
relations from the triplets which they compose. We propose a model named AggrE,
which conducts efficient aggregations on entity contexts and relation contexts
over multiple hops, and learns context-enhanced entity and relation
embeddings for knowledge graph completion. The experimental results show that
AggrE is competitive with existing models.
| 2,020 |
Computation and Language
|
Iterative Utterance Segmentation for Neural Semantic Parsing
|
Neural semantic parsers usually fail to parse long and complex utterances
into correct meaning representations, due to the lack of exploiting the
principle of compositionality. To address this issue, we present a novel
framework for boosting neural semantic parsers via iterative utterance
segmentation. Given an input utterance, our framework iterates between two
neural modules: a segmenter for segmenting a span from the utterance, and a
parser for mapping the span into a partial meaning representation. Then, these
intermediate parsing results are composed into the final meaning
representation. One key advantage is that this framework does not require any
handcrafted templates or additional labeled data for utterance segmentation: we
achieve this through proposing a novel training method, in which the parser
provides pseudo supervision for the segmenter. Experiments on Geo,
ComplexWebQuestions, and Formulas show that our framework can consistently
improve the performance of neural semantic parsers in different domains. On data
splits that require compositional generalization, our framework brings
significant accuracy gains: Geo 63.1 to 81.2, Formulas 59.7 to 72.7,
ComplexWebQuestions 27.1 to 56.3.
| 2,020 |
Computation and Language
|
SPARTA: Speaker Profiling for ARabic TAlk
|
This paper proposes a novel approach to an automatic estimation of three
speaker traits from Arabic speech: gender, emotion, and dialect. After showing
promising results on different text classification tasks, the multi-task
learning (MTL) approach is used in this paper for Arabic speech classification
tasks. The dataset was assembled from six publicly available datasets. First,
the datasets were edited and thoroughly divided into train, development, and
test sets (open to the public), and a benchmark was set for each task and
dataset throughout the paper. Then, three different networks were explored:
Long Short Term Memory (LSTM), Convolutional Neural Network (CNN), and
Fully-Connected Neural Network (FCNN) on five different types of features: two
raw features (MFCC and MEL) and three pre-trained vectors (i-vectors,
d-vectors, and x-vectors). LSTM and CNN networks were implemented using raw
features: MFCC and MEL, where FCNN was explored on the pre-trained vectors
while varying the hyper-parameters of these networks to obtain the best results
for each dataset and task. MTL was evaluated against the single task learning
(STL) approach for the three tasks and six datasets, in which MTL and
pre-trained vectors almost consistently outperformed STL. All the data and
pre-trained models used in this paper are available and can be acquired by the
public.
| 2,020 |
Computation and Language
|
Mask-Align: Self-Supervised Neural Word Alignment
|
Word alignment, which aims to align translationally equivalent words between
source and target sentences, plays an important role in many natural language
processing tasks. Current unsupervised neural alignment methods focus on
inducing alignments from neural machine translation models, which does not
leverage the full context in the target sequence. In this paper, we propose
Mask-Align, a self-supervised word alignment model that takes advantage of the
full context on the target side. Our model masks out each target token and
predicts it conditioned on both the source and the remaining target tokens. This
two-step process is based on the assumption that the source token contributing
most to recovering the masked target token should be aligned. We also introduce
an attention variant called leaky attention, which alleviates the problem of
unexpected high cross-attention weights on special tokens such as periods.
Experiments on four language pairs show that our model outperforms previous
unsupervised neural aligners and obtains new state-of-the-art results.
| 2,021 |
Computation and Language
|
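A conceptual sketch of alignment extraction in a mask-and-predict setup like the one in the abstract above: each target position is masked in turn and aligned to the source position receiving the largest cross-attention weight when the masked token is recovered. The one-position-at-a-time loop and the `model` interface (returning a [tgt_len, src_len] cross-attention matrix) are simplifications for clarity.

import torch

def extract_alignments(model, src_tokens, tgt_tokens, mask_id):
    alignments = []
    for j in range(len(tgt_tokens)):
        masked = list(tgt_tokens)
        masked[j] = mask_id                           # mask out the j-th target token
        cross_attn = model(src_tokens, masked)        # assumed shape: [tgt_len, src_len]
        # Align position j to the source token attended to most when recovering it.
        alignments.append((j, int(torch.argmax(cross_attn[j]).item())))
    return alignments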
Contrastive Learning with Adversarial Perturbations for Conditional Text
Generation
|
Recently, sequence-to-sequence (seq2seq) models with the Transformer
architecture have achieved remarkable performance on various conditional text
generation tasks, such as machine translation. However, most of them are
trained with teacher forcing with the ground truth label given at each time
step, without being exposed to incorrectly generated tokens during training,
which hurts their generalization to unseen inputs; this is known as the "exposure
bias" problem. In this work, we propose to mitigate the conditional text
generation problem by contrasting positive pairs with negative pairs, such that
the model is exposed to various valid or incorrect perturbations of the inputs,
for improved generalization. However, training the model with naive contrastive
learning framework using random non-target sequences as negative examples is
suboptimal, since they are easily distinguishable from the correct output,
especially so with models pretrained with large text corpora. Also, generating
positive examples requires domain-specific augmentation heuristics which may
not generalize over diverse domains. To tackle this problem, we propose a
principled method to generate positive and negative samples for contrastive
learning of seq2seq models. Specifically, we generate negative examples by
adding small perturbations to the input sequence to minimize its conditional
likelihood, and positive examples by adding large perturbations while enforcing
it to have a high conditional likelihood. Such "hard" positive and negative
pairs generated using our method guide the model to better distinguish correct
outputs from incorrect ones. We empirically show that our proposed method
significantly improves the generalization of seq2seq models on three text
generation tasks - machine translation, text summarization, and question
generation.
| 2,021 |
Computation and Language
|
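A rough sketch of constructing an adversarial "hard negative" for a seq2seq batch, in the spirit of the abstract above: the (differentiable) input embeddings are nudged a small step in the direction that increases the negative log-likelihood of the gold output, yielding a low-likelihood neighbour of the input. The step size is an assumption, and the construction of high-likelihood positives with larger perturbations is omitted.

import torch

def hard_negative_embeddings(loss_fn, input_embeds, epsilon=1e-2):
    # loss_fn is assumed to return the NLL of the gold output given these embeddings.
    input_embeds = input_embeds.detach().requires_grad_(True)
    loss = loss_fn(input_embeds)
    grad, = torch.autograd.grad(loss, input_embeds)
    # Move along the normalised gradient to decrease the conditional likelihood.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return (input_embeds + delta).detach()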
Unsupervised Summarization for Chat Logs with Topic-Oriented Ranking and
Context-Aware Auto-Encoders
|
Automatic chat summarization can help people quickly grasp important
information from numerous chat messages. Unlike conventional documents, chat
logs usually have fragmented and evolving topics. In addition, these logs
contain many elliptical and interrogative sentences, which make the
chat summarization highly context dependent. In this work, we propose a novel
unsupervised framework called RankAE to perform chat summarization without
employing manually labeled data. RankAE consists of a topic-oriented ranking
strategy that selects topic utterances according to centrality and diversity
simultaneously, as well as a denoising auto-encoder that is carefully designed
to generate succinct but context-informative summaries based on the selected
utterances. To evaluate the proposed method, we collect a large-scale dataset
of chat logs from a customer service environment and build an annotated set
only for model evaluation. Experimental results show that RankAE significantly
outperforms other unsupervised methods and is able to generate high-quality
summaries in terms of relevance and topic coverage.
| 2,021 |
Computation and Language
|
Topic-Oriented Spoken Dialogue Summarization for Customer Service with
Saliency-Aware Topic Modeling
|
In a customer service system, dialogue summarization can boost service
efficiency by automatically creating summaries for long spoken dialogues in
which customers and agents try to address issues about specific topics. In this
work, we focus on topic-oriented dialogue summarization, which generates highly
abstractive summaries that preserve the main ideas from dialogues. In spoken
dialogues, abundant dialogue noise and common semantics could obscure the
underlying informative content, making the general topic modeling approaches
difficult to apply. In addition, for customer service, role-specific
information matters and is an indispensable part of a summary. To effectively
perform topic modeling on dialogues and capture multi-role information, in this
work we propose a novel topic-augmented two-stage dialogue summarizer (TDS)
jointly with a saliency-aware neural topic model (SATM) for topic-oriented
summarization of customer service dialogues. Comprehensive studies on a
real-world Chinese customer service dataset demonstrated the superiority of our
method against several strong baselines.
| 2,021 |
Computation and Language
|
LRC-BERT: Latent-representation Contrastive Knowledge Distillation for
Natural Language Understanding
|
The pre-training models such as BERT have achieved great results in various
natural language processing problems. However, their large number of parameters
requires significant amounts of memory and inference time, which makes it
difficult to deploy them on edge devices. In this work, we propose a
knowledge distillation method LRC-BERT based on contrastive learning to fit the
output of the intermediate layer from the angular distance aspect, which is not
considered by the existing distillation methods. Furthermore, we introduce a
gradient perturbation-based training architecture in the training phase to
increase the robustness of LRC-BERT, which is the first attempt in knowledge
distillation. Additionally, in order to better capture the distribution
characteristics of the intermediate layer, we design a two-stage training
method for the total distillation loss. Finally, by evaluating on 8 datasets from
the General Language Understanding Evaluation (GLUE) benchmark, we show that the
performance of the proposed LRC-BERT exceeds that of existing state-of-the-art
methods, which demonstrates the effectiveness of our method.
| 2,020 |
Computation and Language
|
Generating Math Word Problems from Equations with Topic Controlling and
Commonsense Enforcement
|
Recent years have seen significant advancement in text generation tasks with
the help of neural language models. However, there exists a challenging task:
generating math problem text based on mathematical equations, which has made
little progress so far. In this paper, we present a novel equation-to-problem
text generation model. In our model, 1) we propose a flexible scheme to
effectively encode math equations and enhance the equation encoder with a
Variational Autoencoder (VAE); 2) given a math equation, we perform topic
selection, after which a dynamic topic memory mechanism is introduced to
restrict the topic distribution of the generator; and 3) to avoid the
commonsense violations of traditional generation models, we pretrain word
embeddings with a background knowledge graph (KG) and link decoded words to
related words in the KG, injecting background knowledge into our model. We
evaluate our model with both automatic metrics and human evaluation;
experiments demonstrate that it outperforms baseline and previous models in
both the accuracy and richness of the generated problem text.
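The dynamic topic memory idea can be illustrated with a small attention module in which the decoder state attends only over topics selected for the input equation; the module name, dimensions, and additive logit bias below are hypothetical, not the authors' code.

```python
# Illustrative dynamic topic memory: decoder states attend over a masked set of
# topic vectors, and the resulting mixture biases the generator's vocabulary logits.
import torch
import torch.nn as nn

class TopicMemory(nn.Module):
    def __init__(self, n_topics, topic_dim, hidden_dim, vocab_size):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_topics, topic_dim))
        self.query = nn.Linear(hidden_dim, topic_dim)
        self.to_vocab = nn.Linear(topic_dim, vocab_size)

    def forward(self, decoder_state, topic_mask):
        # topic_mask: (n_topics,) with 1 for topics selected for this equation, else 0
        q = self.query(decoder_state)                         # (batch, topic_dim)
        scores = q @ self.memory.T                            # (batch, n_topics)
        scores = scores.masked_fill(topic_mask == 0, float('-inf'))
        attn = torch.softmax(scores, dim=-1)                  # restricted topic distribution
        topic_context = attn @ self.memory                    # (batch, topic_dim)
        return self.to_vocab(topic_context)                   # bias added to decoder logits
```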
| 2,021 |
Computation and Language
|
A comparison of self-supervised speech representations as input features
for unsupervised acoustic word embeddings
|
Many speech processing tasks involve measuring the acoustic similarity
between speech segments. Acoustic word embeddings (AWE) allow for efficient
comparisons by mapping speech segments of arbitrary duration to
fixed-dimensional vectors. For zero-resource speech processing, where
unlabelled speech is the only available resource, some of the best AWE
approaches rely on weak top-down constraints in the form of automatically
discovered word-like segments. Rather than learning embeddings at the segment
level, another line of zero-resource research has looked at representation
learning at the short-time frame level. Recent approaches include
self-supervised predictive coding and correspondence autoencoder (CAE) models.
In this paper we consider whether these frame-level features are beneficial
when used as inputs for training an unsupervised AWE model. We compare
frame-level features from contrastive predictive coding (CPC), autoregressive
predictive coding and a CAE to conventional MFCCs. These are used as inputs to
a recurrent CAE-based AWE model. In a word discrimination task on English and
Xitsonga data, all three representation learning approaches outperform MFCCs,
with CPC consistently showing the biggest improvement. In cross-lingual
experiments we find that CPC features trained on English can also be
transferred to Xitsonga.
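A hedged sketch of the kind of recurrent autoencoder AWE model described here: a GRU encoder maps a variable-length sequence of frame-level features (MFCC, CPC, APC, or CAE outputs) to a fixed-dimensional vector, and a decoder reconstructs the paired segment; the dimensions and single-layer design are assumptions.

```python
# Recurrent autoencoder for acoustic word embeddings: variable-length frame
# sequences in, fixed-dimensional embeddings out, reconstruction as training signal.
import torch
import torch.nn as nn

class RecurrentAWE(nn.Module):
    def __init__(self, feat_dim, embed_dim=130, hidden=400):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.to_embed = nn.Linear(hidden, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden, batch_first=True)
        self.to_feat = nn.Linear(hidden, feat_dim)

    def embed(self, frames):                       # frames: (batch, time, feat_dim)
        _, h = self.encoder(frames)
        return self.to_embed(h[-1])                # fixed-dimensional word embedding

    def forward(self, frames, target_len):
        z = self.embed(frames)
        dec_in = z.unsqueeze(1).repeat(1, target_len, 1)   # condition every step on z
        out, _ = self.decoder(dec_in)
        return self.to_feat(out)                   # reconstruction of the paired segment
```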
| 2,020 |
Computation and Language
|
Towards localisation of keywords in speech using weak supervision
|
Developments in weakly supervised and self-supervised models could enable
speech technology in low-resource settings where full transcriptions are not
available. We consider whether keyword localisation is possible using two forms
of weak supervision where location information is not provided explicitly. In
the first, only the presence or absence of a word is indicated, i.e. a
bag-of-words (BoW) labelling. In the second, visual context is provided in the
form of an image paired with an unlabelled utterance; a model then needs to be
trained in a self-supervised fashion using the paired data. For keyword
localisation, we adapt a saliency-based method typically used in the vision
domain. We compare this to an existing technique that performs localisation as
a part of the network architecture. While the saliency-based method is more
flexible (it can be applied without architectural restrictions), we identify a
critical limitation when using it for keyword localisation. Of the two forms of
supervision, the visually trained model performs worse than the BoW-trained
model. We show qualitatively that the visually trained model sometimes locates
semantically related words, but this is not consistent. While our results show
that there is some signal allowing for localisation, they also call for other
localisation methods better matched to these forms of weak supervision.
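The saliency-based localisation idea can be sketched as follows: backpropagate the score of a keyword output unit to the input frames and take the frame with the largest gradient magnitude as the keyword's location; the interface of the BoW keyword detector is assumed.

```python
# Gradient-based saliency for keyword localisation over time.
import torch

def localise_keyword(model, frames, keyword_index):
    """frames: (1, time, feat_dim); model returns per-keyword detection scores (1, n_keywords)."""
    frames = frames.clone().requires_grad_(True)
    scores = model(frames)
    scores[0, keyword_index].backward()           # backprop the keyword's score to the input
    saliency = frames.grad.abs().sum(dim=-1)      # (1, time): per-frame relevance
    return saliency.argmax(dim=-1).item()         # most salient frame index as the location
```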
| 2,020 |
Computation and Language
|
Reasoning in Dialog: Improving Response Generation by Context Reading
Comprehension
|
In multi-turn dialog, utterances do not always take the full form of
sentences \cite{Carbonell1983DiscoursePA}, which naturally makes understanding
the dialog context more difficult. However, it is essential to fully grasp the
dialog context to generate a reasonable response. Hence, in this paper, we
propose to improve the response generation performance by examining the model's
ability to answer a reading comprehension question, where the question is
focused on the omitted information in the dialog. Enlightened by the multi-task
learning scheme, we propose a joint framework that unifies these two tasks,
sharing the same encoder to extract the common and task-invariant features with
different decoders to learn task-specific features. To better fuse
information from the question and the dialog history in the encoding part, we
propose to augment the Transformer architecture with a memory updater, which is
designed to selectively store and update the history dialog information so as
to support downstream tasks. For the experiment, we employ human annotators to
write and examine a large-scale dialog reading comprehension dataset. Extensive
experiments are conducted on this dataset, and the results show that the
proposed model brings substantial improvements over several strong baselines on
both tasks. In this way, we demonstrate that reasoning can indeed help better
response generation and vice versa. We release our large-scale dataset for
further research.
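One plausible realisation of the memory updater is a gated cross-attention block that selectively overwrites stored history with information from the current turn; the gating scheme and head count below are assumptions, not the paper's architecture.

```python
# Gated memory updater: attend from the memory to the current turn, then mix
# the attended update with the old memory through a learned gate.
import torch
import torch.nn as nn

class MemoryUpdater(nn.Module):
    def __init__(self, dim, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, memory, turn_states):
        # memory: (batch, m, dim); turn_states: (batch, t, dim) for the new turn
        update, _ = self.attn(memory, turn_states, turn_states)
        g = torch.sigmoid(self.gate(torch.cat([memory, update], dim=-1)))
        return g * update + (1 - g) * memory       # selectively store or keep history
```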
| 2,021 |
Computation and Language
|
The Style-Content Duality of Attractiveness: Learning to Write
Eye-Catching Headlines via Disentanglement
|
Eye-catching headlines function as the first device to trigger more clicks,
bringing a reciprocal benefit to producers and viewers. Producers can obtain
more traffic and profits, and readers can have access to outstanding articles.
When generating attractive headlines, it is important to not only capture the
attractive content but also follow an eye-catching written style. In this
paper, we propose a Disentanglement-based Attractive Headline Generator (DAHG)
that generates a headline capturing the attractive content while following an
attractive style. Concretely, we first devise a disentanglement module to
divide the style and content of an attractive prototype headline into latent
spaces, with two auxiliary constraints to ensure the two spaces are indeed
disentangled. The latent content information is then used to further polish the
document representation and help capture the salient part. Finally, the
generator takes the polished document as input to generate a headline under the
guidance of the attractive style. Extensive experiments on the public Kuaibao
dataset show that DAHG achieves state-of-the-art performance. Human evaluation
also demonstrates that DAHG triggers 22% more clicks than existing models.
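A simplified sketch of the disentanglement module: two small encoders split a prototype headline representation into style and content latent spaces; the auxiliary constraints that keep the spaces apart (e.g., adversarial or classification losses) are only noted in comments, and all names are illustrative.

```python
# Splitting a prototype headline representation into style and content latents.
import torch
import torch.nn as nn

class Disentangler(nn.Module):
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.style_enc = nn.Sequential(nn.Linear(input_dim, latent_dim), nn.Tanh())
        self.content_enc = nn.Sequential(nn.Linear(input_dim, latent_dim), nn.Tanh())

    def forward(self, headline_repr):
        style = self.style_enc(headline_repr)      # guides the generator's writing style
        content = self.content_enc(headline_repr)  # used to polish the document representation
        # Auxiliary constraints (not shown) would penalise style information
        # leaking into `content` and vice versa.
        return style, content
```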
| 2,020 |
Computation and Language
|
Parameter-Efficient Transfer Learning with Diff Pruning
|
While task-specific finetuning of pretrained networks has led to significant
empirical advances in NLP, the large size of networks makes finetuning
difficult to deploy in multi-task, memory-constrained settings. We propose diff
pruning as a simple approach to enable parameter-efficient transfer learning
within the pretrain-finetune framework. This approach views finetuning as
learning a task-specific diff vector that is applied on top of the pretrained
parameter vector, which remains fixed and is shared across different tasks. The
diff vector is adaptively pruned during training with a differentiable
approximation to the L0-norm penalty to encourage sparsity. Diff pruning
becomes parameter-efficient as the number of tasks increases, as it requires
storing only the nonzero positions and weights of the diff vector for each
task, while the cost of storing the shared pretrained model remains constant.
It further does not require access to all tasks during training, which makes it
attractive in settings where tasks arrive in stream or the set of tasks is
unknown. We find that models finetuned with diff pruning can match the
performance of fully finetuned baselines on the GLUE benchmark while only
modifying 0.5% of the pretrained model's parameters per task.
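A minimal sketch of diff pruning for a single parameter tensor: a task-specific diff vector is added to the frozen pretrained weights and gated by a hard-concrete relaxation of the L0 penalty so that most entries become exactly zero; the stretch limits and expected-L0 term follow the standard hard-concrete formulation and are assumptions about the details.

```python
# Diff pruning: task weights = pretrained weights + sparsely gated diff vector.
import torch
import torch.nn as nn

class DiffPrunedParam(nn.Module):
    def __init__(self, pretrained: torch.Tensor, beta=2/3):
        super().__init__()
        self.register_buffer("w0", pretrained)           # shared pretrained weights, kept fixed
        self.delta = nn.Parameter(torch.zeros_like(pretrained))      # task-specific diff
        self.log_alpha = nn.Parameter(torch.zeros_like(pretrained))  # gate parameters
        self.beta = beta

    def forward(self):
        if self.training:                                 # stochastic hard-concrete gate
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
        else:
            s = torch.sigmoid(self.log_alpha)
        z = (s * 1.2 - 0.1).clamp(0.0, 1.0)               # stretch, then clip to reach exact 0/1
        return self.w0 + z * self.delta                   # effective task-specific parameters

    def expected_l0(self):
        # Differentiable approximation of the number of nonzero diff entries.
        return torch.sigmoid(self.log_alpha - self.beta * torch.log(torch.tensor(0.1 / 1.1))).sum()
```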
| 2,021 |
Computation and Language
|
Machine Learning to study the impact of gender-based violence in the
news media
|
While it remains a taboo topic, gender-based violence (GBV) undermines the
health, dignity, security and autonomy of its victims. Many factors that
generate or maintain this kind of violence have been studied; however, the
influence of the media is still uncertain. Here, we use Machine Learning tools
to extrapolate the effect of the news on GBV. By feeding neural networks with
news, the topic information associated with each article can be recovered. Our
findings show a relationship between GBV news and public awareness, the effect
of mediatic GBV cases, and the intrinsic thematic relationship of GBV news.
Because the neural model used can be easily adjusted, our approach can also be
extended to other media sources or topics.
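As a heavily simplified, non-neural stand-in for recovering per-article topic information (the paper itself uses neural networks), the sketch below fits a standard LDA topic model and returns per-article topic proportions that could then be tracked over time.

```python
# Recover per-article topic proportions from raw news text with a standard LDA model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def article_topics(articles, n_topics=10):
    """articles: list of raw news texts -> (doc-topic matrix, vectorizer, topic model)."""
    vec = CountVectorizer(max_features=20000, stop_words="english")
    counts = vec.fit_transform(articles)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topic = lda.fit_transform(counts)         # per-article topic proportions
    return doc_topic, vec, lda
```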
| 2,020 |
Computation and Language
|