Titles (string, length 6-220) | Abstracts (string, length 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (stringclasses, 1 value)
---|---|---|---
SmartTriage: A system for personalized patient data capture,
documentation generation, and decision support
|
Symptom checkers have emerged as an important tool for collecting symptoms
and diagnosing patients, minimizing the involvement of clinical personnel. We
developed a machine-learning-backed system, SmartTriage, which goes beyond
conventional symptom checking through a tight bi-directional integration with
the electronic medical record (EMR). Conditioned on EMR-derived patient
history, our system identifies the patient's chief complaint from a free-text
entry and then asks a series of discrete questions to obtain relevant
symptomatology. The patient-specific data are used to predict detailed
ICD-10-CM codes as well as medication, laboratory, and imaging orders. Patient
responses and clinical decision support (CDS) predictions are then inserted
back into the EMR. To train the machine learning components of SmartTriage, we
employed novel data sets of over 25 million primary care encounters and 1
million patient free-text reason-for-visit entries. These data sets were used
to construct: (1) a long short-term memory (LSTM) based patient history
representation, (2) a fine-tuned transformer model for chief complaint
extraction, (3) a random forest model for question sequencing, and (4) a
feed-forward network for CDS predictions. In total, our system supports 337
patient chief complaints, which together make up $>90\%$ of all primary care
encounters at Kaiser Permanente.
| 2021 |
Computation and Language
|
Explainable Automated Fact-Checking for Public Health Claims
|
Fact-checking is the task of verifying the veracity of claims by assessing
their assertions against credible evidence. The vast majority of fact-checking
studies focus exclusively on political claims. Very little research explores
fact-checking for other topics, specifically subject matters for which
expertise is required. We present the first study of explainable fact-checking
for claims which require specific expertise. For our case study we choose the
setting of public health. To support this case study we construct a new dataset
PUBHEALTH of 11.8K claims accompanied by journalist-crafted, gold-standard
explanations (i.e., judgments) to support the fact-check labels for claims. We
explore two tasks: veracity prediction and explanation generation. We also
define and evaluate, with humans and computationally, three coherence
properties of explanation quality. Our results indicate that, by training on
in-domain data, gains can be made in explainable, automated fact-checking for
claims which require specific expertise.
| 2020 |
Computation and Language
|
ColloQL: Robust Cross-Domain Text-to-SQL Over Search Queries
|
Translating natural language utterances to executable queries is a helpful
technique in making the vast amount of data stored in relational databases
accessible to a wider range of non-tech-savvy end users. Prior work in this
area has largely focused on textual input that is linguistically correct and
semantically unambiguous. However, real-world user queries are often succinct,
colloquial, and noisy, resembling the input of a search engine. In this work,
we introduce data augmentation techniques and a sampling-based content-aware
BERT model (ColloQL) to achieve robust text-to-SQL modeling over natural
language search (NLS) questions. Due to the lack of evaluation data, we curate
a new dataset of NLS questions and demonstrate the efficacy of our approach.
ColloQL's superior performance extends to well-formed text, achieving 84.9%
(logical) and 90.7% (execution) accuracy on the WikiSQL dataset, making it, to
the best of our knowledge, the highest performing model that does not use
execution guided decoding.
| 2020 |
Computation and Language
|
Enhancing Keyphrase Extraction from Microblogs using Human Reading Time
|
The premise of manual keyphrase annotation is to read the corresponding
content of an annotated object. Intuitively, when we read, more important words
tend to receive longer reading times. Hence, by leveraging human reading time, we
can find the salient words in the corresponding content. However, previous
studies on keyphrase extraction ignore human reading features. In this article,
we aim to leverage human reading time to extract keyphrases from microblog
posts. There are two main tasks in this study. One is to determine how to
measure the time spent by a human on reading a word. We use eye fixation
durations extracted from an open source eye-tracking corpus (OSEC). Moreover,
we propose strategies to make eye fixation duration more effective for keyphrase
extraction. The other task is to determine how to integrate human reading time
into keyphrase extraction models. We propose two novel neural network models.
The first is a model in which the human reading time is used as the ground
truth of the attention mechanism. In the second model, we use human reading
time as the external feature. Quantitative and qualitative experiments show
that our proposed models yield better performance than the baseline models on
two microblog datasets.
| 2021 |
Computation and Language
|
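The first model described in the keyphrase-extraction abstract above uses human reading time as the ground truth of the attention mechanism. A minimal, hypothetical sketch of that general idea (not the authors' architecture; the tensor shapes and the KL-divergence objective are assumptions):

```python
import torch
import torch.nn.functional as F

def reading_time_attention_loss(word_scores, fixation_durations, mask):
    """Encourage model attention to match normalized human fixation durations.

    word_scores: (batch, seq_len) unnormalized attention logits from the model
    fixation_durations: (batch, seq_len) eye-fixation durations (e.g., in ms)
    mask: (batch, seq_len) 1 for real tokens, 0 for padding
    """
    # Model attention distribution over words
    attn = torch.softmax(word_scores.masked_fill(mask == 0, float("-inf")), dim=-1)
    # "Ground-truth" attention: fixation durations normalized to a distribution
    durations = fixation_durations * mask
    target = durations / durations.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    # KL divergence between model attention and the reading-time distribution
    return F.kl_div(attn.clamp(min=1e-8).log(), target, reduction="batchmean")
```

In practice this loss would be added to the usual keyphrase tagging loss with some weighting.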
Cue Me In: Content-Inducing Approaches to Interactive Story Generation
|
Automatically generating stories is a challenging problem that requires
producing causally related and logical sequences of events about a topic.
Previous approaches in this domain have focused largely on one-shot generation,
where a language model outputs a complete story based on limited initial input
from a user. Here, we instead focus on the task of interactive story
generation, where the user provides the model mid-level sentence abstractions
in the form of cue phrases during the generation process. This provides an
interface for human users to guide the story generation. We present two
content-inducing approaches to effectively incorporate this additional
information. Experimental results from both automatic and human evaluations
show that these methods produce more topically coherent and personalized
stories compared to baseline methods.
| 2020 |
Computation and Language
|
Improving Dialog Systems for Negotiation with Personality Modeling
|
In this paper, we explore the ability to model and infer personality types of
opponents, predict their responses, and use this information to adapt a dialog
agent's high-level strategy in negotiation tasks. Inspired by the idea of
incorporating a theory of mind (ToM) into machines, we introduce a
probabilistic formulation to encapsulate the opponent's personality type during
both learning and inference. We test our approach on the CraigslistBargain
dataset and show that our method using ToM inference achieves a 20% higher
dialog agreement rate compared to baselines on a mixed population of opponents.
We also find that our model displays diverse negotiation behavior with
different types of opponents.
| 2021 |
Computation and Language
|
Word Shape Matters: Robust Machine Translation with Visual Embedding
|
Neural machine translation has achieved remarkable empirical performance over
standard benchmark datasets, yet recent evidence suggests that the models can
still fail easily when dealing with substandard inputs such as misspelled words. To
overcome this issue, we introduce a new encoding heuristic for the input symbols
for character-level NLP models: it encodes the shape of each character through
the images depicting the letters when printed. We name this new strategy visual
embedding, and we expect it to improve the robustness of NLP models because
humans also process text visually through printed letters, rather than through
machine one-hot vectors. Empirically, our method improves models' robustness
against substandard inputs, even in the test scenario where the models are
tested with the noises that are beyond what is available during the training
phase.
| 2020 |
Computation and Language
|
Elaborative Simplification: Content Addition and Explanation Generation
in Text Simplification
|
Much of modern-day text simplification research focuses on sentence-level
simplification, transforming original, more complex sentences into simplified
versions. However, adding content can often be useful when difficult concepts
and reasoning need to be explained. In this work, we present the first
data-driven study of content addition in text simplification, which we call
elaborative simplification. We introduce a new annotated dataset of 1.3K
instances of elaborative simplification in the Newsela corpus, and analyze how
entities, ideas, and concepts are elaborated through the lens of contextual
specificity. We establish baselines for elaboration generation using
large-scale pre-trained language models, and demonstrate that considering
contextual specificity during generation can improve performance. Our results
illustrate the complexities of elaborative simplification, suggesting many
interesting directions for future work.
| 2021 |
Computation and Language
|
Looking for Clues of Language in Multilingual BERT to Improve
Cross-lingual Generalization
|
Token embeddings in multilingual BERT (m-BERT) contain both language and
semantic information. We find that the representation of a language can be
obtained by simply averaging the embeddings of the tokens of the language.
Given this language representation, we control the output languages of
multilingual BERT by manipulating the token embeddings, thus achieving
unsupervised token translation. We further propose a computationally cheap but
effective approach to improve the cross-lingual ability of m-BERT based on this
observation.
| 2021 |
Computation and Language
|
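As an illustration of the observation in the m-BERT abstract above, a language vector can be formed by averaging that language's token embeddings and used to shift representations toward another language. This is a rough, hypothetical sketch; the token-ID lists and the simple mean-difference shift are assumptions, not the authors' exact procedure:

```python
import numpy as np

def language_vector(token_embeddings, token_ids_for_language):
    """Approximate a language's representation as the mean of its tokens' embeddings."""
    return token_embeddings[token_ids_for_language].mean(axis=0)

def shift_language(token_embeddings, src_token_ids, tgt_token_ids):
    """Move source-language token embeddings toward the target language.

    Returns a copy of the embedding matrix in which source tokens are shifted by
    the difference of the two language mean vectors (a crude form of the
    manipulation described in the abstract).
    """
    delta = language_vector(token_embeddings, tgt_token_ids) - \
            language_vector(token_embeddings, src_token_ids)
    shifted = token_embeddings.copy()
    shifted[src_token_ids] += delta
    return shifted
```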
Improving Factual Completeness and Consistency of Image-to-Text
Radiology Report Generation
|
Neural image-to-text radiology report generation systems offer the potential
to improve radiology reporting by reducing the repetitive process of report
drafting and identifying possible medical errors. However, existing report
generation systems, despite achieving high performance on natural language
generation metrics such as CIDEr or BLEU, still suffer from incomplete and
inconsistent generations. Here we introduce two new simple rewards to encourage
the generation of factually complete and consistent radiology reports: one that
encourages the system to generate radiology domain entities consistent with the
reference, and one that uses natural language inference to encourage these
entities to be described in inferentially consistent ways. We combine these
with the novel use of an existing semantic equivalence metric (BERTScore). We
further propose a report generation system that optimizes these rewards via
reinforcement learning. On two open radiology report datasets, our system
substantially improved the F1 score of clinical information extraction by +22.1
(Delta +63.9%). We further show via a human evaluation and
a qualitative analysis that our system leads to generations that are more
factually complete and consistent compared to the baselines.
| 2021 |
Computation and Language
|
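The radiology-report abstract above combines several reward signals under reinforcement learning. A hedged sketch of how such a composite reward might be assembled; entity_f1, nli_consistency, and bertscore_f1 are hypothetical placeholder callables (e.g., a domain entity matcher, an NLI model, and BERTScore), not the paper's exact components:

```python
def report_reward(generated, reference,
                  entity_f1, nli_consistency, bertscore_f1,
                  w_ent=1.0, w_nli=1.0, w_sem=1.0):
    """Combine factual-completeness and consistency signals into one RL reward."""
    r_ent = entity_f1(generated, reference)        # overlap of domain entities
    r_nli = nli_consistency(generated, reference)  # entailment-based consistency
    r_sem = bertscore_f1(generated, reference)     # semantic equivalence score
    return w_ent * r_ent + w_nli * r_nli + w_sem * r_sem
```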
Incorporating Commonsense Knowledge into Abstractive Dialogue
Summarization via Heterogeneous Graph Networks
|
Abstractive dialogue summarization is the task of capturing the highlights of
a dialogue and rewriting them into a concise version. In this paper, we present
a novel multi-speaker dialogue summarizer to demonstrate how large-scale
commonsense knowledge can facilitate dialogue understanding and summary
generation. In detail, we consider utterance and commonsense knowledge as two
different types of data and design a Dialogue Heterogeneous Graph Network
(D-HGN) to model both types of information. Meanwhile, we also add speakers as
heterogeneous nodes to facilitate information flow. Experimental results on the
SAMSum dataset show that our model can outperform various methods. We also
conduct zero-shot setting experiments on the Argumentative Dialogue Summary
Corpus; the results show that our model generalizes better to the new domain.
| 2020 |
Computation and Language
|
Fluent and Low-latency Simultaneous Speech-to-Speech Translation with
Self-adaptive Training
|
Simultaneous speech-to-speech translation is widely useful but extremely
challenging, since it needs to generate target-language speech concurrently
with the source-language speech, with only a few seconds of delay. In addition, it
needs to continuously translate a stream of sentences, but all recent solutions
merely focus on the single-sentence scenario. As a result, current approaches
accumulate latencies progressively when the speaker talks faster, and introduce
unnatural pauses when the speaker talks slower. To overcome these issues, we
propose Self-Adaptive Translation (SAT) which flexibly adjusts the length of
translations to accommodate different source speech rates. At similar levels of
translation quality (as measured by BLEU), our method generates more fluent
target speech (as measured by the naturalness metric MOS) with substantially
lower latency than the baseline, in both Zh <-> En directions.
| 2020 |
Computation and Language
|
Neural Language Modeling for Contextualized Temporal Graph Generation
|
This paper presents the first study on using large-scale pre-trained language
models for automated generation of an event-level temporal graph for a
document. Despite the huge success of neural pre-training methods in NLP tasks,
their potential for temporal reasoning over event graphs has not been
sufficiently explored. Part of the reason is the difficulty in obtaining large
training corpora with human-annotated events and temporal links. We address
this challenge by using existing IE/NLP tools to automatically generate a large
quantity (89,000) of system-produced document-graph pairs, and propose a novel
formulation of the contextualized graph generation problem as a
sequence-to-sequence mapping task. These strategies enable us to leverage and
fine-tune pre-trained language models on the system-induced training data for
the graph generation task. Our experiments show that our approach is highly
effective in generating structurally and semantically valid graphs. Further,
evaluation on a challenging hand-labeled, out-domain corpus shows that our
method outperforms the closest existing method by a large margin on several
metrics. Code and pre-trained models are available at
https://github.com/madaan/temporal-graph-gen.
| 2021 |
Computation and Language
|
JUNLP@Dravidian-CodeMix-FIRE2020: Sentiment Classification of Code-Mixed
Tweets using Bi-Directional RNN and Language Tags
|
Sentiment analysis has been an active area of research in the past two
decades and recently, with the advent of social media, there has been an
increasing demand for sentiment analysis on social media texts. Since the
social media texts are not in one language and are largely code-mixed in
nature, the traditional sentiment classification models fail to produce
acceptable results. This paper tries to solve this very research problem and
uses bi-directional LSTMs along with language tagging to facilitate sentiment
tagging of code-mixed Tamil texts that have been extracted from social media.
The presented algorithm, when evaluated on the test data, garnered precision,
recall, and F1 scores of 0.59, 0.66, and 0.58 respectively.
| 2020 |
Computation and Language
|
Local Knowledge Powered Conversational Agents
|
State-of-the-art conversational agents have advanced significantly in
conjunction with the use of large transformer-based language models. However,
even with these advancements, conversational agents still lack the ability to
produce responses that are informative and coherent with the local context. In
this work, we propose a dialog framework that incorporates both local knowledge
as well as users' past dialogues to generate high quality conversations. We
introduce an approach to build a dataset based on Reddit conversations, where
outbound URL links are widely available in the conversations and the
hyperlinked documents can be naturally included as local external knowledge.
Using our framework and dataset, we demonstrate that incorporating local
knowledge can largely improve informativeness, coherency and realisticness
measures using human evaluations. In particular, our approach consistently
outperforms the state-of-the-art conversational model on the Reddit dataset
across all three measures. We also find that scaling the size of our models
from 117M to 8.3B parameters yields consistent improvement of validation
perplexity as well as human evaluated metrics. Our model with 8.3B parameters
can generate human-like responses as rated by various human evaluations in a
single-turn dialog setting.
| 2020 |
Computation and Language
|
Individual corpora predict fast memory retrieval during reading
|
The corpus, from which a predictive language model is trained, can be
considered the experience of a semantic system. We recorded everyday reading of
two participants for two months on a tablet, generating individual corpus
samples of 300/500K tokens. Then we trained word2vec models from individual
corpora and a 70 million-sentence newspaper corpus to obtain individual and
norm-based long-term memory structure. To test whether individual corpora can
make better predictions for a cognitive task of long-term memory retrieval, we
generated stimulus materials consisting of 134 sentences with uncorrelated
individual and norm-based word probabilities. In the subsequent eye-tracking
study 1-2 months later, our regression analyses revealed that individual, but
not norm-corpus-based, word probabilities can account for first-fixation
duration and first-pass gaze duration. Word length additionally affected gaze
duration and total viewing duration. The results suggest that corpora
representative of an individual's long-term memory structure can better explain
reading performance than a norm corpus, and that recently acquired information
is lexically accessed rapidly.
| 2020 |
Computation and Language
|
Simulated Chats for Building Dialog Systems: Learning to Generate
Conversations from Instructions
|
Popular dialog datasets such as MultiWOZ are created by providing crowd
workers an instruction, expressed in natural language, that describes the task
to be accomplished. Crowd workers play the role of a user and an agent to
generate dialogs to accomplish tasks involving booking restaurant tables,
calling a taxi, etc. In this paper, we present a data creation strategy that
uses the pre-trained language model, GPT2, to simulate the interaction between
crowd workers by creating a user bot and an agent bot. We train the simulators
using a smaller percentage of actual crowd-generated conversations and their
corresponding instructions. We demonstrate that by using the simulated data, we
achieve significant improvements in low-resource settings on two publicly
available datasets - the MultiWOZ dataset and the Persona chat dataset.
| 2021 |
Computation and Language
|
Supertagging-based Parsing with Linear Context-free Rewriting Systems
|
We present the first supertagging-based parser for LCFRS. It utilizes neural
classifiers and tremendously outperforms previous LCFRS-based parsers in both
accuracy and parsing speed. Moreover, our results keep up with the best
(general) discontinuous parsers, particularly the scores for discontinuous
constituents are excellent. The heart of our approach is an efficient
lexicalization procedure which induces a lexical LCFRS from any discontinuous
treebank. It is an adaptation of previous work by M\"orbitz and Ruprecht
(2020). We also describe a modification to usual chart-based LCFRS parsing that
accounts for supertagging and introduce a procedure for the transformation of
lexical LCFRS derivations into equivalent parse trees of the original treebank.
Our approach is implemented and evaluated on the English Discontinuous Penn
Treebank and the German corpora NeGra and Tiger.
| 2020 |
Computation and Language
|
Complete Multilingual Neural Machine Translation
|
Multilingual Neural Machine Translation (MNMT) models are commonly trained on
a joint set of bilingual corpora which is acutely English-centric (i.e. English
either as the source or target language). While direct data between two
languages that are non-English is explicitly available at times, its use is not
common. In this paper, we first take a step back and look at the commonly used
bilingual corpora (WMT), and resurface the existence and importance of implicit
structure that exists in it: multi-way alignment across examples (the same
sentence in more than two languages). We set out to study the use of multi-way
aligned examples to enrich the original English-centric parallel corpora. We
reintroduce this direct parallel data from multi-way aligned corpora between
all source and target languages. By doing so, the English-centric graph expands
into a complete graph, every language pair being connected. We call MNMT with
such a connectivity pattern complete Multilingual Neural Machine Translation
(cMNMT) and demonstrate its utility and efficacy with a series of experiments
and analysis. In combination with a novel training data sampling strategy that
is conditioned on the target language only, cMNMT yields competitive
translation quality for all language pairs. We further study the size effect of
multi-way aligned data, its transfer learning capabilities and how it eases
adding a new language in MNMT. Finally, we stress test cMNMT at scale and
demonstrate that we can train a cMNMT model with up to 111*112=12,432 language
pairs that provides competitive translation quality for all language pairs.
| 2020 |
Computation and Language
|
Human-Paraphrased References Improve Neural Machine Translation
|
Automatic evaluation comparing candidate translations to human-generated
paraphrases of reference translations has recently been proposed by Freitag et
al. When used in place of original references, the paraphrased versions produce
metric scores that correlate better with human judgment. This effect holds for
a variety of different automatic metrics, and tends to favor natural
formulations over more literal (translationese) ones. In this paper we compare
the results of performing end-to-end system development using standard and
paraphrased references. With state-of-the-art English-German NMT components, we
show that tuning to paraphrased references produces a system that is
significantly better according to human judgment, but 5 BLEU points worse when
tested on standard references. Our work confirms the finding that paraphrased
references yield metric scores that correlate better with human judgment, and
demonstrates for the first time that using these scores for system development
can lead to significant improvements.
| 2020 |
Computation and Language
|
Text Classification of Manifestos and COVID-19 Press Briefings using
BERT and Convolutional Neural Networks
|
We build a sentence-level political discourse classifier using existing human
expert annotated corpora of political manifestos from the Manifestos Project
(Volkens et al., 2020a) and applying them to a corpus of COVID-19 Press Briefings
(Chatsiou, 2020). We use manually annotated political manifestos as training
data to train a local topic Convolutional Neural Network (CNN) classifier; we then
apply it to the COVID-19 Press Briefings Corpus to automatically classify
sentences in the test corpus. We report on a series of experiments with CNNs
trained on top of pre-trained embeddings for sentence-level classification
tasks. We show that a CNN combined with transformers like BERT outperforms CNNs
combined with other embeddings (Word2Vec, GloVe, ELMo) and that it is possible
to use a pre-trained classifier to conduct automatic classification on
different political texts without additional training.
| 2020 |
Computation and Language
|
Bi-directional Cognitive Thinking Network for Machine Reading
Comprehension
|
We propose a novel Bi-directional Cognitive Knowledge Framework (BCKF) for
reading comprehension from the perspective of complementary learning systems
theory. It aims to simulate two ways of thinking in the brain to answer
questions, including reverse thinking and inertial thinking. To validate the
effectiveness of our framework, we design a corresponding Bi-directional
Cognitive Thinking Network (BCTN) to encode the passage and generate a question
(answer) given an answer (question) and decouple the bi-directional knowledge.
The model has the ability to reason about questions in reverse, which can assist
inertial thinking in generating more accurate answers. Competitive improvement is
observed on the DuReader dataset, confirming our hypothesis that bi-directional
knowledge helps the QA task. The novel framework shows an interesting
perspective on machine reading comprehension and cognitive science.
| 2020 |
Computation and Language
|
Topic-Guided Abstractive Text Summarization: a Joint Learning Approach
|
We introduce a new approach for abstractive text summarization, Topic-Guided
Abstractive Summarization, which calibrates long-range dependencies from
topic-level features with globally salient content. The idea is to incorporate
neural topic modeling with a Transformer-based sequence-to-sequence (seq2seq)
model in a joint learning framework. This design can learn and preserve the
global semantics of the document, which can provide additional contextual
guidance for capturing important ideas of the document, thereby enhancing the
generation of summaries. We conduct extensive experiments on two datasets, and the
results show that our proposed model outperforms many extractive and
abstractive systems in terms of both ROUGE measurements and human evaluation.
Our code is available at: https://github.com/chz816/tas.
| 2021 |
Computation and Language
|
CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for
Conversational Recommendation
|
Conversational Recommender Systems (CRS), which explore user preferences through
conversational interactions in order to make appropriate recommendations, have
attracted growing interest. However, there is still a lack of
ability in existing CRS to (1) traverse multiple reasoning paths over
background knowledge to introduce relevant items and attributes, and (2)
arrange selected entities appropriately under current system intents to control
response generation. To address these issues, we propose CR-Walker in this
paper, a model that performs tree-structured reasoning on a knowledge graph,
and generates informative dialog acts to guide language generation. The unique
scheme of tree-structured reasoning views the traversed entity at each hop as
part of dialog acts to facilitate language generation, which links how entities
are selected and expressed. Automatic and human evaluations show that CR-Walker
can arrive at more accurate recommendations and generate more informative and
engaging responses.
| 2021 |
Computation and Language
|
Bootleg: Chasing the Tail with Self-Supervised Named Entity
Disambiguation
|
A challenge for named entity disambiguation (NED), the task of mapping
textual mentions to entities in a knowledge base, is how to disambiguate
entities that appear rarely in the training data, termed tail entities. Humans
use subtle reasoning patterns based on knowledge of entity facts, relations,
and types to disambiguate unfamiliar entities. Inspired by these patterns, we
introduce Bootleg, a self-supervised NED system that is explicitly grounded in
reasoning patterns for disambiguation. We define core reasoning patterns for
disambiguation, create a learning procedure to encourage the self-supervised
model to learn the patterns, and show how to use weak supervision to enhance
the signals in the training data. Encoding the reasoning patterns in a simple
Transformer architecture, Bootleg meets or exceeds state-of-the-art on three
NED benchmarks. We further show that the learned representations from Bootleg
successfully transfer to other non-disambiguation tasks that require
entity-based knowledge: we set a new state-of-the-art in the popular TACRED
relation extraction task by 1.0 F1 points and demonstrate up to 8% performance
lift in highly optimized production search and assistant tasks at a major
technology company.
| 2020 |
Computation and Language
|
UmlsBERT: Clinical Domain Knowledge Augmentation of Contextual
Embeddings Using the Unified Medical Language System Metathesaurus
|
Contextual word embedding models, such as BioBERT and Bio_ClinicalBERT, have
achieved state-of-the-art results in biomedical natural language processing
tasks by focusing their pre-training process on domain-specific corpora.
However, such models do not take into consideration expert domain knowledge.
In this work, we introduced UmlsBERT, a contextual embedding model that
integrates domain knowledge during the pre-training process via a novel
knowledge augmentation strategy. More specifically, the augmentation on
UmlsBERT with the Unified Medical Language System (UMLS) Metathesaurus was
performed in two ways: i) connecting words that have the same underlying
`concept' in UMLS, and ii) leveraging semantic group knowledge in UMLS to
create clinically meaningful input embeddings. By applying these two
strategies, UmlsBERT can encode clinical domain knowledge into word embeddings
and outperform existing domain-specific models on common named-entity
recognition (NER) and clinical natural language inference tasks.
| 2021 |
Computation and Language
|
CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary
Representations From Characters
|
Due to the compelling improvements brought by BERT, many recent
representation models adopted the Transformer architecture as their main
building block, consequently inheriting the wordpiece tokenization system
despite it not being intrinsically linked to the notion of Transformers. While
this system is thought to achieve a good balance between the flexibility of
characters and the efficiency of full words, using predefined wordpiece
vocabularies from the general domain is not always suitable, especially when
building models for specialized domains (e.g., the medical domain). Moreover,
adopting a wordpiece tokenization shifts the focus from the word level to the
subword level, making the models conceptually more complex and arguably less
convenient in practice. For these reasons, we propose CharacterBERT, a new
variant of BERT that drops the wordpiece system altogether and uses a
Character-CNN module instead to represent entire words by consulting their
characters. We show that this new model improves the performance of BERT on a
variety of medical domain tasks while at the same time producing robust,
word-level and open-vocabulary representations.
| 2020 |
Computation and Language
|
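CharacterBERT, described above, replaces wordpieces with a Character-CNN that builds one vector per whole word from its characters. A small PyTorch sketch in that spirit; the vocabulary size, kernel sizes, and dimensions are assumptions, and the actual module is more elaborate:

```python
import torch
import torch.nn as nn

class CharacterCNNWordEncoder(nn.Module):
    """Build a word representation from its characters with a small CNN."""

    def __init__(self, num_chars=262, char_dim=16, out_dim=768,
                 kernel_sizes=(1, 2, 3, 4, 5), channels=64):
        super().__init__()
        self.char_emb = nn.Embedding(num_chars, char_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(char_dim, channels, k) for k in kernel_sizes])
        self.proj = nn.Linear(channels * len(kernel_sizes), out_dim)

    def forward(self, char_ids):                  # (batch, num_words, max_chars)
        b, w, c = char_ids.shape
        x = self.char_emb(char_ids.reshape(b * w, c)).transpose(1, 2)  # (bw, dim, chars)
        # Convolutions over character positions, max-pooled over time
        feats = [conv(x).max(dim=-1).values for conv in self.convs]
        word_vecs = self.proj(torch.cat(feats, dim=-1))
        return word_vecs.reshape(b, w, -1)        # one vector per whole word
```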
ConjNLI: Natural Language Inference Over Conjunctive Sentences
|
Reasoning about conjuncts in conjunctive sentences is important for a deeper
understanding of conjunctions in English and also how their usages and
semantics differ from conjunctive and disjunctive boolean logic. Existing NLI
stress tests do not consider non-boolean usages of conjunctions and use
templates for testing such model knowledge. Hence, we introduce ConjNLI, a
challenge stress-test for natural language inference over conjunctive
sentences, where the premise differs from the hypothesis by conjuncts removed,
added, or replaced. These sentences contain single and multiple instances of
coordinating conjunctions ("and", "or", "but", "nor") with quantifiers,
negations, and require diverse boolean and non-boolean inferences over
conjuncts. We find that large-scale pre-trained language models like RoBERTa do
not understand conjunctive semantics well and resort to shallow heuristics to
make inferences over such sentences. As some initial solutions, we first
present an iterative adversarial fine-tuning method that uses synthetically
created training data based on boolean and non-boolean heuristics. We also
propose a direct model advancement by making RoBERTa aware of predicate
semantic roles. While we observe some performance gains, ConjNLI is still
challenging for current methods, thus encouraging interesting future work for
better understanding of conjunctions. Our data and code are publicly available
at: https://github.com/swarnaHub/ConjNLI
| 2020 |
Computation and Language
|
Open Question Answering over Tables and Text
|
In open question answering (QA), the answer to a question is produced by
retrieving and then analyzing documents that might contain answers to the
question. Most open QA systems have considered only retrieving information from
unstructured text. Here we consider for the first time open QA over both
tabular and textual data and present a new large-scale dataset Open
Table-and-Text Question Answering (OTT-QA) to evaluate performance on this
task. Most questions in OTT-QA require multi-hop inference across tabular data
and unstructured text, and the evidence required to answer a question can be
distributed in different ways over these two types of input, making evidence
retrieval challenging -- our baseline model using an iterative retriever and
BERT-based reader achieves an exact match score of less than 10%. We then propose
two novel techniques to address the challenge of retrieving and aggregating
evidence for OTT-QA. The first technique is to use "early fusion" to group
multiple highly relevant tabular and textual units into a fused block, which
provides more context for the retriever to search for. The second technique is
to use a cross-block reader to model the cross-dependency between multiple
pieces of retrieved evidence with global-local sparse attention. Combining these two
techniques improves the score significantly, to above 27%.
| 2021 |
Computation and Language
|
Modeling Content and Context with Deep Relational Learning
|
Building models for realistic natural language tasks requires dealing with
long texts and accounting for complicated structural dependencies.
Neural-symbolic representations have emerged as a way to combine the reasoning
capabilities of symbolic methods with the expressiveness of neural networks.
However, most of the existing frameworks for combining neural and symbolic
representations have been designed for classic relational learning tasks that
work over a universe of symbolic entities and relations. In this paper, we
present DRaiL, an open-source declarative framework for specifying deep
relational models, designed to support a variety of NLP scenarios. Our
framework supports easy integration with expressive language encoders, and
provides an interface to study the interactions between representation,
inference and learning.
| 2021 |
Computation and Language
|
Comparison of Interactive Knowledge Base Spelling Correction Models for
Low-Resource Languages
|
Spelling normalization for low-resource languages is a challenging task
because the patterns are hard to predict and large corpora are usually required
to collect enough examples. This work shows a comparison of a neural model and
character language models with varying amounts of target language data. Our
usage scenario is interactive correction with nearly zero amounts of training
examples, improving models as more data is collected, for example within a chat
app. Such models are designed to be incrementally improved as feedback is given
from users. In this work, we design a knowledge-base and prediction model
embedded system for spelling correction in low-resource languages. Experimental
results on multiple languages show that the model could become effective with a
small amount of data. We perform experiments on both natural and synthetic
data, as well as on data from two endangered languages (Ainu and Griko). Last,
we built a prototype system that was used for a small case study on Hinglish,
which further demonstrated the suitability of our approach in real world
scenarios.
| 2020 |
Computation and Language
|
Optimal Subarchitecture Extraction For BERT
|
We extract an optimal subset of architectural parameters for the BERT
architecture from Devlin et al. (2018) by applying recent breakthroughs in
algorithms for neural architecture search. This optimal subset, which we refer
to as "Bort", is demonstrably smaller, having an effective (that is, not
counting the embedding layer) size of $5.5\%$ the original BERT-large
architecture, and $16\%$ of the net size. Bort is also able to be pretrained in
$288$ GPU hours, which is $1.2\%$ of the time required to pretrain the
highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et
al., 2019), and about $33\%$ of that of the world-record, in GPU hours,
required to train BERT-large on the same hardware. It is also $7.9$x faster on
a CPU, as well as being better performing than other compressed variants of the
architecture, and some of the non-compressed variants: it obtains performance
improvements of between $0.3\%$ and $31\%$, absolute, with respect to
BERT-large, on multiple public natural language understanding (NLU) benchmarks.
| 2020 |
Computation and Language
|
Natural Language Inference with Mixed Effects
|
There is growing evidence that the prevalence of disagreement in the raw
annotations used to construct natural language inference datasets makes the
common practice of aggregating those annotations to a single label problematic.
We propose a generic method that allows one to skip the aggregation step and
train on the raw annotations directly without subjecting the model to unwanted
noise that can arise from annotator response biases. We demonstrate that this
method, which generalizes the notion of a \textit{mixed effects model} by
incorporating \textit{annotator random effects} into any existing neural model,
improves performance over models that do not incorporate such effects.
| 2020 |
Computation and Language
|
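The mixed-effects abstract above incorporates annotator random effects into a neural model so that it can be trained on raw, unaggregated annotations. A minimal sketch of one way to do that, assuming a per-annotator bias added to the classifier logits (the class name, sizes, and parameterization are illustrative assumptions):

```python
import torch
import torch.nn as nn

class AnnotatorRandomEffects(nn.Module):
    """Wrap any NLI encoder with per-annotator random effects."""

    def __init__(self, base_model, hidden_size, num_annotators, num_labels=3):
        super().__init__()
        self.base_model = base_model                       # maps inputs -> (batch, hidden)
        self.fixed_head = nn.Linear(hidden_size, num_labels)
        # One bias vector per annotator, initialized to zero; weight decay can
        # shrink it toward zero, loosely mimicking a random effect.
        self.annotator_bias = nn.Embedding(num_annotators, num_labels)
        nn.init.zeros_(self.annotator_bias.weight)

    def forward(self, inputs, annotator_ids):
        hidden = self.base_model(inputs)
        logits = self.fixed_head(hidden) + self.annotator_bias(annotator_ids)
        return logits  # train with cross-entropy on raw (unaggregated) annotations
```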
Better Highlighting: Creating Sub-Sentence Summary Highlights
|
Amongst the best means to summarize is highlighting. In this paper, we aim to
generate summary highlights to be overlaid on the original documents to make it
easier for readers to sift through a large amount of text. The method allows
summaries to be understood in context to prevent a summarizer from distorting
the original meaning, of which abstractive summarizers usually fall short. In
particular, we present a new method to produce self-contained highlights that
are understandable on their own to avoid confusion. Our method combines
determinantal point processes and deep contextualized representations to
identify an optimal set of sub-sentence segments that are both important and
non-redundant to form summary highlights. To demonstrate the flexibility and
modeling power of our method, we conduct extensive experiments on summarization
datasets. Our analysis provides evidence that highlighting is a promising
avenue of research towards future summarization.
| 2020 |
Computation and Language
|
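The highlighting abstract above combines determinantal point processes with contextualized representations to pick important, non-redundant sub-sentence segments. A rough sketch of greedy DPP-style selection over a quality-weighted similarity kernel (an illustration under assumed inputs, not the paper's exact optimization):

```python
import numpy as np

def greedy_dpp_select(quality, similarity, k):
    """Greedily pick k segments that are both important and non-redundant.

    quality: (n,) importance scores; similarity: (n, n) similarities between
    segment representations. Builds the standard DPP kernel
    L = diag(q) @ S @ diag(q) and greedily maximizes the log-determinant of the
    selected submatrix.
    """
    q = np.asarray(quality)
    L = q[:, None] * np.asarray(similarity) * q[None, :]
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(q)):
            if i in selected:
                continue
            idx = selected + [i]
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            gain = logdet if sign > 0 else -np.inf
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:
            break
        selected.append(best)
    return selected
```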
AutoMeTS: The Autocomplete for Medical Text Simplification
|
The goal of text simplification (TS) is to transform difficult text into a
version that is easier to understand and more broadly accessible to a wide
variety of readers. In some domains, such as healthcare, fully automated
approaches cannot be used since information must be accurately preserved.
Instead, semi-automated approaches can be used that assist a human writer in
simplifying text faster and at a higher quality. In this paper, we examine the
application of autocomplete to text simplification in the medical domain. We
introduce a new parallel medical data set consisting of aligned English
Wikipedia with Simple English Wikipedia sentences and examine the application
of pretrained neural language models (PNLMs) on this dataset. We compare four
PNLMs (BERT, RoBERTa, XLNet, and GPT-2), and show how the additional context of
the sentence to be simplified can be incorporated to achieve better results
(6.17% absolute improvement over the best individual model). We also introduce
an ensemble model that combines the four PNLMs and outperforms the best
individual model by 2.1%, resulting in an overall word prediction accuracy of
64.52%.
| 2020 |
Computation and Language
|
SKATE: A Natural Language Interface for Encoding Structured Knowledge
|
In Natural Language (NL) applications, there is often a mismatch between what
the NL interface is capable of interpreting and what a lay user knows how to
express. This work describes a novel natural language interface that reduces
this mismatch by refining natural language input through successive,
automatically generated semi-structured templates. In this paper we describe
how our approach, called SKATE, uses a neural semantic parser to parse NL input
and suggest semi-structured templates, which are recursively filled to produce
fully structured interpretations. We also show how SKATE integrates with a
neural rule-generation model to interactively suggest and acquire commonsense
knowledge. We provide a preliminary coverage analysis of SKATE for the task of
story understanding, and then describe a current business use-case of the tool
in a specific domain: COVID-19 policy design.
| 2020 |
Computation and Language
|
Towards End-to-End In-Image Neural Machine Translation
|
In this paper, we offer a preliminary investigation into the task of in-image
machine translation: transforming an image containing text in one language into
an image containing the same text in another language. We propose an end-to-end
neural model for this task inspired by recent approaches to neural machine
translation, and demonstrate promising initial results based purely on
pixel-level supervision. We then offer a quantitative and qualitative
evaluation of our system outputs and discuss some common failure modes.
Finally, we conclude with directions for future work.
| 2020 |
Computation and Language
|
Detecting Media Bias in News Articles using Gaussian Bias Distributions
|
Media plays an important role in shaping public opinion. Biased media can
influence people in undesirable directions and hence should be unmasked as
such. We observe that feature-based and neural text classification approaches
which rely only on the distribution of low-level lexical information fail to
detect media bias. This weakness becomes most noticeable for articles on new
events, where words appear in new contexts and hence their "bias
predictiveness" is unclear. In this paper, we therefore study how second-order
information about biased statements in an article helps to improve detection
effectiveness. In particular, we utilize the probability distributions of the
frequency, positions, and sequential order of lexical and informational
sentence-level bias in a Gaussian Mixture Model. On an existing media bias
dataset, we find that the frequency and positions of biased statements strongly
impact article-level bias, whereas their exact sequential order is secondary.
Using a standard model for sentence-level bias detection, we provide empirical
evidence that article-level bias detectors that use second-order information
clearly outperform those without.
| 2020 |
Computation and Language
|
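The media-bias abstract above models the frequency and positions of sentence-level bias with a Gaussian Mixture Model. A hypothetical sketch of turning per-sentence bias probabilities into such second-order article features (the threshold, component count, and feature layout are assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def second_order_bias_features(sentence_bias_probs, threshold=0.5, n_components=2):
    """Summarize one article's sentence-level bias predictions as article features:
    the bias frequency plus a Gaussian mixture over the relative positions of
    biased sentences."""
    probs = np.asarray(sentence_bias_probs)
    n = len(probs)
    positions = np.where(probs >= threshold)[0] / max(n - 1, 1)  # relative positions
    freq = len(positions) / n
    if len(positions) < n_components:                 # too few biased sentences to fit
        return np.array([freq, 0.0, 0.0, 0.0, 0.0])
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(positions.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())
    means = gmm.means_.ravel()[order]
    stds = np.sqrt(gmm.covariances_.ravel())[order]
    return np.concatenate([[freq], means, stds])
```

These features could then feed any article-level classifier on top of the sentence-level bias detector.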
Analyzing Political Bias and Unfairness in News Articles at Different
Levels of Granularity
|
Media organizations bear great responsibility because of their considerable
influence on shaping beliefs and positions of our society. Any form of media
can contain overly biased content, e.g., by reporting on political events in a
selective or incomplete manner. A relevant question hence is whether and how
such form of imbalanced news coverage can be exposed. The research presented in
this paper addresses not only the automatic detection of bias but goes one step
further in that it explores how political bias and unfairness are manifested
linguistically. In this regard we utilize a new corpus of 6964 news articles
with labels derived from adfontesmedia.com and develop a neural model for bias
assessment. By analyzing this model on article excerpts, we find insightful
bias patterns at different levels of text granularity, from single words to the
whole article discourse.
| 2020 |
Computation and Language
|
Transition-based Parsing with Stack-Transformers
|
Modeling the parser state is key to good performance in transition-based
parsing. Recurrent Neural Networks considerably improved the performance of
transition-based systems by modelling the global state, e.g. stack-LSTM
parsers, or local state modeling of contextualized features, e.g. Bi-LSTM
parsers. Given the success of Transformer architectures in recent parsing
systems, this work explores modifications of the sequence-to-sequence
Transformer architecture to model either global or local parser states in
transition-based parsing. We show that modifications of the cross attention
mechanism of the Transformer considerably strengthen performance both on
dependency and Abstract Meaning Representation (AMR) parsing tasks,
particularly for smaller models or limited training data.
| 2020 |
Computation and Language
|
Pushing the Limits of AMR Parsing with Self-Learning
|
Abstract Meaning Representation (AMR) parsing has experienced a notable
growth in performance in the last two years, due both to the impact of transfer
learning and the development of novel architectures specific to AMR. At the
same time, self-learning techniques have helped push the performance boundaries
of other natural language processing applications, such as machine translation
or question answering. In this paper, we explore different ways in which
trained models can be applied to improve AMR parsing performance, including
generation of synthetic text and AMR annotations as well as refinement of
the actions oracle. We show that, without any additional human annotations, these
techniques improve an already performant parser and achieve state-of-the-art
results on AMR 1.0 and AMR 2.0.
| 2020 |
Computation and Language
|
An Investigation of the Relation Between Grapheme Embeddings and
Pronunciation for Tacotron-based Systems
|
End-to-end models, particularly Tacotron-based ones, are currently a popular
solution for text-to-speech synthesis. They allow the production of
high-quality synthesized speech with little to no text preprocessing. Indeed,
they can be trained using either graphemes or phonemes as input directly.
However, in the case of grapheme inputs, little is known concerning the
relation between the underlying representations learned by the model and word
pronunciations. This work investigates this relation in the case of a Tacotron
model trained on French graphemes. Our analysis shows that grapheme embeddings
are related to phoneme information despite no such information being present
during training. Thanks to this property, we show that grapheme embeddings
learned by Tacotron models can be useful for tasks such as grapheme-to-phoneme
conversion and control of the pronunciation in synthetic speech.
| 2021 |
Computation and Language
|
A Weighted Heterogeneous Graph Based Dialogue System
|
Knowledge based dialogue systems have attracted increasing research interest
in diverse applications. However, for disease diagnosis, the widely used
knowledge graph struggles to represent the symptom-symptom relations and
symptom-disease relations, since the edges of a traditional knowledge graph are
unweighted. Most research on disease diagnosis dialogue systems relies heavily on
data-driven methods and statistical features, lacking profound comprehension of
symptom-disease relations and symptom-symptom relations. To tackle this issue,
this work presents a weighted heterogeneous graph based dialogue system for
disease diagnosis. Specifically, we build a weighted heterogeneous graph based
on symptom co-occurrence and a proposed symptom frequency-inverse disease
frequency. Then this work proposes a graph based deep Q-network (Graph-DQN) for
dialogue management. By combining Graph Convolutional Network (GCN) with DQN to
learn the embeddings of diseases and symptoms from both the structural and
attribute information in the weighted heterogeneous graph, Graph-DQN could
capture the symptom-disease relations and symptom-symptom relations better.
Experimental results show that the proposed dialogue system rivals the
state-of-the-art models. More importantly, the proposed dialogue system can
complete the task in fewer dialogue turns and better distinguish between
diseases with similar symptoms.
| 2020 |
Computation and Language
|
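The dialogue-system abstract above weights graph edges with a proposed symptom frequency-inverse disease frequency. Below is a sketch of a TF-IDF-style weighting in that spirit; the input format and exact formula are assumptions, and the paper's definition may differ:

```python
import math
from collections import defaultdict

def symptom_fidf(disease_symptom_records):
    """Compute a TF-IDF-style weight for each (disease, symptom) edge.

    disease_symptom_records: dict mapping disease -> list of symptom lists,
    one list per diagnosed record.
    """
    num_diseases = len(disease_symptom_records)
    diseases_with_symptom = defaultdict(set)
    per_disease_counts = {}
    for disease, records in disease_symptom_records.items():
        counts, total = defaultdict(int), 0
        for record in records:
            for s in record:
                counts[s] += 1
                total += 1
                diseases_with_symptom[s].add(disease)
        per_disease_counts[disease] = (counts, total)
    fidf = {}
    for disease, (counts, total) in per_disease_counts.items():
        if total == 0:
            continue
        for s, c in counts.items():
            sf = c / total                                              # symptom frequency
            idf = math.log(num_diseases / len(diseases_with_symptom[s]))  # inverse disease frequency
            fidf[(disease, s)] = sf * idf
    return fidf
```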
Multi-Unit Transformers for Neural Machine Translation
|
Transformer models achieve remarkable success in Neural Machine Translation.
Many efforts have been devoted to deepening the Transformer by stacking several
units (i.e., a combination of Multihead Attentions and FFN) in a cascade, while
the investigation over multiple parallel units draws little attention. In this
paper, we propose the Multi-Unit Transformers (MUTE), which aim to promote the
expressiveness of the Transformer by introducing diverse and complementary
units. Specifically, we use several parallel units and show that modeling with
multiple units improves model performance and introduces diversity. Further, to
better leverage the advantage of the multi-unit setting, we design a biased
module and a sequential dependency mechanism that guide and encourage complementarity
among different units. Experimental results on three machine translation tasks,
the NIST Chinese-to-English, WMT'14 English-to-German and WMT'18
Chinese-to-English, show that the MUTE models significantly outperform the
Transformer-Base, by up to +1.52, +1.90 and +1.10 BLEU points, with only a mild
drop in inference speed (about 3.1%). In addition, our methods also surpass the
Transformer-Big model, with only 54% of its parameters. These results
demonstrate the effectiveness of the MUTE, as well as its efficiency in both
the inference process and parameter usage.
| 2020 |
Computation and Language
|
FreeDOM: A Transferable Neural Architecture for Structured Information
Extraction on Web Documents
|
Extracting structured data from HTML documents is a long-studied problem with
a broad range of applications like augmenting knowledge bases, supporting
faceted search, and providing domain-specific experiences for key verticals
like shopping and movies. Previous approaches have either required a small
number of examples for each target site or relied on carefully handcrafted
heuristics built over visual renderings of websites. In this paper, we present
a novel two-stage neural approach, named FreeDOM, which overcomes both these
limitations. The first stage learns a representation for each DOM node in the
page by combining both the text and markup information. The second stage
captures longer range distance and semantic relatedness using a relational
neural network. By combining these stages, FreeDOM is able to generalize to
unseen sites after training on a small number of seed sites from that vertical
without requiring expensive hand-crafted features over visual renderings of the
page. Through experiments on a public dataset with 8 different verticals, we
show that FreeDOM beats the previous state of the art by nearly 3.7 F1 points
on average without requiring features over rendered pages or expensive
hand-crafted features.
| 2020 |
Computation and Language
|
RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open
Domain Question Answering
|
State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain
Question Answering (QA) are typically trained for span selection using
distantly supervised positive examples and heuristically retrieved negative
examples. This training scheme possibly explains empirical observations that
these models achieve a high recall amongst their top few predictions, but a low
overall accuracy, motivating the need for answer re-ranking. We develop a
simple and effective re-ranking approach (RECONSIDER) for span-extraction
tasks, that improves upon the performance of large pre-trained MRC models.
RECONSIDER is trained on positive and negative examples extracted from high
confidence predictions of MRC models, and uses in-passage span annotations to
perform span-focused re-ranking over a smaller candidate set. As a result,
RECONSIDER learns to eliminate close false positive passages, and achieves a
new state of the art on four QA tasks, including 45.5% Exact Match accuracy on
Natural Questions with real user questions, and 61.7% on TriviaQA.
| 2020 |
Computation and Language
|
Quasi Error-free Text Classification and Authorship Recognition in a
large Corpus of English Literature based on a Novel Feature Set
|
The Gutenberg Literary English Corpus (GLEC) provides a rich source of
textual data for research in digital humanities, computational linguistics or
neurocognitive poetics. However, so far only a small subcorpus, the Gutenberg
English Poetry Corpus, has been submitted to quantitative text analyses
providing predictions for scientific studies of literature. Here we show that
in the entire GLEC quasi error-free text classification and authorship
recognition is possible with a method using the same set of five style and five
content features, computed via style and sentiment analysis, in both tasks. Our
results identify two standard and two novel features (i.e., type-token ratio,
frequency, sonority score, surprise) as most diagnostic in these tasks. By
providing a simple tool applicable to both short poems and long novels
generating quantitative predictions about features that co-determine the
cognitive and affective processing of specific text categories or authors, our
data pave the way for many future computational and empirical studies of
literature or experiments in reading psychology.
| 2020 |
Computation and Language
|
STN4DST: A Scalable Dialogue State Tracking based on Slot Tagging
Navigation
|
Scalability for handling unknown slot values is an important problem in
dialogue state tracking (DST). As far as we know, previous scalable DST
approaches generally rely on either the candidate generation from slot tagging
output or the span extraction in dialogue context. However, the candidate
generation based DST often suffers from error propagation due to its pipelined
two-stage process; meanwhile span extraction based DST has the risk of
generating invalid spans in the absence of semantic constraints between start and
end position pointers. To tackle the above drawbacks, in this paper, we propose
a novel scalable dialogue state tracking method based on slot tagging
navigation, which implements an end-to-end single-step pointer to locate and
extract slot value quickly and accurately by the joint learning of slot tagging
and slot value position prediction in the dialogue context, especially for
unknown slot values. Extensive experiments over several benchmark datasets show
that the proposed model substantially outperforms state-of-the-art baselines.
| 2021 |
Computation and Language
|
PBoS: Probabilistic Bag-of-Subwords for Generalizing Word Embedding
|
We look into the task of \emph{generalizing} word embeddings: given a set of
pre-trained word vectors over a finite vocabulary, the goal is to predict
embedding vectors for out-of-vocabulary words, \emph{without} extra contextual
information. We rely solely on the spellings of words and propose a model,
along with an efficient algorithm, that simultaneously models subword
segmentation and computes subword-based compositional word embedding. We call
the model probabilistic bag-of-subwords (PBoS), as it applies bag-of-subwords
for all possible segmentations based on their likelihood. Inspections and affix
prediction experiments show that PBoS is able to produce meaningful subword
segmentations and subword rankings without any source of explicit morphological
knowledge. Word similarity and POS tagging experiments show clear advantages of
PBoS over previous subword-level models in the quality of generated word
embeddings across languages.
| 2020 |
Computation and Language
|
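PBoS, described above, averages bag-of-subwords embeddings over all possible segmentations weighted by their likelihood. A brute-force illustrative sketch follows; the authors use an efficient algorithm, and the segmentation scoring here is an assumption:

```python
import numpy as np

def pbos_embedding(word, subword_vectors, subword_logprob):
    """Probabilistic bag-of-subwords style embedding for an out-of-vocabulary word.

    subword_vectors: dict subword -> np.ndarray
    subword_logprob: dict subword -> log-probability used to score segmentations
    """
    def segmentations(s):
        if not s:
            yield []
            return
        for i in range(1, len(s) + 1):
            head = s[:i]
            if head in subword_vectors:
                for rest in segmentations(s[i:]):
                    yield [head] + rest

    total_weight, vec = 0.0, None
    for seg in segmentations(word):
        # Weight each segmentation by its (approximate) likelihood
        weight = np.exp(sum(subword_logprob.get(sw, -10.0) for sw in seg))
        bag = np.mean([subword_vectors[sw] for sw in seg], axis=0)
        vec = bag * weight if vec is None else vec + bag * weight
        total_weight += weight
    return vec / total_weight if total_weight > 0 else None
```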
Using the Full-text Content of Academic Articles to Identify and
Evaluate Algorithm Entities in the Domain of Natural Language Processing
|
In the era of big data, the advancement, improvement, and application of
algorithms in academic research have played an important role in promoting the
development of different disciplines. Academic papers in various disciplines,
especially computer science, contain a large number of algorithms. Identifying
the algorithms from the full-text content of papers can determine popular or
classical algorithms in a specific field and help scholars gain a comprehensive
understanding of the algorithms and even the field. To this end, this article
takes the field of natural language processing (NLP) as an example and
identifies algorithms from academic papers in the field. A dictionary of
algorithms is constructed by manually annotating the contents of papers, and
sentences containing algorithms in the dictionary are extracted through
dictionary-based matching. The number of articles mentioning an algorithm is
used as an indicator to analyze the influence of that algorithm. Our results
reveal the algorithm with the highest influence in NLP papers and show that
classification algorithms represent the largest proportion among the
high-impact algorithms. In addition, the evolution of the influence of
algorithms reflects the changes in research tasks and topics in the field, and
the changes in the influence of different algorithms show different trends. As
a preliminary exploration, this paper conducts an analysis of the impact of
algorithms mentioned in academic texts, and the results can be used as
training data for the automatic extraction of large-scale algorithms in the
future. The methodology in this paper is domain-independent and can be applied
to other domains.
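A toy sketch of the dictionary-based matching step described in this abstract: count how many papers mention each algorithm from a manually built dictionary, using the number of mentioning articles as a simple influence indicator. The dictionary entries and paper texts are illustrative stand-ins.

```python
# Toy sketch of dictionary-based matching of algorithm mentions in paper full texts.
import re
from collections import Counter

algorithm_dict = ["support vector machine", "conditional random field", "lstm", "transformer"]

papers = {
    "paper_1": "We train a Support Vector Machine and an LSTM baseline ...",
    "paper_2": "A Transformer encoder replaces the LSTM in our model ...",
}

mention_counts = Counter()
for paper_id, full_text in papers.items():
    text = full_text.lower()
    for algo in algorithm_dict:
        # word-boundary match so "lstm" does not fire inside unrelated tokens
        if re.search(r"\b" + re.escape(algo) + r"\b", text):
            mention_counts[algo] += 1  # one count per mentioning article

# Number of mentioning articles as a simple influence indicator
for algo, n in mention_counts.most_common():
    print(f"{algo}: mentioned in {n} paper(s)")
```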
| 2,020 |
Computation and Language
|
Multilingual Contextual Affective Analysis of LGBT People Portrayals in
Wikipedia
|
Specific lexical choices in narrative text reflect both the writer's
attitudes towards people in the narrative and influence the audience's
reactions. Prior work has examined descriptions of people in English using
contextual affective analysis, a natural language processing (NLP) technique
that seeks to analyze how people are portrayed along dimensions of power,
agency, and sentiment. Our work presents an extension of this methodology to
multilingual settings, which is enabled by a new corpus that we collect and a
new multilingual model. We additionally show how word connotations differ
across languages and cultures, highlighting the difficulty of generalizing
existing English datasets and methods. We then demonstrate the usefulness of
our method by analyzing Wikipedia biography pages of members of the LGBT
community across three languages: English, Russian, and Spanish. Our results
show systematic differences in how the LGBT community is portrayed across
languages, surfacing cultural differences in narratives and signs of social
biases. Practically, this model can be used to identify Wikipedia articles for
further manual analysis -- articles that might contain content gaps or an
imbalanced representation of particular social groups.
| 2,021 |
Computation and Language
|
KnowDis: Knowledge Enhanced Data Augmentation for Event Causality
Detection via Distant Supervision
|
Modern models of event causality detection (ECD) are mainly based on
supervised learning from small hand-labeled corpora. However, hand-labeled
training data is expensive to produce, has low coverage of causal expressions, and
is limited in size, which makes it hard for supervised methods to detect causal relations
between events. To address this data scarcity problem, we investigate a data
augmentation framework for ECD, dubbed as Knowledge Enhanced Distant Data
Augmentation (KnowDis). Experimental results on two benchmark datasets
EventStoryLine corpus and Causal-TimeBank show that 1) KnowDis can augment
available training data assisted with the lexical and causal commonsense
knowledge for ECD via distant supervision, and 2) our method outperforms
previous methods by a large margin assisted with automatically labeled training
data.
| 2,020 |
Computation and Language
|
ReSCo-CC: Unsupervised Identification of Key Disinformation Sentences
|
Disinformation is often presented in long textual articles, especially when
it relates to domains such as health, often seen in relation to COVID-19. These
articles are typically observed to have a number of trustworthy sentences among
which core disinformation sentences are scattered. In this paper, we propose a
novel unsupervised task of identifying sentences containing key disinformation
within a document that is known to be untrustworthy. We design a three-phase
statistical NLP solution for the task which starts with embedding sentences
within a bespoke feature space designed for the task. Sentences represented
using those features are then clustered, following which the key sentences are
identified through proximity scoring. We also curate a new dataset with
sentence level disinformation scorings to aid evaluation for this task; the
dataset is being made publicly available to facilitate further research. Based
on a comprehensive empirical evaluation against techniques from related tasks
such as claim detection and summarization, as well as against simplified
variants of our proposed approach, we illustrate that our method is able to
identify core disinformation effectively.
| 2,020 |
Computation and Language
|
TMT: A Transformer-based Modal Translator for Improving Multimodal
Sequence Representations in Audio Visual Scene-aware Dialog
|
Audio Visual Scene-aware Dialog (AVSD) is a task to generate responses when
discussing a given video. The previous state-of-the-art model shows
superior performance for this task using Transformer-based architecture.
However, there remain some limitations in learning better representation of
modalities. Inspired by Neural Machine Translation (NMT), we propose the
Transformer-based Modal Translator (TMT) to learn the representations of the
source modal sequence by translating the source modal sequence to the related
target modal sequence in a supervised manner. Based on Multimodal Transformer
Networks (MTN), we apply TMT to video and dialog, proposing MTN-TMT for the
video-grounded dialog system. On the AVSD track of the Dialog System Technology
Challenge 7, MTN-TMT outperforms the MTN and other submission models in both
the Video and Text task and the Text Only task. Compared with MTN, MTN-TMT improves all
metrics, especially, achieving relative improvement up to 14.1% on CIDEr. Index
Terms: multimodal learning, audio-visual scene-aware dialog, neural machine
translation, multi-task learning
| 2,020 |
Computation and Language
|
Gender Prediction Based on Vietnamese Names with Machine Learning
Techniques
|
As biological gender is one aspect of personal identity, much work has been done
on gender classification based on people's names. There are numerous proposals
for English and Chinese; still, little work has been done for Vietnamese so far.
We propose a new dataset for gender
prediction based on Vietnamese names. This dataset comprises over 26,000 full
names annotated with genders. This dataset is available on our website for
research purposes. In addition, this paper describes six machine learning
algorithms (Support Vector Machine, Multinomial Naive Bayes, Bernoulli Naive
Bayes, Decision Tree, Random Forest, and Logistic Regression) and a deep
learning model (LSTM) with fastText word embedding for gender prediction on
Vietnamese names. We create a dataset and investigate the impact of each name
component on detecting gender. As a result, the best F1-score we achieved is 96%
with the LSTM model, and we provide a web API based on our trained model.
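A hedged sketch of one of the classical baselines listed above: character n-gram features with logistic regression over full names. The tiny name list is illustrative only, not the paper's 26,000-name dataset.

```python
# Character n-gram TF-IDF + logistic regression baseline for name-based gender prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

names = ["nguyen van an", "tran thi hoa", "le thi mai", "pham van minh"]
genders = ["male", "female", "female", "male"]

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # sub-name character patterns
    LogisticRegression(max_iter=1000),
)
model.fit(names, genders)

print(model.predict(["nguyen thi lan"]))  # expected: a female-leaning prediction
```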
| 2,021 |
Computation and Language
|
PARENTing via Model-Agnostic Reinforcement Learning to Correct
Pathological Behaviors in Data-to-Text Generation
|
In language generation models conditioned on structured data, the classical
training via maximum likelihood almost always leads models to pick up on
dataset divergence (i.e., hallucinations or omissions), and to incorporate them
erroneously in their own generations at inference. In this work, we build on top
of previous Reinforcement Learning based approaches and show that a
model-agnostic framework relying on the recently introduced PARENT metric is
efficient at reducing both hallucinations and omissions. Evaluations on the
widely used WikiBIO and WebNLG benchmarks demonstrate the effectiveness of this
framework compared to state-of-the-art models.
| 2,020 |
Computation and Language
|
TurnGPT: a Transformer-based Language Model for Predicting Turn-taking
in Spoken Dialog
|
Syntactic and pragmatic completeness is known to be important for turn-taking
prediction, but so far machine learning models of turn-taking have used such
linguistic information in a limited way. In this paper, we introduce TurnGPT, a
transformer-based language model for predicting turn-shifts in spoken dialog.
The model has been trained and evaluated on a variety of written and spoken
dialog datasets. We show that the model outperforms two baselines used in prior
work. We also report on an ablation study, as well as attention and gradient
analyses, which show that the model is able to utilize the dialog context and
pragmatic completeness for turn-taking prediction. Finally, we explore the
model's potential in not only detecting, but also projecting, turn-completions.
| 2,020 |
Computation and Language
|
Learning to Decouple Relations: Few-Shot Relation Classification with
Entity-Guided Attention and Confusion-Aware Training
|
This paper aims to enhance the few-shot relation classification especially
for sentences that jointly describe multiple relations. Due to the fact that
some relations usually keep high co-occurrence in the same context, previous
few-shot relation classifiers struggle to distinguish them with few annotated
instances. To alleviate the above relation confusion problem, we propose CTEG,
a model equipped with two mechanisms to learn to decouple these easily-confused
relations. On the one hand, an Entity-Guided Attention (EGA) mechanism, which
leverages the syntactic relations and relative positions between each word and
the specified entity pair, is introduced to guide the attention to filter out
information causing confusion. On the other hand, a Confusion-Aware Training
(CAT) method is proposed to explicitly learn to distinguish relations by
playing a pushing-away game between classifying a sentence into a true relation
and its confusing relation. Extensive experiments are conducted on the FewRel
dataset, and the results show that our proposed model achieves comparable and
even substantially better results than strong baselines in terms of accuracy. Furthermore,
the ablation test and case study verify the effectiveness of our proposed EGA
and CAT, especially in addressing the relation confusion problem.
| 2,020 |
Computation and Language
|
Exploring Sequence-to-Sequence Models for SPARQL Pattern Composition
|
A booming amount of information is continuously added to the Internet as
structured and unstructured data, feeding knowledge bases such as DBpedia and
Wikidata with billions of statements describing millions of entities. The aim
of Question Answering systems is to allow lay users to access such data using
natural language without needing to write formal queries. However, users often
submit questions that are complex and require a certain level of abstraction
and reasoning to decompose them into basic graph patterns. In this short paper,
we explore the use of architectures based on Neural Machine Translation called
Neural SPARQL Machines to learn pattern compositions. We show that
sequence-to-sequence models are a viable and promising option to transform long
utterances into complex SPARQL queries.
| 2,020 |
Computation and Language
|
German's Next Language Model
|
In this work we present the experiments which led to the creation of our
BERT and ELECTRA based German language models, GBERT and GELECTRA. By varying
the input training data, model size, and the presence of Whole Word Masking
(WWM) we were able to attain SoTA performance across a set of document
classification and named entity recognition (NER) tasks for both models of base
and large size. We adopt an evaluation driven approach in training these models
and our results indicate that both adding more data and utilizing WWM improve
model performance. By benchmarking against existing German models, we show that
these models are the best German models to date. Our trained models will be
made publicly available to the research community.
| 2,020 |
Computation and Language
|
Analyzing the Source and Target Contributions to Predictions in Neural
Machine Translation
|
In Neural Machine Translation (and, more generally, conditional language
modeling), the generation of a target token is influenced by two types of
context: the source and the prefix of the target sequence. While many attempts
to understand the internal workings of NMT models have been made, none of them
explicitly evaluates relative source and target contributions to a generation
decision. We argue that this relative contribution can be evaluated by adopting
a variant of Layerwise Relevance Propagation (LRP). Its underlying
'conservation principle' makes relevance propagation unique: differently from
other methods, it evaluates not an abstract quantity reflecting token
importance, but the proportion of each token's influence. We extend LRP to the
Transformer and conduct an analysis of NMT models which explicitly evaluates
the source and target relative contributions to the generation process. We
analyze changes in these contributions when conditioning on different types of
prefixes, when varying the training objective or the amount of training data,
and during the training process. We find that models trained with more data
tend to rely on source information more and to have sharper token
contributions; the training process is non-monotonic with several stages of
different nature.
| 2,021 |
Computation and Language
|
Complaint Identification in Social Media with Transformer Networks
|
Complaining is a speech act extensively used by humans to communicate a
negative inconsistency between reality and expectations. Previous work on
automatically identifying complaints in social media has focused on using
feature-based and task-specific neural network models. Adapting
state-of-the-art pre-trained neural language models and their combinations with
other linguistic information from topics or sentiment for complaint prediction
has yet to be explored. In this paper, we evaluate a battery of neural models
underpinned by transformer networks which we subsequently combine with
linguistic information. Experiments on a publicly available data set of
complaints demonstrate that our models outperform previous state-of-the-art
methods by a large margin, achieving a macro F1 of up to 87.
| 2,020 |
Computation and Language
|
LemMED: Fast and Effective Neural Morphological Analysis with Short
Context Windows
|
We present LemMED, a character-level encoder-decoder for contextual
morphological analysis (combined lemmatization and tagging). LemMED extends and
is named after two other attention-based models, namely Lematus, a contextual
lemmatizer, and MED, a morphological (re)inflection model. Our approach does
not require training separate lemmatization and tagging models, nor does it
need additional resources and tools, such as morphological dictionaries or
transducers. Moreover, LemMED relies solely on character-level representations
and on local context. Although the model can, in principle, account for global
context on sentence level, our experiments show that using just a single word
of context around each target word is not only more computationally feasible,
but yields better results as well. We evaluate LemMED in the framework of the
SIGMORPHON 2019 shared task on combined lemmatization and tagging. In terms of
average performance LemMED ranks 5th among 13 systems and is bested only by the
submissions that use contextualized embeddings.
| 2,020 |
Computation and Language
|
What makes multilingual BERT multilingual?
|
Recently, multilingual BERT has worked remarkably well on cross-lingual transfer
tasks, outperforming static non-contextualized word embeddings. In this work, we
provide an in-depth experimental study to supplement the existing literature on
cross-lingual ability. We compare the cross-lingual ability of
non-contextualized and contextualized representation models trained on the same data.
We find that data size and context window size are crucial factors for
transferability.
| 2,020 |
Computation and Language
|
Open-Domain Frame Semantic Parsing Using Transformers
|
Frame semantic parsing is a complex problem which includes multiple
underlying subtasks. Recent approaches have employed joint learning of subtasks
(such as predicate and argument detection), and multi-task learning of related
tasks (such as syntactic and semantic parsing). In this paper, we explore
multi-task learning of all subtasks with transformer-based models. We show that
a purely generative encoder-decoder architecture handily beats the previous
state of the art in FrameNet 1.7 parsing, and that a mixed decoding multi-task
approach achieves even better performance. Finally, we show that the multi-task
model also outperforms recent state of the art systems for PropBank SRL parsing
on the CoNLL 2012 benchmark.
| 2,020 |
Computation and Language
|
Is Retriever Merely an Approximator of Reader?
|
The state of the art in open-domain question answering (QA) relies on an
efficient retriever that drastically reduces the search space for the expensive
reader. A rather overlooked question in the community is the relationship
between the retriever and the reader, and in particular, if the whole purpose
of the retriever is just a fast approximation for the reader. Our empirical
evidence indicates that the answer is no, and that the reader and the retriever
are complementary to each other even in terms of accuracy only. We make a
careful conjecture that the architectural constraint of the retriever, which
has been originally intended for enabling approximate search, seems to also
make the model more robust in large-scale search. We then propose to distill
the reader into the retriever so that the retriever absorbs the strength of the
reader while keeping its own benefit. Experimental results show that our method
can enhance the document recall rate as well as the end-to-end QA accuracy of
off-the-shelf retrievers in open-domain QA tasks.
| 2,020 |
Computation and Language
|
Unsupervised Multiple Choices Question Answering: Start Learning from
Basic Knowledge
|
In this paper, we study the possibility of almost unsupervised Multiple
Choices Question Answering (MCQA). Starting from very basic knowledge, the MCQA
model knows that some choices have higher probabilities of being correct than
the others. This information, though very noisy, guides the training of an MCQA
model. The proposed method is shown to outperform the baseline approaches on
RACE and to be comparable with some supervised learning approaches on MC500.
| 2,021 |
Computation and Language
|
Controllable Text Simplification with Explicit Paraphrasing
|
Text Simplification improves the readability of sentences through several
rewriting transformations, such as lexical paraphrasing, deletion, and
splitting. Current simplification systems are predominantly
sequence-to-sequence models that are trained end-to-end to perform all these
operations simultaneously. However, such systems limit themselves to mostly
deleting words and cannot easily adapt to the requirements of different target
audiences. In this paper, we propose a novel hybrid approach that leverages
linguistically-motivated rules for splitting and deletion, and couples them
with a neural paraphrasing model to produce varied rewriting styles. We
introduce a new data augmentation method to improve the paraphrasing capability
of our model. Through automatic and manual evaluations, we show that our
proposed model establishes a new state-of-the-art for the task, paraphrasing
more often than the existing systems, and can control the degree of each
simplification operation applied to the input texts.
| 2,021 |
Computation and Language
|
Token Drop mechanism for Neural Machine Translation
|
Neural machine translation with millions of parameters is vulnerable to
unfamiliar inputs. We propose Token Drop to improve generalization and avoid
overfitting for the NMT model. Our approach is similar to word dropout, except that
we replace dropped tokens with a special token instead of setting their embeddings
to zero. We further introduce two self-supervised objectives: Replaced Token Detection
and Dropped Token Prediction. Our method forces the model to generate the target
translation with less information, so that it learns better textual representations.
Experiments on Chinese-English and English-Romanian benchmarks
demonstrate the effectiveness of our approach and our model achieves
significant improvements over a strong Transformer baseline.
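A minimal sketch of the Token Drop idea as described above: randomly replace source tokens with a reserved special token id rather than zeroing embeddings, and record the dropped positions so they can supervise auxiliary objectives. The vocabulary ids and the `DROP_ID` value are assumptions for illustration.

```python
# Minimal Token Drop sketch: corrupt a token-id sequence by replacing random tokens
# with a special <drop> id, keeping track of which positions were dropped.
import random

DROP_ID = 3  # assumed id reserved for the special <drop> token

def token_drop(token_ids, drop_prob=0.15, protected={0, 1, 2}):
    """Return a corrupted copy of the sequence plus the dropped positions,
    which can supervise Replaced Token Detection / Dropped Token Prediction."""
    corrupted, dropped_positions = [], []
    for i, tok in enumerate(token_ids):
        if tok not in protected and random.random() < drop_prob:
            corrupted.append(DROP_ID)
            dropped_positions.append(i)
        else:
            corrupted.append(tok)
    return corrupted, dropped_positions

random.seed(0)
src = [101, 245, 812, 90, 377, 102]
print(token_drop(src, drop_prob=0.3))
```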
| 2,020 |
Computation and Language
|
LT3 at SemEval-2020 Task 9: Cross-lingual Embeddings for Sentiment
Analysis of Hinglish Social Media Text
|
This paper describes our contribution to the SemEval-2020 Task 9 on Sentiment
Analysis for Code-mixed Social Media Text. We investigated two approaches to
solve the task of Hinglish sentiment analysis. The first approach uses
cross-lingual embeddings resulting from projecting Hinglish and pre-trained
English FastText word embeddings in the same space. The second approach
incorporates pre-trained English embeddings that are incrementally retrained
with a set of Hinglish tweets. The results show that the second approach
performs best, with an F1-score of 70.52% on the held-out test data.
| 2,020 |
Computation and Language
|
Classifying Syntactic Errors in Learner Language
|
We present a method for classifying syntactic errors in learner language,
namely errors whose correction alters the morphosyntactic structure of a
sentence.
The methodology builds on the established Universal Dependencies syntactic
representation scheme, and provides complementary information to other
error-classification systems.
Unlike existing error classification methods, our method is applicable across
languages, which we showcase by producing a detailed picture of syntactic
errors in learner English and learner Russian. We further demonstrate the
utility of the methodology for analyzing the outputs of leading Grammatical
Error Correction (GEC) systems.
| 2,020 |
Computation and Language
|
Deciphering Undersegmented Ancient Scripts Using Phonetic Prior
|
Most undeciphered lost languages exhibit two characteristics that pose
significant decipherment challenges: (1) the scripts are not fully segmented
into words; (2) the closest known language is not determined. We propose a
decipherment model that handles both of these challenges by building on rich
linguistic constraints reflecting consistent patterns in historical sound
change. We capture the natural phonological geometry by learning character
embeddings based on the International Phonetic Alphabet (IPA). The resulting
generative framework jointly models word segmentation and cognate alignment,
informed by phonological constraints. We evaluate the model on both deciphered
languages (Gothic, Ugaritic) and an undeciphered one (Iberian). The experiments
show that incorporating phonetic geometry leads to clear and consistent gains.
Additionally, we propose a measure for language closeness which correctly
identifies related languages for Gothic and Ugaritic. For Iberian, the method
does not show strong evidence supporting Basque as a related language,
concurring with the position favored by current scholarship.
| 2,020 |
Computation and Language
|
Contextualized Attention-based Knowledge Transfer for Spoken
Conversational Question Answering
|
Spoken conversational question answering (SCQA) requires machines to model
complex dialogue flow given the speech utterances and text corpora. Different
from traditional text question answering (QA) tasks, SCQA involves audio signal
processing, passage comprehension, and contextual understanding. However, ASR
systems introduce unexpected noisy signals to the transcriptions, which result
in performance degradation on SCQA. To overcome the problem, we propose CADNet,
a novel contextualized attention-based distillation approach, which applies
both cross-attention and self-attention to obtain ASR-robust contextualized
embedding representations of the passage and dialogue history for performance
improvements. We also introduce a spoken conversational knowledge distillation
framework to distill the ASR-robust knowledge from the estimated probabilities
of the teacher model to the student. We conduct extensive experiments on the
Spoken-CoQA dataset and demonstrate that our approach achieves remarkable
performance in this task.
| 2,021 |
Computation and Language
|
Knowledge Distillation for Improved Accuracy in Spoken Question
Answering
|
Spoken question answering (SQA) is a challenging task that requires the
machine to fully understand the complex spoken documents. Automatic speech
recognition (ASR) plays a significant role in the development of QA systems.
However, recent work shows that ASR systems generate highly noisy
transcripts, which critically limit the capability of machine comprehension on
the SQA task. To address the issue, we present a novel distillation framework.
Specifically, we devise a training strategy to perform knowledge distillation
(KD) from spoken documents and written counterparts. Our work makes a step
towards distilling knowledge from the language model as a supervision signal to
lead to better student accuracy by reducing the misalignment between automatic
and manual transcriptions. Experiments demonstrate that our approach
outperforms several state-of-the-art language models on the Spoken-SQuAD
dataset.
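A generic knowledge-distillation loss sketch of the kind this abstract describes: a temperature-scaled KL term between teacher and student logits blended with the usual hard-label loss. The temperature and mixing weight are illustrative hyperparameters, not the paper's settings.

```python
# Generic knowledge-distillation loss: soft-target KL (teacher -> student) + hard-label CE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend temperature-scaled soft targets with the standard cross-entropy loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
labels = torch.tensor([1, 3, 5, 7])
print(distillation_loss(student, teacher, labels))
```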
| 2,021 |
Computation and Language
|
Online Conversation Disentanglement with Pointer Networks
|
Huge amounts of textual conversations occur online every day, where multiple
conversations take place concurrently. Interleaved conversations lead to
difficulties in not only following the ongoing discussions but also extracting
relevant information from simultaneous messages. Conversation disentanglement
aims to separate intermingled messages into detached conversations. However,
existing disentanglement methods rely mostly on handcrafted features that are
dataset specific, which hinders generalization and adaptability. In this work,
we propose an end-to-end online framework for conversation disentanglement that
avoids time-consuming domain-specific feature engineering. We design a novel
way to embed the whole utterance that comprises timestamp, speaker, and message
text, and propose a custom attention mechanism that models disentanglement as
a pointing problem while effectively capturing inter-utterance interactions in
an end-to-end fashion. We also introduce a joint-learning objective to better
capture contextual information. Our experiments on the Ubuntu IRC dataset show
that our method achieves state-of-the-art performance in both link and
conversation prediction tasks.
| 2,020 |
Computation and Language
|
NeuSpell: A Neural Spelling Correction Toolkit
|
We introduce NeuSpell, an open-source toolkit for spelling correction in
English. Our toolkit comprises ten different models, and benchmarks them on
naturally occurring misspellings from multiple sources. We find that many
systems do not adequately leverage the context around the misspelt token. To
remedy this, (i) we train neural models using spelling errors in context,
synthetically constructed by reverse engineering isolated misspellings; and
(ii) use contextual representations. By training on our synthetic examples,
correction rates improve by 9% (absolute) compared to the case when models are
trained on randomly sampled character perturbations. Using richer contextual
representations boosts the correction rate by another 3%. Our toolkit enables
practitioners to use our proposed and existing spelling correction systems,
both via a unified command line, as well as a web interface. Among many
potential applications, we demonstrate the utility of our spell-checkers in
combating adversarial misspellings. The toolkit can be accessed at
neuspell.github.io. Code and pretrained models are available at
http://github.com/neuspell/neuspell.
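A toy sketch of building context-aware training data along the lines described above: inject previously observed misspellings into clean sentences and keep the clean sentence as the correction target. The misspelling table is a made-up stand-in for harvested error resources.

```python
# Inject known misspellings into clean sentences to create (noisy, clean) training pairs.
import random

# word -> misspellings harvested from isolated-error resources (stand-in examples)
misspellings = {
    "receive": ["recieve", "receeve"],
    "definitely": ["definately"],
    "their": ["thier"],
}

def corrupt_sentence(sentence, error_rate=0.3, seed=0):
    """Replace some words with known misspellings, keeping the clean sentence as the label."""
    rng = random.Random(seed)
    noisy = []
    for word in sentence.split():
        options = misspellings.get(word.lower())
        if options and rng.random() < error_rate:
            noisy.append(rng.choice(options))
        else:
            noisy.append(word)
    return " ".join(noisy), sentence  # (noisy input, clean target) pair

print(corrupt_sentence("they will definitely receive their invitation", error_rate=1.0))
```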
| 2,020 |
Computation and Language
|
Lexicon generation for detecting fake news
|
With the digitization of media, an immense amount of news data has been
generated by online sources, including mainstream media outlets as well as
social networks. However, the ease of production and distribution has resulted in
the circulation of fake news alongside credible, authentic news. The pervasive
dissemination of fake news has extreme negative impacts on individuals and
society. Therefore, fake news detection has recently become an emerging topic
as an interdisciplinary research field that is attracting significant attention
from many research disciplines, including social sciences and linguistics. In
this study, we propose a method primarily based on lexicons, including a scoring
system, to facilitate the detection of fake news in Turkish. We contribute
to the literature by collecting a novel, large scale, and credible dataset of
Turkish news, and by constructing the first fake news detection lexicon for
Turkish.
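An illustrative lexicon-and-scoring sketch of the general approach described above. The lexicon entries, weights, and threshold are made up for demonstration and are not the paper's Turkish lexicon.

```python
# Score a text by summing weights of matched lexicon entries; label it by a threshold.
fake_lexicon = {"shocking": 2.0, "miracle": 1.5, "secret": 1.0, "exposed": 1.5}
credible_lexicon = {"reported": -1.0, "according": -0.5, "official": -1.0}

def lexicon_score(text, threshold=1.0):
    tokens = text.lower().split()
    score = sum(fake_lexicon.get(t, 0.0) for t in tokens)
    score += sum(credible_lexicon.get(t, 0.0) for t in tokens)
    return score, "fake" if score >= threshold else "credible"

print(lexicon_score("shocking secret cure exposed by anonymous source"))
print(lexicon_score("the ministry reported the figures according to an official audit"))
```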
| 2,020 |
Computation and Language
|
TweetBERT: A Pretrained Language Representation Model for Twitter Text
Analysis
|
Twitter is a well-known microblogging social site where users express their
views and opinions in real-time. As a result, tweets tend to contain valuable
information. With the advancements of deep learning in the domain of natural
language processing, extracting meaningful information from tweets has become a
growing interest among natural language researchers. Applying existing language
representation models to extract information from Twitter does not often
produce good results. Moreover, there are no existing language representation
models for text analysis specific to the social media domain. Hence, in this
article, we introduce two TweetBERT models, which are domain-specific language
representation models pre-trained on millions of tweets. We show that the
TweetBERT models significantly outperform the traditional BERT models in
Twitter text mining tasks by more than 7% on each Twitter dataset. We also
provide an extensive analysis by evaluating seven BERT models on 31 different
datasets. Our results validate our hypothesis that continued training of
language models on Twitter corpora helps performance on Twitter tasks.
| 2,020 |
Computation and Language
|
Stacking Neural Network Models for Automatic Short Answer Scoring
|
Automatic short answer scoring is a text classification problem for automatically
assessing students' answers during exams. Several challenges can arise in building
an automatic short answer scoring system, one of which is the quantity and quality
of the data. The data labeling process is not easy because it requires a human
annotator who is an expert in the field. Further, data imbalance is also a
challenge because the number of correct answers is always much smaller than the
number of wrong answers. In this paper, we propose a stacking model based on
neural networks and XGBoost for the classification process, with sentence
embeddings as features. We also propose using a data upsampling method to handle
imbalanced classes and a hyperparameter optimization algorithm to find a robust
model automatically. We use the Ukara 1.0 Challenge dataset, and our best model
obtained an F1-score of 0.821, exceeding previous work on the same dataset.
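A rough sketch of a stacked classifier over sentence-embedding features, in the spirit of the model above. Here sklearn's GradientBoostingClassifier stands in for XGBoost, and the embeddings are random placeholders rather than outputs of a real sentence encoder.

```python
# Stack a neural network and a gradient-boosting model over sentence-embedding features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # placeholder sentence embeddings
y = rng.integers(0, 2, size=200)      # correct / incorrect answer labels

stack = StackingClassifier(
    estimators=[
        ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)),
        ("gbm", GradientBoostingClassifier(random_state=0)),  # stand-in for XGBoost
    ],
    final_estimator=LogisticRegression(),
    cv=3,
)
stack.fit(X, y)
print(stack.predict(X[:5]))
```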
| 2,021 |
Computation and Language
|
DuoRAT: Towards Simpler Text-to-SQL Models
|
Recent neural text-to-SQL models can effectively translate natural language
questions to corresponding SQL queries on unseen databases. Working mostly on
the Spider dataset, researchers have proposed increasingly sophisticated
solutions to the problem. Contrary to this trend, in this paper we focus on
simplifications. We begin by building DuoRAT, a re-implementation of the
state-of-the-art RAT-SQL model that, unlike RAT-SQL, uses only relation-aware
or vanilla transformers as building blocks. We perform several ablation
experiments using DuoRAT as the baseline model. Our experiments confirm the
usefulness of some techniques and point out the redundancy of others, including
structural SQL features and features that link the question with the schema.
| 2,021 |
Computation and Language
|
Beyond English-Centric Multilingual Machine Translation
|
Existing work in translation demonstrated the potential of massively
multilingual machine translation by training a single model able to translate
between any pair of languages. However, much of this work is English-Centric by
training only on data which was translated from or to English. While this is
supported by large sources of training data, it does not reflect translation
needs worldwide. In this work, we create a true Many-to-Many multilingual
translation model that can translate directly between any pair of 100
languages. We build and open source a training dataset that covers thousands of
language directions with supervised data, created through large-scale mining.
Then, we explore how to effectively increase model capacity through a
combination of dense scaling and language-specific sparse parameters to create
high quality models. Our focus on non-English-Centric models brings gains of
more than 10 BLEU when directly translating between non-English directions
while performing competitively to the best single systems of WMT. We
open-source our scripts so that others may reproduce the data, evaluation, and
final M2M-100 model.
| 2,020 |
Computation and Language
|
Sentence Boundary Augmentation For Neural Machine Translation Robustness
|
Neural Machine Translation (NMT) models have demonstrated strong state of the
art performance on translation tasks where well-formed training and evaluation
data are provided, but they remain sensitive to inputs that include errors of
various types. Specifically, in the context of long-form speech translation
systems, where the input transcripts come from Automatic Speech Recognition
(ASR), the NMT models have to handle errors including phoneme substitutions,
grammatical structure, and sentence boundaries, all of which pose challenges to
NMT robustness. Through in-depth error analysis, we show that sentence boundary
segmentation has the largest impact on quality, and we develop a simple data
augmentation strategy to improve segmentation robustness.
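A hedged sketch of one simple boundary-augmentation recipe consistent with this abstract: randomly merge adjacent training sentence pairs so the NMT model also sees imperfect segment boundaries of the kind ASR segmentation produces. The exact strategy used in the paper may differ.

```python
# Randomly merge adjacent (source, target) sentence pairs to simulate missed ASR boundaries.
import random

def augment_boundaries(sentence_pairs, merge_prob=0.5, seed=0):
    """sentence_pairs: list of (source, target) sentences in document order."""
    rng = random.Random(seed)
    augmented = []
    i = 0
    while i < len(sentence_pairs) - 1:
        (src_a, tgt_a), (src_b, tgt_b) = sentence_pairs[i], sentence_pairs[i + 1]
        if rng.random() < merge_prob:
            # merge two segments: simulates a missed sentence boundary
            augmented.append((src_a + " " + src_b, tgt_a + " " + tgt_b))
            i += 2
        else:
            augmented.append((src_a, tgt_a))
            i += 1
    if i == len(sentence_pairs) - 1:      # keep a trailing unmerged pair
        augmented.append(sentence_pairs[i])
    return augmented

pairs = [("hello there", "bonjour"), ("how are you", "comment allez-vous"),
         ("i am fine", "je vais bien")]
print(augment_boundaries(pairs))
```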
| 2,020 |
Computation and Language
|
Multi-Domain Dialogue State Tracking based on State Graph
|
We investigate the problem of multi-domain Dialogue State Tracking (DST) with
open vocabulary, which aims to extract the state from the dialogue. Existing
approaches usually concatenate previous dialogue state with dialogue history as
the input to a bi-directional Transformer encoder. They rely on the
self-attention mechanism of Transformer to connect tokens in them. However,
attention may be paid to spurious connections, leading to wrong inference. In
this paper, we propose to construct a dialogue state graph in which domains,
slots and values from the previous dialogue state are connected properly.
Through training, the graph node and edge embeddings can encode co-occurrence
relations between domain-domain, slot-slot and domain-slot, reflecting the
strong transition paths in general dialogue. The state graph, encoded with
relational-GCN, is fused into the Transformer encoder. Experimental results
show that our approach achieves a new state of the art on the task while
remaining efficient. It outperforms existing open-vocabulary DST approaches.
| 2,020 |
Computation and Language
|
A Simple and Efficient Multi-Task Learning Approach for Conditioned
Dialogue Generation
|
Conditioned dialogue generation suffers from the scarcity of labeled
responses. In this work, we exploit labeled non-dialogue text data related to
the condition, which are much easier to collect. We propose a multi-task
learning approach to leverage both labeled dialogue and text data. The three tasks
jointly optimize the same pre-trained Transformer: a conditioned dialogue
generation task on the labeled dialogue data, and conditioned language encoding
and generation tasks on the labeled text data.
Experimental results show that our approach outperforms the state-of-the-art
models by leveraging the labeled texts, and it also obtains a larger performance
improvement compared to previous methods that leverage text data.
| 2,021 |
Computation and Language
|
Cascaded Models With Cyclic Feedback For Direct Speech Translation
|
Direct speech translation describes a scenario where only speech inputs and
corresponding translations are available. Such data are notoriously limited. We
present a technique that allows cascades of automatic speech recognition (ASR)
and machine translation (MT) to exploit in-domain direct speech translation
data in addition to out-of-domain MT and ASR data. After pre-training MT and
ASR, we use a feedback cycle where the downstream performance of the MT system
is used as a signal to improve the ASR system by self-training, and the MT
component is fine-tuned on multiple ASR outputs, making it more tolerant
towards spelling variations. A comparison to end-to-end speech translation
using components of identical architecture and the same data shows gains of up
to 3.8 BLEU points on LibriVoxDeEn and up to 5.1 BLEU points on CoVoST for
German-to-English speech translation.
| 2,023 |
Computation and Language
|
Semantic Role Labeling as Syntactic Dependency Parsing
|
We reduce the task of (span-based) PropBank-style semantic role labeling
(SRL) to syntactic dependency parsing. Our approach is motivated by our
empirical analysis that shows three common syntactic patterns account for over
98% of the SRL annotations for both English and Chinese data. Based on this
observation, we present a conversion scheme that packs SRL annotations into
dependency tree representations through joint labels that permit highly
accurate recovery back to the original format. This representation allows us to
train statistical dependency parsers to tackle SRL and achieve competitive
performance with the current state of the art. Our findings show the promise of
syntactic dependency trees in encoding semantic role relations within their
syntactic domain of locality, and point to potential further integration of
syntactic methods into semantic role labeling in the future.
| 2,020 |
Computation and Language
|
Detection of COVID-19 informative tweets using RoBERTa
|
Social media such as Twitter is a hotspot of user-generated information. In
this ongoing Covid-19 pandemic, there has been an abundance of data on social
media which can be classified as informative and uninformative content. In this
paper, we present our work to detect informative Covid-19 English tweets using
a RoBERTa model as part of the W-NUT 2020 workshop. We show the efficacy of our
model on a public dataset with an F1-score of 0.89 on the validation dataset
and 0.87 on the leaderboard.
| 2,020 |
Computation and Language
|
On the Potential of Lexico-logical Alignments for Semantic Parsing to
SQL Queries
|
Large-scale semantic parsing datasets annotated with logical forms have
enabled major advances in supervised approaches. But can richer supervision
help even more? To explore the utility of fine-grained, lexical-level
supervision, we introduce Squall, a dataset that enriches 11,276
WikiTableQuestions English-language questions with manually created SQL
equivalents plus alignments between SQL and question fragments. Our annotation
enables new training possibilities for encoder-decoder models, including
approaches from machine translation previously precluded by the absence of
alignments. We propose and test two methods: (1) supervised attention; (2)
adopting an auxiliary objective of disambiguating references in the input
queries to table columns. In 5-fold cross validation, these strategies improve
over strong baselines by 4.4% execution accuracy. Oracle experiments suggest
that annotated alignments can support further accuracy gains of up to 23.9%.
| 2,020 |
Computation and Language
|
Improving Simultaneous Translation by Incorporating Pseudo-References
with Fewer Reorderings
|
Simultaneous translation is vastly different from full-sentence translation,
in the sense that it starts translation before the source sentence ends, with
only a few words delay. However, due to the lack of large-scale, high-quality
simultaneous translation datasets, most such systems are still trained on
conventional full-sentence bitexts. This is far from ideal for the simultaneous
scenario due to the abundance of unnecessary long-distance reorderings in those
bitexts. We propose a novel method that rewrites the target side of existing
full-sentence corpora into simultaneous-style translation. Experiments on
Zh->En and Ja->En simultaneous translation show substantial improvements (up to
+2.7 BLEU) with the addition of these generated pseudo-references.
| 2,021 |
Computation and Language
|
Clustering-based Inference for Biomedical Entity Linking
|
Due to the large number of entities in biomedical knowledge bases, only a small
fraction of entities have corresponding labelled training data. This
necessitates entity linking models which are able to link mentions of unseen
entities using learned representations of entities. Previous approaches link
each mention independently, ignoring the relationships within and across
documents between the entity mentions. These relations can be very useful for
linking mentions in biomedical text where linking decisions are often difficult
due to mentions having a generic or a highly specialized form. In this paper, we
introduce a model in which linking decisions can be made not merely by linking
to a knowledge base entity but also by grouping multiple mentions together via
clustering and jointly making linking predictions. In experiments on the
largest publicly available biomedical dataset, we improve the best independent
prediction for entity linking by 3.0 points of accuracy, and our
clustering-based inference model further improves entity linking by 2.3 points.
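A hedged sketch of the clustering-based inference idea: group mention embeddings, then link every mention in a cluster to the KB entity nearest the cluster centroid, so linking decisions are made jointly rather than independently. The random embeddings and the nearest-centroid rule are simplifications for illustration, not the paper's model.

```python
# Cluster mention embeddings, then assign each cluster a single KB entity jointly.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
mention_vecs = rng.normal(size=(6, 16))   # placeholder mention embeddings
entity_vecs = rng.normal(size=(4, 16))    # placeholder KB entity embeddings
entity_ids = ["E1", "E2", "E3", "E4"]

clusters = AgglomerativeClustering(n_clusters=3).fit_predict(mention_vecs)

links = {}
for c in np.unique(clusters):
    centroid = mention_vecs[clusters == c].mean(axis=0)
    nearest = int(np.argmin(np.linalg.norm(entity_vecs - centroid, axis=1)))
    for mention_idx in np.where(clusters == c)[0]:
        links[int(mention_idx)] = entity_ids[nearest]  # joint decision for the whole cluster

print(links)
```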
| 2,021 |
Computation and Language
|
Document-Level Relation Extraction with Adaptive Thresholding and
Localized Context Pooling
|
Document-level relation extraction (RE) poses new challenges compared to its
sentence-level counterpart. One document commonly contains multiple entity
pairs, and one entity pair occurs multiple times in the document associated
with multiple possible relations. In this paper, we propose two novel
techniques, adaptive thresholding and localized context pooling, to solve the
multi-label and multi-entity problems. The adaptive thresholding replaces the
global threshold for multi-label classification in the prior work with a
learnable entities-dependent threshold. The localized context pooling directly
transfers attention from pre-trained language models to locate relevant context
that is useful to decide the relation. We experiment on three document-level RE
benchmark datasets: DocRED, a recently released large-scale RE dataset, and two
datasets, CDR and GDA, in the biomedical domain. Our ATLOP (Adaptive Thresholding
and Localized cOntext Pooling) model achieves an F1 score of 63.4, and also
significantly outperforms existing models on both CDR and GDA.
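A simplified PyTorch sketch in the spirit of the adaptive-thresholding idea above: a dedicated threshold (TH) class is learned per entity pair, positive relations are pushed above it and negative relations below it. The formulation here is paraphrased for illustration and is not the authors' exact implementation.

```python
# Simplified adaptive-threshold loss with a reserved TH class per entity pair.
import torch
import torch.nn.functional as F

def adaptive_threshold_loss(logits, labels, th_index=0):
    """logits: [batch, num_classes] with column `th_index` reserved for the TH class.
    labels: multi-hot [batch, num_classes] with the TH column set to 0."""
    th_mask = torch.zeros_like(labels)
    th_mask[:, th_index] = 1.0

    # Part 1: each positive class should outscore TH (softmax over positives + TH).
    pos_mask = labels + th_mask
    pos_logits = logits.masked_fill(pos_mask == 0, -1e30)
    loss_pos = -(F.log_softmax(pos_logits, dim=-1) * labels).sum(dim=-1)

    # Part 2: TH should outscore every negative class (softmax over negatives + TH).
    neg_mask = 1 - labels
    neg_logits = logits.masked_fill(neg_mask == 0, -1e30)
    loss_neg = -(F.log_softmax(neg_logits, dim=-1) * th_mask).sum(dim=-1)

    return (loss_pos + loss_neg).mean()

logits = torch.randn(2, 5, requires_grad=True)
labels = torch.tensor([[0., 1., 0., 1., 0.], [0., 0., 1., 0., 0.]])
print(adaptive_threshold_loss(logits, labels))
```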
| 2,020 |
Computation and Language
|
Learning to Summarize Long Texts with Memory Compression and Transfer
|
We introduce Mem2Mem, a memory-to-memory mechanism for hierarchical recurrent
neural network based encoder decoder architectures and we explore its use for
abstractive document summarization. Mem2Mem transfers "memories" via
readable/writable external memory modules that augment both the encoder and
decoder. Our memory regularization compresses an encoded input article into a
more compact set of sentence representations. Most importantly, the memory
compression step performs implicit extraction without labels, sidestepping
issues with suboptimal ground-truth data and exposure bias of hybrid
extractive-abstractive summarization techniques. By allowing the decoder to
read/write over the encoded input memory, the model learns to read salient
information about the input article while keeping track of what has been
generated. Our Mem2Mem approach yields results that are competitive with state
of the art transformer based summarization methods, but with 16 times fewer
parameters.
| 2,020 |
Computation and Language
|
Probing and Fine-tuning Reading Comprehension Models for Few-shot Event
Extraction
|
We study the problem of event extraction from text data, which requires both
detecting target event types and their arguments. Typically, both the event
detection and argument detection subtasks are formulated as supervised sequence
labeling problems. We argue that the event extraction models so trained are
inherently label-hungry, and can generalize poorly across domains and text
genres. We propose a reading comprehension framework for event
extraction. Specifically, we formulate event detection as a textual entailment
prediction problem, and argument detection as a question answering problem. By
constructing proper query templates, our approach can effectively distill rich
knowledge about tasks and label semantics from pretrained reading comprehension
models. Moreover, our model can be fine-tuned with a small amount of data to
boost its performance. Our experiment results show that our method performs
strongly for zero-shot and few-shot event extraction, and it achieves
state-of-the-art performance on the ACE 2005 benchmark when trained with full
supervision.
| 2,020 |
Computation and Language
|
Linking Entities to Unseen Knowledge Bases with Arbitrary Schemas
|
In entity linking, mentions of named entities in raw text are disambiguated
against a knowledge base (KB). This work focuses on linking to unseen KBs that
do not have training data and whose schema is unknown during training. Our
approach relies on methods to flexibly convert entities from arbitrary KBs with
several attribute-value pairs into flat strings, which we use in conjunction
with state-of-the-art models for zero-shot linking. To improve the
generalization of our model, we use two regularization schemes based on
shuffling of entity attributes and handling of unseen attributes. Experiments
on English datasets where models are trained on the CoNLL dataset, and tested
on the TAC-KBP 2010 dataset show that our models outperform baseline models by
over 12 points of accuracy. Unlike prior work, our approach also allows for
seamlessly combining multiple training datasets. We test this ability by adding
both a completely different dataset (Wikia) and an increasing amount of
training data from the TAC-KBP 2010 training set. Our models perform favorably
across the board.
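An illustrative sketch of flattening an arbitrary-schema KB entry into a string for a zero-shot linker, with attribute shuffling as a simple regularizer along the lines described above. The field names are made-up examples, not a specific KB schema.

```python
# Flatten an attribute-value entity record into a string; optionally shuffle attributes.
import random

def flatten_entity(entity, shuffle=False, seed=None):
    """entity: dict of attribute -> value with arbitrary, possibly unseen keys."""
    items = list(entity.items())
    if shuffle:  # regularization: do not rely on a fixed attribute order
        random.Random(seed).shuffle(items)
    return " ; ".join(f"{attr} : {value}" for attr, value in items)

entity = {
    "name": "Marie Curie",
    "occupation": "physicist",
    "award": "Nobel Prize in Physics",
}
print(flatten_entity(entity))
print(flatten_entity(entity, shuffle=True, seed=1))
```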
| 2,020 |
Computation and Language
|
NADI 2020: The First Nuanced Arabic Dialect Identification Shared Task
|
We present the results and findings of the First Nuanced Arabic Dialect
Identification Shared Task (NADI). This Shared Task includes two subtasks:
country-level dialect identification (Subtask 1) and province-level sub-dialect
identification (Subtask 2). The data for the shared task covers a total of 100
provinces from 21 Arab countries and are collected from the Twitter domain. As
such, NADI is the first shared task to target naturally-occurring fine-grained
dialectal text at the sub-country level. A total of 61 teams from 25 countries
registered to participate in the tasks, thus reflecting the interest of the
community in this area. We received 47 submissions for Subtask 1 from 18 teams
and 9 submissions for Subtask 2 from 9 teams.
| 2,020 |
Computation and Language
|
A General Multi-Task Learning Framework to Leverage Text Data for Speech
to Text Tasks
|
Attention-based sequence-to-sequence modeling provides a powerful and elegant
solution for applications that need to map one sequence to a different
sequence. Its success heavily relies on the availability of large amounts of
training data. This presents a challenge for speech applications where labelled
speech data is very expensive to obtain, such as automatic speech recognition
(ASR) and speech translation (ST). In this study, we propose a general
multi-task learning framework to leverage text data for ASR and ST tasks. Two
auxiliary tasks, a denoising autoencoder task and machine translation task, are
proposed to be co-trained with ASR and ST tasks respectively. We demonstrate
that representing text input as phoneme sequences can reduce the difference
between speech and text inputs, and enhance the knowledge transfer from text
corpora to the speech to text tasks. Our experiments show that the proposed
method achieves a relative 10~15% word error rate reduction on the English
Librispeech task compared with our baseline, and improves the speech
translation quality on the MuST-C tasks by 3.6~9.2 BLEU.
| 2,021 |
Computation and Language
|
LSTM-LM with Long-Term History for First-Pass Decoding in Conversational
Speech Recognition
|
LSTM language models (LSTM-LMs) have been proven to be powerful and yielded
significant performance improvements over count based n-gram LMs in modern
speech recognition systems. Due to their infinite history states and
computational load, most previous studies focus on applying LSTM-LMs in the
second pass for rescoring purposes. Recent work shows that it is feasible and
computationally affordable to adopt the LSTM-LMs in the first-pass decoding
within a dynamic (or tree based) decoder framework. In this work, the LSTM-LM
is composed with a WFST decoder on-the-fly for the first-pass decoding.
Furthermore, motivated by the long-term history nature of LSTM-LMs, the use of
context beyond the current utterance is explored for the first-pass decoding in
conversational speech recognition. The context information is captured by the
hidden states of LSTM-LMs across utterances and can be used to guide the
first-pass search effectively. The experimental results in our internal meeting
transcription system show that significant performance improvements can be
obtained by incorporating the contextual information with LSTM-LMs in the
first-pass decoding, compared to applying the contextual information in the
second-pass rescoring.
| 2,020 |
Computation and Language
|
Latte-Mix: Measuring Sentence Semantic Similarity with Latent
Categorical Mixtures
|
Measuring sentence semantic similarity using pre-trained language models such
as BERT generally yields unsatisfactory zero-shot performance, and one main
reason is ineffective token aggregation methods such as mean pooling. In this
paper, we demonstrate under a Bayesian framework that distance between
primitive statistics, such as the mean of word embeddings, is fundamentally
flawed for capturing sentence-level semantic similarity. To remedy this issue,
we propose to learn a categorical variational autoencoder (VAE) based on
off-the-shelf pre-trained language models. We theoretically prove that
measuring the distance between the latent categorical mixtures, namely
Latte-Mix, can better reflect the true sentence semantic similarity. In
addition, our Bayesian framework provides explanations for why models finetuned
on labelled sentence pairs have better zero-shot performance. We also
empirically demonstrate that these finetuned models could be further improved
by Latte-Mix. Our method not only yields the state-of-the-art zero-shot
performance on semantic similarity datasets such as STS, but also enjoys the
benefits of fast training and a small memory footprint.
| 2,020 |
Computation and Language
|
Stronger Transformers for Neural Multi-Hop Question Generation
|
Prior work on automated question generation has almost exclusively focused on
generating simple questions whose answers can be extracted from a single
document. However, there is an increasing interest in developing systems that
are capable of more complex multi-hop question generation, where answering the
questions requires reasoning over multiple documents. In this work, we
introduce a series of strong transformer models for multi-hop question
generation, including a graph-augmented transformer that leverages relations
between entities in the text. While prior work has emphasized the importance of
graph-based models, we show that we can substantially outperform the
state-of-the-art by 5 BLEU points using a standard transformer architecture. We
further demonstrate that graph-based augmentations can provide complementary
improvements on top of this foundation. Interestingly, we find that several
important factors, such as the inclusion of an auxiliary contrastive objective
and data filtering, could have larger impacts on performance. We hope that our
stronger baselines and analysis provide a constructive foundation for future
work in this area.
| 2,020 |
Computation and Language
|
Exploit Multiple Reference Graphs for Semi-supervised Relation
Extraction
|
Manual annotation of the labeled data for relation extraction is
time-consuming and labor-intensive. Semi-supervised methods can help with this
problem and have attracted great research interest. Existing work
focuses on mapping the unlabeled samples to the classes to augment the labeled
dataset. However, it is hard to find an overall good mapping function,
especially for the samples with complicated syntactic components in one
sentence.
To tackle this limitation, we propose to build the connection between the
unlabeled data and the labeled ones rather than directly mapping the unlabeled
samples to the classes. Specifically, we first use three kinds of information
to construct reference graphs, including entity reference, verb reference, and
semantics reference. The goal is to semantically or lexically connect the
unlabeled sample(s) to the labeled one(s). Then, we develop a Multiple
Reference Graph (MRefG) model to exploit the reference information for better
recognizing high-quality unlabeled samples. The effectiveness of our method is
demonstrated by extensive comparison experiments with the state-of-the-art
baselines on two public datasets.
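A toy sketch of the entity-reference idea: connect an unlabeled sentence to labeled ones that mention the same entity pair, so high-quality unlabeled samples can be surfaced through their neighbors. The data and the simple overlap rule are illustrative, not the MRefG model itself.

```python
# Build entity-reference edges between unlabeled and labeled relation-extraction samples.
from collections import defaultdict

labeled = [
    {"id": "L1", "entities": ("Marie Curie", "Warsaw"), "relation": "born_in"},
    {"id": "L2", "entities": ("Paris", "France"), "relation": "located_in"},
]
unlabeled = [
    {"id": "U1", "entities": ("Marie Curie", "Warsaw")},
    {"id": "U2", "entities": ("Einstein", "Ulm")},
]

# entity-pair -> labeled sample ids (one kind of reference edge)
index = defaultdict(list)
for sample in labeled:
    index[frozenset(sample["entities"])].append(sample["id"])

edges = []
for sample in unlabeled:
    for labeled_id in index.get(frozenset(sample["entities"]), []):
        edges.append((sample["id"], labeled_id))

print(edges)  # U1 connects to L1; U2 stays isolated and would be treated with lower confidence
```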
| 2,020 |
Computation and Language
|