Titles | Abstracts | Years | Categories |
---|---|---|---|
"My Way of Telling a Story": Persona based Grounded Story Generation | Visual storytelling is the task of generating stories based on a sequence of
images. Inspired by the recent works in neural generation focusing on
controlling the form of text, this paper explores the idea of generating these
stories in different personas. However, one of the main challenges of
performing this task is the lack of a dataset of visual stories in different
personas. Having said that, there are independent datasets for both visual
storytelling and annotated sentences for various personas. In this paper, we
describe an approach to overcome this by getting labelled persona data from a
different task and leveraging those annotations to perform persona based story
generation. We inspect various ways of incorporating personality in both the
encoder and the decoder representations to steer the generation in the target
direction. To this end, we propose five models which are incremental extensions
to the baseline model to perform the task at hand. In our experiments we use
five different personas to guide the generation process. We find that the
models based on our hypotheses perform better at capturing words while
generating stories in the target persona.
| 2,019 | Computation and Language |
Principled Frameworks for Evaluating Ethics in NLP Systems | We critique recent work on ethics in natural language processing. Those
discussions have focused on data collection, experimental design, and
interventions in modeling. But we argue that we ought to first understand the
frameworks of ethics that are being used to evaluate the fairness and justice
of algorithmic systems. Here, we begin that discussion by outlining
deontological ethics, and envision a research agenda prioritized by it.
| 2,019 | Computation and Language |
Scalable Syntax-Aware Language Models Using Knowledge Distillation | Prior work has shown that, on small amounts of training data, syntactic
neural language models learn structurally sensitive generalisations more
successfully than sequential language models. However, their computational
complexity renders scaling difficult, and it remains an open question whether
structural biases are still necessary when sequential models have access to
ever larger amounts of training data. To answer this question, we introduce an
efficient knowledge distillation (KD) technique that transfers knowledge from a
syntactic language model trained on a small corpus to an LSTM language model,
hence enabling the LSTM to develop a more structurally sensitive representation
of the larger training data it learns from. On targeted syntactic evaluations,
we find that, while sequential LSTMs perform much better than previously
reported, our proposed technique substantially improves on this baseline,
yielding a new state of the art. Our findings and analysis affirm the
importance of structural biases, even in models that learn from large amounts
of data.
| 2,019 | Computation and Language |
Tagged Back-Translation | Recent work in Neural Machine Translation (NMT) has shown significant quality
gains from noised-beam decoding during back-translation, a method to generate
synthetic parallel data. We show that the main role of such synthetic noise is
not to diversify the source side, as previously suggested, but simply to
indicate to the model that the given source is synthetic. We propose a simpler
alternative to noising techniques, consisting of tagging back-translated source
sentences with an extra token. Our results on WMT outperform noised
back-translation in English-Romanian and match performance on English-German,
re-defining state-of-the-art in the former.
| 2,019 | Computation and Language |
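The tagging recipe above is simple enough to prototype directly; here is a minimal sketch in Python, assuming whitespace-tokenized synthetic source sentences and a hypothetical reserved token `<BT>` added to the source vocabulary (the token name is an assumption, not the paper's):

```python
def tag_back_translations(synthetic_sources, tag="<BT>"):
    """Prepend a reserved tag token to each back-translated source sentence.

    The tag signals to the NMT model that the source side is synthetic, which
    the abstract argues is the main benefit previously attributed to noising.
    """
    return [f"{tag} {sentence}" for sentence in synthetic_sources]


# Example: the tagged synthetic data would then typically be mixed with the
# genuine parallel data before training.
synthetic = ["ein synthetischer satz .", "noch ein beispiel ."]
print(tag_back_translations(synthetic))
# ['<BT> ein synthetischer satz .', '<BT> noch ein beispiel .']
```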
Can neural networks understand monotonicity reasoning? | Monotonicity reasoning is one of the important reasoning skills for any
intelligent natural language inference (NLI) model in that it requires the
ability to capture the interaction between lexical and syntactic structures.
Since no test set has been developed for monotonicity reasoning with wide
coverage, it is still unclear whether neural models can perform monotonicity
reasoning in a proper way. To investigate this issue, we introduce the
Monotonicity Entailment Dataset (MED). Performance by state-of-the-art NLI
models on the new test set is substantially worse, under 55%, especially on
downward reasoning. In addition, analysis using a monotonicity-driven data
augmentation method showed that these models might be limited in their
generalization ability in upward and downward reasoning.
| 2,019 | Computation and Language |
Action-Sensitive Phonological Dependencies | This paper defines a subregular class of functions called the tier-based
synchronized strictly local (TSSL) functions. These functions are similar to
the tier-based input-output strictly local (TIOSL) functions, except that
the locality condition is enforced not on the input and output streams, but on
the computation history of the minimal subsequential finite-state transducer.
We show that TSSL functions naturally describe rhythmic syncope while TIOSL
functions cannot, and we argue that TSSL functions provide a more restricted
characterization of rhythmic syncope than existing treatments within Optimality
Theory.
| 2,019 | Computation and Language |
Correlating Twitter Language with Community-Level Health Outcomes | We study how language on social media is linked to diseases such as
atherosclerotic heart disease (AHD), diabetes and various types of cancer. Our
proposed model leverages state-of-the-art sentence embeddings, followed by a
regression model and clustering, without the need of additional labelled data.
It allows us to predict community-level medical outcomes from language, and
thereby potentially translate these to the individual level. The method is
applicable to a wide range of target variables and allows us to discover known
and potentially novel correlations of medical outcomes with life-style aspects
and other socioeconomic risk factors.
| 2,019 | Computation and Language |
A Hierarchical Attention Based Seq2seq Model for Chinese Lyrics
Generation | In this paper, we comprehensively study context-aware generation of
Chinese song lyrics. Conventional text generative models generate a sequence or
sentence word by word, failing to consider the contextual relationship between
sentences. Taking into account the characteristics of lyrics, a hierarchical
attention based Seq2Seq (Sequence-to-Sequence) model is proposed for Chinese
lyrics generation. With encoding of word-level and sentence-level contextual
information, this model promotes the topic relevance and consistency of
generation. A large Chinese lyrics corpus is also leveraged for model training.
Eventually, results of automatic and human evaluations demonstrate that our
model is able to compose complete Chinese lyrics under a unified topic
constraint.
| 2,019 | Computation and Language |
A weakly supervised sequence tagging and grammar induction approach to
semantic frame slot filling | This paper describes continuing work on semantic frame slot filling for a
command and control task using a weakly-supervised approach. We investigate the
advantages of using retraining techniques that take the output of a
hierarchical hidden Markov model as input to two inductive approaches: (1)
discriminative sequence labelers based on conditional random fields and
memory-based learning and (2) probabilistic context-free grammar induction.
Experimental results show that this setup can significantly improve F-scores
without the need for additional information sources. Furthermore, qualitative
analysis shows that the weakly supervised technique is able to automatically
induce an easily interpretable and syntactically appropriate grammar for the
domain and task at hand.
| 2,019 | Computation and Language |
Towards Integration of Statistical Hypothesis Tests into Deep Neural
Networks | We report our ongoing work about a new deep architecture working in tandem
with a statistical test procedure for jointly training texts and their label
descriptions for multi-label and multi-class classification tasks. A
statistical hypothesis testing method is used to extract the most informative
words for each given class. These words are used as a class description for
more label-aware text classification. The intuition is to help the model to
concentrate on more informative words rather than more frequent ones. The model
leverages the use of label descriptions in addition to the input text to
enhance text classification performance. Our method is entirely data-driven,
has no dependency on other sources of information than the training data, and
is adaptable to different classification problems by providing appropriate
training data without major hyper-parameter tuning. We trained and tested our
system on several publicly available datasets, where we managed to improve the
state-of-the-art on one set by a large margin, and to obtain competitive
results on all other ones.
| 2,019 | Computation and Language |
Practical User Feedback-driven Internal Search Using Online Learning to
Rank | We present a system, Spoke, for creating and searching internal knowledge
base (KB) articles for organizations. Spoke is available as a SaaS
(Software-as-a-Service) product deployed across hundreds of organizations with
a diverse set of domains. Spoke continually improves search quality using
conversational user feedback, which allows it to provide a better search
experience than standard information retrieval systems without encoding any
explicit domain knowledge. We achieve this by using a real-time online
learning-to-rank (L2R) algorithm that automatically customizes relevance
scoring for each organization deploying Spoke by using a query similarity
kernel.
The focus of this paper is on incorporating practical considerations into our
relevance scoring function and algorithm that make Spoke easy to deploy and
suitable for handling events that naturally happen over the life-cycle of any
KB deployment. We show that Spoke outperforms competitive baselines by up to
41% in offline F1 comparisons.
| 2,019 | Computation and Language |
Context is Key: Grammatical Error Detection with Contextual Word
Representations | Grammatical error detection (GED) in non-native writing requires systems to
identify a wide range of errors in text written by language learners. Error
detection as a purely supervised task can be challenging, as GED datasets are
limited in size and the label distributions are highly imbalanced.
Contextualized word representations offer a possible solution, as they can
efficiently capture compositional information in language and can be optimized
on large amounts of unsupervised data. In this paper, we perform a systematic
comparison of ELMo, BERT and Flair embeddings (Peters et al., 2017; Devlin et
al., 2018; Akbik et al., 2018) on a range of public GED datasets, and propose
an approach to effectively integrate such representations in current methods,
achieving a new state of the art on GED. We further analyze the strengths and
weaknesses of different contextual embeddings for the task at hand, and present
detailed analyses of their impact on different types of errors.
| 2,019 | Computation and Language |
Multi-Hop Paragraph Retrieval for Open-Domain Question Answering | This paper is concerned with the task of multi-hop open-domain Question
Answering (QA). This task is particularly challenging since it requires the
simultaneous performance of textual reasoning and efficient searching. We
present a method for retrieving multiple supporting paragraphs, nested amidst a
large knowledge base, which contain the necessary evidence to answer a given
question. Our method iteratively retrieves supporting paragraphs by forming a
joint vector representation of both a question and a paragraph. The retrieval
is performed by considering contextualized sentence-level representations of
the paragraphs in the knowledge source. Our method achieves state-of-the-art
performance over two well-known datasets, SQuAD-Open and HotpotQA, which serve
as our single- and multi-hop open-domain QA benchmarks, respectively.
| 2,019 | Computation and Language |
Multi-Level Matching and Aggregation Network for Few-Shot Relation
Classification | This paper presents a multi-level matching and aggregation network (MLMAN)
for few-shot relation classification. Previous studies on this topic adopt
prototypical networks, which calculate the embedding vector of a query instance
and the prototype vector of each support set independently. In contrast, our
proposed MLMAN model encodes the query instance and each support set in an
interactive way by considering their matching information at both local and
instance levels. The final class prototype for each support set is obtained by
attentive aggregation over the representations of its support instances, where
the weights are calculated using the query instance. Experimental results
demonstrate the effectiveness of our proposed methods, which achieve a new
state-of-the-art performance on the FewRel dataset.
| 2,019 | Computation and Language |
Improving Background Based Conversation with Context-aware Knowledge
Pre-selection | Background Based Conversations (BBCs) have been developed to make dialogue
systems generate more informative and natural responses by leveraging
background knowledge. Existing methods for BBCs can be grouped into two
categories: extraction-based methods and generation-based methods. The former
extract spans from background material as responses that are not necessarily
natural. The latter generate responses that are natural but not necessarily
effective in leveraging background knowledge. In this paper, we focus on
generation-based methods and propose a model, namely Context-aware Knowledge
Pre-selection (CaKe), which introduces a pre-selection process that uses
dynamic bi-directional attention to improve knowledge selection by using the
utterance history context as prior information to select the most relevant
background material. Experimental results show that our model is superior to
current state-of-the-art baselines, indicating that it benefits from the
pre-selection process, thus improving informativeness and fluency.
| 2,019 | Computation and Language |
Using Automatically Extracted Minimum Spans to Disentangle Coreference
Evaluation from Boundary Detection | The common practice in coreference resolution is to identify and evaluate the
maximum span of mentions. The use of maximum spans tangles coreference
evaluation with the challenges of mention boundary detection like prepositional
phrase attachment. To address this problem, minimum spans are manually
annotated in smaller corpora. However, this additional annotation is costly and
therefore, this solution does not scale to large corpora. In this paper, we
propose the MINA algorithm for automatically extracting minimum spans to
benefit from minimum span evaluation in all corpora. We show that the minimum
spans extracted by MINA are consistent with those manually annotated by
experts. Our experiments show that using minimum spans is particularly
important in cross-dataset coreference evaluation, in which detected mention
boundaries are noisier due to domain shift. We will integrate MINA into
https://github.com/ns-moosavi/coval for reporting standard coreference scores
based on both maximum and automatically detected minimum spans.
| 2,019 | Computation and Language |
Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B | In this paper we propose a novel neural approach for automatic decipherment
of lost languages. To compensate for the lack of strong supervision signal, our
model design is informed by patterns in language change documented in
historical linguistics. The model utilizes an expressive sequence-to-sequence
model to capture character-level correspondences between cognates. To
effectively train the model in an unsupervised manner, we innovate the training
procedure by formalizing it as a minimum-cost flow problem. When applied to the
decipherment of Ugaritic, we achieve a 5.5% absolute improvement over
state-of-the-art results. We also report the first automatic results in
deciphering Linear B, a syllabic language related to ancient Greek, where our
model correctly translates 67.3% of cognates.
| 2,019 | Computation and Language |
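The minimum-cost flow formulation can be illustrated on a toy bipartite cognate-matching problem; the sketch below uses networkx, with made-up edge costs standing in for the model's character-level correspondence scores (node names and numbers are purely illustrative):

```python
import networkx as nx

# Toy costs: lower cost = better character-level correspondence between a
# lost-language word (ug_*) and a known-language candidate (heb_*). These
# numbers are illustrative, not model outputs.
costs = {("ug_1", "heb_1"): 1, ("ug_1", "heb_2"): 5,
         ("ug_2", "heb_1"): 4, ("ug_2", "heb_2"): 2}

G = nx.DiGraph()
for (src, tgt), cost in costs.items():
    G.add_edge(src, tgt, weight=cost, capacity=1)

# Each lost-language word must emit one unit of flow and each candidate can
# absorb at most one, so the cheapest flow is a (near) one-to-one matching.
for src in {s for s, _ in costs}:
    G.add_edge("source", src, weight=0, capacity=1)
for tgt in {t for _, t in costs}:
    G.add_edge(tgt, "sink", weight=0, capacity=1)
G.nodes["source"]["demand"] = -2  # supply two units in total
G.nodes["sink"]["demand"] = 2

flow = nx.min_cost_flow(G)
matches = [(s, t) for s, targets in flow.items() for t, f in targets.items()
           if f and s != "source" and t != "sink"]
print(matches)  # [('ug_1', 'heb_1'), ('ug_2', 'heb_2')]
```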
Persuasion for Good: Towards a Personalized Persuasive Dialogue System
for Social Good | Developing intelligent persuasive conversational agents to change people's
opinions and actions for social good is the frontier in advancing the ethical
development of automated dialogue systems. To do so, the first step is to
understand the intricate organization of strategic disclosures and appeals
employed in human persuasion conversations. We designed an online persuasion
task where one participant was asked to persuade the other to donate to a
specific charity. We collected a large dataset with 1,017 dialogues and
annotated emerging persuasion strategies from a subset. Based on the
annotation, we built a baseline classifier with context information and
sentence-level features to predict the 10 persuasion strategies used in the
corpus. Furthermore, to develop an understanding of personalized persuasion
processes, we analyzed the relationships between individuals' demographic and
psychological backgrounds including personality, morality, value systems, and
their willingness to donate. Then, we analyzed which types of persuasion
strategies led to a greater amount of donation depending on the individuals'
personal backgrounds. This work lays the groundwork for developing a personalized
persuasive dialogue system.
| 2,020 | Computation and Language |
Theoretical Limitations of Self-Attention in Neural Sequence Models | Transformers are emerging as the new workhorse of NLP, showing great success
across tasks. Unlike LSTMs, transformers process input sequences entirely
through self-attention. Previous work has suggested that the computational
capabilities of self-attention to process hierarchical structures are limited.
In this work, we mathematically investigate the computational power of
self-attention to model formal languages. Across both soft and hard attention,
we show strong theoretical limitations of the computational abilities of
self-attention, finding that it cannot model periodic finite-state languages,
nor hierarchical structure, unless the number of layers or heads increases with
input length. These limitations seem surprising given the practical success of
self-attention and the prominent role assigned to hierarchical structure in
linguistics, suggesting that natural language can be approximated well with
models that are too weak for the formal languages typically assumed in
theoretical linguistics.
| 2,021 | Computation and Language |
Robust Zero-Shot Cross-Domain Slot Filling with Example Values | Task-oriented dialog systems increasingly rely on deep learning-based slot
filling models, usually needing extensive labeled training data for target
domains. Often, however, little to no target domain training data may be
available, or the training and target domain schemas may be misaligned, as is
common for web forms on similar websites. Prior zero-shot slot filling models
use slot descriptions to learn concepts, but are not robust to misaligned
schemas. We propose utilizing both the slot description and a small number of
examples of slot values, which may be easily available, to learn semantic
representations of slots which are transferable across domains and robust to
misaligned schemas. Our approach outperforms state-of-the-art models on two
multi-domain datasets, especially in the low-data setting.
| 2,019 | Computation and Language |
Interconnected Question Generation with Coreference Alignment and
Conversation Flow Modeling | We study the problem of generating interconnected questions in
question-answering style conversations. Compared with previous works which
generate questions based on a single sentence (or paragraph), this setting is
different in two major aspects: (1) Questions are highly conversational. Almost
half of them refer back to conversation history using coreferences. (2) In a
coherent conversation, questions have smooth transitions between turns. We
propose an end-to-end neural model with coreference alignment and conversation
flow modeling. The coreference alignment modeling explicitly aligns coreferent
mentions in conversation history with corresponding pronominal references in
generated questions, which makes generated questions interconnected to
conversation history. The conversation flow modeling builds a coherent
conversation by starting questioning on the first few sentences in a text
passage and smoothly shifting the focus to later parts. Extensive experiments
show that our system outperforms several baselines and can generate highly
conversational questions. The code implementation is released at
https://github.com/Evan-Gao/conversational-QG.
| 2,019 | Computation and Language |
Manipulating the Difficulty of C-Tests | We propose two novel manipulation strategies for increasing and decreasing
the difficulty of C-tests automatically. This is a crucial step towards
generating learner-adaptive exercises for self-directed language learning and
preparing language assessment tests. To reach the desired difficulty level, we
manipulate the size and the distribution of gaps based on absolute and relative
gap difficulty predictions. We evaluate our approach in corpus-based
experiments and in a user study with 60 participants. We find that both
strategies are able to generate C-tests with the desired difficulty level.
| 2,019 | Computation and Language |
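As a rough illustration of how gap size and gap distribution drive C-test difficulty, the sketch below builds a gapped item directly from those two knobs; it is a simplification and does not use the absolute or relative difficulty predictions the abstract relies on:

```python
def make_c_test(sentence, gap_every=2, gap_ratio=0.5):
    """Create a C-test item by deleting the trailing part of selected words.

    gap_every controls the gap distribution (gap every n-th word) and
    gap_ratio controls the gap size; moving either knob is one simple way
    to push the item difficulty up or down.
    """
    words = sentence.split()
    gapped = []
    for i, word in enumerate(words, start=1):
        if i % gap_every == 0 and len(word) > 1:
            keep = max(1, int(len(word) * (1 - gap_ratio)))
            gapped.append(word[:keep] + "_" * (len(word) - keep))
        else:
            gapped.append(word)
    return " ".join(gapped)


print(make_c_test("language assessment tests can be generated automatically"))
# language asses_____ tests c__ be gene_____ automatically
```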
An Interactive Multi-Task Learning Network for End-to-End Aspect-Based
Sentiment Analysis | Aspect-based sentiment analysis produces a list of aspect terms and their
corresponding sentiments for a natural language sentence. This task is usually
done in a pipeline manner, with aspect term extraction performed first,
followed by sentiment predictions toward the extracted aspect terms. While
easier to develop, such an approach does not fully exploit joint information
from the two subtasks and does not use all available sources of training
information that might be helpful, such as a document-level labeled sentiment
corpus. In this paper, we propose an interactive multi-task learning network
(IMN) which is able to jointly learn multiple related tasks simultaneously at
both the token level as well as the document level. Unlike conventional
multi-task learning methods that rely on learning common features for the
different tasks, IMN introduces a message passing architecture where
information is iteratively passed to different tasks through a shared set of
latent variables. Experimental results demonstrate superior performance of the
proposed method against multiple baselines on three benchmark datasets.
| 2,019 | Computation and Language |
BERE: An accurate distantly supervised biomedical entity relation
extraction network | Automated entity relation extraction (RE) from literature provides an
important source for constructing biomedical databases, which is more efficient
and extensible than manual curation. However, existing RE models usually ignore
the information contained in sentence structures and target entities. In this
paper, we propose BERE, a deep learning based model which uses Gumbel Tree-GRU
to learn sentence structures and joint embedding to incorporate entity
information. It also employs word-level attention for improved relation
extraction and sentence-level attention to suit the distantly supervised
dataset. Because the existing datasets are relatively small, we further
construct a much larger drug-target interaction extraction (DTIE) dataset by
distant supervision. Experiments conducted on both DDIExtraction 2013 task and
DTIE dataset show our model's effectiveness over state-of-the-art baselines in
terms of F1 measures and PR curves.
| 2,019 | Computation and Language |
Recursive Style Breach Detection with Multifaceted Ensemble Learning | We present a supervised approach for style change detection, which aims at
predicting whether there are changes in the style in a given text document, as
well as at finding the exact positions where such changes occur. In particular,
we combine a TF.IDF representation of the document with features specifically
engineered for the task, and we make predictions via an ensemble of diverse
classifiers including SVM, Random Forest, AdaBoost, MLP, and LightGBM. Whenever
the model detects that style change is present, we apply it recursively,
looking to find the specific positions of the change. Our approach powered the
winning system for the PAN@CLEF 2018 task on Style Change Detection.
| 2,019 | Computation and Language |
Context-aware Embedding for Targeted Aspect-based Sentiment Analysis | Attention-based neural models were employed to detect the different aspects
and sentiment polarities of the same target in targeted aspect-based sentiment
analysis (TABSA). However, existing methods do not specifically pre-train
reasonable embeddings for targets and aspects in TABSA. This may result in
targets or aspects having the same vector representations in different contexts
and losing the context-dependent information. To address this problem, we
propose a novel method to refine the embeddings of targets and aspects. Such
pivotal embedding refinement utilizes a sparse coefficient vector to adjust the
embeddings of target and aspect from the context. Hence the embeddings of
targets and aspects can be refined from the highly correlative words instead of
using context-independent or randomly initialized vectors. Experimental results
on two benchmark datasets show that our approach yields state-of-the-art
performance on the TABSA task.
| 2,019 | Computation and Language |
Open Domain Event Extraction Using Neural Latent Variable Models | We consider open domain event extraction, the task of extracting unconstrained
types of events from news clusters. A novel latent variable neural model is
constructed, which is scalable to very large corpora. A dataset is collected and
manually annotated, with task-specific evaluation metrics being designed.
Results show that the proposed unsupervised model gives better performance
compared to the state-of-the-art method for event schema induction.
| 2,022 | Computation and Language |
Improving Multi-turn Dialogue Modelling with Utterance ReWriter | Recent research has made impressive progress in single-turn dialogue
modelling. In the multi-turn setting, however, current models are still far
from satisfactory. One major challenge is the frequent occurrence of coreference
and information omission in daily conversation, which makes it hard for machines
to understand the real intention. In this paper, we propose rewriting the human
utterance as a pre-processing step to help multi-turn dialogue modelling. Each
utterance is first rewritten to recover all coreferred and omitted information.
The next processing steps are then performed based on the rewritten utterance.
To properly train the utterance rewriter, we collect a new dataset with human
annotations and introduce a Transformer-based utterance rewriting architecture
using the pointer network. We show the proposed architecture achieves
remarkably good performance on the utterance rewriting task. The trained
utterance rewriter can be easily integrated into online chatbots and brings
general improvement over different domains.
| 2,019 | Computation and Language |
Attention-based Modeling for Emotion Detection and Classification in
Textual Conversations | This paper addresses the problem of modeling textual conversations and
detecting emotions. Our proposed model makes use of 1) deep transfer learning
rather than the classical shallow methods of word embedding; 2) self-attention
mechanisms to focus on the most important parts of the texts and 3) turn-based
conversational modeling for classifying the emotions. The approach does not
rely on any hand-crafted features or lexicons. Our model was evaluated on the
data provided by the SemEval-2019 shared task on contextual emotion detection
in text. The model shows very competitive results.
| 2,019 | Computation and Language |
Making Fast Graph-based Algorithms with Graph Metric Embeddings | The computation of distance measures between nodes in graphs is inefficient
and does not scale to large graphs. We explore dense vector representations as
an effective way to approximate the same information: we introduce a simple yet
efficient and effective approach for learning graph embeddings. Instead of
directly operating on the graph structure, our method takes structural measures
of pairwise node similarities into account and learns dense node
representations reflecting user-defined graph distance measures, such as
the shortest path distance or distance measures that take information
beyond the graph structure into account. We demonstrate a speed-up of several
orders of magnitude when predicting word similarity by vector operations on our
embeddings as opposed to directly computing the respective path-based measures,
while outperforming various other graph embeddings on semantic similarity and
word sense disambiguation tasks and show evaluations on the WordNet graph and
two knowledge base graphs.
| 2,019 | Computation and Language |
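The core idea of replacing path-based distance computations with fast vector operations can be demonstrated with off-the-shelf tools; in the sketch below, classical MDS fitted on a precomputed shortest-path matrix stands in for the paper's embedding learner (an assumption for illustration, not the authors' exact method):

```python
import networkx as nx
import numpy as np
from sklearn.manifold import MDS

# Small toy graph; in the paper's setting this would be e.g. the WordNet graph.
G = nx.karate_club_graph()
nodes = list(G.nodes())
dist = dict(nx.all_pairs_shortest_path_length(G))
D = np.array([[dist[u][v] for v in nodes] for u in nodes], dtype=float)

# Learn dense node vectors whose pairwise Euclidean distances approximate the
# shortest-path matrix.
emb = MDS(n_components=16, dissimilarity="precomputed",
          random_state=0).fit_transform(D)

# A distance query is now a cheap vector operation instead of a graph search.
i, j = nodes.index(0), nodes.index(33)
print(D[i, j], np.linalg.norm(emb[i] - emb[j]))
```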
Adversarial Training for Multilingual Acoustic Modeling | Multilingual training has been shown to improve acoustic modeling performance
by sharing and transferring knowledge in modeling different languages.
Knowledge sharing is usually achieved by using common lower-level layers for
different languages in a deep neural network. Recently, the domain adversarial
network was proposed to reduce domain mismatch of training data and learn
domain-invariant features. It is thus worth exploring whether adversarial
training can further promote knowledge sharing in multilingual models. In this
work, we apply the domain adversarial network to encourage the shared layers of
a multilingual model to learn language-invariant features. Bidirectional Long
Short-Term Memory (LSTM) recurrent neural networks (RNN) are used as building
blocks. We show that shared layers learned this way contain less language
identification information and lead to better performance. In an automatic
speech recognition task for seven languages, the resultant acoustic model
improves the word error rate (WER) of the multilingual model by 4% relative on
average, and the monolingual models by 10%.
| 2,019 | Computation and Language |
Coupling Retrieval and Meta-Learning for Context-Dependent Semantic
Parsing | In this paper, we present an approach to incorporate retrieved datapoints as
supporting evidence for context-dependent semantic parsing, such as generating
source code conditioned on the class environment. Our approach naturally
combines a retrieval model and a meta-learner, where the former learns to find
similar datapoints from the training data, and the latter considers retrieved
datapoints as a pseudo task for fast adaptation. Specifically, our retriever is
a context-aware encoder-decoder model with a latent variable which takes
context environment into consideration, and our meta-learner learns to utilize
retrieved datapoints in a model-agnostic meta-learning paradigm for fast
adaptation. We conduct experiments on CONCODE and CSQA datasets, where the
context refers to the class environment in Java code and conversational history,
respectively. We use a sequence-to-action model as the base semantic parser,
which achieves state-of-the-art accuracy on both datasets. Results show
that both the context-aware retriever and the meta-learning strategy improve
accuracy, and our approach performs better than retrieve-and-edit baselines.
| 2,019 | Computation and Language |
Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and
Model Development for Multi-Hop QA | Multi-hop question answering requires a model to connect multiple pieces of
evidence scattered in a long context to answer the question. In this paper, we
show that in the multi-hop HotpotQA (Yang et al., 2018) dataset, the examples
often contain reasoning shortcuts through which models can directly locate the
answer by word-matching the question with a sentence in the context. We
demonstrate this issue by constructing adversarial documents that create
contradicting answers to the shortcut but do not affect the validity of the
original answer. The performance of strong baseline models drops significantly
on our adversarial evaluation, indicating that they are indeed exploiting the
shortcuts rather than performing multi-hop reasoning. After adversarial
training, the baseline's performance improves but is still limited on the
adversarial evaluation. Hence, we use a control unit that dynamically attends
to the question at different reasoning hops to guide the model's multi-hop
reasoning. We show that this 2-hop model trained on the regular data is more
robust to the adversaries than the baseline model. After adversarial training,
this 2-hop model not only achieves improvements over its counterpart trained on
regular data, but also outperforms the adversarially-trained 1-hop baseline. We
hope that these insights and initial improvements will motivate the development
of new models that combine explicit compositional reasoning with adversarial
training.
| 2,019 | Computation and Language |
Finding Your Voice: The Linguistic Development of Mental Health
Counselors | Mental health counseling is an enterprise with profound societal importance
where conversations play a primary role. In order to acquire the conversational
skills needed to face a challenging range of situations, mental health
counselors must rely on training and on continued experience with actual
clients. However, in the absence of large scale longitudinal studies, the
nature and significance of this developmental process remain unclear. For
example, prior literature suggests that experience might not translate into
consequential changes in counselor behavior. This has led some to even argue
that counseling is a profession without expertise.
In this work, we develop a computational framework to quantify the extent to
which individuals change their linguistic behavior with experience and to study
the nature of this evolution. We use our framework to conduct a large
longitudinal study of mental health counseling conversations, tracking over
3,400 counselors across their tenure. We reveal that overall, counselors do
indeed change their conversational behavior to become more diverse across
interactions, developing an individual voice that distinguishes them from other
counselors. Furthermore, a finer-grained investigation shows that the rate and
nature of this diversification vary across functionally different
conversational components.
| 2,019 | Computation and Language |
Constrained Decoding for Neural NLG from Compositional Representations
in Task-Oriented Dialogue | Generating fluent natural language responses from structured semantic
representations is a critical step in task-oriented conversational systems.
Avenues like the E2E NLG Challenge have encouraged the development of neural
approaches, particularly sequence-to-sequence (Seq2Seq) models for this
problem. The semantic representations used, however, are often underspecified,
which places a higher burden on the generation model for sentence planning, and
also limits the extent to which generated responses can be controlled in a live
system. In this paper, we (1) propose using tree-structured semantic
representations, like those used in traditional rule-based NLG systems, for
better discourse-level structuring and sentence-level planning; (2) introduce a
challenging dataset using this representation for the weather domain; (3)
introduce a constrained decoding approach for Seq2Seq models that leverages
this representation to improve semantic correctness; and (4) demonstrate
promising results on our dataset and the E2E dataset.
| 2,019 | Computation and Language |
Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language
Modeling | Modeling human language requires the ability to not only generate fluent text
but also encode factual knowledge. However, traditional language models are
only capable of remembering facts seen at training time, and often have
difficulty recalling them. To address this, we introduce the knowledge graph
language model (KGLM), a neural language model with mechanisms for selecting
and copying facts from a knowledge graph that are relevant to the context.
These mechanisms enable the model to render information it has never seen
before, as well as generate out-of-vocabulary tokens. We also introduce the
Linked WikiText-2 dataset, a corpus of annotated text aligned to the Wikidata
knowledge graph whose contents (roughly) match the popular WikiText-2
benchmark. In experiments, we demonstrate that the KGLM achieves significantly
better performance than a strong baseline language model. We additionally
compare different language models' ability to complete sentences requiring
factual knowledge, showing that the KGLM outperforms even very large language
models in generating facts.
| 2,019 | Computation and Language |
A Structured Distributional Model of Sentence Meaning and Processing | Most compositional distributional semantic models represent sentence meaning
with a single vector. In this paper, we propose a Structured Distributional
Model (SDM) that combines word embeddings with formal semantics and is based on
the assumption that sentences represent events and situations. The semantic
representation of a sentence is a formal structure derived from Discourse
Representation Theory and containing distributional vectors. This structure is
dynamically and incrementally built by integrating knowledge about events and
their typical participants, as they are activated by lexical items. Event
knowledge is modeled as a graph extracted from parsed corpora and encoding
roles and relationships between participants that are represented as
distributional vectors. SDM is grounded on extensive psycholinguistic research
showing that generalized knowledge about events stored in semantic memory plays
a key role in sentence comprehension. We evaluate SDM on two recently
introduced compositionality datasets, and our results show that combining a
simple compositional model with event knowledge consistently improves
performance, even with different types of word embeddings.
| 2,019 | Computation and Language |
Tabula nearly rasa: Probing the Linguistic Knowledge of Character-Level
Neural Language Models Trained on Unsegmented Text | Recurrent neural networks (RNNs) have reached striking performance in many
natural language processing tasks. This has renewed interest in whether these
generic sequence processing devices are inducing genuine linguistic knowledge.
Nearly all current analytical studies, however, initialize the RNNs with a
vocabulary of known words, and feed them tokenized input during training. We
present a multi-lingual study of the linguistic knowledge encoded in RNNs
trained as character-level language models, on input data with word boundaries
removed. These networks face a tougher and more cognitively realistic task,
having to discover any useful linguistic unit from scratch based on input
statistics. The results show that our "near tabula rasa" RNNs are mostly able
to solve morphological, syntactic and semantic tasks that intuitively
presuppose word-level knowledge, and indeed they learned, to some extent, to
track word boundaries. Our study opens the door to speculations about the
necessity of an explicit, rigid word lexicon in language learning and usage.
| 2,019 | Computation and Language |
Generalizing Back-Translation in Neural Machine Translation | Back-translation - data augmentation by translating target monolingual data -
is a crucial component in modern neural machine translation (NMT). In this
work, we reformulate back-translation in the scope of cross-entropy
optimization of an NMT model, clarifying its underlying mathematical
assumptions and approximations beyond its heuristic usage. Our formulation
covers broader synthetic data generation schemes, including sampling from a
target-to-source NMT model. With this formulation, we point out fundamental
problems of the sampling-based approaches and propose to remedy them by (i)
disabling label smoothing for the target-to-source model and (ii) sampling from
a restricted search space. Our statements are investigated on the WMT 2018
German - English news translation task.
| 2,019 | Computation and Language |
Towards Transfer Learning for End-to-End Speech Synthesis from Deep
Pre-Trained Language Models | Modern text-to-speech (TTS) systems are able to generate audio that sounds
almost as natural as human speech. However, the bar of developing high-quality
TTS systems remains high since a sizable set of studio-quality <text, audio>
pairs is usually required. Compared to commercial data used to develop
state-of-the-art systems, publicly available data are usually worse in terms of
both quality and size. Audio generated by TTS systems trained on publicly
available data tends to not only sound less natural, but also exhibits more
background noise. In this work, we aim to lower TTS systems' reliance on
high-quality data by providing them the textual knowledge extracted by deep
pre-trained language models during training. In particular, we investigate the
use of BERT to assist the training of Tacotron-2, a state-of-the-art TTS system
consisting of an encoder and an attention-based decoder. BERT representations
learned from large amounts of unlabeled text data are shown to contain very
rich semantic and syntactic information about the input text, and have
potential to be leveraged by a TTS system to compensate for the lack of
high-quality data. We incorporate BERT as a parallel branch to the Tacotron-2
encoder with its own attention head. For an input text, it is simultaneously
passed into BERT and the Tacotron-2 encoder. The representations extracted by
the two branches are concatenated and then fed to the decoder. As a preliminary
study, although we have not found that incorporating BERT into Tacotron-2
generates more natural or cleaner speech at a human-perceivable level, we
observe improvements in other aspects: the model is significantly better at
knowing when to stop decoding, so there is much less babbling at the end of the
synthesized audio, and training converges faster.
| 2,019 | Computation and Language |
Measuring Bias in Contextualized Word Representations | Contextual word embeddings such as BERT have achieved state of the art
performance in numerous NLP tasks. Since they are optimized to capture the
statistical properties of training data, they tend to pick up on and amplify
social stereotypes present in the data as well. In this study, we (1)~propose a
template-based method to quantify bias in BERT; (2)~show that this method
obtains more consistent results in capturing social biases than the traditional
cosine based method; and (3)~conduct a case study, evaluating gender bias in a
downstream task of Gender Pronoun Resolution. Although our case study focuses
on gender bias, the proposed technique is generalizable to unveiling other
biases, including in multiclass settings, such as racial and religious biases.
| 2,019 | Computation and Language |
Zero-Shot Entity Linking by Reading Entity Descriptions | We present the zero-shot entity linking task, where mentions must be linked
to unseen entities without in-domain labeled data. The goal is to enable robust
transfer to highly specialized domains, and so no metadata or alias tables are
assumed. In this setting, entities are only identified by text descriptions,
and models must rely strictly on language understanding to resolve the new
entities. First, we show that strong reading comprehension models pre-trained
on large unlabeled data can be used to generalize to unseen entities. Second,
we propose a simple and effective adaptive pre-training strategy, which we term
domain-adaptive pre-training (DAP), to address the domain shift problem
associated with linking unseen entities in a new domain. We present experiments
on a new dataset that we construct for this task and show that DAP improves
over strong pre-training baselines, including BERT. The data and code are
available at https://github.com/lajanugen/zeshel.
| 2,019 | Computation and Language |
Curriculum Learning Strategies for Hindi-English Codemixed Sentiment
Analysis | Sentiment Analysis and other semantic tasks are commonly used for social
media textual analysis to gauge public opinion and make sense of the noise on
social media. The language used on social media not only commonly diverges from
the formal language, but is compounded by codemixing between languages,
especially in large multilingual societies like India.
Traditional methods for learning semantic NLP tasks have long relied on
end-to-end task-specific training, which requires an expensive data creation
process, even more so for deep learning methods. This challenge is even more
severe for resource-scarce texts such as codemixed language pairs, which lack
well-learnt representations as model priors and whose task-specific datasets
are often too few and too small to exploit recent deep learning approaches
efficiently. To address these challenges, we introduce curriculum learning
strategies for
semantic tasks in code-mixed Hindi-English (Hi-En) texts, and investigate
various training strategies for enhancing model performance. Our method
outperforms the state of the art methods for Hi-En codemixed sentiment analysis
by 3.31% accuracy, and also shows better model robustness in terms of
convergence, and variance in test performance.
| 2,019 | Computation and Language |
Uncovering Probabilistic Implications in Typological Knowledge Bases | The study of linguistic typology is rooted in the implications we find
between linguistic features, such as the fact that languages with object-verb
word ordering tend to have post-positions. Uncovering such implications
typically amounts to time-consuming manual processing by trained and
experienced linguists, which potentially leaves key linguistic universals
unexplored. In this paper, we present a computational model which successfully
identifies known universals, including Greenberg universals, but also uncovers
new ones, worthy of further linguistic investigation. Our approach outperforms
baselines previously used for this problem, as well as a strong baseline from
knowledge base population.
| 2,019 | Computation and Language |
Modeling Semantic Relationship in Multi-turn Conversations with
Hierarchical Latent Variables | Multi-turn conversations consist of complex semantic structures, and it is
still a challenge to generate coherent and diverse responses given previous
utterances. In practice, a conversation takes place against a background;
meanwhile, the query and its response are usually the most closely related
utterances, consistent in topic yet different in content. However, little work
focuses on this hierarchical relationship among utterances. To address this problem, we
propose a Conversational Semantic Relationship RNN (CSRR) model to construct
the dependency explicitly. The model contains latent variables in three
hierarchies. The discourse-level one captures the global background, the
pair-level one stands for the common topic information between query and
response, and the utterance-level ones try to represent differences in content.
Experimental results show that our model significantly improves the quality of
responses in terms of fluency, coherence and diversity compared to baseline
methods.
| 2,019 | Computation and Language |
Attention Guided Graph Convolutional Networks for Relation Extraction | Dependency trees convey rich structural information that is proven useful for
extracting relations among entities in text. However, how to effectively make
use of relevant information while ignoring irrelevant information from the
dependency trees remains a challenging research question. Existing approaches
employing rule based hard-pruning strategies for selecting relevant partial
dependency structures may not always yield optimal results. In this work, we
propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model
which directly takes full dependency trees as inputs. Our model can be
understood as a soft-pruning approach that automatically learns how to
selectively attend to the relevant sub-structures useful for the relation
extraction task. Extensive results on various tasks including cross-sentence
n-ary relation extraction and large-scale sentence-level relation extraction
show that our model is able to better leverage the structural information of
the full dependency trees, giving significantly better results than previous
approaches.
| 2,020 | Computation and Language |
Multi-Graph Decoding for Code-Switching ASR | In the FAME! Project, a code-switching (CS) automatic speech recognition
(ASR) system for Frisian-Dutch speech is developed that can accurately
transcribe the local broadcaster's bilingual archives with CS speech. This
archive contains recordings with monolingual Frisian and Dutch speech segments
as well as Frisian-Dutch CS speech, hence the recognition performance on
monolingual segments is also vital for accurate transcriptions. In this work,
we propose a multi-graph decoding and rescoring strategy using bilingual and
monolingual graphs together with a unified acoustic model for CS ASR. The
proposed decoding scheme gives the freedom to design and employ alternative
search spaces for each (monolingual or bilingual) recognition task and enables
the effective use of monolingual resources of the high-resourced mixed language
in low-resourced CS scenarios. In our scenario, Dutch is the high-resourced and
Frisian is the low-resourced language. We therefore use additional monolingual
Dutch text resources to improve the Dutch language model (LM) and compare the
performance of single- and multi-graph CS ASR systems on Dutch segments using
larger Dutch LMs. The ASR results show that the proposed approach outperforms
baseline single-graph CS ASR systems, providing better performance on the
monolingual Dutch segments without any accuracy loss on monolingual Frisian and
code-mixed segments.
| 2,019 | Computation and Language |
Mimicking Human Process: Text Representation via Latent Semantic
Clustering for Classification | Considering that words with different characteristics in the text have
different importance for classification, grouping them separately can
strengthen the semantic expression of each part. Thus we propose a new text
representation scheme by clustering words according to their latent semantics
and composing them together to get a set of cluster vectors, which are then
concatenated as the final text representation. Evaluation on five
classification benchmarks proves the effectiveness of our method. We further
conduct visualization analysis showing statistical clustering results and
verifying the validity of our motivation.
| 2,019 | Computation and Language |
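A minimal version of the cluster-then-concatenate representation can be put together with scikit-learn; the random stand-in word embeddings and the cluster count below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_text_representation(word_vectors, n_clusters=4):
    """Cluster a text's word vectors by latent semantics, then concatenate one
    mean vector per cluster to form the final text representation."""
    dim = word_vectors.shape[1]
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(word_vectors)
    cluster_vecs = []
    for c in range(n_clusters):
        members = word_vectors[labels == c]
        cluster_vecs.append(members.mean(axis=0) if len(members) else np.zeros(dim))
    return np.concatenate(cluster_vecs)  # shape: (n_clusters * dim,)

# Usage with random stand-in embeddings for a 12-word text.
rng = np.random.default_rng(0)
print(cluster_text_representation(rng.normal(size=(12, 50))).shape)  # (200,)
```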
Transfer Learning for Causal Sentence Detection | We consider the task of detecting sentences that express causality, as a step
towards mining causal relations from texts. To bypass the scarcity of causal
instances in relation extraction datasets, we exploit transfer learning, namely
ELMO and BERT, using a bidirectional GRU with self-attention (BIGRUATT) as a
baseline. We experiment with both generic public relation extraction datasets
and a new biomedical causal sentence detection dataset, a subset of which we
make publicly available. We find that transfer learning helps only in very
small datasets. With larger datasets, BIGRUATT reaches a performance plateau,
beyond which larger datasets and transfer learning do not help.
| 2,019 | Computation and Language |
Automatic learner summary assessment for reading comprehension | Automating the assessment of learner summaries provides a useful tool for
assessing learner reading comprehension. We present a summarization task for
evaluating non-native reading comprehension and propose three novel approaches
to automatically assess the learner summaries. We evaluate our models on two
datasets we created and show that our models outperform traditional approaches
that rely on exact word match on this task. Our best model produces quality
assessments close to professional examiners.
| 2,019 | Computation and Language |
Hyperintensional Reasoning based on Natural Language Knowledge Base | The success of automated reasoning techniques over large natural-language
texts heavily relies on a fine-grained analysis of natural language
assumptions. While there is a common agreement that the analysis should be
hyperintensional, most of the automatic reasoning systems are still based on an
intensional logic, at best. In this paper, we introduce a system of
reasoning based on a fine-grained, hyperintensional analysis. To this end we
apply Tichy's Transparent Intensional Logic (TIL) with its procedural
semantics. TIL is a higher-order, hyperintensional logic of partial functions,
in particular apt for a fine-grained natural-language analysis. Within TIL we
recognise three kinds of context, namely extensional, intensional and
hyperintensional, in which a particular natural-language term, or rather its
meaning, can occur. Having defined the three kinds of context and implemented
an algorithm of context recognition, we are in a position to develop and
implement an extensional logic of hyperintensions with the inference machine
that should neither over-infer nor under-infer.
| 2,019 | Computation and Language |
Text Readability Assessment for Second Language Learners | This paper addresses the task of readability assessment for the texts aimed
at second language (L2) learners. One of the major challenges in this task is
the lack of significantly sized level-annotated data. For the present work, we
collected a dataset of CEFR-graded texts tailored for learners of English as an
L2 and investigated text readability assessment for both native and L2
learners. We applied a generalization method to adapt models trained on larger
native corpora to estimate text readability for learners, and explored domain
adaptation and self-learning techniques to make use of the native data to
improve system performance on the limited L2 data. In our experiments, the best
performing model for readability on learner texts achieves an accuracy of 0.797
and a PCC of 0.938.
| 2,019 | Computation and Language |
Towards Robust Named Entity Recognition for Historic German | Recent advances in language modeling using deep neural networks have shown
that these models learn representations that vary with network depth, from
morphology to semantic relationships like co-reference. We apply pre-trained
language models to low-resource named entity recognition for Historic German.
We show on a series of experiments that character-based pre-trained language
models do not run into trouble when faced with low-resource datasets. Our
pre-trained character-based language models improve upon classical CRF-based
methods and previous work on Bi-LSTMs by boosting F1 score performance by up to
6%. Our pre-trained language and NER models are publicly available under
https://github.com/stefan-it/historic-ner .
| 2,019 | Computation and Language |
LTG-Oslo Hierarchical Multi-task Network: The importance of negation for
document-level sentiment in Spanish | This paper details LTG-Oslo team's participation in the sentiment track of
the NEGES 2019 evaluation campaign. We participated in the task with a
hierarchical multi-task network, which used shared lower-layers in a deep
BiLSTM to predict negation, while the higher layers were dedicated to
predicting document-level sentiment. The multi-task component shows promise as
a way to incorporate information on negation into deep neural sentiment
classifiers, despite the fact that the absolute results on the test set were
relatively low for a binary classification task.
| 2,019 | Computation and Language |
Curriculum-based transfer learning for an effective end-to-end spoken
language understanding and domain portability | We present an end-to-end approach to extract semantic concepts directly from
the speech audio signal. To overcome the lack of data available for this spoken
language understanding approach, we investigate the use of a transfer learning
strategy based on the principles of curriculum learning. This approach allows
us to exploit out-of-domain data that can help to prepare a fully neural
architecture. Experiments are carried out on the French MEDIA and PORTMEDIA
corpora and show that this end-to-end SLU approach reaches the best results
ever published on this task. We compare our approach to a classical pipeline
approach that uses ASR, POS tagging, a lemmatizer, a chunker and other NLP tools
that aim to enrich the ASR outputs that feed an SLU text-to-concepts system. Last,
we explore the promising capacity of our end-to-end SLU approach to address the
problem of domain portability.
| 2,019 | Computation and Language |
Improving Sentiment Analysis with Multi-task Learning of Negation | Sentiment analysis is directly affected by compositional phenomena in
language that act on the prior polarity of the words and phrases found in the
text. Negation is the most prevalent of these phenomena and in order to
correctly predict sentiment, a classifier must be able to identify negation and
disentangle the effect that its scope has on the final polarity of a text. This
paper proposes a multi-task approach to explicitly incorporate information
about negation in sentiment analysis, which we show outperforms learning
negation implicitly in a data-driven manner. We describe our approach, a
cascading neural architecture with selective sharing of LSTM layers, and show
that explicitly training the model with negation as an auxiliary task helps
improve the main task of sentiment analysis. The effect is demonstrated across
several different standard English-language data sets for both tasks and we
analyze several aspects of our system related to its performance, varying types
and amounts of input data and different multi-task setups.
| 2,021 | Computation and Language |
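A minimal sketch of the kind of cascading multi-task setup described in the abstract above: a shared lower BiLSTM feeds a token-level negation head, while a higher BiLSTM feeds a document-level sentiment head. Layer sizes, names, and the auxiliary loss weight are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class CascadedNegationSentiment(nn.Module):
    """Toy cascading multi-task model: lower layers are shared with the
    auxiliary negation task, upper layers are dedicated to sentiment."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128, n_sentiment=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lower = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.upper = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        self.negation_head = nn.Linear(2 * hidden, 2)      # in/out of negation scope
        self.sentiment_head = nn.Linear(2 * hidden, n_sentiment)

    def forward(self, token_ids):
        x = self.embed(token_ids)                 # (batch, seq, emb)
        shared, _ = self.lower(x)                 # shared with the auxiliary task
        neg_logits = self.negation_head(shared)   # per-token negation logits
        upper_out, _ = self.upper(shared)
        doc_repr = upper_out.mean(dim=1)          # simple pooling over tokens
        sent_logits = self.sentiment_head(doc_repr)
        return neg_logits, sent_logits

# Joint objective: main sentiment loss plus a weighted auxiliary negation loss.
model = CascadedNegationSentiment(vocab_size=10000)
tokens = torch.randint(0, 10000, (4, 20))
neg_gold = torch.randint(0, 2, (4, 20))
sent_gold = torch.randint(0, 2, (4,))
neg_logits, sent_logits = model(tokens)
loss = nn.functional.cross_entropy(sent_logits, sent_gold) \
     + 0.5 * nn.functional.cross_entropy(neg_logits.reshape(-1, 2), neg_gold.reshape(-1))
```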
Scheduled Sampling for Transformers | Scheduled sampling is a technique for avoiding one of the known problems in
sequence-to-sequence generation: exposure bias. It consists of feeding the
model a mix of the teacher-forced embeddings and the model predictions from the
previous step at training time. The technique has been used for improving the
model performance with recurrent neural networks (RNN). In the Transformer
model, unlike the RNN, the generation of a new word attends to the full
sentence generated so far, not only to the last word, and it is not
straightforward to apply the scheduled sampling technique. We propose some
structural changes to allow scheduled sampling to be applied to Transformer
architecture, via a two-pass decoding strategy. Experiments on two language
pairs achieve performance close to a teacher-forcing baseline and show that
this technique is promising for further exploration.
| 2,019 | Computation and Language |
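The two-pass idea in the abstract above can be illustrated in a few lines: a first teacher-forced pass produces predictions, which are mixed with the gold tokens and decoded again. The `decoder` callable is a stand-in for any Transformer decoder, and the mixing rule is an illustrative assumption.

```python
import torch

def scheduled_sampling_two_pass(decoder, src_memory, gold_tokens, mix_prob):
    """Toy two-pass decoding: mix gold tokens and first-pass predictions, then
    decode again. `decoder(inputs, memory)` is assumed to return logits of
    shape (batch, seq, vocab)."""
    # Pass 1: standard teacher forcing on the gold sequence.
    first_logits = decoder(gold_tokens, src_memory)
    predicted = first_logits.argmax(dim=-1)

    # Per position, replace the gold token by the model's own first-pass
    # prediction with probability `mix_prob`.
    keep_gold = torch.rand_like(gold_tokens, dtype=torch.float) > mix_prob
    mixed = torch.where(keep_gold, gold_tokens, predicted)

    # Pass 2: the training loss would be computed on this second pass only.
    return decoder(mixed, src_memory)

# Example with a dummy decoder that ignores the encoder memory.
vocab = 32
dummy = lambda inp, mem: torch.randn(inp.size(0), inp.size(1), vocab)
logits = scheduled_sampling_two_pass(dummy, None, torch.randint(0, vocab, (2, 7)), mix_prob=0.25)
```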
State-of-the-Art Vietnamese Word Segmentation | Word segmentation is the first step of any task in Vietnamese language
processing. This paper reviews state-of-the-art approaches and systems for word
segmentation in Vietnamese. To give an overview of all stages, from building
corpora to developing toolkits, we discuss the corpus-building stage, the
approaches applied to word segmentation, and the existing toolkits for
segmenting words in Vietnamese sentences. In addition, this study clearly
presents the motivations for building corpora and for applying machine learning
techniques to improve the accuracy of Vietnamese word segmentation. Based on our
observations, this study also reports several achievements and limitations of
existing Vietnamese word segmentation systems.
| 2,019 | Computation and Language |
Yoga-Veganism: Correlation Mining of Twitter Health Data | Nowadays social media is a huge platform of data. People usually share their
interests and thoughts via discussions, tweets, and status updates. It is not possible to go
through all the data manually. We need to mine the data to explore hidden
patterns or unknown correlations, find out the dominant topic in data and
understand people's interest through the discussions. In this work, we explore
Twitter data related to health. We extract the popular topics under different
categories (e.g. diet, exercise) discussed in Twitter via topic modeling,
observe model behavior on new tweets, and discover an interesting correlation
(i.e., Yoga-Veganism). We evaluate accuracy by comparing against manually
annotated ground truth for both training and test data.
| 2,020 | Computation and Language |
Expressing Visual Relationships via Language | Describing images with text is a fundamental problem in vision-language
research. Current studies in this domain mostly focus on single image
captioning. However, in various real applications (e.g., image editing,
difference interpretation, and retrieval), generating relational captions for
two images can also be very useful. This important problem has not been
explored, mostly due to the lack of datasets and effective models. To push forward
the research in this direction, we first introduce a new language-guided image
editing dataset that contains a large number of real image pairs with
corresponding editing instructions. We then propose a new relational speaker
model based on an encoder-decoder architecture with static relational attention
and sequential multi-head attention. We also extend the model with dynamic
relational attention, which calculates visual alignment while decoding. Our
models are evaluated on our newly collected and two public datasets consisting
of image pairs annotated with relationship sentences. Experimental results,
based on both automatic and human evaluation, demonstrate that our model
outperforms all baselines and existing methods on all the datasets.
| 2,019 | Computation and Language |
Distilling Translations with Visual Awareness | Previous work on multimodal machine translation has shown that visual
information is only needed in very specific cases, for example in the presence
of ambiguous words where the textual context is not sufficient. As a
consequence, models tend to learn to ignore this information. We propose a
translate-and-refine approach to this problem where images are only used by a
second stage decoder. This approach is trained jointly to generate a good first
draft translation and to improve over this draft by (i) making better use of
the target language textual context (both left and right-side contexts) and
(ii) making use of visual context. This approach leads to state-of-the-art
results. Additionally, we show that it has the ability to recover from
erroneous or missing words in the source language.
| 2,019 | Computation and Language |
Adaptation of Machine Translation Models with Back-translated Data using
Transductive Data Selection Methods | Data selection has proven its merit for improving Neural Machine Translation
(NMT) when applied to authentic data. But the benefit of using synthetic data
in NMT training, produced by the popular back-translation technique, raises the
question of whether data selection could also be useful for synthetic data.
In this work we use Infrequent N-gram Recovery (INR) and Feature Decay
Algorithms (FDA), two transductive data selection methods to obtain subsets of
sentences from synthetic data. These methods ensure that selected sentences
share n-grams with the test set so the NMT model can be adapted to translate
it.
Performing data selection on back-translated data creates new challenges as
the source side may contain noise introduced by the model used for
back-translation. Hence, finding n-grams present in the test set becomes more
difficult. Despite that, in our work we show that adapting a model with a
selection of synthetic data is a useful approach.
| 2,019 | Computation and Language |
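A toy illustration of the transductive selection idea from the abstract above (in the spirit of INR/FDA, not their exact scoring): score each synthetic pair by how many test-set n-grams its source side covers, decaying the value of n-grams already covered. Function names, the decay factor, and the greedy loop are illustrative assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def select_synthetic(synthetic_pairs, test_sources, n=2, k=1000, decay=0.5):
    """Greedily pick back-translated (source, target) pairs whose possibly noisy
    source side shares n-grams with the test set; covered n-grams lose value."""
    needed = Counter()
    for sent in test_sources:
        needed.update(ngrams(sent.split(), n))
    weight = {g: 1.0 for g in needed}

    selected, candidates = [], list(synthetic_pairs)
    for _ in range(min(k, len(candidates))):
        best, best_score = None, 0.0
        for pair in candidates:
            score = sum(weight.get(g, 0.0) for g in ngrams(pair[0].split(), n))
            if score > best_score:
                best, best_score = pair, score
        if best is None:            # nothing left that overlaps the test set
            break
        selected.append(best)
        candidates.remove(best)
        for g in ngrams(best[0].split(), n):
            if g in weight:
                weight[g] *= decay  # feature decay for already-covered n-grams
    return selected

synthetic = [("the cat sat on the mat", "tgt-1"), ("dogs bark loudly", "tgt-2")]
print(select_synthetic(synthetic, ["the cat sat quietly"], n=2, k=1))
```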
Surf at MEDIQA 2019: Improving Performance of Natural Language Inference
in the Clinical Domain by Adopting Pre-trained Language Model | While deep learning techniques have shown promising results in many natural
language processing (NLP) tasks, they have not been widely applied to the clinical
domain. The lack of large datasets and the pervasive use of domain-specific
language (i.e., abbreviations and acronyms) in the clinical domain cause progress
in clinical NLP to lag behind that of general NLP tasks. To fill this gap, we
employ word/subword-level based models that adopt large-scale data-driven
methods such as pre-trained language models and transfer learning in analyzing
text for the clinical domain. Empirical results demonstrate the superiority of
the proposed methods by achieving 90.6% accuracy in medical domain natural
language inference task. Furthermore, we inspect the independent strengths of
the proposed approaches in quantitative and qualitative manners. This analysis
will help researchers to select necessary components in building models for the
medical domain.
| 2,019 | Computation and Language |
Second-Order Semantic Dependency Parsing with End-to-End Neural Networks | Semantic dependency parsing aims to identify semantic relationships between
words in a sentence that form a graph. In this paper, we propose a second-order
semantic dependency parser, which takes into consideration not only individual
dependency edges but also interactions between pairs of edges. We show that
second-order parsing can be approximated using mean field (MF) variational
inference or loopy belief propagation (LBP). We can unfold both algorithms as
recurrent layers of a neural network and therefore can train the parser in an
end-to-end manner. Our experiments show that our approach achieves
state-of-the-art performance.
| 2,021 | Computation and Language |
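A heavily simplified sketch of how the iterative inference mentioned in the abstract above can be unrolled: a toy mean-field update over edge posteriors with a single sibling-style second-order factor. The real parser uses learned neural scores and richer factors; everything below is an illustrative assumption.

```python
import numpy as np

def mean_field_edges(unary, pairwise, iterations=3):
    """unary[i, j]: first-order score for edge (head i -> dependent j).
    pairwise[i, j, k]: second-order score for edges (i -> j) and (i -> k)
    co-occurring (a sibling-style factor). Returns edge posteriors q[i, j]."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    q = sigmoid(unary)
    for _ in range(iterations):
        # Each edge receives messages from sibling edges, weighted by the
        # current belief that those edges are present.
        msg = np.einsum('ijk,ik->ij', pairwise, q)
        q = sigmoid(unary + msg)
    return q

n = 5
rng = np.random.default_rng(0)
posteriors = mean_field_edges(rng.normal(size=(n, n)), 0.1 * rng.normal(size=(n, n, n)))
```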
Multimodal Abstractive Summarization for How2 Videos | In this paper, we study abstractive summarization for open-domain videos.
Unlike traditional text news summarization, the goal is not so much to "compress"
text information as to provide a fluent textual summary of information
that has been collected and fused from different source modalities, in our case
video and audio transcripts (or text). We show how a multi-source
sequence-to-sequence model with hierarchical attention can integrate
information from different modalities into a coherent output, compare various
models trained with different modalities and present pilot experiments on the
How2 corpus of instructional videos. We also propose a new evaluation metric
(Content F1) for the abstractive summarization task that measures the semantic
adequacy of the summaries rather than their fluency, which is already covered by
metrics like ROUGE and BLEU.
| 2,019 | Computation and Language |
Large-Scale Speaker Diarization of Radio Broadcast Archives | This paper describes our initial efforts to build a large-scale speaker
diarization (SD) and identification system on a recently digitized radio
broadcast archive from the Netherlands which has more than 6500 audio tapes
with 3000 hours of Frisian-Dutch speech recorded between 1950 and 2016. The
employed large-scale diarization scheme involves two stages: (1) tape-level
speaker diarization providing pseudo-speaker identities and (2) speaker linking
to relate pseudo-speakers appearing in multiple tapes. Having access to the
speaker models of several frequently appearing speakers from the previously
collected FAME! speech corpus, we further perform speaker identification by
linking these known speakers to the pseudo-speakers identified at the first
stage. In this work, we present a recently created longitudinal and
multilingual SD corpus designed for large-scale SD research and evaluate the
performance of a new speaker linking system using x-vectors with PLDA to
quantify cross-tape speaker similarity on this corpus. The performance of this
speaker linking system is evaluated on a small subset of the archive which is
manually annotated with speaker information. The speaker linking performance
reported on this subset (53 hours) and the whole archive (3000 hours) is
compared to quantify the impact of scaling up in the amount of speech data.
| 2,019 | Computation and Language |
Multilingual Multi-Domain Adaptation Approaches for Neural Machine
Translation | In this paper, we propose two novel methods for domain adaptation for the
attention-only neural machine translation (NMT) model, i.e., the Transformer.
Our methods focus on training a single translation model for multiple domains
by either learning domain specialized hidden state representations or predictor
biases for each domain. We combine our methods with a previously proposed
black-box method called mixed fine tuning, which is known to be highly
effective for domain adaptation. In addition, we incorporate multilingualism
into the domain adaptation framework. Experiments show that multilingual
multi-domain adaptation can significantly improve both resource-poor in-domain
and resource-rich out-of-domain translations, and the combination of our
methods with mixed fine tuning achieves the best performance.
| 2,019 | Computation and Language |
Code-Switching Detection Using ASR-Generated Language Posteriors | Code-switching (CS) detection refers to the automatic detection of language
switches in code-mixed utterances. This task can be achieved by using a CS
automatic speech recognition (ASR) system that can handle such language
switches. In our previous work, we have investigated the code-switching
detection performance of the Frisian-Dutch CS ASR system by using the time
alignment of the most likely hypothesis and found that this technique suffers
from over-switching due to numerous very short spurious language switches. In
this paper, we propose a novel method for CS detection aiming to remedy this
shortcoming by using the language posteriors which are the sum of the
frame-level posteriors of phones belonging to the same language. The CS
ASR-generated language posteriors contain more complete language-specific
information on frame level compared to the time alignment of the ASR output.
Hence, it is expected to yield more accurate and robust CS detection. The CS
detection experiments demonstrate that the proposed language posterior-based
approach provides higher detection accuracy than the baseline system in terms
of equal error rate. Moreover, a detailed CS detection error analysis reveals
that using language posteriors reduces the false alarms and results in more
robust CS detection.
| 2,019 | Computation and Language |
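A small numpy sketch of the language-posterior idea from the abstract above: per frame, sum the posteriors of all phones belonging to each language, smooth the resulting curve, and read switches off the dominant-language sequence. The phone-to-language mapping, smoothing window, and detection rule are illustrative assumptions.

```python
import numpy as np

def language_posteriors(phone_posteriors, phone_language):
    """phone_posteriors: (frames, phones) frame-level phone posteriors.
    phone_language: length-`phones` array of 0 or 1 (e.g. Frisian vs. Dutch).
    Returns (frames, 2) language posteriors obtained by summing per language."""
    lang_post = np.zeros((phone_posteriors.shape[0], 2))
    for lang in (0, 1):
        lang_post[:, lang] = phone_posteriors[:, phone_language == lang].sum(axis=1)
    return lang_post

def detect_switches(lang_post, smooth=25):
    """Smooth the language-score difference with a moving average and report
    frame indices where the dominant language changes."""
    diff = lang_post[:, 0] - lang_post[:, 1]
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(diff, kernel, mode='same')
    labels = (smoothed < 0).astype(int)           # dominant language per frame
    return np.nonzero(np.diff(labels))[0] + 1     # switch points

frames, phones = 200, 40
rng = np.random.default_rng(1)
post = rng.dirichlet(np.ones(phones), size=frames)
switches = detect_switches(language_posteriors(post, rng.integers(0, 2, phones)))
```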
The Effect of Translationese in Machine Translation Test Sets | The effect of translationese has been studied in the field of machine
translation (MT), mostly with respect to training data. We study in depth the
effect of translationese on test data, using the test sets from the last three
editions of WMT's news shared task, containing 17 translation directions. We
show evidence that (i) the use of translationese in test sets results in
inflated human evaluation scores for MT systems; (ii) in some cases system
rankings do change; and (iii) the impact translationese has on a translation
direction is inversely correlated with the translation quality attainable by
state-of-the-art MT systems for that direction.
| 2,019 | Computation and Language |
Pre-Training with Whole Word Masking for Chinese BERT | Bidirectional Encoder Representations from Transformers (BERT) has shown
marvelous improvements across various NLP tasks, and its consecutive variants
have been proposed to further improve the performance of the pre-trained
language models. In this paper, we aim to first introduce the whole word
masking (wwm) strategy for Chinese BERT, along with a series of Chinese
pre-trained language models. Then we also propose a simple but effective model
called MacBERT, which improves upon RoBERTa in several ways. In particular, we
propose a new masking strategy called MLM as correction (Mac). To demonstrate
the effectiveness of these models, we create a series of Chinese pre-trained
language models as our baselines, including BERT, RoBERTa, ELECTRA, RBT, etc.
We carried out extensive experiments on ten Chinese NLP tasks to evaluate the
created Chinese pre-trained language models as well as the proposed MacBERT.
Experimental results show that MacBERT could achieve state-of-the-art
performances on many NLP tasks, and we also ablate details with several
findings that may help future research. We open-source our pre-trained language
models to further facilitate research in the community. Resources are
available: https://github.com/ymcui/Chinese-BERT-wwm
| 2,021 | Computation and Language |
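A hedged sketch of whole word masking as described in the abstract above: once a word is chosen, all of its sub-tokens are masked together (for Chinese, "words" would come from a word segmenter rather than whitespace). The simplified tokenization and the 15% rate are the conventional BERT values, not necessarily the exact recipe of the released models.

```python
import random

MASK = "[MASK]"

def whole_word_mask(subword_tokens, mask_rate=0.15, seed=0):
    """Group sub-tokens into whole words (continuation pieces start with '##'),
    then mask every piece of each selected word together."""
    rng = random.Random(seed)
    words = []                      # each word = list of positions in the sequence
    for i, tok in enumerate(subword_tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)
        else:
            words.append([i])

    n_to_mask = max(1, int(round(len(words) * mask_rate)))
    masked = list(subword_tokens)
    for word in rng.sample(words, n_to_mask):
        for pos in word:            # mask every piece of the chosen word
            masked[pos] = MASK
    return masked

tokens = ["the", "philharmonic", "##s", "played", "beautiful", "##ly"]
print(whole_word_mask(tokens, mask_rate=0.3))
```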
EditNTS: An Neural Programmer-Interpreter Model for Sentence
Simplification through Explicit Editing | We present the first sentence simplification model that learns explicit edit
operations (ADD, DELETE, and KEEP) via a neural programmer-interpreter
approach. Most current neural sentence simplification systems are variants of
sequence-to-sequence models adopted from machine translation. These methods
learn to simplify sentences as a byproduct of the fact that they are trained on
complex-simple sentence pairs. By contrast, our neural programmer-interpreter
is directly trained to predict explicit edit operations on targeted parts of
the input sentence, resembling the way that humans might perform simplification
and revision. Our model outperforms previous state-of-the-art neural sentence
simplification models (without external knowledge) by large margins on three
benchmark text simplification corpora in terms of SARI (+0.95 WikiLarge, +1.89
WikiSmall, +1.41 Newsela), and is judged by humans to produce overall better
and simpler output sentences.
| 2,019 | Computation and Language |
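To make the edit-operation idea in the abstract above concrete, here is a tiny interpreter that applies a predicted sequence of KEEP / DELETE / ADD operations to a complex source sentence; the operation format is an assumption for illustration, not the exact output interface of EditNTS.

```python
def apply_edits(source_tokens, edit_ops):
    """edit_ops is a list of ('KEEP',), ('DELETE',) or ('ADD', word) tuples.
    KEEP and DELETE consume one source token; ADD inserts a new word."""
    output, i = [], 0
    for op in edit_ops:
        if op[0] == "ADD":
            output.append(op[1])
        elif op[0] == "KEEP":
            output.append(source_tokens[i]); i += 1
        elif op[0] == "DELETE":
            i += 1
    output.extend(source_tokens[i:])   # keep any unconsumed source tokens
    return output

src = "the cat which was very old slept".split()
ops = [("KEEP",), ("ADD", "old"), ("KEEP",), ("DELETE",), ("DELETE",),
       ("DELETE",), ("DELETE",), ("KEEP",)]
print(" ".join(apply_edits(src, ops)))   # -> "the old cat slept"
```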
XLNet: Generalized Autoregressive Pretraining for Language Understanding | With the capability of modeling bidirectional contexts, denoising
autoencoding based pretraining like BERT achieves better performance than
pretraining approaches based on autoregressive language modeling. However,
relying on corrupting the input with masks, BERT neglects dependency between
the masked positions and suffers from a pretrain-finetune discrepancy. In light
of these pros and cons, we propose XLNet, a generalized autoregressive
pretraining method that (1) enables learning bidirectional contexts by
maximizing the expected likelihood over all permutations of the factorization
order and (2) overcomes the limitations of BERT thanks to its autoregressive
formulation. Furthermore, XLNet integrates ideas from Transformer-XL, the
state-of-the-art autoregressive model, into pretraining. Empirically, under
comparable experiment settings, XLNet outperforms BERT on 20 tasks, often by a
large margin, including question answering, natural language inference,
sentiment analysis, and document ranking.
| 2,020 | Computation and Language |
Incorporating Priors with Feature Attribution on Text Classification | Feature attribution methods, proposed recently, help users interpret the
predictions of complex models. Our approach integrates feature attributions
into the objective function to allow machine learning practitioners to
incorporate priors in model building. To demonstrate the effectiveness of our
technique, we apply it to two tasks: (1) mitigating unintended bias in text
classifiers by neutralizing identity terms; (2) improving classifier
performance in a scarce data setting by forcing the model to focus on toxic
terms. Our approach adds an L2 distance loss between feature attributions and
task-specific prior values to the objective. Our experiments show that i) a
classifier trained with our technique reduces undesired model biases without a
trade-off on the original task; and ii) incorporating priors helps model
performance in scarce data settings.
| 2,019 | Computation and Language |
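A hedged PyTorch sketch of the objective described in the abstract above: an L2 penalty between a simple feature attribution (here, gradient times embedding summed over the embedding dimension) and task-specific prior values, added to the task loss. The attribution method, prior values, and weighting are illustrative choices, not necessarily those of the paper.

```python
import torch
import torch.nn as nn

def loss_with_attribution_prior(model, embeddings, labels, prior, lam=0.1):
    """embeddings: (batch, seq, dim) input embeddings.
    prior: (batch, seq) target attribution values (e.g. 0 for identity terms).
    Adds lam * ||attribution - prior||^2 to the task cross-entropy."""
    embeddings.requires_grad_(True)
    logits = model(embeddings)
    task_loss = nn.functional.cross_entropy(logits, labels)

    # Gradient x input as a simple per-token attribution.
    grads = torch.autograd.grad(task_loss, embeddings, create_graph=True)[0]
    attribution = (grads * embeddings).sum(dim=-1)        # (batch, seq)

    prior_loss = ((attribution - prior) ** 2).mean()
    return task_loss + lam * prior_loss

# Tiny model: flatten token embeddings, then a linear classifier.
model = nn.Sequential(nn.Flatten(start_dim=1), nn.Linear(8 * 16, 2))
emb = torch.randn(4, 8, 16)
loss = loss_with_attribution_prior(model, emb, torch.randint(0, 2, (4,)),
                                   prior=torch.zeros(4, 8))
loss.backward()
```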
Embedding time expressions for deep temporal ordering models | Data-driven models have demonstrated state-of-the-art performance in
inferring the temporal ordering of events in text. However, these models often
overlook explicit temporal signals, such as dates and time windows. Rule-based
methods can be used to identify the temporal links between these time
expressions (timexes), but they fail to capture timexes' interactions with
events and are hard to integrate with the distributed representations of neural
net models. In this paper, we introduce a framework to infuse temporal
awareness into such models by learning a pre-trained model to embed timexes. We
generate synthetic data consisting of pairs of timexes, then train a character
LSTM to learn embeddings and classify the timexes' temporal relation. We
evaluate the utility of these embeddings in the context of a strong neural
model for event temporal ordering, and show a small increase in performance on
the MATRES dataset and more substantial gains on an automatically collected
dataset with more frequent event-timex interactions.
| 2,019 | Computation and Language |
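A small sketch of generating synthetic timex pairs with temporal-relation labels, in the spirit of the pre-training data described in the abstract above; the surface formats and the BEFORE/AFTER/SAME label set are assumptions for illustration.

```python
import random
from datetime import date, timedelta

FORMATS = ["%Y-%m-%d", "%B %d, %Y", "%d %B %Y"]

def random_date(rng, start=date(1990, 1, 1), span_days=12000):
    return start + timedelta(days=rng.randrange(span_days))

def make_pair(rng):
    """Return (timex1, timex2, relation) with relation in BEFORE/AFTER/SAME."""
    d1, d2 = random_date(rng), random_date(rng)
    rel = "BEFORE" if d1 < d2 else ("AFTER" if d1 > d2 else "SAME")
    fmt = lambda d: d.strftime(rng.choice(FORMATS))
    return fmt(d1), fmt(d2), rel

rng = random.Random(42)
for t1, t2, rel in (make_pair(rng) for _ in range(5)):
    print(t1, "<->", t2, ":", rel)
```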
REflex: Flexible Framework for Relation Extraction in Multiple Domains | Systematic comparison of methods for relation extraction (RE) is difficult
because many experiments in the field are not described precisely enough to be
completely reproducible and many papers fail to report ablation studies that
would highlight the relative contributions of their various combined
techniques. In this work, we build a unifying framework for RE, applying this
on three highly used datasets (from the general, biomedical and clinical
domains) with the ability to be extendable to new datasets. By performing a
systematic exploration of modeling, pre-processing and training methodologies,
we find that choices of pre-processing are a large contributor to performance and
that omission of such information can further hinder fair comparison. Other
insights from our exploration allow us to provide recommendations for future
research in this area.
| 2,021 | Computation and Language |
Learning Compressed Sentence Representations for On-Device Text
Processing | Vector representations of sentences, trained on massive text corpora, are
widely used as generic sentence embeddings across a variety of NLP problems.
The learned representations are generally assumed to be continuous and
real-valued, giving rise to a large memory footprint and slow retrieval speed,
which hinders their applicability to low-resource (memory and computation)
platforms, such as mobile devices. In this paper, we propose four different
strategies to transform continuous and generic sentence embeddings into a
binarized form, while preserving their rich semantic information. The
introduced methods are evaluated across a wide range of downstream tasks, where
the binarized sentence embeddings are demonstrated to degrade performance by
only about 2% relative to their continuous counterparts, while reducing the
storage requirement by over 98%. Moreover, with the learned binary
representations, the semantic relatedness of two sentences can be evaluated by
simply calculating their Hamming distance, which is more computationally
efficient than the inner product operation between continuous embeddings.
Detailed analysis and a case study further validate the effectiveness of the
proposed methods.
| 2,019 | Computation and Language |
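The simplest flavor of the binarization discussed in the abstract above (hard thresholding) and the Hamming-based relatedness score can be sketched in a few lines of numpy; thresholding at the per-dimension mean is only one possible strategy and is an assumption here.

```python
import numpy as np

def binarize(embeddings):
    """Threshold each dimension at its mean so roughly half the bits are set."""
    return (embeddings > embeddings.mean(axis=0, keepdims=True)).astype(np.uint8)

def hamming_similarity(a_bits, b_bits):
    """Fraction of matching bits; higher means more semantically related."""
    return 1.0 - np.mean(a_bits != b_bits)

rng = np.random.default_rng(0)
sent_embeddings = rng.normal(size=(3, 512))   # e.g. continuous sentence vectors
bits = binarize(sent_embeddings)
print(hamming_similarity(bits[0], bits[1]))
```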
Considerations for the Interpretation of Bias Measures of Word
Embeddings | Word embedding spaces are powerful tools for capturing latent semantic
relationships between terms in corpora, and have become widely popular for
building state-of-the-art natural language processing algorithms. However,
studies have shown that societal biases present in text corpora may be
incorporated into the word embedding spaces learned from them. Thus, there is
an ethical concern that human-like biases contained in the corpora and their
derived embedding spaces might be propagated, or even amplified with the usage
of the biased embedding spaces in downstream applications. In an attempt to
quantify these biases so that they may be better understood and studied,
several bias metrics have been proposed. We explore the statistical properties
of these proposed measures in the context of their cited applications as well
as their supposed utilities. We find that there are caveats to the simple
interpretation of these metrics as proposed. We find that the bias metric
proposed by Bolukbasi et al. 2016 is highly sensitive to embedding
hyper-parameter selection, and that in many cases, the variance due to the
selection of some hyper-parameters is greater than the variance in the metric
due to corpus selection, while in fewer cases the bias rankings of corpora vary
with hyper-parameter selection. In light of these observations, it may be the
case that bias estimates should not be thought to directly measure the
properties of the underlying corpus, but rather the properties of the specific
embedding spaces in question, particularly in the context of hyper-parameter
selections used to generate them. Hence, bias metrics of spaces generated with
differing hyper-parameters should be compared only with explicit consideration
of the embedding-learning algorithms' particular configurations.
| 2,019 | Computation and Language |
Robust Machine Translation with Domain Sensitive Pseudo-Sources:
Baidu-OSU WMT19 MT Robustness Shared Task System Report | This paper describes the machine translation system developed jointly by
Baidu Research and Oregon State University for WMT 2019 Machine Translation
Robustness Shared Task. Translation of social media is a very challenging
problem, since its style is very different from normal parallel corpora (e.g.
News) and it also includes various types of noise. To make matters worse, the
amount of social media parallel data is extremely limited. In this paper, we use
a domain-sensitive training method which leverages a large amount of parallel
data from popular domains together with a small amount of parallel data from
social media. Furthermore, we generate a parallel dataset with pseudo noisy
source sentences which are back-translated from monolingual data using a model
trained in a similarly domain-sensitive way. We achieve more than 10 BLEU
improvement in both En-Fr and Fr-En translation compared with the baseline
methods.
| 2,019 | Computation and Language |
Hierarchical Document Encoder for Parallel Corpus Mining | We explore using multilingual document embeddings for nearest neighbor mining
of parallel data. Three document-level representations are investigated: (i)
document embeddings generated by simply averaging multilingual sentence
embeddings; (ii) a neural bag-of-words (BoW) document encoding model; (iii) a
hierarchical multilingual document encoder (HiDE) that builds on our
sentence-level model. The results show document embeddings derived from
sentence-level averaging are surprisingly effective for clean datasets, but
suggest models trained hierarchically at the document-level are more effective
on noisy data. Analysis experiments demonstrate our hierarchical models are
very robust to variations in the underlying sentence embedding quality. Using
document embeddings trained with HiDE achieves state-of-the-art performance on
United Nations (UN) parallel document mining, 94.9% P@1 for en-fr and 97.3% P@1
for en-es.
| 2,019 | Computation and Language |
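A sketch of the first and simplest document representation in the abstract above: average the multilingual sentence embeddings of a document, then mine parallel documents by cosine nearest neighbor. The sentence embeddings are assumed to come from an external multilingual encoder; shapes and names are illustrative.

```python
import numpy as np

def document_embedding(sentence_embeddings):
    """Average pre-computed sentence embeddings and L2-normalize the result."""
    doc = np.mean(sentence_embeddings, axis=0)
    return doc / (np.linalg.norm(doc) + 1e-9)

def mine_parallel(src_docs, tgt_docs):
    """For each source document, return the index of the nearest target document
    by cosine similarity (vectors are normalized, so the dot product is cosine)."""
    src = np.stack([document_embedding(d) for d in src_docs])
    tgt = np.stack([document_embedding(d) for d in tgt_docs])
    return (src @ tgt.T).argmax(axis=1)

rng = np.random.default_rng(0)
en_docs = [rng.normal(size=(n, 64)) for n in (5, 8, 3)]   # sentences x dim
fr_docs = [rng.normal(size=(n, 64)) for n in (4, 6, 7)]
print(mine_parallel(en_docs, fr_docs))
```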
Multi-Grained Named Entity Recognition | This paper presents a novel framework, MGNER, for Multi-Grained Named Entity
Recognition where multiple entities or entity mentions in a sentence could be
non-overlapping or totally nested. Different from traditional approaches that
regard NER as a sequential labeling task and annotate entities
consecutively, MGNER detects and recognizes entities at multiple granularities:
it is able to recognize named entities without explicitly assuming
non-overlapping or totally nested structures. MGNER consists of a Detector that
examines all possible word segments and a Classifier that categorizes entities.
In addition, contextual information and a self-attention mechanism are utilized
throughout the framework to improve the NER performance. Experimental results
show that MGNER outperforms current state-of-the-art baselines by up to 4.4% in
terms of the F1 score among nested/non-overlapping NER tasks.
| 2,020 | Computation and Language |
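The Detector described in the abstract above examines all possible word segments; a minimal enumeration of candidate spans up to a maximum length, which a span classifier would then score, looks like the sketch below (the maximum length is an illustrative hyper-parameter).

```python
def candidate_spans(tokens, max_len=6):
    """Enumerate every contiguous segment up to max_len words, so that both
    nested and non-overlapping entities are available to a span classifier."""
    spans = []
    for start in range(len(tokens)):
        for end in range(start + 1, min(start + max_len, len(tokens)) + 1):
            spans.append((start, end, tokens[start:end]))
    return spans

sent = "the University of California in Los Angeles".split()
for start, end, words in candidate_spans(sent, max_len=4):
    print(start, end, " ".join(words))
```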
Generating Empathetic Responses by Looking Ahead the User's Sentiment | An important aspect of human conversation that is difficult for machines is
conversing with empathy, i.e., understanding the user's emotion and responding
appropriately. Recent neural conversation models that attempted to generate
empathetic responses either focused on conditioning the output to a given
emotion, or incorporating the current user emotional state. However, these
approaches do not factor in how the user would feel towards the generated
response. Hence, in this paper, we propose Sentiment Look-ahead, which is a
novel perspective for empathy that models the future user emotional state. In
short, Sentiment Look-ahead is a reward function under a reinforcement learning
framework that provides a higher reward to the generative model when the
generated utterance improves the user's sentiment. We implement and evaluate
three different possible implementations of sentiment look-ahead and
empirically show that our proposed approach can generate significantly more
empathetic, relevant, and fluent responses than other competitive baselines
such as multitask learning.
| 2,021 | Computation and Language |
Hindi Question Generation Using Dependency Structures | Hindi question answering systems suffer from a lack of data. To address the
same, this paper presents an approach towards automatic question generation. We
present a rule-based system for question generation in Hindi by formalizing
question transformation methods based on karaka-dependency theory. We use a
Hindi dependency parser to mark the karaka roles and use IndoWordNet, a Hindi
ontology, to detect the semantic category of the karaka role heads to generate
the interrogatives. We analyze how one sentence can have multiple generations
from the same karaka role's rule. The generations are manually annotated by
multiple annotators on a semantic and syntactic scale for evaluation. Further,
we constrain our generation with the help of various semantic and syntactic
filters so as to improve the generation quality. Using these methods, we are
able to generate diverse questions, significantly more than the number of
sentences fed to the system.
| 2,019 | Computation and Language |
Improving Zero-shot Translation with Language-Independent Constraints | An important concern in training multilingual neural machine translation
(NMT) is to translate between language pairs unseen during training, i.e.,
zero-shot translation. Improving this ability kills two birds with one stone by
providing an alternative to pivot translation which also allows us to better
understand how the model captures information between languages.
In this work, we carried out an investigation on this capability of the
multilingual NMT models. First, we intentionally create an encoder architecture
which is independent with respect to the source language. Such experiments shed
light on the ability of NMT encoders to learn multilingual representations, in
general. Based on such proof of concept, we were able to design regularization
methods into the standard Transformer model, so that the whole architecture
becomes more robust in zero-shot conditions. We investigated the behaviour of
such models on the standard IWSLT 2017 multilingual dataset. We achieved an
average improvement of 2.23 BLEU points across 12 language pairs compared to
the zero-shot performance of a state-of-the-art multilingual system.
Additionally, we carry out further experiments in which the effect is confirmed
even for language pairs with multiple intermediate pivots.
| 2,019 | Computation and Language |
Conflict as an Inverse of Attention in Sequence Relationship | Attention is a very efficient way to model the relationship between two
sequences by comparing how similar two intermediate representations are.
Initially demonstrated in NMT, it is a standard in all NLU tasks today when
efficient interaction between sequences is considered. However, we show that
attention, by virtue of its composition, works best only when it is given that
there is a match somewhere between the two sequences. It does not adapt very
well to cases where there is no similarity between the two sequences or where
the relationship is contrastive. We propose a Conflict model which is very
similar to how attention works but which mostly emphasizes how well two
sequences repel each other, and we empirically show how this method, in
conjunction with attention, can boost the overall performance.
| 2,019 | Computation and Language |
Fine-tuning Pre-Trained Transformer Language Models to Distantly
Supervised Relation Extraction | Distantly supervised relation extraction is widely used to extract relational
facts from text, but suffers from noisy labels. Current relation extraction
methods try to alleviate the noise by multi-instance learning and by providing
supporting linguistic and contextual information to more efficiently guide the
relation classification. While these methods achieve state-of-the-art results,
we observed them to be biased towards recognizing a limited set of relations with
high precision, while ignoring those in the long tail. To address this gap, we
utilize a pre-trained language model, the OpenAI Generative Pre-trained
Transformer (GPT) [Radford et al., 2018]. The GPT and similar models have been
shown to capture semantic and syntactic features, and also a notable amount of
"common-sense" knowledge, which we hypothesize are important features for
recognizing a more diverse set of relations. By extending the GPT to the
distantly supervised setting, and fine-tuning it on the NYT10 dataset, we show
that it predicts a larger set of distinct relation types with high confidence.
Manual and automated evaluation of our model shows that it achieves a
state-of-the-art AUC score of 0.422 on the NYT10 dataset, and performs
especially well at higher recall levels.
| 2,019 | Computation and Language |
Semi-supervised acoustic model training for five-lingual code-switched
ASR | This paper presents recent progress in the acoustic modelling of
under-resourced code-switched (CS) speech in multiple South African languages.
We consider two approaches. The first constructs separate bilingual acoustic
models corresponding to language pairs (English-isiZulu, English-isiXhosa,
English-Setswana and English-Sesotho). The second constructs a single unified
five-lingual acoustic model representing all the languages (English, isiZulu,
isiXhosa, Setswana and Sesotho). For these two approaches we consider the
effectiveness of semi-supervised training to increase the size of the very
sparse acoustic training sets. Using approximately 11 hours of untranscribed
speech, we show that both approaches benefit from semi-supervised training. The
bilingual TDNN-F acoustic models also benefit from the addition of CNN layers
(CNN-TDNN-F), while the five-lingual system does not show any significant
improvement. Furthermore, because English is common to all language pairs in
our data, it dominates when training a unified language model, leading to
improved English ASR performance at the expense of the other languages.
Nevertheless, the five-lingual model offers flexibility because it can process
more than two languages simultaneously, and is therefore an attractive option
as an automatic transcription system in a semi-supervised training pipeline.
| 2,019 | Computation and Language |
Few-Shot Sequence Labeling with Label Dependency Transfer and Pair-wise
Embedding | While few-shot classification has been widely explored with similarity based
methods, few-shot sequence labeling poses a unique challenge as it also calls
for modeling the label dependencies. To consider both the item similarity and
label dependency, we propose to leverage the conditional random fields (CRFs)
in few-shot sequence labeling. It calculates emission score with similarity
based methods and obtains transition score with a specially designed transfer
mechanism. When applying CRF in the few-shot scenarios, the discrepancy of
label sets among different domains makes it hard to use the label dependency
learned in prior domains. To tackle this, we introduce the dependency transfer
mechanism that transfers abstract label transition patterns. In addition, the
similarity-based methods rely on high-quality sample representations, which is
challenging for sequence labeling because the sense of a word differs when
measuring its similarity to words in different sentences. To remedy this, we
take advantage of recent contextual embedding techniques and further propose a
pair-wise embedder. It provides additional certainty for word sense by
embedding query and support sentences pairwise. Experimental results on slot
tagging and named entity recognition show that our model significantly
outperforms the strongest few-shot learning baseline by 11.76 (21.2%) and 12.18
(97.7%) F1 scores respectively in the one-shot setting.
| 2,019 | Computation and Language |
A New Statistical Approach for Comparing Algorithms for Lexicon Based
Sentiment Analysis | Lexicon based sentiment analysis usually relies on the identification of
various words to which a numerical value corresponding to sentiment can be
assigned. In principle, classifiers can be obtained from these algorithms by
comparison with human annotation, which is considered the gold standard. In
practice this is difficult in languages such as Portuguese where there is a
paucity of human annotated texts. Thus in order to compare algorithms, a next
best step is to directly compare different algorithms with each other without
referring to human annotation. In this paper we develop methods for a
statistical comparison of algorithms which does not rely on human annotation or
on known class labels. We motivate the use of marginal homogeneity tests, as
well as log-linear models within the framework of maximum likelihood
estimation. We also show how some uncertainties present in lexicon-based
sentiment analysis may be similar to those which occur in human-annotated
tweets. Finally, we show that the variability in the output of different
algorithms is lexicon dependent, and we quantify this variability within the
framework of log-linear models.
| 2,019 | Computation and Language |
Autonomous Haiku Generation | Artificial Intelligence is an excellent tool to improve efficiency and lower
cost in many quantitative real world applications, but what if the task is not
easily defined? What if the task is generating creativity? Poetry is a creative
endeavor that is highly difficult to both grasp and achieve with any level of
competence. As Rita Dove, a famous American poet and author, states, "Poetry is
language at its most distilled and most powerful." Taking Dove's quote as
inspiration, our task was to generate high-quality haikus using artificial
intelligence and deep learning.
| 2,019 | Computation and Language |
Informative Image Captioning with External Sources of Information | An image caption should fluently present the essential information in a given
image, including informative, fine-grained entity mentions and the manner in
which these entities interact. However, current captioning models are usually
trained to generate captions that only contain common object names, thus
falling short on an important "informativeness" dimension. We present a
mechanism for integrating image information together with fine-grained labels
(assumed to be generated by some upstream models) into a caption that describes
the image in a fluent and informative manner. We introduce a multimodal,
multi-encoder model based on Transformer that ingests both image features and
multiple sources of entity labels. We demonstrate that we can learn to control
the appearance of these entity labels in the output, resulting in captions that
are both fluent and informative.
| 2,019 | Computation and Language |
Low-Resource Corpus Filtering using Multilingual Sentence Embeddings | In this paper, we describe our submission to the WMT19 low-resource parallel
corpus filtering shared task. Our main approach is based on the LASER toolkit
(Language-Agnostic SEntence Representations), which uses an encoder-decoder
architecture trained on a parallel corpus to obtain multilingual sentence
representations. We then use the representations directly to score and filter
the noisy parallel sentences without additionally training a scoring function.
We contrast our approach to other promising methods and show that LASER yields
strong results. Finally, we produce an ensemble of different scoring methods
and obtain additional gains. Our submission achieved the best overall
performance for both the Nepali-English and Sinhala-English 1M tasks by a
margin of 1.3 and 1.4 BLEU respectively, as compared to the second best
systems. Moreover, our experiments show that this technique is promising for
low and even no-resource scenarios.
| 2,019 | Computation and Language |
Exploiting Entity BIO Tag Embeddings and Multi-task Learning for
Relation Extraction with Imbalanced Data | In a practical scenario, relation extraction needs to first identify entity
pairs that have a relation and then assign a correct relation class. However, the
number of non-relation entity pairs in context (negative instances) usually far
exceeds the others (positive instances), which negatively affects a model's
performance. To mitigate this problem, we propose a multi-task architecture
which jointly trains a model to perform relation identification with
cross-entropy loss and relation classification with ranking loss. Meanwhile, we
observe that a sentence may have multiple entities and relation mentions, and
the patterns in which the entities appear in a sentence may contain useful
semantic information that can be utilized to distinguish between positive and
negative instances. Thus we further incorporate the embeddings of
character-wise/word-wise BIO tag from the named entity recognition task into
character/word embeddings to enrich the input representation. Experiment
results show that our proposed approach can significantly improve the
performance of a baseline model with more than 10% absolute increase in
F1-score, and outperform the state-of-the-art models on ACE 2005 Chinese and
English corpus. Moreover, BIO tag embeddings are particularly effective and can
be used to improve other models as well.
| 2,019 | Computation and Language |
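A hedged sketch of the input enrichment described in the abstract above: embed each token's BIO tag (e.g. produced by an upstream NER model) and concatenate it with the word embedding before the relation-extraction encoder. The tag set and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

BIO_TAGS = {"O": 0, "B-PER": 1, "I-PER": 2, "B-ORG": 3, "I-ORG": 4}

class BIOEnrichedEmbedder(nn.Module):
    """Concatenate word embeddings with embeddings of their BIO tags to enrich
    the input representation for relation extraction."""

    def __init__(self, vocab_size, word_dim=100, tag_dim=20):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.tag_emb = nn.Embedding(len(BIO_TAGS), tag_dim)

    def forward(self, word_ids, tag_ids):
        return torch.cat([self.word_emb(word_ids), self.tag_emb(tag_ids)], dim=-1)

embedder = BIOEnrichedEmbedder(vocab_size=5000)
words = torch.randint(0, 5000, (2, 10))
tags = torch.randint(0, len(BIO_TAGS), (2, 10))
enriched = embedder(words, tags)   # (2, 10, 120), fed to the downstream encoder
```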
Learning Bilingual Word Embeddings Using Lexical Definitions | Bilingual word embeddings, which represent lexicons of different languages in
a shared embedding space, are essential for supporting semantic and knowledge
transfers in a variety of cross-lingual NLP tasks. Existing approaches to
training bilingual word embeddings often require pre-defined seed lexicons that
are expensive to obtain, or parallel sentences that comprise coarse and noisy
alignment. In contrast, we propose BilLex, which leverages publicly available
lexical definitions for bilingual word embedding learning. Without the need of
predefined seed lexicons, BilLex comprises a novel word pairing strategy to
automatically identify and propagate the precise fine-grained word alignment
from lexical definitions. We evaluate BilLex in word-level and sentence-level
translation tasks, which seek to find the cross-lingual counterparts of words
and sentences respectively. BilLex significantly outperforms previous embedding
methods on both tasks.
| 2,020 | Computation and Language |
Be Consistent! Improving Procedural Text Comprehension using Label
Consistency | Our goal is procedural text comprehension, namely tracking how the properties
of entities (e.g., their location) change with time given a procedural text
(e.g., a paragraph about photosynthesis, a recipe). This task is challenging as
the world is changing throughout the text, and despite recent advances, current
systems still struggle with this task. Our approach is to leverage the fact
that, for many procedural texts, multiple independent descriptions are readily
available, and that predictions from them should be consistent (label
consistency). We present a new learning framework that leverages label
consistency during training, allowing consistency bias to be built into the
model. Evaluation on a standard benchmark dataset for procedural text, ProPara
(Dalvi et al., 2018), shows that our approach significantly improves prediction
performance (F1) over prior state-of-the-art systems.
| 2,019 | Computation and Language |
A Deep Generative Model for Code-Switched Text | Code-switching, the interleaving of two or more languages within a sentence
or discourse is pervasive in multilingual societies. Accurate language models
for code-switched text are critical for NLP tasks. State-of-the-art
data-intensive neural language models are difficult to train well from scarce
language-labeled code-switched text. A potential solution is to use deep
generative models to synthesize large volumes of realistic code-switched text.
Although generative adversarial networks and variational autoencoders can
synthesize plausible monolingual text from continuous latent space, they cannot
adequately address code-switched text, owing to their informal style and
complex interplay between the constituent languages. We introduce VACS, a novel
variational autoencoder architecture specifically tailored to code-switching
phenomena. VACS encodes to and decodes from a two-level hierarchical
representation, which models syntactic contextual signals in the lower layer
and language switching signals in the upper layer. Sampling representations
from the prior and decoding them produced well-formed, diverse code-switched
sentences. Extensive experiments show that using synthetic code-switched text
with natural monolingual data results in a significant (33.06%) drop in
perplexity.
| 2,019 | Computation and Language |
Mitigating Gender Bias in Natural Language Processing: Literature Review | As Natural Language Processing (NLP) and Machine Learning (ML) tools rise in
popularity, it becomes increasingly vital to recognize the role they play in
shaping societal biases and stereotypes. Although NLP models have shown success
in modeling various applications, they propagate and may even amplify gender
bias found in text corpora. While the study of bias in artificial intelligence
is not new, methods to mitigate gender bias in NLP are relatively nascent. In
this paper, we review contemporary studies on recognizing and mitigating gender
bias in NLP. We discuss gender bias based on four forms of representation bias
and analyze methods recognizing gender bias. Furthermore, we discuss the
advantages and drawbacks of existing gender debiasing methods. Finally, we
discuss future studies for recognizing and mitigating gender bias in NLP.
| 2,019 | Computation and Language |
Incremental Adaptation of NMT for Professional Post-editors: A User
Study | A common use of machine translation in the industry is providing initial
translation hypotheses, which are later supervised and post-edited by a human
expert. During this revision process, new bilingual data are continuously
generated. Machine translation systems can benefit from these new data,
incrementally updating the underlying models under an online learning paradigm.
We conducted a user study on this scenario, for a neural machine translation
system. The experimentation was carried out by professional translators, with a
vast experience in machine translation post-editing. The results showed a
reduction in the amount of human effort required when post-editing the
outputs of the system, improvements in the translation quality, and a positive
perception of the adaptive system by the users.
| 2,019 | Computation and Language |
Demonstration of a Neural Machine Translation System with Online
Learning for Translators | We introduce a demonstration of our system, which implements online learning
for neural machine translation in a production environment. These techniques
allow the system to continuously learn from the corrections provided by the
translators. We implemented an end-to-end platform integrating our machine
translation servers to one of the most common user interfaces for professional
translators: SDL Trados Studio. Our objective was to save post-editing effort
as the machine is continuously learning from human choices and adapting the
models to a specific domain or user style.
| 2,019 | Computation and Language |
CUNI System for the WMT19 Robustness Task | We present our submission to the WMT19 Robustness Task. Our baseline system
is the Charles University (CUNI) Transformer system trained for the WMT18
shared task on News Translation. Quantitative results show that the CUNI
Transformer system is already far more robust to noisy input than the
LSTM-based baseline provided by the task organizers. We further improved the
performance of our model by fine-tuning on the in-domain noisy data without
influencing the translation quality on the news domain.
| 2,019 | Computation and Language |
A Multitask Network for Localization and Recognition of Text in Images | We present an end-to-end trainable multi-task network that addresses the
problem of lexicon-free text extraction from complex documents. This network
simultaneously solves the problems of text localization and text recognition
and text segments are identified with no post-processing, cropping, or word
grouping. A convolutional backbone and Feature Pyramid Network are combined to
provide a shared representation that benefits each of three model heads: text
localization, classification, and text recognition. To improve recognition
accuracy, we describe a dynamic pooling mechanism that retains high-resolution
information across all RoIs. For text recognition, we propose a convolutional
mechanism with attention which outperforms more common recurrent
architectures. Our model is evaluated against benchmark datasets and comparable
methods and achieves high performance in challenging regimes of non-traditional
OCR.
| 2,019 | Computation and Language |