Titles (string, lengths 6-220) | Abstracts (string, lengths 37-3.26k) | Years (int64, 1.99k-2.02k) | Categories (string, 1 class) |
---|---|---|---|
From Bilingual to Multilingual Neural Machine Translation by Incremental
Training | Current multilingual Neural Machine Translation approaches are based on
task-specific models, and adding one more language requires retraining the
whole system. In this work, we propose a new training schedule, based on joint
training and language-independent encoder/decoder modules, that allows the
system to scale to more languages without modifying previously trained
components and that enables zero-shot translation. This work in progress
achieves results close to the state of the art on the WMT task.
| 2019 | Computation and Language |
Synchronising audio and ultrasound by learning cross-modal embeddings | Audiovisual synchronisation is the task of determining the time offset
between speech audio and a video recording of the articulators. In child speech
therapy, audio and ultrasound videos of the tongue are captured using
instruments which rely on hardware to synchronise the two modalities at
recording time. Hardware synchronisation can fail in practice, and no mechanism
exists to synchronise the signals post hoc. To address this problem, we employ
a two-stream neural network which exploits the correlation between the two
modalities to find the offset. We train our model on recordings from 69
speakers, and show that it correctly synchronises 82.9% of test utterances from
unseen therapy sessions and unseen speakers, thus considerably reducing the
number of utterances to be manually synchronised. An analysis of model
performance on the test utterances shows that directed phone articulations are
more difficult to automatically synchronise compared to utterances containing
natural variation in speech such as words, sentences, or conversations.
| 2019 | Computation and Language |
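A minimal sketch of the offset-search step the abstract above describes: given per-frame embeddings from the two streams, candidate offsets are scored by cross-modal distance and the best one is returned. The function name, embedding shapes and the cosine scoring are illustrative assumptions; the paper's actual two-stream model and its training procedure are not reproduced here.

```python
import numpy as np

def find_offset(audio_emb, video_emb, max_shift=50):
    """Score candidate frame offsets by mean cosine distance between the
    two embedding streams (each of shape [T, D]) and return the best one."""
    def mean_cos_dist(a, b):
        a = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
        b = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
        return 1.0 - (a * b).sum(axis=1).mean()

    best_shift, best_dist = 0, np.inf
    for shift in range(-max_shift, max_shift + 1):
        # Slide one stream against the other and compare overlapping frames.
        if shift >= 0:
            a, v = audio_emb[shift:], video_emb[:video_emb.shape[0] - shift]
        else:
            a, v = audio_emb[:shift], video_emb[-shift:]
        n = min(len(a), len(v))
        if n > 0:
            d = mean_cos_dist(a[:n], v[:n])
            if d < best_dist:
                best_shift, best_dist = shift, d
    return best_shift
```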
Multilingual, Multi-scale and Multi-layer Visualization of Intermediate
Representations | The main alternatives nowadays for dealing with sequences are Recurrent
Neural Network (RNN) and Convolutional Neural Network (CNN) architectures and
the Transformer. In this context, RNNs, CNNs and Transformers have most
commonly been used as encoder-decoder architectures with multiple layers in
each module. Far beyond this, these architectures are the basis for the
contextual word embeddings which are revolutionizing most natural language
downstream applications. However, intermediate layer representations in
sequence-based architectures can be difficult to interpret. To make each layer
representation within these architectures more accessible and meaningful, we
introduce a web-based tool that visualizes them at both the sentence and token
level. We present three use cases. The first analyses gender issues in
contextual word embeddings. The second and third show multilingual
intermediate representations for sentences and tokens, and the evolution of
these representations across the multiple layers of the decoder, in the
context of multilingual machine translation.
| 2019 | Computation and Language |
UltraSuite: A Repository of Ultrasound and Acoustic Data from Child
Speech Therapy Sessions | We introduce UltraSuite, a curated repository of ultrasound and acoustic
data, collected from recordings of child speech therapy sessions. This release
includes three data collections, one from typically developing children and two
from children with speech sound disorders. In addition, it includes a set of
annotations, some manual and some automatically produced, and software tools to
process, transform and visualise the data.
| 2019 | Computation and Language |
EGG: a toolkit for research on Emergence of lanGuage in Games | There is renewed interest in simulating language emergence among deep neural
agents that communicate to jointly solve a task, spurred by the practical aim
to develop language-enabled interactive AIs, as well as by theoretical
questions about the evolution of human language. However, optimizing deep
architectures connected by a discrete communication channel (such as that in
which language emerges) is technically challenging. We introduce EGG, a toolkit
that greatly simplifies the implementation of emergent-language communication
games. EGG's modular design provides a set of building blocks that the user can
combine to create new games, easily navigating the optimization and
architecture space. We hope that the tool will lower the technical barrier, and
encourage researchers from various backgrounds to do original work in this
exciting area.
| 2019 | Computation and Language |
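For a sense of what EGG abstracts away, here is a generic sender/receiver signalling game with a Gumbel-Softmax relaxed discrete channel, the kind of building block the toolkit modularizes. This is a self-contained PyTorch sketch, not EGG's actual API; the vocabulary size, dimensions and reconstruction objective are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, N_FEATURES, HIDDEN = 10, 8, 32

# Sender maps an input to logits over discrete symbols; receiver decodes the
# symbol back into the feature space.
sender = nn.Sequential(nn.Linear(N_FEATURES, HIDDEN), nn.ReLU(),
                       nn.Linear(HIDDEN, VOCAB))
receiver = nn.Sequential(nn.Linear(VOCAB, HIDDEN), nn.ReLU(),
                         nn.Linear(HIDDEN, N_FEATURES))

opt = torch.optim.Adam(list(sender.parameters()) + list(receiver.parameters()))
for step in range(1000):
    x = torch.randint(0, 2, (64, N_FEATURES)).float()  # attributes to communicate
    # Gumbel-Softmax keeps the discrete channel differentiable end to end.
    message = F.gumbel_softmax(sender(x), tau=1.0, hard=True)
    recon = receiver(message)
    loss = F.binary_cross_entropy_with_logits(recon, x)  # reconstruction game
    opt.zero_grad(); loss.backward(); opt.step()
```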
Katecheo: A Portable and Modular System for Multi-Topic Question
Answering | We introduce a modular system that can be deployed on any Kubernetes cluster
for question answering via REST API. This system, called Katecheo, includes
three configurable modules that collectively enable identification of
questions, classification of those questions into topics, document search, and
reading comprehension. We demonstrate the system using publicly available
knowledge base articles extracted from Stack Exchange sites. However, users can
extend the system to any number of topics, or domains, without the need to
modify any of the model serving code or train their own models. All components
of the system are open source and available under a permissive Apache 2
License.
| 2020 | Computation and Language |
HyST: A Hybrid Approach for Flexible and Accurate Dialogue State
Tracking | Recent works on end-to-end trainable neural network based approaches have
demonstrated state-of-the-art results on dialogue state tracking. The best
performing approaches estimate a probability distribution over all possible
slot values. However, these approaches do not scale for large value sets
commonly present in real-life applications and are not ideal for tracking slot
values that were not observed in the training set. To tackle these issues,
candidate-generation-based approaches have been proposed. These approaches
estimate a set of values that are possible at each turn based on the
conversation history and/or language understanding outputs, and hence enable
state tracking over unseen values and large value sets; however, they fall
short of the first group in terms of performance. In this work, we
analyze the performance of these two alternative dialogue state tracking
methods, and present a hybrid approach (HyST) which learns the appropriate
method for each slot type. To demonstrate the effectiveness of HyST on a
rich set of slot types, we experiment with the recently released MultiWOZ-2.0
multi-domain, task-oriented dialogue dataset. Our experiments show that HyST
scales to multi-domain applications. Our best performing model yields a
relative improvement of 24% and 10% over the previous SOTA and our best
baseline, respectively.
| 2019 | Computation and Language |
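A hypothetical sketch of HyST's core selection idea: per slot, keep whichever tracking strategy scores better on development data. The function name, the "fixed"/"open" labels and the data structures are assumptions for illustration; the paper's actual trackers are neural models, not shown here.

```python
# Pick, for each slot, the better of two strategies: a fixed-vocabulary
# classifier over all known values vs. an open-vocabulary candidate ranker.
def build_hybrid(slots, fixed_acc, open_acc):
    """fixed_acc / open_acc: dict mapping slot name -> dev-set accuracy."""
    return {slot: ("fixed" if fixed_acc[slot] >= open_acc[slot] else "open")
            for slot in slots}

choice = build_hybrid(
    slots=["hotel-area", "train-leaveat"],
    fixed_acc={"hotel-area": 0.95, "train-leaveat": 0.60},
    open_acc={"hotel-area": 0.90, "train-leaveat": 0.81},
)
print(choice)  # {'hotel-area': 'fixed', 'train-leaveat': 'open'}
```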
Post-editese: an Exacerbated Translationese | Post-editing (PE) machine translation (MT) is widely used for dissemination
because it leads to higher productivity than human translation from scratch
(HT). In addition, PE translations are found to be of equal or better quality
than HTs. However, most such studies measure quality solely as the number of
errors. We conduct a set of computational analyses in which we compare PE
against HT on three different datasets that cover five translation directions
with measures that address different translation universals and laws of
translation: simplification, normalisation and interference. We find that
PEs are simpler and more normalised and have a higher degree of interference
from the source language than HTs.
| 2019 | Computation and Language |
Claim Extraction in Biomedical Publications using Deep Discourse Model
and Transfer Learning | Claims are a fundamental unit of scientific discourse. The exponential growth
in the number of scientific publications makes automatic claim extraction an
important problem for researchers who are overwhelmed by this information
overload. Such an automated claim extraction system is useful for both manual
and programmatic exploration of scientific knowledge. In this paper, we
introduce a new dataset of 1,500 scientific abstracts from the biomedical
domain with expert annotations for each sentence indicating whether the
sentence presents a scientific claim. We introduce a new model for claim
extraction and compare it to several baseline models including rule-based and
deep learning techniques. Moreover, we show that a transfer learning approach
with a fine-tuning step allows us to leverage a large discourse-annotated
dataset to improve performance. Our final model increases the F1-score by over
14 percentage points compared to a baseline model without transfer learning. We
release a publicly accessible tool for discourse and claims prediction along
with an annotation tool. We discuss further applications beyond biomedical
literature.
| 2020 | Computation and Language |
Natural Language Understanding with the Quora Question Pairs Dataset | This paper explores the task of Natural Language Understanding (NLU) by looking
at duplicate question detection in the Quora dataset. We conducted extensive
exploration of the dataset and used various machine learning models, including
linear and tree-based models. Our final finding was that a simple Continuous
Bag of Words neural network model had the best performance, outdoing more
complicated recurrent and attention based models. We also conducted error
analysis and found some subjectivity in the labeling of the dataset.
| 2019 | Computation and Language |
Is artificial data useful for biomedical Natural Language Processing
algorithms? | A major obstacle to the development of Natural Language Processing (NLP)
methods in the biomedical domain is data accessibility. This problem can be
addressed by generating medical data artificially. Most previous studies have
focused on the generation of short clinical text, and evaluation of the data
utility has been limited. We propose a generic methodology to guide the
generation of clinical text with key phrases. We use the artificial data as
additional training data in two key biomedical NLP tasks: text classification
and temporal relation extraction. We show that artificially generated training
data used in conjunction with real training data can lead to performance boosts
for data-greedy neural network algorithms. We also demonstrate the usefulness
of the generated data for NLP setups where it fully replaces real training
data.
| 2019 | Computation and Language |
Neural Machine Reading Comprehension: Methods and Trends | Machine reading comprehension (MRC), which requires a machine to answer
questions based on a given context, has attracted increasing attention with the
incorporation of various deep-learning techniques over the past few years.
Although research on MRC based on deep learning is flourishing, there remains a
lack of a comprehensive survey summarizing existing approaches and recent
trends, which motivated the work presented in this article. Specifically, we
give a thorough review of this research field, covering different aspects
including (1) typical MRC tasks: their definitions, differences, and
representative datasets; (2) the general architecture of neural MRC: the main
modules and prevalent approaches to each; and (3) new trends: some emerging
areas in neural MRC as well as the corresponding challenges. Finally,
considering what has been achieved so far, the survey also envisages what the
future may hold by discussing the open issues left to be addressed.
| 2019 | Computation and Language |
Multimodal Transformer Networks for End-to-End Video-Grounded Dialogue
Systems | Developing Video-Grounded Dialogue Systems (VGDS), where a dialogue is
conducted based on visual and audio aspects of a given video, is significantly
more challenging than traditional image or text-grounded dialogue systems
because (1) the feature space of videos spans multiple picture frames, making
it difficult to obtain semantic information; and (2) a dialogue agent must
perceive and process information from different modalities (audio, video,
caption, etc.) to obtain a comprehensive understanding. Most existing work is
based on RNNs and sequence-to-sequence architectures, which are not very
effective for capturing complex long-term dependencies (like in videos). To
overcome this, we propose Multimodal Transformer Networks (MTN) to encode
videos and incorporate information from different modalities. We also propose
query-aware attention through an auto-encoder to extract query-aware features
from non-text modalities. We develop a training procedure to simulate
token-level decoding to improve the quality of generated responses during
inference. We achieve state-of-the-art performance on the Dialogue System
Technology Challenge 7 (DSTC7). Our model also generalizes to another multimodal
visual-grounded dialogue task, and obtains promising performance. We
implemented our models using PyTorch and the code is released at
https://github.com/henryhungle/MTN.
| 2019 | Computation and Language |
A Neural Grammatical Error Correction System Built On Better
Pre-training and Sequential Transfer Learning | Grammatical error correction can be viewed as a low-resource
sequence-to-sequence task, because publicly available parallel corpora are
limited. To tackle this challenge, we first generate erroneous versions of
large unannotated corpora using a realistic noising function. The resulting
parallel corpora are subsequently used to pre-train Transformer models. Then,
by sequentially applying transfer learning, we adapt these models to the domain
and style of the test set. Combined with a context-aware neural spellchecker,
our system achieves competitive results in both the restricted and low-resource
tracks of the ACL 2019 BEA Shared Task. We release all of our code and materials
for reproducibility.
| 2019 | Computation and Language |
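A toy version of the noising idea described above: corrupt clean text with confusion-set substitutions, deletions and duplications, so that (noisy, clean) pairs can serve as synthetic parallel data for pre-training. The confusion sets and probabilities here are illustrative and far simpler than the realistic noising function the paper uses.

```python
import random

def noise_sentence(tokens, p=0.1, confusions=None):
    """Inject grammatical-error-like noise into a clean token list."""
    confusions = confusions or {"the": ["a", ""], "a": ["the", ""],
                                "is": ["are"], "are": ["is"]}
    out = []
    for tok in tokens:
        r = random.random()
        if tok in confusions and r < p:
            repl = random.choice(confusions[tok])
            if repl:                       # substitution...
                out.append(repl)           # ...or deletion when repl == ""
        elif r < p / 2:
            out.append(tok); out.append(tok)  # accidental duplication
        else:
            out.append(tok)
    return out

print(noise_sentence("the cats are on the mat".split()))
```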
Discourse Understanding and Factual Consistency in Abstractive
Summarization | We introduce a general framework for abstractive summarization with factual
consistency and distinct modeling of the narrative flow in an output summary.
Our work addresses current limitations of models for abstractive summarization
that often hallucinate information or generate summaries with coherence issues.
To generate abstractive summaries with factual consistency and narrative
flow, we propose Cooperative Generator-Discriminator Networks (Co-opNet), a
novel transformer-based framework where a generator works with a discriminator
architecture to compose coherent long-form summaries. We explore four different
discriminator objectives which each capture a different aspect of coherence,
including whether salient spans of generated abstracts are hallucinated or
appear in the input context, and the likelihood of sentence adjacency in
generated abstracts. We measure the ability of Co-opNet to learn these
objectives with arXiv scientific papers, using the abstracts as a proxy for
gold long-form scientific article summaries. Empirical results from automatic
and human evaluations demonstrate that Co-opNet learns to summarize with
considerably improved global coherence compared to competitive baselines.
| 2021 | Computation and Language |
Improving Robustness in Real-World Neural Machine Translation Engines | As a commercial provider of machine translation, we are constantly training
engines for a variety of uses, languages, and content types. In each case,
there can be many variables, such as the amount of training data available, and
the quality requirements of the end user. These variables can have an impact on
the robustness of Neural MT engines. On the whole, Neural MT cures many ills of
other MT paradigms, but at the same time, it has introduced a new set of
challenges to address. In this paper, we describe some of the specific issues
with practical NMT and the approaches we take to improve model robustness in
real-world scenarios.
| 2019 | Computation and Language |
Latent Dirichlet Allocation Based Acoustic Data Selection for Automatic
Speech Recognition | Selecting in-domain data from a large pool of diverse and out-of-domain data
is a non-trivial problem. In most cases simply using all of the available data
will lead to sub-optimal and in some cases even worse performance compared to
carefully selecting a matching set. This is true even for data-inefficient
neural models. Acoustic Latent Dirichlet Allocation (aLDA) is shown to be
useful in a variety of speech technology related tasks, including domain
adaptation of acoustic models for automatic speech recognition and entity
labeling for information retrieval. In this paper we propose to use aLDA as a
data similarity criterion in a data selection framework. Given a large pool of
out-of-domain and potentially mismatched data, the task is to select the
best-matching training data to a set of representative utterances sampled from
a target domain. Our target data consists of around 32 hours of meeting data
(both far-field and close-talk) and the pool contains 2k hours of meeting,
talks, voice search, dictation, command-and-control, audio books, lectures,
generic media and telephony speech data. The proposed technique for training
data selection significantly outperforms random selection, posterior-based
selection, as well as using all of the available data.
| 2019 | Computation and Language |
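An illustrative sketch of LDA-based selection under simplifying assumptions: frames are vector-quantized into discrete "acoustic words", an LDA model is fit on the resulting bags of words, and pool utterances are ranked by the similarity of their topic posteriors to the target-domain centroid. Function names and hyperparameters are placeholders; the paper's aLDA pipeline differs in detail.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

def select_data(target_utts, pool_utts, n_words=256, n_topics=20, top_k=100):
    """target_utts / pool_utts: lists of [T_i, D] frame-feature arrays."""
    frames = np.vstack(target_utts + pool_utts)
    km = KMeans(n_clusters=n_words, n_init=3).fit(frames)  # acoustic "words"

    def to_counts(utt):  # bag-of-acoustic-words representation
        return np.bincount(km.predict(utt), minlength=n_words)

    X_tgt = np.array([to_counts(u) for u in target_utts])
    X_pool = np.array([to_counts(u) for u in pool_utts])
    lda = LatentDirichletAllocation(n_components=n_topics)
    lda.fit(np.vstack([X_tgt, X_pool]))
    tgt_centroid = lda.transform(X_tgt).mean(axis=0)
    sims = lda.transform(X_pool) @ tgt_centroid   # topic-space similarity
    return np.argsort(-sims)[:top_k]              # best-matching pool indices
```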
Danish Stance Classification and Rumour Resolution | The Internet is rife with flourishing rumours that spread through microblogs
and social media. Recent work has shown that analysing the stance of the crowd
towards a rumour is a good indicator for its veracity. One state-of-the-art
system uses an LSTM neural network to automatically classify stance for posts
on Twitter by considering the context of a whole branch, while another,
simpler Decision Tree classifier performs at least as well through careful
feature engineering. One approach to predict the veracity of a rumour
is to use stance as the only feature for a Hidden Markov Model (HMM). This
thesis generates a stance-annotated Reddit dataset for the Danish language, and
implements various models for stance classification. Out of these, a Linear
Support Vector Machine provides the best results with an accuracy of 0.76 and
macro F1 score of 0.42. Furthermore, experiments show that stance labels can be
used across languages and platforms with a HMM to predict the veracity of
rumours, achieving an accuracy of 0.82 and F1 score of 0.67. Even higher scores
are achieved by relying only on the Danish dataset. In this case veracity
prediction scores an accuracy of 0.83 and an F1 of 0.68. Finally, when using
automatic stance labels for the HMM, only a small drop in performance is
observed, showing that the implemented system can have practical applications.
| 2019 | Computation and Language |
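A small sketch of the HMM veracity idea mentioned above: one HMM per class (true rumour / false rumour) over stance-label sequences, with a new rumour classified by whichever model assigns the higher likelihood. All parameters below are illustrative, not learned from the thesis data.

```python
import numpy as np

STANCES = ["support", "deny", "query", "comment"]

def forward_loglik(seq, start, trans, emit):
    """Standard forward algorithm; seq is a list of stance indices."""
    alpha = start * emit[:, seq[0]]
    for obs in seq[1:]:
        alpha = (alpha @ trans) * emit[:, obs]
    return np.log(alpha.sum() + 1e-300)

# Two hidden states with illustrative transition/emission parameters.
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit_true  = np.array([[0.5, 0.1, 0.1, 0.3], [0.3, 0.1, 0.2, 0.4]])
emit_false = np.array([[0.1, 0.4, 0.3, 0.2], [0.2, 0.3, 0.2, 0.3]])

seq = [STANCES.index(s) for s in ["support", "comment", "deny", "deny"]]
ll_true = forward_loglik(seq, start, trans, emit_true)
ll_false = forward_loglik(seq, start, trans, emit_false)
print("true" if ll_true > ll_false else "false")
```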
Sequence Labeling Parsing by Learning Across Representations | We use parsing as sequence labeling as a common framework to learn across
constituency and dependency syntactic abstractions. To do so, we cast the
problem as multitask learning (MTL). First, we show that adding a parsing
paradigm as an auxiliary loss consistently improves the performance on the
other paradigm. Secondly, we explore an MTL sequence labeling model that parses
both representations, at almost no cost in terms of performance and speed. The
results across the board show that on average MTL models with auxiliary losses
for constituency parsing outperform single-task ones by 1.14 F1 points, and for
dependency parsing by 0.62 UAS points.
| 2020 | Computation and Language |
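A minimal PyTorch sketch of the multitask setup: a shared encoder with one labeling head per parsing paradigm, where one loss is weighted down as an auxiliary objective. The dimensions, label-set sizes and the 0.5 weight are placeholders, not values from the paper.

```python
import torch
import torch.nn as nn

class MTLParser(nn.Module):
    def __init__(self, vocab=10000, dim=128, n_const=200, n_dep=150):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.const_head = nn.Linear(2 * dim, n_const)  # constituency labels
        self.dep_head = nn.Linear(2 * dim, n_dep)      # dependency labels

    def forward(self, tokens):
        h, _ = self.encoder(self.embed(tokens))
        return self.const_head(h), self.dep_head(h)

model = MTLParser()
ce = nn.CrossEntropyLoss()
tokens = torch.randint(0, 10000, (8, 20))
const_gold = torch.randint(0, 200, (8, 20))
dep_gold = torch.randint(0, 150, (8, 20))
const_logits, dep_logits = model(tokens)
# Dependency parsing as the main task, constituency as the auxiliary loss.
loss = ce(dep_logits.reshape(-1, 150), dep_gold.reshape(-1)) \
     + 0.5 * ce(const_logits.reshape(-1, 200), const_gold.reshape(-1))
loss.backward()
```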
Constructing large scale biomedical knowledge bases from scratch with
rapid annotation of interpretable patterns | Knowledge base construction is crucial for summarising, understanding and
inferring relationships between biomedical entities. However, for many
practical applications such as drug discovery, the scarcity of relevant facts
(e.g. gene X is therapeutic target for disease Y) severely limits a domain
expert's ability to create a usable knowledge base, either directly or by
training a relation extraction model.
In this paper, we present a simple and effective method of extracting new
facts with a pre-specified binary relationship type from the biomedical
literature, without requiring any training data or hand-crafted rules. Our
system discovers, ranks and presents the most salient patterns to domain
experts in an interpretable form. By marking patterns as compatible with the
desired relationship type, experts indirectly batch-annotate candidate pairs
whose relationship is expressed with such patterns in the literature. Even with
a complete absence of seed data, experts are able to discover thousands of
high-quality pairs with the desired relationship within minutes. When a small
number of relevant pairs do exist - even when their relationship is more
general (e.g. gene X is biologically associated with disease Y) than the
relationship of interest - our system leverages them in order to i) learn a
better ranking of the patterns to be annotated or ii) generate weakly labelled
pairs in a fully automated manner.
We evaluate our method both intrinsically and via a downstream knowledge base
completion task, and show that it is an effective way of constructing knowledge
bases when few or no relevant facts are already available.
| 2019 | Computation and Language |
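A stripped-down sketch of the pattern-discovery loop under strong simplifying assumptions: the text between co-occurring entity mentions is treated as a candidate pattern, patterns are ranked by frequency for expert review, and accepted patterns batch-annotate every pair they match. Entity tagging is assumed to happen upstream, and the paper's patterns and ranking are more sophisticated than raw surface strings.

```python
from collections import Counter

def mine_patterns(sentences):
    """sentences: list of (text, entity1, entity2) triples."""
    patterns = Counter()
    for text, e1, e2 in sentences:
        i, j = text.find(e1), text.find(e2)
        if -1 < i < j:  # candidate pattern = text between the two mentions
            patterns[text[i + len(e1):j].strip()] += 1
    return patterns.most_common()  # present to the expert, most salient first

def annotate(sentences, accepted_patterns):
    """Batch-annotate all pairs whose relationship matches an accepted pattern."""
    pairs = set()
    for text, e1, e2 in sentences:
        i, j = text.find(e1), text.find(e2)
        if -1 < i < j and text[i + len(e1):j].strip() in accepted_patterns:
            pairs.add((e1, e2))
    return pairs
```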
How we do things with words: Analyzing text as social and cultural data | In this article we describe our experiences with computational text analysis.
We hope to achieve three primary goals. First, we aim to shed light on thorny
issues not always at the forefront of discussions about computational text
analysis methods. Second, we hope to provide a set of best practices for
working with thick social and cultural concepts. Our guidance is based on our
own experiences and is therefore inherently imperfect. Still, given our
diversity of disciplinary backgrounds and research practices, we hope to
capture a range of ideas and identify commonalities that will resonate for
many. And this leads to our final goal: to help promote interdisciplinary
collaborations. Interdisciplinary insights and partnerships are essential for
realizing the full potential of any computational text analysis that involves
social and cultural concepts, and the more we are able to bridge these divides,
the more fruitful we believe our work will be.
| 2020 | Computation and Language |
CS563-QA: A Collection for Evaluating Question Answering Systems | Question Answering (QA) is a challenging topic, since it requires tackling the
various difficulties of natural language understanding. Since evaluation is
important not only for identifying the strong and weak points of the various
techniques for QA, but also for facilitating the inception of new methods and
techniques, in this paper we present a collection that we have created for
evaluating QA methods over free text. Although it is a small collection, it
contains cases of increasing difficulty; it therefore has educational value
and can be used for rapid evaluation of QA systems.
| 2021 | Computation and Language |
Data mining Mandarin tone contour shapes | In spontaneous speech, Mandarin tones that belong to the same tone category
may exhibit many different contour shapes. We explore the use of data mining
and NLP techniques for understanding the variability of tones in a large corpus
of Mandarin newscast speech. First, we adapt a graph-based approach to
characterize the clusters (fuzzy types) of tone contour shapes observed in each
tone n-gram category. Second, we show correlations between these realized
contour shape types and a bag of automatically extracted linguistic features.
We discuss the implications of the current study within the context of
phonological and information theory.
| 2019 | Computation and Language |
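For intuition, here is a sketch of contour-shape clustering with plain k-means substituted for the paper's graph-based fuzzy clustering: each F0 contour is resampled to a fixed length, z-normalized, and clustered. All settings are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans

def contour_types(contours, n_points=30, k=5):
    """contours: list of variable-length 1-D F0 tracks."""
    X = []
    for f0 in contours:
        t = np.linspace(0, 1, len(f0))
        # Resample to a common length so shapes are comparable.
        resampled = np.interp(np.linspace(0, 1, n_points), t, f0)
        # Z-normalize so clustering reflects shape, not pitch level/range.
        X.append((resampled - resampled.mean()) / (resampled.std() + 1e-8))
    km = KMeans(n_clusters=k, n_init=5).fit(np.array(X))
    return km.labels_, km.cluster_centers_  # cluster ids and prototype shapes
```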
MultiWOZ 2.1: A Consolidated Multi-Domain Dialogue Dataset with State
Corrections and State Tracking Baselines | MultiWOZ 2.0 (Budzianowski et al., 2018) is a recently released multi-domain
dialogue dataset spanning 7 distinct domains and containing over 10,000
dialogues. Though immensely useful and one of the largest resources of its kind
to date, MultiWOZ 2.0 has a few shortcomings. Firstly, there is substantial
noise in the dialogue state annotations and dialogue utterances, which
negatively impacts the performance of state-tracking models. Secondly, follow-up
work (Lee et al., 2019) has augmented the original dataset with user dialogue
acts. This leads to multiple co-existent versions of the same dataset with
minor modifications. In this work we tackle the aforementioned issues by
introducing MultiWOZ 2.1. To fix the noisy state annotations, we use
crowdsourced workers to re-annotate state and utterances based on the original
utterances in the dataset. This correction process results in changes to over
32% of state annotations across 40% of the dialogue turns. In addition, we fix
146 dialogue utterances by canonicalizing slot values in the utterances to the
values in the dataset ontology. To address the second problem, we combined the
contributions of the follow-up works into MultiWOZ 2.1. Hence, our dataset also
includes user dialogue acts as well as multiple slot descriptions per dialogue
state slot. We then benchmark a number of state-of-the-art dialogue state
tracking models on the MultiWOZ 2.1 dataset and show the joint state tracking
performance on the corrected state annotations. We are publicly releasing
MultiWOZ 2.1 to the community, hoping that this dataset resource will allow for
more effective models across various dialogue subproblems to be built in the
future.
| 2019 | Computation and Language |
Scalable Multi Corpora Neural Language Models for ASR | Neural language models (NLM) have been shown to outperform conventional
n-gram language models by a substantial margin in Automatic Speech Recognition
(ASR) and other tasks. There are, however, a number of challenges that need to
be addressed for an NLM to be used in a practical large-scale ASR system. In
this paper, we present solutions to some of the challenges, including training
NLM from heterogeneous corpora, limiting latency impact and handling
personalized bias in the second-pass rescorer. Overall, we show that we can
achieve a 6.2% relative WER reduction using neural LM in a second-pass n-best
rescoring framework with a minimal increase in latency.
| 2019 | Computation and Language |
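The second-pass rescoring recipe mentioned above, in schematic form: interpolate the first-pass score with the neural LM score and re-rank the n-best list. The interpolation weight and the scoring callable are assumptions; in practice the weight is tuned on held-out data.

```python
def rescore(nbest, nlm_logprob, lam=0.6):
    """nbest: list of (hypothesis_tokens, first_pass_score) pairs;
    nlm_logprob: callable returning the NLM log-probability of a hypothesis."""
    rescored = [(hyp, (1 - lam) * score + lam * nlm_logprob(hyp))
                for hyp, score in nbest]
    return max(rescored, key=lambda x: x[1])[0]  # best hypothesis after rescoring
```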
Machine Reading Comprehension: a Literature Review | Machine reading comprehension aims to teach machines to understand a text
like a human and is a new challenging direction in Artificial Intelligence.
This article summarizes recent advances in MRC, mainly focusing on two aspects
(i.e., corpora and techniques). The specific characteristics of various MRC
corpora are listed and compared. The main ideas of some typical MRC techniques
are also described.
| 2019 | Computation and Language |
Polyphone Disambiguation for Mandarin Chinese Using Conditional Neural
Network with Multi-level Embedding Features | This paper describes a conditional neural network architecture for Mandarin
Chinese polyphone disambiguation. The system is composed of a bidirectional
recurrent neural network component acting as a sentence encoder to accumulate
the context correlations, followed by a prediction network that maps the
polyphonic character embeddings along with the conditions to corresponding
pronunciations. We obtain the word-level condition from a pre-trained
word-to-vector lookup table. One goal of polyphone disambiguation is to address
the homograph problem existing in the front-end processing of Mandarin Chinese
text-to-speech system. Our system achieves an accuracy of 94.69% on a publicly
available polyphonic character dataset. To further validate our choices on the
conditional feature, we investigate polyphone disambiguation systems with
multi-level conditions respectively. The experimental results show that both
the sentence-level and the word-level conditional embedding features are able
to attain good performance for Mandarin Chinese polyphone disambiguation.
| 2019 | Computation and Language |
On the Weaknesses of Reinforcement Learning for Neural Machine
Translation | Reinforcement learning (RL) is frequently used to increase performance in
text generation tasks, including machine translation (MT), notably through the
use of Minimum Risk Training (MRT) and Generative Adversarial Networks (GAN).
However, little is known about what and how these methods learn in the context
of MT. We prove that one of the most common RL methods for MT does not optimize
the expected reward, as well as show that other methods take an infeasibly long
time to converge. In fact, our results suggest that RL practices in MT are
likely to improve performance only where the pre-trained parameters are already
close to yielding the correct translation. Our findings further suggest that
observed gains may be due to effects unrelated to the training signal, arising
instead from changes in the shape of the distribution curve.
| 2020 | Computation and Language |
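For reference, the Minimum Risk Training objective mentioned above is commonly written (following Shen et al., 2016) as an expected risk over a sampled candidate set; a standard formulation is:

```latex
% MRT minimizes expected risk over a sampled candidate set S(x), where
% \Delta(y, y^*) is a sentence-level cost (e.g. 1 - BLEU) and \alpha
% sharpens or flattens the renormalized distribution Q.
\mathcal{R}(\theta) = \sum_{(x,\, y^*)} \sum_{y \in S(x)}
    Q(y \mid x;\, \theta, \alpha)\, \Delta(y, y^*),
\qquad
Q(y \mid x;\, \theta, \alpha) =
    \frac{p(y \mid x;\, \theta)^{\alpha}}
         {\sum_{y' \in S(x)} p(y' \mid x;\, \theta)^{\alpha}}
```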
Multi-Task Networks With Universe, Group, and Task Feature Learning | We present methods for multi-task learning that take advantage of natural
groupings of related tasks. Task groups may be defined along known properties
of the tasks, such as task domain or language. Such task groups represent
supervised information at the inter-task level and can be encoded into the
model. We investigate two variants of neural network architectures that
accomplish this, learning different feature spaces at the levels of individual
tasks, task groups, as well as the universe of all tasks: (1) parallel
architectures encode each input simultaneously into feature spaces at different
levels; (2) serial architectures encode each input successively into feature
spaces at different levels in the task hierarchy. We demonstrate the methods on
natural language understanding (NLU) tasks, where a grouping of tasks into
different task domains leads to improved performance on ATIS, Snips, and a
large inhouse dataset.
| 2019 | Computation and Language |
Depth Growing for Neural Machine Translation | While very deep neural networks have shown effectiveness for computer vision
and text classification applications, how to increase the network depth of
neural machine translation (NMT) models for better translation quality remains
a challenging problem. Directly stacking more blocks onto the NMT model results
in no improvement and even reduces performance. In this work, we propose an
effective two-stage approach with three specially designed components to
construct deeper NMT models, which result in significant improvements over the
strong Transformer baselines on the WMT14 English→German and
English→French translation tasks. (Our code is available at
https://github.com/apeterswu/Depth_Growing_NMT.)
| 2019 | Computation and Language |
Real-time Claim Detection from News Articles and Retrieval of
Semantically-Similar Factchecks | Factchecking has always been a part of the journalistic process. However,
with newsroom budgets shrinking, it is coming under increasing pressure just as
the amount of false information in circulation is on the rise. We therefore
propose a method to increase the efficiency of the factchecking process, using
the latest developments in Natural Language Processing (NLP). This method
allows us to compare incoming claims to an existing corpus and return similar,
factchecked claims in a live system, allowing factcheckers to work
simultaneously without duplicating their work.
| 2019 | Computation and Language |
Deep neural network-based classification model for Sentiment Analysis | The growing prosperity of social networks has brought great challenges to
mining the sentimental tendencies of users. As more and more researchers pay
attention to the sentimental tendencies of online users, rich research results
have been obtained for the sentiment classification of explicit texts.
However, research on the implicit sentiment of users is still in its infancy.
Given the difficulty of implicit sentiment classification, we carry out a
study of implicit sentiment classification models based on deep neural
networks. Classification models based on DNN, LSTM, Bi-LSTM and CNN were
established to judge the tendency of the user's implicit sentiment text. Based
on the Bi-LSTM model, a classification model with a word-level attention
mechanism is also studied. The experimental results on a public dataset show
that the established LSTM-series classification models and the CNN
classification model achieve a good sentiment classification effect,
significantly better than the DNN model. The Bi-LSTM-based attention mechanism
classification model obtained the optimal R value in positive category
identification.
| 2019 | Computation and Language |
Patent Claim Generation by Fine-Tuning OpenAI GPT-2 | In this work, we focus on fine-tuning an OpenAI GPT-2 pre-trained model for
generating patent claims. GPT-2 has demonstrated impressive efficacy of
pre-trained language models on various tasks, particularly coherent text
generation. Patent claim language itself has rarely been explored in the past
and poses a unique challenge. We are motivated to generate coherent patent
claims automatically so that augmented inventing might be viable someday. In
our implementation, we identified a unique language structure in patent claims
and leveraged its implicit human annotations. We investigated the fine-tuning
process by probing the first 100 steps and observing the generated text at each
step. Based on both conditional and unconditional random sampling, we analyze
the overall quality of generated patent claims. Our contributions include: (1)
being the first to generate patent claims by machines and being the first to
apply GPT-2 to patent claim generation, (2) providing various experiment
results for qualitative analysis and future research, (3) proposing a new
sampling approach for text generation, and (4) building an e-mail bot for
future researchers to explore the fine-tuned GPT-2 model further.
| 2019 | Computation and Language |
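The general fine-tuning recipe can be sketched with the Hugging Face transformers library, assuming patent claims have been collected one per line in a hypothetical claims.txt; this mirrors the standard causal-LM fine-tuning setup, not the authors' exact configuration.

```python
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast,
                          Trainer, TrainingArguments)
from datasets import load_dataset

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# claims.txt is a hypothetical file with one patent claim per line.
data = load_dataset("text", data_files={"train": "claims.txt"})

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512,
                    padding="max_length")
    enc["labels"] = enc["input_ids"].copy()  # causal LM: labels = inputs
    return enc

train = data["train"].map(tokenize, batched=True, remove_columns=["text"])
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-patent", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
)
trainer.train()
```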
Neural Image Captioning | In recent years, the biggest advances in major Computer Vision tasks, such as
object recognition, handwritten-digit identification, facial recognition, and
many others, have all come through the use of Convolutional Neural Networks
(CNNs). Similarly, in the domain of Natural Language Processing, Recurrent
Neural Networks (RNNs), and Long Short Term Memory networks (LSTMs) in
particular, have been crucial to some of the biggest breakthroughs in
performance for tasks such as machine translation, part-of-speech tagging,
sentiment analysis, and many others. These individual advances have greatly
benefited tasks even at the intersection of NLP and Computer Vision, and
inspired by this success, we studied some existing neural image captioning
models that have proven to work well. In this work, we study some existing
captioning models that provide near state-of-the-art performance, and try to
enhance one such model. We also present a simple image captioning model that
makes use of a CNN, an LSTM, and the beam search algorithm, and study its
performance based on various qualitative and quantitative metrics.
| 2019 | Computation and Language |
Learning Multi-Party Turn-Taking Models from Dialogue Logs | This paper investigates the application of machine learning (ML) techniques
to enable intelligent systems to learn multi-party turn-taking models from
dialogue logs. The specific ML task consists of determining who speaks next,
after each utterance of a dialogue, given who has spoken and what was said in
the previous utterances. With this goal, this paper presents comparisons of the
accuracy of different ML techniques such as Maximum Likelihood Estimation
(MLE), Support Vector Machines (SVM), and Convolutional Neural Networks (CNN)
architectures, with and without utterance data. We present three corpora: the
first with dialogues from an American TV situated comedy (chit-chat), the
second with logs from a financial advice multi-bot system and the third with a
corpus created from the Multi-Domain Wizard-of-Oz dataset (both are
topic-oriented). The results show: (i) the size of the corpus has a very
positive impact on the accuracy for the content-based deep learning approaches
and those models perform best in the larger datasets; and (ii) if the dialogue
dataset is small and topic-oriented (but with few topics), it is sufficient to
use agent-only MLE or SVM models, although slightly higher accuracies can be
achieved by using the content of the utterances with a CNN model.
| 2019 | Computation and Language |
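A sketch of the agent-only MLE baseline from the abstract above: estimate the most likely next speaker from speaker-transition counts, ignoring utterance content entirely. The data format and function names are assumptions.

```python
from collections import Counter, defaultdict

def fit_mle(dialogues):
    """dialogues: list of speaker-id sequences, one per dialogue."""
    counts = defaultdict(Counter)
    for speakers in dialogues:
        for prev, nxt in zip(speakers, speakers[1:]):
            counts[prev][nxt] += 1  # P(next speaker | previous speaker)
    return {prev: c.most_common(1)[0][0] for prev, c in counts.items()}

model = fit_mle([["A", "B", "A", "C"], ["A", "B", "A", "B"]])
print(model["A"])  # most likely speaker after A, here 'B'
```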
Use of OWL and Semantic Web Technologies at Pinterest | Pinterest is a popular Web application that has over 250 million active
users. It is a visual discovery engine for finding ideas for recipes, fashion,
weddings, home decoration, and much more. In the last year, the company adopted
Semantic Web technologies to create a knowledge graph that aims to represent
the vast amount of content and users on Pinterest, to help both content
recommendation and ads targeting. In this paper, we present the engineering of
an OWL ontology---the Pinterest Taxonomy---that forms the core of Pinterest's
knowledge graph, the Pinterest Taste Graph. We describe modeling choices and
enhancements to WebProtégé that we used for the creation of the ontology.
In two months, eight Pinterest engineers, without prior experience of OWL and
WebProtégé, revamped an existing taxonomy of noisy terms into an OWL
ontology. We share our experience and present the key aspects of our work that
we believe will be useful for others working in this area.
| 2020 | Computation and Language |
An External Knowledge Enhanced Multi-label Charge Prediction Approach
with Label Number Learning | Multi-label charge prediction is the task of predicting the corresponding
accusations for legal cases, and it has recently become a hot topic. However,
current studies use rough methods to deal with the label number: they manually
set parameters to select label numbers, which affects final prediction
quality. We propose an external knowledge enhanced multi-label charge
prediction approach that has two phases. One is a charge label prediction
phase with external knowledge from law provisions; the other is a number
learning phase with a specially designed number learning network (NLN). Our
approach, enhanced by external knowledge, can automatically adjust the
threshold to obtain the label number of law cases. It combines the output
probabilities of samples and their corresponding label numbers to get final
prediction results. In experiments, our approach is connected to several
state-of-the-art deep learning models. Testing on the largest published
Chinese law dataset, we find that our approach improves on these models. We
further conduct experiments on multi-label samples from the dataset. In terms
of macro-F1, the improvement of baselines with our approach is 3%-5%; in terms
of micro-F1, the significant improvement of our approach is 5%-15%. The
experimental results show the effectiveness of our approach for multi-label
charge prediction.
| 2019 | Computation and Language |
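The thresholding idea can be sketched as follows: rather than a fixed global cutoff, keep the top-k charge labels where k is the label number predicted for the sample. The NLN itself is abstracted away as an input here; names and values are illustrative.

```python
import numpy as np

def predict_charges(label_probs, predicted_k):
    """label_probs: [n_labels] array of per-charge probabilities;
    predicted_k: label count produced by the number-learning network."""
    order = np.argsort(-label_probs)          # charges sorted by probability
    return sorted(order[:predicted_k].tolist())

probs = np.array([0.9, 0.05, 0.6, 0.55, 0.1])
print(predict_charges(probs, predicted_k=3))  # -> [0, 2, 3]
```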
A Comparative Analysis of Knowledge-Intensive and Data-Intensive
Semantic Parsers | We present a phenomenon-oriented comparative analysis of the two dominant
approaches in task-independent semantic parsing: classic, knowledge-intensive
and neural, data-intensive models. To reflect state-of-the-art neural NLP
technologies, we introduce a new target structure-centric parser that can
produce semantic graphs much more accurately than previous data-driven parsers.
We then show that, in spite of comparable performance overall, knowledge- and
data-intensive models produce different types of errors, in a way that can be
explained by their theoretical properties. This analysis leads to new
directions for parser development.
| 2020 | Computation and Language |
Interactive-Predictive Neural Machine Translation through Reinforcement
and Imitation | We propose an interactive-predictive neural machine translation framework for
easier model personalization using reinforcement and imitation learning. During
the interactive translation process, the user is asked for feedback on
uncertain locations identified by the system. Responses are weak feedback in
the form of "keep" and "delete" edits, and expert demonstrations in the form of
"substitute" edits. Conditioning on the collected feedback, the system creates
alternative translations via constrained beam search. In simulation experiments
on two language pairs our systems get close to the performance of supervised
training with much less human effort.
| 2019 | Computation and Language |
Morphological Word Embeddings | Linguistic similarity is multi-faceted. For instance, two words may be
similar with respect to semantics, syntax, or morphology inter alia. Continuous
word-embeddings have been shown to capture most of these shades of similarity
to some degree. This work considers guiding word-embeddings with
morphologically annotated data, a form of semi-supervised learning, encouraging
the vectors to encode a word's morphology, i.e., words close in the embedded
space share morphological features. We extend the log-bilinear model to this
end and show that indeed our learned embeddings achieve this, using German as a
case study.
| 2019 | Computation and Language |
Multi-Task Learning for Coherence Modeling | We address the task of assessing discourse coherence, an aspect of text
quality that is essential for many NLP tasks, such as summarization and
language assessment. We propose a hierarchical neural network trained in a
multi-task fashion that learns to predict a document-level coherence score (at
the network's top layers) along with word-level grammatical roles (at the
bottom layers), taking advantage of inductive transfer between the two tasks.
We assess the extent to which our framework generalizes to different domains
and prediction tasks, and demonstrate its effectiveness not only on standard
binary evaluation coherence tasks, but also on real-world tasks involving the
prediction of varying degrees of coherence, achieving a new state of the art.
| 2019 | Computation and Language |
Transfer Learning for Risk Classification of Social Media Posts: Model
Evaluation Study | Mental illness affects a significant portion of the worldwide population.
Online mental health forums can provide a supportive environment for those
afflicted and also generate a large amount of data which can be mined to
predict mental health states using machine learning methods. We benchmark
multiple methods of text feature representation for social media posts and
compare their downstream use with automated machine learning (AutoML) tools to
triage content for moderator attention. We used 1588 labeled posts from the
CLPsych 2017 shared task collected from the Reachout.com forum (Milne et al.,
2019). Posts were represented using lexicon based tools including VADER,
Empath, LIWC and also used pre-trained artificial neural network models
including DeepMoji, Universal Sentence Encoder, and GPT-1. We used TPOT and
auto-sklearn as AutoML tools to generate classifiers to triage the posts. The
top-performing system used features derived from the GPT-1 model, which was
finetuned on over 150,000 unlabeled posts from Reachout.com. Our top system had
a macro averaged F1 score of 0.572, providing a new state-of-the-art result on
the CLPsych 2017 task. This was achieved without additional information from
meta-data or preceding posts. Error analyses revealed that this top system
often misses expressions of hopelessness. We additionally present
visualizations that aid understanding of the learned classifiers. We show that
transfer learning is an effective strategy for predicting risk with relatively
little labeled data. We note that finetuning of pretrained language models
provides further gains when large amounts of unlabeled text are available.
| 2019 | Computation and Language |
Collecting Indicators of Compromise from Unstructured Text of
Cybersecurity Articles using Neural-Based Sequence Labelling | Indicators of Compromise (IOCs) are artifacts observed on a network or in an
operating system that can be utilized to indicate a computer intrusion and
detect cyber-attacks in an early stage. Thus, they exert an important role in
the field of cybersecurity. However, state-of-the-art IOCs detection systems
rely heavily on hand-crafted features with expert knowledge of cybersecurity,
and require large-scale manually annotated corpora to train an IOC classifier.
In this paper, we propose using an end-to-end neural-based sequence labelling
model to identify IOCs automatically from cybersecurity articles without expert
knowledge of cybersecurity. By using a multi-head self-attention module and
contextual features, we find that the proposed model is capable of gathering
contextual information from texts of cybersecurity articles and performs better
in the task of IOC identification. Experiments show that the proposed model
outperforms other sequence labelling models, achieving an average F1-score of
89.0% on an English cybersecurity article test set, and an average F1-score of
approximately 81.8% on a Chinese test set.
| 2019 | Computation and Language |
Improving Chemical Named Entity Recognition in Patents with
Contextualized Word Embeddings | Chemical patents are an important resource for chemical information. However,
few chemical Named Entity Recognition (NER) systems have been evaluated on
patent documents, due in part to their structural and linguistic complexity. In
this paper, we explore the NER performance of a BiLSTM-CRF model utilising
pre-trained word embeddings, character-level word representations and
contextualized ELMo word representations for chemical patents. We compare word
embeddings pre-trained on biomedical and chemical patent corpora. The effect of
tokenizers optimized for the chemical domain on NER performance in chemical
patents is also explored. The results on two patent corpora show that
contextualized word representations generated from ELMo substantially improve
chemical NER performance w.r.t. the current state-of-the-art. We also show that
domain-specific resources such as word embeddings trained on chemical patents
and chemical-specific tokenizers have a positive impact on NER performance.
| 2019 | Computation and Language |
Head-Driven Phrase Structure Grammar Parsing on Penn Treebank | Head-driven phrase structure grammar (HPSG) enjoys a uniform formalism
representing rich contextual syntactic and even semantic meanings. This paper
makes the first attempt to formulate a simplified HPSG by integrating
constituent and dependency formal representations into head-driven phrase
structure. Then two parsing algorithms are respectively proposed for two
converted tree representations, division span and joint span. As HPSG encodes
both constituent and dependency structure information, the proposed HPSG
parsers may be regarded as a sort of joint decoder for both types of structures
and thus are evaluated in terms of extracted or converted constituent and
dependency parsing trees. Our parser achieves new state-of-the-art performance
for both parsing tasks on Penn Treebank (PTB) and Chinese Penn Treebank,
verifying the effectiveness of jointly learning constituent and dependency
structures. In detail, we report 96.33 F1 for constituent parsing and 97.20%
UAS for dependency parsing on PTB.
| 2020 | Computation and Language |
Multi-lingual Intent Detection and Slot Filling in a Joint BERT-based
Model | Intent Detection and Slot Filling are two pillar tasks in Spoken Natural
Language Understanding. Common approaches adopt joint Deep Learning
architectures in attention-based recurrent frameworks. In this work, we aim at
exploiting the success of "recurrence-less" models for these tasks. We
introduce Bert-Joint, i.e., a multi-lingual joint text classification and
sequence labeling framework. The experimental evaluation over two well-known
English benchmarks demonstrates the strong performances that can be obtained
with this model, even when little annotated data is available. Moreover, we
annotated a new dataset for the Italian language, and we observed similar
performances without the need to change the model.
| 2019 | Computation and Language |
Towards Universal Dialogue Act Tagging for Task-Oriented Dialogues | Machine learning approaches for building task-oriented dialogue systems
require large conversational datasets with labels to train on. We are
interested in building task-oriented dialogue systems from human-human
conversations, which may be available in ample amounts in existing customer
care center logs or can be collected from crowd workers. Annotating these
datasets can be prohibitively expensive. Recently, multiple annotated
task-oriented human-machine dialogue datasets have been released; however,
their annotation schemas vary across different collections, even for well-defined
task-oriented dialogues and align existing annotated datasets with our schema.
Our aim is to train a Universal DA tagger (U-DAT) for task-oriented dialogues
and use it for tagging human-human conversations. We investigate multiple
datasets, propose manual and automated approaches for aligning the different
schema, and present results on a target corpus of human-human dialogues. In
unsupervised learning experiments we achieve an F1 score of 54.1% on system
turns in human-human dialogues. In a semi-supervised setup, the F1 score
increases to 57.7% which would otherwise require at least 1.7K manually
annotated turns. For new domains, we show further improvements when unlabeled
or labeled target domain data is available.
| 2019 | Computation and Language |
BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional
Encoder Representations from Transformer | An important yet rarely tackled problem in dialogue state tracking (DST) is
scalability for dynamic ontology (e.g., movie, restaurant) and unseen slot
values. We focus on a specific condition, where the ontology is unknown to the
state tracker, but the target slot value (except for none and dontcare),
possibly unseen during training, can be found as a word segment in the dialogue
context. Prior approaches often rely on candidate generation from n-gram
enumeration or slot tagger outputs, which can be inefficient or suffer from
error propagation. We propose BERT-DST, an end-to-end dialogue state tracker
which directly extracts slot values from the dialogue context. We use BERT as
dialogue context encoder whose contextualized language representations are
suitable for scalable DST to identify slot values from their semantic context.
Furthermore, we employ encoder parameter sharing across all slots with two
advantages: (1) Number of parameters does not grow linearly with the ontology.
(2) Language representation knowledge can be transferred among slots. Empirical
evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior
work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves
competitive performance on the standard DSTC2 and WOZ 2.0 datasets.
| 2019 | Computation and Language |
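A simplified PyTorch sketch of the span-extraction design described above: one shared BERT encoder, a per-slot gate (none / dontcare / span), and shared start/end pointers over the dialogue context. This illustrates the idea with placeholder dimensions; it is not the authors' released code.

```python
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class SpanDST(nn.Module):
    def __init__(self, n_slots):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.gate = nn.Linear(hidden, 3 * n_slots)  # none/dontcare/span per slot
        self.span = nn.Linear(hidden, 2)            # shared start/end logits

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        gates = self.gate(out.pooler_output)        # slot-level decisions
        start_logits, end_logits = self.span(out.last_hidden_state).split(1, dim=-1)
        return gates, start_logits.squeeze(-1), end_logits.squeeze(-1)

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tok(["i want a cheap restaurant in the north"], return_tensors="pt")
gates, start, end = SpanDST(n_slots=2)(batch["input_ids"], batch["attention_mask"])
```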
Exploiting Out-of-Domain Parallel Data through Multilingual Transfer
Learning for Low-Resource Neural Machine Translation | This paper proposes a novel multilingual multistage fine-tuning approach for
low-resource neural machine translation (NMT), taking a challenging
Japanese--Russian pair for benchmarking. Although there are many solutions for
low-resource scenarios, such as multilingual NMT and back-translation, we have
empirically confirmed their limited success when restricted to in-domain data.
We therefore propose to exploit out-of-domain data through transfer learning,
by using it to first train a multilingual NMT model followed by multistage
fine-tuning on in-domain parallel and back-translated pseudo-parallel data. Our
approach, which combines domain adaptation, multilingualism, and
back-translation, helps improve the translation quality by more than 3.7 BLEU
points, over a strong baseline, for this extremely low-resource scenario.
| 2019 | Computation and Language |
Improved low-resource Somali speech recognition by semi-supervised
acoustic and language model training | We present improvements in automatic speech recognition (ASR) for Somali, a
currently extremely under-resourced language. This forms part of a continuing
United Nations (UN) effort to employ ASR-based keyword spotting systems to
support humanitarian relief programmes in rural Africa. Using just 1.57 hours
of annotated speech data as a seed corpus, we increase the pool of training
data by applying semi-supervised training to 17.55 hours of untranscribed
speech. We make use of factorised time-delay neural networks (TDNN-F) for
acoustic modelling, since these have recently been shown to be effective in
resource-scarce situations. Three semi-supervised training passes were
performed, where the decoded output from each pass was used for acoustic model
training in the subsequent pass. The automatic transcriptions from the best
performing pass were used for language model augmentation. To ensure the
quality of automatic transcriptions, decoder confidence is used as a threshold.
The acoustic and language models obtained from the semi-supervised approach
show significant improvement in terms of WER and perplexity compared to the
baseline. Incorporating the automatically generated transcriptions yields a
6.55% improvement in language model perplexity. The use of 17.55 hours of
Somali acoustic data in semi-supervised training shows a relative improvement
of 7.74% over the baseline.
| 2019 | Computation and Language |
Short Text Conversation Based on Deep Neural Network and Analysis on
Evaluation Measures | With the development of Natural Language Processing, automatic
question-answering systems such as Watson, Siri, and Alexa have become one of
the most important NLP applications. Nowadays, enterprises try to build
automatic customer service chatbots to save human resources and provide
24-hour customer service. Evaluation of chatbots currently relies heavily on
human annotation, which costs a great deal of time. Thus, a new Short Text
Conversation subtask called Dialogue Quality (DQ) and Nugget Detection (ND)
has been initiated, which aims to automatically evaluate dialogues generated
by chatbots. In this paper, we address the DQ and ND subtasks with deep neural
networks. We propose two models, one for each subtask, constructed with a
hierarchical structure: an embedding layer, an utterance layer, a context
layer and a memory layer, to hierarchically learn dialogue representations
from the word level, sentence level and context level up to the long-range
context level. Furthermore, we apply gating and attention mechanisms at the
utterance and context layers to improve performance. We also tried BERT to
replace the embedding and utterance layers as the sentence representation.
The results show that BERT produces a better utterance representation than
multi-stack CNNs for both the DQ and ND subtasks, and our models outperform
those proposed in other research. The evaluation measures proposed for the
task, namely NMD and RSNOD for DQ, and JSD and RNSS for ND, are not
traditional evaluation measures such as accuracy, precision, recall and
F1-score. Thus, we have conducted a series of experiments using traditional
evaluation measures and analyzed the performance and errors.
| 2019 | Computation and Language |
ANETAC: Arabic Named Entity Transliteration and Classification Dataset | In this paper, we make freely accessible ANETAC, our English-Arabic named
entity transliteration and classification dataset, which we built from freely
available parallel translation corpora. The dataset contains 79,924 instances;
each instance is a triplet (e, a, c), where e is the English named entity, a is
its Arabic transliteration and c is its class, which can be either Person,
Location, or Organization. The ANETAC dataset is mainly aimed at researchers
working on Arabic named entity transliteration, but it can also be used for
named entity classification purposes.
| 2019 | Computation and Language |
Best Practices for Learning Domain-Specific Cross-Lingual Embeddings | Cross-lingual embeddings aim to represent words in multiple languages in a
shared vector space by capturing semantic similarities across languages. They
are a crucial component for scaling tasks to multiple languages by transferring
knowledge from languages with rich resources to low-resource languages. A
common approach to learning cross-lingual embeddings is to train monolingual
embeddings separately for each language and learn a linear projection from the
monolingual spaces into a shared space, where the mapping relies on a small
seed dictionary. While there are high-quality generic seed dictionaries and
pre-trained cross-lingual embeddings available for many language pairs, there
is little research on how they perform on specialised tasks. In this paper, we
investigate the best practices for constructing the seed dictionary for a
specific domain. We evaluate the embeddings on the sequence labelling task of
Curriculum Vitae parsing and show that the size of a bilingual dictionary, the
frequency of the dictionary words in the domain corpora and the source of data
(task-specific vs generic) influence the performance. We also show that the
less training data is available in the low-resource language, the more the
construction of the bilingual dictionary matters, and demonstrate that some of
the choices are crucial in the zero-shot transfer learning case.
| 2,019 | Computation and Language |
Exploring difference in public perceptions on HPV vaccine between gender
groups from Twitter using deep learning | In this study, we proposed a convolutional neural network model for gender
prediction using English Twitter text as input. An ensemble of the proposed
models achieved an accuracy of 0.8237 on gender prediction and compared
favorably with the state-of-the-art performance in a recent author profiling
task. We further leveraged the trained models to predict gender labels for an
HPV vaccine related corpus and identified gender differences in public
perceptions regarding the HPV vaccine. The findings are largely consistent with
previous survey-based studies.
| 2,019 | Computation and Language |
Applying a Pre-trained Language Model to Spanish Twitter Humor
Prediction | Our entry into the HAHA 2019 Challenge placed $3^{rd}$ in the classification
task and $2^{nd}$ in the regression task. We describe our system and
innovations, as well as comparing our results to a Naive Bayes baseline. A
large Twitter based corpus allowed us to train a language model from scratch
focused on Spanish and transfer that knowledge to our competition model. To
overcome the inherent errors in some labels, we reduce our class confidence with
label smoothing in the loss function (a minimal sketch follows this entry). All
the code for our project is included
in a GitHub repository for easy reference and to enable replication by others.
| 2,019 | Computation and Language |
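A minimal sketch of the label smoothing mentioned in the entry above, assuming a plain softmax classifier; the smoothing value eps=0.1 and the logits are illustrative, and the actual system may distribute the smoothing mass differently.

```python
import numpy as np

def smoothed_cross_entropy(logits, gold, eps=0.1):
    """Cross-entropy against a softened target: 1 - eps on the gold
    class, eps spread evenly over the remaining classes."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    k = logits.shape[0]
    target = np.full(k, eps / (k - 1))
    target[gold] = 1.0 - eps
    return float(-(target * np.log(probs)).sum())

# Toy humor-classification example: two classes, gold label 1.
print(smoothed_cross_entropy(np.array([0.2, 1.5]), gold=1))
```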
Evolutionary Algorithm for Sinhala to English Translation | Machine Translation (MT) is an area of natural language processing that
focuses on translating from one language to another. Many approaches, ranging
from statistical methods to deep learning, are used to achieve MT. However,
these methods either require a large amount of data or a clear understanding of
the language. The Sinhala language has little digital text that could be used to
train a deep neural network. Furthermore, Sinhala has complex grammatical rules,
making it harder to create the statistical rules needed to apply statistical
methods to MT. This research focuses on Sinhala to English translation using an
Evolutionary Algorithm (EA): the EA is used to identify the correct meaning of
Sinhala text and to translate it to English (a minimal sketch of such a search
follows this entry). The Sinhala text is first analysed to identify the intended
meaning of the sentence, the translation is then carried out with the EA, and
the translated text is finally post-processed to grammatically correct the
sentence. This approach has been shown to achieve accurate results.
| 2,019 | Computation and Language |
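A minimal sketch of an evolutionary search over candidate word translations, in the spirit of the entry above but not its implementation: the lexicon, the toy bigram fitness function and all parameters are hypothetical stand-ins.

```python
import random

CANDIDATES = [          # hypothetical English candidates per source word
    ["I", "me"],
    ["go", "walk", "travel"],
    ["home", "house"],
]
GOOD_BIGRAMS = {("I", "go"), ("go", "home"), ("I", "walk")}  # toy data

def fitness(sentence):
    # Count adjacent word pairs that look fluent under the toy model.
    return sum((a, b) in GOOD_BIGRAMS for a, b in zip(sentence, sentence[1:]))

def mutate(sentence):
    # Swap one slot for another candidate translation of the same word.
    i = random.randrange(len(sentence))
    child = list(sentence)
    child[i] = random.choice(CANDIDATES[i])
    return child

population = [[random.choice(slot) for slot in CANDIDATES] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]          # keep the fittest half
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(10)]

print(" ".join(max(population, key=fitness)))
```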
Joint Lifelong Topic Model and Manifold Ranking for Document
Summarization | Because the manifold ranking method is effective at ranking
unknown data based on known data through a weighted network, many researchers
use it to solve the document summarization task. However, their models only
consider the original features and ignore the semantic features of sentences
when constructing the weighted networks for the manifold ranking method. To
solve this problem, we propose two improved models based on the manifold ranking
method. The first combines the topic model and the manifold ranking method
(JTMMR) to solve the document summarization task. This model uses not only the
original features but also semantic features to represent the document, which
can improve the accuracy of the manifold ranking method. The second combines the
lifelong topic model and the manifold ranking method (JLTMMR). Building on
JTMMR, this model adds a knowledge constraint to improve the quality of the
topics, as well as a constraint on the relationship between documents to extract
better document semantic features. The JTMMR model improves the manifold ranking
method by using these better semantic features. Experiments show that our models
achieve better results than other baseline models on the multi-document
summarization task, and they also perform well on the single-document
summarization task. After combining with a few basic surface features, our model
significantly outperforms some recent deep learning based models. Finally, we
explore lifelong machine learning by analyzing the effect of adding feedback;
experiments show that the effect of adding feedback to our model is significant.
| 2,019 | Computation and Language |
Graph based Neural Networks for Event Factuality Prediction using
Syntactic and Semantic Structures | Event factuality prediction (EFP) is the task of assessing the degree to
which an event mentioned in a sentence has happened. For this task, both
syntactic and semantic information are crucial to identify the important
context words. Previous work on EFP has only combined these sources of
information in a simple way that cannot fully exploit their coordination. In this work, we
introduce a novel graph-based neural network for EFP that can integrate the
semantic and syntactic information more effectively. Our experiments
demonstrate the advantage of the proposed model for EFP.
| 2,019 | Computation and Language |
Zero-Shot Open Entity Typing as Type-Compatible Grounding | The problem of entity typing has been studied predominantly in a supervised
learning fashion, mostly with task-specific annotations (for coarse types) and
sometimes with distant supervision (for fine types). While such approaches have
strong performance within datasets, they often lack the flexibility to transfer
across text genres and to generalize to new type taxonomies. In this work we
propose a zero-shot entity typing approach that requires no annotated data and
can flexibly identify newly defined types. Given a type taxonomy defined as
Boolean functions of FREEBASE "types", we ground a given mention to a set of
type-compatible Wikipedia entries and then infer the target mention's types
using an inference algorithm that makes use of the types of these entries. We
evaluate our system on a broad range of datasets, including standard
fine-grained and coarse-grained entity typing datasets, and also a dataset in
the biological domain. Our system is shown to be competitive with
state-of-the-art supervised NER systems and outperforms them on out-of-domain
datasets. We also show that our system significantly outperforms other
zero-shot fine typing systems.
| 2,019 | Computation and Language |
Improving Cross-Domain Performance for Relation Extraction via
Dependency Prediction and Information Flow Control | Relation Extraction (RE) is one of the fundamental tasks in Information
Extraction and Natural Language Processing. Dependency trees have been shown to
be a very useful source of information for this task. Current deep learning
models for relation extraction have mainly exploited this dependency information
by guiding their computation along the structures of the dependency trees. One
potential problem with this approach is that it might prevent the models from
capturing important context information beyond syntactic structures and cause
poor cross-domain generalization. This paper introduces a novel method to
use dependency trees in RE for deep learning models that jointly predicts
dependency and semantic relations. We also propose a new mechanism to control
the information flow in the model based on the input entity mentions. Our
extensive experiments on benchmark datasets show that the proposed model
outperforms the existing methods for RE significantly.
| 2,019 | Computation and Language |
NIESR: Nuisance Invariant End-to-end Speech Recognition | Deep neural network models for speech recognition have achieved great success
recently, but they can learn incorrect associations between the target and
nuisance factors of speech (e.g., speaker identities, background noise, etc.),
which can lead to overfitting. While several methods have been proposed to
tackle this problem, existing methods incorporate additional information about
nuisance factors during training to develop invariant models. However,
enumeration of all possible nuisance factors in speech data and the collection
of their annotations is difficult and expensive. We present a robust training
scheme for end-to-end speech recognition that adopts an unsupervised
adversarial invariance induction framework to separate out essential factors
for speech-recognition from nuisances without using any supplementary labels
besides the transcriptions. Experiments show that the speech recognition model
trained with the proposed training scheme achieves relative improvements of
5.48% on WSJ0, 6.16% on CHiME3, and 6.61% on the TIMIT dataset over the base model.
Additionally, the proposed method achieves a relative improvement of 14.44% on
the combined WSJ0+CHiME3 dataset.
| 2,019 | Computation and Language |
A Natural Language Corpus of Common Grounding under Continuous and
Partially-Observable Context | Common grounding is the process of creating, repairing and updating mutual
understandings, which is a critical aspect of sophisticated human
communication. However, traditional dialogue systems have limited capability of
establishing common ground, and we also lack task formulations which introduce
natural difficulty in terms of common grounding while enabling easy evaluation
and analysis of complex models. In this paper, we propose a minimal dialogue
task which requires advanced skills of common grounding under continuous and
partially-observable context. Based on this task formulation, we collected a
large-scale dataset of 6,760 dialogues which fulfills essential requirements of
natural language corpora. Our analysis of the dataset revealed important
phenomena related to common grounding that need to be considered. Finally, we
evaluate and analyze baseline neural models on a simple subtask that requires
recognition of the created common ground. We show that simple baseline models
perform decently but leave room for further improvement. Overall, we show that
our proposed task will be a fundamental testbed where we can train, evaluate,
and analyze dialogue system's ability for sophisticated common grounding.
| 2,019 | Computation and Language |
Correct-and-Memorize: Learning to Translate from Interactive Revisions | State-of-the-art machine translation models are still not on par with human
translators. Previous work incorporates human interactions into the neural
machine translation process to obtain improved results in target languages.
However, not all model-translation errors are equal: some are critical while
others are minor. Meanwhile, the same translation mistakes occur repeatedly in
similar contexts. To solve both issues, we propose CAMIT, a novel method for
translating in an interactive environment. Our proposed method works with
critical revision instructions, allowing humans to correct arbitrary words in
model-translated sentences. In addition, CAMIT learns from and softly memorizes
revision actions based on the context, alleviating the issue of repeated
mistakes. Experiments in both ideal and real interactive translation settings
demonstrate that our proposed method significantly enhances machine translation
results while requiring fewer revision instructions from humans compared to
previous methods.
| 2,019 | Computation and Language |
Searching for Effective Neural Extractive Summarization: What Works and
What's Next | Recent years have seen remarkable success in the use of deep neural
networks for text summarization. However, there is no clear understanding of
why they perform so well, or how they might be improved. In this paper, we seek
to better understand how neural extractive summarization systems benefit from
different types of model architectures, transferable knowledge, and learning
schemas. Based on our observations and analyses, we also find an effective way
to improve current frameworks and achieve state-of-the-art results on
CNN/DailyMail by a large margin. We hope our work provides more clues for future
research on extractive summarization.
| 2,019 | Computation and Language |
Early Discovery of Emerging Entities in Microblogs | Keeping up to date on emerging entities that appear every day is
indispensable for various applications, such as social-trend analysis and
marketing research. Previous studies have attempted to detect unseen entities
not registered in a particular knowledge base as emerging entities, but
consequently also retrieve non-emerging entities, since absence from a knowledge
base does not guarantee emergence. We therefore introduce a
novel task of discovering truly emerging entities when they have just been
introduced to the public through microblogs and propose an effective method
based on time-sensitive distant supervision, which exploits distinctive
early-stage contexts of emerging entities. Experimental results with a
large-scale Twitter archive show that the proposed method achieves 83.2%
precision for the top 500 discovered emerging entities, outperforming
baselines based on unseen entity recognition with burst detection. Besides
notable emerging entities, our method can discover massive long-tail and
homographic emerging entities. An evaluation of relative recall shows that the
method detects 80.4% of emerging entities newly registered in Wikipedia; 92.4% of
them are discovered earlier than their registration in Wikipedia, and the
average lead-time is more than one year (571 days).
| 2,019 | Computation and Language |
Multiple Generative Models Ensemble for Knowledge-Driven Proactive
Human-Computer Dialogue Agent | Multiple sequence-to-sequence models were used to establish an end-to-end,
multi-turn proactive dialogue generation agent, with the aid of data
augmentation techniques and variant encoder-decoder structure designs. A
rank-based ensemble approach was developed to boost performance. Results
indicate that our single model, on average, improves F1-score and BLEU over the
baseline by 18.67% on the DuConv dataset, and the ensemble methods further
outperform the baseline by a significant 35.85%.
| 2,020 | Computation and Language |
Knowledge-aware Pronoun Coreference Resolution | Resolving pronoun coreference requires knowledge support, especially for
particular domains (e.g., medicine). In this paper, we explore how to leverage
different types of knowledge to better resolve pronoun coreference with a
neural model. To ensure the generalization ability of our model, we directly
incorporate knowledge in the format of triplets, which is the most common
format of modern knowledge graphs, instead of encoding it with features or
rules as in conventional approaches. Moreover, since not all knowledge is
helpful in a given context, we propose a knowledge attention module that learns
to select and use informative knowledge based on the context, to enhance our
model (a minimal sketch follows this entry). Experimental results on two datasets from
different domains prove the validity and effectiveness of our model, where it
outperforms state-of-the-art baselines by a large margin. Moreover, since our
model learns to use external knowledge rather than only fitting the training
data, it also demonstrates superior performance to baselines in the
cross-domain setting.
| 2,019 | Computation and Language |
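A minimal sketch of a knowledge attention step like the one referenced in the entry above: triplets are assumed to be already embedded as vectors, the data is random, and the dimensions are illustrative rather than the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
context = rng.normal(size=16)        # contextual representation of the pronoun
triplets = rng.normal(size=(5, 16))  # embeddings of 5 retrieved knowledge triplets

scores = triplets @ context          # relevance of each triplet to the context
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # softmax attention weights

knowledge = weights @ triplets       # weighted sum: the selected knowledge
enhanced = np.concatenate([context, knowledge])
print(weights.round(3), enhanced.shape)
```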
Learning Neural Sequence-to-Sequence Models from Weak Feedback with
Bipolar Ramp Loss | In many machine learning scenarios, supervision by gold labels is not
available and consequently neural models cannot be trained directly by maximum
likelihood estimation (MLE). In a weak supervision scenario, metric-augmented
objectives can be employed to assign feedback to model outputs, which can be
used to extract a supervision signal for training. We present several
objectives for two separate weakly supervised tasks, machine translation and
semantic parsing. We show that objectives should actively discourage negative
outputs in addition to promoting a surrogate gold structure. This notion of
bipolarity is naturally present in ramp loss objectives, which we adapt to
neural models (a minimal sketch follows this entry). We show that bipolar ramp loss objectives outperform other
non-bipolar ramp loss objectives and minimum risk training (MRT) on both weakly
supervised tasks, as well as on a supervised machine translation task.
Additionally, we introduce a novel token-level ramp loss objective, which is
able to outperform even the best sequence-level ramp loss on both weakly
supervised tasks.
| 2,019 | Computation and Language |
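A minimal sketch of a sequence-level bipolar ramp loss in the spirit of the entry above: each candidate output carries a model score and a metric-based reward (both toy values here); the hoped-for output scores well under both model and metric, the feared output scores well under the model but poorly under the metric, and the loss pushes the two apart.

```python
import numpy as np

model_scores = np.array([-1.2, -0.8, -2.0, -1.5])  # toy log-probabilities
rewards      = np.array([0.9, 0.1, 0.7, 0.2])      # toy metric scores

y_plus  = int(np.argmax(model_scores + rewards))   # "hope": promote it
y_minus = int(np.argmax(model_scores - rewards))   # "fear": discourage it
ramp_loss = model_scores[y_minus] - model_scores[y_plus]
print(y_plus, y_minus, round(float(ramp_loss), 3))
```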
Neural Aspect and Opinion Term Extraction with Mined Rules as Weak
Supervision | Lack of labeled training data is a major bottleneck for neural network based
aspect and opinion term extraction on product reviews. To alleviate this
problem, we first propose an algorithm to automatically mine extraction rules
from existing training examples based on dependency parsing results. The mined
rules are then applied to label a large amount of auxiliary data. Finally, we
study training procedures to train a neural model which can learn from both the
data automatically labeled by the rules and a small amount of data accurately
annotated by humans. Experimental results show that although the mined rules
themselves do not perform well due to their limited flexibility, the
combination of human annotated data and rule labeled auxiliary data can improve
the neural model and allow it to achieve performance better than or comparable
with the current state-of-the-art.
| 2,019 | Computation and Language |
Improving short text classification through global augmentation methods | We study the effect of different approaches to text augmentation. To do this
we use 3 datasets that include social media and formal text in the form of news
articles. Our goal is to provide insights for practitioners and researchers on
making choices for augmentation for classification use cases. We observe that
Word2vec-based augmentation is a viable option when one does not have access to
a formal synonym model (like WordNet-based augmentation). The use of
mixup further improves the performance of all text-based augmentations and
reduces the effects of overfitting on the tested deep learning model (a minimal
sketch follows this entry). Round-trip
translation with a translation service proves to be harder to use due to cost
and as such is less accessible for both normal and low resource use-cases.
| 2,020 | Computation and Language |
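A minimal sketch of mixup applied to text classification, assuming each text has already been reduced to a fixed-size embedding; the data is random and the alpha value is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 300))            # embedded representations of two texts
y = np.array([[1.0, 0.0], [0.0, 1.0]])   # their one-hot labels

lam = rng.beta(0.2, 0.2)                 # lambda ~ Beta(alpha, alpha)
x_mix = lam * x[0] + (1 - lam) * x[1]    # interpolated input
y_mix = lam * y[0] + (1 - lam) * y[1]    # interpolated (soft) label
print(round(float(lam), 3), y_mix)
```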
A Study of the Effect of Resolving Negation and Sentiment Analysis in
Recognizing Text Entailment for Arabic | Recognizing the entailment relation has been shown to support the extraction
of semantic inferences in wide-ranging natural language processing domains (text
summarization, question answering, etc.) and to enhance the results of their
output. For the Arabic language, few attempts have addressed the entailment
problem. This paper aims to increase entailment accuracy for Arabic texts by
resolving negation in the text-hypothesis pair and determining the pair's
polarity, whether Positive, Negative or Neutral. It is noticed that the absence
of a negation detection feature gives inaccurate results when detecting the
entailment relation, since negation reverses the truth. Negation words are
usually treated as stop words and removed from the text-hypothesis pair, which
may lead to wrong entailment decisions. Another case not solved previously is
that a positive text cannot entail a negative text, and vice versa. In this
paper, a sentiment analysis tool is used to classify the polarity of the
text-hypothesis pair, and we show that analyzing this polarity increases
entailment accuracy. To evaluate our approach, we used a dataset for Arabic
textual entailment (ArbTEDS) consisting of 618 text-hypothesis pairs and showed
that Arabic entailment accuracy is increased by resolving negation and analyzing
the polarity of the text-hypothesis pair.
| 2,015 | Computation and Language |
An Intrinsic Nearest Neighbor Analysis of Neural Machine Translation
Architectures | Earlier approaches indirectly studied the information captured by the hidden
states of recurrent and non-recurrent neural machine translation models by
feeding them into different classifiers. In this paper, we look at the encoder
hidden states of both transformer and recurrent machine translation models from
the nearest neighbors perspective. We investigate to what extent the nearest
neighbors share information with the underlying word embeddings as well as
related WordNet entries. Additionally, we study the underlying syntactic
structure of the nearest neighbors to shed light on the role of syntactic
similarities in bringing the neighbors together. We compare transformer and
recurrent models in a more intrinsic way in terms of capturing lexical
semantics and syntactic structures, in contrast to extrinsic approaches used by
previous works. In agreement with the extrinsic evaluations in the earlier
works, our experimental results show that transformers are superior in
capturing lexical semantics, but not necessarily better in capturing the
underlying syntax. Additionally, we show that the backward recurrent layer in a
recurrent model learns more about the semantics of words, whereas the forward
recurrent layer encodes more context.
| 2,019 | Computation and Language |
Hahahahaha, Duuuuude, Yeeessss!: A two-parameter characterization of
stretchable words and the dynamics of mistypings and misspellings | Stretched words like `heellllp' or `heyyyyy' are a regular feature of spoken
language, often used to emphasize or exaggerate the underlying meaning of the
root word. While stretched words are rarely found in formal written language
and dictionaries, they are prevalent within social media. In this paper, we
examine the frequency distributions of `stretchable words' found in roughly 100
billion tweets authored over an 8 year period. We introduce two central
parameters, `balance' and `stretch', that capture their main characteristics,
and explore their dynamics by creating visual tools we call `balance plots' and
`spelling trees'. We discuss how the tools and methods we develop here could be
used to study the statistical patterns of mistypings and misspellings, along
with the potential applications in augmenting dictionaries, improving language
processing, and in any area where sequence construction matters, such as
genetics.
| 2,020 | Computation and Language |
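A minimal sketch of collapsing a stretched word to its letter kernel and computing a simple per-word stretch value; the terminology follows the entry above, but the exact definitions of `balance' and `stretch' in the paper may differ.

```python
from itertools import groupby

def kernel_and_stretch(word):
    # Collapse runs of repeated letters: 'heellllp' -> [('h',1),('e',2),...]
    runs = [(ch, len(list(grp))) for ch, grp in groupby(word)]
    kernel = "".join(ch for ch, _ in runs)
    stretch = sum(n for _, n in runs) / len(runs)  # mean letters per run
    return kernel, stretch

print(kernel_and_stretch("heellllp"))  # ('help', 2.0)
print(kernel_and_stretch("heyyyyy"))   # ('hey', 2.33...)
```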
NTT's Machine Translation Systems for WMT19 Robustness Task | This paper describes NTT's submission to the WMT19 robustness task. This task
mainly focuses on translating noisy text (e.g., posts on Twitter), which
presents different difficulties from typical translation tasks such as news.
Our submission combined techniques including utilization of a synthetic corpus,
domain adaptation, and a placeholder mechanism, which significantly improved
over the previous baseline. Experimental results revealed that the placeholder
mechanism, which temporarily replaces non-standard tokens such as emojis and
emoticons with special placeholder tokens during translation, improves
translation accuracy even on noisy texts (a minimal sketch follows this entry).
| 2,019 | Computation and Language |
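A minimal sketch of a placeholder mechanism like the one described in the entry above; the token pattern and the translate step are illustrative stand-ins, not NTT's implementation.

```python
import re

# Toy pattern for non-standard tokens: a block of emoji plus a few emoticons.
NONSTANDARD = re.compile(r"[\U0001F300-\U0001FAFF]|[:;]-?[)(DP]")

def protect(sentence):
    slots = []
    def repl(match):
        slots.append(match.group(0))
        return f"<ph{len(slots) - 1}>"
    return NONSTANDARD.sub(repl, sentence), slots

def restore(sentence, slots):
    for i, token in enumerate(slots):
        sentence = sentence.replace(f"<ph{i}>", token)
    return sentence

masked, slots = protect("great game :) \U0001F389")
translated = masked  # stand-in for the actual MT system
print(restore(translated, slots))
```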
Systematic quantitative analyses reveal the folk-zoological knowledge
embedded in folktales | Cultural learning is a unique human capacity essential for a wide range of
adaptations. Researchers have argued that folktales have the pedagogical
function of transmitting the essential information for the environment. The
most important knowledge for foraging and pastoral society is folk-zoological
knowledge, such as the predator-prey relationship among wild animals, or
between wild and domesticated animals. Here, we analysed the descriptions of
382 animal folktales listed in a worldwide tale-type index (the
Aarne-Thompson-Uther type index) using natural language processing methods and
descriptive statistics. Our analyses suggested that first, the
predator-prey relationship frequently appeared in a co-occurrent animal pair
within a folktale (e.g., cat and mouse or wolf and pig), and second, the motif
of 'deception', describing the antagonistic behaviour among animals, appeared
relatively more often in 'wild and domestic animals' and 'wild animals' than in
other types. Furthermore, the motif of 'deception' appeared more frequently in pairs,
corresponding to the predator-prey relationship. These results corresponded
with the hypothesis that the combination of animal characters and what happens
in stories represented relationships in the real world. The present study
demonstrated that the combination of quantitative methods and qualitative data
broadens our understanding of the evolutionary aspects of human cultures.
| 2,019 | Computation and Language |
Implicit Discourse Relation Identification for Open-domain Dialogues | Discourse relation identification has been an active area of research for
many years, and the challenge of identifying implicit relations remains largely
an unsolved task, especially in the context of an open-domain dialogue system.
Previous work primarily relies on corpora of formal text, which are inherently
non-dialogic, i.e., news and journals. This data, however, is not suitable to
handle the nuances of informal dialogue nor is it capable of navigating the
plethora of valid topics present in open-domain dialogue. In this paper, we
designed a novel discourse relation identification pipeline specifically tuned
for open-domain dialogue systems. We first propose a method to automatically
extract the implicit discourse relation argument pairs and labels from a
dataset of dialogic turns, resulting in a novel corpus of discourse relation
pairs; the first of its kind to attempt to identify the discourse relations
connecting the dialogic turns in open-domain discourse. Moreover, we have taken
the first steps to leverage the dialogue features unique to our task to further
improve the identification of such relations by performing feature ablation and
incorporating dialogue features to enhance the state-of-the-art model.
| 2,019 | Computation and Language |
Sentiment and position-taking analysis of parliamentary debates: A
systematic literature review | Parliamentary and legislative debate transcripts provide access to
information concerning the opinions, positions and policy preferences of
elected politicians. They attract attention from researchers from a wide
variety of backgrounds, from political and social sciences to computer science.
As a result, the problem of automatic sentiment and position-taking analysis
has been tackled from different perspectives, using varying approaches and
methods, and with relatively little collaboration or cross-pollination of
ideas. The existing research is scattered across publications from various
fields and venues. In this article we present the results of a systematic
literature review of 61 studies, all of which address the automatic analysis of
the sentiment and opinions expressed and positions taken by speakers in
parliamentary (and other legislative) debates. In this review, we discuss the
available research with regard to the aims and objectives of the researchers
who work on these problems, the automatic analysis tasks they undertake, and
the approaches and methods they use. We conclude by summarizing their findings,
discussing the challenges of applying computational analysis to parliamentary
debates, and suggesting possible avenues for further research.
| 2,020 | Computation and Language |
Answer Extraction for Why Arabic Questions Answering Systems: EWAQ | With the increasing amount of web information, question answering systems
have become very important for allowing users to access direct answers to their
requests. This paper presents an Arabic question answering system based on
entailment metrics, focusing on why questions. Several reasons led us to develop
this system: the general lack of Arabic question answering systems, and the
scarcity of those that focus on why questions. The goal of the proposed system
is to extract answers from re-ranked passages retrieved by search engines; it
extracts answers to why questions only. The system is called EWAQ:
Entailment-based Why Arabic Questions Answering. Each answer is scored with
entailment metrics and ranked according to its score in order to determine the
most likely correct answer. EWAQ is compared with the well-established web-based
question answering engines yahoo, google and ask.com using a manual test set.
The EWAQ experiments show that accuracy is increased by using textual entailment
to re-rank the relevant passages retrieved by search engines and to decide the
correct answer. The obtained results show that entailment-based similarity can
significantly help the answer extraction module for why questions in Arabic.
| 2,015 | Computation and Language |
Interpretable Segmentation of Medical Free-Text Records Based on Word
Embeddings | Is it true that patients with similar conditions get similar diagnoses? In
this paper we use NLP methods and a unique corpus of documents to validate
this claim. We (1) introduce a method for representing medical visits based on
free-text descriptions recorded by doctors, (2) introduce a new method for
clustering patients' visits, and (3) present an application of the proposed
method on a corpus of 100,000 visits (a minimal sketch follows this entry). With the proposed method we
obtained stable and separated segments of visits which were positively
validated against final medical diagnoses. We show how the presented algorithm
may be used to aid doctors during their practice.
| 2,020 | Computation and Language |
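A minimal sketch of the clustering step, not the paper's pipeline: visits are assumed to be represented by averaged word embeddings (random vectors here), and the number of clusters is an arbitrary choice.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
visit_vectors = rng.normal(size=(1000, 100))  # one averaged vector per visit

labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(visit_vectors)
print(np.bincount(labels))                    # size of each visit segment
```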
Analyzing Phonetic and Graphemic Representations in End-to-End Automatic
Speech Recognition | End-to-end neural network systems for automatic speech recognition (ASR) are
trained from acoustic features to text transcriptions. In contrast to modular
ASR systems, which contain separately-trained components for acoustic modeling,
pronunciation lexicon, and language modeling, the end-to-end paradigm is both
conceptually simpler and has the potential benefit of training the entire
system on the end task. However, such neural network models are more opaque: it
is not clear how to interpret the role of different parts of the network and
what information it learns during training. In this paper, we analyze the
learned internal representations in an end-to-end ASR model. We evaluate the
representation quality in terms of several classification tasks, comparing
phonemes and graphemes, as well as different articulatory features. We study
two languages (English and Arabic) and three datasets, finding remarkable
consistency in how different properties are represented in different layers of
the deep neural network.
| 2,020 | Computation and Language |
Multilingual Universal Sentence Encoder for Semantic Retrieval | We introduce two pre-trained retrieval focused multilingual sentence encoding
models, respectively based on the Transformer and CNN model architectures. The
models embed text from 16 languages into a single semantic space using a
multi-task trained dual-encoder that learns tied representations using
translation based bridge tasks (Chidambaram et al., 2018). The models provide
performance that is competitive with the state-of-the-art on: semantic
retrieval (SR), translation pair bitext retrieval (BR) and retrieval question
answering (ReQA); a minimal retrieval sketch follows this entry. On English transfer learning tasks, our sentence-level
embeddings approach, and in some cases exceed, the performance of monolingual,
English only, sentence embedding models. Our models are made available for
download on TensorFlow Hub.
| 2,019 | Computation and Language |
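A minimal sketch of semantic retrieval with sentence embeddings from a dual encoder: queries and candidates are embedded into one space and ranked by cosine similarity. Random vectors stand in for a real encoder here.

```python
import numpy as np

rng = np.random.default_rng(0)
queries = rng.normal(size=(2, 512))       # stand-ins for encoded queries
candidates = rng.normal(size=(100, 512))  # stand-ins for encoded candidates

q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)

similarities = q @ c.T                    # cosine similarity matrix
top5 = np.argsort(-similarities, axis=1)[:, :5]
print(top5)                               # best candidate indices per query
```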
Cross-Domain Generalization of Neural Constituency Parsers | Neural parsers obtain state-of-the-art results on benchmark treebanks for
constituency parsing -- but to what degree do they generalize to other domains?
We present three results about the generalization of neural parsers in a
zero-shot setting: training on trees from one corpus and evaluating on
out-of-domain corpora. First, neural and non-neural parsers generalize
comparably to new domains. Second, incorporating pre-trained encoder
representations into neural parsers substantially improves their performance
across all domains, but does not give a larger relative improvement for
out-of-domain treebanks. Finally, despite the rich input representations they
learn, neural parsers still benefit from structured prediction of output
trees, yielding higher exact match accuracy and stronger generalization both to
larger text spans and to out-of-domain corpora. We analyze generalization on
English and Chinese corpora, and in the process obtain state-of-the-art parsing
results for the Brown, Genia, and English Web treebanks.
| 2,019 | Computation and Language |
Transfer Learning from Audio-Visual Grounding to Speech Recognition | Transfer learning aims to reduce the amount of data required to excel at a
new task by re-using the knowledge acquired from learning other related tasks.
This paper proposes a novel transfer learning scenario, which distills robust
phonetic features from grounding models that are trained to tell whether a pair
of image and speech are semantically correlated, without using any textual
transcripts. As semantics of speech are largely determined by its lexical
content, grounding models learn to preserve phonetic information while
disregarding uncorrelated factors, such as speaker and channel. To study the
properties of features distilled from different layers, we use them as input
separately to train multiple speech recognition models. Empirical results
demonstrate that layers closer to input retain more phonetic information, while
later layers exhibit greater invariance to domain shift. Moreover, while
most previous studies include training data for speech recognition for feature
extractor training, our grounding models are not trained on any of those data,
indicating more universal applicability to new domains.
| 2,019 | Computation and Language |
Don't Take the Premise for Granted: Mitigating Artifacts in Natural
Language Inference | Natural Language Inference (NLI) datasets often contain hypothesis-only
biases---artifacts that allow models to achieve non-trivial performance without
learning whether a premise entails a hypothesis. We propose two probabilistic
methods to build models that are more robust to such biases and better transfer
across datasets. In contrast to standard approaches to NLI, our methods predict
the probability of a premise given a hypothesis and NLI label, discouraging
models from ignoring the premise. We evaluate our methods on synthetic and
existing NLI datasets by training on datasets containing biases and testing on
datasets containing no (or different) hypothesis-only biases. Our results
indicate that these methods can make NLI models more robust to dataset-specific
artifacts, transferring better than a baseline architecture in 9 out of 12 NLI
datasets. Additionally, we provide an extensive analysis of the interplay of
our methods with known biases in NLI datasets, as well as the effects of
encouraging models to ignore biases and fine-tuning on target datasets.
| 2,019 | Computation and Language |
On Adversarial Removal of Hypothesis-only Bias in Natural Language
Inference | Popular Natural Language Inference (NLI) datasets have been shown to be
tainted by hypothesis-only biases. Adversarial learning may help models ignore
sensitive biases and spurious correlations in data. We evaluate whether
adversarial learning can be used in NLI to encourage models to learn
representations free of hypothesis-only biases. Our analyses indicate that the
representations learned via adversarial learning may be less biased, with only
small drops in NLI accuracy.
| 2,019 | Computation and Language |
Learning to Speak Fluently in a Foreign Language: Multilingual Speech
Synthesis and Cross-Language Voice Cloning | We present a multispeaker, multilingual text-to-speech (TTS) synthesis model
based on Tacotron that is able to produce high quality speech in multiple
languages. Moreover, the model is able to transfer voices across languages,
e.g. synthesize fluent Spanish speech using an English speaker's voice, without
training on any bilingual or parallel examples. Such transfer works across
distantly related languages, e.g. English and Mandarin.
Critical to achieving this result are: 1. using a phonemic input
representation to encourage sharing of model capacity across languages, and 2.
incorporating an adversarial loss term to encourage the model to disentangle
its representation of speaker identity (which is perfectly correlated with
language in the training data) from the speech content. Further scaling up the
model by training on multiple speakers of each language, and incorporating an
autoencoding input to help stabilize attention during training, results in a
model which can be used to consistently synthesize intelligible speech for
training speakers in all languages seen during training, and in native or
foreign accents.
| 2,019 | Computation and Language |
Multi-Speaker End-to-End Speech Synthesis | In this work, we extend ClariNet (Ping et al., 2019), a fully end-to-end
speech synthesis model (i.e., text-to-wave), to generate high-fidelity speech
from multiple speakers. To model the unique characteristic of different voices,
low dimensional trainable speaker embeddings are shared across each component
of ClariNet and trained together with the rest of the model. We demonstrate
that the multi-speaker ClariNet outperforms state-of-the-art systems in terms
of naturalness, because the whole model is jointly optimized in an end-to-end
manner.
| 2,019 | Computation and Language |
Exploiting user-frequency information for mining regionalisms from
Social Media texts | The task of detecting regionalisms (expressions or words used in certain
regions) has traditionally relied on the use of questionnaires and surveys, and
has also heavily depended on the expertise and intuition of the surveyor. The
irruption of Social Media and its microblogging services has produced an
unprecedented wealth of content, mainly informal text generated by users,
opening new opportunities for linguists to extend their studies of language
variation. Previous work on automatic detection of regionalisms depended mostly
on word frequencies. In this work, we present a novel metric based on
Information Theory that incorporates user frequency. We tested this metric on a
corpus of Argentinian Spanish tweets in two ways: via manual annotation of the
relevance of the retrieved terms, and also as a feature selection method for
geolocation of users. In either case, our metric outperformed other techniques
based solely on word frequency, suggesting that measuring the number of users
that produce a word is informative. This tool has helped lexicographers
discover several unregistered words of Argentinian Spanish, as well as
different meanings assigned to registered words.
| 2,019 | Computation and Language |
Neural Networks as Explicit Word-Based Rules | Filters of convolutional networks used in computer vision are often
visualized as image patches that maximize the response of the filter. We use
the same approach to interpret weight matrices in simple architectures for
natural language processing tasks. We interpret a convolutional network for
sentiment classification as word-based rules (a minimal sketch follows this
entry). Using these rules, we recover the performance of the original model.
| 2,019 | Computation and Language |
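A minimal sketch of reading a width-one convolutional filter as a word-based rule, in the spirit of the entry above: score every vocabulary word against the filter and keep the top responders. The vocabulary and all vectors are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["great", "awful", "movie", "boring", "brilliant", "plot"]
embeddings = rng.normal(size=(len(vocab), 50))  # stand-in word vectors
filter_weights = rng.normal(size=50)            # one learned conv filter

responses = embeddings @ filter_weights         # filter response per word
top = np.argsort(-responses)[:3]
print("rule fires on:", [vocab[i] for i in top])
```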
Lingua Custodia at WMT'19: Attempts to Control Terminology | This paper describes Lingua Custodia's submission to the WMT'19 news shared
task for German-to-French on the topic of the EU elections. We report
experiments on the adaptation of the terminology of a machine translation
system to a specific topic, aimed at providing more accurate translations of
specific entities like political parties and person names, given that the
shared task provided no in-domain training parallel data dealing with the
restricted topic. Our primary submission to the shared task uses
backtranslation generated with a type of decoding allowing the insertion of
constraints in the output in order to guarantee the correct translation of
specific terms that are not necessarily observed in the data.
| 2,019 | Computation and Language |
Modeling Semantic Compositionality with Sememe Knowledge | Semantic compositionality (SC) refers to the phenomenon that the meaning of a
complex linguistic unit can be composed of the meanings of its constituents.
Most related works focus on using complicated compositionality functions to
model SC while few works consider external knowledge in models. In this paper,
we verify the effectiveness of sememes, the minimum semantic units of human
languages, in modeling SC by a confirmatory experiment. Furthermore, we make
the first attempt to incorporate sememe knowledge into SC models, and employ
the sememe-incorporated models in learning representations of multiword
expressions, a typical task of SC. In experiments, we implement our models by
incorporating knowledge from a famous sememe knowledge base HowNet and perform
both intrinsic and extrinsic evaluations. Experimental results show that our
models achieve a significant performance boost compared to the baseline
methods without considering sememe knowledge. We further conduct quantitative
analysis and case studies to demonstrate the effectiveness of applying sememe
knowledge in modeling SC. All the code and data of this paper can be obtained
on https://github.com/thunlp/Sememe-SC.
| 2,019 | Computation and Language |
ReQA: An Evaluation for End-to-End Answer Retrieval Models | Popular QA benchmarks like SQuAD have driven progress on the task of
identifying answer spans within a specific passage, with models now surpassing
human performance. However, retrieving relevant answers from a huge corpus of
documents is still a challenging problem, and places different requirements on
the model architecture. There is growing interest in developing scalable answer
retrieval models trained end-to-end, bypassing the typical document retrieval
step. In this paper, we introduce Retrieval Question-Answering (ReQA), a
benchmark for evaluating large-scale sentence-level answer retrieval models. We
establish baselines using both neural encoding models as well as classical
information retrieval techniques. We release our evaluation code to encourage
further work on this challenging task.
| 2,020 | Computation and Language |
BAM! Born-Again Multi-Task Networks for Natural Language Understanding | It can be challenging to train multi-task neural networks that outperform or
even match their single-task counterparts. To help address this, we propose
using knowledge distillation where single-task models teach a multi-task model.
We enhance this training with teacher annealing, a novel method that gradually
transitions the model from distillation to supervised learning (a minimal
sketch follows this entry), helping the multi-task model surpass its
single-task teachers. We evaluate our approach by
multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently
improves over standard single-task and multi-task training.
| 2,019 | Computation and Language |
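A minimal sketch of teacher annealing as described in the entry above: the training target interpolates between the teacher's prediction and the gold label, moving toward the gold label as training progresses. The schedule and values are illustrative.

```python
import numpy as np

def annealed_target(gold_onehot, teacher_probs, step, total_steps):
    lam = step / total_steps  # 0 at the start of training, 1 at the end
    return lam * gold_onehot + (1 - lam) * teacher_probs

gold = np.array([0.0, 1.0])     # toy gold label
teacher = np.array([0.3, 0.7])  # toy teacher prediction
for step in (0, 5000, 10000):
    print(step, annealed_target(gold, teacher, step, 10000))
```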
Acoustic Model Optimization Based On Evolutionary Stochastic Gradient
Descent with Anchors for Automatic Speech Recognition | Evolutionary stochastic gradient descent (ESGD) was proposed as a
population-based approach that combines the merits of gradient-aware and
gradient-free optimization algorithms for superior overall optimization
performance. In this paper we investigate a variant of ESGD for optimization of
acoustic models for automatic speech recognition (ASR). In this variant, we
assume the existence of a well-trained acoustic model and use it as an anchor
in the parent population whose good "genes" will propagate through the evolution
to the offspring. We propose an ESGD algorithm leveraging the anchor models such
that it guarantees the best fitness of the population will never degrade from
the anchor model. Experiments on 50-hour Broadcast News (BN50) and 300-hour
Switchboard (SWB300) show that the ESGD with anchors can further improve the
loss and ASR performance over the existing well-trained acoustic models.
| 2,019 | Computation and Language |
Can Unconditional Language Models Recover Arbitrary Sentences? | Neural network-based generative language models like ELMo and BERT can work
effectively as general purpose sentence encoders in text classification without
further fine-tuning. Is it possible to adapt them in a similar way for use as
general-purpose decoders? For this to be possible, it would need to be the case
that for any target sentence of interest, there is some continuous
representation that can be passed to the language model to cause it to
reproduce that sentence. We set aside the difficult problem of designing an
encoder that can produce such representations and, instead, ask directly
whether such representations exist at all. To do this, we introduce a pair of
effective, complementary methods for feeding representations into pretrained
unconditional language models and a corresponding set of methods to map
sentences into and out of this representation space, the reparametrized
sentence space. We then investigate the conditions under which a language model
can be made to generate a sentence through the identification of a point in
such a space and find that it is possible to recover arbitrary sentences nearly
perfectly with language models and representations of moderate size without
modifying any model parameters.
| 2,020 | Computation and Language |
Modelling the Socialization of Creative Agents in a Master-Apprentice
Setting: The Case of Movie Title Puns | This paper presents work on modelling the social psychological aspect of
socialization in the case of a computationally creative master-apprentice
system. In each master-apprentice pair, the master, a genetic algorithm, is
seen as a parent for its apprentice, which is an NMT based sequence-to-sequence
model. The effect of different parenting styles on the creative output of each
pair is the focus of this study. This approach brings a novel viewpoint to
computational social creativity, which has mainly focused in the past on
computationally creative agents being on a socially equal level, whereas our
approach studies the phenomenon in the context of a social hierarchy.
| 2,019 | Computation and Language |
Vision-and-Dialog Navigation | Robots navigating in human environments should use language to ask for
assistance and be able to understand human responses. To study this challenge,
we introduce Cooperative Vision-and-Dialog Navigation, a dataset of over 2k
embodied, human-human dialogs situated in simulated, photorealistic home
environments. The Navigator asks questions to their partner, the Oracle, who
has privileged access to the best next steps the Navigator should take
according to a shortest path planner. To train agents that search an
environment for a goal location, we define the Navigation from Dialog History
task. An agent, given a target object and a dialog history between humans
cooperating to find that object, must infer navigation actions towards the goal
in unexplored environments. We establish an initial, multi-modal
sequence-to-sequence model and demonstrate that looking farther back in the
dialog history improves performance. Source code and a live interface demo can
be found at https://cvdn.dev/
| 2,019 | Computation and Language |
Massively Multilingual Neural Machine Translation in the Wild: Findings
and Challenges | We introduce our efforts towards building a universal neural machine
translation (NMT) system capable of translating between any language pair. We
set a milestone towards this goal by building a single massively multilingual
NMT model handling 103 languages trained on over 25 billion examples. Our
system demonstrates effective transfer learning ability, significantly
improving translation quality of low-resource languages, while keeping
high-resource language translation quality on-par with competitive bilingual
baselines. We provide in-depth analysis of various aspects of model building
that are crucial to achieving quality and practicality in universal NMT. While
we prototype a high-quality universal translation system, our extensive
empirical analysis exposes issues that need to be further addressed, and we
suggest directions for future research.
| 2,019 | Computation and Language |
No Word is an Island -- A Transformation Weighting Model for Semantic
Composition | Composition models of distributional semantics are used to construct phrase
representations from the representations of their words. Composition models are
typically situated on two ends of a spectrum. They either have a small number
of parameters but compose all phrases in the same way, or they perform
word-specific compositions at the cost of a far larger number of parameters. In
this paper we propose transformation weighting (TransWeight), a composition
model that consistently outperforms existing models on nominal compounds,
adjective-noun phrases and adverb-adjective phrases in English, German and
Dutch. TransWeight drastically reduces the number of parameters needed compared
to the best model in the literature by composing similar words in the same way.
| 2,019 | Computation and Language |
MeetUp! A Corpus of Joint Activity Dialogues in a Visual Environment | Building computer systems that can converse about their visual environment is
one of the oldest concerns of research in Artificial Intelligence and
Computational Linguistics (see, for example, Winograd's 1972 SHRDLU system).
Only recently, however, have methods from computer vision and natural language
processing become powerful enough to make this vision seem more attainable.
Pushed especially by developments in computer vision, many data sets and
collection environments have recently been published that bring together verbal
interaction and visual processing. Here, we argue that these datasets tend to
oversimplify the dialogue part, and we propose a task---MeetUp!---that requires
both visual and conversational grounding, and that makes stronger demands on
representations of the discourse. MeetUp! is a two-player coordination game
where players move in a visual environment, with the objective of finding each
other. To do so, they must talk about what they see, and achieve mutual
understanding. We describe a data collection and show that the resulting
dialogues indeed exhibit the dialogue phenomena of interest, while also
challenging the language & vision aspect.
| 2,019 | Computation and Language |