Titles (string, length 6–220) | Abstracts (string, length 37–3.26k) | Years (int64, 1.99k–2.02k) | Categories (stringclass, 1 value)
---|---|---|---
SurfCon: Synonym Discovery on Privacy-Aware Clinical Data | Unstructured clinical texts contain rich health-related information. To
better utilize the knowledge buried in clinical texts, discovering synonyms for
a medical query term has become an important task. Recent automatic synonym
discovery methods leveraging raw text information have been developed. However,
to preserve patient privacy and security, it is usually quite difficult to get
access to large-scale raw clinical texts. In this paper, we study a new setting
named synonym discovery on privacy-aware clinical data (i.e., medical terms
extracted from the clinical texts and their aggregated co-occurrence counts,
without raw clinical texts). To solve the problem, we propose a new framework
SurfCon that leverages two important types of information in the privacy-aware
clinical data, i.e., the surface form information, and the global context
information for synonym discovery. In particular, the surface form module
enables us to detect synonyms that look similar while the global context module
plays a complementary role to discover synonyms that are semantically similar
but in different surface forms, and both allow us to deal with the OOV query
issue (i.e., when the query is not found in the given data). We conduct
extensive experiments and case studies on publicly available privacy-aware
clinical data, and show that SurfCon can outperform strong baseline methods by
large margins under various settings.
| 2019 | Computation and Language |
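The SurfCon abstract above contrasts a surface-form signal with a global-context signal. As a rough illustration of how two such signals can be blended to rank candidate synonyms (a minimal sketch, not the SurfCon architecture; the term list, co-occurrence vectors, and the simple linear weighting are invented for the example):

```python
import numpy as np

def char_ngrams(term, n=3):
    """Character trigrams of a padded term, as a set."""
    s = f"#{term}#"
    return {s[i:i + n] for i in range(len(s) - n + 1)}

def surface_score(a, b):
    """Jaccard overlap of character trigrams (surface-form signal)."""
    ga, gb = char_ngrams(a), char_ngrams(b)
    return len(ga & gb) / len(ga | gb)

def context_score(va, vb):
    """Cosine similarity of aggregated co-occurrence vectors (global-context signal)."""
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

def rank_synonyms(query, query_vec, candidates, alpha=0.5):
    """Blend the two signals; `candidates` maps terms to co-occurrence vectors."""
    scored = [(term, alpha * surface_score(query, term)
                     + (1 - alpha) * context_score(query_vec, vec))
              for term, vec in candidates.items()]
    return sorted(scored, key=lambda x: -x[1])

# Toy usage with made-up co-occurrence vectors.
rng = np.random.default_rng(0)
vocab = {"hypertension": rng.random(8),
         "high blood pressure": rng.random(8),
         "diabetes": rng.random(8)}
print(rank_synonyms("htn", rng.random(8), vocab))
```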
Phoneme-Based Contextualization for Cross-Lingual Speech Recognition in
End-to-End Models | Contextual automatic speech recognition, i.e., biasing recognition towards a
given context (e.g. user's playlists, or contacts), is challenging in
end-to-end (E2E) models. Such models maintain a limited number of candidates
during beam-search decoding, and have been found to recognize rare named
entities poorly. The problem is exacerbated when biasing towards proper nouns
in foreign languages, e.g., geographic location names, which are virtually
unseen in training and are thus out-of-vocabulary (OOV). While grapheme or
wordpiece E2E models might have a difficult time spelling OOV words, phonemes
are more acoustically salient and past work has shown that E2E phoneme models
can better predict such words. In this work, we propose an E2E model containing
both English wordpieces and phonemes in the modeling space, and perform
contextual biasing of foreign words at the phoneme level by mapping
pronunciations of foreign words into similar English phonemes. In experimental
evaluations, we find that the proposed approach performs 16% better than a
grapheme-only biasing model, and 8% better than a wordpiece-only biasing model
on a foreign place name recognition task, with only slight degradation on
regular English tasks.
| 2019 | Computation and Language |
Neural Machine Translating from Natural Language to SPARQL | SPARQL is a highly powerful query language for an ever-growing number of
Linked Data resources and Knowledge Graphs. Using it requires a certain
familiarity with the entities in the domain to be queried as well as expertise
in the language's syntax and semantics, none of which average human web users
can be assumed to possess. To overcome this limitation, automatically
translating natural language questions to SPARQL queries has been a vibrant
field of research. However, to this date, the vast success of deep learning
methods has not yet been fully propagated to this research problem. This paper
contributes to filling this gap by evaluating the utilization of eight
different Neural Machine Translation (NMT) models for the task of translating
from natural language to the structured query language SPARQL. While
highlighting the importance of high-quantity and high-quality datasets, the
results show a dominance of a CNN-based architecture with a BLEU score of up to
98 and accuracy of up to 94%.
| 2019 | Computation and Language |
Approximating Interactive Human Evaluation with Self-Play for
Open-Domain Dialog Systems | Building an open-domain conversational agent is a challenging problem.
Current evaluation methods, mostly post-hoc judgments of static conversation,
do not capture conversation quality in a realistic interactive context. In this
paper, we investigate interactive human evaluation and provide evidence for its
necessity; we then introduce a novel, model-agnostic, and dataset-agnostic
method to approximate it. In particular, we propose a self-play scenario where
the dialog system talks to itself and we calculate a combination of proxies
such as sentiment and semantic coherence on the conversation trajectory. We
show that this metric is capable of capturing the human-rated quality of a
dialog model better than any automated metric known to-date, achieving a
significant Pearson correlation (r>.7, p<.05). To investigate the strengths of
this novel metric and interactive evaluation in comparison to state-of-the-art
metrics and human evaluation of static conversations, we perform extended
experiments with a set of models, including several that make novel
improvements to recent hierarchical dialog generation architectures through
sentiment and semantic knowledge distillation on the utterance level. Finally,
we open-source the interactive evaluation platform we built and the dataset we
collected to allow researchers to efficiently deploy and evaluate dialog
models.
| 2019 | Computation and Language |
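To make the self-play idea above concrete, the sketch below lets a toy generator talk to itself and scores the resulting trajectory with crude sentiment and coherence proxies. Everything here is a placeholder: the generator, the tiny sentiment lexicons, the word-overlap coherence proxy, and the equal weighting stand in for the trained dialog models and learned proxies used in the paper.

```python
import random

def toy_bot(history):
    """Stand-in dialog model: echoes fragments of the last turn (placeholder only)."""
    last = history[-1].split()
    return " ".join(random.sample(last, k=min(3, len(last)))) + " indeed"

POSITIVE = {"great", "good", "indeed", "love"}
NEGATIVE = {"bad", "hate", "awful"}

def sentiment_proxy(turn):
    words = set(turn.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def coherence_proxy(prev, curr):
    """Crude semantic-coherence stand-in: word overlap between consecutive turns."""
    a, b = set(prev.lower().split()), set(curr.lower().split())
    return len(a & b) / max(1, len(a | b))

def self_play_score(seed="what a great day", turns=6):
    """Run self-play from a seed utterance and aggregate proxies over the trajectory."""
    history = [seed]
    for _ in range(turns):
        history.append(toy_bot(history))
    sent = sum(sentiment_proxy(t) for t in history) / len(history)
    coh = sum(coherence_proxy(a, b) for a, b in zip(history, history[1:])) / (len(history) - 1)
    return {"sentiment": sent, "coherence": coh, "combined": 0.5 * sent + 0.5 * coh}

print(self_play_score())
```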
Identification of Tasks, Datasets, Evaluation Metrics, and Numeric
Scores for Scientific Leaderboards Construction | While the fast-paced inception of novel tasks and new datasets helps foster
active research in a community towards interesting directions, keeping track of
the abundance of research activity in different areas on different datasets is
likely to become increasingly difficult. The community could greatly benefit
from an automatic system able to summarize scientific results, e.g., in the
form of a leaderboard. In this paper we build two datasets and develop a
framework (TDMS-IE) aimed at automatically extracting task, dataset, metric and
score from NLP papers, towards the automatic construction of leaderboards.
Experiments show that our model outperforms several baselines by a large
margin. Our model is a first step towards automatic leaderboard construction,
e.g., in the NLP domain.
| 2019 | Computation and Language |
Neural Collective Entity Linking Based on Recurrent Random Walk Network
Learning | Benefiting from the excellent ability of neural networks on learning semantic
representations, existing studies for entity linking (EL) have resorted to
neural networks to exploit both the local mention-to-entity compatibility and
the global interdependence between different EL decisions for target entity
disambiguation. However, most neural collective EL methods depend entirely upon
neural networks to automatically model the semantic dependencies between
different EL decisions, without guidance from external knowledge. In
this paper, we propose a novel end-to-end neural network with recurrent
random-walk layers for collective EL, which introduces external knowledge to
model the semantic interdependence between different EL decisions.
Specifically, we first establish a model based on local context features, and
then stack random-walk layers to reinforce the evidence for related EL
decisions into high-probability decisions, where the semantic interdependence
between candidate entities is mainly induced from an external knowledge base.
Finally, a semantic regularizer that preserves the consistency of collective EL
decisions is incorporated into the conventional objective function, so that
the external knowledge base can be fully exploited in collective EL decisions.
Experimental results and in-depth analysis on various datasets show that our
model achieves better performance than other state-of-the-art models. Our code
and data are released at \url{https://github.com/DeepLearnXMU/RRWEL}.
| 2019 | Computation and Language |
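The random-walk layers described above can be pictured as repeated propagation of entity-linking scores over a relatedness graph derived from the knowledge base. A generic random-walk-with-restart sketch of that propagation step (not the paper's trainable layers; the scores and transition matrix below are invented):

```python
import numpy as np

def propagate_entity_scores(local_scores, transition, alpha=0.8, iters=10):
    """
    Iteratively reinforce entity-linking scores along KB-derived relations.

    local_scores : (n,) initial mention-to-entity compatibility scores
    transition   : (n, n) row-stochastic matrix of semantic relatedness
                   between candidate entities (e.g., derived from a KB)
    """
    s0 = local_scores / local_scores.sum()
    s = s0.copy()
    for _ in range(iters):
        s = alpha * transition.T @ s + (1 - alpha) * s0  # random-walk step with restart
    return s

# Toy example: three candidates, the third strongly supported by the KB graph.
local = np.array([0.40, 0.35, 0.25])
T = np.array([[0.2, 0.2, 0.6],
              [0.1, 0.3, 0.6],
              [0.3, 0.3, 0.4]])
print(propagate_entity_scores(local, T))
```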
Automatic Acrostic Couplet Generation with Three-Stage Neural Network
Pipelines | As a quintessential element of traditional Chinese culture, a couplet
comprises two syntactically symmetric clauses of equal length, namely an
antecedent and a subsequent clause. Moreover, corresponding characters and
phrases at the same position of the two clauses are paired with each other
under certain constraints of semantic and/or syntactic relatedness. Automatic
couplet generation is recognized as a challenging problem even in the
Artificial Intelligence field. In this paper, we comprehensively study the
automatic generation of acrostic couplets whose first characters are defined by
users. The complete couplet generation is mainly divided into three stages,
that is, antecedent clause generation pipeline, subsequent clause generation
pipeline and clause re-ranker. To realize semantic and/or syntactic relatedness
between two clauses, an attention-based Sequence-to-Sequence (S2S) neural network
is employed. Moreover, to provide diverse couplet candidates for re-ranking, a
cluster-based beam search approach is incorporated into the S2S network. Both
BLEU metrics and human judgments have demonstrated the effectiveness of our
proposed method. Eventually, a mini-program based on this generation system is
developed and deployed on Wechat for real users.
| 2019 | Computation and Language |
A Syllable-Structured, Contextually-Based Conditionally Generation of
Chinese Lyrics | This paper presents a novel, syllable-structured Chinese lyrics generation
model given a piece of original melody. Most previously reported lyrics
generation models fail to include the relationship between lyrics and melody.
In this work, we propose to interpret lyrics-melody alignments as syllable
structural information and use a multi-channel sequence-to-sequence model that
considers both phrasal structures and semantics. Two different RNN encoders
are applied, one of which is for encoding syllable structures while the other
for semantic encoding with contextual sentences or input keywords. Moreover, a
large Chinese lyrics corpus for model training is leveraged. With automatic and
human evaluations, results demonstrate the effectiveness of our proposed lyrics
generation model. To the best of our knowledge, there are few previous reports
on lyrics generation considering both music and linguistic perspectives.
| 2019 | Computation and Language |
Automatic Conditional Generation of Personalized Social Media Short
Texts | Automatic text generation has received much attention owing to the rapid
development of deep neural networks. In general, text generation systems based
on statistical language models do not consider anthropomorphic
characteristics, which results in machine-like generated texts. To fill the
gap, we propose a conditional language generation model with Big Five
Personality (BFP) feature vectors as input context, which writes human-like
short texts. The short text generator consists of a layer of long short-term
memory network (LSTM), where a BFP feature vector is concatenated as one part of
the input for each cell. To enable supervised training of the generation model,
a text classification model based on a convolutional neural network (CNN) has been used to
prepare BFP-tagged Chinese micro-blog corpora. Validated by a BFP linguistic
computational model, our generated Chinese short texts exhibit discriminative
personality styles, which are also syntactically correct and semantically
smooth with appropriate emoticons. By combining natural language
generation with psycholinguistics, our proposed BFP-dependent text
generation model can be widely used for individualization in machine
translation, image captioning, dialogue generation and so on.
| 2018 | Computation and Language |
Exploiting Unsupervised Pre-training and Automated Feature Engineering
for Low-resource Hate Speech Detection in Polish | This paper presents our contribution to PolEval 2019 Task 6: Hate speech and
bullying detection. We describe three parallel approaches that we followed:
fine-tuning a pre-trained ULMFiT model to our classification task, fine-tuning
a pre-trained BERT model to our classification task, and using the TPOT library
to find the optimal pipeline. We present results achieved by these three tools
and review their advantages and disadvantages in terms of user experience. Our
team placed second in subtask 2 with a shallow model found by TPOT: a logistic
regression classifier with non-trivial feature engineering.
| 2019 | Computation and Language |
Evaluating Computational Language Models with Scaling Properties of
Natural Language | In this article, we evaluate computational models of natural language with
respect to the universal statistical behaviors of natural language. Statistical
mechanical analyses have revealed that natural language text is characterized
by scaling properties, which quantify the global structure in the vocabulary
population and the long memory of a text. We study whether five scaling
properties (given by Zipf's law, Heaps' law, Ebeling's method, Taylor's law,
and long-range correlation analysis) can serve for evaluation of computational
models. Specifically, we test $n$-gram language models, a probabilistic
context-free grammar (PCFG), language models based on Simon/Pitman-Yor
processes, neural language models, and generative adversarial networks (GANs)
for text generation. Our analysis reveals that language models based on
recurrent neural networks (RNNs) with a gating mechanism (i.e., long short-term
memory, LSTM; a gated recurrent unit, GRU; and quasi-recurrent neural networks,
QRNNs) are the only computational models that can reproduce the long memory
behavior of natural language. Furthermore, through comparison with recently
proposed model-based evaluation methods, we find that the exponent of Taylor's
law is a good indicator of model quality.
| 2019 | Computation and Language |
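Two of the scaling properties mentioned above, Zipf's law and Taylor's law, can be estimated from a token sequence by log-log regression. A minimal sketch of both estimators (the window size, the synthetic corpus, and the fitting details are illustrative; the paper's exact estimation procedures may differ):

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Fit the Zipf exponent from the log-log rank-frequency curve."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

def taylor_exponent(tokens, window=1000):
    """Fit Taylor's law in the sigma ~ mean**alpha form: per-word standard
    deviation of counts across fixed-size windows versus the per-word mean."""
    windows = [tokens[i:i + window] for i in range(0, len(tokens) - window + 1, window)]
    vocab = sorted(set(tokens))
    counts = np.array([[Counter(w)[v] for v in vocab] for w in windows], dtype=float)
    mean, std = counts.mean(axis=0), counts.std(axis=0)
    mask = (mean > 0) & (std > 0)
    slope, _ = np.polyfit(np.log(mean[mask]), np.log(std[mask]), 1)
    return slope

# Toy usage on a synthetic corpus; real analyses use large corpora or model samples.
rng = np.random.default_rng(0)
weights = np.arange(200, 0, -1, dtype=float)
toy = list(rng.choice([f"w{i}" for i in range(200)], size=20000, p=weights / weights.sum()))
print(zipf_exponent(toy), taylor_exponent(toy))
```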
RLTM: An Efficient Neural IR Framework for Long Documents | Deep neural networks have achieved significant improvements in information
retrieval (IR). However, most existing models are computationally costly and
cannot efficiently scale to long documents. This paper proposes a novel end-to-end
neural ranking framework called Reinforced Long Text Matching (RLTM) which
matches a query with long documents efficiently and effectively. The core idea
behind the framework can be analogous to the human judgment process which
firstly locates the relevant parts quickly from the whole document and then
matches these parts with the query carefully to obtain the final label.
Firstly, we select relevant sentences from the long documents by a coarse and
efficient matching model. Secondly, we generate a relevance score by a more
sophisticated matching model based on the sentences selected. The whole model is
trained jointly with reinforcement learning in a pairwise manner by maximizing
the expected score gaps between positive and negative examples. Experimental
results demonstrate that RLTM has greatly improved the efficiency and
effectiveness of the state-of-the-art models.
| 2019 | Computation and Language |
Retrieving Sequential Information for Non-Autoregressive Neural Machine
Translation | Non-Autoregressive Transformer (NAT) aims to accelerate the Transformer model
through discarding the autoregressive mechanism and generating target words
independently, which fails to exploit the target sequential information.
Over-translation and under-translation errors often occur for the above reason,
especially in the long sentence translation scenario. In this paper, we propose
two approaches to retrieve the target sequential information for NAT to enhance
its translation ability while preserving the fast-decoding property. Firstly,
we propose a sequence-level training method based on a novel reinforcement
algorithm for NAT (Reinforce-NAT) to reduce the variance and stabilize the
training procedure. Secondly, we propose an innovative Transformer decoder
named FS-decoder to fuse the target sequential information into the top layer
of the decoder. Experimental results on three translation tasks show that the
Reinforce-NAT surpasses the baseline NAT system by a significant margin on BLEU
without decelerating the decoding speed and the FS-decoder achieves comparable
translation performance to the autoregressive Transformer with considerable
speedup.
| 2019 | Computation and Language |
Learning with fuzzy hypergraphs: a topical approach to query-oriented
text summarization | Existing graph-based methods for extractive document summarization represent
sentences of a corpus as the nodes of a graph or a hypergraph in which edges
depict relationships of lexical similarity between sentences. Such approaches
fail to capture semantic similarities between sentences when they express
similar information but have few words in common and are thus lexically
dissimilar. To overcome this issue, we propose to extract semantic similarities
based on topical representations of sentences. Inspired by the Hierarchical
Dirichlet Process, we propose a probabilistic topic model in order to infer
topic distributions of sentences. As each topic defines a semantic connection
among a group of sentences with a certain degree of membership for each
sentence, we propose a fuzzy hypergraph model in which nodes are sentences and
fuzzy hyperedges are topics. To produce an informative summary, we extract a
set of sentences from the corpus by simultaneously maximizing their relevance
to a user-defined query, their centrality in the fuzzy hypergraph and their
coverage of topics present in the corpus. We formulate a polynomial time
algorithm building on the theory of submodular functions to solve the
associated optimization problem. A thorough comparative analysis with other
graph-based summarization systems is included in the paper. Our obtained
results show the superiority of our method in terms of content coverage of the
summaries.
| 2019 | Computation and Language |
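The selection step described above trades off query relevance against coverage of the corpus topics under a sentence budget, which is the classic setting for greedy submodular maximization. A generic sketch of that step (not the paper's full objective, which also includes hypergraph centrality; the topic memberships, relevance scores, and weighting below are invented):

```python
import numpy as np

def greedy_summary(topic_dist, relevance, budget=3, lam=0.5):
    """
    Greedily pick sentences maximizing query relevance plus topic coverage.

    topic_dist : (n_sentences, n_topics) membership of each sentence in each topic
    relevance  : (n_sentences,) query-relevance scores
    The probabilistic-coverage term is monotone submodular, so greedy selection
    enjoys the usual (1 - 1/e) approximation guarantee.
    """
    n = len(relevance)
    chosen, uncovered = [], np.ones(topic_dist.shape[1])
    for _ in range(budget):
        gains = []
        for i in range(n):
            if i in chosen:
                gains.append(-np.inf)
                continue
            coverage_gain = (uncovered * topic_dist[i]).sum()
            gains.append(lam * relevance[i] + (1 - lam) * coverage_gain)
        best = int(np.argmax(gains))
        chosen.append(best)
        uncovered *= 1 - topic_dist[best]
    return chosen

# Toy example: 4 sentences, 3 topics, invented numbers.
topics = np.array([[0.9, 0.1, 0.0], [0.0, 0.8, 0.1], [0.1, 0.1, 0.9], [0.8, 0.2, 0.0]])
rel = np.array([0.6, 0.4, 0.5, 0.3])
print(greedy_summary(topics, rel, budget=2))
```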
Semantically Driven Auto-completion | The Bloomberg Terminal has been a leading source of financial data and
analytics for over 30 years. Through its thousands of functions, the Terminal
allows its users to query and run analytics over a large array of data sources,
including structured, semi-structured, and unstructured data; as well as plot
charts, set up event-driven alerts and triggers, create interactive maps,
exchange information via instant and email-style messages, and so on. To
improve user experience, we have been building question answering systems that
can understand a wide range of natural language constructions for various
domains that are of fundamental interest to our users. Such natural language
interfaces, while exceedingly helpful to users, introduce a number of usability
challenges of their own. We tackle some of these challenges through
auto-completion for query formulation. A distinguishing mark of our
auto-complete systems is that they are based on and guided by corresponding
semantic parsing systems. We describe the auto-complete problem as it arises in
this setting, the novel algorithms that we use to solve it, and report on the
quality of the results and the efficiency of our approach.
| 2019 | Computation and Language |
Smaller Text Classifiers with Discriminative Cluster Embeddings | Word embedding parameters often dominate overall model sizes in neural
methods for natural language processing. We reduce deployed model sizes of text
classifiers by learning a hard word clustering in an end-to-end manner. We use
the Gumbel-Softmax distribution to maximize over the latent clustering while
minimizing the task loss. We propose variations that selectively assign
additional parameters to words, which further improves accuracy while still
remaining parameter-efficient.
| 2019 | Computation and Language |
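A minimal sketch of the core idea above: hard word-to-cluster assignments trained with the straight-through Gumbel-Softmax estimator, so that words share a small table of cluster vectors instead of each owning a full embedding. The shapes and hyperparameters are illustrative, and the paper's variants additionally assign selective per-word parameters not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClusterEmbedding(nn.Module):
    """
    Word embeddings factored through a small set of shared cluster vectors.
    Each word stores only logits over n_clusters instead of a full d-dim vector,
    so the table shrinks from V*d parameters to V*n_clusters + n_clusters*d.
    """
    def __init__(self, vocab_size, n_clusters=64, dim=300, tau=1.0):
        super().__init__()
        self.word_to_cluster = nn.Embedding(vocab_size, n_clusters)  # per-word cluster logits
        self.cluster_vectors = nn.Parameter(torch.randn(n_clusters, dim))
        self.tau = tau

    def forward(self, word_ids):
        logits = self.word_to_cluster(word_ids)
        # Straight-through Gumbel-Softmax: hard assignment forward, soft gradient backward.
        assign = F.gumbel_softmax(logits, tau=self.tau, hard=True)
        return assign @ self.cluster_vectors

# Toy usage (shapes only; hyperparameters are illustrative).
emb = ClusterEmbedding(vocab_size=10000)
x = torch.randint(0, 10000, (4, 12))   # batch of token-id sequences
print(emb(x).shape)                    # torch.Size([4, 12, 300])
```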
Variational Sequential Labelers for Semi-Supervised Learning | We introduce a family of multitask variational methods for semi-supervised
sequence labeling. Our model family consists of a latent-variable generative
model and a discriminative labeler. The generative models use latent variables
to define the conditional probability of a word given its context, drawing
inspiration from word prediction objectives commonly used in learning word
embeddings. The labeler helps inject discriminative information into the latent
space. We explore several latent variable configurations, including ones with
hierarchical structure, which enables the model to account for both
label-specific and word-specific information. Our models consistently
outperform standard sequential baselines on 8 sequence labeling datasets, and
improve further with unlabeled data.
| 2019 | Computation and Language |
DAL: Dual Adversarial Learning for Dialogue Generation | In open-domain dialogue systems, generative approaches have attracted much
attention for response generation. However, existing methods are heavily
plagued by generating safe responses and unnatural responses. To alleviate
these two problems, we propose a novel framework named Dual Adversarial
Learning (DAL) for high-quality response generation. DAL is the first work to
innovatively utilize the duality between query generation and response
generation to avoid safe responses and increase the diversity of the generated
responses. Additionally, DAL uses adversarial learning to mimic human judges
and guides the system to generate natural responses. Experimental results
demonstrate that DAL effectively improves both diversity and overall quality of
the generated responses. DAL outperforms the state-of-the-art methods regarding
automatic metrics and human evaluations.
| 2019 | Computation and Language |
Systematic improvement of user engagement with academic titles using
computational linguistics | This paper describes a novel approach to systematically improve information
interactions based solely on their wording. Following an interdisciplinary
literature review, we recognized three key attributes of words that drive user
engagement: (1) Novelty (2) Familiarity (3) Emotionality. Based on these
attributes, we developed a model to systematically improve a given content
using computational linguistics, natural language processing (NLP) and text
analysis (word frequency, sentiment analysis and lexical substitution). We
conducted a pilot study (n=216) in which the model was used to formalize
evaluation and optimization of academic titles. A between-group design (A/B
testing) was used to compare responses to the original and modified (treatment)
titles. Data was collected for selection and evaluation (User Engagement
Scale). The pilot results suggest that user engagement with digital information
is fostered by, and perhaps dependent upon, the wording being used. They also
provide empirical support that engaging content can be systematically evaluated
and produced. The preliminary results show that the modified (treatment) titles
had significantly higher scores for information use and user engagement
(selection and evaluation). We propose that computational linguistics is a
useful approach for optimizing information interactions. The empirically based
insights can inform the development of digital content strategies, thereby
improving the success of information interactions. Future work should develop
more sophisticated interaction measures.
| 2019 | Computation and Language |
Sequence Generation: From Both Sides to the Middle | The encoder-decoder framework has achieved promising progress for many
sequence generation tasks, such as neural machine translation and text
summarization. Such a framework usually generates a sequence token by token
from left to right, hence (1) this autoregressive decoding procedure is
time-consuming when the output sentence becomes longer, and (2) it lacks the
guidance of future context, which is crucial to avoid under-translation. To
alleviate these issues, we propose a synchronous bidirectional sequence
generation (SBSG) model which predicts its outputs from both sides to the
middle simultaneously. In the SBSG model, we enable the left-to-right (L2R) and
right-to-left (R2L) generation to help and interact with each other by
leveraging interactive bidirectional attention network. Experiments on neural
machine translation (En-De, Ch-En, and En-Ro) and text summarization tasks show
that the proposed model significantly speeds up decoding while improving the
generation quality compared to the autoregressive Transformer.
| 2019 | Computation and Language |
Investigating Biases in Textual Entailment Datasets | The ability to understand logical relationships between sentences is an
important task in language understanding. To aid in progress for this task,
researchers have collected datasets for machine learning and evaluation of
current systems. However, like in the crowdsourced Visual Question Answering
(VQA) task, some biases in the data inevitably occur. In our experiments, we
find that performing classification on just the hypotheses on the SNLI dataset
yields an accuracy of 64%. We analyze the extent of the bias in the SNLI and
MultiNLI datasets, discuss its implications, and propose a simple method to
reduce the biases in the datasets.
| 2019 | Computation and Language |
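The 64% hypothesis-only result above can be reproduced in spirit with a very simple probe: train a classifier that never sees the premise. A sketch with scikit-learn (the example sentences and labels below are invented placeholders; a real probe would iterate over the actual SNLI hypotheses and gold labels):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented examples; a real probe would use the SNLI hypotheses and labels.
hypotheses = [
    "A man is sleeping.",                          # negation/inactivity patterns
    "A person is outdoors.",                       # generic paraphrase patterns
    "The woman is eating a sandwich at a cafe.",   # long, specific hypotheses
    "Nobody is playing.",
]
labels = ["contradiction", "entailment", "neutral", "contradiction"]

probe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
probe.fit(hypotheses, labels)
print(probe.predict(["A dog is sleeping."]))
```

Any accuracy of such a premise-blind probe above the majority-class baseline is direct evidence of annotation artifacts in the hypotheses.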
Evaluating the Supervised and Zero-shot Performance of Multi-lingual
Translation Models | We study several methods for full or partial sharing of the decoder
parameters of multilingual NMT models. We evaluate both fully supervised and
zero-shot translation performance in 110 unique translation directions using
only the WMT 2019 shared task parallel datasets for training. We use additional
test sets and re-purpose evaluation methods recently used for unsupervised MT
in order to evaluate zero-shot translation performance for language pairs where
no gold-standard parallel data is available. To our knowledge, this is the
largest evaluation of multi-lingual translation yet conducted in terms of the
total size of the training data we use, and in terms of the diversity of
zero-shot translation pairs we evaluate. We conduct an in-depth evaluation of
the translation performance of different models, highlighting the trade-offs
between methods of sharing decoder parameters. We find that models which have
task-specific decoder parameters outperform models where decoder parameters are
fully shared across all tasks.
| 2019 | Computation and Language |
Business Taxonomy Construction Using Concept-Level Hierarchical
Clustering | Business taxonomies are indispensable tools for investors to do equity
research and make professional decisions. However, identifying the structure of
industry sectors in an emerging market is challenging for two reasons. First,
existing taxonomies are designed for mature markets, which may not be the
appropriate classification for small companies with innovative business models.
Second, emerging markets are fast-developing, thus the static business
taxonomies cannot promptly reflect the new features. In this article, we
propose a new method to construct business taxonomies automatically from the
content of corporate annual reports. Extracted concepts are hierarchically
clustered using greedy affinity propagation. Our method requires less
supervision and is able to discover new terms. Experiments and evaluation on
the Chinese National Equities Exchange and Quotations (NEEQ) market show
several advantages of the business taxonomy we build. Our results provide an
effective tool for understanding and investing in new growth companies.
| 2020 | Computation and Language |
On the Definition of Japanese Word | The annotation guidelines for Universal Dependencies (UD) stipulate that the
basic units of dependency annotation are syntactic words, but it is not clear
what syntactic words are in Japanese. Departing from the long tradition of
using phrasal units called bunsetsu for dependency parsing, the current UD
Japanese treebanks adopt the Short Unit Words. However, we argue that they are
not syntactic words as specified by the annotation guidelines. Although we find
non-mainstream attempts to linguistically define Japanese words, such
definitions have never been applied to corpus annotation. We discuss the costs
and benefits of adopting the rather unfamiliar criteria.
| 2019 | Computation and Language |
Decomposable Neural Paraphrase Generation | Paraphrasing exists at different granularity levels, such as lexical level,
phrasal level and sentential level. This paper presents Decomposable Neural
Paraphrase Generator (DNPG), a Transformer-based model that can learn and
generate paraphrases of a sentence at different levels of granularity in a
disentangled way. Specifically, the model is composed of multiple encoders and
decoders with different structures, each of which corresponds to a specific
granularity. The empirical study shows that the decomposition mechanism of DNPG
makes paraphrase generation more interpretable and controllable. Based on DNPG,
we further develop an unsupervised domain adaptation method for paraphrase
generation. Experimental results show that the proposed model achieves
competitive in-domain performance compared to the state-of-the-art neural
models, and significantly better performance when adapting to a new domain.
| 2019 | Computation and Language |
Emotionally-Aware Chatbots: A Survey | The development of textual conversational agents, or chatbots, has gathered
tremendous traction from both academia and industry in recent years. Nowadays,
chatbots are widely used as agents to communicate with humans in services such
as booking assistance, customer service, and even personal companionship. The
biggest challenge in building a chatbot is making the machine human-like enough
to improve user engagement. Some studies show that emotion is an important
aspect of humanizing machines, including chatbots. In this paper, we provide a
systematic review of approaches to building an emotionally-aware chatbot (EAC).
To the best of our knowledge, there is still no work focusing on this area. We
propose three research questions regarding EAC studies. We start with the
history and evolution of EAC, then cover several approaches to building EAC in
previous studies, and some available resources for building EAC. Based on our
investigation, we found that early EAC systems exploited simple rule-based
approaches, while most now use neural approaches. We also notice that most EAC
systems contain an emotion classifier in their architecture, which utilizes
several available affective resources. We also predict that the development of
EAC will continue to gain more and more attention from scholars, as evidenced by
recent studies proposing new datasets for building EAC in various languages.
| 2019 | Computation and Language |
A Tensorized Transformer for Language Modeling | The latest developments in neural models have connected the encoder and decoder
through a self-attention mechanism. In particular, Transformer, which is solely
based on self-attention, has led to breakthroughs in Natural Language
Processing (NLP) tasks. However, the multi-head attention mechanism, as a key
component of Transformer, limits the effective deployment of the model to a
resource-limited setting. In this paper, based on the ideas of tensor
decomposition and parameters sharing, we propose a novel self-attention model
(namely Multi-linear attention) with Block-Term Tensor Decomposition (BTD). We
test and verify the proposed attention method on three language modeling tasks
(i.e., PTB, WikiText-103 and One-billion) and a neural machine translation task
(i.e., WMT-2016 English-German). Multi-linear attention can not only largely
compress the model parameters but also obtain performance improvements,
compared with a number of language modeling approaches, such as Transformer,
Transformer-XL, and Transformer with tensor train decomposition.
| 2019 | Computation and Language |
Conversational Response Re-ranking Based on Event Causality and Role
Factored Tensor Event Embedding | We propose a novel method for selecting coherent and diverse responses for a
given dialogue context. The proposed method re-ranks response candidates
generated from conversational models by using event causality relations between
events in a dialogue history and response candidates (e.g., ``be stressed out''
precedes ``relieve stress''). We use distributed event representation based on
the Role Factored Tensor Model for a robust matching of event causality
relations due to limited event causality knowledge of the system. Experimental
results showed that the proposed method improved coherency and dialogue
continuity of system responses.
| 2019 | Computation and Language |
Classification and Clustering of Arguments with Contextualized Word
Embeddings | We experiment with two recent contextualized word embedding methods (ELMo and
BERT) in the context of open-domain argument search. For the first time, we
show how to leverage the power of contextualized word embeddings to classify
and cluster topic-dependent arguments, achieving impressive results on both
tasks and across multiple datasets. For argument classification, we improve the
state-of-the-art for the UKP Sentential Argument Mining Corpus by 20.8
percentage points and for the IBM Debater - Evidence Sentences dataset by 7.4
percentage points. For the understudied task of argument clustering, we propose
a pre-training step which improves by 7.8 percentage points over strong
baselines on a novel dataset, and by 12.3 percentage points for the Argument
Facet Similarity (AFS) Corpus.
| 2019 | Computation and Language |
SylNet: An Adaptable End-to-End Syllable Count Estimator for Speech | Automatic syllable count estimation (SCE) is used in a variety of
applications ranging from speaking rate estimation to detecting social activity
from wearable microphones or developmental research concerned with quantifying
speech heard by language-learning children in different environments. The
majority of previously utilized SCE methods have relied on heuristic DSP
methods, and only a small number of bi-directional long short-term memory
(BLSTM) approaches have made use of modern machine learning approaches in the
SCE task. This paper presents a novel end-to-end method called SylNet for
automatic syllable counting from speech, built on the basis of recent
developments in neural network architectures. We describe how the entire model
can be optimized directly to minimize SCE error on the training data without
annotations aligned at the syllable level, and how it can be adapted to new
languages using limited speech data with known syllable counts. Experiments on
several different languages reveal that SylNet generalizes to languages beyond
its training data and further improves with adaptation. It also outperforms
several previously proposed methods for syllabification, including end-to-end
BLSTMs.
| 2019 | Computation and Language |
A computational model of early language acquisition from audiovisual
experiences of young infants | Earlier research has suggested that human infants might use statistical
dependencies between speech and non-linguistic multimodal input to bootstrap
their language learning before they know how to segment words from running
speech. However, the feasibility of this hypothesis in terms of real-world infant
experiences has remained unclear. This paper presents a step towards a more
realistic test of the multimodal bootstrapping hypothesis by describing a
neural network model that can learn word segments and their meanings from
referentially ambiguous acoustic input. The model is tested on recordings of
real infant-caregiver interactions using utterance-level labels for concrete
visual objects that were attended by the infant when the caregiver spoke an
utterance containing the name of the object, and using random visual labels for
utterances during the absence of attention. The results show that the beginnings of
lexical knowledge may indeed emerge from individually ambiguous learning
scenarios. In addition, the hidden layers of the network show gradually
increasing selectivity to phonetic categories as a function of layer depth,
resembling models trained for phone recognition in a supervised manner.
| 2019 | Computation and Language |
Translationese in Machine Translation Evaluation | The term translationese has been used to describe the presence of unusual
features of translated text. In this paper, we provide a detailed analysis of
the adverse effects of translationese on machine translation evaluation
results. Our analysis shows evidence to support differences in text originally
written in a given language relative to translated text and this can
potentially negatively impact the accuracy of machine translation evaluations.
For this reason we recommend that reverse-created test data be omitted from
future machine translation test sets. In addition, we provide a re-evaluation
of a past high-profile machine translation evaluation claiming human-parity of
MT, as well as an analysis of its subsequent re-evaluations. We find potential
ways of improving the reliability of all three past evaluations. One important
issue not previously considered is the statistical power of significance tests
applied in past evaluations that aim to investigate human-parity of MT. Since
the very aim of such evaluations is to reveal legitimate ties between human and
MT systems, power analysis is of particular importance, where low power could
result in claims of human parity that in fact simply correspond to Type II
error. We therefore provide a detailed power analysis of tests used in such
evaluations to provide an indication of a suitable minimum sample size of
translations for such studies. Subsequently, since no past evaluation that
aimed to investigate claims of human parity ticks all boxes in terms of
accuracy and reliability, we rerun the evaluation of the systems claiming human
parity. Finally, we provide a comprehensive check-list for future machine
translation evaluation.
| 2019 | Computation and Language |
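The power analysis discussed above asks how many scores are needed before a non-significant difference between human and machine translations can be read as a genuine tie rather than a Type II error. A sketch of such a computation using statsmodels (the effect size here is illustrative; the paper derives suitable values from actual MT evaluation data):

```python
from statsmodels.stats.power import TTestIndPower

# How many segment-level scores per system are needed to detect a small
# quality difference (Cohen's d = 0.2) with 80% power at alpha = 0.05?
analysis = TTestIndPower()
n_per_system = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8,
                                    alternative="two-sided")
print(n_per_system)  # ~393 scores per system for this (illustrative) effect size
```

The practical reading: evaluations with far fewer judgments than this cannot distinguish "no significant difference" from "not enough data".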
KaWAT: A Word Analogy Task Dataset for Indonesian | We introduced KaWAT (Kata Word Analogy Task), a new word analogy task dataset
for Indonesian. We evaluated several existing pretrained Indonesian word
embeddings on it, as well as embeddings trained on an Indonesian online news corpus. We also
tested them on two downstream tasks and found that pretrained word embeddings
helped either by reducing the training epochs or yielding significant
performance gains.
| 2019 | Computation and Language |
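Analogy evaluation of the kind KaWAT supports reduces to vector arithmetic over word embeddings: given a : b :: c : ?, return the word closest to b - a + c. A minimal sketch with invented 2-d vectors (a real evaluation would load pretrained Indonesian embeddings and iterate over the KaWAT quadruples):

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Return the word d maximizing cosine(d, b - a + c), excluding the query words."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    def cos(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    candidates = {w: v for w, v in embeddings.items() if w not in {a, b, c}}
    return max(candidates, key=lambda w: cos(candidates[w], target))

# Invented 2-d vectors for illustration only.
emb = {"raja": np.array([1.0, 1.0]),   # king
       "ratu": np.array([1.0, -1.0]),  # queen
       "pria": np.array([0.9, 1.1]),   # man
       "wanita": np.array([0.9, -0.9])}  # woman
print(solve_analogy("raja", "ratu", "pria", emb))  # expected: "wanita"
```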
Multilingual Named Entity Recognition Using Pretrained Embeddings,
Attention Mechanism and NCRF | In this paper we tackle the multilingual named entity recognition task. We use
the BERT language model as embeddings with a bidirectional recurrent network,
attention, and NCRF on top. We apply multilingual BERT only as an embedder
without any fine-tuning. We test our model on the dataset of the BSNLP shared
task, which consists of texts in Bulgarian, Czech, Polish and Russian
languages.
| 2023 | Computation and Language |
Learning Latent Trees with Stochastic Perturbations and Differentiable
Dynamic Programming | We treat projective dependency trees as latent variables in our probabilistic
model and induce them in such a way as to be beneficial for a downstream task,
without relying on any direct tree supervision. Our approach relies on Gumbel
perturbations and differentiable dynamic programming. Unlike previous
approaches to latent tree learning, we stochastically sample global structures
and our parser is fully differentiable. We illustrate its effectiveness on
sentiment analysis and natural language inference tasks. We also study its
properties on a synthetic structure induction task. Ablation studies emphasize
the importance of both stochasticity and constraining latent structures to be
projective trees.
| 2019 | Computation and Language |
LIAAD at SemDeep-5 Challenge: Word-in-Context (WiC) | This paper describes the LIAAD system that was ranked second place in the
Word-in-Context challenge (WiC) featured in SemDeep-5. Our solution is based on
a novel system for Word Sense Disambiguation (WSD) using contextual embeddings
and full-inventory sense embeddings. We adapt this WSD system, in a
straightforward manner, for the present task of detecting whether the same
sense occurs in a pair of sentences. Additionally, we show that our solution is
able to achieve competitive performance even without using the provided
training or development sets, mitigating potential concerns related to task
overfitting.
| 2019 | Computation and Language |
Language Modelling Makes Sense: Propagating Representations through
WordNet for Full-Coverage Word Sense Disambiguation | Contextual embeddings represent a new generation of semantic representations
learned from Neural Language Modelling (NLM) that addresses the issue of
meaning conflation hampering traditional word embeddings. In this work, we show
that contextual embeddings can be used to achieve unprecedented gains in Word
Sense Disambiguation (WSD) tasks. Our approach focuses on creating sense-level
embeddings with full-coverage of WordNet, and without recourse to explicit
knowledge of sense distributions or task-specific modelling. As a result, a
simple Nearest Neighbors (k-NN) method using our representations is able to
consistently surpass the performance of previous systems using powerful neural
sequencing models. We also analyse the robustness of our approach when ignoring
part-of-speech and lemma features, requiring disambiguation against the full
sense inventory, and revealing shortcomings to be improved. Finally, we explore
applications of our sense embeddings for concept-level analyses of contextual
embeddings and their respective NLMs.
| 2019 | Computation and Language |
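The nearest-neighbour disambiguation step described above is simple to state: embed the target word occurrence with the language model and return the closest precomputed sense embedding. A minimal sketch of that step (the vectors are random stand-ins for NLM-derived embeddings, and the sense names are only illustrative):

```python
import numpy as np

def disambiguate(context_vec, sense_bank):
    """
    1-nearest-neighbour WSD: return the sense whose embedding is most similar
    (cosine) to the contextual embedding of the target word occurrence.
    sense_bank maps sense identifiers to precomputed sense embeddings.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return max(sense_bank, key=lambda s: cos(context_vec, sense_bank[s]))

# Toy example with random vectors and illustrative sense names.
rng = np.random.default_rng(1)
bank = {"bank(financial institution)": rng.random(16),
        "bank(river side)": rng.random(16)}
print(disambiguate(rng.random(16), bank))
```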
Is It Worth the Attention? A Comparative Evaluation of Attention Layers
for Argument Unit Segmentation | Attention mechanisms have seen some success for natural language processing
downstream tasks in recent years and generated new state-of-the-art results. A
thorough evaluation of the attention mechanism for the task of Argumentation
Mining is missing, though. With this paper, we report a comparative evaluation
of attention layers in combination with a bidirectional long short-term memory
network, which is the current state-of-the-art approach to the unit
segmentation task. We also compare sentence-level contextualized word
embeddings to pre-generated ones. Our findings suggest that for this task the
additional attention layer does not improve upon a less complex approach. In
most cases, the contextualized embeddings also do not show an improvement over
the baseline score.
| 2019 | Computation and Language |
Mutual exclusivity as a challenge for deep neural networks | Strong inductive biases allow children to learn in fast and adaptable ways.
Children use the mutual exclusivity (ME) bias to help disambiguate how words
map to referents, assuming that if an object has one label then it does not
need another. In this paper, we investigate whether or not standard neural
architectures have an ME bias, demonstrating that they lack this learning
assumption. Moreover, we show that their inductive biases are poorly matched to
lifelong learning formulations of classification and translation. We
demonstrate that there is a compelling case for designing neural networks that
reason by mutual exclusivity, which remains an open challenge.
| 2020 | Computation and Language |
Multimodal and Multi-view Models for Emotion Recognition | Studies on emotion recognition (ER) show that combining lexical and acoustic
information results in more robust and accurate models. The majority of the
studies focus on settings where both modalities are available in training and
evaluation. However, in practice, this is not always the case; getting ASR
output may represent a bottleneck in a deployment pipeline due to computational
complexity or privacy-related constraints. To address this challenge, we study
the problem of efficiently combining acoustic and lexical modalities during
training while still providing a deployable acoustic model that does not
require lexical inputs. We first experiment with multimodal models and two
attention mechanisms to assess the extent of the benefits that lexical
information can provide. Then, we frame the task as a multi-view learning
problem to induce semantic information from a multimodal model into our
acoustic-only network using a contrastive loss function. Our multimodal model
outperforms the previous state of the art reported on the USC-IEMOCAP dataset
using lexical and acoustic information. Additionally, our multi-view-trained
acoustic network significantly surpasses models that have been exclusively
trained with acoustic features.
| 2019 | Computation and Language |
Compound Probabilistic Context-Free Grammars for Grammar Induction | We study a formalization of the grammar induction problem that models
sentences as being generated by a compound probabilistic context-free grammar.
In contrast to traditional formulations which learn a single stochastic
grammar, our grammar's rule probabilities are modulated by a per-sentence
continuous latent variable, which induces marginal dependencies beyond the
traditional context-free assumptions. Inference in this grammar is performed by
collapsed variational inference, in which an amortized variational posterior is
placed on the continuous variable, and the latent trees are marginalized out
with dynamic programming. Experiments on English and Chinese show the
effectiveness of our approach compared to recent state-of-the-art methods when
evaluated on unsupervised parsing.
| 2020 | Computation and Language |
Good Secretaries, Bad Truck Drivers? Occupational Gender Stereotypes in
Sentiment Analysis | In this work, we investigate the presence of occupational gender stereotypes
in sentiment analysis models. Such a task has implications for reducing
implicit biases in these models, which are being applied to an increasingly
wide variety of downstream tasks. We release a new gender-balanced dataset of
800 sentences pertaining to specific professions and propose a methodology for
using it as a test bench to evaluate sentiment analysis models. We evaluate the
presence of occupational gender stereotypes in 3 different models using our
approach, and explore their relationship with societal perceptions of
occupations.
| 2019 | Computation and Language |
Saliency-driven Word Alignment Interpretation for Neural Machine
Translation | Despite their original goal to jointly learn to align and translate, Neural
Machine Translation (NMT) models, especially Transformer, are often perceived
as not learning interpretable word alignments. In this paper, we show that NMT
models do learn interpretable word alignments, which could only be revealed
with proper interpretation methods. We propose a series of such methods that
are model-agnostic, are able to be applied either offline or online, and do not
require parameter update or architectural change. We show that under the force
decoding setup, the alignments induced by our interpretation method are of
better quality than fast-align for some systems, and when performing free
decoding, they agree well with the alignments induced by automatic alignment
tools.
| 2019 | Computation and Language |
Benchmarking Neural Machine Translation for Southern African Languages | Unlike major Western languages, most African languages are very
low-resourced. Furthermore, the resources that do exist are often scattered and
difficult to obtain and discover. As a result, the data and code for existing
research has rarely been shared. This has led to a struggle to reproduce reported
results, and few publicly available benchmarks for African machine translation
models exist. To start to address these problems, we trained neural machine
translation models for 5 Southern African languages on publicly-available
datasets. Code is provided for training the models and evaluating them on a
newly released evaluation set, with the aim of spurring future research in the
field for Southern African languages.
| 2019 | Computation and Language |
Embedding Projection for Targeted Cross-Lingual Sentiment: Model
Comparisons and a Real-World Study | Sentiment analysis benefits from large, hand-annotated resources in order to
train and test machine learning models, which are often data hungry. While some
languages, e.g., English, have a vast array of these resources, most
under-resourced languages do not, especially for fine-grained sentiment tasks,
such as aspect-level or targeted sentiment analysis. To improve this situation,
we propose a cross-lingual approach to sentiment analysis that is applicable to
under-resourced languages and takes into account target-level information. This
model incorporates sentiment information into bilingual distributional
representations, by jointly optimizing them for semantics and sentiment,
showing state-of-the-art performance at sentence-level when combined with
machine translation. The adaptation to targeted sentiment analysis on multiple
domains shows that our model outperforms other projection-based bilingual
embedding methods on binary targeted sentiment tasks. Our analysis on ten
languages demonstrates that the amount of unlabeled monolingual data has
surprisingly little effect on the sentiment results. As expected, the choice of
annotated source language for projection to a target leads to better results
for source-target language pairs which are similar. Therefore, our results
suggest that more efforts should be spent on the creation of resources for less
similar languages to those which are resource-rich already. Finally, a domain
mismatch leads to a decreased performance. This suggests resources in any
language should ideally cover varieties of domains.
| 2019 | Computation and Language |
Model-based annotation of coreference | Humans do not make inferences over texts, but over models of what texts are
about. When annotators are asked to annotate coreferent spans of text, it is
therefore a somewhat unnatural task. This paper presents an alternative in
which we preprocess documents, linking entities to a knowledge base, and turn
the coreference annotation task -- in our case limited to pronouns -- into an
annotation task where annotators are asked to assign pronouns to entities.
Model-based annotation is shown to lead to faster annotation and higher
inter-annotator agreement, and we argue that it also opens up for an
alternative approach to coreference resolution. We present two new coreference
benchmark datasets, for English Wikipedia and English teacher-student
dialogues, and evaluate state-of-the-art coreference resolvers on them.
| 2020 | Computation and Language |
Essence Knowledge Distillation for Speech Recognition | It is well known that a speech recognition system that combines multiple
acoustic models trained on the same data significantly outperforms a
single-model system. Unfortunately, real time speech recognition using a whole
ensemble of models is too computationally expensive. In this paper, we propose
to distill the knowledge of essence in an ensemble of models (i.e. the teacher
model) to a single model (i.e. the student model) that needs much less
computation to deploy. Previously, all the softened outputs of the teacher model
are used to optimize the student model. We argue that not all the outputs of
the ensemble are necessary to be distilled. Some of the outputs may even
contain noisy information that is useless or even harmful to the training of
the student model. In addition, we propose to train the student model with a
multitask learning approach by utilizing both the softened outputs of the teacher
model and the correct hard labels. The proposed method achieves some surprising
results on the Switchboard data set. When the student model is trained together
with the correct labels and the essence knowledge from the teacher model, it
not only significantly outperforms another single model with the same
architecture that is trained only with the correct labels, but also
consistently outperforms the teacher model that is used to generate the soft
labels.
| 2019 | Computation and Language |
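A sketch of the multitask objective described above: cross-entropy with the correct hard labels combined with a temperature-softened soft-target term against the teacher, plus an optional top-k filter on the teacher distribution as one simple stand-in for keeping only the "essence" of the ensemble outputs. The paper's actual selection criterion may differ, and all tensors below are toy data.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_labels,
                      temperature=2.0, alpha=0.5, top_k=None):
    """
    Multitask distillation: cross-entropy with the hard labels plus a
    soft cross-entropy term against the teacher's temperature-softened outputs
    (equivalent to KL up to a constant in the student parameters).
    `top_k` keeps only the teacher's k largest outputs per frame and renormalizes,
    a simple stand-in for dropping noisy low-probability teacher targets.
    """
    ce = F.cross_entropy(student_logits, hard_labels)

    t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    if top_k is not None:
        topk_vals, topk_idx = t_probs.topk(top_k, dim=-1)
        t_probs = torch.zeros_like(t_probs).scatter_(-1, topk_idx, topk_vals)
        t_probs = t_probs / t_probs.sum(dim=-1, keepdim=True)
    s_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    kd = -(t_probs * s_log_probs).sum(dim=-1).mean() * temperature ** 2
    return alpha * ce + (1 - alpha) * kd

# Toy tensors: a batch of 8 frames over 40 output classes (values illustrative).
student = torch.randn(8, 40, requires_grad=True)
teacher = torch.randn(8, 40)
labels = torch.randint(0, 40, (8,))
print(distillation_loss(student, teacher, labels, top_k=10))
```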
Auxiliary Interference Speaker Loss for Target-Speaker Speech
Recognition | In this paper, we propose a novel auxiliary loss function for target-speaker
automatic speech recognition (ASR). Our method automatically extracts and
transcribes the target speaker's utterances from a monaural mixture of multiple
speakers' speech given a short sample of the target speaker. The proposed
auxiliary loss function attempts to additionally maximize interference speaker
ASR accuracy during training. This will regularize the network to achieve a
better representation for speaker separation, thus achieving better accuracy on
the target-speaker ASR. We evaluated our proposed method using
two-speaker-mixed speech in various signal-to-interference-ratio conditions. We
first built a strong target-speaker ASR baseline based on the state-of-the-art
lattice-free maximum mutual information. This baseline achieved a word error
rate (WER) of 18.06% on the test set while a normal ASR trained with clean data
produced a completely corrupted result (WER of 84.71%). Then, our proposed loss
further reduced the WER by 6.6% relative to this strong baseline, achieving a
WER of 16.87%. In addition to the accuracy improvement, we also showed that the
auxiliary output branch for the proposed loss can even be used for a secondary
ASR for interference speakers' speech.
| 2019 | Computation and Language |
Leveraging Text Repetitions and Denoising Autoencoders in OCR
Post-correction | A common approach for improving OCR quality is a post-processing step based
on models correcting misdetected characters and tokens. These models are
typically trained on aligned pairs of OCR read text and their manually
corrected counterparts. In this paper we show that the requirement of manually
corrected training data can be alleviated by estimating the OCR errors from
repeating text spans found in large OCR read text corpora and generating
synthetic training examples following this error distribution. We use the
generated data for training a character-level neural seq2seq model and evaluate
the performance of the suggested model on a manually corrected corpus of
Finnish newspapers mostly from the 19th century. The results show that a clear
improvement over the underlying OCR system as well as previously suggested
models utilizing uniformly generated noise can be achieved.
| 2019 | Computation and Language |
Interpretable Question Answering on Knowledge Bases and Text | Interpretability of machine learning (ML) models becomes more relevant with
their increasing adoption. In this work, we address the interpretability of ML
based question answering (QA) models on a combination of knowledge bases (KB)
and text documents. We adapt post hoc explanation methods such as LIME and
input perturbation (IP) and compare them with the self-explanatory attention
mechanism of the model. For this purpose, we propose an automatic evaluation
paradigm for explanation methods in the context of QA. We also conduct a study
with human annotators to evaluate whether explanations help them identify
better QA models. Our results suggest that IP provides better explanations than
LIME or attention, according to both automatic and human evaluation. We obtain
the same ranking of methods in both experiments, which supports the validity of
our automatic evaluation paradigm.
| 2019 | Computation and Language |
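Input perturbation (IP), as compared above, assigns each input token an importance equal to the drop in the model's answer score when that token is masked out. A generic sketch with a stand-in scoring function (the real study plugs in the QA model's own confidence for its predicted answer):

```python
def perturbation_importance(question, score_fn, mask_token="[MASK]"):
    """
    Input-perturbation explanation: the importance of each token is the drop in
    the model's answer score when that token is masked. `score_fn` is any
    callable mapping a question string to a scalar confidence.
    """
    tokens = question.split()
    base = score_fn(question)
    importances = []
    for i in range(len(tokens)):
        perturbed = " ".join(tokens[:i] + [mask_token] + tokens[i + 1:])
        importances.append((tokens[i], base - score_fn(perturbed)))
    return sorted(importances, key=lambda x: -x[1])

# Toy scorer that merely rewards the presence of a keyword (illustrative only).
toy_score = lambda q: 0.9 if "capital" in q else 0.2
print(perturbation_importance("what is the capital of France", toy_score))
```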
Sharing Attention Weights for Fast Transformer | Recently, the Transformer machine translation system has shown strong results
by stacking attention layers on both the source and target-language sides. But
the inference of this model is slow due to the heavy use of dot-product
attention in auto-regressive decoding. In this paper we speed up Transformer
via a fast and lightweight attention model. More specifically, we share
attention weights in adjacent layers and enable the efficient re-use of hidden
states in a vertical manner. Moreover, the sharing policy can be jointly
learned with the MT model. We test our approach on ten WMT and NIST OpenMT
tasks. Experimental results show that it yields an average of 1.3X speed-up
(with almost no decrease in BLEU) on top of a state-of-the-art implementation
that has already adopted a cache for fast inference. Also, our approach obtains
a 1.8X speed-up when it works with the \textsc{Aan} model. This is even 16
times faster than the baseline with no use of the attention cache.
| 2,019 | Computation and Language |
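A minimal PyTorch sketch of the weight-sharing idea in the fast-Transformer abstract above: a layer can reuse the attention distribution computed by the adjacent layer below instead of recomputing it, so only its own value projection is applied. The module layout, dimensions and the two-layer toy stack are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedWeightAttention(nn.Module):
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)

    def forward(self, x, shared_attn=None):
        b, t, _ = x.shape
        def split(h):  # (b, t, d_model) -> (b, heads, t, d_head)
            return h.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        v = split(self.v(x))
        if shared_attn is None:
            q, k = split(self.q(x)), split(self.k(x))
            attn = F.softmax(q @ k.transpose(-2, -1) / self.d_head ** 0.5, dim=-1)
        else:
            attn = shared_attn  # reuse weights computed by the adjacent layer
        ctx = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(ctx), attn

layers = nn.ModuleList([SharedWeightAttention(64, 4) for _ in range(2)])
x = torch.randn(2, 5, 64)
y, attn = layers[0](x)                 # layer 0 computes attention weights
y, _ = layers[1](y, shared_attn=attn)  # layer 1 reuses them
print(y.shape)
```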
Enhancing PIO Element Detection in Medical Text Using Contextualized
Embedding | In this paper, we investigate a new approach to Population, Intervention and
Outcome (PIO) element detection, a common task in Evidence Based Medicine
(EBM). The purpose of this study is two-fold: to build a training dataset for
PIO element detection with minimum redundancy and ambiguity and to investigate
possible options in utilizing state of the art embedding methods for the task
of PIO element detection. For the former purpose, we build a new and improved
dataset by investigating the shortcomings of previously released datasets. For
the latter purpose, we leverage the state of the art text embedding,
Bidirectional Encoder Representations from Transformers (BERT), and build a
multi-label classifier. We show that choosing a domain specific pre-trained
embedding further optimizes the performance of the classifier. Furthermore, we
show that the model could be enhanced by using ensemble methods and boosting
techniques provided that features are adequately chosen.
| 2,019 | Computation and Language |
A Generative Model for Punctuation in Dependency Trees | Treebanks traditionally treat punctuation marks as ordinary words, but
linguists have suggested that a tree's "true" punctuation marks are not
observed (Nunberg, 1990). These latent "underlying" marks serve to delimit or
separate constituents in the syntax tree. When the tree's yield is rendered as
a written sentence, a string rewriting mechanism transduces the underlying
marks into "surface" marks, which are part of the observed (surface) string but
should not be regarded as part of the tree. We formalize this idea in a
generative model of punctuation that admits efficient dynamic programming. We
train it without observing the underlying marks, by locally maximizing the
incomplete data likelihood (similarly to EM). When we use the trained model to
reconstruct the tree's underlying punctuation, the results appear plausible
across 5 languages, and in particular, are consistent with Nunberg's analysis
of English. We show that our generative model can be used to beat baselines on
punctuation restoration. Also, our reconstruction of a sentence's underlying
punctuation lets us appropriately render the surface punctuation (via our
trained underlying-to-surface mechanism) when we syntactically transform the
sentence.
| 2,019 | Computation and Language |
Exploring the Role of Prior Beliefs for Argument Persuasion | Public debate forums provide a common platform for exchanging opinions on a
topic of interest. While recent studies in natural language processing (NLP)
have provided empirical evidence that the language of the debaters and their
patterns of interaction play a key role in changing the mind of a reader,
research in psychology has shown that prior beliefs can affect our
interpretation of an argument and could therefore constitute a competing
alternative explanation for resistance to changing one's stance. To study the
actual effect of language use vs. prior beliefs on persuasion, we provide a new
dataset and propose a controlled setting that takes into consideration two
reader level factors: political and religious ideology. We find that prior
beliefs affected by these reader level factors play a more important role than
language use effects and argue that it is important to account for them in NLP
studies of persuasion.
| 2,019 | Computation and Language |
A Corpus for Modeling User and Language Effects in Argumentation on
Online Debating | Existing argumentation datasets have succeeded in allowing researchers to
develop computational methods for analyzing the content, structure and
linguistic features of argumentative text. They have been much less successful
in fostering studies of the effect of "user" traits -- characteristics and
beliefs of the participants -- on the debate/argument outcome as this type of
user information is generally not available. This paper presents a dataset of
78,376 debates generated over a 10-year period along with surprisingly
comprehensive participant profiles. We also complete an example study using the
dataset to analyze the effect of selected user traits on the debate outcome in
comparison to the linguistic features typically employed in studies of this
kind.
| 2,019 | Computation and Language |
Determining Relative Argument Specificity and Stance for Complex
Argumentative Structures | Systems for automatic argument generation and debate require the ability to
(1) determine the stance of any claims employed in the argument and (2) assess
the specificity of each claim relative to the argument context. Existing work
on understanding claim specificity and stance, however, has been limited to the
study of argumentative structures that are relatively shallow, most often
consisting of a single claim that directly supports or opposes the argument
thesis. In this paper, we tackle these tasks in the context of complex
arguments on a diverse set of topics. In particular, our dataset consists of
manually curated argument trees for 741 controversial topics covering 95,312
unique claims; lines of argument are generally of depth 2 to 6. We find that as
the distance between a pair of claims increases along the argument path,
determining the relative specificity of a pair of claims becomes easier and
determining their relative stance becomes harder.
| 2,019 | Computation and Language |
Eliciting Knowledge from Experts: Automatic Transcript Parsing for
Cognitive Task Analysis | Cognitive task analysis (CTA) is a type of analysis in applied psychology
aimed at eliciting and representing the knowledge and thought processes of
domain experts. In CTA, often heavy human labor is involved to parse the
interview transcript into structured knowledge (e.g., flowchart for different
actions). To reduce human efforts and scale the process, automated CTA
transcript parsing is desirable. However, this task has unique challenges as
(1) it requires the understanding of long-range context information in
conversational text; and (2) the amount of labeled data is limited and
indirect---i.e., context-aware, noisy, and low-resource. In this paper, we
propose a weakly-supervised information extraction framework for automated CTA
transcript parsing. We partition the parsing process into a sequence labeling
task and a text span-pair relation extraction task, with distant supervision
from human-curated protocol files. To model long-range context information for
extracting sentence relations, neighbor sentences are involved as a part of
input. Different types of models for capturing context dependency are then
applied. We manually annotate real-world CTA transcripts to facilitate the
evaluation of the parsing tasks.
| 2,019 | Computation and Language |
PKUSEG: A Toolkit for Multi-Domain Chinese Word Segmentation | Chinese word segmentation (CWS) is a fundamental step of Chinese natural
language processing. In this paper, we build a new toolkit, named PKUSEG, for
multi-domain word segmentation. Unlike existing single-model toolkits, PKUSEG
targets multi-domain word segmentation and provides separate models for
different domains, such as web, medicine, and tourism. Besides, due to the lack
of labeled data in many domains, we propose a domain adaptation paradigm to
introduce cross-domain semantic knowledge via a translation system. Through
this method, we generate synthetic data using a large amount of unlabeled data
in the target domain and then obtain a word segmentation model for the target
domain. We also further refine the performance of the default model with the
help of synthetic data. Experiments show that PKUSEG achieves high performance
on multiple domains. The new toolkit also supports POS tagging and model
training to adapt to various application scenarios. The toolkit is now freely
and publicly available for research and industrial use.
| 2,022 | Computation and Language |
Morphological Irregularity Correlates with Frequency | We present a study of morphological irregularity. Following recent work, we
define an information-theoretic measure of irregularity based on the
predictability of forms in a language. Using a neural transduction model, we
estimate this quantity for the forms in 28 languages. We first present several
validatory and exploratory analyses of irregularity. We then show that our
analyses provide evidence for a correlation between irregularity and frequency:
higher frequency items are more likely to be irregular and irregular items are
more likely to be highly frequent. To our knowledge, this result is the first of
its breadth and confirms longstanding proposals from the linguistics
literature. The correlation is more robust when aggregated at the level of
whole paradigms--providing support for models of linguistic structure in which
inflected forms are unified by abstract underlying stems or lexemes. Code is
available at https://github.com/shijie-wu/neural-transducer.
| 2,019 | Computation and Language |
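A toy sketch of the correlational analysis described in the morphological-irregularity abstract above, using invented irregularity scores (standing in for the negative log-probabilities a neural transducer would assign) and invented frequencies; it shows both the form-level and the paradigm-aggregated Spearman correlations:

```python
import numpy as np
from scipy.stats import spearmanr

forms        = ["went", "goes", "walked", "walks", "was", "is"]
irregularity = np.array([5.1, 1.2, 0.4, 0.5, 6.0, 4.0])      # higher = less predictable (toy values)
frequency    = np.array([9000, 4000, 300, 350, 15000, 30000])  # toy corpus counts

# Form-level correlation between irregularity and log frequency
rho, p = spearmanr(irregularity, np.log(frequency))
print(f"form level: Spearman rho={rho:.2f}, p={p:.3f}")

# Aggregating at the paradigm (lexeme) level, where the paper reports the
# correlation is more robust
paradigm = np.array([0, 0, 1, 1, 2, 2])   # go, go, walk, walk, be, be
agg_irr = [irregularity[paradigm == i].mean() for i in range(3)]
agg_freq = [np.log(frequency[paradigm == i]).mean() for i in range(3)]
print("paradigm level:", spearmanr(agg_irr, agg_freq))
```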
Inducing Syntactic Trees from BERT Representations | We use the English model of BERT and explore how a deletion of one word in a
sentence changes representations of other words. Our hypothesis is that
removing a reducible word (e.g. an adjective) does not affect the
representation of other words so much as removing e.g. the main verb, which
makes the sentence ungrammatical and of "high surprise" for the language model.
We estimate reducibilities of individual words and also of longer continuous
phrases (word n-grams), study their syntax-related properties, and then also
use them to induce full dependency trees.
| 2,019 | Computation and Language |
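A sketch of the word-deletion probe from the abstract above, with a synthetic stand-in for the contextual encoder (a real setup would use BERT); only the re-encode-and-compare distance computation reflects the probing idea, and all vectors are toy values:

```python
import numpy as np

def word_vec(word, dim=16):
    """Deterministic toy word vector (stands in for an embedding lookup)."""
    rs = np.random.default_rng(abs(hash(word)) % (2 ** 32))
    return rs.normal(size=dim)

def encode(words):
    """Toy 'contextual' encoder: word vectors shifted by the sentence mean.
    Replace with a real contextual encoder such as BERT in practice."""
    vecs = np.stack([word_vec(w) for w in words])
    return vecs + 0.3 * vecs.mean(axis=0)

def deletion_effects(words):
    """Higher score = deleting the word perturbs the remaining words more,
    i.e. the word is less reducible under the probe's hypothesis."""
    full = encode(words)
    scores = []
    for i in range(len(words)):
        reduced_repr = encode(words[:i] + words[i + 1:])
        kept = np.delete(full, i, axis=0)
        cos = np.sum(kept * reduced_repr, axis=1) / (
            np.linalg.norm(kept, axis=1) * np.linalg.norm(reduced_repr, axis=1))
        scores.append((words[i], round(float(1.0 - cos.mean()), 4)))
    return scores

print(deletion_effects("the old dog barked loudly".split()))
```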
Lattice-Based Unsupervised Test-Time Adaptation of Neural Network
Acoustic Models | Acoustic model adaptation to unseen test recordings aims to reduce the
mismatch between training and testing conditions. Most adaptation schemes for
neural network models require the use of an initial one-best transcription for
the test data, generated by an unadapted model, in order to estimate the
adaptation transform. It has been found that adaptation methods using
discriminative objective functions - such as cross-entropy loss - often require
careful regularisation to avoid over-fitting to errors in the one-best
transcriptions. In this paper we solve this problem by performing
discriminative adaptation using lattices obtained from a first pass decoding,
an approach that can be readily integrated into the lattice-free maximum mutual
information (LF-MMI) framework. We investigate this approach on three
transcription tasks of varying difficulty: TED talks, multi-genre broadcast
(MGB) and a low-resource language (Somali). We find that our proposed approach
enables many more parameters to be adapted without over-fitting being observed,
and is successful even when the initial transcription has a WER in excess of
50%.
| 2,019 | Computation and Language |
EmotionX-KU: BERT-Max based Contextual Emotion Classifier | We propose a contextual emotion classifier based on a transferable language
model and dynamic max pooling, which predicts the emotion of each utterance in
a dialogue. A representative emotion analysis task, EmotionX, requires models
to consider contextual information from colloquial dialogues and to deal with a
class imbalance problem. To alleviate these problems, our model leverages the
self-attention based transferable language model and the weighted cross entropy
loss. Furthermore, we apply post-training and fine-tuning mechanisms to enhance
the domain adaptability of our model and utilize several machine learning
techniques to improve its performance. We conduct experiments on two
emotion-labeled datasets named Friends and EmotionPush. As a result, our model
outperforms the previous state-of-the-art model and also shows competitive
performance in the EmotionX 2019 challenge. The code will be available on the
GitHub page.
| 2,019 | Computation and Language |
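A minimal sketch of the weighted cross-entropy component mentioned in the EmotionX-KU abstract above, with class weights set inversely proportional to label frequency; the label counts below are invented, not the Friends/EmotionPush statistics:

```python
import torch
import torch.nn as nn

# Toy label counts (e.g. neutral, joy, sadness, anger); rarer classes get larger weights
label_counts = torch.tensor([5000., 1200., 800., 300.])
weights = label_counts.sum() / (len(label_counts) * label_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 4)             # (batch, num_classes) from the classifier head
targets = torch.randint(0, 4, (8,))    # gold emotion labels
loss = criterion(logits, targets)
print(weights, loss.item())
```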
Gated Embeddings in End-to-End Speech Recognition for
Conversational-Context Fusion | We present a novel conversational-context aware end-to-end speech recognizer
based on a gated neural network that incorporates
conversational-context/word/speech embeddings. Unlike conventional speech
recognition models, our model learns longer conversational-context information
that spans across sentences and is consequently better at recognizing long
conversations. Specifically, we propose to use the text-based external word
and/or sentence embeddings (i.e., fastText, BERT) within an end-to-end
framework, yielding a significant improvement in word error rate with better
conversational-context representation. We evaluate the models on the
Switchboard conversational speech corpus and show that our model outperforms
standard end-to-end speech recognition models.
| 2,019 | Computation and Language |
Simple Natural Language Processing Tools for Danish | This technical note describes a set of baseline tools for automatic
processing of Danish text. The tools are machine-learning based, using natural
language processing models trained over previously annotated documents. They
are maintained at ITU Copenhagen and will always be freely available.
| 2,019 | Computation and Language |
Compositional Semantic Parsing Across Graphbanks | Most semantic parsers that map sentences to graph-based meaning
representations are hand-designed for specific graphbanks. We present a
compositional neural semantic parser which achieves, for the first time,
competitive accuracies across a diverse range of graphbanks. Incorporating BERT
embeddings and multi-task learning improves the accuracy further, setting new
states of the art on DM, PAS, PSD, AMR 2015 and EDS.
| 2,019 | Computation and Language |
The Impact of Preprocessing on Arabic-English Statistical and Neural
Machine Translation | Neural networks have become the state-of-the-art approach for machine
translation (MT) in many languages. While linguistically-motivated tokenization
techniques were shown to have significant effects on the performance of
statistical MT, it remains unclear if those techniques are well suited for
neural MT. In this paper, we systematically compare neural and statistical MT
models for Arabic-English translation on data preprocessed by various prominent
tokenization schemes. Furthermore, we consider a range of data and vocabulary
sizes and compare their effect on both approaches. Our empirical results show
that the best choice of tokenization scheme is largely based on the type of
model and the size of data. We also show that we can gain significant
improvements using a system selection that combines the output from neural and
statistical MT.
| 2,019 | Computation and Language |
Semantic expressive capacity with bounded memory | We investigate the capacity of mechanisms for compositional semantic parsing
to describe relations between sentences and semantic representations.
We prove that in order to represent certain relations, mechanisms which are
syntactically projective must be able to remember an unbounded number of
locations in the semantic representations, where nonprojective mechanisms need
not.
This is the first result of this kind, and has consequences both for
grammar-based and for neural systems.
| 2,019 | Computation and Language |
Relating Simple Sentence Representations in Deep Neural Networks and the
Brain | What is the relationship between sentence representations learned by deep
recurrent models against those encoded by the brain? Is there any
correspondence between hidden layers of these recurrent models and brain
regions when processing sentences? Can these deep models be used to synthesize
brain data which can then be utilized in other extrinsic tasks? We investigate
these questions using sentences with simple syntax and semantics (e.g., The
bone was eaten by the dog.). We consider multiple neural network architectures,
including recently proposed ELMo and BERT. We use magnetoencephalography (MEG)
brain recording data collected from human subjects when they were reading these
simple sentences.
Overall, we find that BERT's activations correlate the best with MEG brain
data. We also find that the deep network representation can be used to generate
brain data from new sentences to augment existing brain data.
To the best of our knowledge, this is the first work showing that the MEG
brain recording when reading a word in a sentence can be used to distinguish
earlier words in the sentence. Our exploration is also the first to use deep
neural network representations to generate synthetic brain data and to show
that it helps in improving subsequent stimuli decoding task accuracy.
| 2,019 | Computation and Language |
Training Models to Extract Treatment Plans from Clinical Notes Using
Contents of Sections with Headings | Objective: Using natural language processing (NLP) to find sentences that
state treatment plans in a clinical note, would automate plan extraction and
would further enable their use in tools that help providers and care managers.
However, as in most NLP tasks on clinical text, creating a gold standard to
train and test NLP models is tedious and expensive. Fortuitously, sometimes but
not always clinical notes contain sections with a heading that identifies the
section as a plan. Leveraging the contents of such labeled sections as noisy
training data, we assessed the accuracy of NLP models trained with the data.
Methods: We used common variations of plan headings and rule-based heuristics
to find plan sections with headings in clinical notes, and we extracted
sentences from them and formed a noisy training set of plan sentences. We
trained Support Vector Machine (SVM) and Convolutional Neural Network (CNN)
models with the data. We measured accuracy of the trained models on the noisy
dataset using ten-fold cross validation and separately on a set-aside manually
annotated dataset.
Results: About 13% of 117,730 clinical notes contained treatment plan
sections with recognizable headings in the 1001 longitudinal patient records
that were obtained from Cleveland Clinic under an IRB approval. We were able to
extract and create a noisy training set of 13,492 plan sentences from the
clinical notes. The CNN achieved the best F-measures, 0.91 and 0.97, in the
cross-validation and set-aside evaluation experiments, respectively. The SVM
slightly underperformed, with F-measures of 0.89 and 0.96 in the same
experiments.
Conclusion: Our study showed that training supervised learning models
on noisy plan sentences was effective in identifying plan sentences in all clinical
notes. More broadly, sections with informal headings in clinical notes can be a
good source for generating effective training data.
| 2,019 | Computation and Language |
Findings of the First Shared Task on Machine Translation Robustness | We share the findings of the first shared task on improving robustness of
Machine Translation (MT). The task provides a testbed representing challenges
facing MT models deployed in the real world, and facilitates new approaches to
improve models' robustness to noisy input and domain mismatch. We focus on two
language pairs (English-French and English-Japanese), and the submitted systems
are evaluated on a blind test set consisting of noisy comments on Reddit and
professionally sourced translations. As a new task, we received 23 submissions
by 11 participating teams from universities, companies, national labs, etc. All
submitted systems achieved large improvements over baselines, with the best
improvement having +22.33 BLEU. We evaluated submissions by both human judgment
and automatic evaluation (BLEU), which shows high correlations (Pearson's r =
0.94 and 0.95). Furthermore, we conducted a qualitative analysis of the
submitted systems using compare-mt, which revealed their salient differences in
handling challenges in this task. Such analysis provides additional insights
when there is occasional disagreement between human judgment and BLEU, e.g.
systems better at producing colloquial expressions received higher score from
human judgment.
| 2,019 | Computation and Language |
A Concise Model for Multi-Criteria Chinese Word Segmentation with
Transformer Encoder | Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the
relations among the multiple heterogeneous segmentation criteria and further
improve the performance of each single criterion. Previous work usually regards
MCCWS as different tasks, which are learned together under the multi-task
learning framework. In this paper, we propose a concise but effective unified
model for MCCWS, which is fully-shared for all the criteria. By leveraging the
powerful ability of the Transformer encoder, the proposed unified model can
segment Chinese text according to a unique criterion-token indicating the
output criterion. Besides, the proposed unified model can segment both
simplified and traditional Chinese and has an excellent transfer capability.
Experiments on eight datasets with different criteria show that our model
outperforms our single-criterion baseline model and other multi-criteria
models. Source codes of this paper are available on Github
https://github.com/acphile/MCCWS.
| 2,020 | Computation and Language |
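A sketch of the criterion-token idea from the MCCWS abstract above: a single shared encoder is told which segmentation criterion to follow by a special token prepended to the character sequence. The vocabulary sizes, the plain linear tag layer and the criterion names are placeholders, not the paper's architecture:

```python
import torch
import torch.nn as nn

criteria = {"<pku>": 0, "<msr>": 1, "<ctb>": 2}      # toy criterion names
char_vocab_size, d_model, n_tags = 100, 32, 4         # tags: B, M, E, S

embed = nn.Embedding(char_vocab_size + len(criteria), d_model)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), 2)
tagger = nn.Linear(d_model, n_tags)

def segment_logits(char_ids, criterion):
    crit_id = char_vocab_size + criteria[criterion]    # criterion tokens sit after the char ids
    ids = torch.cat([torch.tensor([[crit_id]]), char_ids], dim=1)
    hidden = encoder(embed(ids))
    return tagger(hidden[:, 1:])                       # drop the criterion position

chars = torch.randint(0, char_vocab_size, (1, 6))      # one toy sentence of 6 characters
print(segment_logits(chars, "<pku>").shape)            # -> (1, 6, 4)
```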
Supervised Contextual Embeddings for Transfer Learning in Natural
Language Processing Tasks | Pre-trained word embeddings are the primary method for transfer learning in
several Natural Language Processing (NLP) tasks. Recent works have focused on
using unsupervised techniques such as language modeling to obtain these
embeddings. In contrast, this work focuses on extracting representations from
multiple pre-trained supervised models, which enriches word embeddings with
task and domain specific knowledge. Experiments performed in cross-task,
cross-domain and cross-lingual settings indicate that such supervised
embeddings are helpful, especially in the low-resource setting, but the extent
of gains is dependent on the nature of the task and domain. We make our code
publicly available.
| 2,019 | Computation and Language |
Lost in Translation: Loss and Decay of Linguistic Richness in Machine
Translation | This work presents an empirical approach to quantifying the loss of lexical
richness in Machine Translation (MT) systems compared to Human Translation
(HT). Our experiments show how current MT systems indeed fail to render the
lexical diversity of human generated or translated text. The inability of MT
systems to generate diverse outputs and its tendency to exacerbate already
frequent patterns while ignoring less frequent ones, might be the underlying
cause for, among others, the currently heavily debated issues related to
gender-biased output. Can we indeed, aside from biased data, talk about an algorithm
that exacerbates seen biases?
| 2,019 | Computation and Language |
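Two simple lexical-richness measures of the kind such an analysis relies on, sketched for illustration: type/token ratio and Yule's K (lower K indicates a richer vocabulary). The two sample "translations" are invented:

```python
from collections import Counter

def ttr(tokens):
    """Type/token ratio: share of distinct words."""
    return len(set(tokens)) / len(tokens)

def yules_k(tokens):
    """Yule's characteristic K; lower values indicate richer vocabulary."""
    counts = Counter(tokens)
    n = len(tokens)
    freq_of_freq = Counter(counts.values())            # V_i: number of types seen i times
    s2 = sum(i * i * v for i, v in freq_of_freq.items())
    return 1e4 * (s2 - n) / (n * n)

human = "the talks were fruitful and the negotiators left visibly pleased".split()
mt = "the talks were good and the talks were good for the people".split()
for name, toks in [("human", human), ("mt", mt)]:
    print(f"{name}: TTR={ttr(toks):.2f}  Yule's K={yules_k(toks):.1f}")
```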
Widening the Representation Bottleneck in Neural Machine Translation
with Lexical Shortcuts | The transformer is a state-of-the-art neural translation model that uses
attention to iteratively refine lexical representations with information drawn
from the surrounding context. Lexical features are fed into the first layer and
propagated through a deep network of hidden layers. We argue that the need to
represent and propagate lexical features in each layer limits the model's
capacity for learning and representing other information relevant to the task.
To alleviate this bottleneck, we introduce gated shortcut connections between
the embedding layer and each subsequent layer within the encoder and decoder.
This enables the model to access relevant lexical content dynamically, without
expending limited resources on storing it within intermediate states. We show
that the proposed modification yields consistent improvements over a baseline
transformer on standard WMT translation tasks in 5 translation directions (0.9
BLEU on average) and reduces the amount of lexical information passed along the
hidden layers. We furthermore evaluate different ways to integrate lexical
connections into the transformer architecture and present ablation experiments
exploring the effect of proposed shortcuts on model behavior.
| 2,019 | Computation and Language |
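A minimal sketch of a gated lexical shortcut as described in the abstract above: a learned gate mixes the token embeddings back into a layer's hidden state, so lexical content need not be carried through every intermediate layer. The surrounding Transformer layer is omitted and the dimensions are arbitrary:

```python
import torch
import torch.nn as nn

class LexicalShortcut(nn.Module):
    def __init__(self, d_model):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, hidden, embeddings):
        # Gate decides, per position and dimension, how much lexical content to re-inject
        g = torch.sigmoid(self.gate(torch.cat([hidden, embeddings], dim=-1)))
        return g * embeddings + (1.0 - g) * hidden

d_model = 64
shortcut = LexicalShortcut(d_model)
emb = torch.randn(2, 10, d_model)       # output of the embedding layer
hid = torch.randn(2, 10, d_model)       # hidden state of some encoder layer
print(shortcut(hid, emb).shape)         # -> (2, 10, 64)
```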
Leveraging Acoustic Cues and Paralinguistic Embeddings to Detect
Expression from Voice | Millions of people reach out to digital assistants such as Siri every day,
asking for information, making phone calls, seeking assistance, and much more.
The expectation is that such assistants should understand the intent of the
user's query. Detecting the intent of a query from a short, isolated utterance
is a difficult task. Intent cannot always be obtained from speech-recognized
transcriptions. A transcription driven approach can interpret what has been
said but fails to acknowledge how it has been said, and as a consequence, may
ignore the expression present in the voice. Our work investigates whether a
system can reliably detect vocal expression in queries using acoustic and
paralinguistic embedding. Results show that the proposed method offers a
relative equal error rate (EER) decrease of 60% compared to a bag-of-word based
system, corroborating that expression is significantly represented by vocal
attributes, rather than being purely lexical. Addition of emotion embedding
helped to reduce the EER by 30% relative to the acoustic embedding,
demonstrating the relevance of emotion in expressive voice.
| 2,019 | Computation and Language |
GPT-based Generation for Classical Chinese Poetry | We present a simple yet effective method for generating high quality
classical Chinese poetry with Generative Pre-trained Language Model (GPT). The
method adopts a simple GPT model, without using any human crafted rules or
features, or designing any additional neural components. While the proposed
model learns to generate various forms of classical Chinese poems, including
Jueju, L\"{u}shi, various Cipai and Couples, the generated poems are of very
high quality. We also propose and implement a method to fine-tune the model to
generate acrostic poetry. To the best of our knowledge, this is the first work to
employ GPT in developing a poetry generation system. We have released an online
mini demonstration program on WeChat to show the generation capability of the
proposed method for classical Chinese poetry.
| 2,019 | Computation and Language |
The CUED's Grammatical Error Correction Systems for BEA-2019 | We describe two entries from the Cambridge University Engineering Department
to the BEA 2019 Shared Task on grammatical error correction. Our submission to
the low-resource track is based on prior work on using finite state transducers
together with strong neural language models. Our system for the restricted
track is a purely neural system consisting of neural language models and neural
machine translation models trained with back-translation and a combination of
checkpoint averaging and fine-tuning -- without the help of any additional
tools like spell checkers. The latter system has been used inside a separate
system combination entry in cooperation with the Cambridge University Computer
Lab.
| 2,019 | Computation and Language |
Fake News Detection using Stance Classification: A Survey | This paper surveys and presents recent academic work carried out within the
field of stance classification and fake news detection. Echo chambers and the
model organism problem are examples that pose challenges to acquiring
high-quality data, due to opinions being polarised in microblogs. Nevertheless, it is
shown that several machine learning approaches achieve promising results in
classifying stance. Some use crowd stance for fake news detection, such as the
approach in [Dungs et al., 2018] using Hidden Markov Models. Furthermore,
feature engineering has significant importance in several approaches, as is
shown in [Aker et al., 2017]. This paper additionally includes a proposal of a
system implementation based on the presented survey.
| 2,019 | Computation and Language |
Empirical Evaluation of Sequence-to-Sequence Models for Word Discovery
in Low-resource Settings | Since Bahdanau et al. [1] first introduced attention for neural machine
translation, most sequence-to-sequence models made use of attention mechanisms
[2, 3, 4]. While they produce soft-alignment matrices that could be interpreted
as alignment between target and source languages, we lack metrics to quantify
their quality, being unclear which approach produces the best alignments. This
paper presents an empirical evaluation of 3 main sequence-to-sequence models
(CNN, RNN and Transformer-based) for word discovery from unsegmented phoneme
sequences. This task consists of aligning word sequences in a source language
with phoneme sequences in a target language, inferring from it word
segmentation on the target side [5]. Evaluating word segmentation quality can
be seen as an extrinsic evaluation of the soft-alignment matrices produced
during training. Our experiments in a low-resource scenario on Mboshi and
English languages (both aligned to French) show that RNNs surprisingly
outperform CNNs and Transformer for this task. Our results are confirmed by an
intrinsic evaluation of alignment quality through the use of Average Normalized
Entropy (ANE). Lastly, we improve our best word discovery model by using an
alignment entropy confidence measure that accumulates ANE over all the
occurrences of a given alignment pair in the collection.
| 2,019 | Computation and Language |
Latent Variable Sentiment Grammar | Neural models have been investigated for sentiment classification over
constituent trees. They learn phrase composition automatically by encoding tree
structures but do not explicitly model sentiment composition, which requires
encoding sentiment class labels. To this end, we investigate two formalisms with
deep sentiment representations that capture sentiment subtype expressions by
latent variables and Gaussian mixture vectors, respectively. Experiments on
Stanford Sentiment Treebank (SST) show the effectiveness of sentiment grammar
over vanilla neural encoders. Using ELMo embeddings, our method gives the best
results on this benchmark.
| 2,019 | Computation and Language |
Observing Dialogue in Therapy: Categorizing and Forecasting Behavioral
Codes | Automatically analyzing dialogue can help understand and guide behavior in
domains such as counseling, where interactions are largely mediated by
conversation. In this paper, we study modeling behavioral codes used to assess a
psychotherapy treatment style called Motivational Interviewing (MI), which is
effective for addressing substance abuse and related problems. Specifically, we
address the problem of providing real-time guidance to therapists with a
dialogue observer that (1) categorizes therapist and client MI behavioral codes
and, (2) forecasts codes for upcoming utterances to help guide the conversation
and potentially alert the therapist. For both tasks, we define neural network
models that build upon recent successes in dialogue modeling. Our experiments
demonstrate that our models can outperform several baselines for both tasks. We
also report the results of a careful analysis that reveals the impact of the
various network design tradeoffs for modeling therapy dialogue.
| 2,019 | Computation and Language |
A Novel Bi-directional Interrelated Model for Joint Intent Detection and
Slot Filling | A spoken language understanding (SLU) system includes two main tasks, slot
filling (SF) and intent detection (ID). Jointly modeling the two tasks is
becoming a trend in SLU, but the bi-directional interrelated connections
between the intent and slots are not established in the existing joint models.
In this paper, we propose a novel bi-directional interrelated model for joint
intent detection and slot filling. We introduce an SF-ID network to establish
direct connections for the two tasks to help them promote each other mutually.
Besides, we design an entirely new iteration mechanism inside the SF-ID network
to enhance the bi-directional interrelated connections. The experimental
results show that the relative improvement in the sentence-level semantic frame
accuracy of our model is 3.79% and 5.42% on ATIS and Snips datasets,
respectively, compared to the state-of-the-art model.
| 2,019 | Computation and Language |
Evaluating Language Model Finetuning Techniques for Low-resource
Languages | Unlike mainstream languages (such as English and French), low-resource
languages often suffer from a lack of expert-annotated corpora and benchmark
resources that make it hard to apply state-of-the-art techniques directly. In
this paper, we alleviate this scarcity problem for the low-resourced Filipino
language in two ways. First, we introduce a new benchmark language modeling
dataset in Filipino which we call WikiText-TL-39. Second, we show that language
model finetuning techniques such as BERT and ULMFiT can be used to consistently
train robust classifiers in low-resource settings, experiencing at most a
0.0782 increase in validation error when the number of training examples is
decreased from 10K to 1K while finetuning using a privately-held sentiment
dataset.
| 2,019 | Computation and Language |
Multilingual Bottleneck Features for Query by Example Spoken Term
Detection | State of the art solutions to query by example spoken term detection
(QbE-STD) usually rely on bottleneck feature representation of the query and
audio document to perform dynamic time warping (DTW) based template matching.
Here, we present a study on QbE-STD performance using several monolingual as
well as multilingual bottleneck features extracted from feed forward networks.
Then, we propose to employ residual networks (ResNet) to estimate the
bottleneck features and show significant improvements over the corresponding
feed forward network based features. The neural networks are trained on
the GlobalPhone corpus and QbE-STD experiments are performed on the very challenging
QUESST 2014 database.
| 2,019 | Computation and Language |
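A sketch of the DTW template-matching step used in QbE-STD, computing a length-normalised alignment cost between the query's bottleneck-feature sequence and a candidate segment with a per-frame cosine distance; the bottleneck network itself is not shown and the features below are random placeholders:

```python
import numpy as np

def cosine_dist(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def dtw_cost(query, segment):
    """Classic DTW alignment cost between two feature sequences."""
    n, m = len(query), len(segment)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = cosine_dist(query[i - 1], segment[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m] / (n + m)          # length-normalised alignment cost

rng = np.random.default_rng(0)
query = rng.normal(size=(20, 32))        # 20 frames of 32-dim bottleneck features
segment = rng.normal(size=(35, 32))      # a candidate region of the audio document
print(f"detection score: {-dtw_cost(query, segment):.3f}")   # higher = better match
```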
Self-Supervised Dialogue Learning | The sequential order of utterances is often meaningful in coherent dialogues,
and the order changes of utterances could lead to low-quality and incoherent
conversations. We consider the order information as a crucial supervised signal
for dialogue learning, which, however, has been neglected by many previous
dialogue systems. Therefore, in this paper, we introduce a self-supervised
learning task, inconsistent order detection, to explicitly capture the flow of
conversation in dialogues. Given a sampled utterance pair triple, the task is
to predict whether it is ordered or misordered. Then we propose a
sampling-based self-supervised network SSN to perform the prediction with
sampled triple references from previous dialogue history. Furthermore, we
design a joint learning framework where SSN can guide the dialogue systems
towards more coherent and relevant dialogue learning through adversarial
training. We demonstrate that the proposed methods can be applied to both
open-domain and task-oriented dialogue scenarios, and achieve the new
state-of-the-art performance on the OpenSubtitles and Movie-Ticket Booking
datasets.
| 2,019 | Computation and Language |
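A sketch of the inconsistent-order-detection pretext task from the abstract above: a window of three consecutive utterance vectors is kept intact (label 1) or misordered (label 0) and fed to a binary classifier. Utterance vectors are assumed to be pre-computed, and the sampling-based reference mechanism and adversarial training of the full SSN model are not reproduced:

```python
import random
import torch
import torch.nn as nn

d = 32
classifier = nn.Sequential(nn.Linear(3 * d, 64), nn.ReLU(), nn.Linear(64, 1))

def make_example(dialogue, rng=random):
    """dialogue: list of utterance vectors in their true order."""
    start = rng.randrange(len(dialogue) - 2)
    triple = list(dialogue[start:start + 3])
    label = 1.0
    if rng.random() < 0.5:                           # corrupt half of the samples
        triple = [triple[2], triple[0], triple[1]]   # a guaranteed misordering
        label = 0.0
    return torch.cat(triple), torch.tensor([label])

dialogue = [torch.randn(d) for _ in range(10)]
x, y = make_example(dialogue)
loss = nn.functional.binary_cross_entropy_with_logits(classifier(x), y)
loss.backward()
print(round(loss.item(), 4))
```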
BERTphone: Phonetically-Aware Encoder Representations for
Utterance-Level Speaker and Language Recognition | We introduce BERTphone, a Transformer encoder trained on large speech corpora
that outputs phonetically-aware contextual representation vectors that can be
used for both speaker and language recognition. This is accomplished by
training on two objectives: the first, inspired by adapting BERT to the
continuous domain, involves masking spans of input frames and reconstructing
the whole sequence for acoustic representation learning; the second, inspired
by the success of bottleneck features from ASR, is a sequence-level CTC loss
applied to phoneme labels for phonetic representation learning. We pretrain two
BERTphone models (one on Fisher and one on TED-LIUM) and use them as feature
extractors into x-vector-style DNNs for both tasks. We attain a
state-of-the-art $C_{\text{avg}}$ of 6.16 on the challenging LRE07 3sec
closed-set language recognition task. On Fisher and VoxCeleb speaker
recognition tasks, we see an 18% relative reduction in speaker EER when
training on BERTphone vectors instead of MFCCs. In general, BERTphone
outperforms previous phonetic pretraining approaches on the same data. We
release our code and models at
https://github.com/awslabs/speech-representations.
| 2,020 | Computation and Language |
Inter and Intra Document Attention for Depression Risk Assessment | We take interest in the early assessment of risk for depression in social
media users. We focus on the eRisk 2018 dataset, which represents users as a
sequence of their written online contributions. We implement four RNN-based
systems to classify the users. We explore several aggregation methods to
combine predictions on individual posts. Our best model reads through all
writings of a user in parallel but uses an attention mechanism to prioritize
the most important ones at each timestep.
| 2,019 | Computation and Language |
Merge and Label: A novel neural network architecture for nested NER | Named entity recognition (NER) is one of the best studied tasks in natural
language processing. However, most approaches are not capable of handling
nested structures which are common in many applications. In this paper we
introduce a novel neural network architecture that first merges tokens and/or
entities into entities forming nested structures, and then labels each of them
independently. Unlike previous work, our merge and label approach predicts
real-valued instead of discrete segmentation structures, which allows it to
combine word and nested entity embeddings while maintaining differentiability.
We evaluate our approach using the ACE 2005 Corpus, where it achieves
state-of-the-art F1 of 74.6, further improved with contextual embeddings (BERT)
to 82.4, an overall improvement of close to 8 F1 points over previous
approaches trained on the same data. Additionally we compare it against
BiLSTM-CRFs, the dominant approach for flat NER structures, demonstrating that
its ability to predict nested structures does not impact performance in simpler
cases.
| 2,019 | Computation and Language |
Analyzing Utility of Visual Context in Multimodal Speech Recognition
Under Noisy Conditions | Multimodal learning allows us to leverage information from multiple sources
(visual, acoustic and text), similar to our experience of the real world.
However, it is currently unclear to what extent auxiliary modalities improve
performance over unimodal models, and under what circumstances the auxiliary
modalities are useful. We examine the utility of the auxiliary visual context
in Multimodal Automatic Speech Recognition in adversarial settings, where we
deprive the models from partial audio signal during inference time. Our
experiments show that while MMASR models show significant gains over
traditional speech-to-text architectures (up to 4.2% WER improvements), they do
not incorporate visual information when the audio signal has been corrupted.
This shows that current methods of integrating the visual modality do not
improve model robustness to noise, and we need better visually grounded
adaptation techniques.
| 2,020 | Computation and Language |
Topic Modeling the Reading and Writing Behavior of Information Foragers | The general problem of "information foraging" in an environment about which
agents have incomplete information has been explored in many fields, including
cognitive psychology, neuroscience, economics, finance, ecology, and computer
science. In all of these areas, the searcher aims to enhance future performance
by surveying enough of existing knowledge to orient themselves in the
information space. Individuals can be viewed as conducting a cognitive search
in which they must balance exploration of ideas that are novel to them against
exploitation of knowledge in domains in which they are already expert.
In this dissertation, I present several case studies that demonstrate how
reading and writing behaviors interact to construct personal knowledge bases.
These studies use LDA topic modeling to represent the information environment
of the texts each author read and wrote. Three studies revolve around Charles
Darwin. Darwin left detailed records of every book he read for 23 years, from
disembarking from the H.M.S. Beagle to just after publication of The Origin of
Species. Additionally, he left copies of his drafts before publication. I
characterize his reading behavior, then show how that reading behavior
interacted with the drafts and subsequent revisions of The Origin of Species,
and expand the dataset to include later readings and writings. Then, through a
study of Thomas Jefferson's correspondence, I expand the study to non-book
data. Finally, through an examination of neuroscience citation data, I move
from individual behavior to collective behavior in constructing an information
environment. Together, these studies reveal "the interplay between individual
and collective phenomena where innovation takes place" (Tria et al. 2014).
| 2,019 | Computation and Language |
The University of Sydney's Machine Translation System for WMT19 | This paper describes the University of Sydney's submission to the WMT 2019
shared news translation task. We participated in the
Finnish$\rightarrow$English direction and achieved the best BLEU score (33.0) among
all the participants. Our system is based on the self-attentional Transformer
networks, into which we integrated the most recent effective strategies from
academic research (e.g., BPE, back translation, multi-features data selection,
data augmentation, greedy model ensemble, reranking, ConMBR system combination,
and post-processing). Furthermore, we propose a novel augmentation method
$Cycle Translation$ and a data mixture strategy $Big$/$Small$ parallel
construction to entirely exploit the synthetic corpus. Extensive experiments
show that adding the above techniques can make continuous improvements of the
BLEU scores, and the best result outperforms the baseline (Transformer ensemble
model trained with the original parallel corpus) by approximately 5.3 BLEU
score, achieving the state-of-the-art performance.
| 2,019 | Computation and Language |
Few-Shot Representation Learning for Out-Of-Vocabulary Words | Existing approaches for learning word embeddings often assume there are
sufficient occurrences for each word in the corpus, such that the
representation of words can be accurately estimated from their contexts.
However, in real-world scenarios, out-of-vocabulary (a.k.a. OOV) words that do
not appear in training corpus emerge frequently. It is challenging to learn
accurate representations of these words with only a few observations. In this
paper, we formulate the learning of OOV embeddings as a few-shot regression
problem, and address it by training a representation function to predict the
oracle embedding vector (defined as embedding trained with abundant
observations) based on limited observations. Specifically, we propose a novel
hierarchical attention-based architecture to serve as the neural regression
function, with which the context information of a word is encoded and
aggregated from K observations. Furthermore, our approach can leverage
Model-Agnostic Meta-Learning (MAML) for adapting the learned model to the new
corpus fast and robustly. Experiments show that the proposed approach
significantly outperforms existing methods in constructing accurate embeddings
for OOV words, and improves downstream tasks where these embeddings are
utilized.
| 2,019 | Computation and Language |
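A simplified sketch of the few-shot regression view described above: an attention-based aggregator maps K context representations of an unseen word to a prediction of its oracle embedding and is trained with a regression loss. The single attention-pooling layer stands in for the paper's hierarchical architecture, and MAML adaptation is omitted:

```python
import torch
import torch.nn as nn

class OOVRegressor(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)     # attention score per context observation
        self.proj = nn.Linear(d, d)

    def forward(self, contexts):         # contexts: (K, d)
        attn = torch.softmax(self.score(contexts), dim=0)   # (K, 1)
        pooled = (attn * contexts).sum(dim=0)               # (d,)
        return self.proj(pooled)

d, K = 64, 5
model = OOVRegressor(d)
contexts = torch.randn(K, d)             # K encoded contexts of the OOV word
oracle = torch.randn(d)                  # embedding trained with abundant observations
loss = nn.functional.mse_loss(model(contexts), oracle)
loss.backward()
print(loss.item())
```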
Do Transformer Attention Heads Provide Transparency in Abstractive
Summarization? | Learning algorithms become more powerful, often at the cost of increased
complexity. In response, the demand for algorithms to be transparent is
growing. In NLP tasks, attention distributions learned by attention-based deep
learning models are used to gain insights into the models' behavior. To what
extent is this perspective valid for all NLP tasks? We investigate whether
distributions calculated by different attention heads in a transformer
architecture can be used to improve transparency in the task of abstractive
summarization. To this end, we present both a qualitative and quantitative
analysis to investigate the behavior of the attention heads. We show that some
attention heads indeed specialize towards syntactically and semantically
distinct input. We propose an approach to evaluate to what extent the
Transformer model relies on specifically learned attention distributions. We
also discuss what this implies for using attention distributions as a means of
transparency.
| 2,019 | Computation and Language |
Weak Supervision Enhanced Generative Network for Question Generation | Automatic question generation according to an answer within the given passage
is useful for many applications, such as question answering system, dialogue
system, etc. Current neural-based methods mostly take two steps: they first extract
several important sentences based on the candidate answer through manual rules
or supervised neural networks, and then use an encoder-decoder framework to
generate questions about these sentences. These approaches neglect the semantic
relations between the answer and the context of the whole passage which is
sometimes necessary for answering the question. To address this problem, we
propose the Weak Supervision Enhanced Generative Network (WeGen) which
automatically discovers relevant features of the passage given the answer span
in a weakly supervised manner to improve the quality of generated questions.
More specifically, we devise a discriminator, Relation Guider, to capture the
relations between the whole passage and the associated answer and then the
Multi-Interaction mechanism is deployed to transfer the knowledge dynamically
for our question generation system. Experiments show the effectiveness of our
method in both automatic evaluations and human evaluations.
| 2,019 | Computation and Language |
Using Database Rule for Weak Supervised Text-to-SQL Generation | We present a simple way to do the task of text-to-SQL problem with weak
supervision. We call it Rule-SQL. Given the question and the answer from the
database table without the SQL logic form, Rule-SQL first uses rules based on
table column names and the question string for SQL exploration and then
uses the explored SQL for supervised training. We design several rules for
reducing the exploration search space. For the deep model, we leverage BERT for
the representation layer and separate the model to SELECT, AGG and WHERE parts.
The experimental results on WikiSQL outperform the strong fully supervised
baseline and are comparable to the state-of-the-art weakly supervised methods.
| 2,019 | Computation and Language |
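A toy sketch of rule-guided SQL exploration from question-answer pairs: simple SELECT/WHERE candidates whose WHERE value occurs in the question are executed against the table, and those returning the gold answer are kept as pseudo-labelled SQL. The table, rules and query template below are deliberately tiny and are not the paper's rule set:

```python
table = {"name": ["berlin", "paris", "rome"],
         "country": ["germany", "france", "italy"],
         "population": [3.6, 2.1, 2.8]}

def execute(select_col, where_col, where_val):
    """Run SELECT select_col WHERE where_col = where_val against the toy table."""
    return [table[select_col][i] for i, v in enumerate(table[where_col])
            if str(v) == where_val]

def explore(question, answer):
    q = question.lower()
    candidates = []
    for sel in table:                                 # rule: try every column as SELECT target
        for wc in table:
            for val in map(str, table[wc]):
                if wc == sel or val not in q:         # rule: WHERE value must occur in question
                    continue
                if execute(sel, wc, val) == [answer]:
                    candidates.append(f"SELECT {sel} WHERE {wc} = '{val}'")
    return candidates

print(explore("What is the population of the city paris?", 2.1))
```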
Modernizing Historical Documents: a User Study | Accessibility to historical documents is mostly limited to scholars. This is
due to the language barrier inherent in human language and the linguistic
properties of these documents. Given a historical document, modernization aims
to generate a new version of it, written in the modern version of the
document's language. Its goal is to tackle the language barrier, decreasing the
comprehension difficulty and making historical documents accessible to a
broader audience. In this work, we proposed a new neural machine translation
approach that leverages modern documents to enrich its systems. We tested
this approach with both automatic and human evaluation, and conducted a user
study. Results showed that modernization is successfully reaching its goal,
although it still has room for improvement.
| 2,020 | Computation and Language |
Event extraction based on open information extraction and ontology | The work presented in this master's thesis consists of extracting a set of
events from texts written in natural language. For this purpose, we build on
the basic notions of information extraction as well as open information
extraction. First, we applied an open information extraction (OIE) system for
relation extraction, to highlight the importance of OIE in event extraction,
and we used an ontology for event modeling. We evaluated the results of our
approach with test metrics. The two-level event extraction approach showed
good performance but requires substantial expert intervention in the
construction of classifiers, which is time-consuming. In this context, we
proposed an approach that reduces expert intervention in relation extraction,
entity recognition and reasoning, which are automated and based on adaptation
and correspondence techniques. Finally, to prove the relevance of the
extracted results, we
conducted a set of experiments using different test metrics as well as a
comparative study.
| 2,019 | Computation and Language |
EQuANt (Enhanced Question Answer Network) | Machine Reading Comprehension (MRC) is an important topic in the domain of
automated question answering and in natural language processing more generally.
Since the release of the SQuAD 1.1 and SQuAD 2 datasets, progress in the field
has been particularly significant, with current state-of-the-art models now
exhibiting near-human performance at both answering well-posed questions and
detecting questions which are unanswerable given a corresponding context. In
this work, we present Enhanced Question Answer Network (EQuANt), an MRC model
which extends the successful QANet architecture of Yu et al. to cope with
unanswerable questions. By training and evaluating EQuANt on SQuAD 2, we show
that it is indeed possible to extend QANet to the unanswerable domain. We
achieve results which are close to 2 times better than our chosen baseline
obtained by evaluating a lightweight version of the original QANet architecture
on SQuAD 2. In addition, we report that the performance of EQuANt on SQuAD 1.1
after being trained on SQuAD 2 exceeds that of our lightweight QANet
architecture trained and evaluated on SQuAD 1.1, demonstrating the utility of
multi-task learning in the MRC context.
| 2,019 | Computation and Language |
Deep Conversational Recommender in Travel | When traveling to a foreign country, we are often in dire need of an
intelligent conversational agent to provide instant and informative responses
to our various queries. However, to build such a travel agent is non-trivial.
First of all, travel naturally involves several sub-tasks such as hotel
reservation, restaurant recommendation and taxi booking, which invokes the
need for global topic control. Secondly, the agent should consider various
constraints like price or distance given by the user to recommend an
appropriate venue. In this paper, we present a Deep Conversational Recommender
(DCR) and apply it to travel. It augments the sequence-to-sequence (seq2seq)
models with a neural latent topic component to better guide response generation
and make the training easier. To consider the various constraints for venue
recommendation, we leverage a graph convolutional network (GCN) based approach
to capture the relationships between different venues and the match between
venue and dialog context. For response generation, we combine the topic-based
component with the idea of pointer networks, which allows us to effectively
incorporate recommendation results. We perform extensive evaluation on a
multi-turn task-oriented dialog dataset in travel domain and the results show
that our method achieves superior performance as compared to a wide range of
baselines.
| 2,019 | Computation and Language |
Constructing Information-Lossless Biological Knowledge Graphs from
Conditional Statements | Conditions are essential in the statements of biological literature. Without
the conditions (e.g., environment, equipment) that were precisely specified,
the facts (e.g., observations) in the statements may no longer be valid. One
biological statement has one or multiple fact(s) and/or condition(s). Their
subject and object can be either a concept or a concept's attribute. Existing
information extraction methods do not consider the role of condition in the
biological statement nor the role of attribute in the subject/object. In this
work, we design a new tag schema and propose a deep sequence tagging framework
to structure conditional statements into fact and condition tuples from
biological text. Experiments demonstrate that our method yields an
information-lossless structure of the literature.
| 2,019 | Computation and Language |