Titles | Abstracts | Years | Categories |
---|---|---|---|
CLEARumor at SemEval-2019 Task 7: ConvoLving ELMo Against Rumors | This paper describes our submission to SemEval-2019 Task 7: RumourEval:
Determining Rumor Veracity and Support for Rumors. We participated in both
subtasks. The goal of subtask A is to classify the type of interaction between
a rumorous social media post and a reply post as support, query, deny, or
comment. The goal of subtask B is to predict the veracity of a given rumor. For
subtask A, we implement a CNN-based neural architecture using ELMo embeddings
of post text combined with auxiliary features and achieve an F1-score of 44.6%.
For subtask B, we employ an MLP neural network leveraging our estimates for
subtask A and achieve a F1-score of 30.1% (second place in the competition). We
provide results and analysis of our system performance and present ablation
experiments.
| 2,019 | Computation and Language |
Modeling Recurrence for Transformer | Recently, the Transformer model, which is based solely on attention mechanisms,
has advanced the state-of-the-art on various machine translation tasks.
However, recent studies reveal that the lack of recurrence hinders its further
improvement of translation capacity. In response to this problem, we propose to
directly model recurrence for Transformer with an additional recurrence
encoder. In addition to the standard recurrent neural network, we introduce a
novel attentive recurrent network to leverage the strengths of both attention
and recurrent networks. Experimental results on the widely-used WMT14
English-German and WMT17 Chinese-English translation tasks demonstrate the
effectiveness of the proposed approach. Our studies also reveal that the
proposed model benefits from a short-cut that bridges the source and target
sequences with a single recurrent layer, which outperforms its deep
counterpart.
| 2,019 | Computation and Language |
Information Aggregation for Multi-Head Attention with
Routing-by-Agreement | Multi-head attention is appealing for its ability to jointly extract
different types of information from multiple representation subspaces.
Concerning the information aggregation, a common practice is to use a
concatenation followed by a linear transformation, which may not fully exploit
the expressiveness of multi-head attention. In this work, we propose to improve
the information aggregation for multi-head attention with a more powerful
routing-by-agreement algorithm. Specifically, the routing algorithm iteratively
updates the proportion of how much a part (i.e. the distinct information
learned from a specific subspace) should be assigned to a whole (i.e. the final
output representation), based on the agreement between parts and wholes.
Experimental results on linguistic probing tasks and machine translation tasks
prove the superiority of the advanced information aggregation over the standard
linear transformation.
| 2,019 | Computation and Language |
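The abstract above only sketches routing-by-agreement verbally. Below is a minimal, hypothetical numpy illustration of the iterative part-to-whole routing idea for aggregating multi-head outputs: parts that agree with the emerging whole receive a larger share. The helper name, shapes, and iteration count are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def route_heads(head_outputs, n_iters=3):
    """Aggregate per-head outputs (the 'parts') into a single 'whole' by
    iterative routing-by-agreement.

    head_outputs: array of shape (n_heads, d_model)
    returns: aggregated vector of shape (d_model,)
    """
    n_heads, _ = head_outputs.shape
    logits = np.zeros(n_heads)                       # routing logits, uniform at start
    for _ in range(n_iters):
        c = softmax(logits)                          # assignment proportions
        whole = (c[:, None] * head_outputs).sum(0)   # candidate whole
        agreement = head_outputs @ whole             # part-whole agreement
        logits = logits + agreement                  # agreeing parts get a larger share
    return whole

# toy usage: 8 heads, each producing a 64-dim output for one position
parts = np.random.randn(8, 64)
print(route_heads(parts).shape)  # (64,)
```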
Convolutional Self-Attention Networks | Self-attention networks (SANs) have drawn increasing interest due to their
high parallelization in computation and flexibility in modeling dependencies.
SANs can be further enhanced with multi-head attention by allowing the model to
attend to information from different representation subspaces. In this work, we
propose novel convolutional self-attention networks, which offer SANs the
abilities to 1) strengthen dependencies among neighboring elements, and 2)
model the interaction between features extracted by multiple attention heads.
Experimental results of machine translation on different language pairs and
model settings show that our approach outperforms both the strong Transformer
baseline and other existing models on enhancing the locality of SANs. Compared
with prior studies, the proposed model is parameter-free in that it introduces
no additional parameters.
| 2,019 | Computation and Language |
PoMo: Generating Entity-Specific Post-Modifiers in Context | We introduce entity post-modifier generation as an instance of a
collaborative writing task. Given a sentence about a target entity, the task is
to automatically generate a post-modifier phrase that provides contextually
relevant information about the entity. For example, for the sentence, "Barack
Obama, _______, supported the #MeToo movement.", the phrase "a father of two
girls" is a contextually relevant post-modifier. To this end, we build PoMo, a
post-modifier dataset created automatically from news articles reflecting a
journalistic need for incorporating entity information that is relevant to a
particular news event. PoMo consists of more than 231K sentences with
post-modifiers and associated facts extracted from Wikidata for around 57K
unique entities. We use crowdsourcing to show that modeling contextual
relevance is necessary for accurate post-modifier generation. We adapt a number
of existing generation approaches as baselines for this dataset. Our results
show there is large room for improvement in terms of both identifying relevant
facts to include (knowing which claims are relevant gives a >20% improvement in
BLEU score), and generating appropriate post-modifier text for the context
(providing relevant claims is not sufficient for accurate generation). We
conduct an error analysis that suggests promising directions for future
research.
| 2,019 | Computation and Language |
Outlier Detection for Improved Data Quality and Diversity in Dialog
Systems | In a corpus of data, outliers are either errors, i.e., mistakes in the data
that are counterproductive, or unique, informative samples that improve model
robustness. Identifying outliers can lead to better datasets by (1) removing
noise in datasets and (2) guiding collection of additional data to fill gaps.
However, the problem of detecting both outlier types has received relatively
little attention in NLP, particularly for dialog systems. We introduce a simple
and effective technique for detecting both erroneous and unique samples in a
corpus of short texts using neural sentence embeddings combined with
distance-based outlier detection. We also present a novel data collection
pipeline built atop our detection technique to automatically and iteratively
mine unique data samples while discarding erroneous samples. Experiments show
that our outlier detection technique is effective at finding errors while our
data collection pipeline yields highly diverse corpora that in turn produce
more robust intent classification and slot-filling models.
| 2,019 | Computation and Language |
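A rough illustration of the distance-based detection step described above: score every sentence embedding by its mean distance to its k nearest neighbours, then inspect the highest-scoring items as candidate errors or unique samples. The use of scikit-learn, the value of k, and the `outlier_scores` name are assumptions for this sketch, not the paper's pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def outlier_scores(embeddings, k=5):
    """Mean distance from each sentence embedding to its k nearest neighbours;
    large scores point to errors or unique (informative) samples."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)   # column 0 is the point itself (distance 0)
    return dists[:, 1:].mean(axis=1)

# toy usage: 100 "sentences" embedded in a 300-dim space
emb = np.random.randn(100, 300)
scores = outlier_scores(emb, k=5)
candidates = np.argsort(scores)[::-1][:10]  # 10 most outlying sentences to review
```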
Exploring Fine-Tuned Embeddings that Model Intensifiers for Emotion
Analysis | Adjective phrases like "a little bit surprised", "completely shocked", or
"not stunned at all" are not handled properly by currently published
state-of-the-art emotion classification and intensity prediction systems, which
predominantly use non-contextualized word embeddings as input. Based on this
finding, we analyze differences between embeddings used by these systems in
regard to their capability of handling such cases. Furthermore, we argue that
intensifiers in the context of emotion words need special treatment, as is
established for sentiment polarity classification, but not for more
fine-grained emotion prediction. To resolve this issue, we analyze different
aspects of a post-processing pipeline which enriches the word representations
of such phrases. This includes expansion of semantic spaces at the phrase level
and sub-word level followed by retrofitting to emotion lexica. We evaluate the
impact of these steps with A La Carte and Bag-of-Substrings extensions based on
pretrained GloVe, Word2vec, and fastText embeddings against a crowd-sourced
corpus of intensity annotations for tweets containing our focus phrases. We
show that the fastText-based models do not gain from handling these specific
phrases under inspection. For Word2vec embeddings, we show that our
post-processing pipeline improves the results by up to 8% on a novel dataset
densely populated with intensifiers.
| 2,019 | Computation and Language |
NELEC at SemEval-2019 Task 3: Think Twice Before Going Deep | Existing Machine Learning techniques yield close to human performance on
text-based classification tasks. However, the presence of multi-modal noise in
chat data such as emoticons, slang, spelling mistakes, code-mixed data, etc.
makes existing deep-learning solutions perform poorly. The inability of
deep-learning systems to robustly capture these covariates puts a cap on their
performance. We propose NELEC: Neural and Lexical Combiner, a system which
elegantly combines textual and deep-learning based methods for sentiment
classification. We evaluate our system on SemEval-2019 Task 3, 'Contextual
Emotion Detection in Text'. Our system performs
significantly better than the baseline, as well as our deep-learning model
benchmarks. It achieved a micro-averaged F1 score of 0.7765, ranking 3rd on the
test-set leader-board. Our code is available at
https://github.com/iamgroot42/nelec
| 2,019 | Computation and Language |
Distinguishing Clinical Sentiment: The Importance of Domain Adaptation
in Psychiatric Patient Health Records | Recently, natural language processing (NLP) tools have been developed to
identify and extract salient risk indicators in electronic health records
(EHRs). Sentiment analysis, although widely used in non-medical areas for
improving decision making, has been studied minimally in the clinical setting.
In this study, we undertook, to our knowledge, the first domain adaptation of
sentiment analysis to psychiatric EHRs by defining psychiatric clinical
sentiment, performing an annotation project, and evaluating multiple
sentence-level sentiment machine learning (ML) models. Results indicate that
off-the-shelf sentiment analysis tools fail in identifying clinically positive
or negative polarity, and that the definition of clinical sentiment that we
provide is learnable with relatively small amounts of training data. This
project is an initial step towards further refining sentiment analysis methods
for clinical use. Our long-term objective is to incorporate the results of this
project as part of a machine learning model that predicts inpatient readmission
risk. We hope that this work will initiate a discussion concerning domain
adaptation of sentiment analysis to the clinical setting.
| 2,019 | Computation and Language |
An Unsupervised Autoregressive Model for Speech Representation Learning | This paper proposes a novel unsupervised autoregressive neural model for
learning generic speech representations. In contrast to other speech
representation learning methods that aim to remove noise or speaker
variabilities, ours is designed to preserve information for a wide range of
downstream tasks. In addition, the proposed model does not require any phonetic
or word boundary labels, allowing the model to benefit from large quantities of
unlabeled data. Speech representations learned by our model significantly
improve performance on both phone classification and speaker verification over
the surface features and other supervised and unsupervised approaches. Further
analysis shows that different levels of speech information are captured by our
model at different layers. In particular, the lower layers tend to be more
discriminative for speakers, while the upper layers provide more phonetic
content.
| 2,019 | Computation and Language |
An Analysis of Attention over Clinical Notes for Predictive Tasks | The shift to electronic medical records (EMRs) has engendered research into
machine learning and natural language technologies to analyze patient records
and to predict clinical outcomes of interest from them. Two observations
motivate our aims here. First, unstructured notes contained within EMRs often
contain key information, and hence should be exploited by models. Second, while
strong predictive performance is important, interpretability of models is
perhaps equally so for applications in this domain. Together, these points
suggest that neural models for EMR may benefit from incorporation of attention
over notes, which one may hope will both yield performance gains and afford
transparency in predictions. In this work, we perform experiments to explore
this question using two EMR corpora and four different predictive tasks, and find
that: (i) inclusion of attention mechanisms is critical for neural encoder modules
that operate over note fields in order to yield competitive performance, but
(ii) unfortunately, while these boost predictive performance, it is decidedly
less clear whether they provide meaningful support for predictions.
| 2,019 | Computation and Language |
Cross-Lingual Transfer of Semantic Roles: From Raw Text to Semantic
Roles | We describe a transfer method based on annotation projection to develop a
dependency-based semantic role labeling system for languages for which no
supervised linguistic information other than parallel data is available. Unlike
previous work that presumes the availability of supervised features such as
lemmas, part-of-speech tags, and dependency parse trees, we only make use of
word and character features. Our deep model considers using character-based
representations as well as unsupervised stem embeddings to alleviate the need
for supervised features. In our experiments, our model outperforms a
state-of-the-art method that uses supervised lexico-syntactic features on 6 out
of 7 languages in the
Universal Proposition Bank.
| 2,019 | Computation and Language |
Extracting Factual Min/Max Age Information from Clinical Trial Studies | Population age information is an essential characteristic of clinical trials.
In this paper, we focus on extracting minimum and maximum (min/max) age values
for the study samples from clinical research articles. Specifically, we
investigate the use of a neural network model for question answering to address
this information extraction task. The min/max age QA model is trained on the
massive structured clinical study records from ClinicalTrials.gov. For each
article, based on multiple min and max age values extracted from the QA model,
we predict both actual min/max age values for the study samples and filter out
non-factual age expressions. Our system improves the results over (i) a passage
retrieval based IE system and (ii) a CRF-based system by a large margin when
evaluated on an annotated dataset consisting of 50 research papers on smoking
cessation.
| 2,019 | Computation and Language |
Generate, Filter, and Rank: Grammaticality Classification for
Production-Ready NLG Systems | Neural approaches to Natural Language Generation (NLG) have been promising
for goal-oriented dialogue. One of the challenges of productionizing these
approaches, however, is the ability to control response quality, and ensure
that generated responses are acceptable. We propose the use of a generate,
filter, and rank framework, in which candidate responses are first filtered to
eliminate unacceptable responses, and then ranked to select the best response.
While acceptability includes grammatical correctness and semantic correctness,
we focus only on grammaticality classification in this paper, and show that
existing datasets for grammatical error correction don't correctly capture the
distribution of errors that data-driven generators are likely to make. We
release a grammatical classification and semantic correctness classification
dataset for the weather domain that consists of responses generated by 3
data-driven NLG systems. We then explore two supervised learning approaches
(CNNs and GBDTs) for classifying grammaticality. Our experiments show that
grammaticality classification is very sensitive to the distribution of errors
in the data, and that these distributions vary significantly with both the
source of the response as well as the domain. We show that it's possible to
achieve high precision with reasonable recall on our dataset.
| 2,022 | Computation and Language |
A General Framework for Information Extraction using Dynamic Span Graphs | We introduce a general framework for several information extraction tasks
that share span representations using dynamically constructed span graphs. The
graphs are constructed by selecting the most confident entity spans and linking
these nodes with confidence-weighted relation types and coreferences. The
dynamic span graph allows coreference and relation type confidences to
propagate through the graph to iteratively refine the span representations.
This is unlike previous multi-task frameworks for information extraction in
which the only interaction between tasks is in the shared first-layer LSTM. Our
framework significantly outperforms the state-of-the-art on multiple
information extraction tasks across multiple datasets reflecting different
domains. We further observe that the span enumeration approach is good at
detecting nested span entities, with significant F1 score improvement on the
ACE dataset.
| 2,019 | Computation and Language |
A Multi-task Learning Approach for Named Entity Recognition using Local
Detection | Named entity recognition (NER) systems that perform well require task-related
and manually annotated datasets. However, such datasets are expensive to
develop and are thus limited in size. As there already exists a large number of NER
datasets that share a certain degree of relationship but differ in content, it
is important to explore the question of whether such datasets can be combined
as a simple method for improving NER performance. To investigate this, we
developed a novel locally detecting multitask model using FFNNs. The model
relies on encoding variable-length sequences of words into theoretically
lossless and unique fixed-size representations. We applied this method to
several well-known NER tasks and compared the results of our model to baseline
models as well as other published results. As a result, we observed competitive
performance in nearly all of the tasks.
| 2,019 | Computation and Language |
Effective Context and Fragment Feature Usage for Named Entity
Recognition | In this paper, we explore a new approach to named entity recognition (NER)
with the goal of learning from context and fragment features more effectively,
contributing to the improvement of overall recognition performance. We use the
recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode
each sentence fragment and its left-right contexts into a fixed-size
representation. Next, we organize the context and fragment features into
groups, and feed each feature group to dedicated fully-connected layers.
Finally, we merge each group's final dedicated layers and add a shared layer
leading to a single output. The outcomes of our experiments show that, given
only tokenized text and trained word embeddings, our system outperforms our
baseline models, and is competitive with the state of the art on various
well-known NER tasks.
| 2,019 | Computation and Language |
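The fixed-size ordinally forgetting encoding (FOFE) used above has a one-line recursion, z_t = alpha * z_{t-1} + x_t, whose final state is the fixed-size code. A minimal numpy sketch follows; the function name and toy input are illustrative, and alpha is a hyperparameter (for one-hot inputs, alpha <= 0.5 makes the code provably unique).

```python
import numpy as np

def fofe_encode(vectors, alpha=0.7):
    """Fixed-size Ordinally Forgetting Encoding: z_t = alpha * z_{t-1} + x_t.

    vectors: array of shape (seq_len, dim), e.g. word embeddings of a fragment
    returns: fixed-size code of shape (dim,), independent of seq_len
    """
    z = np.zeros(vectors.shape[1])
    for x in vectors:                 # left-to-right; older tokens decay by alpha
        z = alpha * z + x
    return z

# toy usage: encode a 6-token fragment of 50-dim embeddings
code = fofe_encode(np.random.randn(6, 50), alpha=0.7)  # shape (50,)
```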
Gender Bias in Contextualized Word Embeddings | In this paper, we quantify, analyze and mitigate gender bias exhibited in
ELMo's contextualized word vectors. First, we conduct several intrinsic
analyses and find that (1) training data for ELMo contains significantly more
male than female entities, (2) the trained ELMo embeddings systematically
encode gender information and (3) ELMo unequally encodes gender information
about male and female entities. Then, we show that a state-of-the-art
coreference system that depends on ELMo inherits its bias and demonstrates
significant bias on the WinoBias probing corpus. Finally, we explore two
methods to mitigate such gender bias and show that the bias demonstrated on
WinoBias can be eliminated.
| 2,019 | Computation and Language |
Publicly Available Clinical BERT Embeddings | Contextual word embedding models such as ELMo (Peters et al., 2018) and BERT
(Devlin et al., 2018) have dramatically improved performance for many natural
language processing (NLP) tasks in recent months. However, these models have
been minimally explored on specialty corpora, such as clinical text; moreover,
in the clinical domain, no publicly-available pre-trained BERT models yet
exist. In this work, we address this need by exploring and releasing BERT
models for clinical text: one for generic clinical text and another for
discharge summaries specifically. We demonstrate that using a domain-specific
model yields performance improvements on three common clinical NLP tasks as
compared to nonspecific embeddings. These domain-specific models are not as
performant on two clinical de-identification tasks, and we argue that this is a
natural consequence of the differences between de-identified source text and
synthetically non de-identified task text.
| 2,019 | Computation and Language |
ThisIsCompetition at SemEval-2019 Task 9: BERT is unstable for
out-of-domain samples | This paper describes our system, Joint Encoders for Stable Suggestion
Inference (JESSI), for the SemEval 2019 Task 9: Suggestion Mining from Online
Reviews and Forums. JESSI is a combination of two sentence encoders: (a) one
using multiple pre-trained word embeddings learned from log-bilinear regression
(GloVe) and translation (CoVe) models, and (b) one on top of word encodings
from a pre-trained deep bidirectional transformer (BERT). We include a domain
adversarial training module when training for out-of-domain samples. Our
experiments show that while BERT performs exceptionally well for in-domain
samples, several runs of the model show that it is unstable for out-of-domain
samples. The problem is mitigated tremendously by (1) combining BERT with a
non-BERT encoder, and (2) using an RNN-based classifier on top of BERT. Our
final models obtained second place with 77.78% F-Score on Subtask A (i.e.
in-domain) and achieved an F-Score of 79.59% on Subtask B (i.e.
out-of-domain), even without using any additional external data.
| 2,019 | Computation and Language |
The Steep Road to Happily Ever After: An Analysis of Current Visual
Storytelling Models | Visual storytelling is an intriguing and complex task that only recently
entered the research arena. In this work, we survey relevant work to date, and
conduct a thorough error analysis of three very recent approaches to visual
storytelling. We categorize and provide examples of common types of errors, and
identify key shortcomings in current work. Finally, we make recommendations for
addressing these limitations in the future.
| 2,019 | Computation and Language |
Evaluating Coherence in Dialogue Systems using Entailment | Evaluating open-domain dialogue systems is difficult due to the diversity of
possible correct answers. Automatic metrics such as BLEU correlate weakly with
human annotations, resulting in a significant bias across different models and
datasets. Some researchers resort to human judgment experimentation for
assessing response quality, which is expensive, time consuming, and not
scalable. Moreover, judges tend to evaluate a small number of dialogues,
meaning that minor differences in evaluation configuration may lead to
dissimilar results. In this paper, we present interpretable metrics for
evaluating topic coherence by making use of distributed sentence
representations. Furthermore, we introduce calculable approximations of human
judgment based on conversational coherence by adopting state-of-the-art
entailment techniques. Results show that our metrics can be used as a surrogate
for human judgment, making it easy to evaluate dialogue systems on large-scale
datasets and allowing an unbiased estimate for the quality of the responses.
| 2,020 | Computation and Language |
Step-by-Step: Separating Planning from Realization in Neural
Data-to-Text Generation | Data-to-text generation can be conceptually divided into two parts: ordering
and structuring the information (planning), and generating fluent language
describing the information (realization). Modern neural generation systems
conflate these two steps into a single end-to-end differentiable system. We
propose to split the generation process into a symbolic text-planning stage
that is faithful to the input, followed by a neural generation stage that
focuses only on realization. For training a plan-to-text generator, we present
a method for matching reference texts to their corresponding text plans. At
inference time, we describe a method for selecting high-quality text plans for
new inputs. We implement and evaluate our approach on the WebNLG benchmark. Our
results demonstrate that decoupling text planning from neural realization
indeed improves the system's reliability and adequacy while maintaining fluent
output. We observe improvements both in BLEU scores and in manual evaluations.
Another benefit of our approach is the ability to output diverse realizations
of the same input, paving the way to explicit control over the generated text
structure.
| 2,019 | Computation and Language |
Parallelizable Stack Long Short-Term Memory | Stack Long Short-Term Memory (StackLSTM) is useful for various applications
such as parsing and string-to-tree neural machine translation, but it is also
known to be notoriously difficult to parallelize for GPU training because its
computations depend on discrete operations. In this paper,
we tackle this problem by utilizing state access patterns of StackLSTM to
homogenize computations with regard to different discrete operations. Our
parsing experiments show that the method scales up almost linearly with
increasing batch size, and our parallelized PyTorch implementation trains
significantly faster compared to the Dynet C++ implementation.
| 2,019 | Computation and Language |
Speeding Up Natural Language Parsing by Reusing Partial Results | This paper proposes a novel technique that applies case-based reasoning in
order to generate templates for reusable parse tree fragments, based on PoS
tags of bigrams and trigrams that demonstrate low variability in their
syntactic analyses from prior data. The aim of this approach is to improve the
speed of dependency parsers by avoiding redundant calculations. This can be
resolved by applying the predefined templates that capture results of previous
syntactic analyses and directly assigning the stored structure to a new n-gram
that matches one of the templates, instead of parsing a similar text fragment
again. The study shows that using a heuristic approach to select and reuse the
partial results increases parsing speed by reducing the input length to be
processed by a parser. The increase in parsing speed comes at some expense of
accuracy. Experiments on English show promising results: the input dimension
can be reduced by more than 20% at the cost of less than 3 points of Unlabeled
Attachment Score.
| 2,019 | Computation and Language |
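As a rough sketch of the template idea above (collect PoS n-grams whose internal dependency analysis is nearly invariant in prior data, then reuse the stored analysis instead of re-parsing), the snippet below builds such a template table. The thresholds, the encoding of an "analysis", and the function name are assumptions for illustration only.

```python
from collections import Counter, defaultdict

def build_templates(ngram_analyses, min_count=50, min_purity=0.95):
    """Keep PoS bigrams/trigrams whose dependency analysis barely varies.

    ngram_analyses: iterable of (pos_tuple, analysis) pairs, e.g.
        (('DT', 'NN'), ((1, 0, 'det'),)) meaning token 1 heads token 0 as 'det'.
    """
    counts = defaultdict(Counter)
    for pos, analysis in ngram_analyses:
        counts[pos][analysis] += 1
    templates = {}
    for pos, c in counts.items():
        total = sum(c.values())
        analysis, n = c.most_common(1)[0]
        if total >= min_count and n / total >= min_purity:
            templates[pos] = analysis   # low-variability n-gram: safe to reuse
    return templates

# at parse time: when a sentence slice matches a template, attach the stored
# arcs directly and hand the parser a correspondingly shorter input.
```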
Token-Level Ensemble Distillation for Grapheme-to-Phoneme Conversion | Grapheme-to-phoneme (G2P) conversion is an important task in automatic speech
recognition and text-to-speech systems. Recently, G2P conversion has been viewed
as a sequence-to-sequence task and modeled with RNN- or CNN-based encoder-decoder
frameworks. However, previous works do not consider the practical issues when
deploying a G2P model in a production system, such as how to leverage
additional unlabeled data to boost accuracy and how to reduce model size
for online deployment. In this work, we propose token-level ensemble
distillation for G2P conversion, which can (1) boost the accuracy by distilling
the knowledge from additional unlabeled data, and (2) reduce the model size but
maintain the high accuracy, both of which are very practical and helpful in the
online production system. We use token-level knowledge distillation, which
results in better accuracy than the sequence-level counterpart. What is more,
we adopt the Transformer instead of RNN or CNN based models to further boost
the accuracy of G2P conversion. Experiments on the publicly available CMUDict
dataset and an internal English dataset demonstrate the effectiveness of our
proposed method. In particular, our method achieves 19.88% WER on the CMUDict
dataset, outperforming previous works by more than 4.22% WER and setting
new state-of-the-art results.
| 2,019 | Computation and Language |
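Token-level distillation, as opposed to sequence-level, trains the student to match the teacher's distribution at every output position. The PyTorch sketch below shows one plausible form of that loss; the tensor shapes and the omission of the usual ground-truth cross-entropy term are simplifying assumptions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def token_level_kd_loss(student_logits, teacher_probs, pad_mask):
    """Per-token cross-entropy between the student and the (ensemble-averaged)
    teacher distribution.

    student_logits: (batch, seq_len, vocab) raw student scores
    teacher_probs:  (batch, seq_len, vocab) teacher probabilities
    pad_mask:       (batch, seq_len) 1 for real tokens, 0 for padding
    """
    log_p = F.log_softmax(student_logits, dim=-1)
    ce = -(teacher_probs * log_p).sum(dim=-1)       # cross-entropy at each position
    return (ce * pad_mask).sum() / pad_mask.sum()

# toy shapes: 2 words, output length 7, 45 phoneme symbols
loss = token_level_kd_loss(
    torch.randn(2, 7, 45),
    torch.softmax(torch.randn(2, 7, 45), dim=-1),
    torch.ones(2, 7),
)
```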
UM-IU@LING at SemEval-2019 Task 6: Identifying Offensive Tweets Using
BERT and SVMs | This paper describes the UM-IU@LING's system for the SemEval 2019 Task 6:
OffensEval. We take a mixed approach to identify and categorize hate speech in
social media. In subtask A, we fine-tuned a BERT based classifier to detect
abusive content in tweets, achieving a macro F1 score of 0.8136 on the test
data, thus reaching the 3rd rank out of 103 submissions. In subtasks B and C,
we used a linear SVM with selected character n-gram features. For subtask C,
our system could identify the target of abuse with a macro F1 score of 0.5243,
ranking it 27th out of 65 submissions.
| 2,019 | Computation and Language |
An Integrated Approach for Keyphrase Generation via Exploring the Power
of Retrieval and Extraction | In this paper, we present a novel integrated approach for keyphrase
generation (KG). Unlike previous works which are purely extractive or
generative, we first propose a new multi-task learning framework that jointly
learns an extractive model and a generative model. Besides extracting
keyphrases, the output of the extractive model is also employed to rectify the
copy probability distribution of the generative model, such that the generative
model can better identify important contents from the given document. Moreover,
we retrieve documents similar to the given document from the training data and
use their associated keyphrases as external knowledge for the generative model
to produce more accurate keyphrases. For further exploiting the power of
extraction and retrieval, we propose a neural-based merging module to combine
and re-rank the predicted keyphrases from the enhanced generative model, the
extractive model, and the retrieved keyphrases. Experiments on the five KG
benchmarks demonstrate that our integrated approach outperforms the
state-of-the-art methods.
| 2,019 | Computation and Language |
The Mathematics of Text Structure | In previous work we gave a mathematical foundation, referred to as DisCoCat,
for how words interact in a sentence in order to produce the meaning of that
sentence. To do so, we exploited the perfect structural match of grammar and
categories of meaning spaces. Here, we give a mathematical foundation, referred
to as DisCoCirc, for how sentences interact in texts in order to produce the
meaning of that text. First we revisit DisCoCat. While in DisCoCat all meanings
are fixed as states (i.e. have no input), in DisCoCirc word meanings correspond
to a type, or system, and the states of this system can evolve. Sentences are
gates within a circuit which update the variable meanings of those words. Like
in DisCoCat, word meanings can live in a variety of spaces e.g. propositional,
vectorial, or cognitive. The compositional structure is given by string diagrams
representing information flows, and an entire text yields a single string
diagram in which word meanings lift to the meaning of an entire text. While the
developments in this paper are independent of a physical embodiment (cf.
classical vs. quantum computing), both the compositional formalism and
suggested meaning model are highly quantum-inspired, and implementation on a
quantum computer would come with a range of benefits. We also praise Jim Lambek
for his role in mathematical linguistics in general, and the development of the
DisCo program more specifically.
| 2,020 | Computation and Language |
Tracking Discrete and Continuous Entity State for Process Understanding | Procedural text, which describes entities and their interactions as they
undergo some process, depicts entities in a uniquely nuanced way. First, each
entity may have some observable discrete attributes, such as its state or
location; modeling these involves imposing global structure and enforcing
consistency. Second, an entity may have properties which are not made explicit
but can be effectively induced and tracked by neural networks. In this paper,
we propose a structured neural architecture that reflects this dual nature of
entity evolution. The model tracks each entity recurrently, updating its hidden
continuous representation at each step to contain relevant state information.
The global discrete state structure is explicitly modeled with a neural CRF
over the changing hidden representation of the entity. This CRF can explicitly
capture constraints on entity states over time, enforcing that, for example, an
entity cannot move to a location after it is destroyed. We evaluate the
performance of our proposed model on QA tasks over process paragraphs in the
ProPara dataset and find that our model achieves state-of-the-art results.
| 2,019 | Computation and Language |
Spoken Language Intent Detection using Confusion2Vec | Decoding speaker's intent is a crucial part of spoken language understanding
(SLU). The presence of noise or errors in text transcriptions in real-life
scenarios makes the task more challenging. In this paper, we address spoken
language intent detection under noisy conditions imposed by automatic speech
recognition (ASR) systems. We propose to employ confusion2vec word feature
representation to compensate for the errors made by ASR and to increase the
robustness of the SLU system. Confusion2vec, motivated by human speech
production and perception, models acoustic relationships between words in
addition to the semantic and syntactic relations of words in human language. We
hypothesize that ASR often makes errors relating to acoustically similar words,
and confusion2vec, with its inherent model of acoustic relationships between
words, is able to compensate for these errors. Through experiments on the ATIS
benchmark dataset, we demonstrate the robustness of the proposed model, which
achieves state-of-the-art results under noisy ASR conditions. Our system reduces
classification error rate (CER) by 20.84% and improves robustness by 37.48%
(lower CER degradation) relative to the previous state-of-the-art going from
clean to noisy transcripts. Improvements are also demonstrated when training
the intent detection models on noisy transcripts.
| 2,019 | Computation and Language |
Joint Learning of Pre-Trained and Random Units for Domain Adaptation in
Part-of-Speech Tagging | Fine-tuning neural networks is widely used to transfer valuable knowledge
from high-resource to low-resource domains. In a standard fine-tuning scheme,
source and target problems are trained using the same architecture. Although
capable of adapting to new domains, pre-trained units struggle with learning
uncommon target-specific patterns. In this paper, we propose to augment the
target-network with normalised, weighted and randomly initialised units that
beget a better adaptation while maintaining the valuable source knowledge. Our
experiments on POS tagging of social media texts (Tweets domain) demonstrate
that our method achieves state-of-the-art performance on 3 commonly used
datasets.
| 2,019 | Computation and Language |
SEQ^3: Differentiable Sequence-to-Sequence-to-Sequence Autoencoder for
Unsupervised Abstractive Sentence Compression | Neural sequence-to-sequence models are currently the dominant approach in
several natural language processing tasks, but require large parallel corpora.
We present a sequence-to-sequence-to-sequence autoencoder (SEQ^3), consisting
of two chained encoder-decoder pairs, with words used as a sequence of discrete
latent variables. We apply the proposed model to unsupervised abstractive
sentence compression, where the first and last sequences are the input and
reconstructed sentences, respectively, while the middle sequence is the
compressed sentence. Constraining the length of the latent word sequences
forces the model to distill important information from the input. A pretrained
language model, acting as a prior over the latent sequences, encourages the
compressed sentences to be human-readable. Continuous relaxations enable us to
sample from categorical distributions, allowing gradient-based optimization,
unlike alternatives that rely on reinforcement learning. The proposed model
does not require parallel text-summary pairs, achieving promising results in
unsupervised sentence compression on benchmark datasets.
| 2,019 | Computation and Language |
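The "continuous relaxations" mentioned above are commonly realised with the Gumbel-Softmax trick, which replaces a hard, non-differentiable word sample with a soft one that gradients can flow through. The PyTorch sketch below shows only that mechanism; the vocabulary size, temperature, and soft embedding mixture are illustrative assumptions, not the SEQ^3 architecture.

```python
import torch
import torch.nn.functional as F

vocab_size, emb_dim = 10000, 256
embedding = torch.nn.Embedding(vocab_size, emb_dim)

logits = torch.randn(1, vocab_size, requires_grad=True)  # decoder scores at one step
y = F.gumbel_softmax(logits, tau=0.5, hard=False)         # soft "word", sums to 1
soft_word_emb = y @ embedding.weight                       # differentiable embedding mix

soft_word_emb.sum().backward()   # gradients reach the logits despite the "sampling"
print(logits.grad is not None)   # True
```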
Unsupervised Dialog Structure Learning | Learning a shared dialog structure from a set of task-oriented dialogs is an
important challenge in computational linguistics. The learned dialog structure
can shed light on how to analyze human dialogs, and more importantly contribute
to the design and evaluation of dialog systems. We propose to extract dialog
structures using a modified VRNN model with discrete latent vectors. Different
from existing HMM-based models, our model is based on a variational autoencoder
(VAE). Such a model is able to capture more dynamics in dialogs beyond the
surface forms of the language. We find that qualitatively, our method extracts
meaningful dialog structure, and quantitatively, outperforms previous models on
the ability to predict unseen data. We further evaluate the model's
effectiveness in a downstream task, the dialog system building task.
Experiments show that, by integrating the learned dialog structure into the
reward function design, the model converges faster and to a better outcome in a
reinforcement learning setting.
| 2,019 | Computation and Language |
Unsupervised Recurrent Neural Network Grammars | Recurrent neural network grammars (RNNG) are generative models of language
which jointly model syntax and surface structure by incrementally generating a
syntax tree and sentence in a top-down, left-to-right order. Supervised RNNGs
achieve strong language modeling and parsing performance, but require an
annotated corpus of parse trees. In this work, we experiment with unsupervised
learning of RNNGs. Since directly marginalizing over the space of latent trees
is intractable, we instead apply amortized variational inference. To maximize
the evidence lower bound, we develop an inference network parameterized as a
neural CRF constituency parser. On language modeling, unsupervised RNNGs
perform as well as their supervised counterparts on benchmarks in English and
Chinese. On constituency grammar induction, they are competitive with recent
neural language models that induce tree structures from words through attention
mechanisms.
| 2,019 | Computation and Language |
Enriching Rare Word Representations in Neural Language Models by
Embedding Matrix Augmentation | Neural language models (NLM) achieve strong generalization capability by
learning dense representations of words and using them to estimate the
probability distribution function. However, learning the representation of rare
words is a challenging problem that causes the NLM to produce unreliable
probability estimates. To address this problem, we propose a method to enrich
representations of rare words in pre-trained NLM and consequently improve its
probability estimation performance. The proposed method augments the word
embedding matrices of pre-trained NLM while keeping other parameters unchanged.
Specifically, our method updates the embedding vectors of rare words using
embedding vectors of other semantically and syntactically similar words. To
evaluate the proposed method, we enrich the rare street names in the
pre-trained NLM and use it to rescore 100-best hypotheses output from the
Singapore English speech recognition system. The enriched NLM reduces the word
error rate by 6% relative and improves the recognition accuracy of the rare
words by 16% absolute as compared to the baseline NLM.
| 2,021 | Computation and Language |
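A minimal numpy sketch of the embedding-matrix augmentation described above: a rare word's vector is replaced by a mix of its original vector and an average of vectors of similar words, while every other parameter of the pre-trained NLM stays untouched. The mixing coefficient, weighting scheme, and helper name are illustrative assumptions.

```python
import numpy as np

def enrich_rare_embedding(E, rare_idx, similar_idx, weights=None, keep=0.5):
    """Update one rare word's embedding in place.

    E:           embedding matrix, shape (vocab, dim)
    rare_idx:    row of the rare word
    similar_idx: rows of semantically/syntactically similar words
    weights:     optional per-neighbour weights (e.g. similarity scores)
    keep:        fraction of the original rare-word vector to keep
    """
    neighbours = E[similar_idx]
    if weights is None:
        avg = neighbours.mean(axis=0)
    else:
        w = np.asarray(weights, dtype=float)
        avg = (w[:, None] * neighbours).sum(axis=0) / w.sum()
    E[rare_idx] = keep * E[rare_idx] + (1.0 - keep) * avg
    return E[rare_idx]

# toy usage: enrich word 42 with three similar, better-trained words
E = np.random.randn(1000, 128)
enrich_rare_embedding(E, rare_idx=42, similar_idx=[7, 19, 305], weights=[0.5, 0.3, 0.2])
```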
Constrained Output Embeddings for End-to-End Code-Switching Speech
Recognition with Only Monolingual Data | The lack of code-switch training data is one of the major concerns in the
development of end-to-end code-switching automatic speech recognition (ASR)
models. In this work, we propose a method to train an improved end-to-end
code-switching ASR using only monolingual data. Our method encourages the
distributions of output token embeddings of monolingual languages to be
similar, and hence, promotes the ASR model to easily code-switch between
languages. Specifically, we propose to use Jensen-Shannon divergence and cosine
distance based constraints. The former enforces the output embeddings of the
monolingual languages to possess similar distributions, while the latter simply
brings the centroids of the two distributions close to each other.
Experimental results demonstrate the high effectiveness of the proposed method,
yielding up to 4.5% absolute mixed error rate improvement on a Mandarin-English
code-switching ASR task.
| 2,021 | Computation and Language |
Improving Domain Adaptation Translation with Domain Invariant and
Specific Information | In domain adaptation for neural machine translation, translation performance
can benefit from separating features into domain-specific features and common
features. In this paper, we propose a method to explicitly model the two kinds
of information in the encoder-decoder framework so as to exploit out-of-domain
data in in-domain training. In our method, we maintain a private encoder and a
private decoder for each domain which are used to model domain-specific
information. In the meantime, we introduce a common encoder and a common
decoder shared by all the domains which can only have domain-independent
information flow through. Besides, we add a discriminator to the shared encoder
and employ adversarial training for the whole model to reinforce the
performance of information separation and machine translation simultaneously.
Experimental results show that our method can greatly outperform competitive
baselines on multiple data sets.
| 2,019 | Computation and Language |
Semi-Supervised Few-Shot Learning for Dual Question-Answer Extraction | This paper addresses the problem of key phrase extraction from sentences.
Existing state-of-the-art supervised methods require large amounts of annotated
data to achieve good performance and generalization. Collecting labeled data
is, however, often expensive. In this paper, we redefine the problem as
question-answer extraction, and present SAMIE: Self-Asking Model for
Information Ixtraction, a semi-supervised model which dually learns to ask and
to answer questions by itself. Briefly, given a sentence $s$ and an answer $a$,
the model needs to choose the most appropriate question $\hat q$; meanwhile,
for the given sentence $s$ and same question $\hat q$ selected in the previous
step, the model will predict an answer $\hat a$. The model can support few-shot
learning with very limited supervision. It can also be used to perform
clustering analysis when no supervision is provided. Experimental results show
that the proposed method outperforms typical supervised methods especially when
given little labeled data.
| 2,019 | Computation and Language |
Crosslingual Document Embedding as Reduced-Rank Ridge Regression | There has recently been much interest in extending vector-based word
representations to multiple languages, such that words can be compared across
languages. In this paper, we shift the focus from words to documents and
introduce a method for embedding documents written in any language into a
single, language-independent vector space. For training, our approach leverages
a multilingual corpus where the same concept is covered in multiple languages
(but not necessarily via exact translations), such as Wikipedia. Our method,
Cr5 (Crosslingual reduced-rank ridge regression), starts by training a
ridge-regression-based classifier that uses language-specific bag-of-word
features in order to predict the concept that a given document is about. We
show that, when constraining the learned weight matrix to be of low rank, it
can be factored to obtain the desired mappings from language-specific
bags-of-words to language-independent embeddings. As opposed to most prior
methods, which use pretrained monolingual word vectors, postprocess them to
make them crosslingual, and finally average word vectors to obtain document
vectors, Cr5 is trained end-to-end and is thus natively crosslingual as well as
document-level. Moreover, since our algorithm uses the singular value
decomposition as its core operation, it is highly scalable. Experiments show
that our method achieves state-of-the-art performance on a crosslingual
document retrieval task. Finally, although not trained for embedding sentences
and words, it also achieves competitive performance on crosslingual sentence
and word retrieval tasks.
| 2,019 | Computation and Language |
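A rough numpy sketch of the core Cr5 recipe under simplifying assumptions: fit the ridge classifier in closed form from bag-of-words features to concept labels, then truncate its SVD so documents land in a shared low-rank space. The particular factorization and scaling chosen for the final embeddings here are one plausible reading, not necessarily the paper's exact construction.

```python
import numpy as np

def cr5_like_embeddings(X, Y, lam=1.0, rank=40):
    """X: (n_docs, n_feats) bag-of-words counts; Y: (n_docs, n_concepts)
    concept indicators. Returns (n_docs, rank) document embeddings."""
    d = X.shape[1]
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)  # ridge weights, closed form
    U, S, _ = np.linalg.svd(W, full_matrices=False)           # low-rank factorization
    doc_map = U[:, :rank] * S[:rank]                           # features -> shared space
    return X @ doc_map

# toy usage: 200 documents, 1000-word vocabulary, 50 concepts
X = np.random.rand(200, 1000)
Y = np.eye(50)[np.random.randint(0, 50, size=200)]
doc_emb = cr5_like_embeddings(X, Y, lam=1.0, rank=40)  # (200, 40)
```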
Issue Framing in Online Discussion Fora | In online discussion fora, speakers often make arguments for or against
something, say birth control, by highlighting certain aspects of the topic. In
social science, this is referred to as issue framing. In this paper, we
introduce a new issue frame annotated corpus of online discussions. We explore
to what extent models trained to detect issue frames in newswire and social
media can be transferred to the domain of discussion fora, using a combination
of multi-task and adversarial training, assuming only unlabeled training data
in the target domain.
| 2,019 | Computation and Language |
Source codes in human communication | Although information theoretic characterizations of human communication have
become increasingly popular in linguistics, to date they have largely involved
grafting probabilistic constructs onto older ideas about grammar. Similarities
between human and digital communication have been strongly emphasized, and
differences largely ignored. However, some of these differences matter:
communication systems are based on predefined codes shared by every
sender-receiver, whereas the distributions of words in natural languages
guarantee that no speaker-hearer ever has access to an entire linguistic code,
which seemingly undermines the idea that natural languages are probabilistic
systems in any meaningful sense. This paper describes how the distributional
properties of languages meet the various challenges arising from the
differences between information systems and natural languages, along with the
very different view of human communication these properties suggest.
| 2,019 | Computation and Language |
Effectiveness of Data-Driven Induction of Semantic Spaces and
Traditional Classifiers for Sarcasm Detection | Irony and sarcasm are two complex linguistic phenomena that are widely used
in everyday language, especially on social media, but they represent
two serious issues for automated text understanding. Many labeled corpora have
been extracted from several sources to accomplish this task, and it seems that
sarcasm is conveyed in different ways for different domains. Nonetheless, very
little work has been done for comparing different methods among the available
corpora. Furthermore, each author usually collects and uses their own datasets
to evaluate their own method. In this paper, we show that sarcasm detection can
be tackled by applying classical machine learning algorithms to input texts
sub-symbolically represented in a Latent Semantic space. The main consequence
is that our studies establish both reference datasets and baselines for the
sarcasm detection problem that could serve the scientific community to test
newly proposed methods.
| 2,019 | Computation and Language |
Adaptation of Hierarchical Structured Models for Speech Act Recognition
in Asynchronous Conversation | We address the problem of speech act recognition (SAR) in asynchronous
conversations (forums, emails). Unlike synchronous conversations (e.g.,
meetings, phone), asynchronous domains lack large labeled datasets to train an
effective SAR model. In this paper, we propose methods to effectively leverage
abundant unlabeled conversational data and the available labeled data from
synchronous domains. We carry out our research in three main steps. First, we
introduce a neural architecture based on hierarchical LSTMs and conditional
random fields (CRF) for SAR, and show that our method outperforms existing
methods when trained on in-domain data only. Second, we improve our initial SAR
models by semi-supervised learning in the form of pretrained word embeddings
learned from a large unlabeled conversational corpus. Finally, we employ
adversarial training to improve the results further by leveraging the labeled
data from synchronous domains and by explicitly modeling the distributional
shift in two domains.
| 2,019 | Computation and Language |
Evaluation of Greek Word Embeddings | Since word embeddings have been the most popular input for many NLP tasks,
evaluating their quality is of critical importance. Most research efforts are
focusing on English word embeddings. This paper addresses the problem of
constructing and evaluating such models for the Greek language. We created a
new word analogy corpus considering the original English Word2vec word analogy
corpus and some specific linguistic aspects of the Greek language as well.
Moreover, we created a Greek version of WordSim353 corpora for a basic
evaluation of word similarities. We tested seven word vector models and our
evaluation showed that we are able to create meaningful representations. Last,
we discovered that the morphological complexity of the Greek language and
polysemy can influence the quality of the resulting word embeddings.
| 2,020 | Computation and Language |
Black is to Criminal as Caucasian is to Police: Detecting and Removing
Multiclass Bias in Word Embeddings | Online texts -- across genres, registers, domains, and styles -- are riddled
with human stereotypes, expressed in overt or subtle ways. Word embeddings,
trained on these texts, perpetuate and amplify these stereotypes, and propagate
biases to machine learning models that use word embeddings as features. In this
work, we propose a method to debias word embeddings in multiclass settings such
as race and religion, extending the work of (Bolukbasi et al., 2016) from the
binary setting, such as binary gender. Next, we propose a novel methodology for
the evaluation of multiclass debiasing. We demonstrate that our multiclass
debiasing is robust and maintains efficacy on standard NLP tasks.
| 2,019 | Computation and Language |
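A minimal sketch of hard multiclass debiasing in the spirit described above: estimate a bias subspace from centered sets of class-defining words via PCA, then remove each embedding's projection onto that subspace. The example word sets, number of components, and normalization are illustrative assumptions, not the paper's exact procedure or data.

```python
import numpy as np
from sklearn.decomposition import PCA

def hard_debias(E, defining_sets, k=2):
    """E: dict word -> vector; defining_sets: lists of class-defining words
    (e.g. religion terms); k: number of bias directions to remove."""
    centered = []
    for words in defining_sets:
        vecs = np.stack([E[w] for w in words])
        centered.append(vecs - vecs.mean(axis=0))            # center each defining set
    B = PCA(n_components=k).fit(np.concatenate(centered)).components_  # bias subspace
    out = {}
    for w, v in E.items():
        u = v - (B @ v) @ B          # remove the projection onto the bias subspace
        out[w] = u / np.linalg.norm(u)
    return out

# toy usage with random vectors standing in for trained embeddings
rng = np.random.default_rng(0)
words = ["priest", "imam", "rabbi", "church", "mosque", "synagogue", "doctor", "police"]
E = {w: rng.normal(size=50) for w in words}
E_debiased = hard_debias(E, [words[:3], words[3:6]], k=2)
```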
Simple Question Answering with Subgraph Ranking and Joint-Scoring | Knowledge graph based simple question answering (KBSQA) is a major area of
research within question answering. Although only dealing with simple
questions, i.e., questions that can be answered through a single knowledge base
(KB) fact, this task is neither simple nor close to being solved. Targeting
the two main steps, subgraph selection and fact selection, the research
community has developed sophisticated approaches. However, the importance of
subgraph ranking and leveraging the subject--relation dependency of a KB fact
have not been sufficiently explored. Motivated by this, we present a unified
framework to describe and analyze existing approaches. Using this framework as
a starting point, we focus on two aspects: improving subgraph selection through
a novel ranking method and leveraging the subject--relation dependency by
proposing a joint scoring CNN model with a novel loss function that enforces
the well-order of scores. Our methods achieve a new state of the art (85.44% in
accuracy) on the SimpleQuestions dataset.
| 2,020 | Computation and Language |
Evaluating KGR10 Polish word embeddings in the recognition of temporal
expressions using BiLSTM-CRF | The article introduces a new set of Polish word embeddings, built using KGR10
corpus, which contains more than 4 billion words. These embeddings are
evaluated in the problem of recognition of temporal expressions (timexes) for
the Polish language. We describe the process of KGR10 corpus creation and a
new approach to the recognition problem using a Bidirectional Long Short-Term
Memory (BiLSTM) network with an additional CRF layer, where specific embeddings
are essential. We present experiments and the conclusions drawn from them.
| 2,019 | Computation and Language |
Analyzing and Interpreting Neural Networks for NLP: A Report on the
First BlackboxNLP Workshop | The EMNLP 2018 workshop BlackboxNLP was dedicated to resources and techniques
specifically developed for analyzing and understanding the inner workings and
representations acquired by neural models of language. Approaches included:
systematically manipulating the input to neural networks and investigating the
impact on their performance, testing whether interpretable knowledge can be
decoded from intermediate representations acquired by neural networks,
proposing modifications to neural network architectures to make their knowledge
state or generated output more explainable, and examining the performance of
networks on simplified or formal languages. Here we review a number of
representative studies in each category.
| 2,019 | Computation and Language |
Abusive Language Detection with Graph Convolutional Networks | Abuse on the Internet represents a significant societal problem of our time.
Previous research on automated abusive language detection in Twitter has shown
that community-based profiling of users is a promising technique for this task.
However, existing approaches only capture shallow properties of online
communities by modeling follower-following relationships. In contrast, working
with graph convolutional networks (GCNs), we present the first approach that
captures not only the structure of online communities but also the linguistic
behavior of the users within them. We show that such a heterogeneous
graph-structured modeling of communities significantly advances the current
state of the art in abusive language detection.
| 2,019 | Computation and Language |
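For readers unfamiliar with graph convolutional networks, the sketch below shows a single symmetrically normalized GCN layer over a toy graph. The adjacency matrix, feature dimensions, and activation are illustrative only and say nothing about the paper's actual heterogeneous community graph.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).

    A: (n, n) adjacency matrix; H: (n, d_in) node features; W: (d_in, d_out)
    """
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# toy usage: 4 users in a chain, 16-dim features, 8-dim output
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
H2 = gcn_layer(A, np.random.randn(4, 16), np.random.randn(16, 8))  # (4, 8)
```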
Differentiable Sampling with Flexible Reference Word Order for Neural
Machine Translation | Despite some empirical success at correcting exposure bias in machine
translation, scheduled sampling algorithms suffer from a major drawback: they
incorrectly assume that words in the reference translations and in sampled
sequences are aligned at each time step. Our new differentiable sampling
algorithm addresses this issue by optimizing the probability that the reference
can be aligned with the sampled output, based on a soft alignment predicted by
the model itself. As a result, the output distribution at each time step is
evaluated with respect to the whole predicted sequence. Experiments on IWSLT
translation tasks show that our approach improves BLEU compared to maximum
likelihood and scheduled sampling baselines. In addition, our approach is
simpler to train, with no need for a sampling schedule, and yields models that
achieve larger improvements with smaller beam sizes.
| 2,019 | Computation and Language |
Completely Unsupervised Speech Recognition By A Generative Adversarial
Network Harmonized With Iteratively Refined Hidden Markov Models | Producing a large annotated speech corpus for training ASR systems remains
difficult for more than 95% of languages all over the world which are
low-resourced, but collecting a relatively big unlabeled data set for such
languages is more achievable. This is why some initial efforts have been
reported on completely unsupervised speech recognition learned from unlabeled
data only, although with relatively high error rates. In this paper, we develop
a Generative Adversarial Network (GAN) to achieve this purpose, in which a
Generator and a Discriminator learn from each other iteratively to improve the
performance. We further use a set of Hidden Markov Models (HMMs) iteratively
refined from the machine generated labels to work in harmony with the GAN. The
initial experiments on the TIMIT data set achieve a phone error rate of 33.1%,
which is 8.5% lower than the previous state-of-the-art.
| 2,019 | Computation and Language |
Revisiting Adversarial Autoencoder for Unsupervised Word Translation
with Cycle Consistency and Improved Training | Adversarial training has shown impressive success in learning bilingual
dictionaries without any parallel data by mapping monolingual embeddings to a
shared space. However, recent work has shown superior performance for
non-adversarial methods in more challenging language pairs. In this work, we
revisit adversarial autoencoder for unsupervised word translation and propose
two novel extensions to it that yield more stable training and improved
results. Our method includes regularization terms to enforce cycle consistency
and input reconstruction, and puts the target encoders as an adversary against
the corresponding discriminator. Extensive experiments with European,
non-European and low-resource languages show that our method is more robust and
achieves better performance than recently proposed adversarial and
non-adversarial approaches.
| 2,019 | Computation and Language |
AutoSeM: Automatic Task Selection and Mixing in Multi-Task Learning | Multi-task learning (MTL) has achieved success over a wide range of problems,
where the goal is to improve the performance of a primary task using a set of
relevant auxiliary tasks. However, when the usefulness of the auxiliary tasks
w.r.t. the primary task is not known a priori, the success of MTL models
depends on the correct choice of these auxiliary tasks and also a balanced
mixing ratio of these tasks during alternate training. These two problems could
be resolved via manual intuition or hyper-parameter tuning over all
combinatorial task choices, but this introduces inductive bias or is not
scalable when the number of candidate auxiliary tasks is very large. To address
these issues, we present AutoSeM, a two-stage MTL pipeline, where the first
stage automatically selects the most useful auxiliary tasks via a
Beta-Bernoulli multi-armed bandit with Thompson Sampling, and the second stage
learns the training mixing ratio of these selected auxiliary tasks via a
Gaussian Process based Bayesian optimization framework. We conduct several MTL
experiments on the GLUE language understanding tasks, and show that our AutoSeM
framework can successfully find relevant auxiliary tasks and automatically
learn their mixing ratio, achieving significant performance boosts on several
primary tasks. Finally, we present ablations for each stage of AutoSeM and
analyze the learned auxiliary task choices.
| 2,019 | Computation and Language |
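The AutoSeM abstract above describes its first stage as a Beta-Bernoulli multi-armed bandit with Thompson Sampling over candidate auxiliary tasks. Below is a minimal, hypothetical Python sketch of that idea only; the number of tasks, the fake reward function, and all names are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
n_tasks = 5                  # candidate auxiliary tasks (assumption)
alpha = np.ones(n_tasks)     # Beta posterior "successes" per task
beta = np.ones(n_tasks)      # Beta posterior "failures" per task

def train_one_round(task_id):
    # Placeholder: train primary + selected auxiliary task briefly and return
    # 1 if the primary dev metric improved, else 0 (fake reward for the sketch).
    return int(rng.random() < 0.3 + 0.1 * task_id)

for step in range(200):
    samples = rng.beta(alpha, beta)   # Thompson Sampling: draw from each posterior
    task = int(np.argmax(samples))    # pick the arm with the highest draw
    reward = train_one_round(task)
    alpha[task] += reward             # update the Beta posterior
    beta[task] += 1 - reward

print("posterior mean usefulness per task:", alpha / (alpha + beta))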
Knowledge Distillation For Recurrent Neural Network Language Modeling
With Trust Regularization | Recurrent Neural Networks (RNNs) have dominated language modeling because of
their superior performance over traditional N-gram based models. In many
applications, a large Recurrent Neural Network language model (RNNLM) or an
ensemble of several RNNLMs is used. These models have large memory footprints
and require heavy computation. In this paper, we examine the effect of applying
knowledge distillation in reducing the model size for RNNLMs. In addition, we
propose a trust regularization method to improve the knowledge distillation
training for RNNLMs. Using knowledge distillation with trust regularization, we
reduce the parameter size to a third of that of the previously published best
model while maintaining the state-of-the-art perplexity result on Penn Treebank
data. In a speech recognition N-best rescoring task, we reduce the RNNLM model
size to 18.5% of the baseline system, with no degradation in word error
rate (WER) performance on the Wall Street Journal data set.
| 2,019 | Computation and Language |
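The abstract above combines knowledge distillation with a trust regularization term. The sketch below shows only the standard distillation loss (softened teacher targets plus cross-entropy); the paper's trust regularizer is not reproduced, and the temperature, mixing weight, and vocabulary size are illustrative assumptions.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    # Soft targets: KL between softened teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard next-word cross-entropy.
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

# Usage with fake logits over a 10k-word vocabulary.
student = torch.randn(32, 10000, requires_grad=True)
teacher = torch.randn(32, 10000)
targets = torch.randint(0, 10000, (32,))
loss = distillation_loss(student, teacher, targets)
loss.backward()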
Learning to Navigate Unseen Environments: Back Translation with
Environmental Dropout | A grand goal in AI is to build a robot that can accurately navigate based on
natural language instructions, which requires the agent to perceive the scene,
understand and ground language, and act in the real-world environment. One key
challenge here is to learn to navigate in new environments that are unseen
during training. Most of the existing approaches perform dramatically worse in
unseen environments as compared to seen ones. In this paper, we present a
generalizable navigational agent. Our agent is trained in two stages. The first
stage is training via mixed imitation and reinforcement learning, combining the
benefits from both off-policy and on-policy optimization. The second stage is
fine-tuning via newly-introduced 'unseen' triplets (environment, path,
instruction). To generate these unseen triplets, we propose a simple but
effective 'environmental dropout' method to mimic unseen environments, which
overcomes the problem of limited seen environment variability. Next, we apply
semi-supervised learning (via back-translation) on these dropped-out
environments to generate new paths and instructions. Empirically, we show that
our agent is substantially better at generalizability when fine-tuned with
these triplets, outperforming the state-of-the-art approaches by a large margin on
the private unseen test set of the Room-to-Room task, and achieving the top
rank on the leaderboard.
| 2,019 | Computation and Language |
Deep-Sentiment: Sentiment Analysis Using Ensemble of CNN and Bi-LSTM
Models | With the popularity of social networks, and e-commerce websites, sentiment
analysis has become a more active area of research in the past few years. On a
high level, sentiment analysis tries to understand the public opinion about a
specific product or topic, or trends from reviews or tweets. Sentiment analysis
plays an important role in better understanding customer/user opinion, and also
extracting social/political trends. There has been a lot of previous work on
sentiment analysis, some based on hand-engineering relevant textual features,
and others based on different neural network architectures. In this work, we
present a model based on an ensemble of a long short-term memory (LSTM) network and a
convolutional neural network (CNN), one to capture the temporal information of
the data, and the other one to extract the local structure thereof. Through
experimental results, we show that using this ensemble model we can outperform
both individual models. We are also able to achieve a very high accuracy rate
compared to the previous works.
| 2,019 | Computation and Language |
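The abstract above ensembles an LSTM and a CNN but does not state the combination rule; one common, minimal choice is to average the class probabilities of the two already-trained models, as in the hypothetical sketch below (weights, shapes, and numbers are illustrative assumptions).

import numpy as np

def ensemble_predict(p_lstm, p_cnn, w=0.5):
    # p_lstm, p_cnn: (n_examples, n_classes) predicted probabilities from the
    # two base models; w weights the LSTM. Returns predicted class indices.
    p = w * p_lstm + (1 - w) * p_cnn
    return p.argmax(axis=1)

p_lstm = np.array([[0.7, 0.3], [0.4, 0.6]])
p_cnn = np.array([[0.6, 0.4], [0.2, 0.8]])
print(ensemble_predict(p_lstm, p_cnn))   # -> [0 1]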
Exploring Methods for the Automatic Detection of Errors in Manual
Transcription | Quality of data plays an important role in most deep learning tasks. In the
speech community, transcription of speech recordings is indispensable. Since
transcriptions are usually produced manually, automatically finding errors
in manual transcriptions not only saves time and labor but also benefits the
performance of tasks that rely on them for training. Inspired by the success of
hybrid automatic speech recognition using both language model and acoustic
model, two approaches of automatic error detection in the transcriptions have
been explored in this work. A previous study using a biased language model
approach, relying on a strong transcription-dependent language model, is
reviewed. We then propose a novel acoustic model based approach,
focusing on the phonetic sequence of speech. Both methods have been evaluated
on a completely real dataset, which was originally transcribed with errors and
strictly corrected manually afterwards.
| 2,019 | Computation and Language |
Word Similarity Datasets for Thai: Construction and Evaluation | Distributional semantics in the form of word embeddings are an essential
ingredient to many modern natural language processing systems. The
quantification of semantic similarity between words can be used to evaluate the
ability of a system to perform semantic interpretation. To this end, a number
of word similarity datasets have been created for the English language over the
last decades. For the Thai language, few such resources are available. In this work,
we create three Thai word similarity datasets by translating and re-rating the
popular WordSim-353, SimLex-999 and SemEval-2017-Task-2 datasets. The three
datasets contain 1852 word pairs in total and have different characteristics in
terms of difficulty, domain coverage, and notion of similarity (relatedness
vs.~similarity). These features help to gain a broader picture of the
properties of an evaluated word embedding model. We include baseline
evaluations with existing Thai embedding models, and identify the high ratio of
out-of-vocabulary words as one of the biggest challenges. All datasets,
evaluation results, and a tool for easy evaluation of new Thai embedding models
are available to the NLP community online.
| 2,019 | Computation and Language |
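The standard protocol for word similarity datasets like the Thai sets above is Spearman correlation between human ratings and embedding cosine similarities, with out-of-vocabulary pairs skipped and reported. The Python sketch below illustrates that protocol; the pair list, embeddings, and file handling are illustrative assumptions, not the released evaluation tool.

import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate(pairs, embeddings):
    # pairs: list of (word1, word2, human_score); embeddings: dict word -> vector.
    # OOV pairs are skipped, and their ratio is reported alongside Spearman's rho.
    gold, pred, oov = [], [], 0
    for w1, w2, score in pairs:
        if w1 in embeddings and w2 in embeddings:
            gold.append(score)
            pred.append(cosine(embeddings[w1], embeddings[w2]))
        else:
            oov += 1
    rho, _ = spearmanr(gold, pred)
    return rho, oov / len(pairs)

# Tiny demo with made-up ratings and random vectors.
pairs = [("แมว", "สุนัข", 5.5), ("บ้าน", "รถ", 2.0), ("น้ำ", "ฝน", 7.0)]
emb = {w: np.random.randn(50) for p in pairs for w in p[:2]}
print(evaluate(pairs, emb))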
CODAH: An Adversarially Authored Question-Answer Dataset for Common
Sense | Commonsense reasoning is a critical AI capability, but it is difficult to
construct challenging datasets that test common sense. Recent neural question
answering systems, based on large pre-trained models of language, have already
achieved near-human-level performance on commonsense knowledge benchmarks.
These systems do not possess human-level common sense, but are able to exploit
limitations of the datasets to achieve human-level scores.
We introduce the CODAH dataset, an adversarially-constructed evaluation
dataset for testing common sense. CODAH forms a challenging extension to the
recently-proposed SWAG dataset, which tests commonsense knowledge using
sentence-completion questions that describe situations observed in video. To
produce a more difficult dataset, we introduce a novel procedure for question
acquisition in which workers author questions designed to target weaknesses of
state-of-the-art neural question answering systems. Workers are rewarded for
submissions that models fail to answer correctly both before and after
fine-tuning (in cross-validation). We create 2.8k questions via this procedure
and evaluate the performance of multiple state-of-the-art question answering
systems on our dataset. We observe a significant gap between human performance,
which is 95.3%, and the best baseline accuracy of 67.5%, achieved by
the BERT-Large model.
| 2,019 | Computation and Language |
Giving Attention to the Unexpected: Using Prosody Innovations in
Disfluency Detection | Disfluencies in spontaneous speech are known to be associated with prosodic
disruptions. However, most algorithms for disfluency detection use only word
transcripts. Integrating prosodic cues has proved difficult because of the many
sources of variability affecting the acoustic correlates. This paper introduces
a new approach to extracting acoustic-prosodic cues using text-based
distributional prediction of acoustic cues to derive vector z-score features
(innovations). We explore both early and late fusion techniques for integrating
text and prosody, showing gains over a high-accuracy text-only model.
| 2,019 | Computation and Language |
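The abstract above derives "innovation" features as z-scores of observed acoustic cues against text-based predictions of those cues. Below is a minimal sketch of that z-scoring step; the predicted means and standard deviations are assumed to come from a separate text-based regression model that is not shown.

import numpy as np

def innovation_features(observed, predicted_mean, predicted_std, eps=1e-6):
    # All arrays have shape (n_tokens, n_cues); the result is the standardized
    # "unexpected" part of the prosody passed on to the disfluency detector.
    return (observed - predicted_mean) / (predicted_std + eps)

obs = np.random.randn(10, 3)   # fake acoustic cues for 10 tokens, 3 cue types
feats = innovation_features(obs, np.zeros((10, 3)), np.ones((10, 3)))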
Disfluencies and Human Speech Transcription Errors | This paper explores contexts associated with errors in transcription of
spontaneous speech, shedding light on human perception of disfluencies and other
conversational speech phenomena. A new version of the Switchboard corpus is
provided with disfluency annotations for careful speech transcripts, together
with results showing the impact of transcription errors on evaluation of
automatic disfluency detection.
| 2,019 | Computation and Language |
Text Generation with Exemplar-based Adaptive Decoding | We propose a novel conditioned text generation model. It draws inspiration
from traditional template-based text generation techniques, where the source
provides the content (i.e., what to say), and the template influences how to
say it. Building on the successful encoder-decoder paradigm, it first encodes
the content representation from the given input text; to produce the output, it
retrieves exemplar text from the training data as "soft templates," which are
then used to construct an exemplar-specific decoder. We evaluate the proposed
model on abstractive text summarization and data-to-text generation. Empirical
results show that this model achieves strong performance and outperforms
comparable baselines.
| 2,019 | Computation and Language |
HiGRU: Hierarchical Gated Recurrent Units for Utterance-level Emotion
Recognition | In this paper, we address three challenges in utterance-level emotion
recognition in dialogue systems: (1) the same word can deliver different
emotions in different contexts; (2) some emotions are rarely seen in general
dialogues; (3) long-range contextual information is hard to capture
effectively. We therefore propose a hierarchical Gated Recurrent Unit (HiGRU)
framework with a lower-level GRU to model the word-level inputs and an
upper-level GRU to capture the contexts of utterance-level embeddings.
Moreover, we promote the framework to two variants, HiGRU with individual
features fusion (HiGRU-f) and HiGRU with self-attention and features fusion
(HiGRU-sf), so that the word/utterance-level individual inputs and the
long-range contextual information can be sufficiently utilized. Experiments on
three dialogue emotion datasets, IEMOCAP, Friends, and EmotionPush demonstrate
that our proposed HiGRU models attain at least 8.7%, 7.5%, 6.0% improvement
over the state-of-the-art methods on each dataset, respectively. Particularly,
by utilizing only the textual feature in IEMOCAP, our HiGRU models gain at
least 3.8% improvement over the state-of-the-art conversational memory network
(CMN) with the trimodal features of text, video, and audio.
| 2,019 | Computation and Language |
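The HiGRU abstract above describes a lower word-level GRU and an upper utterance-level GRU. The PyTorch sketch below is a minimal two-level variant in that spirit (no fusion or self-attention); the pooling choice, dimensions, and classifier are assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class HierGRU(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=128, n_emotions=6):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_gru = nn.GRU(emb_dim, hid, batch_first=True, bidirectional=True)
        self.utt_gru = nn.GRU(2 * hid, hid, batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2 * hid, n_emotions)

    def forward(self, dialogue):
        # dialogue: (n_utts, max_words) word ids for one conversation
        word_out, _ = self.word_gru(self.emb(dialogue))   # (n_utts, max_words, 2*hid)
        utt_vecs = word_out.max(dim=1).values             # pool words -> utterance vectors
        ctx, _ = self.utt_gru(utt_vecs.unsqueeze(0))      # contextualize across the dialogue
        return self.clf(ctx.squeeze(0))                   # (n_utts, n_emotions)

model = HierGRU(vocab_size=5000)
logits = model(torch.randint(0, 5000, (4, 20)))   # 4 utterances, 20 words each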
Knowledge-Augmented Language Model and its Application to Unsupervised
Named-Entity Recognition | Traditional language models are unable to efficiently model entity names
observed in text. All but the most popular named entities appear infrequently
in text providing insufficient context. Recent efforts have recognized that
context can be generalized between entity names that share the same type (e.g.,
\emph{person} or \emph{location}) and have equipped language models with access
to an external knowledge base (KB). Our Knowledge-Augmented Language Model
(KALM) continues this line of work by augmenting a traditional model with a KB.
Unlike previous methods, however, we train with an end-to-end predictive
objective optimizing the perplexity of text. We do not require any additional
information such as named entity tags. In addition to improving language
modeling performance, KALM learns to recognize named entities in an entirely
unsupervised way by using entity type information latent in the model. On a
Named Entity Recognition (NER) task, KALM achieves performance comparable with
state-of-the-art supervised models. Our work demonstrates that named entities
(and possibly other types of world knowledge) can be modeled successfully using
predictive learning and training on large corpora of text without any
additional information.
| 2,019 | Computation and Language |
Mixing syntagmatic and paradigmatic information for concept detection | In the last decades, philosophers have begun using empirical data for
conceptual analysis, but corpus-based conceptual analysis has so far failed to
develop, in part because of the absence of reliable methods to automatically
detect concepts in textual data. Previous attempts have shown that topic models
can constitute efficient concept detection heuristics, but while they leverage
the syntagmatic relations in a corpus, they fail to exploit paradigmatic
relations, and thus probably fail to model concepts accurately. In this
article, we show that using a topic model that models concepts on a space of
word embeddings (Hu and Tsujii, 2016) can lead to significant increases in
concept detection performance, as well as enable the target concept to be
expressed in more flexible ways using word vectors.
| 2,019 | Computation and Language |
Who Needs Words? Lexicon-Free Speech Recognition | Lexicon-free speech recognition naturally deals with the problem of
out-of-vocabulary (OOV) words. In this paper, we show that character-based
language models (LM) can perform as well as word-based LMs for speech
recognition, in word error rates (WER), even without restricting the decoding
to a lexicon. We study character-based LMs and show that convolutional LMs can
effectively leverage large (character) contexts, which is key for good speech
recognition performance downstream. We specifically show that the lexicon-free
decoding performance (WER) on utterances with OOV words using character-based
LMs is better than lexicon-based decoding, both with character or word-based
LMs.
| 2,019 | Computation and Language |
A Hierarchical Decoding Model For Spoken Language Understanding From
Unaligned Data | Spoken language understanding (SLU) systems can be trained on two types of
labelled data: aligned or unaligned. Unaligned data do not require word-by-word
annotation and are easier to obtain. In this paper, we focus on spoken
language understanding from unaligned data whose annotation is a set of
act-slot-value triples. Previous works usually focus on improving slot-value pair
prediction and estimating dialogue act types separately, which ignores the
hierarchical structure of the act-slot-value triples. Here, we propose a novel
hierarchical decoding model which dynamically parses act, slot and value in a
structured way and employs pointer network to handle out-of-vocabulary (OOV)
values. Experiments on DSTC2 dataset, a benchmark unaligned dataset, show that
the proposed model not only outperforms the previous state-of-the-art model but
also generalizes effectively and efficiently to unseen act-slot type
pairs and OOV values.
| 2,019 | Computation and Language |
A Graph-based Model for Joint Chinese Word Segmentation and Dependency
Parsing | Chinese word segmentation and dependency parsing are two fundamental tasks
for Chinese natural language processing. Dependency parsing is defined at the
word level. Therefore, word segmentation is a precondition of dependency
parsing, which makes dependency parsing suffer from error propagation and
unable to directly make use of character-level pre-trained language models
(such as BERT). In this paper, we propose a graph-based model to integrate
Chinese word segmentation and dependency parsing. Different from previous
transition-based joint models, our proposed model is more concise, which
results in fewer efforts of feature engineering. Our graph-based joint model
achieves better performance than previous joint models and state-of-the-art
results in both Chinese word segmentation and dependency parsing. Besides, when
BERT is combined, our model can substantially reduce the performance gap of
dependency parsing between joint models and gold-segmented word-based models.
Our code is publicly available at https://github.com/fastnlp/JointCwsParser.
| 2,019 | Computation and Language |
Seq2Biseq: Bidirectional Output-wise Recurrent Neural Networks for
Sequence Modelling | During the last couple of years, Recurrent Neural Networks (RNN) have reached
state-of-the-art performance on most sequence modelling problems. In
particular, the "sequence to sequence" model and the neural CRF have proved to
be very effective in this domain. In this article, we propose a new RNN
architecture for sequence labelling, leveraging gated recurrent layers to take
arbitrarily long contexts into account, and using two decoders operating
forward and backward. We compare several variants of the proposed solution and
their performances to the state-of-the-art. Most of our results are better than
the state-of-the-art or very close to it and thanks to the use of recent
technologies, our architecture can scale on corpora larger than those used in
this work.
| 2,019 | Computation and Language |
Bilingual-GAN: A Step Towards Parallel Text Generation | Latent space based GAN methods and attention based sequence to sequence
models have achieved impressive results in text generation and unsupervised
machine translation respectively. Leveraging the two domains, we propose an
adversarial latent space based model capable of generating parallel sentences
in two languages concurrently and translating bidirectionally. The bilingual
generation goal is achieved by sampling from the latent space that is shared
between both languages. First, two denoising autoencoders are trained, with
shared encoders and back-translation to enforce a shared latent state between
the two languages. The decoder is shared for the two translation directions.
Next, a GAN is trained to generate synthetic "code" mimicking the languages'
shared latent space. This code is then fed into the decoder to generate text in
either language. We perform our experiments on Europarl and Multi30k datasets,
on the English-French language pair, and document our performance using both
supervised and unsupervised machine translation.
| 2,019 | Computation and Language |
Exploiting Syntactic Features in a Parsed Tree to Improve End-to-End TTS | The end-to-end TTS, which can predict speech directly from a given sequence
of graphemes or phonemes, has shown improved performance over the conventional
TTS. However, its predicting capability is still limited by the
acoustic/phonetic coverage of the training data, usually constrained by the
training set size. To further improve the TTS quality in pronunciation, prosody
and perceived naturalness, we propose to exploit the information embedded in a
syntactically parsed tree where the inter-phrase/word information of a sentence
is organized in a multilevel tree structure. Specifically, two key features:
phrase structure and relations between adjacent words are investigated.
Experimental results in subjective listening, measured on three test sets, show
that the proposed approach is effective in improving the pronunciation clarity,
prosody and naturalness of the synthesized speech of the baseline system.
| 2,019 | Computation and Language |
A New GAN-based End-to-End TTS Training Algorithm | End-to-end, autoregressive model-based TTS has shown significant performance
improvements over the conventional one. However, the autoregressive module
training is affected by the exposure bias, or the mismatch between the
different distributions of real and predicted data. While real data is
available in training, in testing only predicted data is available to feed
the autoregressive module. By introducing both real and generated data
sequences in training, we can alleviate the effects of the exposure bias. We
propose to use Generative Adversarial Network (GAN) along with the key idea of
Professor Forcing in training. A discriminator in GAN is jointly trained to
equalize the difference between real and predicted data. In AB subjective
listening test, the results show that the new approach is preferred over the
standard transfer learning with a CMOS improvement of 0.1. Sentence level
intelligibility tests show significant improvement in a pathological test set.
The GAN-trained new model is also more stable than the baseline to produce
better alignments for the Tacotron output.
| 2,019 | Computation and Language |
APE at Scale and its Implications on MT Evaluation Biases | In this work, we train an Automatic Post-Editing (APE) model and use it to
reveal biases in standard Machine Translation (MT) evaluation procedures. The
goal of our APE model is to correct typical errors introduced by the
translation process, and convert the "translationese" output into natural text.
Our APE model is trained entirely on monolingual data that has been round-trip
translated through English, to mimic errors that are similar to the ones
introduced by NMT. We apply our model to the output of existing NMT systems,
and demonstrate that, while the human-judged quality improves in all cases,
BLEU scores drop with forward-translated test sets. We verify these results for
the WMT18 English to German, WMT15 English to French, and WMT16 English to
Romanian tasks. Furthermore, we selectively apply our APE model on the output
of the top submissions of the most recent WMT evaluation campaigns. We see
quality improvements on all tasks of up to 2.5 BLEU points.
| 2,019 | Computation and Language |
Quizbowl: The Case for Incremental Question Answering | Scholastic trivia competitions test knowledge and intelligence through
mastery of question answering. Modern question answering benchmarks are one
variant of the Turing test. Specifically, answering a set of questions as well
as a human is a minimum bar towards demonstrating human-like intelligence. This
paper makes the case that the format of one competition -- where participants
can answer in the middle of hearing a question (incremental) -- better
differentiates the skill between (human or machine) players. Additionally,
merging a sequential decision-making sub-task with question answering (QA)
provides a good setting for research in model calibration and opponent
modeling. Thus, embedded in this task are three machine learning challenges:
(1) factoid QA over thousands of Wikipedia-like answers, (2) calibration of the
QA model's confidence scores, and (3) sequential decision-making that
incorporates knowledge of the QA model, its calibration, and what the opponent
may do. We make two contributions: (1) collecting and curating a large factoid
QA dataset and an accompanying gameplay dataset, and (2) developing a model
that addresses these three machine learning challenges. In addition to offline
evaluation, we pitted our model against some of the most accomplished trivia
players in the world in a series of exhibition matches spanning several years.
Throughout this paper, we show that collaborations with the vibrant trivia
community have contributed to the quality of our dataset, spawned new research
directions, and doubled as an exciting way to engage the public with research
in machine learning and natural language processing.
| 2,021 | Computation and Language |
Characterizing the impact of geometric properties of word embeddings on
task performance | Analysis of word embedding properties to inform their use in downstream NLP
tasks has largely been studied by assessing nearest neighbors. However,
geometric properties of the continuous feature space contribute directly to the
use of embedding features in downstream models, and are largely unexplored. We
consider four properties of word embedding geometry, namely: position relative
to the origin, distribution of features in the vector space, global pairwise
distances, and local pairwise distances. We define a sequence of
transformations to generate new embeddings that expose subsets of these
properties to downstream models and evaluate change in task performance to
understand the contribution of each property to NLP models. We transform
publicly available pretrained embeddings from three popular toolkits (word2vec,
GloVe, and FastText) and evaluate on a variety of intrinsic tasks, which model
linguistic information in the vector space, and extrinsic tasks, which use
vectors as input to machine learning models. We find that intrinsic evaluations
are highly sensitive to absolute position, while extrinsic tasks rely primarily
on local similarity. Our findings suggest that future embedding models and
post-processing techniques should focus primarily on similarity to nearby
points in vector space.
| 2,019 | Computation and Language |
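The abstract above studies properties such as position relative to the origin and global distances by transforming pretrained embeddings. The sketch below shows a few illustrative transformations of that kind (recentering, rescaling, translation); it is not the paper's exact transformation sequence, and all values are assumptions.

import numpy as np

def recenter(E):
    return E - E.mean(axis=0, keepdims=True)        # move the centroid to the origin

def rescale(E):
    return E / np.linalg.norm(E, axis=1).mean()     # unit average vector norm

def translate(E, shift):
    return E + shift                                # global shift of all vectors

E = np.random.randn(1000, 300)      # fake pretrained embedding matrix (rows = words)
E_transformed = rescale(recenter(E))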
Performance Monitoring for End-to-End Speech Recognition | Measuring performance of an automatic speech recognition (ASR) system without
ground-truth could be beneficial in many scenarios, especially with data from
unseen domains, where performance can be highly inconsistent. In conventional
ASR systems, several performance monitoring (PM) techniques have been
well-developed to monitor performance by looking at tri-phone posteriors or
pre-softmax activations from neural network acoustic modeling. However,
strategies for monitoring more recently developed end-to-end ASR systems have
not yet been explored, and so that is the focus of this paper. We adapt
previous PM measures (Entropy, M-measure and Auto-encoder) and apply our
proposed RNN predictor in the end-to-end setting. These measures utilize the
decoder output layer and attention probability vectors, and their predictive
power is measured with simple linear models. Our findings suggest that
decoder-level features are more feasible and informative than attention-level
probabilities for PM measures, and that M-measure on the decoder posteriors
achieves the best overall predictive performance with an average prediction
error 8.8%. Entropy measures and RNN-based prediction also show competitive
predictability, especially for unseen conditions.
| 2,019 | Computation and Language |
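One of the performance-monitoring measures adapted in the abstract above is entropy over the decoder's output distributions. The sketch below computes a mean-entropy score per utterance, which such work relates to word error rate via a simple linear model; the shapes and demo data are assumptions.

import numpy as np

def mean_entropy(posteriors, eps=1e-12):
    # posteriors: (n_steps, n_labels), each row a decoder output distribution.
    p = np.clip(posteriors, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

post = np.random.dirichlet(np.ones(50), size=120)   # fake decoder posteriors
print(mean_entropy(post))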
Data Selection with Cluster-Based Language Difference Models and Cynical
Selection | We present and apply two methods for addressing the problem of selecting
relevant training data out of a general pool for use in tasks such as machine
translation. Building on existing work on class-based language difference
models, we first introduce a cluster-based method that uses Brown clusters to
condense the vocabulary of the corpora. Secondly, we implement the cynical data
selection method, which incrementally constructs a training corpus to
efficiently model the task corpus. Both the cluster-based and the cynical data
selection approaches are used for the first time within a machine translation
system, and we perform a head-to-head comparison. Our intrinsic evaluations
show that both new methods outperform the standard Moore-Lewis approach
(cross-entropy difference), in terms of better perplexity and OOV rates on
in-domain data. The cynical approach converges much quicker, covering nearly
all of the in-domain vocabulary with 84% less data than the other methods.
Furthermore, the new approaches can be used to select machine translation
training data for training better systems. Our results confirm that class-based
selection using Brown clusters is a viable alternative to POS-based class-based
methods, and removes the reliance on a part-of-speech tagger. Additionally, we
are able to validate the recently proposed cynical data selection method,
showing that its performance in SMT models surpasses that of traditional
cross-entropy difference methods and more closely matches the sentence length
of the task corpus.
| 2,017 | Computation and Language |
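The abstract above compares against the Moore-Lewis cross-entropy-difference baseline. The sketch below implements that baseline with toy add-one-smoothed unigram language models to make the scoring rule concrete; real implementations use stronger n-gram or neural LMs, and the paper's class-based and cynical variants are not shown. All corpora here are illustrative assumptions.

import math
from collections import Counter

def unigram_lm(corpus):
    counts = Counter(w for sent in corpus for w in sent.split())
    total, vocab = sum(counts.values()), len(counts)
    return lambda w: (counts[w] + 1) / (total + vocab)   # add-one smoothing

def cross_entropy(sentence, lm):
    words = sentence.split()
    return -sum(math.log(lm(w), 2) for w in words) / max(len(words), 1)

in_domain = ["the patient was discharged", "the patient received treatment"]
general = ["the cat sat on the mat", "stocks rose sharply today"]
p_in, p_gen = unigram_lm(in_domain), unigram_lm(general)

candidates = ["the patient improved", "stocks fell again"]
# Moore-Lewis: rank by in-domain minus general-domain cross-entropy (lower is better).
scored = sorted(candidates, key=lambda s: cross_entropy(s, p_in) - cross_entropy(s, p_gen))
print(scored)   # sentences most like the in-domain data come first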
BAG: Bi-directional Attention Entity Graph Convolutional Network for
Multi-hop Reasoning Question Answering | Multi-hop reasoning question answering requires deep comprehension of
relationships between various documents and queries. We propose a
Bi-directional Attention Entity Graph Convolutional Network (BAG), leveraging
relationships between nodes in an entity graph and attention information
between a query and the entity graph, to solve this task. Graph convolutional
networks are used to obtain a relation-aware representation of nodes for entity
graphs built from documents with multi-level features. Bidirectional attention
is then applied on graphs and queries to generate a query-aware node
representation, which will be used for the final prediction. Experimental
evaluation shows BAG achieves state-of-the-art accuracy performance on the
QAngaroo WIKIHOP dataset.
| 2,019 | Computation and Language |
Better Word Embeddings by Disentangling Contextual n-Gram Information | Pre-trained word vectors are ubiquitous in Natural Language Processing
applications. In this paper, we show how training word embeddings jointly with
bigram and even trigram embeddings results in improved unigram embeddings. We
claim that training word embeddings along with higher n-gram embeddings helps
in the removal of the contextual information from the unigrams, resulting in
better stand-alone word embeddings. We empirically show the validity of our
hypothesis by outperforming other competing word representation models by a
significant margin on a wide variety of tasks. We make our models publicly
available.
| 2,019 | Computation and Language |
Detecting Cybersecurity Events from Noisy Short Text | Analyzing messages shared over social networks is critical for cyber
threat intelligence and cyber-crime prevention. In this study, we propose a
method that leverages both domain-specific word embeddings and task-specific
features to detect cyber security events from tweets. Our model employs a
convolutional neural network (CNN) and a long short-term memory (LSTM)
recurrent neural network which takes word level meta-embeddings as inputs and
incorporates contextual embeddings to classify noisy short text. We collected a
new dataset of cyber security related tweets from Twitter and manually
annotated a subset of 2K of them. We experimented with this dataset and
concluded that the proposed model outperforms both traditional and neural
baselines. The results suggest that our method works well for detecting cyber
security events from noisy short text.
| 2,019 | Computation and Language |
A Variational Approach to Weakly Supervised Document-Level Multi-Aspect
Sentiment Classification | In this paper, we propose a variational approach to weakly supervised
document-level multi-aspect sentiment classification. Instead of using
user-generated ratings or annotations provided by domain experts, we use
target-opinion word pairs as "supervision." These word pairs can be extracted
by using dependency parsers and simple rules. Our objective is to predict an
opinion word given a target word while our ultimate goal is to learn a
sentiment polarity classifier to predict the sentiment polarity of each aspect
given a document. By introducing a latent variable, i.e., the sentiment
polarity, to the objective function, we can inject the sentiment polarity
classifier to the objective via the variational lower bound. We can learn a
sentiment polarity classifier by optimizing the lower bound. We show that our
method can outperform weakly supervised baselines on TripAdvisor and
BeerAdvocate datasets and can be comparable to the state-of-the-art supervised
method with hundreds of labels per aspect.
| 2,019 | Computation and Language |
From Semi-supervised to Almost-unsupervised Speech Recognition with
Very-low Resource by Jointly Learning Phonetic Structures from Audio and Text
Embeddings | Producing a large amount of annotated speech data for training ASR systems
remains difficult for more than 95% of languages all over the world which are
low-resourced. However, we note that human babies start to learn a language by the
sounds (or phonetic structures) of a small number of exemplar words, and
"generalize" such knowledge to other words without hearing a large amount of
data. We initiate some preliminary work in this direction. Audio Word2Vec is
used to learn the phonetic structures from spoken words (signal segments),
while another autoencoder is used to learn the phonetic structures from text
words. The relationships among the above two can be learned jointly, or
separately after the above two are well trained. This relationship can be used
in speech recognition with very low resource. In the initial experiments on the
TIMIT dataset, only 2.1 hours of speech data (in which 2500 spoken words were
annotated and the rest unlabeled) gave a word error rate of 44.6%, and this
number can be reduced to 34.2% if 4.1 hr of speech data (in which 20000 spoken
words were annotated) were given. These results are not satisfactory, but a
good starting point.
| 2,019 | Computation and Language |
Cross-lingual Visual Verb Sense Disambiguation | Recent work has shown that visual context improves cross-lingual sense
disambiguation for nouns. We extend this line of work to the more challenging
task of cross-lingual verb sense disambiguation, introducing the MultiSense
dataset of 9,504 images annotated with English, German, and Spanish verbs. Each
image in MultiSense is annotated with an English verb and its translation in
German or Spanish. We show that cross-lingual verb sense disambiguation models
benefit from visual context, compared to unimodal baselines. We also show that
the verb sense predicted by our best disambiguation model can improve the
results of a text-only machine translation system when used for a multimodal
translation task.
| 2,019 | Computation and Language |
NLPR@SRPOL at SemEval-2019 Task 6 and Task 5: Linguistically enhanced
deep learning offensive sentence classifier | The paper presents a system developed for the SemEval-2019 competition Task 5
HatEval Basile et al. (2019) (team name: LU Team) and Task 6 OffensEval
Zampieri et al. (2019b) (team name: NLPR@SRPOL), where we achieved 2nd position
in Subtask C. The system combines in an ensemble several models (LSTM,
Transformer, OpenAI's GPT, Random forest, SVM) with various embeddings (custom,
ELMo, fastText, Universal Encoder) together with additional linguistic features
(number of blacklisted words, special characters, etc.). The system works with
a multi-tier blacklist and a large corpus of crawled data, annotated for
general offensiveness. In the paper we do an extensive analysis of our results
and show how the combination of features and embeddings affects the performance
of the models.
| 2,019 | Computation and Language |
Simple BERT Models for Relation Extraction and Semantic Role Labeling | We present simple BERT-based models for relation extraction and semantic role
labeling. In recent years, state-of-the-art performance has been achieved using
neural models by incorporating lexical and syntactic features such as
part-of-speech tags and dependency trees. In this paper, extensive experiments
on datasets for these two tasks show that without using any external features,
a simple BERT-based model can achieve state-of-the-art performance. To our
knowledge, we are the first to successfully apply BERT in this manner. Our
models provide strong baselines for future research.
| 2,019 | Computation and Language |
Advances in Natural Language Question Answering: A Review | Question Answering has recently received high attention from artificial
intelligence communities due to the advancements in learning technologies.
Early question answering models used rule-based approaches and later moved to
statistical approaches to handle the vast amount of available information. However,
statistical approaches are shown to underperform in handling the dynamic nature
and the variation of language. Therefore, learning models have shown the
capability of handling the dynamic nature and variations in language. Many deep
learning methods have been introduced to question answering. Most of the deep
learning approaches have shown to achieve higher results compared to machine
learning and statistical methods. The dynamic nature of language has profited
from the nonlinear learning in deep learning. This has created prominent
success and a spike in work on question answering. This paper discusses the
successes and challenges in question answering systems and the
techniques that are used to address these challenges.
| 2,019 | Computation and Language |
CNM: An Interpretable Complex-valued Network for Matching | This paper seeks to model human language by the mathematical framework of
quantum physics. With the well-designed mathematical formulations in quantum
physics, this framework unifies different linguistic units in a single
complex-valued vector space, e.g. words as particles in quantum states and
sentences as mixed systems. A complex-valued network is built to implement this
framework for semantic matching. With well-constrained complex-valued
components, the network admits interpretations to explicit physical meanings.
The proposed complex-valued network for matching (CNM) achieves comparable
performances to strong CNN and RNN baselines on two benchmarking question
answering (QA) datasets.
| 2,019 | Computation and Language |
Deep Neural Networks Ensemble for Detecting Medication Mentions in
Tweets | Objective: After years of research, Twitter posts are now recognized as an
important source of patient-generated data, providing unique insights into
population health. A fundamental step to incorporating Twitter data in
pharmacoepidemiological research is to automatically recognize medication
mentions in tweets. Given that lexical searches for medication names may fail
due to misspellings or ambiguity with common words, we propose a more advanced
method to recognize them. Methods: We present Kusuri, an Ensemble Learning
classifier, able to identify tweets mentioning drug products and dietary
supplements. Kusuri ("medication" in Japanese) is composed of two modules.
First, four different classifiers (lexicon-based, spelling-variant-based,
pattern-based and one based on a weakly-trained neural network) are applied in
parallel to discover tweets potentially containing medication names. Second, an
ensemble of deep neural networks encoding morphological, semantic and
long-range dependencies of important words in the tweets discovered is used to
make the final decision. Results: On a balanced (50-50) corpus of 15,005
tweets, Kusuri demonstrated performances close to human annotators with 93.7%
F1-score, the best score achieved thus far on this corpus. On a corpus made of
all tweets posted by 113 Twitter users (98,959 tweets, with only 0.26%
mentioning medications), Kusuri obtained a 76.3% F1-score. To our knowledge, no prior
drug extraction system has been evaluated on such an extremely unbalanced
dataset. Conclusion: The system identifies tweets mentioning drug names with
performance high enough to ensure its usefulness and ready to be integrated in
larger natural language processing systems.
| 2,019 | Computation and Language |
ClinicalBERT: Modeling Clinical Notes and Predicting Hospital
Readmission | Clinical notes contain information about patients that goes beyond structured
data like lab values and medications. However, clinical notes have been
underused relative to structured data, because notes are high-dimensional and
sparse. This work develops and evaluates representations of clinical notes
using bidirectional transformers (ClinicalBERT). ClinicalBERT uncovers
high-quality relationships between medical concepts as judged by humans.
ClinicalBERT outperforms baselines on 30-day hospital readmission prediction
using both discharge summaries and the first few days of notes in the intensive
care unit. Code and model parameters are available.
| 2,020 | Computation and Language |
A Grounded Unsupervised Universal Part-of-Speech Tagger for Low-Resource
Languages | Unsupervised part of speech (POS) tagging is often framed as a clustering
problem, but practical taggers need to \textit{ground} their clusters as well.
Grounding generally requires reference labeled data, a luxury a low-resource
language might not have. In this work, we describe an approach for low-resource
unsupervised POS tagging that yields fully grounded output and requires no
labeled training data. We find the classic method of Brown et al. (1992)
clusters well in our use case and employ a decipherment-based approach to
grounding. This approach presumes a sequence of cluster IDs is a `ciphertext'
and seeks a POS tag-to-cluster ID mapping that will reveal the POS sequence. We
show intrinsically that, despite the difficulty of the task, we obtain
reasonable performance across a variety of languages. We also show
extrinsically that incorporating our POS tagger into a name tagger leads to
state-of-the-art tagging performance in Sinhalese and Kinyarwanda, two
languages with nearly no labeled POS data available. We further demonstrate our
tagger's utility by incorporating it into a true `zero-resource' variant of the
Malopa (Ammar et al., 2016) dependency parser model that removes the current
reliance on multilingual resources and gold POS tags for new languages.
Experiments show that including our tagger makes up much of the accuracy lost
when gold POS tags are unavailable.
| 2,019 | Computation and Language |
Event-based Access to Historical Italian War Memoirs | The progressive digitization of historical archives provides new, often
domain specific, textual resources that report on facts and events which have
happened in the past; among these, memoirs are a very common type of primary
source. In this paper, we present an approach for extracting information from
Italian historical war memoirs and turning it into structured knowledge. This
is based on the semantic notions of events, participants and roles. We evaluate
quantitatively each of the key-steps of our approach and provide a graph-based
representation of the extracted knowledge, which allows one to move between a Close
and a Distant Reading of the collection.
| 2,021 | Computation and Language |
Generating Animations from Screenplays | Automatically generating animation from natural language text finds
application in a number of areas e.g. movie script writing, instructional
videos, and public safety. However, translating natural language text into
animation is a challenging task. Existing text-to-animation systems can handle
only very simple sentences, which limits their applications. In this paper, we
develop a text-to-animation system which is capable of handling complex
sentences. We achieve this by introducing a text simplification step into the
process. Building on an existing animation generation system for screenwriting,
we create a robust NLP pipeline to extract information from screenplays and map
them to the system's knowledge base. We develop a set of linguistic
transformation rules that simplify complex sentences. Information extracted
from the simplified sentences is used to generate a rough storyboard and video
depicting the text. Our sentence simplification module outperforms existing
systems in terms of BLEU and SARI metrics. We further evaluated our system via a
user study: 68% of participants believe that our system generates reasonable
animation from input screenplays.
| 2,019 | Computation and Language |
Modeling Global Syntactic Variation in English Using Dialect
Classification | This paper evaluates global-scale dialect identification for 14 national
varieties of English as a means for studying syntactic variation. The paper
makes three main contributions: (i) introducing data-driven language mapping as
a method for selecting the inventory of national varieties to include in the
task; (ii) producing a large and dynamic set of syntactic features using
grammar induction rather than focusing on a few hand-selected features such as
function words; and (iii) comparing models across both web corpora and social
media corpora in order to measure the robustness of syntactic variation across
registers.
| 2,019 | Computation and Language |
Frequency vs. Association for Constraint Selection in Usage-Based
Construction Grammar | A usage-based Construction Grammar (CxG) posits that slot-constraints
generalize from common exemplar constructions. But what is the best model of
constraint generalization? This paper evaluates competing frequency-based and
association-based models across eight languages using a metric derived from the
Minimum Description Length paradigm. The experiments show that
association-based models produce better generalizations across all languages by
a significant margin.
| 2,019 | Computation and Language |
Scalable Cross-Lingual Transfer of Neural Sentence Embeddings | We develop and investigate several cross-lingual alignment approaches for
neural sentence embedding models, such as the supervised inference classifier,
InferSent, and sequential encoder-decoder models. We evaluate three alignment
frameworks applied to these models: joint modeling, representation transfer
learning, and sentence mapping, using parallel text to guide the alignment. Our
results support representation transfer as a scalable approach for modular
cross-lingual alignment of neural sentence embeddings, where we observe better
performance compared to joint models in intrinsic and extrinsic evaluations,
particularly with smaller sets of parallel data.
| 2,019 | Computation and Language |
FrameRank: A Text Processing Approach to Video Summarization | Video summarization has been extensively studied in the past decades.
However, user-generated video summarization is much less explored since there is
a lack of large-scale video datasets within which human-generated video summaries
are unambiguously defined and annotated. Toward this end, we propose a
user-generated video summarization dataset - UGSum52 - that consists of 52
videos (207 minutes). In constructing the dataset, because of the subjectivity
of user-generated video summarization, we manually annotate 25 summaries for
each video, which are in total 1300 summaries. To the best of our knowledge, it
is currently the largest dataset for user-generated video summarization.
Based on this dataset, we present FrameRank, an unsupervised video
summarization method that employs a frame-to-frame level affinity graph to
identify coherent and informative frames to summarize a video. We use the
Kullback-Leibler (KL) divergence-based graph to rank temporal segments according
to the amount of semantic information contained in their frames. We illustrate
the effectiveness of our method by applying it to three datasets SumMe, TVSum
and UGSum52 and show it achieves state-of-the-art results.
| 2,019 | Computation and Language |
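The FrameRank abstract above builds a frame-to-frame affinity graph from KL divergence and ranks frames by semantic informativeness. The sketch below is a hypothetical version of that ranking idea using symmetrized KL over per-frame histograms and total affinity as the score; the feature extraction, kernel, and exact ranking rule used in the paper are not reproduced.

import numpy as np

def sym_kl(p, q, eps=1e-12):
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def frame_scores(histograms):
    # histograms: (n_frames, n_bins), each row a normalized frame descriptor.
    n = len(histograms)
    affinity = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                affinity[i, j] = np.exp(-sym_kl(histograms[i], histograms[j]))
    return affinity.sum(axis=1)   # higher score = more representative frame

h = np.random.dirichlet(np.ones(16), size=30)   # 30 fake frame histograms
top_frames = np.argsort(-frame_scores(h))[:5]
print(top_frames)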
Searching News Articles Using an Event Knowledge Graph Leveraged by
Wikidata | News agencies produce thousands of multimedia stories describing events
happening in the world that are either scheduled such as sports competitions,
political summits and elections, or breaking events such as military conflicts,
terrorist attacks, natural disasters, etc. When writing up those stories,
journalists refer to contextual background and compare with similar past
events. However, searching for precise facts described in stories is hard. In
this paper, we propose a general method that leverages the Wikidata knowledge
base to produce semantic annotations of news articles. Next, we describe a
semantic search engine that supports both keyword based search in news articles
and structured data search providing filters for properties belonging to
specific event schemas that are automatically inferred.
| 2,019 | Computation and Language |
A high quality and phonetic balanced speech corpus for Vietnamese | This paper presents a high quality Vietnamese speech corpus that can be used
for analyzing Vietnamese speech characteristics as well as building speech
synthesis models. The corpus consists of 5400 clean-speech utterances spoken by
12 speakers including 6 males and 6 females. The corpus is designed with
phonetic balance in mind so that it can be used for speech synthesis,
especially, speech adaptation approaches. Specifically, all speakers utter a
common dataset containing 250 phonetically balanced sentences. To increase the
variety of speech contexts, each speaker also utters another 200 non-shared,
phonetically balanced sentences. The speakers are selected to cover a wide range of
ages and come from different regions of the North of Vietnam. The audio is
recorded in a soundproof studio room and sampled at 48 kHz, 16-bit PCM,
mono channel.
| 2,018 | Computation and Language |
Gating Mechanisms for Combining Character and Word-level Word
Representations: An Empirical Study | In this paper we study how different ways of combining character and
word-level representations affect the quality of both final word and sentence
representations. We provide strong empirical evidence that modeling characters
improves the learned representations at the word and sentence levels, and that
doing so is particularly useful when representing less frequent words. We
further show that a feature-wise sigmoid gating mechanism is a robust method
for creating representations that encode semantic similarity, as it performed
reasonably well in several word similarity datasets. Finally, our findings
suggest that properly capturing semantic similarity at the word level does not
consistently yield improved performance in downstream sentence-level tasks. Our
code is available at https://github.com/jabalazs/gating
| 2,019 | Computation and Language |
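The abstract above finds a feature-wise sigmoid gate to be a robust way of combining character- and word-level representations. The PyTorch sketch below shows one such gate; conditioning the gate on the concatenation of both vectors and assuming equal dimensions are illustrative choices, and the paper's exact gate formulation may differ.

import torch
import torch.nn as nn

class FeatureGate(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, word_vec, char_vec):
        # Per-dimension interpolation between word- and character-level vectors.
        g = torch.sigmoid(self.gate(torch.cat([word_vec, char_vec], dim=-1)))
        return g * word_vec + (1 - g) * char_vec

gate = FeatureGate(dim=300)
out = gate(torch.randn(8, 300), torch.randn(8, 300))   # batch of 8 tokens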