Titles | Abstracts | Years | Categories
---|---|---|---|
Associating Natural Language Comment and Source Code Entities | Comments are an integral part of software development; they are natural
language descriptions associated with source code elements. Understanding
explicit associations can be useful in improving code comprehensibility and
maintaining the consistency between code and comments. As an initial step
towards this larger goal, we address the task of associating entities in
Javadoc comments with elements in Java source code. We propose an approach for
automatically extracting supervised data using revision histories of open
source projects and present a manually annotated evaluation dataset for this
task. We develop a binary classifier and a sequence labeling model by crafting
a rich feature set which encompasses various aspects of code, comments, and the
relationships between them. Experiments show that our systems, learning from the proposed supervision, outperform several baselines.
| 2019 | Computation and Language |
An Unsupervised Domain-Independent Framework for Automated Detection of
Persuasion Tactics in Text | With the rapid growth of social media, people have come to rely heavily on the information shared there to form opinions and make decisions. While this reliance motivates a variety of parties to promote information, it also leaves people vulnerable to exploitation through slander, misinformation, and terroristic or predatory advances. In this work, we aim to
understand and detect such attempts at persuasion. Existing works on detecting
persuasion in text make use of lexical features for detecting persuasive
tactics, without taking advantage of the possible structures inherent in the
tactics used. We formulate the task as a multi-class classification problem and
propose an unsupervised, domain-independent machine learning framework for
detecting the type of persuasion used in text, which exploits the inherent
sentence structure present in the different persuasion tactics. Our approach shows promising results compared to existing work.
| 2019 | Computation and Language |
SemEval-2013 Task 2: Sentiment Analysis in Twitter | In recent years, sentiment analysis in social media has attracted a lot of
research interest and has been used for a number of applications.
Unfortunately, research has been hindered by the lack of suitable datasets,
complicating the comparison between approaches. To address this issue, we have
proposed SemEval-2013 Task 2: Sentiment Analysis in Twitter, which included two
subtasks: A, an expression-level subtask, and B, a message-level subtask. We
used crowdsourcing on Amazon Mechanical Turk to label a large Twitter training
dataset along with additional test sets of Twitter and SMS messages for both
subtasks. All datasets used in the evaluation are released to the research
community. The task attracted significant interest and a total of 149
submissions from 44 teams. The best-performing team achieved an F1 of 88.9% and
69% for subtasks A and B, respectively.
| 2013 | Computation and Language |
Proppy: A System to Unmask Propaganda in Online News | We present proppy, the first publicly available real-world, real-time
propaganda detection system for online news, which aims at raising awareness,
thus potentially limiting the impact of propaganda and helping fight
disinformation. The system constantly monitors a number of news sources,
deduplicates and clusters the news into events, and organizes the articles
about an event on the basis of the likelihood that they contain propagandistic
content. The system is trained on known propaganda sources using a variety of
stylistic features. Evaluation on a standard dataset shows state-of-the-art performance for propaganda detection.
| 2019 | Computation and Language |
LScDC-new large scientific dictionary | In this paper, we present a scientific corpus of abstracts of academic papers
in English -- Leicester Scientific Corpus (LSC). The LSC contains 1,673,824
abstracts of research articles and proceedings papers indexed by Web of Science (WoS) with a publication year of 2014. Each abstract is assigned to at least
one of 252 subject categories. Paper metadata include these categories and the
number of citations. We then develop scientific dictionaries named Leicester
Scientific Dictionary (LScD) and Leicester Scientific Dictionary-Core (LScDC),
where words are extracted from the LSC. The LScD is a list of 974,238 unique
words (lemmas). The LScDC is a core list (sub-list) of the LScD with 104,223
lemmas. It was created by removing LScD words that appear in no more than 10
texts in the LSC. LScD and LScDC are available online. Both the corpus and
dictionaries are developed to be later used for quantification of meaning in
academic texts.
Finally, the core list LScDC was analysed by comparing its words and word
frequencies with a classic academic word list 'New Academic Word List (NAWL)'
containing 963 word families, which is also sampled from an academic corpus.
The major sources of the corpus from which NAWL is extracted are the Cambridge English Corpus (CEC), oral sources, and textbooks. We investigate whether the two dictionaries are similar in terms of common words and word ranking. Our comparison leads to the main conclusion that most NAWL words (99.6%) are present in the LScDC, but the two lists differ in word ranking. This difference is measured.
| 2019 | Computation and Language |
Towards Robust Toxic Content Classification | Toxic content detection aims to identify content that can offend or harm its
recipients. Automated classifiers of toxic content need to be robust against
adversaries who deliberately try to bypass filters. We propose a method of
generating realistic model-agnostic attacks using a lexicon of toxic tokens,
which attempts to mislead toxicity classifiers by diluting the toxicity signal
either by obfuscating toxic tokens through character-level perturbations, or by
injecting non-toxic distractor tokens. We show that these realistic attacks
reduce the detection recall of state-of-the-art neural toxicity detectors,
including those using ELMo and BERT, by more than 50% in some cases. We explore
two approaches for defending against such attacks. First, we examine the effect
of training on synthetically noised data. Second, we propose the Contextual
Denoising Autoencoder (CDAE): a method for learning robust representations that
uses character-level and contextual information to denoise perturbed tokens. We
show that the two approaches are complementary, improving robustness to both
character-level perturbations and distractors, recovering a considerable
portion of the lost accuracy. Finally, we analyze the robustness
characteristics of the most competitive methods and outline practical
considerations for improving toxicity detectors.
| 2019 | Computation and Language |
Integrating Lexical Knowledge in Word Embeddings using Sprinkling and
Retrofitting | Neural network based word embeddings, such as Word2Vec and GloVe, are purely
data driven in that they capture the distributional information about words
from the training corpus. Past works have attempted to improve these embeddings
by incorporating semantic knowledge from lexical resources like WordNet. Some
techniques like retrofitting modify word embeddings in the post-processing
stage while some others use a joint learning approach by modifying the
objective function of neural networks. In this paper, we discuss two novel
approaches for incorporating semantic knowledge into word embeddings. In the
first approach, we take advantage of Levy et al.'s work, which showed that SVD-based methods on the co-occurrence matrix provide performance similar to that of neural network based embeddings. We propose a 'sprinkling' technique to add semantic
relations to the co-occurrence matrix directly before factorization. In the
second approach, WordNet similarity scores are used to improve the retrofitting
method. We evaluate the proposed methods in both intrinsic and extrinsic tasks
and observe significant improvements over the baselines in many of the
datasets.
| 2020 | Computation and Language |
Efficient Convolutional Neural Networks for Diacritic Restoration | Diacritic restoration has gained importance with the growing need for
machines to understand written texts. The task is typically modeled as a
sequence labeling problem and currently Bidirectional Long Short Term Memory
(BiLSTM) models provide state-of-the-art results. Recently, Bai et al. (2018) showed the advantages of Temporal Convolutional Neural Networks (TCN) over
Recurrent Neural Networks (RNN) for sequence modeling in terms of performance
and computational resources. As diacritic restoration benefits from both previous and subsequent timesteps, we further apply and evaluate a
variant of TCN, Acausal TCN (A-TCN), which incorporates context from both
directions (previous and future) rather than strictly incorporating previous
context as in the case of TCN. A-TCN yields significant improvement over TCN
for diacritization in three different languages: Arabic, Yoruba, and
Vietnamese. Furthermore, A-TCN and BiLSTM have comparable performance, making
A-TCN an efficient alternative to BiLSTM since convolutions can be trained in
parallel. A-TCN is significantly faster than BiLSTM at inference time
(270%-334% improvement in the amount of text diacritized per minute).
| 2019 | Computation and Language |
Long-length Legal Document Classification | One of the principal tasks of machine learning with major applications is
text classification. This paper focuses on the legal domain and, in particular,
on the classification of lengthy legal documents. The main challenge that this
study addresses is the limitation that current models impose on the length of
the input text. In addition, the present paper shows that dividing the text
into segments and later combining the resulting embeddings with a BiLSTM
architecture to form a single document embedding can improve results. These
advancements are achieved by utilising a simpler structure, rather than an
increasingly complex one, which is often the case in NLP research. The dataset
used in this paper is obtained from an online public database containing
lengthy legal documents with highly domain-specific vocabulary and thus, the
comparison of our results to the ones produced by models implemented on the
commonly used datasets would be unjustified. This work provides the foundation
for future work in document classification in the legal field.
| 2019 | Computation and Language |
#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo
Movement | In this paper, we present a dataset containing 9,973 tweets related to the
MeToo movement that were manually annotated for five different linguistic
aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present
a detailed account of the data collection and annotation processes. The
annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha)
due to the domain expertise of the annotators and clear annotation
instructions. We analyze the data in terms of geographical distribution, label
correlations, and keywords. Lastly, we present some potential use cases of this
dataset. We expect this dataset will be of great interest to psycholinguists, sociolinguists, and computational linguists studying the discursive space of
digitally mobilized social movements on sensitive issues like sexual
harassment.
| 2020 | Computation and Language |
Computational Induction of Prosodic Structure | The present study has two goals relating to the grammar of prosody,
understood as the rhythms and melodies of speech. First, an overview is
provided of the computable grammatical and phonetic approaches to prosody
analysis which use hypothetico-deductive methods and are based on learned
hermeneutic intuitions about language. Second, a proposal is presented for an
inductive grounding in the physical signal, in which prosodic structure is
inferred using a language-independent method from the low-frequency spectrum of
the speech signal. The overview includes a discussion of computational aspects
of standard generative and post-generative models, and suggestions for
reformulating these to form inductive approaches. Also included is a discussion
of linguistic phonetic approaches to analysis of annotations (pairs of speech
unit labels with time-stamps) of recorded spoken utterances. The proposal
introduces the inductive approach of Rhythm Formant Theory (RFT) and the associated Rhythm Formant Analysis (RFA) method, with the aim of filling a gap in the linguistic hypothetico-deductive cycle by grounding it in
a language-independent inductive procedure of speech signal analysis. The method is validated and applied to rhythm patterns in
read-aloud Mandarin Chinese, finding differences from English which are related
to lexical and grammatical differences between the languages, as well as
individual variation. The overall conclusions are (1) that normative
language-to-language phonological or phonetic comparisons of rhythm, for
example of Mandarin and English, are too simplistic, in view of diverse
language-internal factors due to genre and style differences as well as
utterance dynamics, and (2) that language-independent empirical grounding of
rhythm in the physical signal is called for.
| 2019 | Computation and Language |
Multilingual is not enough: BERT for Finnish | Deep learning-based language models pretrained on large unannotated text
corpora have been demonstrated to allow efficient transfer learning for natural
language processing, with recent approaches such as the transformer-based BERT
model advancing the state of the art across a variety of tasks. While most work
on these models has focused on high-resource languages, in particular English,
a number of recent efforts have introduced multilingual models that can be
fine-tuned to address tasks in a large number of different languages. However,
we still lack a thorough understanding of the capabilities of these models, in
particular for lower-resourced languages. In this paper, we focus on Finnish
and thoroughly evaluate the multilingual BERT model on a range of tasks,
comparing it with a new Finnish BERT model trained from scratch. The new
language-specific model is shown to systematically and clearly outperform the multilingual one. While the multilingual model largely fails to reach the
performance of previously proposed methods, the custom Finnish BERT model
establishes new state-of-the-art results on all corpora for all reference
tasks: part-of-speech tagging, named entity recognition, and dependency
parsing. We release the model and all related resources created for this study
with open licenses at https://turkunlp.org/finbert .
| 2019 | Computation and Language |
Robust Named Entity Recognition with Truecasing Pretraining | Although modern named entity recognition (NER) systems show impressive
performance on standard datasets, they perform poorly when presented with noisy
data. In particular, capitalization is a strong signal for entities in many
languages, and even state of the art models overfit to this feature, with
drastically lower performance on uncapitalized text. In this work, we address
the problem of robustness of NER systems in data with noisy or uncertain
casing, using a pretraining objective that predicts casing in text, or a
truecaser, leveraging unlabeled data. The pretrained truecaser is combined with
a standard BiLSTM-CRF model for NER by appending output distributions to
character embeddings. In experiments over several datasets of varying domain
and casing quality, we show that our new model improves performance in uncased
text, even adding value to uncased BERT embeddings. Our method achieves a new
state of the art on the WNUT17 shared task dataset.
| 2019 | Computation and Language |
Characterizing the dynamics of learning in repeated reference games | The language we use over the course of conversation changes as we establish
common ground and learn what our partner finds meaningful. Here we draw upon
recent advances in natural language processing to provide a finer-grained
characterization of the dynamics of this learning process. We release an open
corpus (>15,000 utterances) of extended dyadic interactions in a classic
repeated reference game task where pairs of participants had to coordinate on
how to refer to initially difficult-to-describe tangram stimuli. We find that
different pairs discover a wide variety of idiosyncratic but efficient and
stable solutions to the problem of reference. Furthermore, these conventions
are shaped by the communicative context: words that are more discriminative in
the initial context (i.e. that are used for one target more than others) are
more likely to persist through the final repetition. Finally, we find
systematic structure in how a speaker's referring expressions become more
efficient over time: syntactic units drop out in clusters following positive
feedback from the listener, eventually leaving short labels containing
open-class parts of speech. These findings provide a higher resolution look at
the quantitative dynamics of ad hoc convention formation and support further
development of computational models of learning in communication.
| 2020 | Computation and Language |
Graph-based Neural Sentence Ordering | Sentence ordering is the task of restoring the original paragraph from a set of
sentences. It involves capturing global dependencies among sentences regardless
of their input order. In this paper, we propose a novel and flexible
graph-based neural sentence ordering model, which adopts graph recurrent
network \cite{Zhang:acl18} to accurately learn semantic representations of the
sentences. Instead of assuming connections between all pairs of input
sentences, we use entities that are shared among multiple sentences to make
more expressive graph representations with less noise. Experimental results
show that our proposed model outperforms the existing state-of-the-art systems
on several benchmark datasets, demonstrating the effectiveness of our model. We
also conduct a thorough analysis on how entities help the performance.
| 2019 | Computation and Language |
Iterative Dual Domain Adaptation for Neural Machine Translation | Previous studies on the domain adaptation for neural machine translation
(NMT) mainly focus on one-pass transfer of out-of-domain translation knowledge to the in-domain NMT model. In this paper, we argue that such a strategy
fails to fully extract the domain-shared translation knowledge, and repeatedly
utilizing corpora of different domains can lead to better distillation of
domain-shared translation knowledge. To this end, we propose an iterative dual
domain adaptation framework for NMT. Specifically, we first pre-train in-domain
and out-of-domain NMT models using their own training corpora respectively, and
then iteratively perform bidirectional translation knowledge transfer (from
in-domain to out-of-domain and then vice versa) based on knowledge distillation
until the in-domain NMT model converges. Furthermore, we extend the proposed
framework to the scenario of multiple out-of-domain training corpora, where the
above-mentioned transfer is performed sequentially between the in-domain model and each out-of-domain NMT model in ascending order of their domain
similarities. Empirical results on Chinese-English and English-German
translation tasks demonstrate the effectiveness of our framework.
| 2019 | Computation and Language |
Synchronous Speech Recognition and Speech-to-Text Translation with
Interactive Decoding | Speech-to-text translation (ST), which translates source language speech into
target language text, has attracted intensive attention in recent years.
Compared to the traditional pipeline system, the end-to-end ST model has
potential benefits of lower latency, smaller model size, and less error
propagation. However, it is notoriously difficult to implement such a model without transcriptions as an intermediate step. Existing works generally apply
multi-task learning to improve translation quality by jointly training
end-to-end ST along with automatic speech recognition (ASR). However, different
tasks in this method cannot utilize information from each other, which limits
the improvement. Other works propose a two-stage model in which the second model can use the hidden state from the first, but its cascaded manner greatly reduces the efficiency of training and inference. In this paper, we
propose a novel interactive attention mechanism which enables ASR and ST to
perform synchronously and interactively in a single model. Specifically, the
generation of transcriptions and translations relies not only on its previous outputs but also on the outputs predicted in the other task. Experiments on TED
speech translation corpora have shown that our proposed model can outperform
strong baselines on the quality of speech translation and achieve better speech recognition performance as well.
| 2019 | Computation and Language |
Optimized Tracking of Topic Evolution | Topic evolution modeling has been researched for a long time and has gained
considerable interest. A recent state-of-the-art method uses word modeling algorithms in combination with community detection mechanisms to achieve better results more effectively. We analyse the results of this
approach and discuss the two major challenges that this approach still faces.
Although the topics produced by the recent algorithm are good in general, they are very noisy, as many topics are unimportant because of their size, words, or ambiguity. Additionally, the number of words
defining each topic is too large, making it difficult to analyse them in their
unsorted state. In this paper, we propose approaches to tackle these challenges
by adding topic filtering and network analysis metrics to define the importance
of a topic. We test different combinations of these metrics to see which
combination yields the best results. Furthermore, we add word filtering and
ranking to each topic to identify the words with the highest novelty
automatically. We evaluate our enhancement methods in two ways: human
qualitative evaluation and automatic quantitative evaluation. Moreover, we
created two case studies to test the quality of the clusters and words. In the
quantitative evaluation, we use the pairwise mutual information score to test
the coherency of topics. The quantitative evaluation also includes an analysis
of execution times for each part of the program. The results of the
experimental evaluations show that the two evaluation methods agree on the
positive feasibility of the algorithm. We then show possible extensions in the
form of usability and future improvements to the algorithm.
| 2019 | Computation and Language |
Semantic Similarity To Improve Question Understanding in a Virtual
Patient | In medicine, a communicating virtual patient or doctor allows students to
train in medical diagnosis and develop skills to conduct a medical
consultation. In this paper, we describe a conversational virtual standardized
patient system to allow medical students to simulate a diagnosis strategy of an
abdominal surgical emergency. We exploited the semantic properties captured by
distributed word representations to search for similar questions in the virtual
patient dialogue system. We created two dialogue systems that were evaluated on
datasets collected during tests with students. The first system, based on hand-crafted rules, obtains an $F1$-score of $92.29\%$ on the studied clinical case, while the second system, which combines rules and semantic similarity, achieves $94.88\%$. This represents an error reduction of $9.70\%$ compared to the rules-only system.
| 2019 | Computation and Language |
Improving Knowledge-aware Dialogue Generation via Knowledge Base
Question Answering | Neural network models usually struggle to incorporate commonsense knowledge into open-domain dialogue systems. In this paper, we
propose a novel knowledge-aware dialogue generation model (called TransDG),
which transfers question representation and knowledge matching abilities from
knowledge base question answering (KBQA) task to facilitate the utterance
understanding and factual knowledge selection for dialogue generation. In
addition, we propose a response guiding attention and a multi-step decoding
strategy to steer our model to focus on relevant features for response
generation. Experiments on two benchmark datasets demonstrate that our model
has robust superiority over compared methods in generating informative and
fluent dialogues. Our code is available at https://github.com/siat-nlp/TransDG.
| 2019 | Computation and Language |
Scale-dependent Relationships in Natural Language | Natural language exhibits statistical dependencies at a wide range of scales.
For instance, the mutual information between words in natural language decays
like a power law with the temporal lag between them. However, many statistical
learning models applied to language impose a sampling scale while extracting
statistical structure. For instance, Word2Vec constructs a vector embedding
that maximizes the prediction between a target word and the context words that
appear nearby in the corpus. The size of the context is chosen by the user and
defines a strong scale; relationships over much larger temporal scales would be
invisible to the algorithm. This paper examines the family of Word2Vec
embeddings generated while systematically manipulating the sampling scale used
to define the context around each word. The primary result is that different
linguistic relationships are preferentially encoded at different scales.
Different scales emphasize different syntactic and semantic relations between
words. Moreover, the neighborhoods of a given word in the embeddings change
significantly depending on the scale. These results suggest that any individual
scale can only identify a subset of the meaningful relationships a word might
have, and point toward the importance of developing scale-free models of
semantic meaning.
| 2019 | Computation and Language |
Predicting detection filters for small footprint open-vocabulary keyword
spotting | In this paper, we propose a fully-neural approach to open-vocabulary keyword
spotting that allows users to add a customizable voice interface to their device and that does not require task-specific data. We present a keyword detection neural network weighing less than 250KB, in which the topmost layer performing keyword detection is predicted by an auxiliary network that may be
run offline to generate a detector for any keyword. We show that the proposed
model outperforms acoustic keyword spotting baselines by a large margin on two
tasks of detecting keywords in utterances and three tasks of detecting isolated
speech commands. We also propose a method to fine-tune the model when specific
training data is available for some keywords, which yields a performance
similar to a standard speech command neural network while keeping the ability
of the model to be applied to new keywords.
| 2020 | Computation and Language |
Cross-Lingual Ability of Multilingual BERT: An Empirical Study | Recent work has exhibited the surprising cross-lingual abilities of
multilingual BERT (M-BERT) -- surprising since it is trained without any
cross-lingual objective and with no aligned data. In this work, we provide a
comprehensive study of the contribution of different components in M-BERT to
its cross-lingual ability. We study the impact of linguistic properties of the
languages, the architecture of the model, and the learning objectives. The
experimental study is done in the context of three typologically different
languages -- Spanish, Hindi, and Russian -- and using two conceptually
different NLP tasks, textual entailment and named entity recognition. Among our
key conclusions is the fact that the lexical overlap between languages plays a
negligible role in the cross-lingual success, while the depth of the network is
an integral part of it. All our models and implementations can be found on our
project page: http://cogcomp.org/page/publication_view/900 .
| 2020 | Computation and Language |
Libri-Light: A Benchmark for ASR with Limited or No Supervision | We introduce a new collection of spoken English audio suitable for training
speech recognition systems under limited or no supervision. It is derived from
open-source audio books from the LibriVox project. It contains over 60K hours
of audio, which is, to our knowledge, the largest freely-available corpus of
speech. The audio has been segmented using voice activity detection and is
tagged with SNR, speaker ID and genre descriptions. Additionally, we provide
baseline systems and evaluation metrics working under three settings: (1) the
zero resource/unsupervised setting (ABX), (2) the semi-supervised setting (PER,
CER) and (3) the distant supervision setting (WER). Settings (2) and (3) use
limited textual resources (10 minutes to 10 hours) aligned with the speech.
Setting (3) uses large amounts of unaligned text. They are evaluated on the
standard LibriSpeech dev and test sets for comparison with the supervised
state-of-the-art.
| 2020 | Computation and Language |
To What Extent are Name Variants Used as Named Entities in Turkish
Tweets? | Social media texts differ from regular texts in various aspects. One of the
main differences is the common use of informal name variants instead of
well-formed named entities in social media compared to regular texts. These
name variants may come in the form of abbreviations, nicknames, contractions,
and hypocoristic uses, in addition to names distorted due to capitalization and
writing errors. In this paper, we present an analysis of the named entities in
a publicly-available tweet dataset in Turkish with respect to their being name
variants belonging to different categories. We also provide finer-grained
annotations of the named entities as well-formed names and different categories
of name variants; these annotations are made publicly available. The
analysis presented and the accompanying annotations will contribute to related
research on the treatment of named entities in social media.
| 2019 | Computation and Language |
A Multi-task Learning Model for Chinese-oriented Aspect Polarity
Classification and Aspect Term Extraction | The aspect-based sentiment analysis (ABSA) task is a multi-grained task of natural language processing consisting of two subtasks: aspect term extraction (ATE) and aspect polarity classification (APC). Most of the existing work focuses on the subtask of aspect polarity inference and ignores the significance of aspect term extraction. Moreover, existing research pays little attention to the Chinese-oriented ABSA task. Based on the local context focus (LCF) mechanism, this paper proposes a multi-task learning model for Chinese-oriented aspect-based sentiment analysis, namely LCF-ATEPC. Compared with existing models, this model is capable of extracting aspect terms and inferring aspect polarity synchronously; moreover, it can analyze both Chinese and English comments simultaneously, and an experiment on a multilingual mixed dataset demonstrates its effectiveness. By integrating the domain-adapted BERT model, the LCF-ATEPC model achieves state-of-the-art performance on aspect term extraction and aspect polarity classification on four Chinese review datasets. In addition, the experimental results on the most commonly used SemEval-2014 Task 4 Restaurant and Laptop datasets surpass the state-of-the-art performance on the ATE and APC subtasks.
| 2020 | Computation and Language |
Application of Word2vec in Phoneme Recognition | In this paper, we present how to hybridize a Word2vec model and an
attention-based end-to-end speech recognition model. We build a phoneme recognition system based on the Listen, Attend and Spell model. The phoneme recognition model uses a word2vec model to initialize the embedding matrix, which improves performance by increasing the distance among the phoneme vectors. At the same time, to address overfitting of the 61-phoneme recognition model on the TIMIT dataset, we propose a new training method. A 61-to-39 phoneme mapping table is used to inversely map the phonemes of the dataset and generate additional 61-phoneme training data. At the end of training, the dataset is replaced with a standard dataset for corrective training. Our model achieves the best result on the TIMIT dataset, a phoneme error rate (PER) of 16.5%.
| 2019 | Computation and Language |
A Context-Aware Approach for Detecting Check-Worthy Claims in Political
Debates | In the context of investigative journalism, we address the problem of
automatically identifying which claims in a given document are most worthy and
should be prioritized for fact-checking. Despite its importance, this is a
relatively understudied problem. Thus, we create a new dataset of political
debates, containing statements that have been fact-checked by nine reputable
sources, and we train machine learning models to predict which claims should be
prioritized for fact-checking, i.e., we model the problem as a ranking task.
Unlike previous work, which has looked primarily at sentences in isolation, in
this paper we focus on a rich input representation modeling the context:
relationship between the target statement and the larger context of the debate,
interaction between the opponents, and reaction by the moderator and by the
public. Our experiments show state-of-the-art results, outperforming a strong
rivaling system by a margin, while also confirming the importance of the
contextual information.
| 2017 | Computation and Language |
Open Set Authorship Attribution toward Demystifying Victorian
Periodicals | Existing research in computational authorship attribution (AA) has primarily
focused on attribution tasks with a limited number of authors in a closed-set
configuration. This restricted set-up is far from being realistic in dealing
with highly entangled real-world AA tasks that involve a large number of
candidate authors for attribution during test time. In this paper, we study AA
in historical texts using a new dataset compiled from Victorian literature. We investigate the predictive capacity of the most common English words in distinguishing the writings of the most prominent Victorian novelists. We challenged
the closed-set classification assumption and discussed the limitations of
standard machine learning techniques in dealing with the open set AA task. Our
experiments suggest that a linear classifier can achieve near perfect
attribution accuracy under the closed-set assumption, yet the need for more robust
approaches becomes evident once a large candidate pool has to be considered in
the open-set classification setting.
| 2019 | Computation and Language |
Chinese Named Entity Recognition Augmented with Lexicon Memory | Inspired by a concept of content-addressable retrieval from cognitive
science, we propose a novel fragment-based model augmented with a lexicon-based
memory for Chinese NER, in which both the character-level and word-level
features are combined to generate better feature representations for possible
name candidates. It is observed that locating the boundary information of
entity names is useful in order to classify them into pre-defined categories.
Position-dependent features, including prefixes and suffixes, are introduced for NER in the form of distributed representations. The lexicon-based memory is used to
help generate such position-dependent features and deal with the problem of
out-of-vocabulary words. Experimental results show that the proposed model, called LEMON, achieves state-of-the-art performance on four datasets.
| 2020 | Computation and Language |
The performance evaluation of Multi-representation in the Deep Learning
models for Relation Extraction Task | Implementing, concatenating, adding, or replacing representations individually has yielded significant improvements on many NLP tasks, particularly in relation extraction, where static, contextualized, and other representations are capable of capturing word meanings through the linguistic features they incorporate. This work addresses the question of how relation extraction is improved by using different types of representations generated by pretrained language representation models. We benchmarked our approach using popular word representation models, replacing and concatenating static, contextualized, and other representations of hand-extracted features. The experiments show that the choice of representation is a crucial element when a deep learning approach is applied. Word embeddings from Flair and BERT can be well interpreted by a deep learning model for the RE task, and replacing static word embeddings with contextualized word representations can lead to significant improvements, while hand-created representations are time-consuming to produce and do not guarantee an improvement when combined with other representations.
| 2019 | Computation and Language |
DMRM: A Dual-channel Multi-hop Reasoning Model for Visual Dialog | Visual Dialog is a vision-language task that requires an AI agent to engage
in a conversation with humans grounded in an image. It remains a challenging
task since it requires the agent to fully understand a given question before
making an appropriate response not only from the textual dialog history, but
also from the visually-grounded information. Previous models typically leverage single-hop reasoning or single-channel reasoning to deal with this complex multimodal reasoning task, which is intuitively insufficient. In this
paper, we thus propose a novel and more powerful Dual-channel Multi-hop
Reasoning Model for Visual Dialog, named DMRM. DMRM synchronously captures
information from the dialog history and the image to enrich the semantic
representation of the question by exploiting dual-channel reasoning.
Specifically, DMRM maintains a dual channel to obtain the question- and
history-aware image features and the question- and image-aware dialog history
features by a multi-hop reasoning process in each channel. Additionally, we
also design an effective multimodal attention to further enhance the decoder to
generate more accurate responses. Experimental results on the VisDial v0.9 and
v1.0 datasets demonstrate that the proposed model is effective and outperforms
compared models by a significant margin.
| 2019 | Computation and Language |
Multi-channel Reverse Dictionary Model | A reverse dictionary takes the description of a target word as input and
outputs the target word together with other words that match the description.
Existing reverse dictionary methods cannot deal with highly variable input
queries and low-frequency target words successfully. Inspired by the
description-to-word inference process of humans, we propose the multi-channel
reverse dictionary model, which can mitigate the two problems simultaneously.
Our model comprises a sentence encoder and multiple predictors. The predictors
are expected to identify different characteristics of the target word from the
input query. We evaluate our model on English and Chinese datasets including
both dictionary definitions and human-written descriptions. Experimental
results show that our model achieves the state-of-the-art performance, and even
outperforms the most popular commercial reverse dictionary system on the
human-written description dataset. We also conduct quantitative analyses and a
case study to demonstrate the effectiveness and robustness of our model. All
the code and data of this work can be obtained on
https://github.com/thunlp/MultiRD.
| 2019 | Computation and Language |
MALA: Cross-Domain Dialogue Generation with Action Learning | Response generation for task-oriented dialogues involves two basic
components: dialogue planning and surface realization. These two components,
however, have a discrepancy in their objectives, i.e., task completion and
language quality. To deal with such discrepancy, conditioned response
generation has been introduced where the generation process is factorized into
action decision and language generation via explicit action representations. To
obtain action representations, recent studies learn latent actions in an
unsupervised manner based on utterance lexical similarity. Such an action learning approach is sensitive to the diversity of language surfaces, which may impair task completion and language quality. To address this issue, we propose
multi-stage adaptive latent action learning (MALA) that learns semantic latent
actions by distinguishing the effects of utterances on dialogue progress. We
model the utterance effect using the transition of dialogue states caused by
the utterance and develop a semantic similarity measurement that estimates
whether utterances have similar effects. For learning semantic actions on
domains without dialogue states, MALA extends the semantic similarity
measurement across domains progressively, i.e., from aligning shared actions to
learning domain-specific actions. Experiments using multi-domain datasets, SMD
and MultiWOZ, show that our proposed model achieves consistent improvements
over the baseline models in terms of both task completion and language
quality.
| 2020 | Computation and Language |
Generating summaries tailored to target characteristics | Recently, research efforts have gained pace to cater to varied user
preferences while generating text summaries. While there have been attempts to
incorporate a few handpicked characteristics such as length or entities, a
holistic view around these preferences is missing and crucial insights on why
certain characteristics should be incorporated in a specific manner are absent.
With this objective, we provide a categorization around these characteristics
relevant to the task of text summarization: the first focusing on what content needs to be generated and the second focusing on the stylistic aspects of the output summaries. We use our insights to provide guidelines on appropriate methods for incorporating various classes of characteristics in a sequence-to-sequence summarization framework. Our experiments with incorporating topics, readability and simplicity indicate the viability of the proposed prescriptions.
| 2019 | Computation and Language |
A Survey on Document-level Neural Machine Translation: Methods and
Evaluation | Machine translation (MT) is an important task in natural language processing
(NLP) as it automates the translation process and reduces the reliance on human
translators. With the resurgence of neural networks, the translation quality
surpasses that of the translations obtained using statistical techniques for
most language-pairs. Up until a few years ago, almost all of the neural
translation models translated sentences independently, without incorporating
the wider document-context and inter-dependencies among the sentences. The aim
of this survey paper is to highlight the major works that have been undertaken
in the space of document-level machine translation after the neural revolution,
so that researchers can recognise the current state and future directions of
this field. We provide an organisation of the literature based on novelties in
modelling and architectures as well as training and decoding strategies. In
addition, we cover evaluation strategies that have been introduced to account
for the improvements in document MT, including automatic metrics and
discourse-targeted test sets. We conclude by presenting possible avenues for
future exploration in this research field.
| 2021 | Computation and Language |
Towards an automatic recognition of mixed languages: The
Ukrainian-Russian hybrid language Surzhyk | Language interference is common in today's multilingual societies where more
languages are in contact, which ultimately leads to the creation of hybrid languages. These hybrid languages, together with doubts about their right to be officially recognised, have raised in computational linguistics the problem of their automatic identification and further processing. In this paper, we propose a first attempt to identify the elements of a Ukrainian-Russian hybrid language, Surzhyk, through example-based rules created with the programming language R. Our example-based study consists of: 1) analysis of spoken samples of Surzhyk recorded by Del Gaudio (2010) in the Kyiv area and creation of a written corpus; 2) production and implementation of specific rules for the identification of Surzhyk patterns; and 3) testing the code and analysing its effectiveness.
| 2019 | Computation and Language |
PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive
Summarization | Recent work pre-training Transformers with self-supervised objectives on
large text corpora has shown great success when fine-tuned on downstream NLP
tasks including text summarization. However, pre-training objectives tailored
for abstractive text summarization have not been explored. Furthermore, there is
a lack of systematic evaluation across diverse domains. In this work, we
propose pre-training large Transformer-based encoder-decoder models on massive
text corpora with a new self-supervised objective. In PEGASUS, important
sentences are removed/masked from an input document and are generated together
as one output sequence from the remaining sentences, similar to an extractive
summary. We evaluated our best PEGASUS model on 12 downstream summarization
tasks spanning news, science, stories, instructions, emails, patents, and
legislative bills. Experiments demonstrate it achieves state-of-the-art
performance on all 12 downstream datasets measured by ROUGE scores. Our model
also shows surprising performance on low-resource summarization, surpassing
previous state-of-the-art results on 6 datasets with only 1000 examples.
Finally we validated our results using human evaluation and show that our model
summaries achieve human performance on multiple datasets.
| 2020 | Computation and Language |
Going Beneath the Surface: Evaluating Image Captioning for
Grammaticality, Truthfulness and Diversity | Image captioning as a multimodal task has drawn much interest in recent
years. However, evaluation for this task remains a challenging problem.
Existing evaluation metrics focus on surface similarity between a candidate
caption and a set of reference captions, and do not check the actual relation
between a caption and the underlying visual content. We introduce a new
diagnostic evaluation framework for the task of image captioning, with the goal
of directly assessing models for grammaticality, truthfulness and diversity
(GTD) of generated captions. We demonstrate the potential of our evaluation
framework by evaluating existing image captioning models on a wide-ranging set
of synthetic datasets that we construct for diagnostic evaluation. We
empirically show how the GTD evaluation framework, in combination with
diagnostic datasets, can provide insights into model capabilities and
limitations to supplement standard evaluations.
| 2019 | Computation and Language |
Identifying Adversarial Sentences by Analyzing Text Complexity | Attackers create adversarial text to deceive both human perception and the
current AI systems for malicious purposes such as spam product reviews and fake political posts. We investigate the differences between adversarial and original text to mitigate this risk. We show that text written by a human is more coherent and fluent. Moreover, a human can express an idea through flexible text with modern words, while a machine focuses on optimizing the generated text using simple and common words. We also propose a method to identify adversarial text by extracting features related to these findings. The proposed method achieves high performance, with 82.0% accuracy and an 18.4% equal error rate, which is better than existing methods, whose best accuracy is 77.0% with a corresponding error rate of 22.8%.
| 2019 | Computation and Language |
Discriminative Sentence Modeling for Story Ending Prediction | Story Ending Prediction is the task of selecting an appropriate ending for a given story, which requires the machine to understand the story and sometimes calls for commonsense knowledge. To tackle this task, we propose a new
neural network called Diff-Net for better modeling the differences of each
ending in this task. The proposed model could discriminate two endings in three
semantic levels: contextual representation, story-aware representation, and
discriminative representation. Experimental results on the Story Cloze Test
dataset show that the proposed model significantly outperforms various systems
by a large margin, and detailed ablation studies are given for better
understanding our model. We also carefully examine the traditional and
BERT-based models on both SCT v1.0 and v1.5 with interesting findings that may
potentially help future studies.
| 2020 | Computation and Language |
Neural Simile Recognition with Cyclic Multitask Learning and Local
Attention | Simile recognition aims to detect simile sentences and to extract simile components, i.e., tenors and vehicles. It involves two subtasks: {\it simile
sentence classification} and {\it simile component extraction}. Recent work has
shown that standard multitask learning is effective for Chinese simile
recognition, but it is still uncertain whether the mutual effects between the
subtasks have been well captured by simple parameter sharing. We propose a
novel cyclic multitask learning framework for neural simile recognition, which
stacks the subtasks and makes them into a loop by connecting the last to the
first. It iteratively performs each subtask, taking the outputs of the previous
subtask as additional inputs to the current one, so that the interdependence
between the subtasks can be better explored. Extensive experiments show that
our framework significantly outperforms the current state-of-the-art model and
our carefully designed baselines, and the gains are still remarkable using
BERT.
| 2019 | Computation and Language |
Annotating and normalizing biomedical NEs with limited knowledge | Named entity recognition (NER) is the very first step in the linguistic
processing of any new domain. It is currently a common process in BioNLP on
English clinical text. However, it is still in its infancy in other major
languages, as it is the case for Spanish. Presented under the umbrella of the
PharmaCoNER shared task, this paper describes a very simple method for the
annotation and normalization of pharmacological, chemical and, ultimately,
biomedical named entities in clinical cases. The system developed for the
shared task is based on limited knowledge, collected, structured and munged in
a way that clearly outperforms scores obtained by similar dictionary-based
systems for English in the past. Along with this revival of knowledge-based methods for NER in subdomains, the paper also highlights the
key contribution of resource-based systems in the validation and consolidation
of both the annotation guidelines and the human annotation practices. In this
sense, some of the authors' findings on the overall quality of human
annotated datasets question the above-mentioned `official' results obtained by
this system, that ranked second (0.91 F1-score) and first (0.916 F1-score),
respectively, in the two PharmaCoNER subtasks.
| 2019 | Computation and Language |
CJRC: A Reliable Human-Annotated Benchmark DataSet for Chinese Judicial
Reading Comprehension | We present a Chinese judicial reading comprehension (CJRC) dataset which
contains approximately 10K documents and almost 50K questions with answers. The
documents come from judgment documents and the questions are annotated by law
experts. The CJRC dataset can help researchers extract elements using reading comprehension technology. Element extraction is an important task in the legal
field. However, it is difficult to predefine the element types completely due
to the diversity of document types and causes of action. By contrast, machine
reading comprehension technology can quickly extract elements by answering
various questions from the long document. We build two strong baseline models
based on BERT and BiDAF. The experimental results show that there is ample room for improvement compared to human annotators.
| 2019 | Computation and Language |
Summary and Distance between Sets of Texts based on Topological Data
Analysis | In this paper, we use topological data analysis (TDA) tools such as
persistent homology, persistent entropy and bottleneck distance, to provide a
{\it TDA-based summary} of any given set of texts and a general method for
computing a distance between any two literary styles, authors or periods. To
this aim, deep-learning word-embedding techniques are combined with these tools
in order to study the topological properties of texts embedded in a metric
space. As a case of study, we use the written texts of three poets of the
Spanish Golden Age: Francisco de Quevedo, Luis de G\'ongora and Lope de Vega.
As far as we know, this is the first time that word embedding, bottleneck
distance, persistent homology and persistent entropy are used together to
characterize texts and to compare different literary styles.
| 2022 | Computation and Language |
Generating Synthetic Audio Data for Attention-Based Speech Recognition
Systems | Recent advances in text-to-speech (TTS) led to the development of flexible
multi-speaker end-to-end TTS systems. We extend state-of-the-art
attention-based automatic speech recognition (ASR) systems with synthetic audio
generated by a TTS system trained only on the ASR corpora themselves. ASR and TTS
systems are built separately to show that text-only data can be used to enhance
existing end-to-end ASR systems without the necessity of parameter or
architecture changes. We compare our method with language model integration of
the same text data and with simple data augmentation methods like SpecAugment
and show that performance improvements are mostly independent. We achieve
improvements of up to 33% relative in word-error-rate (WER) over a strong
baseline with data-augmentation in a low-resource environment
(LibriSpeech-100h), closing the gap to a comparable oracle experiment by more
than 50%. We also show improvements of up to 5% relative WER over our most
recent ASR baseline on LibriSpeech-960h.
| 2020 | Computation and Language |
An End-to-End Dialogue State Tracking System with Machine Reading
Comprehension and Wide & Deep Classification | This paper describes our approach in DSTC 8 Track 4: Schema-Guided Dialogue
State Tracking. The goal of this task is to predict the intents and slots in
each user turn to complete the dialogue state tracking (DST) based on the
information provided by the task's schema. Different from traditional
stage-wise DST, we propose an end-to-end DST system to avoid error accumulation
between the dialogue turns. The DST system consists of a machine reading
comprehension (MRC) model for non-categorical slots and a Wide & Deep model for
categorical slots. As far as we know, this is the first time that MRC and Wide & Deep models have been applied to the DST problem in a fully end-to-end way. Experimental
results show that our framework achieves an excellent performance on the test
dataset including 50% zero-shot services with a joint goal accuracy of 0.8652
and a slot tagging F1-Score of 0.9835.
| 2020 | Computation and Language |
RIMAX: Ranking Semantic Rhymes by calculating Definition Similarity | This paper presents RIMAX, a new system for detecting semantic rhymes, using
a Comprehensive Mexican Spanish Dictionary (DEM) and its Rhyming Dictionary
(REM). We use the Vector Space Model to calculate the similarity of the
definition of a query with the definitions corresponding to the assonant and
consonant rhymes of the query. The preliminary results using a manual
evaluation are very encouraging.
| 2019 | Computation and Language |
BERTje: A Dutch BERT Model | The transformer-based pre-trained language model BERT has helped to improve
state-of-the-art performance on many natural language processing (NLP) tasks.
Using the same architecture and parameters, we developed and evaluated a
monolingual Dutch BERT model called BERTje. Compared to the multilingual BERT
model, which includes Dutch but is only based on Wikipedia text, BERTje is
based on a large and diverse dataset of 2.4 billion tokens. BERTje consistently
outperforms the equally-sized multilingual BERT model on downstream NLP tasks
(part-of-speech tagging, named-entity recognition, semantic role labeling, and
sentiment analysis). Our pre-trained Dutch BERT model is made available at
https://github.com/wietsedv/bertje.
| 2,019 | Computation and Language |
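A minimal example of loading a monolingual Dutch BERT model with the Hugging Face transformers library and extracting contextual token vectors; the model identifier below is an assumption about how BERTje is distributed and should be checked against the repository linked above:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Model id assumed; consult the BERTje repository for the official checkpoint name.
model_name = "GroNLP/bert-base-dutch-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentence = "Amsterdam is de hoofdstad van Nederland."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
# One contextual vector per (sub)word token, usable for tagging or NER heads.
print(outputs.last_hidden_state.shape)  # (1, num_tokens, hidden_size)
```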
Pretrained Encyclopedia: Weakly Supervised Knowledge-Pretrained Language
Model | Recent breakthroughs in pretrained language models have shown the
effectiveness of self-supervised learning for a wide range of natural language
processing (NLP) tasks. In addition to standard syntactic and semantic NLP
tasks, pretrained models achieve strong improvements on tasks that involve
real-world knowledge, suggesting that large-scale language modeling could be an
implicit method to capture knowledge. In this work, we further investigate the
extent to which pretrained models such as BERT capture knowledge using a
zero-shot fact completion task. Moreover, we propose a simple yet effective
weakly supervised pretraining objective, which explicitly forces the model to
incorporate knowledge about real-world entities. Models trained with our new
objective yield significant improvements on the fact completion task. When
applied to downstream tasks, our model consistently outperforms BERT on four
entity-related question answering datasets (i.e., WebQuestions, TriviaQA,
SearchQA and Quasar-T) with an average improvement of 2.7 F1 points, and on a standard
fine-grained entity typing dataset (i.e., FIGER) with an accuracy gain of 5.7 points.
| 2,019 | Computation and Language |
SberQuAD -- Russian Reading Comprehension Dataset: Description and
Analysis | SberQuAD -- a large-scale analog of Stanford SQuAD in the Russian language --
is a valuable resource that has not been properly presented to the scientific
community. We fill this gap by providing a description, a thorough analysis,
and baseline experimental results.
| 2,020 | Computation and Language |
When to Talk: Chatbot Controls the Timing of Talking during Multi-turn
Open-domain Dialogue Generation | Although multi-turn open-domain dialogue systems have attracted more and
more attention and made great progress, the existing dialogue systems are still
far from engaging. Nearly all existing dialogue models only provide a response
after a user utterance is received. But during daily conversations, humans
always decide whether to continue speaking based on the context.
Intuitively, a dialogue model that can control the timing of talking
autonomously based on the conversation context can chat with humans more
naturally. In this paper, we explore a dialogue system that automatically
controls the timing of talking during the conversation. Specifically, we add
a decision module to existing dialogue models. Furthermore, modeling
conversation context effectively is very important for controlling the timing
of talking, so we also adopt graph neural networks to process the context
with its natural graph structure. Extensive experiments on two benchmarks show
that controlling the timing of talking can effectively improve the quality of
dialogue generation, and the proposed methods significantly improve the
accuracy of the timing of talking. In addition, we have publicly released the
codes of our proposed model.
| 2,019 | Computation and Language |
Hierarchical Character Embeddings: Learning Phonological and Semantic
Representations in Languages of Logographic Origin using Recursive Neural
Networks | Logographs (Chinese characters) have recursive structures (i.e. hierarchies
of sub-units in logographs) that contain phonological and semantic information,
and the developmental psychology literature suggests that native speakers leverage
these structures to learn how to read. Exploiting these structures could
potentially lead to better embeddings that can benefit many downstream tasks.
We propose building hierarchical logograph (character) embeddings from
logograph recursive structures using treeLSTM, a recursive neural network.
Using a recursive neural network imposes a prior on the mapping from logographs
to embeddings since the network must read in the sub-units in logographs
according to the order specified by the recursive structures. Based on human
behavior in language learning and reading, we hypothesize that modeling
logographs' structures using a recursive neural network should be beneficial. To
verify this claim, we consider two tasks (1) predicting logographs' Cantonese
pronunciation from logographic structures and (2) language modeling. Empirical
results show that the proposed hierarchical embeddings outperform baseline
approaches. Diagnostic analysis suggests that hierarchical embeddings
constructed using treeLSTM are less sensitive to distractors and thus more
robust, especially on complex logographs.
| 2,020 | Computation and Language |
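A minimal Child-Sum TreeLSTM cell of the kind the hierarchical-embedding entry above builds on, sketched in PyTorch. Composing a logograph representation from its sub-unit embeddings in the order given by the recursive structure is the intended use; the dimensions and random tensors below are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Composes a parent node representation from its children's states."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim + mem_dim, 3 * mem_dim)  # input, output, update gates
        self.fx = nn.Linear(in_dim, mem_dim)                  # forget gate, input part
        self.fh = nn.Linear(mem_dim, mem_dim)                 # forget gate, per-child part

    def forward(self, x, child_h, child_c):
        # x: (in_dim,) embedding of the current sub-unit
        # child_h, child_c: (num_children, mem_dim) hidden/cell states of the children
        h_sum = child_h.sum(dim=0)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))      # one forget gate per child
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c

cell = ChildSumTreeLSTMCell(in_dim=16, mem_dim=32)
x = torch.randn(16)                                # embedding of a logograph sub-unit
child_h, child_c = torch.randn(2, 32), torch.randn(2, 32)
h, c = cell(x, child_h, child_c)
print(h.shape, c.shape)                            # torch.Size([32]) torch.Size([32])
```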
A Hierarchical Model for Data-to-Text Generation | Transcribing structured data into natural language descriptions has emerged
as a challenging task, referred to as "data-to-text". These structures
generally group together multiple elements, as well as their attributes. Most attempts
rely on translation encoder-decoder methods which linearize elements into a
sequence. This however loses most of the structure contained in the data. In
this work, we propose to overcome this limitation with a hierarchical model
that encodes the data structure at both the element level and the structure level.
Evaluations on RotoWire show the effectiveness of our model w.r.t. qualitative
and quantitative metrics.
| 2,019 | Computation and Language |
Modeling Intent, Dialog Policies and Response Adaptation for
Goal-Oriented Interactions | Building a machine learning driven spoken dialog system for goal-oriented
interactions involves careful design of intents and data collection along with
development of intent recognition models and dialog policy learning algorithms.
The models should be robust enough to handle various user distractions during
the interaction flow and should steer the user back into an engaging
interaction for successful completion of the interaction. In this work, we have
designed a goal-oriented interaction system where children can engage with
agents for a series of interactions involving `Meet \& Greet' and `Simon Says'
game play. We have explored various feature extractors and models for improved
intent recognition and looked at leveraging previous user and system
interactions in novel ways with attention models. We have also looked at dialog
adaptation methods for entrained response selection. Our bootstrapped models
from limited training data perform better than many baseline approaches we have
looked at for intent recognition and dialog action prediction.
| 2,019 | Computation and Language |
Exploring Context, Attention and Audio Features for Audio Visual
Scene-Aware Dialog | We are witnessing a confluence of vision, speech and dialog system
technologies that are enabling intelligent virtual assistants (IVAs) to learn audio-visual groundings of
utterances and have conversations with users about the objects, activities and
events surrounding them. Recent progress in visual grounding techniques and
Audio Understanding are enabling machines to understand shared semantic
concepts and listen to the various sensory events in the environment. With
audio and visual grounding methods, end-to-end multimodal spoken dialog systems (SDS) are trained to
meaningfully communicate with us in natural language about the real dynamic
audio-visual sensory world around us. In this work, we explore the role of
`topics' as the context of the conversation along with multimodal attention
into such an end-to-end audio-visual scene-aware dialog system architecture. We
also incorporate an end-to-end audio classification ConvNet, AclNet, into our
models. We develop and test our approaches on the Audio Visual Scene-Aware
Dialog (AVSD) dataset released as a part of the DSTC7. We present the analysis
of our experiments and show that some of our model variations outperform the
baseline system released for AVSD.
| 2,019 | Computation and Language |
AMUSED: A Multi-Stream Vector Representation Method for Use in Natural
Dialogue | The problem of building a coherent and non-monotonous conversational agent
with proper discourse and coverage is still an area of open research. Current
architectures only take care of semantic and contextual information for a given
query and fail to completely account for syntactic and external knowledge which
are crucial for generating responses in a chit-chat system. To overcome this
problem, we propose an end-to-end multi-stream deep learning architecture which
learns unified embeddings for query-response pairs by leveraging contextual
information from memory networks and syntactic information by incorporating
Graph Convolution Networks (GCN) over their dependency parse. A stream of this
network also utilizes transfer learning by pre-training a bidirectional
transformer to extract semantic representation for each input sentence and
incorporates external knowledge through the neighborhood of the entities
from a Knowledge Base (KB). We benchmark these embeddings on the next sentence
prediction task and significantly improve upon the existing techniques.
Furthermore, we use AMUSED to represent query and responses along with its
context to develop a retrieval based conversational agent which has been
validated by expert linguists to have comprehensive engagement with humans.
| 2,019 | Computation and Language |
Implicit Knowledge in Argumentative Texts: An Annotated Corpus | When speaking or writing, people omit information that seems clear and
evident, such that only part of the message is expressed in words. Especially
in argumentative texts it is very common that (important) parts of the argument
are implied and omitted. We hypothesize that for argument analysis it will be
beneficial to reconstruct this implied information. As a starting point for
filling such knowledge gaps, we build a corpus consisting of high-quality human
annotations of missing and implied information in argumentative texts. To learn
more about the characteristics of both the argumentative texts and the added
information, we further annotate the data with semantic clause types and
commonsense knowledge relations. The outcome of our work is a carefully
designed and richly annotated dataset, for which we then provide an in-depth
analysis by investigating characteristic distributions and correlations of the
assigned labels. We reveal interesting patterns and intersections between the
annotation categories and properties of our dataset, which enable insights into
the characteristics of both argumentative texts and implicit knowledge in terms
of structural features and semantic information. The results of our analysis
can help to assist automated argument analysis and can guide the process of
revealing implicit information in argumentative texts automatically.
| 2,019 | Computation and Language |
Design and implementation of an open source Greek POS Tagger and Entity
Recognizer using spaCy | This paper proposes a machine learning approach to part-of-speech tagging and
named entity recognition for Greek, focusing on the extraction of morphological
features and classification of tokens into a small set of classes for named
entities. The architecture model that was used is introduced. The Greek version
of the spaCy platform was added into the source code, a feature that did not
exist before our contribution, and was used for building the models.
Additionally, a part-of-speech tagger was trained that can detect the
morphology of the tokens and performs above the state-of-the-art results
when classifying only the part of speech. For named entity recognition using
spaCy, a model that extends the standard ENAMEX type (organization, location,
person) was built. The experiments conducted indicate the need for more flexible
handling of out-of-vocabulary words, an issue we are working to resolve.
Finally, the evaluation results are discussed.
| 2,019 | Computation and Language |
Can AI Generate Love Advice?: Toward Neural Answer Generation for
Non-Factoid Questions | Deep learning methods that extract answers for non-factoid questions from QA
sites are seen as critical since they can assist users in reaching their next
decisions through conversations with AI systems. The current methods, however,
have the following two problems: (1) They cannot understand the ambiguous use
of words in the questions, as word usage can strongly depend on the context. As
a result, the accuracies of their answer selections are not good enough. (2)
The current methods can only select from among the answers held by QA sites and
cannot generate new ones. Thus, they cannot answer questions that differ
somewhat from those stored in QA sites. Our solution, Neural Answer
Construction Model, tackles these problems as it: (1) Incorporates the biases
of semantics behind questions into word embeddings while also computing them
regardless of the semantics. As a result, it can extract answers that suit the
contexts of words used in the question as well as following the common usage of
words across semantics. This improves the accuracy of answer selection. (2)
Uses biLSTM to compute the embeddings of questions as well as those of the
sentences often used to form answers. It then simultaneously learns the optimum
combination of those sentences as well as the closeness between the question
and those sentences. As a result, our model can construct an answer that
corresponds to the situation that underlies the question; it fills the gap
between answer selection and generation and is the first model to move beyond
the current simple answer selection model for non-factoid QAs. Evaluations
using datasets created for love advice stored in the Japanese QA site, Oshiete
goo, indicate that our model achieves 20% higher accuracy in answer creation
than the strong baselines. Our model is practical and has already been applied
to the love advice service in Oshiete goo.
| 2,019 | Computation and Language |
Decomposing predictability: Semantic feature overlap between words and
the dynamics of reading for meaning | The present study uses a computational approach to examine the role of
semantic constraints in normal reading. This methodology avoids confounds
inherent in conventional measures of predictability, allowing for theoretically
deeper accounts of semantic processing. We start from a definition of
associations between words based on the significant log likelihood that two
words co-occur frequently together in the sentences of a large text corpus.
Direct associations between stimulus words were controlled, and semantic
feature overlap between prime and target words was manipulated by their common
associates. The stimuli consisted of sentences of the form pronoun, verb,
article, adjective and noun, followed by a series of closed-class words, e.g.
"She rides the grey elephant on one of her many exploratory voyages". The
results showed that verb-noun overlap reduces single and first fixation
durations of the target noun and adjective-noun overlap reduces go-past
durations. A dynamic spreading of activation account suggests that associates
of the prime words take some time to become activated: The verb can act on the
target noun's early eye-movement measures presented three words later, while
the adjective is presented immediately prior to the target, which induces
sentence re-examination after a difficult adjective-noun semantic integration.
| 2,019 | Computation and Language |
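The reading study above defines word associations by the significant log-likelihood that two words co-occur in the sentences of a large corpus. A sketch of that association measure (Dunning's log-likelihood ratio over a 2x2 co-occurrence table); the counts are placeholders, not the study's corpus statistics:

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's G^2 for a 2x2 contingency table of sentence co-occurrence:
    k11 = sentences with both words, k12 = with word1 only,
    k21 = with word2 only,           k22 = with neither."""
    def ll(k, n, p):
        # log-likelihood of k successes in n trials with success probability p
        p = min(max(p, 1e-12), 1 - 1e-12)
        return k * math.log(p) + (n - k) * math.log(1 - p)

    n = k11 + k12 + k21 + k22
    p = (k11 + k12) / n            # overall rate of word1
    p1 = k11 / (k11 + k21)         # rate of word1 in sentences containing word2
    p2 = k12 / (k12 + k22)         # rate of word1 in sentences without word2
    return 2 * (ll(k11, k11 + k21, p1) + ll(k12, k12 + k22, p2)
                - ll(k11, k11 + k21, p) - ll(k12, k12 + k22, p))

# Toy counts: "rides" and "elephant" co-occur in 30 of 100,000 sentences.
print(round(log_likelihood_ratio(30, 470, 970, 98530), 2))
```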
Zero-shot Text Classification With Generative Language Models | This work investigates the use of natural language to enable zero-shot model
adaptation to new tasks. We use text and metadata from social commenting
platforms as a source for a simple pretraining task. We then provide the
language model with natural language descriptions of classification tasks as
input and train it to generate the correct answer in natural language via a
language modeling objective. This allows the model to generalize to new
classification tasks without the need for multiple multitask classification
heads. We show the zero-shot performance of these generative language models,
trained with weak supervision, on six benchmark text classification datasets
from the torchtext library. Despite no access to training data, we achieve up
to a 45% absolute improvement in classification accuracy over random or
majority class baselines. These results show that natural language can serve as
simple and powerful descriptors for task adaptation. We believe this points the
way to new metalearning strategies for text problems.
| 2,019 | Computation and Language |
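A minimal sketch of zero-shot classification with a generative language model in the spirit of the entry above: the task description and input are given as a natural-language prompt, and candidate labels are scored as continuations. The prompt template and the use of GPT-2 here are assumptions, not the authors' exact pretraining or model setup:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def label_score(prompt, label):
    """Average log-probability of the label tokens as a continuation of the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    label_ids = tokenizer(" " + label, return_tensors="pt").input_ids
    ids = torch.cat([prompt_ids, label_ids], dim=1)
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)   # position i predicts token i+1
    start = prompt_ids.size(1) - 1
    target = ids[0, prompt_ids.size(1):]
    return log_probs[start:start + target.size(0)].gather(1, target.unsqueeze(1)).mean().item()

text = "The battery died after two days and the screen cracked."
prompt = (f"Classify the sentiment of this review as positive or negative.\n"
          f"Review: {text}\nSentiment:")
labels = ["positive", "negative"]
print(max(labels, key=lambda l: label_score(prompt, l)))
```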
MedCAT -- Medical Concept Annotation Tool | Biomedical documents such as Electronic Health Records (EHRs) contain a large
amount of information in an unstructured format. The data in EHRs is a hugely
valuable resource documenting clinical narratives and decisions, but whilst the
text can be easily understood by human doctors it is challenging to use in
research and clinical applications. To uncover the potential of biomedical
documents we need to extract and structure the information they contain. The
task at hand is Named Entity Recognition and Linking (NER+L). The number of
entities, ambiguity of words, overlapping and nesting make the biomedical area
significantly more difficult than many others. To overcome these difficulties,
we have developed the Medical Concept Annotation Tool (MedCAT), an open-source
unsupervised approach to NER+L. MedCAT uses unsupervised machine learning to
disambiguate entities. It was validated on MIMIC-III (a freely accessible
critical care database) and MedMentions (Biomedical papers annotated with
mentions from the Unified Medical Language System). In the case of NER+L, the
comparison with existing tools shows that MedCAT improves the previous best
with only unsupervised learning (F1=0.848 vs 0.691 for disease detection;
F1=0.710 vs. 0.222 for general concept detection). A qualitative analysis of
the vector embeddings learnt by MedCAT shows that it captures latent medical
knowledge available in EHRs (MIMIC-III). Unsupervised learning can improve the
performance of large scale entity extraction, but it has some limitations when
working with only a couple of entities and a small dataset. In that case, the
options are supervised learning or active learning, both of which are supported
in MedCAT via the MedCATtrainer extension. Our approach can detect and link
millions of different biomedical concepts with state-of-the-art performance,
whilst being lightweight, fast and easy to use.
| 2,019 | Computation and Language |
Machine Translation with Cross-lingual Word Embeddings | Learning word embeddings using distributional information is a task that has
been studied by many researchers, and a large body of work is reported in the
literature. In contrast, fewer studies have addressed the case of multiple
languages. The idea is to focus on a single representation for a pair of
languages such that semantically similar words are closer to one another in the
induced representation irrespective of the language. In this way, when data are
missing for a particular language, classifiers from another language can be
used.
| 2,020 | Computation and Language |
Two Way Adversarial Unsupervised Word Translation | Word translation is a problem in machine translation that seeks to build
models that recover word level correspondence between languages. Recent
approaches to this problem have shown that word translation models can be learned
with very small seeding dictionaries, and even without any starting
supervision. In this paper we propose a method to jointly find translations
between a pair of languages. Not only does our method learn translations in
both directions but it improves accuracy of those translations over past
methods.
| 2,019 | Computation and Language |
A Comparison of Architectures and Pretraining Methods for Contextualized
Multilingual Word Embeddings | The lack of annotated data in many languages is a well-known challenge within
the field of multilingual natural language processing (NLP). Therefore, many
recent studies focus on zero-shot transfer learning and joint training across
languages to overcome data scarcity for low-resource languages. In this work we
(i) perform a comprehensive comparison of state-of-the-art multilingual word and
sentence encoders on the tasks of named entity recognition (NER) and part of
speech (POS) tagging; and (ii) propose a new method for creating multilingual
contextualized word embeddings, compare it to multiple baselines and show that
it performs at or above state-of-the-art level in zero-shot transfer settings.
Finally, we show that our method allows for better knowledge sharing across
languages in a joint training setting.
| 2,019 | Computation and Language |
Na\"iveRole: Author-Contribution Extraction and Parsing from Biomedical
Manuscripts | Information about the contributions of individual authors to scientific
publications is important for assessing authors' achievements. Some biomedical
publications have a short section that describes authors' roles and
contributions. It is usually written in natural language and hence author
contributions cannot be trivially extracted in a machine-readable format. In
this paper, we present 1) A statistical analysis of roles in author
contributions sections, and 2) Na\"iveRole, a novel approach to extract
structured authors' roles from author contribution sections. For the first
part, we used co-clustering techniques, as well as Open Information Extraction,
to semi-automatically discover the popular roles within a corpus of 2,000
contributions sections from PubMed Central. The discovered roles were used to
automatically build a training set for Na\"iveRole, our role extractor
approach, based on Na\"ive Bayes. Na\"iveRole extracts roles with a
micro-averaged precision of 0.68, recall of 0.48 and F1 of 0.57. It is, to the
best of our knowledge, the first attempt to automatically extract author roles
from research papers. This paper is an extended version of a previous poster
published at JCDL 2018.
| 2,019 | Computation and Language |
A Machine Learning Framework for Authorship Identification From Texts | Authorship identification is a process in which the author of a text is
identified. Most known literary texts can easily be attributed to a certain
author because they are, for example, signed. Yet sometimes we find unfinished
pieces of work or a collection of manuscripts with a wide variety of possible
authors. In order to assess the importance of such a manuscript, it is vital to
know who wrote it. In this work, we aim to develop a machine learning framework
to effectively determine authorship. We formulate the task as a single-label
multi-class text categorization problem and propose a supervised machine
learning framework incorporating stylometric features. This task is highly
interdisciplinary in that it takes advantage of machine learning, information
retrieval, and natural language processing. We present an approach and a model
which learns the differences in writing style between $50$ different authors
and is able to predict the author of a new text with high accuracy. The
accuracy is seen to increase significantly after introducing certain linguistic
stylometric features along with text features.
| 2,019 | Computation and Language |
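A sketch of the kind of stylometric features such an authorship framework might combine with plain text features; the particular feature set and function-word list are illustrative assumptions, not the ones used in the entry above:

```python
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "it", "was", "but", "not"]

def stylometric_features(text):
    """Extract simple writing-style features from a text."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = max(len(words), 1)
    counts = Counter(words)
    feats = {
        "avg_word_length": sum(len(w) for w in words) / n_words,
        "avg_sentence_length": n_words / max(len(sentences), 1),
        "type_token_ratio": len(counts) / n_words,      # lexical richness
        "comma_rate": text.count(",") / n_words,
        "semicolon_rate": text.count(";") / n_words,
    }
    # Relative frequency of common function words, a classic authorship signal.
    for fw in FUNCTION_WORDS:
        feats[f"fw_{fw}"] = counts[fw] / n_words
    return feats

sample = "It was the best of times; it was the worst of times."
print(stylometric_features(sample))
```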
Predicting Heart Failure Readmission from Clinical Notes Using Deep
Learning | Heart failure hospitalization is a severe burden on healthcare. How to
predict and therefore prevent readmission has been a significant challenge in
outcomes research. To address this, we propose a deep learning approach to
predict readmission from clinical notes. Unlike conventional methods that use
structured data for prediction, we leverage the unstructured clinical notes to
train deep learning models based on convolutional neural networks (CNN). We
then use the trained models to classify and predict potentially high-risk
admissions/patients. For evaluation, we trained CNNs using the discharge
summary notes in the MIMIC III database. We also trained regular machine
learning models based on random forest using the same datasets. The result
shows that deep learning models outperform the regular models in prediction
tasks. The CNN method achieves an F1 score of 0.756 in general readmission
prediction and 0.733 in 30-day readmission prediction, while random forest only
achieves F1 scores of 0.674 and 0.656, respectively. We also propose a
chi-square test based method to interpret key features associated with deep
learning predicted readmissions. It reveals clinical insights about readmission
embedded in the clinical notes. Collectively, our method can make the human
evaluation process more efficient and potentially facilitate the reduction of
readmission rates.
| 2,019 | Computation and Language |
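A sketch of the chi-square interpretation step described above: for each candidate clinical term, test whether its presence in a note is associated with the model's predicted readmission label. The toy notes, predictions, and term are placeholder assumptions:

```python
import numpy as np
from scipy.stats import chi2_contingency

def term_association(notes, predictions, term):
    """Chi-square test relating presence of `term` in a note to the
    model-predicted readmission label (1 = predicted readmission)."""
    has_term = np.array([term in note.lower() for note in notes])
    pred = np.array(predictions)
    # 2x2 contingency table: term presence x predicted label.
    table = np.array([
        [np.sum(has_term & (pred == 1)), np.sum(has_term & (pred == 0))],
        [np.sum(~has_term & (pred == 1)), np.sum(~has_term & (pred == 0))],
    ])
    chi2, p_value, _, _ = chi2_contingency(table)
    return chi2, p_value

notes = [
    "severe congestive heart failure, reduced ejection fraction",
    "chest pain resolved, discharged in stable condition",
    "acute heart failure exacerbation, diuresis started",
    "routine follow up, no acute distress",
]
preds = [1, 0, 1, 0]  # hypothetical model predictions
print(term_association(notes, preds, "heart failure"))
```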
Recurrent Hierarchical Topic-Guided RNN for Language Generation | To simultaneously capture syntax and global semantics from a text corpus, we
propose a new larger-context recurrent neural network (RNN) based language
model, which extracts recurrent hierarchical semantic structure via a dynamic
deep topic model to guide natural language generation. Moving beyond a
conventional RNN-based language model that ignores long-range word dependencies
and sentence order, the proposed model captures not only intra-sentence word
dependencies, but also temporal transitions between sentences and
inter-sentence topic dependencies. For inference, we develop a hybrid of
stochastic-gradient Markov chain Monte Carlo and recurrent autoencoding
variational Bayes. Experimental results on a variety of real-world text corpora
demonstrate that the proposed model not only outperforms larger-context
RNN-based language models, but also learns interpretable recurrent multilayer
topics and generates diverse sentences and paragraphs that are syntactically
correct and semantically coherent.
| 2,020 | Computation and Language |
T3: Tree-Autoencoder Constrained Adversarial Text Generation for
Targeted Attack | Adversarial attacks against natural language processing systems, which
perform seemingly innocuous modifications to inputs, can induce arbitrary
mistakes to the target models. Though they have raised great concerns, such adversarial
attacks can be leveraged to estimate the robustness of NLP models. Compared
with the adversarial example generation in continuous data domain (e.g.,
image), generating adversarial text that preserves the original meaning is
challenging since the text space is discrete and non-differentiable. To handle
these challenges, we propose a target-controllable adversarial attack framework
T3, which is applicable to a range of NLP tasks. In particular, we propose a
tree-based autoencoder to embed the discrete text data into a continuous
representation space, upon which we optimize the adversarial perturbation. A
novel tree-based decoder is then applied to regularize the syntactic
correctness of the generated text and manipulate it on either sentence
(T3(Sent)) or word (T3(Word)) level. We consider two of the most representative NLP
tasks: sentiment analysis and question answering (QA). Extensive experimental
results and human studies show that T3 generated adversarial texts can
successfully manipulate the NLP models to output the targeted incorrect answer
without misleading the human. Moreover, we show that the generated adversarial
texts have high transferability which enables the black-box attacks in
practice. Our work sheds light on an effective and general way to examine the
robustness of NLP models. Our code is publicly available at
https://github.com/AI-secure/T3/.
| 2,020 | Computation and Language |
Analyzing Structures in the Semantic Vector Space: A Framework for
Decomposing Word Embeddings | Word embeddings are rich word representations, which in combination with deep
neural networks, lead to large performance gains for many NLP tasks. However,
word embeddings are represented by dense, real-valued vectors and they are
therefore not directly interpretable. Thus, computational operations based on
them are also not well understood. In this paper, we present an approach for
analyzing structures in the semantic vector space to get a better understanding
of the underlying semantic encoding principles. We present a framework for
decomposing word embeddings into smaller meaningful units which we call
sub-vectors. The framework opens up a wide range of possibilities analyzing
phenomena in vector space semantics, as well as solving concrete NLP problems:
We introduce the category completion task and show that a sub-vector based
approach is superior to supervised techniques; We present a sub-vector based
method for solving the word analogy task, which substantially outperforms
different variants of the traditional vector-offset method.
| 2,019 | Computation and Language |
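For reference, the traditional vector-offset (3CosAdd) analogy method that the sub-vector approach in the entry above is compared against; the tiny two-dimensional embedding table is a toy assumption:

```python
import numpy as np

def solve_analogy(a, b, c, embeddings):
    """Return the word d maximizing cos(d, b - a + c): 'a is to b as c is to d'."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best_word, best_score = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):                  # exclude the query words, as is standard
            continue
        score = np.dot(vec, target) / np.linalg.norm(vec)
        if score > best_score:
            best_word, best_score = word, score
    return best_word

# Toy 2-d embeddings encoding a royalty dimension and a gender dimension.
emb = {
    "king":  np.array([1.0, 1.0]),
    "queen": np.array([1.0, -1.0]),
    "man":   np.array([0.0, 1.0]),
    "woman": np.array([0.0, -1.0]),
}
print(solve_analogy("man", "king", "woman", emb))  # expected: "queen"
```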
BERTQA -- Attention on Steroids | In this work, we extend the Bidirectional Encoder Representations from
Transformers (BERT) with an emphasis on directed coattention to obtain an
improved F1 performance on the SQUAD2.0 dataset. The Transformer architecture
on which BERT is based places hierarchical global attention on the
concatenation of the context and query. Our additions to the BERT architecture
augment this attention with a more focused context to query (C2Q) and query to
context (Q2C) attention via a set of modified Transformer encoder units. In
addition, we explore adding convolution-based feature extraction within the
coattention architecture to add localized information to self-attention. We
found that coattention significantly improves the no-answer F1 by 4 points in
the base and 1 point in the large architecture. After adding skip connections,
the no-answer F1 improved further without causing an additional loss in
has-answer F1. The addition of localized feature extraction added to attention
produced an overall dev F1 of 77.03 in the base architecture. We applied our
findings to the large BERT model which contains twice as many layers and
further used our own augmented version of the SQUAD 2.0 dataset created by back
translation, which we have named SQUAD 2.Q. Finally, we performed
hyperparameter tuning and ensembled our best models for a final F1/EM of
82.317/79.442 (Attention on Steroids, PCE Test Leaderboard).
| 2,019 | Computation and Language |
Tag-less Back-Translation | An effective method to generate a large number of parallel sentences for
training improved neural machine translation (NMT) systems is the use of the
back-translations of the target-side monolingual data. The standard
back-translation method has been shown to be unable to efficiently utilize the
huge amount of available monolingual data because of the inability of
translation models to differentiate between the authentic and synthetic
parallel data during training. Tagging, or using gates, has been used to enable
translation models to distinguish between synthetic and authentic data,
improving standard back-translation and also enabling the use of iterative
back-translation on language pairs that underperformed using standard
back-translation. In this work, we approach back-translation as a domain
adaptation problem, eliminating the need for explicit tagging. In the approach
-- \emph{tag-less back-translation} -- the synthetic and authentic parallel
data are treated as out-of-domain and in-domain data respectively and, through
pre-training and fine-tuning, the translation model is shown to be able to
learn more efficiently from them during training. Experimental results have
shown that the approach outperforms the standard and tagged back-translation
approaches on low resource English-Vietnamese and English-German neural machine
translation.
| 2,021 | Computation and Language |
Hybrid Machine Learning Models of Classifying Residential Requests for
Smart Dispatching | This paper presents a hybrid machine learning method of classifying
residential requests in natural language to responsible departments that
provide timely responses back to residents under the vision of digital
government services in smart cities. Residential requests in natural language
descriptions cover almost every aspect of a city's daily operation. Hence the
responsible departments are fine-grained to even the level of local
communities. There are no specific general categories or labels for each
request sample. This causes two issues for supervised classification solutions,
namely (1) the request sample data is unbalanced and (2) lack of specific
labels for training. To solve these issues, we investigate a hybrid machine
learning method that generates meta-class labels by means of unsupervised
clustering algorithms; applies two word-embedding methods with three
classifiers (including two hierarchical classifiers and one residual
convolutional neural network); and selects the best performing classifier as
the classification result. We demonstrate that our approach performs better on
classification tasks than two benchmark machine learning models, a
Naive Bayes classifier and a Multilayer Perceptron (MLP). In addition, the
hierarchical classification method provides insights into the source of
classification errors.
| 2,019 | Computation and Language |
Harnessing Evolution of Multi-Turn Conversations for Effective Answer
Retrieval | With the improvements in speech recognition and voice generation technologies
over the last years, a lot of companies have sought to develop conversation
understanding systems that run on mobile phones or smart home devices through
natural language interfaces. Conversational assistants, such as Google
Assistant and Microsoft Cortana, can help users to complete various types of
tasks. This requires an accurate understanding of the user's information need
as the conversation evolves into multiple turns. Finding relevant context in a
conversation's history is challenging because of the complexity of natural
language and the evolution of a user's information need. In this work, we
present an extensive analysis of the language, relevance, and dependency of user
utterances in a multi-turn information-seeking conversation. To this aim, we
have annotated relevant utterances in the conversations released by the TREC
CaST 2019 track. The annotation labels determine which of the previous
utterances in a conversation can be used to improve the current one.
Furthermore, we propose a neural utterance relevance model based on BERT
fine-tuning, outperforming competitive baselines. We study and compare the
performance of multiple retrieval models, utilizing different strategies to
incorporate the user's context. The experimental results on both classification
and retrieval tasks show that our proposed approach can effectively identify
and incorporate the conversation context. We show that processing the current
utterance using the predicted relevant utterance leads to a 38% relative
improvement in terms of nDCG@20. Finally, to foster research in this area, we
have released the dataset of the annotations.
| 2,020 | Computation and Language |
Knowledge-guided Convolutional Networks for Chemical-Disease Relation
Extraction | Background: Automatic extraction of chemical-disease relations (CDR) from
unstructured text is of essential importance for disease treatment and drug
development. Meanwhile, biomedical experts have built many highly-structured
knowledge bases (KBs), which contain prior knowledge about chemicals and
diseases. Prior knowledge provides strong support for CDR extraction. How to
make full use of it is worth studying. Results: This paper proposes a novel
model called "Knowledge-guided Convolutional Networks (KCN)" to leverage prior
knowledge for CDR extraction. The proposed model first learns knowledge
representations including entity embeddings and relation embeddings from KBs.
Then, entity embeddings are used to control the propagation of context features
towards a chemical-disease pair with gated convolutions. After that, relation
embeddings are employed to further capture the weighted context features by a
shared attention pooling. Finally, the weighted context features containing
additional knowledge information are used for CDR extraction. Experiments on
the BioCreative V CDR dataset show that the proposed KCN achieves 71.28%
F1-score, which outperforms most of the state-of-the-art systems. Conclusions:
This paper proposes a novel CDR extraction model KCN to make full use of prior
knowledge. Experimental results demonstrate that KCN could effectively
integrate prior knowledge and contexts for the performance improvement.
| 2,019 | Computation and Language |
Combining Context and Knowledge Representations for Chemical-Disease
Relation Extraction | Automatically extracting the relationships between chemicals and diseases is
significantly important to various areas of biomedical research and health
care. Biomedical experts have built many large-scale knowledge bases (KBs) to
advance the development of biomedical research. KBs contain huge amounts of
structured information about entities and relationships, and therefore play a
pivotal role in chemical-disease relation (CDR) extraction. However, previous
research has paid little attention to the prior knowledge existing in KBs. This
paper proposes a neural network-based attention model (NAM) for CDR extraction,
which makes full use of context information in documents and prior knowledge in
KBs. For a pair of entities in a document, an attention mechanism is employed
to select important context words with respect to the relation representations
learned from KBs. Experiments on the BioCreative V CDR dataset show that
combining context and knowledge representations through the attention
mechanism, could significantly improve the CDR extraction performance while
achieving results comparable to state-of-the-art systems.
| 2,018 | Computation and Language |
Siamese Networks for Large-Scale Author Identification | Authorship attribution is the process of identifying the author of a text.
Approaches to tackling it have been conventionally divided into
classification-based ones, which work well for small numbers of candidate
authors, and similarity-based methods, which are applicable for larger numbers
of authors or for authors beyond the training set; these existing
similarity-based methods have only embodied static notions of similarity. Deep
learning methods, which blur the boundaries between classification-based and
similarity-based approaches, are promising in terms of ability to learn a
notion of similarity, but have previously only been used in a conventional
small-closed-class classification setup.
Siamese networks have been used to develop learned notions of similarity in
one-shot image tasks, and also for tasks of mostly semantic relatedness in NLP.
We examine their application to the stylistic task of authorship attribution on
datasets with large numbers of authors, looking at multiple energy functions
and neural network architectures, and show that they can substantially
outperform previous approaches.
| 2,021 | Computation and Language |
Discovering Protagonist of Sentiment with Aspect Reconstructed Capsule
Network | Most existing aspect-term level sentiment analysis (ATSA) approaches
combine various neural network models with delicately crafted attention
mechanisms built upon given aspect and context to generate refined sentence
representations for better predictions. In these methods, aspect terms are
always provided in both training and testing process which may degrade
aspect-level analysis into sentence-level prediction. However, the annotated
aspect term might be unavailable in real-world scenarios which may challenge
the applicability of the existing methods. In this paper, we aim to improve
ATSA by discovering the potential aspect terms of the predicted sentiment
polarity when the aspect terms of a test sentence are unknown. We approach this
goal by proposing a capsule network based model named CAPSAR. In CAPSAR,
sentiment categories are denoted by capsules and aspect term information is
injected into sentiment capsules through a sentiment-aspect reconstruction
procedure during the training. As a result, coherent patterns between aspects
and sentimental expressions are encapsulated by these sentiment capsules.
Experiments on three widely used benchmarks demonstrate that these patterns have
potential for exploring aspect terms from a test sentence when only the
sentence is fed to the model. Meanwhile, the proposed CAPSAR can clearly outperform
SOTA methods in standard ATSA tasks.
| 2,020 | Computation and Language |
Artificial mental phenomena: Psychophysics as a framework to detect
perception biases in AI models | Detecting biases in artificial intelligence has become difficult because of
the impenetrable nature of deep learning. The central difficulty is in relating
unobservable phenomena deep inside models with observable, outside quantities
that we can measure from inputs and outputs. For example, can we detect
gendered perceptions of occupations (e.g., female librarian, male electrician)
using questions to and answers from a word embedding-based system? Current
techniques for detecting biases are often customized for a task, dataset, or
method, affecting their generalization. In this work, we draw from
Psychophysics in Experimental Psychology---meant to relate quantities from the
real world (i.e., "Physics") into subjective measures in the mind (i.e.,
"Psyche")---to propose an intellectually coherent and generalizable framework
to detect biases in AI. Specifically, we adapt the two-alternative forced
choice task (2AFC) to estimate potential biases and the strength of those
biases in black-box models. We successfully reproduce previously-known biased
perceptions in word embeddings and sentiment analysis predictions. We discuss
how concepts in experimental psychology can be naturally applied to
understanding artificial mental phenomena, and how psychophysics can form a
useful methodological foundation to study fairness in AI.
| 2,019 | Computation and Language |
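A minimal sketch of a two-alternative forced choice (2AFC) probe on a black-box embedding model, in the spirit of the entry above: for each occupation the "observer" is forced to choose the more similar of two gendered words, and the choice proportion estimates the bias strength. The word lists, the injected toy bias, and the cosine-similarity observer are illustrative assumptions:

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def two_afc_bias(occupations, option_a, option_b, embed):
    """Fraction of trials in which the model judges an occupation closer to
    option_a (e.g. "she") than to option_b (e.g. "he")."""
    choices = [cosine(embed(o), embed(option_a)) > cosine(embed(o), embed(option_b))
               for o in occupations]
    return sum(choices) / len(choices)

# Toy embedding lookup standing in for a black-box model.
rng = np.random.default_rng(0)
vocab = ["she", "he", "librarian", "electrician", "nurse", "engineer"]
table = {w: rng.normal(size=50) for w in vocab}
# Inject a gendered direction so the probe has something to detect.
table["librarian"] += 0.5 * table["she"]
table["electrician"] += 0.5 * table["he"]

bias = two_afc_bias(["librarian", "electrician", "nurse", "engineer"],
                    "she", "he", lambda w: table[w])
print(f"P(choose 'she') = {bias:.2f}")  # 0.5 would indicate no detectable bias
```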
BioConceptVec: creating and evaluating literature-based biomedical
concept embeddings on a large scale | Capturing the semantics of related biological concepts, such as genes and
mutations, is of significant importance to many research tasks in computational
biology such as protein-protein interaction detection, gene-drug association
prediction, and biomedical literature-based discovery. Here, we propose to
leverage state-of-the-art text mining tools and machine learning models to
learn the semantics via vector representations (aka. embeddings) of over
400,000 biological concepts mentioned in the entire PubMed abstracts. Our
learned embeddings, namely BioConceptVec, can capture related concepts based on
their surrounding contextual information in the literature, which is beyond
exact term match or co-occurrence-based methods. BioConceptVec has been
thoroughly evaluated in multiple bioinformatics tasks consisting of over 25
million instances from nine different biological datasets. The evaluation
results demonstrate that BioConceptVec has better performance than existing
methods in all tasks. Finally, BioConceptVec is made freely available to the
research community and general public via
https://github.com/ncbi-nlp/BioConceptVec.
| 2,020 | Computation and Language |
What do Asian Religions Have in Common? An Unsupervised Text Analytics
Exploration | The main source of various religious teachings is their sacred texts which
vary from religion to religion based on different factors like the geographical
location or time of the birth of a particular religion. Despite these
differences, there could be similarities between the sacred texts based on what
lessons they teach to their followers. This paper attempts to find such similarity
using text mining techniques. A corpus consisting of Asian (Tao Te Ching,
Buddhism, Yogasutra, Upanishad) and non-Asian (four Bible texts) scriptures is used to
explore similarity measures such as Euclidean, Manhattan, Jaccard and
Cosine on the raw Document-Term Matrix [DTM] and the normalized DTM, which reveal
similarity based on word usage. The performance of Supervised learning
algorithms like K-Nearest Neighbor [KNN], Support Vector Machine [SVM] and
Random Forest is measured based on their accuracy in predicting the correct sacred text
for any given chapter in the corpus. The K-means clustering visualizations on
Euclidean distances of the raw DTM reveal that there exists a pattern of
similarity among these sacred texts, with the Upanishads and the Tao Te Ching being the
most similar texts in the corpus.
| 2,019 | Computation and Language |
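A sketch of the similarity measures the entry above computes over a document-term matrix (DTM); the toy "chapters" stand in for the sacred-text corpus, and the measures shown (cosine, Euclidean, Jaccard over word usage) follow the description above:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

chapters = [
    "the way that can be spoken is not the eternal way",
    "from the unreal lead me to the real from darkness lead me to light",
    "blessed are the meek for they shall inherit the earth",
]

dtm = CountVectorizer().fit_transform(chapters).toarray()   # raw document-term matrix

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def euclidean(u, v):
    return np.linalg.norm(u - v)

def jaccard(u, v):
    a, b = u > 0, v > 0                       # word usage, ignoring counts
    return np.sum(a & b) / np.sum(a | b)

for i in range(len(chapters)):
    for j in range(i + 1, len(chapters)):
        print(i, j,
              f"cosine={cosine(dtm[i], dtm[j]):.2f}",
              f"euclidean={euclidean(dtm[i], dtm[j]):.2f}",
              f"jaccard={jaccard(dtm[i], dtm[j]):.2f}")
```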
"The Squawk Bot": Joint Learning of Time Series and Text Data Modalities
for Automated Financial Information Filtering | Multimodal analysis that uses numerical time series and textual corpora as
input data sources is becoming a promising approach, especially in the
financial industry. However, the main focus of such analysis has been on
achieving high prediction accuracy while little effort has been spent on the
important task of understanding the association between the two data
modalities. Performance on the time series hence receives little explanation
though human-understandable textual information is available. In this work, we
address the following problem: given a numerical time series and a general corpus of
textual stories collected over the same period, discover in a timely manner a
succinct set of textual stories associated with that time
series. Towards this goal, we propose a novel multi-modal neural model called
MSIN that jointly learns both numerical time series and categorical text
articles in order to unearth the association between them. Through multiple
steps of data interrelation between the two data modalities, MSIN learns to
focus on a small subset of text articles that best align with the performance
in the time series. This succinct set is timely discovered and presented as
recommended documents, acting as automated information filtering, for the given
time series. We empirically evaluate the performance of our model on
discovering relevant news articles for two stock time series from Apple and
Google companies, along with the daily news articles collected from the Thomson
Reuters over a period of seven consecutive years. The experimental results
demonstrate that MSIN achieves up to 84.9% and 87.2% in recalling the ground
truth articles for the two examined time series, respectively, far superior
to state-of-the-art algorithms that rely on a conventional attention mechanism in
deep learning.
| 2,019 | Computation and Language |
Probing the phonetic and phonological knowledge of tones in Mandarin TTS
models | This study probes the phonetic and phonological knowledge of lexical tones in
TTS models through two experiments. Controlled stimuli for testing tonal
coarticulation and tone sandhi in Mandarin were fed into Tacotron 2 and
WaveGlow to generate speech samples, which were subject to acoustic analysis
and human evaluation. Results show that both baseline Tacotron 2 and Tacotron 2
with BERT embeddings capture the surface tonal coarticulation patterns well but
fail to consistently apply the Tone-3 sandhi rule to novel sentences.
Incorporating pre-trained BERT embeddings into Tacotron 2 improves the
naturalness and prosody performance, and yields better generalization of Tone-3
sandhi rules to novel complex sentences, although the overall accuracy for
Tone-3 sandhi was still low. Given that TTS models do capture some linguistic
phenomena, it is argued that they can be used to generate and validate certain
linguistic hypotheses. On the other hand, it is also suggested that
linguistically informed stimuli should be included in the training and the
evaluation of TTS models.
| 2,019 | Computation and Language |
Improving Abstractive Text Summarization with History Aggregation | Recent neural sequence to sequence models have provided feasible solutions
for abstractive summarization. However, such models still struggle to capture
long-range text dependencies in the summarization task. A high-quality summarization
system usually depends on a strong encoder which can refine important information
from long input texts so that the decoder can generate salient summaries from
the encoder's memory. In this paper, we propose an aggregation mechanism based
on the Transformer model to address the challenge of long text representation.
Our model can review history information to make encoder hold more memory
capacity. Empirically, we apply our aggregation mechanism to the Transformer
model and experiment on CNN/DailyMail dataset to achieve higher quality
summaries compared to several strong baseline models on the ROUGE metrics.
| 2,019 | Computation and Language |
Predictive Biases in Natural Language Processing Models: A Conceptual
Framework and Overview | An increasing number of works in natural language processing have addressed
the effect of bias on the predicted outcomes, introducing mitigation techniques
that act on different parts of the standard NLP pipeline (data and models).
However, these works have been conducted in isolation, without a unifying
framework to organize efforts within the field. This leads to repetitive
approaches, and puts an undue focus on the effects of bias, rather than on
their origins. Research focused on bias symptoms rather than the underlying
origins could limit the development of effective countermeasures. In this
paper, we propose a unifying conceptualization: the predictive bias framework
for NLP. We summarize the NLP literature and propose a general mathematical
definition of predictive bias in NLP along with a conceptual framework,
differentiating four main origins of biases: label bias, selection bias, model
overamplification, and semantic bias. We discuss how past work has countered
each bias origin. Our framework serves to guide an introductory overview of
predictive bias in NLP, integrating existing work into a single structure and
opening avenues for future research.
| 2,020 | Computation and Language |
Falcon 2.0: An Entity and Relation Linking Tool over Wikidata | The Natural Language Processing (NLP) community has significantly contributed
to the solutions for entity and relation recognition from the text, and
possibly linking them to proper matches in Knowledge Graphs (KGs). Considering
Wikidata as the background KG, still, there are limited tools to link knowledge
within the text to Wikidata. In this paper, we present Falcon 2.0, the first joint
entity and relation linking tool over Wikidata. It receives a short natural
language text in the English language and outputs a ranked list of entities and
relations annotated with the proper candidates in Wikidata. The candidates are
represented by their Internationalized Resource Identifier (IRI) in Wikidata.
Falcon 2.0 resorts to the English language model for the recognition task
(e.g., N-Gram tiling and N-Gram splitting), and then to an optimization approach
for the linking task. We have empirically studied the performance of Falcon 2.0 on
Wikidata and concluded that it outperforms all the existing baselines. Falcon
2.0 is public and can be reused by the community; all the required instructions
of Falcon 2.0 are well-documented at our GitHub repository. We also demonstrate
an online API, which can be run without any technical expertise. Falcon 2.0 and
its background knowledge bases are available as resources at
https://labs.tib.eu/falcon/falcon2/.
| 2,020 | Computation and Language |
Open-domain Event Extraction and Embedding for Natural Gas Market
Prediction | We propose an approach to predict the natural gas price in several days using
historical price data and events extracted from news headlines. Most previous
methods treat the price as an extrapolatable time series; those that analyze the
relation between prices and news either trim their price data to match
a public news dataset, manually annotate headlines, or use off-the-shelf
tools. In comparison to off-the-shelf tools, our event extraction method
detects not only the occurrence of phenomena but also the changes in
attribution and characteristics from public sources. Instead of using sentence
embedding as a feature, we use every word of the extracted events, encode and
organize them before feeding to the learning models. Empirical results show
favorable results, in terms of prediction performance, money saved and
scalability.
| 2,020 | Computation and Language |
Leveraging Lead Bias for Zero-shot Abstractive News Summarization | A typical journalistic convention in news articles is to deliver the most
salient information in the beginning, also known as the lead bias. While this
phenomenon can be exploited in generating a summary, it has a detrimental
effect on teaching a model to discriminate and extract important information in
general. We propose that this lead bias can be leveraged in our favor in a
simple and effective way to pre-train abstractive news summarization models on
large-scale unlabeled news corpora: predicting the leading sentences using the
rest of an article. We collect a massive news corpus and conduct data cleaning
and filtering via statistical analysis. We then apply self-supervised
pre-training on this dataset to existing generation models BART and T5 for
domain adaptation. Via extensive experiments on six benchmark datasets, we show
that this approach can dramatically improve the summarization quality and
achieve state-of-the-art results for zero-shot news summarization without any
fine-tuning. For example, in the DUC2003 dataset, the ROUGE-1 score of BART
increases 13.7% after the lead-bias pre-training. We deploy the model in
Microsoft News and provide public APIs as well as a demo website for
multi-lingual news summarization.
| 2,021 | Computation and Language |
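A sketch of the self-supervised example construction described above: the leading sentences of an unlabeled article become the pseudo-summary target and the remainder becomes the source. The regex sentence splitter and the choice of three lead sentences are simplifying assumptions:

```python
import re

def make_lead_bias_example(article, num_lead_sentences=3):
    """Turn one unlabeled news article into a (source, target) pretraining pair:
    predict the lead sentences from the rest of the article."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", article.strip()) if s.strip()]
    if len(sentences) <= num_lead_sentences:
        return None                                      # too short to form a useful pair
    target = " ".join(sentences[:num_lead_sentences])    # pseudo-summary
    source = " ".join(sentences[num_lead_sentences:])    # article body used as input
    return source, target

article = (
    "The city council approved a new transit plan on Monday. "
    "The plan adds three bus lines and extends light rail service. "
    "Officials expect construction to begin next spring. "
    "Funding comes from a mix of federal grants and local taxes. "
    "Residents at the meeting voiced both support and concerns about noise. "
    "A final environmental review is scheduled for the fall."
)
source, target = make_lead_bias_example(article)
print("TARGET:", target)
print("SOURCE:", source)
```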
N-gram Statistical Stemmer for Bangla Corpus | Stemming is a process that can be utilized to trim inflected words to stem or
root form. It is useful for enhancing the retrieval effectiveness, especially
for text search in order to solve the mismatch problems. Previous research on
Bangla stemming mostly relied on eliminating multiple suffixes from a solitary
word through a recursive rule-based procedure to recover a progressively more
applicable root. Our proposed system enhances this prior work by implementing
an N-gram stemming algorithm.
By utilizing an association measure called the Dice coefficient, related
sets of words are clustered depending on their character structure. The
smallest word in one cluster may be considered as the stem. We additionally
analyzed Affinity Propagation clustering algorithms with coefficient similarity
as well as with median similarity. Our results indicate that N-gram stemming
techniques are effective in general, yielding around 87% accurate
clusters.
| 2,019 | Computation and Language |
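A sketch of the character-bigram Dice coefficient used above to cluster morphologically related words, with the shortest cluster member taken as the stem; the greedy clustering and the English toy words are assumptions standing in for the Bangla pipeline:

```python
def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def dice(w1, w2):
    """Dice coefficient over character bigram sets."""
    a, b = bigrams(w1), bigrams(w2)
    if not a or not b:
        return 0.0
    return 2 * len(a & b) / (len(a) + len(b))

def cluster_by_dice(words, threshold=0.5):
    """Greedy clustering: each word joins the first cluster whose representative
    is similar enough; the shortest member of a cluster is taken as the stem."""
    clusters = []
    for w in sorted(words, key=len):
        for cluster in clusters:
            if dice(w, cluster[0]) >= threshold:
                cluster.append(w)
                break
        else:
            clusters.append([w])
    return {min(c, key=len): c for c in clusters}

words = ["play", "playing", "played", "player", "nation", "national", "nationality"]
print(cluster_by_dice(words))
```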
A Study of Multilingual Neural Machine Translation | Multilingual neural machine translation (NMT) has recently been investigated
from different aspects (e.g., pivot translation, zero-shot translation,
fine-tuning, or training from scratch) and in different settings (e.g., rich
resource and low resource, one-to-many, and many-to-one translation). This
paper concentrates on a deep understanding of multilingual NMT and conducts a
comprehensive study on a multilingual dataset with more than 20 languages. Our
results show that (1) low-resource language pairs benefit much from
multilingual training, while rich-resource language pairs may get hurt under
limited model capacity and training with similar languages benefits more than
dissimilar languages; (2) fine-tuning performs better than training from
scratch in the one-to-many setting while training from scratch performs better
in the many-to-one setting; (3) the bottom layers of the encoder and top layers
of the decoder capture more language-specific information, and just fine-tuning
these parts can achieve good accuracy for low-resource language pairs; (4)
direct translation is better than pivot translation when the source language is
similar to the target language (e.g., in the same language branch), even when
the size of direct training data is much smaller; (5) given a fixed training
data budget, it is better to introduce more languages into multilingual
training for zero-shot translation.
| 2,019 | Computation and Language |
Explicit Sparse Transformer: Concentrated Attention Through Explicit
Selection | The self-attention based Transformer has demonstrated state-of-the-art
performance in a number of natural language processing tasks. Self-attention
is able to model long-term dependencies, but it may suffer from the extraction
of irrelevant information in the context. To tackle the problem, we propose a
novel model called \textbf{Explicit Sparse Transformer}. Explicit Sparse
Transformer is able to improve the concentration of attention on the global
context through an explicit selection of the most relevant segments. Extensive
experimental results on a series of natural language processing and computer
vision tasks, including neural machine translation, image captioning, and
language modeling, all demonstrate the advantages of Explicit Sparse
Transformer in model performance. We also show that our proposed sparse
attention method achieves comparable or better results than the previous sparse
attention method, but significantly reduces training and testing time. For
example, the inference speed is twice that of sparsemax in the Transformer model.
Code will be available at
\url{https://github.com/lancopku/Explicit-Sparse-Transformer}
| 2,019 | Computation and Language |
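The explicit selection described in the abstract above can be read as a top-k filter on attention scores: for each query, only the k most relevant positions keep their scores and all others are masked out before the softmax. Below is a minimal PyTorch sketch of that reading; the choice of k and the placement inside a full Transformer layer are simplifications, not the authors' complete model.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(q, k, v, topk: int = 8):
    """Scaled dot-product attention restricted to the top-k scores per query;
    all other positions are masked to -inf before the softmax.

    q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    """
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5              # (b, h, q_len, k_len)
    topk = min(topk, scores.size(-1))
    threshold = scores.topk(topk, dim=-1).values[..., -1:]   # k-th largest per query
    masked = scores.masked_fill(scores < threshold, float("-inf"))
    attn = F.softmax(masked, dim=-1)
    return attn @ v

q = torch.randn(1, 2, 10, 16)
k = torch.randn(1, 2, 10, 16)
v = torch.randn(1, 2, 10, 16)
print(topk_sparse_attention(q, k, v, topk=4).shape)  # torch.Size([1, 2, 10, 16])
```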
Unity in Diversity: Learning Distributed Heterogeneous Sentence
Representation for Extractive Summarization | Automated multi-document extractive text summarization is a widely studied
research problem in the field of natural language understanding. Such
extractive mechanisms compute in some form the worthiness of a sentence to be
included in the summary. While conventional approaches rely on human-crafted
document-independent features to generate a summary, we develop a novel
data-driven summarization system called HNet, which exploits the various
semantic and compositional aspects latent in a sentence to capture document
independent features. The network learns sentence representations such that
salient sentences are closer in the vector space than non-salient sentences.
This semantic and compositional feature vector is then concatenated with the
document-dependent features for sentence ranking. Experiments on the DUC
benchmark datasets (DUC-2001, DUC-2002 and DUC-2004) indicate that our model
shows a significant performance gain of around 1.5-2 ROUGE points compared with
the state-of-the-art baselines.
| 2,019 | Computation and Language |
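The ranking stage described above concatenates a learned, document-independent sentence embedding with document-dependent features before scoring. The sketch below shows only that concatenate-and-score pattern with a single linear scorer; the feature set and the actual HNet architecture are richer than this, so treat the names and dimensions as placeholders.

```python
import torch
import torch.nn as nn

class SentenceRanker(nn.Module):
    """Score sentences from learned embeddings concatenated with
    document-dependent features (e.g. position, length, tf-idf mass)."""

    def __init__(self, embed_dim: int, n_doc_features: int):
        super().__init__()
        self.scorer = nn.Linear(embed_dim + n_doc_features, 1)

    def forward(self, sentence_embeddings, doc_features):
        # sentence_embeddings: (n_sentences, embed_dim)
        # doc_features:        (n_sentences, n_doc_features)
        combined = torch.cat([sentence_embeddings, doc_features], dim=-1)
        return self.scorer(combined).squeeze(-1)  # one saliency score per sentence

ranker = SentenceRanker(embed_dim=128, n_doc_features=3)
embeddings = torch.randn(5, 128)   # learned sentence vectors
features = torch.randn(5, 3)       # document-dependent features
scores = ranker(embeddings, features)
print(scores.topk(2).indices)      # the two most summary-worthy sentences
```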
Hybrid MemNet for Extractive Summarization | Extractive text summarization has been an extensively studied research problem in the
field of natural language understanding. While the conventional approaches rely
mostly on manually compiled features to generate the summary, few attempts have
been made in developing data-driven systems for extractive summarization. To
this end, we present a fully data-driven, end-to-end deep network, which we call
Hybrid MemNet, for the single-document summarization task. The network learns the
continuous unified representation of a document before generating its summary.
It jointly captures local and global sentential information along with the
notion of summary-worthy sentences. Experimental results on two different
corpora confirm that our model shows significant performance gains compared
with the state-of-the-art baselines.
| 2,017 | Computation and Language |
Coursera Corpus Mining and Multistage Fine-Tuning for Improving Lectures
Translation | Lectures translation is a case of spoken language translation, and there is a
lack of publicly available parallel corpora for this purpose. To address this,
we examine a language-independent framework for parallel corpus mining, which is
a quick and effective way to mine a parallel corpus from publicly available
lectures at Coursera. Our approach determines sentence alignments, relying on
machine translation and cosine similarity over continuous-space sentence
representations. We also show how to use the resulting corpora in a multistage
fine-tuning based domain adaptation for high-quality lectures translation. For
Japanese--English lectures translation, we extracted parallel data of
approximately 40,000 lines and created development and test sets through manual
filtering for benchmarking translation performance. We demonstrate that the
mined corpus greatly enhances the quality of translation when used in
conjunction with out-of-domain parallel corpora via multistage training. This
paper also suggests some guidelines to gather and clean corpora, mine parallel
sentences, address noise in the mined data, and create high-quality evaluation
splits. For the sake of reproducibility, we will release our code for parallel
data creation.
| 2,020 | Computation and Language |
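The alignment step sketched in the abstract above boils down to scoring candidate sentence pairs by cosine similarity between continuous sentence representations (after machine-translating one side so the two are comparable) and keeping the best matches. The Python sketch below assumes the embeddings are already computed and uses a greedy one-to-one matching with an illustrative threshold; the embedding model, translation step, and exact matching procedure in the paper may differ.

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

def align_sentences(src_vecs, tgt_vecs, threshold: float = 0.7):
    """Greedy one-to-one alignment: each source sentence is paired with its most
    similar, still-unused target sentence if the similarity clears the threshold."""
    used, pairs = set(), []
    for i, sv in enumerate(src_vecs):
        candidates = [(cosine(sv, tv), j) for j, tv in enumerate(tgt_vecs) if j not in used]
        if not candidates:
            break
        best_score, best_j = max(candidates)
        if best_score >= threshold:
            pairs.append((i, best_j, best_score))
            used.add(best_j)
    return pairs

# Toy vectors standing in for sentence embeddings of lecture transcripts.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 16))
tgt = src[[1, 0, 3, 2]] + 0.05 * rng.normal(size=(4, 16))  # shuffled, noisy copies
for i, j, s in align_sentences(src, tgt):
    print(f"src[{i}] <-> tgt[{j}]  cos={s:.2f}")
```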
Language Independent Sentiment Analysis | Social media platforms and online forums generate a rapidly increasing amount
of textual data. Businesses, government agencies, and media organizations seek
to perform sentiment analysis on this rich text data. The results of these
analytics are used for adapting marketing strategies, customizing products,
improving security, and making various other decisions. Sentiment analysis has
been extensively studied, and various methods have been developed for it with
great success. These methods, however, apply to texts written in a specific
language. This limits their applicability to a particular demographic and a
specific geographic
region. In this paper we propose a general approach for sentiment analysis on
data containing texts from multiple languages. This enables applications to
utilize the results of sentiment analysis in a language-oblivious or
language-independent fashion.
| 2,020 | Computation and Language |
Clinical XLNet: Modeling Sequential Clinical Notes and Predicting
Prolonged Mechanical Ventilation | Clinical notes contain rich data, which remains largely unexploited in
predictive modeling compared to structured data. In this work, we developed a
new text representation, Clinical XLNet, for clinical notes, which also
leverages the temporal information of the sequence of notes. We evaluated our
models on the prolonged mechanical ventilation prediction problem, and our
experiments
demonstrated that Clinical XLNet outperforms the best baselines consistently.
| 2,019 | Computation and Language |
Explicit Sentence Compression for Neural Machine Translation | State-of-the-art Transformer-based neural machine translation (NMT) systems
still follow a standard encoder-decoder framework, in which the source sentence
representation can be handled well by an encoder with a self-attention mechanism.
Though a Transformer-based encoder may effectively capture general information
in its resulting source sentence representation, the backbone information,
which conveys the gist of a sentence, is not specifically focused on. In this
paper, we propose an explicit sentence compression method to enhance the source
sentence representation for NMT. In practice, an explicit sentence compression
objective is used to learn the backbone information in a sentence. We propose three
ways, including backbone source-side fusion, target-side fusion, and both-side
fusion, to integrate the compressed sentence into NMT. Our empirical tests on
the WMT English-to-French and English-to-German translation tasks show that the
proposed sentence compression method significantly improves translation
performance over strong baselines.
| 2,019 | Computation and Language |
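One plausible reading of the source-side fusion mentioned above is that the encoding of the compressed "backbone" sentence is merged into the encoder output of the full sentence before decoding. The PyTorch sketch below fuses the two encodings with a learned gate; this fusion operator and the tensor shapes are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SourceSideFusion(nn.Module):
    """Fuse the encoder states of the full source sentence with a pooled
    encoding of its compressed backbone via a learned gate."""

    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, full_enc, backbone_summary):
        # full_enc:         (batch, src_len, d_model) encoder states of the full sentence
        # backbone_summary: (batch, d_model) pooled encoding of the compressed sentence
        expanded = backbone_summary.unsqueeze(1).expand_as(full_enc)
        g = torch.sigmoid(self.gate(torch.cat([full_enc, expanded], dim=-1)))
        return g * full_enc + (1 - g) * expanded  # fused source representation

fusion = SourceSideFusion(d_model=32)
full_enc = torch.randn(2, 7, 32)
backbone = torch.randn(2, 32)
print(fusion(full_enc, backbone).shape)  # torch.Size([2, 7, 32])
```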
Synthesising Expressiveness in Peking Opera via Duration Informed
Attention Network | This paper presents a method that generates expressive singing voices for
Peking opera. The synthesis of expressive opera singing usually requires pitch
contours to be extracted as training data, which depends on automatic
extraction techniques and cannot be manually labeled. With the Duration
Informed Attention Network (DurIAN), this paper makes use of musical notes
instead of pitch contours for expressive opera singing synthesis. The proposed
method enables human annotations to be combined with automatically extracted
features as training data, thus giving extra flexibility in data collection for
Peking opera singing synthesis. Compared with the expressive singing voice of
Peking opera synthesised by a pitch-contour-based system, the proposed
musical-note-based system produces a comparable singing voice with
expressiveness in various aspects.
| 2,019 | Computation and Language |