Titles | Abstracts | Years | Categories |
---|---|---|---|
Fact-aware Sentence Split and Rephrase with Permutation Invariant
Training | Sentence Split and Rephrase aims to break down a complex sentence into
several simple sentences with its meaning preserved. Previous studies tend to
address the issue by seq2seq learning from parallel sentence pairs, which takes
a complex sentence as input and sequentially generates a series of simple
sentences. However, the conventional seq2seq learning has two limitations for
this task: (1) it does not take into account the facts stated in the long
sentence; as a result, the generated simple sentences may miss or inaccurately
state the facts in the original sentence. (2) The order variance of the simple
sentences to be generated may confuse the seq2seq model during training because
the simple sentences derived from the long source sentence could be in any
order.
To overcome the challenges, we first propose the Fact-aware Sentence
Encoding, which enables the model to learn facts from the long sentence and
thus improves the precision of sentence split; then we introduce Permutation
Invariant Training to alleviate the effects of order variance in seq2seq
learning for this task. Experiments on the WebSplit-v1.0 benchmark dataset show
that our approaches substantially improve performance over previous
seq2seq learning approaches. Moreover, an extrinsic evaluation on oie-benchmark
verifies the effectiveness of our approaches by showing that splitting
long sentences with our state-of-the-art model as a preprocessing step helps
improve OpenIE performance.
| 2020 | Computation and Language |
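The permutation invariant training idea mentioned in the abstract above has a standard generic formulation: compute the seq2seq loss under every ordering of the target simple sentences and keep the minimum. The sketch below is that generic objective, not necessarily the exact loss used in the paper.

```latex
% Generic permutation invariant training objective (hedged sketch):
% n generated simple sentences \hat{y}_i, n reference sentences y_j,
% \Pi_n = set of permutations of {1,...,n}, \ell = per-sentence seq2seq loss.
\mathcal{L}_{\mathrm{PIT}}
  \;=\; \min_{\pi \in \Pi_n} \; \sum_{i=1}^{n} \ell\bigl(\hat{y}_i,\, y_{\pi(i)}\bigr)
```

Taking the minimum over orderings means the model is not penalized for emitting the simple sentences in a different order than the reference.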
Unsupervised Sentiment Analysis for Code-mixed Data | Code-mixing is the practice of alternating between two or more languages.
Mostly observed in multilingual societies, its occurrence, and therefore its
importance, is increasing. A major part of sentiment analysis research has been
monolingual, and most such models perform poorly on code-mixed text. In this work,
we introduce methods that use different kinds of multilingual and cross-lingual
embeddings to efficiently transfer knowledge from monolingual text to
code-mixed text for sentiment analysis. Our methods can
handle code-mixed text through zero-shot learning. Our methods beat the
state of the art on English-Spanish code-mixed sentiment analysis by an absolute
3% F1-score. We are able to achieve 0.58 F1-score (without parallel corpus)
and 0.62 F1-score (with parallel corpus) on the same benchmark in a zero-shot
way as compared to 0.68 F1-score in supervised settings. Our code is publicly
available.
| 2021 | Computation and Language |
Parameter Space Factorization for Zero-Shot Learning across Tasks and
Languages | Most combinations of NLP tasks and language varieties lack in-domain examples
for supervised training because of the paucity of annotated data. How can
neural models make sample-efficient generalizations from task-language
combinations with available data to low-resource ones? In this work, we propose
a Bayesian generative model for the space of neural parameters. We assume that
this space can be factorized into latent variables for each language and each
task. We infer the posteriors over such latent variables based on data from
seen task-language combinations through variational inference. This enables
zero-shot classification on unseen combinations at prediction time. For
instance, given training data for named entity recognition (NER) in Vietnamese
and for part-of-speech (POS) tagging in Wolof, our model can perform accurate
predictions for NER in Wolof. In particular, we experiment with a typologically
diverse sample of 33 languages from 4 continents and 11 families, and show that
our model yields comparable or better results than state-of-the-art, zero-shot
cross-lingual transfer methods. Moreover, we demonstrate that approximate
Bayesian model averaging results in smoother predictive distributions, whose
entropy inversely correlates with accuracy. Hence, the proposed framework also
offers robust estimates of prediction uncertainty. Our code is located at
github.com/cambridgeltl/parameter-factorization
| 2020 | Computation and Language |
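The abstract above factorizes the space of neural parameters into per-task and per-language latent variables and infers them with variational inference. As a rough, purely illustrative reading of that idea (the class, its names, and the deterministic combination below are assumptions, not the authors' Bayesian model), a task embedding and a language embedding can be combined by a small hypernetwork to produce the classifier weights for an unseen task-language pair:

```python
import torch
import torch.nn as nn

class FactorizedTagger(nn.Module):
    """Toy sketch: per-task and per-language latent vectors are combined to
    generate the weights of a token classifier (e.g. NER or POS tagging)."""

    def __init__(self, n_tasks, n_langs, d_latent, d_hidden, d_out):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, d_latent)   # latent factor per task
        self.lang_emb = nn.Embedding(n_langs, d_latent)   # latent factor per language
        self.to_weights = nn.Linear(2 * d_latent, d_hidden * d_out)
        self.d_hidden, self.d_out = d_hidden, d_out

    def forward(self, encoded_tokens, task_id, lang_id):
        # encoded_tokens: (seq_len, d_hidden) contextual token representations
        z = torch.cat([self.task_emb(task_id), self.lang_emb(lang_id)], dim=-1)
        W = self.to_weights(z).view(self.d_out, self.d_hidden)
        return encoded_tokens @ W.t()                      # (seq_len, d_out) logits

# Zero-shot use: combine the "NER" task factor with the "Wolof" language factor,
# even if that task-language pair was never seen during training.
```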
Don't Parse, Generate! A Sequence to Sequence Architecture for
Task-Oriented Semantic Parsing | Virtual assistants such as Amazon Alexa, Apple Siri, and Google Assistant
often rely on a semantic parsing component to understand which action(s) to
execute for an utterance spoken by their users. Traditionally, rule-based or
statistical slot-filling systems have been used to parse "simple" queries; that
is, queries that contain a single action and can be decomposed into a set of
non-overlapping entities. More recently, shift-reduce parsers have been
proposed to process more complex utterances. These methods, while powerful,
impose specific limitations on the type of queries that can be parsed; namely,
they require a query to be representable as a parse tree.
In this work, we propose a unified architecture based on Sequence to Sequence
models and a Pointer Generator Network to handle both simple and complex queries.
Unlike other works, our approach does not impose any restriction on the
semantic parse schema. Furthermore, experiments show that it achieves state of
the art performance on three publicly available datasets (ATIS, SNIPS, Facebook
TOP), relatively improving between 3.3% and 7.7% in exact match accuracy over
previous systems. Finally, we show the effectiveness of our approach on two
internal datasets.
| 2020 | Computation and Language |
Self-Adversarial Learning with Comparative Discrimination for Text
Generation | Conventional Generative Adversarial Networks (GANs) for text generation tend
to have issues of reward sparsity and mode collapse that affect the quality and
diversity of generated samples. To address the issues, we propose a novel
self-adversarial learning (SAL) paradigm for improving GANs' performance in
text generation. In contrast to standard GANs that use a binary classifier as
their discriminator to predict whether a sample is real or generated, SAL employs
a comparative discriminator which is a pairwise classifier for comparing the
text quality between a pair of samples. During training, SAL rewards the
generator when its currently generated sentence is found to be better than its
previously generated samples. This self-improvement reward mechanism allows the
model to receive credits more easily and avoid collapsing towards the limited
number of real samples, which not only helps alleviate the reward sparsity
issue but also reduces the risk of mode collapse. Experiments on text
generation benchmark datasets show that our proposed approach substantially
improves both the quality and the diversity of generated samples, and yields more
stable performance compared to previous GANs for text generation.
| 2020 | Computation and Language |
Pseudo-Bidirectional Decoding for Local Sequence Transduction | Local sequence transduction (LST) tasks are sequence transduction tasks where
there is massive overlap between the source and target sequences, such
as Grammatical Error Correction (GEC) and spell or OCR correction. Previous
work generally tackles LST tasks with standard sequence-to-sequence (seq2seq)
models that generate output tokens from left to right and suffer from the issue
of unbalanced outputs. Motivated by the characteristic of LST tasks, in this
paper, we propose a simple but versatile approach named Pseudo-Bidirectional
Decoding (PBD) for LST tasks. PBD copies the corresponding representations of
source tokens to the decoder as pseudo future context, enabling the decoder to
attend to its bi-directional context. In addition, the bidirectional decoding
scheme and the characteristic of LST tasks motivate us to share the encoder and
the decoder of seq2seq models. The proposed PBD approach provides right side
context information for the decoder and models the inductive bias of LST tasks,
reducing the number of parameters by half and providing good regularization
effects. Experimental results on several benchmark datasets show that our
approach consistently improves the performance of standard seq2seq models on
LST tasks.
| 2020 | Computation and Language |
Teaching Machines to Converse | The ability of a machine to communicate with humans has long been associated
with the general success of AI. This dates back to Alan Turing's epoch-making
work in the early 1950s, which proposed that a machine's intelligence can be
tested by how well the machine can fool a human, through dialogue, into
believing that it is itself human. Many systems learn
generation rules from a minimal set of authored rules or labels on top of
hand-coded rules or templates, and thus are both expensive and difficult to
extend to open-domain scenarios. Recently, the emergence of neural network
models offers the potential to solve many of the problems in dialogue learning that
earlier systems cannot tackle: the end-to-end neural frameworks offer the
promise of scalability and language-independence, together with the ability to
track the dialogue state and then map between states and dialogue actions
in a way not possible with conventional systems. On the other hand, neural
systems bring about new challenges: they tend to output dull and generic
responses; they lack a consistent or a coherent persona; they are usually
optimized through single-turn conversations and are incapable of handling the
long-term success of a conversation; and they are not able to take
advantage of interactions with humans. This dissertation attempts to tackle
these challenges: Contributions are two-fold: (1) we address new challenges
presented by neural network models in open-domain dialogue generation systems;
(2) we develop interactive question-answering dialogue systems by (a) giving
the agent the ability to ask questions and (b) training a conversation agent
through interactions with humans in an online fashion, where a bot improves
through communicating with humans and learning from the mistakes that it makes.
| 2020 | Computation and Language |
Break It Down: A Question Understanding Benchmark | Understanding natural language questions entails the ability to break down a
question into the requisite steps for computing its answer. In this work, we
introduce a Question Decomposition Meaning Representation (QDMR) for questions.
QDMR constitutes the ordered list of steps, expressed through natural language,
that are necessary for answering a question. We develop a crowdsourcing
pipeline, showing that quality QDMRs can be annotated at scale, and release the
Break dataset, containing over 83K pairs of questions and their QDMRs. We
demonstrate the utility of QDMR by showing that (a) it can be used to improve
open-domain question answering on the HotpotQA dataset, (b) it can be
deterministically converted to a pseudo-SQL formal language, which can
alleviate annotation in semantic parsing applications. Last, we use Break to
train a sequence-to-sequence model with copying that parses questions into QDMR
structures, and show that it substantially outperforms several natural
baselines.
| 2020 | Computation and Language |
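To make the QDMR representation described above concrete, here is a small, hypothetical decomposition in the spirit of the Break dataset; the question and its steps are invented for illustration and are not taken from the dataset itself.

```python
# Hypothetical QDMR-style decomposition: an ordered list of natural-language
# steps, where "#k" refers to the result of step k.
question = "Which city that hosted the Olympics has the largest population?"
qdmr_steps = [
    "return cities that hosted the Olympics",    # step 1
    "return population of #1",                   # step 2
    "return #1 where #2 is largest",             # step 3
]
```

Each step is simple enough to be answered (or mapped to a pseudo-SQL operation) on its own, which is what makes the representation useful for open-domain QA and semantic parsing.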
Hybrid Tiled Convolutional Neural Networks for Text Sentiment
Classification | The tiled convolutional neural network (tiled CNN) has been applied only to
computer vision for learning invariances. We adjust its architecture to NLP to
improve the extraction of the most salient features for sentiment analysis.
Knowing that the major drawback of the tiled CNN in the NLP field is its
inflexible filter structure, we propose a novel architecture called hybrid
tiled CNN that applies a filter only to the words that appear in similar
contexts and to their neighboring words (a necessary step for preventing the loss
of some n-grams). The experiments on the datasets of IMDB movie reviews and
SemEval 2017 demonstrate the effectiveness of the hybrid tiled CNN, which performs
better than both the standard CNN and the tiled CNN.
| 2020 | Computation and Language |
An efficient automated data analytics approach to large scale
computational comparative linguistics | This research project aimed to overcome the challenge of analysing human
language relationships, facilitate the grouping of languages, and enable the formation of
genealogical relationships between them by developing automated comparison
techniques. The techniques were based on the phonetic representation of certain key
words and concepts. Example word sets included numbers 1-10 (curated), a large
database of numbers 1-10 and sheep-counting numbers 1-10 (other sources),
colours (curated), and basic words (curated).
To enable comparison within the sets, the measure of edit distance was
calculated based on the Levenshtein distance metric. This metric between two
strings is the minimum number of single-character edits (insertions, deletions,
or substitutions) needed to transform one string into the other. To explore which words exhibit more or
less variation, which words are more preserved and examine how languages could
be grouped based on linguistic distances within sets, several data analytics
techniques were involved. Those included density evaluation, hierarchical
clustering, silhouette, mean, standard deviation and Bhattacharya coefficient
calculations. These techniques led to the development of a workflow which was
later implemented by combining Unix shell scripts, a developed R package and
SWI Prolog. This proved to be computationally efficient and permitted the fast
exploration of large language sets and their analysis.
| 2020 | Computation and Language |
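The Levenshtein metric described in the abstract above has a standard dynamic-programming implementation. The minimal Python version below is independent of the report's actual shell/R/Prolog workflow and is shown only to make the definition concrete.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions, or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))            # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# e.g. comparing phonetic renderings of the same numeral in two languages
assert levenshtein("tres", "trois") == 2
```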
Pretrained Transformers for Simple Question Answering over Knowledge
Graphs | Answering simple questions over knowledge graphs is a well-studied problem in
question answering. Previous approaches for this task built on recurrent and
convolutional neural network based architectures that use pretrained word
embeddings. It was recently shown that finetuning pretrained transformer
networks (e.g. BERT) can outperform previous approaches on various natural
language processing tasks. In this work, we investigate how well BERT performs
on SimpleQuestions and provide an evaluation of both BERT and BiLSTM-based
models in data-sparse scenarios.
| 2020 | Computation and Language |
Unsupervised Bilingual Lexicon Induction Across Writing Systems | Recent embedding-based methods in unsupervised bilingual lexicon induction
have shown good results, but generally have not leveraged orthographic
(spelling) information, which can be helpful for pairs of related languages.
This work augments a state-of-the-art method with orthographic features, and
extends prior work in this space by proposing methods that can learn and
utilize orthographic correspondences even between languages with different
scripts. We demonstrate this by experimenting on three language pairs with
different scripts and varying degrees of lexical similarity.
| 2020 | Computation and Language |
Improving Domain-Adapted Sentiment Classification by Deep Adversarial
Mutual Learning | Domain-adapted sentiment classification refers to training on a labeled
source domain in order to infer document-level sentiment well on an unlabeled target
domain. Most existing relevant models involve a feature extractor and a
sentiment classifier, where the feature extractor works towards learning
domain-invariant features from both domains, and the sentiment classifier is
trained only on the source domain to guide the feature extractor. As such, they
lack a mechanism to use sentiment polarity lying in the target domain. To
improve domain-adapted sentiment classification by learning sentiment from the
target domain as well, we devise a novel deep adversarial mutual learning
approach involving two groups of feature extractors, domain discriminators,
sentiment classifiers, and label probers. The domain discriminators enable the
feature extractors to obtain domain-invariant features. Meanwhile, the label
prober in each group explores document sentiment polarity of the target domain
through the sentiment prediction generated by the classifier in the peer group,
and guides the learning of the feature extractor in its own group. The proposed
approach achieves the mutual learning of the two groups in an end-to-end
manner. Experiments on multiple public datasets indicate our method obtains the
state-of-the-art performance, validating the effectiveness of mutual learning
through label probers.
| 2020 | Computation and Language |
Bridging Text and Video: A Universal Multimodal Transformer for
Video-Audio Scene-Aware Dialog | Audio-Visual Scene-Aware Dialog (AVSD) is a task to generate responses when
chatting about a given video, which is organized as a track of the 8th Dialog
System Technology Challenge (DSTC8). To solve the task, we propose a universal
multimodal transformer and introduce the multi-task learning method to learn
joint representations among different modalities as well as generate
informative and fluent responses. Our method extends a pre-trained natural
language generation model to the multimodal dialogue generation task. Our system
achieves the best performance in both objective and subjective evaluations in
the challenge.
| 2020 | Computation and Language |
Novel Language Resources for Hindi: An Aesthetics Text Corpus and a
Comprehensive Stop Lemma List | This paper is an effort to complement the contributions made by researchers
working toward the inclusion of non-English languages in natural language
processing studies. Two novel Hindi language resources have been created and
released for public consumption. The first resource is a corpus consisting of
nearly a thousand pre-processed fictional and nonfictional texts spanning over a
hundred years. The second resource is an exhaustive list of stop lemmas created
from 12 corpora across multiple domains, consisting of over 13 million words,
from which more than 200,000 lemmas were generated, and 11 publicly available
stop word lists comprising over 1000 words, from which nearly 400 unique lemmas
were generated. This research lays emphasis on the use of stop lemmas instead
of stop words: stop word lists contain various, but not all, morphological forms
of a word, whereas a stop lemma list contains only the root form of each word,
from which variations can be derived if required. It was also
observed that stop lemmas were more consistent across multiple sources as
compared to stop words. In order to generate a stop lemma list, the parts of
speech of the lemmas were investigated but rejected as it was found that there
was no significant correlation between the rank of a word in the frequency list
and its part of speech. The stop lemma list was assessed using a comparative
method. A formal evaluation method is suggested as future work arising from
this study.
| 2022 | Computation and Language |
UIT-ViIC: A Dataset for the First Evaluation on Vietnamese Image
Captioning | Image Captioning, the task of automatically generating image captions, has
attracted attention from researchers in many fields of computer science, including
computer vision, natural language processing and machine learning, in recent
years. This paper contributes to research on the Image Captioning task by
extending the available datasets to a different language - Vietnamese. So far, no
Image Captioning dataset has existed for the Vietnamese language, so this is a
foremost fundamental step for developing Vietnamese Image Captioning. In this
scope, we first build a dataset which contains manually written captions for
images from the Microsoft COCO dataset relating to sports played with balls; we
call this dataset UIT-ViIC. UIT-ViIC consists of 19,250 Vietnamese captions
for 3,850 images. Following that, we evaluate our dataset on deep neural
network models and compare it with an English dataset and two Vietnamese
datasets built by different methods. UIT-ViIC is published on our lab website
for research purposes.
| 2020 | Computation and Language |
Fine-Tuning BERT for Schema-Guided Zero-Shot Dialogue State Tracking | We present our work on Track 4 in the Dialogue System Technology Challenges 8
(DSTC8). The DSTC8-Track 4 aims to perform dialogue state tracking (DST) under
the zero-shot settings, in which the model needs to generalize on unseen
service APIs given a schema definition of these target APIs. Serving as the
core for many virtual assistants such as Siri, Alexa, and Google Assistant, the
DST keeps track of the user's goal and what happened in the dialogue history,
mainly including intent prediction, slot filling, and user state tracking,
which tests models' ability of natural language understanding. Recently, the
pretrained language models have achieved state-of-the-art results and shown
impressive generalization ability on various NLP tasks, which provide a
promising way to perform zero-shot learning for language understanding. Based
on this, we propose a schema-guided paradigm for zero-shot dialogue state
tracking (SGP-DST) by fine-tuning BERT, one of the most popular pretrained
language models. The SGP-DST system contains four modules for intent
prediction, slot prediction, slot transfer prediction, and user state
summarizing respectively. According to the official evaluation results, our
SGP-DST (team12) ranked 3rd on the joint goal accuracy (primary evaluation
metric for ranking submissions) and 1st on the requested slots F1 among 25
participant teams.
| 2020 | Computation and Language |
Deep segmental phonetic posterior-grams based discovery of
non-categories in L2 English speech | Second language (L2) speech is often labeled with native phone
categories. However, in many cases, it is difficult to decide on a categorical
phone that an L2 segment belongs to. These segments are regarded as
non-categories. Most existing approaches for Mispronunciation Detection and
Diagnosis (MDD) are only concerned with categorical errors, i.e. a phone
category is inserted, deleted or substituted by another. However,
non-categorical errors are not considered. To model these non-categorical
errors, this work aims at exploring non-categorical patterns to extend the
categorical phone set. We apply a phonetic segment classifier to generate
segmental phonetic posterior-grams (SPPGs) to represent phone segment-level
information. We then explore the non-categories by looking for the SPPGs
with more than one peak. Compared with the baseline system, this approach
explores more non-categorical patterns; in addition, perceptual experimental
results show that the explored non-categories are more accurate, with the
confusion degree increased by 7.3% and 7.5% under two different measures. Finally, we
preliminarily analyze the reason behind those non-categories.
| 2020 | Computation and Language |
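The abstract above flags a segment as a candidate non-category when its segmental phonetic posterior-gram has more than one peak. A minimal sketch of such a test over one segment's posterior vector could look as follows; the threshold value and the "probability above threshold" definition of a peak are illustrative assumptions, not the paper's exact criterion.

```python
def count_peaks(posteriors, threshold=0.2):
    """Count phone classes whose posterior probability exceeds a threshold
    in one segment-level distribution over the phone set."""
    return sum(1 for p in posteriors if p >= threshold)

segment_sppg = [0.46, 0.41, 0.05, 0.04, 0.02, 0.02]   # toy posterior-gram
if count_peaks(segment_sppg) > 1:
    print("candidate non-categorical segment")
```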
Beat the AI: Investigating Adversarial Human Annotation for Reading
Comprehension | Innovations in annotation methodology have been a catalyst for Reading
Comprehension (RC) datasets and models. One recent trend to challenge current
RC models is to involve a model in the annotation process: humans create
questions adversarially, such that the model fails to answer them correctly. In
this work we investigate this annotation methodology and apply it in three
different settings, collecting a total of 36,000 samples with progressively
stronger models in the annotation loop. This allows us to explore questions
such as the reproducibility of the adversarial effect, transfer from data
collected with varying model-in-the-loop strengths, and generalisation to data
collected without a model. We find that training on adversarially collected
samples leads to strong generalisation to non-adversarially collected datasets,
yet with progressive performance deterioration with increasingly stronger
models-in-the-loop. Furthermore, we find that stronger models can still learn
from datasets collected with substantially weaker models-in-the-loop. When
trained on data collected with a BiDAF model in the loop, RoBERTa achieves
39.9 F1 on questions that it cannot answer when trained on SQuAD - only
marginally lower than when trained on data collected using RoBERTa itself
(41.0 F1).
| 2020 | Computation and Language |
Explaining Relationships Between Scientific Documents | We address the task of explaining relationships between two scientific
documents using natural language text. This task requires modeling the complex
content of long technical documents, deducing a relationship between these
documents, and expressing the details of that relationship in text. In addition
to the theoretical interest of this task, successful solutions can help improve
researcher efficiency in search and review. In this paper we establish a
dataset of 622K examples from 154K documents. We pretrain a large language
model to serve as the foundation for autoregressive approaches to the task. We
explore the impact of taking different views on the two documents, including
the use of dense representations extracted with scientific IE systems. We
provide extensive automatic and human evaluations which show the promise of
such models, but also make clear the challenges that remain for future work.
| 2021 | Computation and Language |
A Survey on Knowledge Graphs: Representation, Acquisition and
Applications | Human knowledge provides a formal understanding of the world. Knowledge
graphs that represent structural relations between entities have become an
increasingly popular research direction towards cognition and human-level
intelligence. In this survey, we provide a comprehensive review of knowledge
graphs, covering overall research topics on 1) knowledge graph representation
learning, 2) knowledge acquisition and completion, 3) temporal knowledge graphs,
and 4) knowledge-aware applications, and summarize recent breakthroughs and
perspective directions to facilitate future research. We propose a full-view
categorization and new taxonomies on these topics. Knowledge graph embedding is
organized from four aspects of representation space, scoring function, encoding
models, and auxiliary information. For knowledge acquisition, especially
knowledge graph completion, embedding methods, path inference, and logical rule
reasoning are reviewed. We further explore several emerging topics, including
meta relational learning, commonsense reasoning, and temporal knowledge graphs.
To facilitate future research on knowledge graphs, we also provide a curated
collection of datasets and open-source libraries on different tasks. In the
end, we offer a thorough outlook on several promising research directions.
| 2021 | Computation and Language |
Assessment of Amazon Comprehend Medical: Medication Information
Extraction | On November 27, 2018, Amazon Web Services (AWS) released Amazon Comprehend
Medical (ACM), a deep learning based system that automatically extracts
clinical concepts (which include anatomy, medical conditions, protected health
information (PHI), test names, treatment names, medical procedures, and
medications) from clinical text notes. Uptake and trust in any new data product
relies on independent validation across benchmark datasets and tools to
establish and confirm expected quality of results. This work focuses on the
medication extraction task, and particularly, ACM was evaluated using the
official test sets from the 2009 i2b2 Medication Extraction Challenge and 2018
n2c2 Track 2: Adverse Drug Events and Medication Extraction in EHRs. Overall,
ACM achieved F-scores of 0.768 and 0.828. These scores ranked the lowest when
compared to the three best systems in the respective challenges. To further
establish the generalizability of its medication extraction performance, a set
of random internal clinical text notes from NYU Langone Medical Center was
also included in this work. On this corpus, ACM garnered an F-score of
0.753.
| 2020 | Computation and Language |
Phylogenetic signal in phonotactics | Phylogenetic methods have broad potential in linguistics beyond tree
inference. Here, we show how a phylogenetic approach opens the possibility of
gaining historical insights from entirely new kinds of linguistic data--in this
instance, statistical phonotactics. We extract phonotactic data from 111
Pama-Nyungan vocabularies and apply tests for phylogenetic signal, quantifying
the degree to which the data reflect phylogenetic history. We test three
datasets: (1) binary variables recording the presence or absence of biphones
(two-segment sequences) in a lexicon, (2) frequencies of transitions between
segments, and (3) frequencies of transitions between natural sound classes.
Australian languages have been characterized as having a high degree of
phonotactic homogeneity. Nevertheless, we detect phylogenetic signal in all
datasets. Phylogenetic signal is greater in finer-grained frequency data than
in binary data, and greatest in natural-class-based data. These results
demonstrate the viability of employing a new source of readily extractable data
in historical and comparative linguistics.
| 2021 | Computation and Language |
Bertrand-DR: Improving Text-to-SQL using a Discriminative Re-ranker | To access data stored in relational databases, users need to understand the
database schema and write a query using a query language such as SQL. To
simplify this task, text-to-SQL models attempt to translate a user's natural
language question to the corresponding SQL query. Recently, several generative
text-to-SQL models have been developed. We propose a novel discriminative
re-ranker to improve the performance of generative text-to-SQL models by
extracting the best SQL query from the beam output predicted by the text-to-SQL
generator, resulting in improved performance in the cases where the best query
was in the candidate list, but not at the top of the list. We build the
re-ranker as a schema-agnostic fine-tuned BERT classifier. We analyze relative
strengths of the text-to-SQL and re-ranker models across different query
hardness levels, and suggest how to combine the two models for optimal
performance. We demonstrate the effectiveness of the re-ranker by applying it
to two state-of-the-art text-to-SQL models, and achieve top 4 score on the
Spider leaderboard at the time of writing this article.
| 2020 | Computation and Language |
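The re-ranking step described above can be sketched generically: score every candidate SQL query in the beam with a classifier and return the highest-scoring one. The `score_fn` below is a placeholder standing in for the fine-tuned BERT classifier; this is a sketch of the general recipe, not the authors' implementation.

```python
from typing import Callable, List

def rerank_beam(question: str,
                beam_candidates: List[str],
                score_fn: Callable[[str, str], float]) -> str:
    """Return the candidate SQL query that the discriminative re-ranker
    scores highest for the given natural language question.

    score_fn(question, sql) should give the classifier's probability that
    `sql` is a correct query for `question`.
    """
    return max(beam_candidates, key=lambda sql: score_fn(question, sql))
```

When the correct query is present in the beam but not ranked first by the generator, this step is exactly what recovers it.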
CoTK: An Open-Source Toolkit for Fast Development and Fair Evaluation of
Text Generation | In text generation evaluation, many practical issues, such as inconsistent
experimental settings and metric implementations, are often ignored but lead to
unfair evaluation and untenable conclusions. We present CoTK, an open-source
toolkit aiming to support fast development and fair evaluation of text
generation. In model development, CoTK helps handle the cumbersome issues, such
as data processing, metric implementation, and reproduction. It standardizes
the development steps and reduces human errors which may lead to inconsistent
experimental settings. In model evaluation, CoTK provides implementation for
many commonly used metrics and benchmark models across different experimental
settings. As a unique feature, CoTK can signify when and which metric cannot be
fairly compared. We demonstrate that it is convenient to use CoTK for model
development and evaluation, particularly across different experimental
settings.
| 2020 | Computation and Language |
How Far are We from Effective Context Modeling? An Exploratory Study on
Semantic Parsing in Context | Recently, semantic parsing in context has received considerable attention;
the task is challenging because it involves complex contextual phenomena. Previous
works verified their proposed methods in limited scenarios, which motivates us
to conduct an exploratory study on context modeling methods under real-world
semantic parsing in context. We present a grammar-based decoding semantic
parser and adapt typical context modeling methods on top of it. We evaluate 13
context modeling methods on two large complex cross-domain datasets, and our
best model achieves state-of-the-art performance on both datasets with
significant improvements. Furthermore, we summarize the most frequent
contextual phenomena, with a fine-grained analysis on representative models,
which may shed light on potential research directions. Our code is available at
https://github.com/microsoft/ContextualSP.
| 2020 | Computation and Language |
Traduction des Grammaires Catégorielles de Lambek dans les Grammaires
Catégorielles Abstraites | Lambek Grammars (LG) are a computational modelling of natural language, based
on non-commutative compositional types. They have been widely studied, especially
for languages where syntax plays a major role (like English). The goal of
this internship report is to demonstrate that every Lambek Grammar can be, not
entirely but efficiently, expressed in Abstract Categorial Grammars (ACG). The
latter is a novel modelling based on higher-order signature homomorphisms
(using $\lambda$-calculus), aiming at uniting the currently used models. The
main idea is to transform the type rewriting system of LGs into that of
Context-Free Grammars (CFG) by erasing introduction and elimination rules and
generating enough axioms so that the cut rule suffices. This iterative approach
preserves the derivations and enables us to stop the possible infinite
generative process at any step. Although the underlying algorithm was not fully
implemented, this proof provides another argument in favour of the relevance of
ACGs in Natural Language Processing.
| 2020 | Computation and Language |
Reducing Noise from Competing Neighbours: Word Retrieval with Lateral
Inhibition in Multilink | Multilink is a computational model for word retrieval in monolingual and
multilingual individuals under different task circumstances (Dijkstra et al.,
2018). In the present study, we added lateral inhibition to Multilink's lexical
network. Parameters were fit on the basis of reaction times from the English,
British, and Dutch Lexicon Projects. We found a maximum correlation of 0.643
(N=1,205) on these data sets as a whole. Furthermore, the simulations
themselves became faster as a result of adding lateral inhibition. We applied
the fitted model to stimuli from a neighbourhood study (Mulder et al., 2018).
Lateral inhibition was found to improve Multilink's correlations for this
study, yielding an overall correlation of 0.67. Next, we explored the role of
lateral inhibition as part of the model's task/decision system by running
simulations on data from two studies concerning interlingual homographs
(Vanlangendonck et al., in press; Goertz, 2018). We found that, while lateral
inhibition plays a substantial part in the word selection process, this alone
is not enough to result in a correct response selection. To solve this problem,
we added a new task component to Multilink, especially designed to account for
the translation process of interlingual homographs, cognates, and
language-specific control words. The subsequent simulation results showed
patterns remarkably similar to those in the Goertz study. The isomorphicity of
the simulated data to the empirical data was further attested by an overall
correlation of 0.538 (N=254) between reaction times and simulated model cycle
times, as well as a condition pattern correlation of 0.853 (N=8). We conclude
that Multilink yields an excellent fit to empirical data, particularly when a
task-specific setting of the inhibition parameters is allowed.
| 2020 | Computation and Language |
Generation-Distillation for Efficient Natural Language Understanding in
Low-Data Settings | Over the past year, the emergence of transfer learning with large-scale
language models (LM) has led to dramatic performance improvements across a
broad range of natural language understanding tasks. However, the size and
memory footprint of these large LMs makes them difficult to deploy in many
scenarios (e.g. on mobile phones). Recent research points to knowledge
distillation as a potential solution, showing that when training data for a
given task is abundant, it is possible to distill a large (teacher) LM into a
small task-specific (student) network with minimal loss of performance.
However, when such data is scarce, there remains a significant performance gap
between large pretrained LMs and smaller task-specific models, even when
training via distillation. In this paper, we bridge this gap with a novel
training approach, called generation-distillation, that leverages large
finetuned LMs in two ways: (1) to generate new (unlabeled) training examples,
and (2) to distill their knowledge into a small network using these examples.
Across three low-resource text classification datasets, we achieve comparable
performance to BERT while using 300x fewer parameters, and we outperform prior
approaches to distillation for text classification while using 3x fewer
parameters.
| 2020 | Computation and Language |
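Step (2) of the generation-distillation recipe above relies on a standard knowledge-distillation loss: the student is trained to match the teacher's softened output distribution on the generated examples. The function below is a common Hinton-style formulation, shown as a hedged sketch rather than the paper's exact objective.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions,
    scaled by T^2 so gradient magnitudes stay comparable across temperatures."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)
```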
Analysis of the quotation corpus of the Russian Wiktionary | The quantitative evaluation of quotations in the Russian Wiktionary was
performed using the developed Wiktionary parser. It was found that the number
of quotations in the dictionary is growing fast (51.5 thousands in 2011, 62
thousands in 2012). These quotations were extracted and saved in the relational
database of a machine-readable dictionary. For this database, tables related to
the quotations were designed. A histogram of the distribution of quotations from
literary works written in different years was built. An attempt was made to
explain the characteristics of the histogram by associating it with the years
of the most popular and cited (in the Russian Wiktionary) writers of the
nineteenth century. It was found that more than one-third of all the quotations
(the example sentences) contained in the Russian Wiktionary are taken by the
editors of a Wiktionary entry from the Russian National Corpus.
| 2012 | Computation and Language |
Self-attention-based BiGRU and capsule network for named entity
recognition | Named entity recognition (NER) is one of the tasks of natural language
processing (NLP). To address the problems that traditional character
representations are weak and that neural network methods are unable to
capture important sequence information, a self-attention-based
bidirectional gated recurrent unit (BiGRU) and capsule network (CapsNet) model for NER
is proposed. This model generates character vectors through a pre-trained
Bidirectional Encoder Representations from Transformers (BERT) model. The BiGRU is used
to capture sequence context features, and a self-attention mechanism is used
to place different focus on the information captured by the hidden layers of the BiGRU.
Finally, we propose to use CapsNet for entity recognition. We evaluated the
recognition performance of the model on two datasets. Experimental results show
that the model has better performance without relying on external dictionary
information.
| 2020 | Computation and Language |
Are Pre-trained Language Models Aware of Phrases? Simple but Strong
Baselines for Grammar Induction | With the recent success and popularity of pre-trained language models (LMs)
in natural language processing, there has been a rise in efforts to understand
their inner workings. In line with such interest, we propose a novel method
that assists us in investigating the extent to which pre-trained LMs capture
the syntactic notion of constituency. Our method provides an effective way of
extracting constituency trees from the pre-trained LMs without training. In
addition, we report intriguing findings in the induced trees, including the
fact that pre-trained LMs outperform other approaches in correctly demarcating
adverb phrases in sentences.
| 2020 | Computation and Language |
An Efficient Architecture for Predicting the Case of Characters using
Sequence Models | The dearth of clean textual data often acts as a bottleneck in several
natural language processing applications. The data available often lacks proper
case (uppercase or lowercase) information. This often comes up when text is
obtained from social media, messaging applications and other online platforms.
This paper attempts to solve this problem by restoring the correct case of
characters, commonly known as Truecasing. Doing so improves the accuracy of
several processing tasks further down in the NLP pipeline. Our proposed
architecture uses a combination of convolutional neural networks (CNN),
bi-directional long short-term memory networks (LSTM) and conditional random
fields (CRF), which work at a character level without any explicit feature
engineering. In this study we compare our approach to previous statistical and
deep learning based approaches. Our method shows an increment of 0.83 in F1
score over the current state of the art. Since truecasing acts as a
preprocessing step in several applications, every increment in the F1 score
leads to a significant improvement in the language processing tasks.
| 2021 | Computation and Language |
Unsupervised Multilingual Alignment using Wasserstein Barycenter | We study unsupervised multilingual alignment, the problem of finding
word-to-word translations between multiple languages without using any parallel
data. One popular strategy is to reduce multilingual alignment to the much
simpler bilingual setting, by picking one of the input languages as the
pivot language that we transit through. However, it is well-known that
transiting through a poorly chosen pivot language (such as English) may
severely degrade the translation quality, since the assumed transitive
relations among all pairs of languages may not be enforced in the training
process. Instead of going through a rather arbitrarily chosen pivot language,
we propose to use the Wasserstein barycenter as a more informative "mean"
language: it encapsulates information from all languages and minimizes all
pairwise transportation costs. We evaluate our method on standard benchmarks
and demonstrate state-of-the-art performances.
| 2020 | Computation and Language |
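The "mean language" described above is the Wasserstein barycenter of the per-language word-embedding distributions. Its standard definition is reproduced below in generic form; the paper's exact weighting and any entropic regularization may differ.

```latex
% Standard Wasserstein barycenter objective (generic form):
% \mu_i is the embedding distribution of language i, W is the Wasserstein
% (optimal transport) distance, and the \lambda_i are non-negative weights.
\mu^{\star} \;=\; \operatorname*{arg\,min}_{\mu}\;
  \sum_{i=1}^{N} \lambda_i \, W(\mu, \mu_i),
\qquad \lambda_i \ge 0,\;\; \textstyle\sum_{i} \lambda_i = 1
```

By construction, the barycenter encapsulates information from all input languages and minimizes the total transport cost to them, which is the property the abstract contrasts with picking an arbitrary pivot language.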
PEL-BERT: A Joint Model for Protocol Entity Linking | Pre-trained models such as BERT are widely used in NLP tasks and are
fine-tuned to improve the performance of various NLP tasks consistently.
Nevertheless, the fine-tuned BERT model trained on our protocol corpus still
has a weak performance on the Entity Linking (EL) task. In this paper, we
propose a model that joins a fine-tuned language model with an RFC Domain
Model. Firstly, we design a Protocol Knowledge Base as the guideline for
protocol EL. Secondly, we propose a novel model, PEL-BERT, to link named
entities in protocols to categories in the Protocol Knowledge Base. Finally, we
conduct a comprehensive study on the performance of pre-trained language models
on descriptive texts and abstract concepts. Experimental results demonstrate
that our model achieves state-of-the-art performance in EL on our annotated
dataset, outperforming all the baselines.
| 2020 | Computation and Language |
Structural-Aware Sentence Similarity with Recursive Optimal Transport | Measuring sentence similarity is a classic topic in natural language
processing. Light-weighted similarities are still of particular practical
significance even when deep learning models have succeeded in many other tasks.
Some light-weighted similarities with more theoretical insights have been
demonstrated to be even stronger than supervised deep learning approaches.
However, successful light-weighted models such as Word Mover's Distance
[Kusner et al., 2015] or Smooth Inverse Frequency [Arora et al., 2017] fail
to capture differences in the structure of sentences, i.e. the order of words.
To address this issue, we present Recursive Optimal Transport (ROT) framework
to incorporate the structural information with the classic OT. Moreover, we
further develop Recursive Optimal Similarity (ROTS) for sentences with the
valuable semantic insights from the connections between cosine similarity of
weighted average of word vectors and optimal transport. ROTS is
structural-aware and has low time complexity compared to optimal transport.
Our experiments over 20 sentence textual similarity (STS) datasets show the
clear advantage of ROTS over all weakly supervised approaches. A detailed
ablation study demonstrates the effectiveness of ROT and the semantic insights.
| 2020 | Computation and Language |
Conversations with Documents. An Exploration of Document-Centered
Assistance | The role of conversational assistants has become more prevalent in helping
people increase their productivity. Document-centered assistance, for example
to help an individual quickly review a document, has seen less significant
progress, even though it has the potential to tremendously increase a user's
productivity. This type of document-centered assistance is the focus of this
paper. Our contributions are three-fold: (1) We first present a survey to
understand the space of document-centered assistance and the capabilities
people expect in this scenario. (2) We investigate the types of queries that
users will pose while seeking assistance with documents, and show that
document-centered questions form the majority of these queries. (3) We present
a set of initial machine learned models that show that (a) we can accurately
detect document-centered questions, and (b) we can build reasonably accurate
models for answering such questions. These positive results are encouraging,
and suggest that even greater results may be attained with continued study of
this interesting and novel problem space. Our findings have implications for
the design of intelligent systems to support task completion via natural
interactions with documents.
| 2020 | Computation and Language |
Asking Questions the Human Way: Scalable Question-Answer Generation from
Text Corpus | The ability to ask questions is important in both human and machine
intelligence. Learning to ask questions helps knowledge acquisition, improves
question-answering and machine reading comprehension tasks, and helps a chatbot
to keep the conversation flowing with a human. Existing question generation
models are ineffective at generating a large amount of high-quality
question-answer pairs from unstructured text, since given an answer and an
input passage, question generation is inherently a one-to-many mapping. In this
paper, we propose Answer-Clue-Style-aware Question Generation (ACS-QG), which
aims at automatically generating high-quality and diverse question-answer pairs
from unlabeled text corpus at scale by imitating the way a human asks
questions. Our system consists of: i) an information extractor, which samples
from the text multiple types of assistive information to guide question
generation; ii) neural question generators, which generate diverse and
controllable questions, leveraging the extracted assistive information; and
iii) a neural quality controller, which removes low-quality generated data
based on text entailment. We compare our question generation models with
existing approaches and resort to voluntary human evaluation to assess the
quality of the generated question-answer pairs. The evaluation results suggest
that our system dramatically outperforms state-of-the-art neural question
generation models in terms of the generation quality, while being scalable in
the meantime. With models trained on a relatively small amount of data, we
can generate 2.8 million quality-assured question-answer pairs from a million
sentences found in Wikipedia.
| 2020 | Computation and Language |
Joint Contextual Modeling for ASR Correction and Language Understanding | The quality of automatic speech recognition (ASR) is critical to Dialogue
Systems as ASR errors propagate to and directly impact downstream tasks such as
language understanding (LU). In this paper, we propose multi-task neural
approaches to perform contextual language correction on ASR outputs jointly
with LU to improve the performance of both tasks simultaneously. To measure the
effectiveness of this approach we used a public benchmark, the 2nd Dialogue
State Tracking (DSTC2) corpus. As a baseline approach, we trained task-specific
Statistical Language Models (SLM) and fine-tuned state-of-the-art Generalized
Pre-training (GPT) Language Model to re-rank the n-best ASR hypotheses,
followed by a model to identify the dialog act and slots. i) We further trained
ranker models using GPT and Hierarchical CNN-RNN models with discriminatory
losses to detect the best output given n-best hypotheses. We extended these
ranker models to first select the best ASR output and then identify the
dialogue act and slots in an end-to-end fashion. ii) We also proposed a novel
joint ASR error correction and LU model, a word confusion pointer network
(WCN-Ptr) with multi-head self-attention on top, which consumes the word
confusions populated from the n-best. We show that the error rates of off-the-shelf
ASR and downstream LU systems can be reduced significantly, by 14% relative,
with joint models trained using small amounts of in-domain data.
| 2020 | Computation and Language |
Benchmarking Popular Classification Models' Robustness to Random and
Targeted Corruptions | Text classification models, especially neural networks based models, have
reached very high accuracy on many popular benchmark datasets. Yet such models,
when deployed in real-world applications, tend to perform badly. The primary
reason is that these models are not tested against sufficient real world
natural data. Depending on the application's users, the vocabulary and the style of
the model's input may vary greatly. This emphasizes the need for a
model-agnostic test dataset, which consists of various corruptions that naturally
appear in the wild. Models trained and tested on such benchmark datasets
will be more robust against real-world data. However, such datasets are not
easily available. In this work, we address this problem by extending the
benchmark datasets with naturally occurring corruptions such as Spelling
Errors, Text Noise and Synonyms, and making them publicly available. Through
extensive experiments, we compare random and targeted corruption strategies
using Local Interpretable Model-Agnostic Explanations (LIME). We report the
vulnerabilities in two popular text classification models along these
corruptions and also find that targeted corruptions can expose vulnerabilities
of a model better than random choices in most cases.
| 2020 | Computation and Language |
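A simple instance of the "Spelling Errors" corruption category mentioned above can be implemented as random character edits applied to a fraction of the words. The probability and the adjacent-character swap below are illustrative assumptions, not the benchmark's exact procedure.

```python
import random

def corrupt_spelling(sentence: str, word_prob: float = 0.2, seed: int = 0) -> str:
    """Swap two adjacent characters in a random subset of words,
    imitating natural typos."""
    rng = random.Random(seed)
    words = []
    for w in sentence.split():
        if len(w) > 3 and rng.random() < word_prob:
            i = rng.randrange(len(w) - 1)
            w = w[:i] + w[i + 1] + w[i] + w[i + 2:]   # swap characters i and i+1
        words.append(w)
    return " ".join(words)

print(corrupt_spelling("text classification models can be surprisingly brittle"))
```

A targeted variant would apply the same edit only to the words that an explanation method such as LIME marks as most influential, which is the contrast the abstract draws between random and targeted corruptions.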
Similarità per la ricerca del dominio di una frase | English. This document aims to study the best algorithms for verifying
whether a specific document belongs to a related domain by comparing different
methods for calculating the distance between two vectors. This study has been
made possible with the help of the structures made available by the Apache
Spark framework. Starting from the study illustrated in the publication "New
frontier of textual classification: Big data and distributed calculus" by
Massimiliano Morrelli et al., we wanted to carry out a study on the possible
implementation of a solution capable of calculating the Similarity of a
sentence using the distributed environment.
Italiano. Il presente documento persegue l'obiettivo di studiare gli
algoritmi migliori per verificare l'appartenenza di un determinato documento a
un relativo dominio tramite un confronto di diversi metodi per il calcolo della
distanza fra due vettori. Tale studio è stato condotto con l'ausilio delle
strutture messe a disposizione dal framework Apache Spark. Partendo dallo
studio illustrato nella pubblicazione "Nuova frontiera della classificazione
testuale: Big data e calcolo distribuito" di Massimiliano Morrelli et al., si
è voluto realizzare uno studio sulla possibile implementazione di una
soluzione in grado di calcolare la Similarità di una frase sfruttando
l'ambiente distribuito.
| 2020 | Computation and Language |
Comparison Between Traditional Machine Learning Models And Neural
Network Models For Vietnamese Hate Speech Detection | Hate-speech detection on social network language has become one of the main
research fields recently due to the spread of social networks like
Facebook and Twitter. In Vietnam, the threat of offensive and harassing content
has a bad impact on online users. The VLSP shared task on Hate Speech Detection
on social networks showed many proposed approaches for detecting whether a
comment is clean or not. However, this problem still needs further research.
Consequently, we compare traditional machine learning and deep learning on a
large dataset of users' comments on social networks in Vietnamese, and
find out the advantages and disadvantages of each model by comparing
their accuracy in terms of F1-score; we then pick the two models with the highest
accuracy among the traditional machine learning models and the deep neural models,
respectively. Next, we compare how well these two models predict the right
label by referencing their confusion matrices and considering the advantages
and disadvantages of each model. Finally, from the comparison result, we
propose an ensemble method that combines the abilities of the traditional
methods and the deep learning methods.
| 2020 | Computation and Language |
FastWordBug: A Fast Method To Generate Adversarial Text Against NLP
Applications | In this paper, we present a novel algorithm, FastWordBug, to efficiently
generate small text perturbations in a black-box setting that forces a
sentiment analysis or text classification model to make an incorrect prediction.
By combining the part of speech attributes of words, we propose a scoring
method that can quickly identify important words that affect text
classification. We evaluate FastWordBug on three real-world text datasets and
two state-of-the-art machine learning models under a black-box setting. The
results show that our method can significantly reduce the accuracy of the
model, and at the same time, we can call the model as little as possible, with
the highest attack efficiency. We also attack two popular real-world NLP cloud
services, and the results show that our method works on them as well.
| 2020 | Computation and Language |
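FastWordBug scores words using part-of-speech attributes to find the ones that most affect the prediction; the abstract does not spell out the scoring function. As a hedged stand-in, the sketch below uses a common black-box proxy instead: rank each word by how much the model's confidence drops when that word is deleted.

```python
from typing import Callable, List, Tuple

def word_importance(sentence: str,
                    predict_proba: Callable[[str], float]) -> List[Tuple[str, float]]:
    """Leave-one-out word scoring for a black-box classifier.

    predict_proba(text) returns the model's probability for the originally
    predicted class of the full sentence; words whose removal causes the
    largest drop are ranked first.
    """
    words = sentence.split()
    base = predict_proba(sentence)
    scores = []
    for i, w in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        scores.append((w, base - predict_proba(reduced)))
    return sorted(scores, key=lambda s: s[1], reverse=True)
```

Note that this proxy needs one model call per word; the point of a POS-attribute-based scoring method like the one described above is precisely to identify important words with far fewer model calls.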
Massively Multilingual Document Alignment with Cross-lingual
Sentence-Mover's Distance | Document alignment aims to identify pairs of documents in two distinct
languages that are of comparable content or translations of each other. Such
aligned data can be used for a variety of NLP tasks from training cross-lingual
representations to mining parallel data for machine translation. In this paper
we develop an unsupervised scoring function that leverages cross-lingual
sentence embeddings to compute the semantic distance between documents in
different languages. These semantic distances are then used to guide a document
alignment algorithm to properly pair cross-lingual web documents across a
variety of low, mid, and high-resource language pairs. Recognizing that our
proposed scoring function and other state of the art methods are
computationally intractable for long web documents, we utilize a more tractable
greedy algorithm that performs comparably. We experimentally demonstrate that
our distance metric performs better alignment than current baselines
outperforming them by 7% on high-resource language pairs, 15% on mid-resource
language pairs, and 22% on low-resource language pairs.
| 2020 | Computation and Language |
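The abstract above scores candidate document pairs with cross-lingual sentence embeddings and then matches them with a tractable greedy algorithm. The sketch below shows one bare-bones greedy pairing over precomputed document vectors; pooling each document into a single vector is a simplification of the paper's sentence-level scoring, used here only to illustrate the matching step.

```python
import numpy as np

def greedy_align(src_vecs: np.ndarray, tgt_vecs: np.ndarray):
    """Greedily pair documents across two languages by cosine similarity.

    src_vecs: (n_src, d) document embeddings in language A
    tgt_vecs: (n_tgt, d) document embeddings in language B
    Returns (src_index, tgt_index) pairs; each document is used at most once.
    """
    src = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    tgt = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = src @ tgt.T                        # pairwise cosine similarities
    pairs = []
    for _ in range(min(sim.shape)):          # repeatedly take the best remaining pair
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        pairs.append((int(i), int(j)))
        sim[i, :] = -np.inf                  # remove both documents from consideration
        sim[:, j] = -np.inf
    return pairs
```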
Two-path Deep Semi-supervised Learning for Timely Fake News Detection | News in social media such as Twitter is generated at high volume and
speed. However, very few of these posts are labeled (as fake or true news) by
professionals in near real time. In order to achieve timely detection of fake
news in social media, a novel framework of two-path deep semi-supervised
learning is proposed where one path is for supervised learning and the other is
for unsupervised learning. The supervised learning path learns on the limited
amount of labeled data while the unsupervised learning path is able to learn on
a huge amount of unlabeled data. Furthermore, these two paths implemented with
convolutional neural networks (CNN) are jointly optimized to complete
semi-supervised learning. In addition, we build a shared CNN to extract low-level
features from both labeled data and unlabeled data and feed them into these
two paths. To verify this framework, we implement a Word CNN based
semi-supervised learning model and test it on two datasets, namely, LIAR and
PHEME. Experimental results demonstrate that the model built on the proposed
framework can recognize fake news effectively with very few labeled data.
| 2020 | Computation and Language |
Modeling ASR Ambiguity for Dialogue State Tracking Using Word Confusion
Networks | Spoken dialogue systems typically use a list of top-N ASR hypotheses for
inferring the semantic meaning and tracking the state of the dialogue. However,
ASR graphs, such as confusion networks (confnets), provide a compact
representation of a richer hypothesis space than a top-N ASR list. In this
paper, we study the benefits of using confusion networks with a
state-of-the-art neural dialogue state tracker (DST). We encode the
2-dimensional confnet into a 1-dimensional sequence of embeddings using an
attentional confusion network encoder which can be used with any DST system.
Our confnet encoder is plugged into the state-of-the-art 'Global-Locally
Self-Attentive Dialogue State Tracker' (GLAD) model for DST and obtains
significant improvements in both accuracy and inference time compared to using
top-N ASR hypotheses.
| 2,022 | Computation and Language |
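To make the encoding step above concrete, here is a rough sketch of collapsing a
2-dimensional confusion network into a 1-dimensional sequence of embeddings: at each
time slot the competing word hypotheses are combined by attention, here initialised
from the ASR posteriors. The shapes and the attention form are assumptions, not the
paper's exact encoder.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConfnetEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.score = nn.Linear(dim, 1)

    def forward(self, arcs, posteriors):               # both (batch, slots, alternatives)
        e = self.emb(arcs)                              # (batch, slots, alts, dim)
        att = self.score(e).squeeze(-1) + posteriors.log()   # learned score + ASR posterior
        w = F.softmax(att, dim=-1).unsqueeze(-1)
        return (w * e).sum(dim=2)                       # one embedding per time slot

enc = ConfnetEncoder()
arcs = torch.randint(0, 1000, (2, 7, 4))                # 4 competing hypotheses per slot
post = F.softmax(torch.randn(2, 7, 4), dim=-1)
print(enc(arcs, post).shape)                            # torch.Size([2, 7, 64])
```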
Learning Contextualized Document Representations for Healthcare Answer
Retrieval | We present Contextual Discourse Vectors (CDV), a distributed document
representation for efficient answer retrieval from long healthcare documents.
Our approach is based on structured query tuples of entities and aspects from
free text and medical taxonomies. Our model leverages a dual encoder
architecture with hierarchical LSTM layers and multi-task training to encode
the position of clinical entities and aspects alongside the document discourse.
We use our continuous representations to resolve queries with short latency
using approximate nearest neighbor search on sentence level. We apply the CDV
model for retrieving coherent answer passages from nine English public health
resources from the Web, addressing both patients and medical professionals.
Because there is no end-to-end training data available for all application
scenarios, we train our model with self-supervised data from Wikipedia. We show
that our generalized model significantly outperforms several state-of-the-art
baselines for healthcare passage ranking and is able to adapt to heterogeneous
domains without additional fine-tuning.
| 2,020 | Computation and Language |
Torch-Struct: Deep Structured Prediction Library | The literature on structured prediction for NLP describes a rich collection
of distributions and algorithms over sequences, segmentations, alignments, and
trees; however, these algorithms are difficult to utilize in deep learning
frameworks. We introduce Torch-Struct, a library for structured prediction
designed to take advantage of and integrate with vectorized,
auto-differentiation based frameworks. Torch-Struct includes a broad collection
of probabilistic structures accessed through a simple and flexible
distribution-based API that connects to any deep learning model. The library
utilizes batched, vectorized operations and exploits auto-differentiation to
produce readable, fast, and testable code. Internally, we also include a number
of general-purpose optimizations to provide cross-algorithm efficiency.
Experiments show significant performance gains over fast baselines and
case-studies demonstrate the benefits of the library. Torch-Struct is available
at https://github.com/harvardnlp/pytorch-struct.
| 2,020 | Computation and Language |
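A brief usage sketch of the distribution-based API described in the Torch-Struct entry
above, written in the style of the public repository's examples; exact names and
signatures may differ across versions, so treat this as illustrative rather than
authoritative.

```python
import torch
from torch_struct import LinearChainCRF

batch, N, C = 2, 6, 4                        # 2 sequences, 6 tokens, 4 tags
log_potentials = torch.randn(batch, N - 1, C, C, requires_grad=True)

dist = LinearChainCRF(log_potentials)        # structured distribution over tag sequences
print(dist.partition)                        # log normalisers, shape (batch,)
marginals = dist.marginals                   # edge marginals via auto-differentiation
best = dist.argmax                           # MAP tag sequence (edge one-hots)

# Because everything is batched and differentiable, structured objectives plug directly
# into a standard training loop, e.g. maximising the likelihood of some gold structure:
loss = -dist.log_prob(best).mean()
loss.backward()
```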
Detecting Fake News with Capsule Neural Networks | Fake news has increased dramatically in social media in recent years. This has
prompted the need for effective fake news detection algorithms. Capsule neural
networks have been successful in computer vision and are receiving attention
for use in Natural Language Processing (NLP). This paper aims to use capsule
neural networks in the fake news detection task. We use different embedding
models for news items of different lengths. Static word embedding is used for
short news items, whereas non-static word embeddings that allow incremental
up-training and updating in the training phase are used for medium length or
large news statements. Moreover, we apply different levels of n-grams for
feature extraction. Our proposed architectures are evaluated on two recent
well-known datasets in the field, namely ISOT and LIAR. The results show
encouraging performance, outperforming the state-of-the-art methods by 7.8% on
ISOT and 3.1% on the validation set, and 1% on the test set of the LIAR
dataset.
| 2,020 | Computation and Language |
On the interaction between supervision and self-play in emergent
communication | A promising approach for teaching artificial agents to use natural language
involves using human-in-the-loop training. However, recent work suggests that
current machine learning methods are too data inefficient to be trained in this
way from scratch. In this paper, we investigate the relationship between two
categories of learning signals with the ultimate goal of improving sample
efficiency: imitating human language data via supervised learning, and
maximizing reward in a simulated multi-agent environment via self-play (as done
in emergent communication), and introduce the term supervised self-play (S2P)
for algorithms using both of these signals. We find that first training agents
via supervised learning on human data followed by self-play outperforms the
converse, suggesting that it is not beneficial to emerge languages from
scratch. We then empirically investigate various S2P schedules that begin with
supervised learning in two environments: a Lewis signaling game with symbolic
inputs, and an image-based referential game with natural language descriptions.
Lastly, we introduce population-based approaches to S2P, which further improve
performance over single-agent methods.
| 2,020 | Computation and Language |
Variational Template Machine for Data-to-Text Generation | How to generate descriptions from structured data organized in tables?
Existing approaches using neural encoder-decoder models often suffer from
lacking diversity. We claim that an open set of templates is crucial for
enriching the phrase constructions and realizing varied generations. Learning
such templates is prohibitive since it often requires a large paired <table,
description> corpus, which is seldom available. This paper explores the problem
of automatically learning reusable "templates" from paired and non-paired data.
We propose the variational template machine (VTM), a novel method to generate
text descriptions from data tables. Our contributions include: a) we carefully
devise a specific model architecture and losses to explicitly disentangle text
template and semantic content information in the latent space, and b) we
utilize both small parallel data and large raw text without aligned tables to
enrich template learning. Experiments on datasets from a variety of
different domains show that VTM is able to generate more diverse text while
maintaining good fluency and quality.
| 2,020 | Computation and Language |
Syntactically Look-Ahead Attention Network for Sentence Compression | Sentence compression is the task of compressing a long sentence into a short
one by deleting redundant words. In sequence-to-sequence (Seq2Seq) based
models, the decoder unidirectionally decides to retain or delete words. Thus,
it cannot usually explicitly capture the relationships between decoded words
and unseen words that will be decoded in the future time steps. Therefore, to
avoid generating ungrammatical sentences, the decoder sometimes drops important
words in compressing sentences. To solve this problem, we propose a novel
Seq2Seq model, syntactically look-ahead attention network (SLAHAN), that can
generate informative summaries by explicitly tracking both dependency parent
and child words during decoding and capturing important words that will be
decoded in the future. The results of the automatic evaluation on the Google
sentence compression dataset showed that SLAHAN achieved the best
kept-token-based-F1, ROUGE-1, ROUGE-2 and ROUGE-L scores of 85.5, 79.3, 71.3
and 79.1, respectively. SLAHAN also improved the summarization performance on
longer sentences. Furthermore, in the human evaluation, SLAHAN improved
informativeness without losing readability.
| 2,020 | Computation and Language |
Dynamic Knowledge Routing Network For Target-Guided Open-Domain
Conversation | Target-guided open-domain conversation aims to proactively and naturally
guide a dialogue agent or human to achieve specific goals, topics or keywords
during open-ended conversations. Existing methods mainly rely on single-turn
data-driven learning and a simple target-guided strategy without considering
semantic or factual knowledge relations among candidate topics/keywords. This
results in poor transition smoothness and a low success rate. In this work, we
adopt a structured approach that controls the intended content of system
responses by introducing coarse-grained keywords, attains smooth conversation
transition through turn-level supervised learning and knowledge relations
between candidate keywords, and drives a conversation towards a specified
target with a discourse-level guiding strategy. Specifically, we propose a novel
dynamic knowledge routing network (DKRN) which considers semantic knowledge
relations among candidate keywords to accurately predict the next topic of
discourse. With the help of more accurate keyword prediction, our
keyword-augmented response retrieval module can achieve better retrieval
performance and more meaningful conversations. Besides, we also propose a novel
dual discourse-level target-guided strategy to guide conversations to reach
their goals smoothly with higher success rate. Furthermore, to push the
research boundary of target-guided open-domain conversation to match real-world
scenarios better, we introduce a new large-scale Chinese target-guided
open-domain conversation dataset (more than 900K conversations) crawled from
Sina Weibo. Quantitative and human evaluations show our method can produce
meaningful and effective target-guided conversations, significantly improving
over other state-of-the-art methods by more than 20% in success rate and more
than 0.6 in average smoothness score.
| 2,020 | Computation and Language |
Arabic Diacritic Recovery Using a Feature-Rich biLSTM Model | Diacritics (short vowels) are typically omitted when writing Arabic text, and
readers have to reintroduce them to correctly pronounce words. There are two
types of Arabic diacritics: the first are core-word diacritics (CW), which
specify the lexical selection, and the second are case endings (CE), which
typically appear at the end of the word stem and generally specify their
syntactic roles. Recovering CEs is relatively harder than recovering core-word
diacritics due to inter-word dependencies, which are often distant. In this
paper, we use a feature-rich recurrent neural network model that uses a variety
of linguistic and surface-level features to recover both core word diacritics
and case endings. Our model surpasses all previous state-of-the-art systems
with a CW error rate (CWER) of 2.86% and a CE error rate (CEER) of 3.7% for
Modern Standard Arabic (MSA) and CWER of 2.2% and CEER of 2.5% for Classical
Arabic (CA). When combining diacritized word cores with case endings, the
resultant word error rate is 6.0% and 4.3% for MSA and CA respectively. This
highlights the effectiveness of feature engineering for such deep neural
models.
| 2,020 | Computation and Language |
CoVoST: A Diverse Multilingual Speech-To-Text Translation Corpus | Spoken language translation has recently witnessed a resurgence in
popularity, thanks to the development of end-to-end models and the creation of
new corpora, such as Augmented LibriSpeech and MuST-C. Existing datasets
involve language pairs with English as a source language, involve very specific
domains or are low resource. We introduce CoVoST, a multilingual speech-to-text
translation corpus from 11 languages into English, diversified with over 11,000
speakers and over 60 accents. We describe the dataset creation methodology and
provide empirical evidence of the quality of the data. We also provide initial
benchmarks, including, to our knowledge, the first end-to-end many-to-one
multilingual models for spoken language translation. CoVoST is released under
CC0 license and free to use. We also provide additional evaluation data derived
from Tatoeba under CC licenses.
| 2,020 | Computation and Language |
Structural Inductive Biases in Emergent Communication | In order to communicate, humans flatten a complex representation of ideas and
their attributes into a single word or a sentence. We investigate the impact of
representation learning in artificial agents by developing graph referential
games. We empirically show that agents parametrized by graph neural networks
develop a more compositional language compared to bag-of-words and sequence
models, which allows them to systematically generalize to new combinations of
familiar features.
| 2,021 | Computation and Language |
Schema-Guided Dialogue State Tracking Task at DSTC8 | This paper gives an overview of the Schema-Guided Dialogue State Tracking
task of the 8th Dialogue System Technology Challenge. The goal of this task is
to develop dialogue state tracking models suitable for large-scale virtual
assistants, with a focus on data-efficient joint modeling across domains and
zero-shot generalization to new APIs. This task provided a new dataset
consisting of over 16000 dialogues in the training set spanning 16 domains to
highlight these challenges, and a baseline model capable of zero-shot
generalization to new APIs. Twenty-five teams participated, developing a range
of neural network models, exceeding the performance of the baseline model by a
very high margin. The submissions incorporated a variety of pre-trained
encoders and data augmentation techniques. This paper describes the task
definition, dataset and evaluation methodology. We also summarize the approach
and results of the submitted systems to highlight the overall trends in the
state-of-the-art.
| 2,020 | Computation and Language |
Compositional Languages Emerge in a Neural Iterated Learning Model | The principle of compositionality, which enables natural language to
represent complex concepts via a structured combination of simpler ones, allows
us to convey an open-ended set of messages using a limited vocabulary. If
compositionality is indeed a natural property of language, we may expect it to
appear in communication protocols that are created by neural agents in language
games. In this paper, we propose an effective neural iterated learning (NIL)
algorithm that, when applied to interacting neural agents, facilitates the
emergence of a more structured type of language. Indeed, these languages
provide learning speed advantages to neural agents during training, which can
be incrementally amplified via NIL. We provide a probabilistic model of NIL and
an explanation of why the advantage of compositional language exists. Our
experiments confirm our analysis, and also demonstrate that the emergent
languages largely improve the generalization ability of neural agent
communication.
| 2,020 | Computation and Language |
Plague Dot Text: Text mining and annotation of outbreak reports of the
Third Plague Pandemic (1894-1952) | The design of models that govern diseases in population is commonly built on
information and data gathered from past outbreaks. However, epidemic outbreaks
are never captured in statistical data alone but are communicated by
narratives, supported by empirical observations. Outbreak reports discuss
correlations between populations, locations and the disease to infer insights
into causes, vectors and potential interventions. The problem with these
narratives is usually the lack of consistent structure or strong conventions,
which prohibit their formal analysis in larger corpora. Our interdisciplinary
research investigates more than 100 reports from the third plague pandemic
(1894-1952) evaluating ways of building a corpus to extract and structure this
narrative information through text mining and manual annotation. In this paper
we discuss the progress of our ongoing exploratory project, how we enhance
optical character recognition (OCR) methods to improve text capture, our
approach to structure the narratives and identify relevant entities in the
reports. The structured corpus is made available via Solr enabling search and
analysis across the whole collection for future research dedicated, for
example, to the identification of concepts. We show preliminary visualisations
of the characteristics of causation and differences with respect to gender as a
result of syntactic-category-dependent corpus statistics. Our goal is to
develop structured accounts of some of the most significant concepts that were
used to understand the epidemiology of the third plague pandemic around the
globe. The corpus enables researchers to analyse the reports collectively
allowing for deep insights into the global epidemiological consideration of
plague in the early twentieth century.
| 2,021 | Computation and Language |
From Topic Networks to Distributed Cognitive Maps: Zipfian Topic
Universes in the Area of Volunteered Geographic Information | Are nearby places (e.g. cities) described by related words? In this article
we transfer this research question in the field of lexical encoding of
geographic information onto the level of intertextuality. To this end, we
explore Volunteered Geographic Information (VGI) to model texts addressing
places at the level of cities or regions with the help of so-called topic
networks. This is done to examine how language encodes and networks geographic
information on the aboutness level of texts. Our hypothesis is that the
networked thematizations of places are similar - regardless of their distances
and the underlying communities of authors. To investigate this we introduce
Multiplex Topic Networks (MTN), which we automatically derive from Linguistic
Multilayer Networks (LMN) as a novel model, especially of thematic networking
in text corpora. Our study shows a Zipfian organization of the thematic
universe in which geographical places (especially cities) are located in online
communication. We interpret this finding in the context of cognitive maps, a
notion which we extend by so-called thematic maps. According to our
interpretation of this finding, the organization of thematic maps as part of
cognitive maps results from a tendency of authors to generate shareable content
that ensures the continued existence of the underlying media. We test our
hypothesis by example of special wikis and extracts of Wikipedia. In this way
we come to the conclusion: Places, whether close to each other or not, are
located in neighboring places that span similar subnetworks in the topic
universe.
| 2,020 | Computation and Language |
Semantic Search of Memes on Twitter | Memes are becoming a useful source of data for analyzing behavior on social
media. However, a problem to tackle is how to correctly identify a meme. As the
number of memes published every day on social media is huge, there is a need
for automatic methods for classifying and searching in large meme datasets.
This paper proposes and compares several methods for automatically classifying
images as memes. Also, we propose a method that allows us to implement a system
for retrieving memes from a dataset using a textual query. We experimentally
evaluate the methods using a large dataset of memes collected from Twitter
users in Chile, which was annotated by a group of experts. Though some of the
evaluated methods are effective, there is still room for improvement.
| 2,020 | Computation and Language |
Generalizing meanings from partners to populations: Hierarchical
inference supports convention formation on networks | A key property of linguistic conventions is that they hold over an entire
community of speakers, allowing us to communicate efficiently even with people
we have never met before. At the same time, much of our language use is
partner-specific: we know that words may be understood differently by different
people based on our shared history. This poses a challenge for accounts of
convention formation. Exactly how do agents make the inferential leap to
community-wide expectations while maintaining partner-specific knowledge? We
propose a hierarchical Bayesian model to explain how speakers and listeners
solve this inductive problem. To evaluate our model's predictions, we conducted
an experiment where participants played an extended natural-language
communication game with different partners in a small community. We examine
several measures of generalization and find key signatures of both
partner-specificity and community convergence that distinguish our model from
alternatives. These results suggest that partner-specificity is not only
compatible with the formation of community-wide conventions, but may facilitate
it when coupled with a powerful inductive mechanism.
| 2,020 | Computation and Language |
Lightweight Convolutional Representations for On-Device Natural Language
Processing | The increasing computational and memory complexities of deep neural networks
have made it difficult to deploy them on low-resource electronic devices (e.g.,
mobile phones, tablets, wearables). Practitioners have developed numerous model
compression methods to address these concerns, but few have condensed input
representations themselves. In this work, we propose a fast, accurate, and
lightweight convolutional representation that can be swapped into any neural
model and compressed significantly (up to 32x) with a negligible reduction in
performance. In addition, we show gains over recurrent representations when
considering resource-centric metrics (e.g., model file size, latency, memory
usage) on a Samsung Galaxy S9.
| 2,020 | Computation and Language |
Identification of Indian Languages using Ghost-VLAD pooling | In this work, we propose a new pooling strategy for language identification
by considering Indian languages. The idea is to obtain utterance level features
for any variable length audio for robust language recognition. We use the
GhostVLAD approach to generate an utterance level feature vector for any
variable length input audio by aggregating the local frame level features
across time. The generated feature vector is shown to have very good
language-discriminative features and helps in achieving state-of-the-art
results for the language identification task. We conduct our experiments on 635 hours of audio
data for 7 Indian languages. Our method outperforms the previous state of the
art x-vector [11] method by an absolute improvement of 1.88% in F1-score and
achieves 98.43% F1-score on the held-out test data. We compare our system with
various pooling approaches and show that GhostVLAD is the best pooling approach
for this task. We also provide visualization of the utterance level embeddings
generated using Ghost-VLAD pooling and show that this method creates embeddings
which have very good language-discriminative features.
| 2,020 | Computation and Language |
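The Ghost-VLAD entry above aggregates frame-level features into a fixed-size utterance
vector. The sketch below shows the general GhostVLAD-style pooling mechanism under
stated assumptions (cluster counts, feature dimension, and normalisation choices are
illustrative, not the paper's configuration): frames are softly assigned to clusters,
per-cluster residuals are accumulated, and the ghost clusters are discarded before the
final vector is formed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GhostVLAD(nn.Module):
    def __init__(self, dim=40, clusters=8, ghosts=2):
        super().__init__()
        self.clusters = clusters
        self.assign = nn.Linear(dim, clusters + ghosts)        # soft-assignment scores
        self.centroids = nn.Parameter(torch.randn(clusters + ghosts, dim))

    def forward(self, frames):                                 # (batch, T, dim), T may vary
        a = F.softmax(self.assign(frames), dim=-1)             # (batch, T, K+G)
        residuals = frames.unsqueeze(2) - self.centroids       # (batch, T, K+G, dim)
        vlad = (a.unsqueeze(-1) * residuals).sum(dim=1)        # aggregate over time
        vlad = vlad[:, :self.clusters]                         # discard ghost clusters
        vlad = F.normalize(vlad, dim=-1)                       # intra-normalisation
        return F.normalize(vlad.flatten(1), dim=-1)            # utterance-level vector

pool = GhostVLAD()
print(pool(torch.randn(3, 250, 40)).shape)   # torch.Size([3, 320]) regardless of length
```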
Parsing as Pretraining | Recent analyses suggest that encoders pretrained for language modeling
capture certain morpho-syntactic structure. However, probing frameworks for
word vectors still do not report results on standard setups such as constituent
and dependency parsing. This paper addresses this problem and does full parsing
(on English) relying only on pretraining architectures -- and no decoding. We
first cast constituent and dependency parsing as sequence tagging. We then use
a single feed-forward layer to directly map word vectors to labels that encode
a linearized tree. This is used to: (i) see how far we can reach on syntax
modelling with just pretrained encoders, and (ii) shed some light about the
syntax-sensitivity of different word vectors (by freezing the weights of the
pretraining network during training). For evaluation, we use bracketing
F1-score and LAS, and analyze in-depth differences across representations for
span lengths and dependency displacements. The overall results surpass existing
sequence tagging parsers on the PTB (93.5%) and end-to-end EN-EWT UD (78.8%).
| 2,020 | Computation and Language |
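A schematic sketch of the setup in the parsing-as-pretraining entry above: a frozen
encoder supplies word vectors and a single feed-forward layer maps them to
sequence-tagging labels that encode a linearised tree. The embedding table below stands
in for a real pretrained encoder, and the label inventory is invented for illustration.

```python
import torch
import torch.nn as nn

num_labels, hidden = 250, 768               # assumed linearised-tree tag set, encoder width

encoder = nn.Embedding(30000, hidden)       # stand-in for a frozen pretrained encoder
for p in encoder.parameters():
    p.requires_grad = False                 # freeze to probe syntax-sensitivity

tagger = nn.Linear(hidden, num_labels)      # the only trained component

tokens = torch.randint(0, 30000, (4, 20))   # (batch, sentence length)
gold = torch.randint(0, num_labels, (4, 20))
logits = tagger(encoder(tokens))            # (batch, len, num_labels)
loss = nn.functional.cross_entropy(logits.flatten(0, 1), gold.flatten())
loss.backward()                             # gradients reach only the linear layer
```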
Multi-Fusion Chinese WordNet (MCW) : Compound of Machine Learning and
Manual Correction | Princeton WordNet (PWN) is a lexicon-semantic network based on cognitive
linguistics, which promotes the development of natural language processing.
Based on PWN, five Chinese wordnets have been developed to solve the problems
of syntax and semantics. They include: Northeastern University Chinese WordNet
(NEW), Sinica Bilingual Ontological WordNet (BOW), Southeast University Chinese
WordNet (SEW), Taiwan University Chinese WordNet (CWN), Chinese Open WordNet
(COW). By using them, we found that these word networks have low accuracy and
coverage, and cannot completely portray the semantic network of PWN. So we
decided to make a new Chinese wordnet called Multi-Fusion Chinese Wordnet (MCW)
to make up those shortcomings. The key idea is to extend the SEW with the help
of Oxford bilingual dictionary and Xinhua bilingual dictionary, and then
correct it. More specifically, we used machine learning and manual adjustment
in our corrections. Two standards were formulated to help our work. We
conducted experiments on three tasks including relatedness calculation, word
similarity and word sense disambiguation for the comparison of lemma's
accuracy, at the same time, coverage also was compared. The results indicate
that MCW can benefit from coverage and accuracy via our method. However, it
still has room for improvement, especially with lemmas. In the future, we will
continue to enhance the accuracy of MCW and expand the concepts in it.
| 2,020 | Computation and Language |
K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters | We study the problem of injecting knowledge into large pre-trained models
like BERT and RoBERTa. Existing methods typically update the original
parameters of pre-trained models when injecting knowledge. However, when
multiple kinds of knowledge are injected, the historically injected knowledge
would be flushed away. To address this, we propose K-Adapter, a framework that
keeps the original parameters of the pre-trained model fixed and supports the
development of versatile knowledge-infused models. Taking RoBERTa as the
backbone model, K-Adapter has a neural adapter for each kind of infused
knowledge, like a plug-in connected to RoBERTa. There is no information flow
between different adapters, thus multiple adapters can be efficiently trained
in a distributed way. As a case study, we inject two kinds of knowledge in this
work, including (1) factual knowledge obtained from automatically aligned
text-triplets on Wikipedia and Wikidata and (2) linguistic knowledge obtained
via dependency parsing. Results on three knowledge-driven tasks, including
relation classification, entity typing, and question answering, demonstrate
that each adapter improves the performance and the combination of both adapters
brings further improvements. Further analysis indicates that K-Adapter captures
more versatile knowledge than RoBERTa.
| 2,020 | Computation and Language |
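A hypothetical sketch of the K-Adapter layout described above: the pretrained backbone
stays frozen, each kind of infused knowledge gets its own adapter reading the backbone's
hidden states, and the adapter outputs are combined for the downstream task. Layer
types, sizes, and the concatenation step are illustrative choices, not the paper's exact
architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=128):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.block = nn.TransformerEncoderLayer(bottleneck, nhead=4, batch_first=True)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, backbone_states):                  # (batch, seq, hidden)
        return self.up(self.block(self.down(backbone_states)))

backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(768, nhead=12, batch_first=True), num_layers=2)
for p in backbone.parameters():
    p.requires_grad = False                              # original parameters stay fixed

factual_adapter, linguistic_adapter = Adapter(), Adapter()   # trained independently

x = torch.randn(2, 16, 768)
h = backbone(x)
features = torch.cat([h, factual_adapter(h), linguistic_adapter(h)], dim=-1)
print(features.shape)                                    # (2, 16, 2304) fed to a task head
```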
Discontinuous Constituent Parsing with Pointer Networks | Among the most complex syntactic representations used in computational
linguistics and NLP are discontinuous constituent trees, crucial for
representing all grammatical phenomena of languages such as German. Recent
advances in dependency parsing have shown that Pointer Networks excel in
efficiently parsing syntactic relations between words in a sentence. This kind
of sequence-to-sequence model achieves outstanding accuracies in building
non-projective dependency trees, but its potential has not yet been proved on a
more difficult task. We propose a novel neural network architecture that, by
means of Pointer Networks, is able to generate the most accurate discontinuous
constituent representations to date, even without the need of Part-of-Speech
tagging information. To do so, we internally model discontinuous constituent
structures as augmented non-projective dependency structures. The proposed
approach achieves state-of-the-art results on the two widely-used NEGRA and
TIGER benchmarks, outperforming previous work by a wide margin.
| 2,020 | Computation and Language |
Geosocial Location Classification: Associating Type to Places Based on
Geotagged Social-Media Posts | Associating type to locations can be used to enrich maps and can serve a
plethora of geospatial applications. An automatic method to do so could make
the process less expensive in terms of human labor, and faster to react to
changes. In this paper we study the problem of Geosocial Location
Classification, where the type of a site, e.g., a building, is discovered based
on social-media posts. Our goal is to correctly associate a set of messages
posted in a small radius around a given location with the corresponding
location type, e.g., school, church, restaurant or museum. We explore two
approaches to the problem: (a) a pipeline approach, where each message is first
classified, and then the location associated with the message set is inferred
from the individual message labels; and (b) a joint approach where the
individual messages are simultaneously processed to yield the desired location
type. We tested the two approaches over a dataset of geotagged tweets. Our
results demonstrate the superiority of the joint approach. Moreover, we show
that due to the unique structure of the problem, where weakly-related messages
are jointly processed to yield a single final label, linear classifiers
outperform deep neural network alternatives.
| 2,020 | Computation and Language |
Rapid Adaptation of BERT for Information Extraction on Domain-Specific
Business Documents | Techniques for automatically extracting important content elements from
business documents such as contracts, statements, and filings have the
potential to make business operations more efficient. This problem can be
formulated as a sequence labeling task, and we demonstrate the adaptation of BERT
to two types of business documents: regulatory filings and property lease
agreements. There are aspects of this problem that make it easier than
"standard" information extraction tasks and other aspects that make it more
difficult, but on balance we find that modest amounts of annotated data (less
than 100 documents) are sufficient to achieve reasonable accuracy. We integrate
our models into an end-to-end cloud platform that provides both an easy-to-use
annotation interface as well as an inference interface that allows users to
upload documents and inspect model outputs.
| 2,020 | Computation and Language |
UNCC Biomedical Semantic Question Answering Systems. BioASQ: Task-7B,
Phase-B | In this paper, we detail our submission to the 2019, 7th year, BioASQ
competition. We present our approach for Task-7b, Phase B, Exact Answering
Task. These Question Answering (QA) tasks include Factoid, Yes/No, List Type
Question answering. Our system is based on a contextual word embedding model.
We have used a Bidirectional Encoder Representations from Transformers (BERT)
based system, fine-tuned for the biomedical question answering task using BioBERT.
In the third test batch set, our system achieved the highest MRR score for
Factoid Question Answering task. Also, for List type question answering task
our system achieved the highest recall score in the fourth test batch set.
Along with our detailed approach, we present the results for our submissions,
and also highlight identified downsides for our current approach and ways to
improve them in our future experiments.
| 2,020 | Computation and Language |
Aligning the Pretraining and Finetuning Objectives of Language Models | We demonstrate that explicitly aligning the pretraining objectives to the
finetuning objectives in language model training significantly improves the
finetuning task performance and reduces the minimum amount of finetuning
examples required. The performance margin gained from objective alignment
allows us to build language models with smaller sizes for tasks with less
available training data. We provide empirical evidence of these claims by
applying objective alignment to concept-of-interest tagging and acronym
detection tasks. We found that, with objective alignment, our 768 by 3 and 512
by 3 transformer language models can reach accuracy of 83.9%/82.5% for
concept-of-interest tagging and 73.8%/70.2% for acronym detection using only
200 finetuning examples per task, outperforming the 768 by 3 model pretrained
without objective alignment by +4.8%/+3.4% and +9.9%/+6.3%. We name finetuning
small language models in the presence of hundreds of training examples or less
"Few Example learning". In practice, Few Example Learning enabled by objective
alignment not only saves human labeling costs, but also makes it possible to
leverage language models in more real-time applications.
| 2,020 | Computation and Language |
Attractive or Faithful? Popularity-Reinforced Learning for Inspired
Headline Generation | With the rapid proliferation of online media sources and published news,
headlines have become increasingly important for attracting readers to news
articles, since users may be overwhelmed by the massive amount of information. In this
paper, we generate inspired headlines that preserve the nature of news articles
and catch the eye of the reader simultaneously. The task of inspired headline
generation can be viewed as a specific form of Headline Generation (HG) task,
with the emphasis on creating an attractive headline from a given news article.
To generate inspired headlines, we propose a novel framework called
POpularity-Reinforced Learning for inspired Headline Generation (PORL-HG).
PORL-HG exploits the extractive-abstractive architecture with 1) Popular Topic
Attention (PTA) for guiding the extractor to select the attractive sentence
from the article and 2) a popularity predictor for guiding the abstractor to
rewrite the attractive sentence. Moreover, since the sentence selection of the
extractor is not differentiable, techniques of reinforcement learning (RL) are
utilized to bridge the gap with rewards obtained from a popularity score
predictor. Through quantitative and qualitative experiments, we show that the
proposed PORL-HG significantly outperforms the state-of-the-art headline
generation models in terms of attractiveness evaluated by both human (71.03%)
and the predictor (at least 27.60%), while the faithfulness of PORL-HG is also
comparable to the state-of-the-art generation model.
| 2,020 | Computation and Language |
Multilingual acoustic word embedding models for processing zero-resource
languages | Acoustic word embeddings are fixed-dimensional representations of
variable-length speech segments. In settings where unlabelled speech is the
only available resource, such embeddings can be used in "zero-resource" speech
search, indexing and discovery systems. Here we propose to train a single
supervised embedding model on labelled data from multiple well-resourced
languages and then apply it to unseen zero-resource languages. For this
transfer learning approach, we consider two multilingual recurrent neural
network models: a discriminative classifier trained on the joint vocabularies
of all training languages, and a correspondence autoencoder trained to
reconstruct word pairs. We test these using a word discrimination task on six
target zero-resource languages. When trained on seven well-resourced languages,
both models perform similarly and outperform unsupervised models trained on the
zero-resource languages. With just a single training language, the second model
works better, but performance depends more on the particular training--testing
language pair.
| 2,020 | Computation and Language |
A Neural Topical Expansion Framework for Unstructured Persona-oriented
Dialogue Generation | Unstructured Persona-oriented Dialogue Systems (UPDS) have been demonstrated
to be effective in generating persona-consistent responses by utilizing predefined
natural language user persona descriptions (e.g., "I am a vegan"). However, the
predefined user persona descriptions are usually short and limited to only a
few descriptive words, which makes it hard to correlate them with the
dialogues. As a result, existing methods either fail to use the persona
description or use them improperly when generating persona consistent
responses. To address this, we propose a neural topical expansion framework,
namely Persona Exploration and Exploitation (PEE), which is able to extend the
predefined user persona description with semantically correlated content before
utilizing them to generate dialogue responses. PEE consists of two main
modules: persona exploration and persona exploitation. The former learns to
extend the predefined user persona description by mining and correlating it with
an existing dialogue corpus using a variational auto-encoder (VAE) based topic
model. The latter learns to generate persona consistent responses by utilizing
the predefined and extended user persona description. In order to make persona
exploitation learn to utilize user persona description more properly, we also
introduce two persona-oriented loss functions: Persona-oriented Matching
(P-Match) loss and Persona-oriented Bag-of-Words (P-BoWs) loss which
respectively supervise persona selection in encoder and decoder. Experimental
results show that our approach outperforms state-of-the-art baselines, in terms
of both automatic and human evaluations.
| 2,020 | Computation and Language |
Related Tasks can Share! A Multi-task Framework for Affective language | Expressing the polarity of sentiment as 'positive' and 'negative' usually
has limited scope compared with the intensity/degree of polarity. These two
tasks (i.e. sentiment classification and sentiment intensity prediction) are
closely related and may offer assistance to each other during the learning
process. In this paper, we propose to leverage the relatedness of multiple
tasks in a multi-task learning framework. Our multi-task model is based on
convolutional-Gated Recurrent Unit (GRU) framework, which is further assisted
by a diverse hand-crafted feature set. Evaluation and analysis suggest that
joint-learning of the related tasks in a multi-task framework can outperform
each of the individual tasks in the single-task frameworks.
| 2,020 | Computation and Language |
Citation Data of Czech Apex Courts | In this paper, we introduce the citation data of the Czech apex courts
(Supreme Court, Supreme Administrative Court and Constitutional Court). This
dataset was automatically extracted from the corpus of texts of Czech court
decisions - CzCDC 1.0. We obtained the citation data by building the natural
language processing pipeline for extraction of the court decision identifiers.
The pipeline included the (i) document segmentation model and the (ii)
reference recognition model. Furthermore, the dataset was manually processed to
achieve high-quality citation data as a base for subsequent qualitative and
quantitative analyses. The dataset will be made available to the general
public.
| 2,020 | Computation and Language |
Towards Semantic Noise Cleansing of Categorical Data based on Semantic
Infusion | Semantic Noise affects text analytics activities for the domain-specific
industries significantly. It impedes the text understanding which holds prime
importance in the critical decision making tasks. In this work, we formalize
semantic noise as a sequence of terms that do not contribute to the narrative
of the text. We look beyond the notion of standard statistically-based stop
words and consider the semantics of terms to exclude the semantic noise. We
present a novel Semantic Infusion technique to associate meta-data with the
categorical corpus text and demonstrate its near-lossless nature. Based on this
technique, we propose an unsupervised text-preprocessing framework to filter
the semantic noise using the context of the terms. Later we present the
evaluation results of the proposed framework using a web forum dataset from the
automobile-domain.
| 2,020 | Computation and Language |
Conversational Structure Aware and Context Sensitive Topic Model for
Online Discussions | Millions of online discussions are generated everyday on social media
platforms. Topic modelling is an efficient way of better understanding large
text datasets at scale. Conventional topic models have had limited success in
online discussions, and to overcome their limitations, we use the discussion
thread tree structure and propose a "popularity" metric to quantify the number
of replies to a comment to extend the frequency of word occurrences, and the
"transitivity" concept to characterize topic dependency among nodes in a nested
discussion thread. We build a Conversational Structure Aware Topic Model
(CSATM) based on popularity and transitivity to infer topics and their
assignments to comments. Experiments on real forum datasets are used to
demonstrate improved performance for topic extraction with six different
measurements of coherence and impressive accuracy for topic assignments.
| 2,020 | Computation and Language |
Irony Detection in a Multilingual Context | This paper proposes the first multilingual (French, English and Arabic) and
multicultural (Indo-European languages vs. less culturally close languages)
irony detection system. We employ both feature-based models and neural
architectures using monolingual word representation. We compare the performance
of these systems with state-of-the-art systems to identify their capabilities.
We show that these monolingual models trained separately on different languages
using multilingual word representation or text-based features can open the door
to irony detection in languages that lack annotated data for irony.
| 2,020 | Computation and Language |
Goal-Oriented Multi-Task BERT-Based Dialogue State Tracker | Dialogue State Tracking (DST) is a core component of virtual assistants such
as Alexa or Siri. To accomplish various tasks, these assistants need to support
an increasing number of services and APIs. The Schema-Guided State Tracking
track of the 8th Dialogue System Technology Challenge highlighted the DST
problem for unseen services. The organizers introduced the Schema-Guided
Dialogue (SGD) dataset with multi-domain conversations and released a zero-shot
dialogue state tracking model. In this work, we propose a GOaL-Oriented
Multi-task BERT-based dialogue state tracker (GOLOMB) inspired by architectures
for reading comprehension question answering systems. The model "queries"
dialogue history with descriptions of slots and services as well as possible
values of slots. This allows the model to transfer slot values in multi-domain
dialogues and to scale to unseen slot types. Our model achieves a joint
goal accuracy of 53.97% on the SGD dataset, outperforming the baseline model.
| 2,020 | Computation and Language |
Introducing Aspects of Creativity in Automatic Poetry Generation | Poetry Generation involves teaching systems to automatically generate text
that resembles poetic work. A deep learning system can learn to generate poetry
on its own by training on a corpus of poems and modeling the particular style
of language. In this paper, we propose taking an approach that fine-tunes
GPT-2, a pre-trained language model, to our downstream task of poetry
generation. We extend prior work on poetry generation by introducing creative
elements. Specifically, we generate poems that express emotion and elicit the
same in readers, and poems that use the language of dreams---called dream
poetry. We are able to produce poems that correctly elicit the emotions of
sadness and joy 87.5 and 85 percent of the time, respectively. We produce
dreamlike poetry by training on a corpus of texts that describe dreams. Poems
from this model are shown to capture elements of dream poetry with scores of no
less than 3.2 on the Likert scale. We perform crowdsourced human-evaluation for
all our poems. We also make use of the Coh-Metrix tool, outlining metrics we
use to gauge the quality of text generated.
| 2,020 | Computation and Language |
Translating Web Search Queries into Natural Language Questions | Users often query a search engine with a specific question in mind and often
these queries are keywords or sub-sentential fragments. For example, if the
users want to know the answer for "What's the capital of USA", they will most
probably query "capital of USA" or "USA capital" or some keyword-based
variation of this. For example, for the user entered query "capital of USA",
the most probable question intent is "What's the capital of USA?". In this
paper, we are proposing a method to generate well-formed natural language
question from a given keyword-based query, which has the same question intent
as the query. Conversion of keyword-based web query into a well-formed question
has lots of applications, with some of them being in search engines, Community
Question Answering (CQA) websites and bot communication. We found a synergy
between the query-to-question problem and the standard machine translation (MT) task.
We have used both Statistical MT (SMT) and Neural MT (NMT) models to generate
the questions from the query. We have observed that MT models perform well in
terms of both automatic and human evaluation.
| 2,020 | Computation and Language |
Multimodal Matching Transformer for Live Commenting | Automatic live commenting aims to provide real-time comments on videos for
viewers. It encourages user engagement on online video sites, and is also a
good benchmark for video-to-text generation. Recent work on this task adopts
encoder-decoder models to generate comments. However, these methods do not
model the interaction between videos and comments explicitly, so they tend to
generate popular comments that are often irrelevant to the videos. In this
work, we aim to improve the relevance between live comments and videos by
modeling the cross-modal interactions among different modalities. To this end,
we propose a multimodal matching transformer to capture the relationships among
comments, vision, and audio. The proposed model is based on the transformer
framework and can iteratively learn the attention-aware representations for
each modality. We evaluate the model on a publicly available live commenting
dataset. Experiments show that the multimodal matching transformer model
outperforms the state-of-the-art methods.
| 2,020 | Computation and Language |
Incorporating Visual Semantics into Sentence Representations within a
Grounded Space | Language grounding is an active field aiming at enriching textual
representations with visual information. Generally, textual and visual elements
are embedded in the same representation space, which implicitly assumes a
one-to-one correspondence between modalities. This hypothesis does not hold
when representing words, and becomes problematic when used to learn sentence
representations --- the focus of this paper --- as a visual scene can be
described by a wide variety of sentences. To overcome this limitation, we
propose to transfer visual information to textual representations by learning
an intermediate representation space: the grounded space. We further propose
two new complementary objectives ensuring that (1) sentences associated with
the same visual content are close in the grounded space and (2) similarities
between related elements are preserved across modalities. We show that this
model outperforms the previous state-of-the-art on classification and semantic
relatedness tasks.
| 2,020 | Computation and Language |
On-Device Information Extraction from SMS using Hybrid Hierarchical
Classification | Cluttering of the SMS inbox is one of the serious problems that users today face
in the digital world where every online login, transaction, along with
promotions generate multiple SMS. This problem not only prevents users from
searching and navigating messages efficiently but often results in users
missing out the relevant information associated with the corresponding SMS like
offer codes, payment reminders etc. In this paper, we propose a unique
architecture to organize and extract the appropriate information from SMS and
further display it in an intuitive template. In the proposed architecture, we
use a Hybrid Hierarchical Long Short Term Memory (LSTM)-Convolutional Neural
Network (CNN) to categorize SMS into multiple classes followed by a set of
entity parsers used to extract the relevant information from the classified
message. The architecture using its preprocessing techniques not only takes
into account the enormous variations observed in SMS data but also makes it
efficient for its on-device (mobile phone) functionalities in terms of
inference timing and size.
| 2,020 | Computation and Language |
Neural Machine Translation System of Indic Languages -- An Attention
based Approach | Neural machine translation (NMT) is a recent and effective technique which
has led to remarkable improvements over conventional machine
translation techniques. The proposed neural machine translation model developed for
the Gujarati language contains an encoder-decoder with an attention mechanism. In
India, almost all the languages originate from their ancestral language,
Sanskrit, and have inevitable similarities including lexical and named
entity similarity. Translating into Indic languages has always been a challenging
task. In this paper, we present a neural machine translation system
(NMT) that can efficiently translate Indic languages like Hindi and Gujarati,
which together cover more than 58.49 percent of the total speakers in the
country. We have compared the performance of our NMT model using automatic
evaluation metrics such as BLEU, perplexity and TER. A comparison of
our network with Google Translate is also presented, where our network
outperformed it by a margin of 6 BLEU points on English-Gujarati translation.
| 2,019 | Computation and Language |
BERT-of-Theseus: Compressing BERT by Progressive Module Replacing | In this paper, we propose a novel model compression approach to effectively
compress BERT by progressive module replacing. Our approach first divides the
original BERT into several modules and builds their compact substitutes. Then,
we randomly replace the original modules with their substitutes to train the
compact modules to mimic the behavior of the original modules. We progressively
increase the probability of replacement through the training. In this way, our
approach brings a deeper level of interaction between the original and compact
models. Compared to the previous knowledge distillation approaches for BERT
compression, our approach does not introduce any additional loss function. Our
approach outperforms existing knowledge distillation approaches on GLUE
benchmark, showing a new perspective of model compression.
| 2,020 | Computation and Language |
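A minimal sketch of progressive module replacing as described above: during training
each predecessor module is swapped for its compact substitute with some probability, and
that probability is annealed upward so the compact model gradually takes over. The toy
linear modules, the schedule, and the loss below are placeholders; only the replacement
mechanism itself is taken from the abstract.

```python
import random
import torch
import torch.nn as nn

original = nn.ModuleList([nn.Linear(64, 64) for _ in range(6)])   # predecessor modules
compact = nn.ModuleList([nn.Linear(64, 64) for _ in range(3)])    # each substitute covers 2 originals
for p in original.parameters():
    p.requires_grad = False                                        # only substitutes are trained

def forward(x, replace_prob):
    # Decide independently per slot whether the compact substitute is used; force at
    # least one substitution so this toy loss always has trainable parameters in the graph.
    replaced = [random.random() < replace_prob for _ in compact]
    if not any(replaced):
        replaced[random.randrange(len(compact))] = True
    for i, use_compact in enumerate(replaced):
        x = compact[i](x) if use_compact else original[2 * i + 1](original[2 * i](x))
    return x

optimizer = torch.optim.Adam(compact.parameters(), lr=1e-3)
x, target = torch.randn(8, 64), torch.randn(8, 64)
for step in range(100):
    replace_prob = min(1.0, 0.3 + step / 50)          # progressively increase replacement
    loss = nn.functional.mse_loss(forward(x, replace_prob), target)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
```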
A Multilingual View of Unsupervised Machine Translation | We present a probabilistic framework for multilingual neural machine
translation that encompasses supervised and unsupervised setups, focusing on
unsupervised translation. In addition to studying the vanilla case where there
is only monolingual data available, we propose a novel setup where one language
in the (source, target) pair is not associated with any parallel data, but
there may exist auxiliary parallel data that contains the other. This auxiliary
data can naturally be utilized in our probabilistic framework via a novel
cross-translation loss term. Empirically, we show that our approach results in
higher BLEU scores over state-of-the-art unsupervised models on the WMT'14
English-French, WMT'16 English-German, and WMT'16 English-Romanian datasets in
most directions. In particular, we obtain a +1.65 BLEU advantage over the
best-performing unsupervised model in the Romanian-English direction.
| 2,020 | Computation and Language |
Snippext: Semi-supervised Opinion Mining with Augmented Data | Online services are interested in solutions to opinion mining, which is the
problem of extracting aspects, opinions, and sentiments from text. One method
to mine opinions is to leverage the recent success of pre-trained language
models which can be fine-tuned to obtain high-quality extractions from reviews.
However, fine-tuning language models still requires a non-trivial amount of
training data. In this paper, we study the problem of how to significantly
reduce the amount of labeled training data required in fine-tuning language
models for opinion mining. We describe Snippext, an opinion mining system
developed over a language model that is fine-tuned through semi-supervised
learning with augmented data. A novelty of Snippext is its clever use of a
two-prong approach to achieve state-of-the-art (SOTA) performance with little
labeled training data through: (1) data augmentation to automatically generate
more labeled training data from existing ones, and (2) a semi-supervised
learning technique to leverage the massive amount of unlabeled data in addition
to the (limited amount of) labeled data. We show with extensive experiments
that Snippext performs comparably and can even exceed previous SOTA results on
several opinion mining tasks with only half the training data required.
Furthermore, it achieves new SOTA results when all training data are leveraged.
By comparison to a baseline pipeline, we found that Snippext extracts
significantly more fine-grained opinions which enable new opportunities of
downstream applications.
| 2,020 | Computation and Language |
autoNLP: NLP Feature Recommendations for Text Analytics Applications | While designing machine learning based text analytics applications, often,
NLP data scientists manually determine which NLP features to use based upon
their knowledge and experience with related problems. This results in increased
efforts during feature engineering process and renders automated reuse of
features across semantically related applications inherently difficult. In this
paper, we argue for standardization in feature specification by outlining
structure of a language for specifying NLP features and present an approach for
their reuse across applications to increase the likelihood of identifying
optimal features.
| 2,020 | Computation and Language |
Description Based Text Classification with Reinforcement Learning | The task of text classification is usually divided into two stages: text
feature extraction and classification. In this standard formalization,
categories are merely represented as indexes in the label vocabulary, and the
model lacks explicit instructions on what to classify. Inspired by the
current trend of formalizing NLP problems as question answering tasks, we
propose a new framework for text classification, in which each category label
is associated with a category description. Descriptions are generated by
hand-crafted templates or using abstractive/extractive models from
reinforcement learning. The concatenation of the description and the text is
fed to the classifier to decide whether or not the current label should be
assigned to the text. The proposed strategy forces the model to attend to the
most salient texts with respect to the label, which can be regarded as a hard
version of attention, leading to better performances. We observe significant
performance boosts over strong baselines on a wide range of text classification
tasks including single-label classification, multi-label classification and
multi-aspect sentiment analysis.
| 2,020 | Computation and Language |
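A small sketch of the description-based formulation above: every candidate label comes
with a natural-language description, the description is concatenated with the input
text, and a binary decision is made per label. The scorer and the descriptions below
are toy stand-ins for a trained model and hand-crafted or generated templates.

```python
from typing import Callable, Dict, List

def classify(text: str,
             label_descriptions: Dict[str, str],
             entails: Callable[[str], float]) -> List[str]:
    """Return every label whose (description, text) pair the scorer accepts."""
    assigned = []
    for label, desc in label_descriptions.items():
        pair = f"{desc} [SEP] {text}"          # description + text fed to the classifier
        if entails(pair) > 0.5:                # binary decision per label
            assigned.append(label)
    return assigned

if __name__ == "__main__":
    descriptions = {
        "sports": "The text is about sports, games, athletes or competitions.",
        "finance": "The text is about markets, money, banking or investment.",
    }
    toy_scorer = lambda pair: 0.9 if "match" in pair else 0.1   # stand-in for a trained model
    print(classify("The final match drew a record crowd.", descriptions, toy_scorer))
```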
Blank Language Models | We propose Blank Language Model (BLM), a model that generates sequences by
dynamically creating and filling in blanks. The blanks control which part of
the sequence to expand, making BLM ideal for a variety of text editing and
rewriting tasks. The model can start from a single blank or partially completed
text with blanks at specified locations. It iteratively determines which word
to place in a blank and whether to insert new blanks, and stops generating when
no blanks are left to fill. BLM can be efficiently trained using a lower bound
of the marginal data likelihood. On the task of filling missing text snippets,
BLM significantly outperforms all other baselines in terms of both accuracy and
fluency. Experiments on style transfer and damaged ancient text restoration
demonstrate the potential of this framework for a wide range of applications.
| 2,020 | Computation and Language |
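An illustrative generation loop for the blank-filling procedure described above: start
from a single blank, repeatedly choose a blank, place a word in it, optionally open new
blanks around it, and stop when no blanks remain. The random choices below stand in for
a trained BLM, which would score these decisions instead.

```python
import random

def generate(vocab, max_steps=20, seed=0):
    random.seed(seed)
    seq = ["__"]                                        # a single initial blank
    for _ in range(max_steps):
        blanks = [i for i, t in enumerate(seq) if t == "__"]
        if not blanks:
            break                                       # no blanks left: generation stops
        i = random.choice(blanks)                       # which blank to fill
        word = random.choice(vocab)                     # which word to place
        left = ["__"] if random.random() < 0.4 else []  # whether to open new blanks
        right = ["__"] if random.random() < 0.4 else []
        seq = seq[:i] + left + [word] + right + seq[i + 1:]
    return " ".join(t for t in seq if t != "__")

print(generate(["ancient", "text", "was", "restored", "the", "carefully"]))
```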
LAVA NAT: A Non-Autoregressive Translation Model with Look-Around
Decoding and Vocabulary Attention | Non-autoregressive translation (NAT) models generate multiple tokens in one
forward pass and are highly efficient at the inference stage compared with
autoregressive translation (AT) methods. However, NAT models often suffer from
the multimodality problem, i.e., generating duplicated tokens or missing
tokens. In this paper, we propose two novel methods to address this issue, the
Look-Around (LA) strategy and the Vocabulary Attention (VA) mechanism. The
Look-Around strategy predicts the neighbor tokens in order to predict the
current token, and the Vocabulary Attention models long-term token dependencies
inside the decoder by attending over the whole vocabulary at each position to acquire knowledge of which token is about to be generated. We also propose a dynamic bidirectional decoding approach to accelerate the inference process of the LAVA model while preserving the high quality of the generated output. Our
proposed model uses significantly less time during inference compared with
autoregressive models and most other NAT models. Our experiments on four
benchmarks (WMT14 En$\rightarrow$De, WMT14 De$\rightarrow$En, WMT16
Ro$\rightarrow$En and IWSLT14 De$\rightarrow$En) show that the proposed model
achieves competitive performance compared with the state-of-the-art
non-autoregressive and autoregressive models while significantly reducing the
time cost in the inference phase.
| 2,020 | Computation and Language |
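The abstract gives only a high-level description of Vocabulary Attention, so the sketch below is one plausible reading: each decoder position attends over the whole vocabulary embedding table and mixes the expected token embedding back into its hidden state, giving the position a soft preview of the token it is about to generate. The exact formulation in the paper may differ.

```python
# Hedged sketch of a vocabulary-attention layer: attention weights over the
# full vocabulary at every position, with the expected token embedding
# concatenated back into the hidden state.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VocabularyAttention(nn.Module):
    def __init__(self, vocab_size=10000, dim=512):
        super().__init__()
        self.vocab_emb = nn.Embedding(vocab_size, dim)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, hidden):                       # hidden: (batch, len, dim)
        # Attention weights over the whole vocabulary for each position.
        scores = hidden @ self.vocab_emb.weight.t()  # (batch, len, vocab)
        probs = F.softmax(scores, dim=-1)
        # Expected token embedding under those weights.
        expected = probs @ self.vocab_emb.weight     # (batch, len, dim)
        return self.out(torch.cat([hidden, expected], dim=-1))

va = VocabularyAttention()
h = torch.randn(2, 7, 512)
print(va(h).shape)  # torch.Size([2, 7, 512])
```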
HHH: An Online Medical Chatbot System based on Knowledge Graph and
Hierarchical Bi-Directional Attention | This paper proposes a chatbot framework that adopts a hybrid model which
consists of a knowledge graph and a text similarity model. Based on this
chatbot framework, we build HHH, an online question-and-answer (QA) Healthcare
Helper system for answering complex medical questions. HHH maintains a
knowledge graph constructed from medical data collected from the Internet. HHH
also implements a novel text representation and similarity deep learning model,
Hierarchical BiLSTM Attention Model (HBAM), to find the most similar question
from a large QA dataset. We compare HBAM with other state-of-the-art language
models such as Bidirectional Encoder Representations from Transformers (BERT)
and Manhattan LSTM Model (MaLSTM). We train and test the models with a subset
of the Quora duplicate questions dataset in the medical area. The experimental
results show that our model is able to achieve superior performance compared with these existing methods.
| 2,020 | Computation and Language |
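HBAM's exact architecture is not spelled out in the abstract, but it belongs to the same family as MaLSTM: a siamese recurrent encoder with attention that scores how similar a user question is to each stored QA question. The sketch below shows that general family only and is not the paper's model.

```python
# Rough sketch of a siamese BiLSTM-with-attention similarity model, in the
# spirit of MaLSTM-style question matching. Sizes and pooling are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseBiLSTM(nn.Module):
    def __init__(self, vocab=20000, dim=128, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.lstm = nn.LSTM(dim, hidden, bidirectional=True, batch_first=True)
        self.att = nn.Linear(2 * hidden, 1)

    def encode(self, ids):                        # ids: (batch, len)
        states, _ = self.lstm(self.emb(ids))      # (batch, len, 2*hidden)
        weights = F.softmax(self.att(states), dim=1)
        return (weights * states).sum(dim=1)      # attention-pooled vector

    def forward(self, q1, q2):
        v1, v2 = self.encode(q1), self.encode(q2)
        # Manhattan-style similarity, as in MaLSTM: exp(-||v1 - v2||_1).
        return torch.exp(-(v1 - v2).abs().sum(dim=1))

model = SiameseBiLSTM()
user_q = torch.randint(0, 20000, (1, 12))
kb_q = torch.randint(0, 20000, (1, 12))
print(model(user_q, kb_q))   # similarity in (0, 1]; pick the best-matching KB question
```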
Mining Commonsense Facts from the Physical World | Textual descriptions of the physical world implicitly mention commonsense
facts, while commonsense knowledge bases explicitly represent such facts as triples. Compared with the dramatic increase in text data, the coverage of existing knowledge bases is far from complete. Most of the prior studies on populating knowledge bases mainly focus on Freebase; automatically completing commonsense knowledge bases to improve their coverage remains under-explored. In this paper, we propose a new task of mining commonsense facts from raw text that describes the physical world. We build an effective new model that fuses information from both the sequence text and existing knowledge base resources. Then we create two large annotated datasets, each with approximately 200k instances, for
commonsense knowledge base completion. Empirical results demonstrate that our
model significantly outperforms baselines.
| 2,020 | Computation and Language |
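The abstract above describes fusing a text encoder with an existing knowledge base resource to score candidate facts, without giving the architecture. The following is an illustrative fusion sketch only; the encoder choice, embedding sizes, and scoring head are assumptions and do not reproduce the paper's model.

```python
# Illustrative sketch: encode the sentence, embed a candidate (head,
# relation, tail) triple with KB embeddings, and score the fused vector
# for plausibility of the commonsense fact.
import torch
import torch.nn as nn

class TripleScorer(nn.Module):
    def __init__(self, vocab=20000, n_entities=5000, n_relations=40, dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab, dim)
        self.text_enc = nn.LSTM(dim, dim, batch_first=True)
        self.ent_emb = nn.Embedding(n_entities, dim)   # existing KB resource
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.score = nn.Linear(4 * dim, 1)

    def forward(self, sent_ids, head, rel, tail):
        _, (h, _) = self.text_enc(self.word_emb(sent_ids))
        text_vec = h[-1]                               # (batch, dim)
        fused = torch.cat([text_vec,
                           self.ent_emb(head),
                           self.rel_emb(rel),
                           self.ent_emb(tail)], dim=-1)
        return self.score(fused).squeeze(-1)           # plausibility logit

model = TripleScorer()
sent = torch.randint(0, 20000, (1, 15))
print(model(sent, torch.tensor([3]), torch.tensor([1]), torch.tensor([7])))
```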
Rough Set based Aggregate Rank Measure & its Application to Supervised
Multi Document Summarization | Most problems in Machine Learning cater to classification, where objects of the universe are assigned to a relevant class. Ranking the classified objects of the universe within each decision class is a challenging problem. In this paper we propose a novel Rough Set based membership, called the Rank Measure, to solve this problem; it is used to rank the elements belonging to a particular class. It differs from the Pawlak Rough Set based membership function, which gives an equivalent characterization of the Rough Set based approximations. It becomes
paramount to look beyond the traditional approach of computing memberships
while handling inconsistent, erroneous and missing data that is typically
present in real world problems. This led us to propose the aggregate Rank
Measure. The contribution of the paper is threefold. First, it proposes a Rough Set based measure to be used for the numerical characterization of within-class ranking of objects. Second, it proposes and establishes the properties of the Rank Measure and the aggregate Rank Measure based membership. Third, we apply the concept of membership and aggregate ranking to the problem of supervised Multi Document Summarization, wherein the important class of sentences is first determined using various supervised learning techniques and then post-processed using the proposed ranking measure. The results show a significant improvement in accuracy.
| 2,020 | Computation and Language |
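The abstract does not give the Rank Measure formula, so the sketch below only shows the classical Pawlak rough membership it departs from, mu_X(x) = |[x]_B intersect X| / |[x]_B| where [x]_B is the equivalence class of x under attributes B, together with a naive within-class ranking of objects by that membership. The toy decision table and the ranking rule are illustrative assumptions, not the paper's aggregate Rank Measure.

```python
# Classical Pawlak rough membership on a toy decision table, followed by a
# naive ranking of objects inside one decision class by that membership.
from collections import defaultdict

# Toy decision table: object -> (attribute values, decision class).
table = {
    "s1": (("short", "noun_heavy"), "important"),
    "s2": (("short", "noun_heavy"), "unimportant"),
    "s3": (("long", "verb_heavy"), "important"),
    "s4": (("short", "verb_heavy"), "important"),
}

# Equivalence classes [x]_B induced by the attribute values.
eq_classes = defaultdict(set)
for obj, (attrs, _) in table.items():
    eq_classes[attrs].add(obj)

def rough_membership(obj, decision):
    block = eq_classes[table[obj][0]]
    target = {o for o, (_, d) in table.items() if d == decision}
    return len(block & target) / len(block)

# Rank objects within the "important" class by membership (illustrative).
ranked = sorted((o for o, (_, d) in table.items() if d == "important"),
                key=lambda o: rough_membership(o, "important"), reverse=True)
print([(o, rough_membership(o, "important")) for o in ranked])
```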
Short Text Classification via Knowledge powered Attention with
Similarity Matrix based CNN | Short text is becoming more and more popular on the web, such as Chat
Message, SMS and Product Reviews. Accurately classifying short text is an
important and challenging task. Many studies have difficulty addressing this problem because of word ambiguity and data sparsity. To address this issue, we propose a knowledge powered attention with similarity matrix based convolutional neural network (KASM) model, which computes comprehensive information by combining knowledge with a deep neural network. We use a knowledge graph (KG) to enrich the semantic representation of short text; in particular, parent-entity information is introduced into our model. Meanwhile, we consider word-level interactions between the short text and the label representation, and use a similarity matrix based convolutional neural network (CNN) to extract them. To measure the importance of knowledge, we introduce an attention mechanism to select the important information. Experimental results on five standard datasets show
that our model significantly outperforms state-of-the-art methods.
| 2,021 | Computation and Language |
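A hedged sketch of the similarity-matrix-plus-CNN component described above: build a word-level similarity matrix between the short text and the label representation, then let a small CNN extract interaction features from it. The KG enrichment and knowledge attention are omitted, and all sizes are assumptions.

```python
# Word-level similarity matrix between short-text tokens and label tokens,
# followed by a small CNN over that matrix. Illustrative sizes only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimMatrixCNN(nn.Module):
    def __init__(self, vocab=20000, dim=100, channels=8, num_labels=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.conv = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.cls = nn.Linear(channels, num_labels)

    def forward(self, text_ids, label_ids):
        t = F.normalize(self.emb(text_ids), dim=-1)    # (batch, n, dim)
        l = F.normalize(self.emb(label_ids), dim=-1)   # (batch, m, dim)
        sim = t @ l.transpose(1, 2)                    # cosine similarities (batch, n, m)
        feats = F.relu(self.conv(sim.unsqueeze(1)))    # (batch, channels, n, m)
        pooled = feats.amax(dim=(2, 3))                # global max pooling
        return self.cls(pooled)

model = SimMatrixCNN()
text = torch.randint(0, 20000, (2, 12))     # short texts
label = torch.randint(0, 20000, (2, 4))     # tokens of a label description
print(model(text, label).shape)              # torch.Size([2, 5])
```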
Attend to the beginning: A study on using bidirectional attention for
extractive summarization | Forum discussion data differ in both structure and properties from generic forms of textual data such as news. Hence, summarization techniques should, in turn, make use of such differences and craft models that can benefit from
the structural nature of discussion data. In this work, we propose attending to
the beginning of a document, to improve the performance of extractive
summarization models when applied to forum discussion data. Evaluations
demonstrated that, with the help of a bidirectional attention mechanism, attending to the beginning of a document (the initial comment/post) in a discussion thread can introduce a consistent boost in ROUGE scores, as well as establishing new State Of The Art (SOTA) ROUGE scores on the forum discussions dataset. Additionally, we explored whether this hypothesis is extendable to other generic forms of textual data. We make use of the tendency to introduce important information early in the text by attending to the first few sentences in generic textual data. Evaluations demonstrated that attending to introductory sentences using bidirectional attention improves the performance of extractive summarization models even when applied to more generic forms of textual data.
| 2,020 | Computation and Language |
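One way to picture "attending to the beginning" with bidirectional attention is below: candidate sentence vectors attend to the initial post and vice versa, and the attended context is folded back into each sentence vector before it is scored for extraction. This is an illustrative reading of the abstract, not the paper's exact encoders or scorer.

```python
# Bidirectional attention between candidate sentences and the initial
# comment/post of a thread, used to enrich sentence vectors before
# extraction scoring. Shapes and the fusion rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeadAwareScorer(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, sents, lead):
        # sents: (num_sents, dim) -- one vector per candidate sentence
        # lead:  (num_lead, dim)  -- vectors for the initial comment/post
        attn = sents @ lead.t()                       # (num_sents, num_lead)
        s2l = F.softmax(attn, dim=1) @ lead           # sentence -> lead context
        l2s = F.softmax(attn, dim=0).t() @ sents      # lead -> sentence context
        # Fold the lead-to-sentence summary back into every sentence.
        lead_summary = l2s.mean(dim=0, keepdim=True).expand_as(s2l)
        enriched = torch.cat([sents, (s2l + lead_summary) / 2], dim=-1)
        return self.score(enriched).squeeze(-1)       # extraction logits

scorer = LeadAwareScorer()
sentences = torch.randn(10, 256)
initial_post = torch.randn(3, 256)
print(scorer(sentences, initial_post).shape)          # torch.Size([10])
```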
Abstractive Summarization for Low Resource Data using Domain Transfer
and Data Synthesis | Training abstractive summarization models typically requires large amounts of
data, which can be a limitation for many domains. In this paper we explore
using domain transfer and data synthesis to improve the performance of recent
abstractive summarization methods when applied to small corpora of student
reflections. First, we explored whether tuning a state of the art model trained on newspaper data could boost performance on student reflection data. Evaluations demonstrated that summaries produced by the tuned model achieved higher ROUGE scores compared to a model trained on just student reflection data
or just newspaper data. The tuned model also achieved higher scores compared to
extractive summarization baselines, and additionally was judged to produce more
coherent and readable summaries in human evaluations. Second, we explored
whether synthesizing summaries of student data could additionally boost
performance. We proposed a template-based model to synthesize new data, which
when incorporated into training further increased ROUGE scores. Finally, we
showed that combining data synthesis with domain transfer achieved higher ROUGE
scores compared to only using one of the two approaches.
| 2,020 | Computation and Language |
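The data-synthesis half of the approach above can be pictured with a toy template-filling routine: extract frequent content words from a small set of reflections and slot them into hand-written summary templates to create extra (input, summary) training pairs. The templates, stopword list, and phrase extraction below are all illustrative assumptions, not the paper's actual templates or synthesis model.

```python
# Toy template-based synthesis of (reflections, summary) training pairs.
from collections import Counter
import random

reflections = [
    "I was confused by recursion and the base case examples",
    "Recursion examples were hard, especially the tree traversal",
    "The lecture pace was fine but recursion needs more examples",
]

TEMPLATES = [                       # hypothetical summary templates
    "Most students struggled with {topic}.",
    "Students asked for more examples about {topic}.",
]

STOPWORDS = {"i", "was", "by", "and", "the", "were", "but", "more", "needs"}

def synthesize(refls, n=2, seed=0):
    rng = random.Random(seed)
    words = Counter(w.lower() for r in refls for w in r.split()
                    if w.lower() not in STOPWORDS)
    topics = [w for w, _ in words.most_common(3)]
    pairs = []
    for _ in range(n):
        summary = rng.choice(TEMPLATES).format(topic=rng.choice(topics))
        pairs.append((refls, summary))   # synthetic (input, summary) pair
    return pairs

for _, summary in synthesize(reflections):
    print(summary)
```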