Titles | Abstracts | Years | Categories |
---|---|---|---|
Automatic Reference-Based Evaluation of Pronoun Translation Misses the
Point | We compare the performance of the APT and AutoPRF metrics for pronoun
translation against a manually annotated dataset comprising human judgements as
to the correctness of translations of the PROTEST test suite. Although there is
some correlation with the human judgements, a range of issues limit the
performance of the automated metrics. Instead, we recommend the use of
semi-automatic metrics and test suites in place of fully automatic metrics.
| 2,018 | Computation and Language |
Rapid Adaptation of Neural Machine Translation to New Languages | This paper examines the problem of adapting neural machine translation
systems to new, low-resourced languages (LRLs) as effectively and rapidly as
possible. We propose methods based on starting with massively multilingual
"seed models", which can be trained ahead-of-time, and then continuing training
on data related to the LRL. We contrast a number of strategies, leading to a
novel, simple, yet effective method of "similar-language regularization", where
we jointly train on both a LRL of interest and a similar high-resourced
language to prevent over-fitting to small LRL data. Experiments demonstrate
that massively multilingual models, even without any explicit adaptation, are
surprisingly effective, achieving BLEU scores of up to 15.5 with no data from
the LRL, and that the proposed similar-language regularization method improves
over other adaptation methods by an average of 1.7 BLEU points over 4 LRL settings.
Code to reproduce our experiments is available at https://github.com/neubig/rapid-adaptation
| 2,018 | Computation and Language |
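The "similar-language regularization" described in the abstract above amounts to jointly sampling training batches from the small low-resourced language (LRL) corpus and a related high-resourced language (HRL) corpus. Below is a minimal, hypothetical sketch of such a data-mixing loop; the file names, mixing ratio, and trainer are assumptions, not the authors' implementation.

```python
import random

def mixed_batches(lrl_pairs, hrl_pairs, batch_size=32, hrl_ratio=0.5, seed=0):
    """Yield training batches that mix LRL and related-HRL sentence pairs.

    lrl_pairs / hrl_pairs: lists of (source, target) sentence tuples.
    hrl_ratio: fraction of each batch drawn from the high-resourced language,
    acting as a regularizer against over-fitting to the small LRL data.
    """
    rng = random.Random(seed)
    n_hrl = int(batch_size * hrl_ratio)
    n_lrl = batch_size - n_hrl
    while True:
        batch = rng.sample(hrl_pairs, n_hrl) + rng.choices(lrl_pairs, k=n_lrl)
        rng.shuffle(batch)
        yield batch

# Hypothetical usage: feed these batches to any seq2seq trainer.
# lrl = load_pairs("azeri-english.tsv"); hrl = load_pairs("turkish-english.tsv")
# for batch in mixed_batches(lrl, hrl): trainer.step(batch)
```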
Neural Semi-Markov Conditional Random Fields for Robust Character-Based
Part-of-Speech Tagging | Character-level models of tokens have been shown to be effective at dealing
with within-token noise and out-of-vocabulary words. But these models still
rely on correct token boundaries. In this paper, we propose a novel end-to-end
character-level model and demonstrate its effectiveness in multilingual
settings and when token boundaries are noisy. Our model is a semi-Markov
conditional random field with neural networks for character and segment
representation. It requires no tokenizer. The model matches state-of-the-art
baselines for various languages and significantly outperforms them on a noisy
English version of a part-of-speech tagging benchmark dataset. Our code and the
noisy dataset are publicly available at http://cistern.cis.lmu.de/semiCRF.
| 2,020 | Computation and Language |
Unsupervised Learning of Sentence Representations Using Sequence
Consistency | Computing universal distributed representations of sentences is a fundamental
task in natural language processing. We propose ConsSent, a simple yet
surprisingly powerful unsupervised method to learn such representations by
enforcing consistency constraints on sequences of tokens. We consider two
classes of such constraints -- single sequences that form a sentence, and pairs
of sequences that form a sentence when merged. We learn sentence encoders by
training them to distinguish between consistent and inconsistent examples, the
latter being generated by randomly perturbing consistent examples in six
different ways. Extensive evaluation on several transfer learning and
linguistic probing tasks shows improved performance over strong unsupervised
and supervised baselines, substantially surpassing them in several cases. Our
best results are achieved by training sentence encoders in a multitask setting
and by an ensemble of encoders trained on the individual tasks.
| 2,019 | Computation and Language |
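To illustrate how negative ("inconsistent") examples of the kind ConsSent trains against can be generated by perturbing token sequences, here is a small hypothetical sketch; the three perturbations shown are illustrative stand-ins for the six used in the paper.

```python
import random

def make_inconsistent(tokens, rng=None):
    """Return a perturbed copy of a token list to serve as a negative example."""
    rng = rng or random.Random(0)
    t = list(tokens)
    op = rng.choice(["delete", "swap", "shuffle_span"])
    if op == "delete" and len(t) > 1:          # drop a random token
        del t[rng.randrange(len(t))]
    elif op == "swap" and len(t) > 1:          # swap two random tokens
        i, j = rng.sample(range(len(t)), 2)
        t[i], t[j] = t[j], t[i]
    elif len(t) > 2:                           # shuffle a short span in place
        i = rng.randrange(len(t) - 2)
        span = t[i:i + 3]
        rng.shuffle(span)
        t[i:i + 3] = span
    return t

consistent = "the cat sat on the mat".split()
inconsistent = make_inconsistent(consistent)
# Train an encoder to distinguish consistent from inconsistent token sequences.
```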
Comparing morphological complexity of Spanish, Otomi and Nahuatl | We use two small parallel corpora for comparing the morphological complexity
of Spanish, Otomi and Nahuatl. These languages belong to different linguistic
families, and the latter two are low-resourced. We take into account two
quantitative criteria: on the one hand, the distribution of types over tokens in a
corpus; on the other, perplexity and entropy as indicators of word-structure
predictability. We show that a language can be complex in terms of how many
different morphological word forms it can produce, yet less complex
in terms of the predictability of the internal structure of its words.
| 2,018 | Computation and Language |
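The two quantitative criteria mentioned above, the type-over-token distribution and entropy as a predictability indicator, can be computed on a tokenized corpus in a few lines. A minimal sketch (word-level unigram statistics only; the study itself also reasons about word-internal structure):

```python
import math
from collections import Counter

def type_token_ratio(tokens):
    """Share of distinct word forms among all tokens (higher = richer morphology)."""
    return len(set(tokens)) / len(tokens)

def unigram_entropy(tokens):
    """Shannon entropy (bits) of the empirical unigram distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

corpus = "in xochitl in cuicatl in xochitl".split()   # placeholder tokenized corpus
print(type_token_ratio(corpus), unigram_entropy(corpus))
```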
Angular-Based Word Meta-Embedding Learning | Ensembling word embeddings to improve distributed word representations has
shown good success for natural language processing tasks in recent years. These
approaches either carry out straightforward mathematical operations over a set
of vectors or use unsupervised learning to find a lower-dimensional
representation. This work compares meta-embeddings trained with different
losses, namely loss functions that account for the angular distance between the
reconstructed embedding and the target, and those that account for normalized
distances based on the vector length. We argue that it is better for
meta-embeddings to treat the ensemble set equally in unsupervised learning, as the
respective quality of each embedding is unknown for upstream tasks prior to
meta-embedding. We show that normalization methods that account for this, such
as cosine and KL-divergence objectives, outperform meta-embeddings trained on
standard $\ell_1$ and $\ell_2$ losses on \textit{de facto} word similarity and
relatedness datasets, and that they outperform existing meta-learning strategies.
| 2,018 | Computation and Language |
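For concreteness, the contrast between an angular (cosine) reconstruction objective and a plain $\ell_2$ objective can be written down directly. A NumPy sketch, not the paper's training code:

```python
import numpy as np

def l2_loss(reconstructed, target):
    """Standard squared Euclidean reconstruction loss."""
    return float(np.sum((reconstructed - target) ** 2))

def cosine_loss(reconstructed, target, eps=1e-8):
    """Angular-style loss: 1 - cosine similarity, insensitive to vector length."""
    num = float(np.dot(reconstructed, target))
    den = float(np.linalg.norm(reconstructed) * np.linalg.norm(target) + eps)
    return 1.0 - num / den

source = np.array([1.0, 2.0, 0.5])   # embedding from one source model
recon = np.array([2.0, 4.0, 1.0])    # reconstruction with a different norm
print(l2_loss(recon, source))        # large: penalizes the length mismatch
print(cosine_loss(recon, source))    # near 0: same direction, so angular loss is tiny
```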
Disentangled Representation Learning for Non-Parallel Text Style
Transfer | This paper tackles the problem of disentangling the latent variables of style
and content in language models. We propose a simple yet effective approach,
which incorporates auxiliary multi-task and adversarial objectives, for label
prediction and bag-of-words prediction, respectively. We show, both
qualitatively and quantitatively, that the style and content are indeed
disentangled in the latent space. This disentangled latent representation
learning method is applied to style transfer on non-parallel corpora. We
achieve substantially better results in terms of transfer accuracy, content
preservation and language fluency, in comparison to previous state-of-the-art
approaches.
| 2,018 | Computation and Language |
REGMAPR - Text Matching Made Easy | Text matching is a fundamental problem in natural language processing. Neural
models using bidirectional LSTMs for sentence encoding and inter-sentence
attention mechanisms perform remarkably well on several benchmark datasets. We
propose REGMAPR - a simple and general architecture for text matching that does
not use inter-sentence attention. Starting from a Siamese architecture, we
augment the embeddings of the words with two features based on exact and
paraphrase match between words in the two sentences. We train the model using
three types of regularization on datasets for textual entailment, paraphrase
detection and semantic relatedness. REGMAPR performs comparably or better
than more complex neural models or models using a large number of handcrafted
features. REGMAPR achieves state-of-the-art results for paraphrase detection on
the SICK dataset and for textual entailment on the SNLI dataset among models
that do not use inter-sentence attention.
| 2,018 | Computation and Language |
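The two word-level features described above, exact match and paraphrase match against the other sentence, amount to appending two binary flags to each word embedding. A hedged sketch; the paraphrase lexicon and its lookup are assumptions, and REGMAPR's actual feature construction may differ:

```python
def match_features(tokens_a, tokens_b, paraphrase_pairs):
    """For each token in sentence A, return (exact_match, paraphrase_match) flags
    computed against sentence B; these get concatenated to the word embeddings."""
    b_set = set(tokens_b)
    feats = []
    for tok in tokens_a:
        exact = 1.0 if tok in b_set else 0.0
        para = 1.0 if any((tok, b) in paraphrase_pairs or (b, tok) in paraphrase_pairs
                          for b in b_set) else 0.0
        feats.append((exact, para))
    return feats

# Toy paraphrase lexicon (hypothetical; e.g. derived from a PPDB-style resource).
ppdb = {("movie", "film"), ("couch", "sofa")}
print(match_features("a man watches a movie".split(),
                     "the film is watched by a man".split(), ppdb))
```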
D-PAGE: Diverse Paraphrase Generation | In this paper, we investigate the diversity aspect of paraphrase generation.
Prior deep learning models either rely on decoding methods or add random input
noise to vary their outputs. We propose a simple method, Diverse Paraphrase
Generation (D-PAGE), which extends neural machine translation (NMT) models to
support the generation of diverse paraphrases with implicit rewriting patterns.
Our experimental results on two real-world benchmark datasets demonstrate that
our model generates at least one order of magnitude more diverse outputs than
the baselines in terms of a new evaluation metric Jeffrey's Divergence. We have
also conducted extensive experiments to understand various properties of our
model with a focus on diversity.
| 2,018 | Computation and Language |
What is wrong with style transfer for texts? | A number of recent machine learning papers address automated style
transfer for texts and, counter to intuition, demonstrate that there is no
consensus formulation of this NLP task. Different researchers propose different
algorithms, datasets and target metrics to address it. This short opinion paper
aims to discuss possible formalizations of this NLP task in anticipation of
further growing interest in it.
| 2,018 | Computation and Language |
Character-Level Language Modeling with Deeper Self-Attention | LSTMs and other RNN variants have shown strong performance on character-level
language modeling. These models are typically trained using truncated
backpropagation through time, and it is common to assume that their success
stems from their ability to remember long-term contexts. In this paper, we show
that a deep (64-layer) transformer model with fixed context outperforms RNN
variants by a large margin, achieving state of the art on two popular
benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good
results at this depth, we show that it is important to add auxiliary losses,
both at intermediate network layers and intermediate sequence positions.
| 2,018 | Computation and Language |
Deep Learning Based Natural Language Processing for End to End Speech
Translation | Deep Learning methods employ multiple processing layers to learn hierarchical
representations of data. They have already been deployed in a huge number
of applications and have produced state-of-the-art results. Recently, as the
processing power of computers has grown to handle high-dimensional tensor
calculations, Natural Language Processing (NLP) applications have been
given a significant boost in terms of efficiency as well as accuracy. In this
paper, we take a look at various signal processing techniques and their
application to produce a speech-to-text system using Deep Recurrent
Neural Networks.
| 2,018 | Computation and Language |
Discrete Structural Planning for Neural Machine Translation | Structural planning is important for producing long sentences, but it is
missing from current language generation models. In this work, we add a
planning phase in neural machine translation to control the coarse structure of
output sentences. The model first generates some planner codes, then predicts
real output words conditioned on them. The codes are learned to capture the
coarse structure of the target sentence. In order to obtain the codes, we
design an end-to-end neural network with a discretization bottleneck, which
predicts the simplified part-of-speech tags of target sentences. Experiments
show that translation performance is generally improved by planning ahead.
We also find that translations with different structures can be obtained by
manipulating the planner codes.
| 2,018 | Computation and Language |
Explaining Queries over Web Tables to Non-Experts | Designing a reliable natural language (NL) interface for querying tables has
been a longtime goal of researchers in both the data management and natural
language processing (NLP) communities. Such an interface receives as input an
NL question, translates it into a formal query, executes the query and returns
the results. Errors in the translation process are not uncommon, and users
typically struggle to understand whether their query has been mapped correctly.
We address this problem by explaining the obtained formal queries to non-expert
users. Two methods for query explanations are presented: the first translates
queries into NL, while the second method provides a graphic representation of
the query cell-based provenance (in its execution on a given table). Our
solution augments a state-of-the-art NL interface over web tables, enhancing it
in both its training and deployment phase. Experiments, including a user study
conducted on Amazon Mechanical Turk, show our solution to improve both the
correctness and reliability of an NL interface.
| 2,018 | Computation and Language |
Primal Meaning Recommendation via On-line Encyclopedia | Polysemy is a very common phenomenon in modern languages. Under many
circumstances, there exists a primal meaning for the expression. We define the
primal meaning of an expression to be a frequently used sense of that
expression from which its other frequent senses can be deduced. Many newly
appearing meanings of expressions either originate from a primal
meaning or are merely literal references to the original expression, e.g.,
apple (fruit), Apple (Inc), and Apple (movie). When constructing a knowledge
base from on-line encyclopedia data, it would be more efficient to be aware of
the information about the importance of the senses. In this paper, we would
like to explore a way to automatically recommend the primal meaning of an
expression based on the textual descriptions of the multiple senses of an
expression from on-line encyclopedia websites. We propose a hybrid model that
captures both the pattern of the description and the relationship between
different descriptions with both weakly supervised and unsupervised models. The
experimental results show that our method yields good results, with a P@1
(precision) score of 83.3 per cent and a MAP (mean average precision) of 90.5
per cent, surpassing the UMFS-WE baseline by a large margin (P@1 is 61.1 per cent
and MAP is 76.3 per cent).
| 2,019 | Computation and Language |
R-grams: Unsupervised Learning of Semantic Units in Natural Language | This paper investigates data-driven segmentation using Re-Pair or Byte Pair
Encoding techniques. In contrast to previous work, which has primarily been
focused on subword units for machine translation, we are interested in the
general properties of such segments above the word level. We call these
segments r-grams, and discuss their properties and the effect they have on the
token frequency distribution. The proposed approach is evaluated by
demonstrating its viability in embedding techniques, both in monolingual and
multilingual test settings. We also provide a number of qualitative examples of
the proposed methodology, demonstrating its viability as a language-invariant
segmentation procedure.
| 2,019 | Computation and Language |
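The Byte Pair Encoding procedure underlying r-grams repeatedly merges the most frequent adjacent symbol pair in the corpus. A compact sketch of the vocabulary-learning loop (end-of-word handling and other details are simplified):

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a {symbol-sequence: frequency} vocab."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair with its concatenation.
    Note: plain str.replace can over-merge when one symbol ends with another;
    acceptable for a sketch, a careful implementation uses boundary-aware matching."""
    old = " ".join(pair)
    new = "".join(pair)
    return {word.replace(old, new): freq for word, freq in vocab.items()}

# Words stored as space-separated characters with their corpus frequency.
vocab = {"l o w": 5, "l o w e r": 2, "n e w e s t": 6, "w i d e s t": 3}
for _ in range(10):                   # number of merges = vocabulary-size budget
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
print(vocab)
```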
A Hassle-Free Machine Learning Method for Cohort Selection of Clinical
Trials | Traditional text classification techniques in the clinical domain have relied
heavily on manually extracted textual cues. This paper proposes a general
supervised machine learning method that is equally hassle-free and does not use
clinical knowledge. The employed methods were simple to implement, fast to run
and yet effective. This paper proposes a novel named entity recognition (NER)
based ensemble system capable of learning the keyword features in the
document. Instead of merely considering the whole sentence/paragraph for
analysis, the NER based keyword features can stress the important clinically
relevant phrases more. In addition, to capture the semantic information in the
documents, the FastText features originating from the document level FastText
classification results are exploited.
| 2,018 | Computation and Language |
Adversarial Neural Networks for Cross-lingual Sequence Tagging | We study cross-lingual sequence tagging with little or no labeled data in the
target language. Adversarial training has previously been shown to be effective
for training cross-lingual sentence classifiers. However, it is not clear if
language-agnostic representations enforced by an adversarial language
discriminator will also enable effective transfer for token-level prediction
tasks. Therefore, we experiment with different types of adversarial training on
two tasks: dependency parsing and sentence compression. We show that
adversarial training consistently leads to improved cross-lingual performance
on each task compared to a conventionally trained baseline.
| 2,018 | Computation and Language |
Retrieve and Refine: Improved Sequence Generation Models For Dialogue | Sequence generation models for dialogue are known to have several problems:
they tend to produce short, generic sentences that are uninformative and
unengaging. Retrieval models on the other hand can surface interesting
responses, but are restricted to the given retrieval set leading to erroneous
replies that cannot be tuned to the specific context. In this work we develop a
model that combines the two approaches to avoid both their deficiencies: first
retrieve a response and then refine it -- the final sequence generator treating
the retrieval as additional context. We show on the recent CONVAI2 challenge
task that our approach produces responses superior to both standard retrieval and
generation models in human evaluations.
| 2,018 | Computation and Language |
Classifier Ensembles for Dialect and Language Variety Identification | In this paper we present ensemble-based systems for dialect and language
variety identification using the datasets made available by the organizers of
the VarDial Evaluation Campaign 2018. We present a system developed to
discriminate between Flemish and Dutch in subtitles and a system trained to
discriminate between four Arabic dialects -- Egyptian, Levantine, Gulf, and North
African -- and Modern Standard Arabic in speech broadcasts. Finally, we compare
the performance of these two systems with the other systems submitted to the
Discriminating between Dutch and Flemish in Subtitles (DFS) and the Arabic
Dialect Identification (ADI) shared tasks at VarDial 2018.
| 2,018 | Computation and Language |
Jointly Identifying and Fixing Inconsistent Readings from Information
Extraction Systems | KGCleaner is a framework to identify and correct errors in data produced and
delivered by an information extraction system. These tasks have been
understudied and KGCleaner is the first to address both. We introduce a
multi-task model that jointly learns to predict if an extracted relation is
credible and repair it if not. We evaluate our approach and other models as
instances of our framework on two collections: a Wikidata corpus of nearly 700K
facts and 5M fact-relevant sentences, and a collection of 30K facts from the
2015 TAC Knowledge Base Population task. For credibility classification, a
parameter-efficient simple shallow neural network can achieve an absolute
performance gain of 30 $F_1$ points on Wikidata and comparable performance on
TAC. For the repair task, a significant performance gain (more than twofold)
can be obtained depending on the nature of the dataset and the models.
| 2,023 | Computation and Language |
Two Local Models for Neural Constituent Parsing | Non-local features have been exploited by syntactic parsers for capturing
dependencies between output substructures. Such features have been a key to
the success of state-of-the-art statistical parsers. With the rise of deep
learning, however, it has been shown that local output decisions can give
highly competitive accuracies, thanks to the power of dense neural input
representations that embody global syntactic information. We investigate two
conceptually simple local neural models for constituent parsing, which make
local decisions on constituent spans and CFG rules, respectively. Consistent
with previous findings along this line, our best model gives highly competitive
results, achieving the labeled bracketing F1 scores of 92.4% on PTB and 87.3%
on CTB 5.1.
| 2,018 | Computation and Language |
Top-Down Tree Structured Text Generation | Text generation is a fundamental building block in natural language
processing tasks. Existing sequential models perform autoregression directly
over the text sequence and have difficulty generating long sentences of complex
structures. This paper advocates a simple approach that treats sentence
generation as a tree-generation task. By explicitly modelling syntactic
structures in a constituent syntactic tree and performing top-down,
breadth-first tree generation, our model fixes dependencies appropriately and
performs implicit global planning. This is in contrast to a transition-based
depth-first generation process, which has difficulty dealing with incomplete
texts when parsing and also does not incorporate future contexts in planning.
Our preliminary results on two generation tasks and one parsing task
demonstrate that this is an effective strategy.
| 2,018 | Computation and Language |
Embedding Grammars | Classic grammars and regular expressions can be used for a variety of
purposes, including parsing, intent detection, and matching. However, the
comparisons are performed at a structural level, with constituent elements
(words or characters) matched exactly. Recent advances in word embeddings show
that semantically related words share common features in a vector-space
representation, suggesting the possibility of a hybrid grammar and word
embedding. In this paper, we blend the structure of standard context-free
grammars with the semantic generalization capabilities of word embeddings to
create hybrid semantic grammars. These semantic grammars generalize the
specific terminals used by the programmer to other words and phrases with
related meanings, allowing the construction of compact grammars that match an
entire region of the vector space rather than matching specific elements.
| 2,018 | Computation and Language |
Cross-Lingual Cross-Platform Rumor Verification Pivoting on Multimedia
Content | With the increasing popularity of smart devices, rumors with multimedia
content become more and more common on social networks. The multimedia
information usually makes rumors look more convincing. Therefore, finding an
automatic approach to verify rumors with multimedia content is a pressing task.
Previous rumor verification research only utilizes multimedia as input
features. We propose not to use the multimedia content but to find external
information on other news platforms pivoting on it. We introduce a new feature
set, cross-lingual cross-platform features, that leverages the semantic
similarity between the rumors and the external information. When implemented,
machine learning methods utilizing such features achieved state-of-the-art
rumor verification results.
| 2,018 | Computation and Language |
How Much Reading Does Reading Comprehension Require? A Critical
Investigation of Popular Benchmarks | Many recent papers address reading comprehension, where examples consist of
(question, passage, answer) tuples. Presumably, a model must combine
information from both questions and passages to predict corresponding answers.
However, despite intense interest in the topic, with hundreds of published
papers vying for leaderboard dominance, basic questions about the difficulty of
many popular benchmarks remain unanswered. In this paper, we establish sensible
baselines for the bAbI, SQuAD, CBT, CNN, and Who-did-What datasets, finding
that question- and passage-only models often perform surprisingly well. On $14$
out of $20$ bAbI tasks, passage-only models achieve greater than $50\%$
accuracy, sometimes matching the full model. Interestingly, while CBT provides
$20$-sentence stories, only the last is needed for comparably accurate
prediction. By comparison, SQuAD and CNN appear better-constructed.
| 2,018 | Computation and Language |
Folksonomication: Predicting Tags for Movies from Plot Synopses Using
Emotion Flow Encoded Neural Network | Folksonomy of movies covers a wide range of heterogeneous information about
movies, like the genre, plot structure, visual experiences, soundtracks,
metadata, and emotional experiences from watching a movie. Being able to
automatically generate or predict tags for movies can help recommendation
engines improve retrieval of similar movies, and help viewers know what to
expect from a movie in advance. In this work, we explore the problem of
creating tags for movies from plot synopses. We propose a novel neural network
model that merges information from synopses and emotion flows throughout the
plots to predict a set of tags for movies. We compare our system with multiple
baselines and find that the addition of emotion flows boosts the performance
of the network by learning ~18\% more tags than a traditional machine learning
system.
| 2,018 | Computation and Language |
Putting the Horse Before the Cart: A Generator-Evaluator Framework for
Question Generation from Text | Automatic question generation (QG) is a useful yet challenging task in NLP.
Recent neural network-based approaches represent the state-of-the-art in this
task. In this work, we attempt to strengthen them significantly by adopting a
holistic and novel generator-evaluator framework that directly optimizes
objectives that reward semantics and structure. The {\it generator} is a
sequence-to-sequence model that incorporates the {\it structure} and {\it
semantics} of the question being generated. The generator predicts an answer in
the passage that the question can pivot on. Employing the copy and coverage
mechanisms, it also acknowledges other contextually important (and possibly
rare) keywords in the passage that the question needs to conform to, while not
redundantly repeating words. The {\it evaluator} model evaluates and assigns a
reward to each predicted question based on its conformity to the {\it
structure} of ground-truth questions. We propose two novel QG-specific reward
functions for text conformity and answer conformity of the generated question.
The evaluator also employs structure-sensitive rewards based on evaluation
measures such as BLEU, GLEU, and ROUGE-L, which are suitable for QG. In
contrast, most of the previous works only optimize the cross-entropy loss,
which can induce inconsistencies between training (objective) and testing
(evaluation) measures. Our evaluation shows that our approach significantly
outperforms state-of-the-art systems on the widely-used SQuAD benchmark as per
both automatic and human evaluation.
| 2,019 | Computation and Language |
Multiple Character Embeddings for Chinese Word Segmentation | Chinese word segmentation (CWS) is often regarded as a character-based
sequence labeling task in most current works which have achieved great success
with the help of powerful neural networks. However, these works neglect an
important clue: Chinese characters incorporate both semantic and phonetic
meanings. In this paper, we introduce multiple character embeddings including
Pinyin Romanization and Wubi Input, both of which are easily accessible and
effective in depicting semantics of characters. We propose a novel shared
Bi-LSTM-CRF model to fuse linguistic features efficiently by sharing the LSTM
network during the training procedure. Extensive experiments on five corpora
show that extra embeddings help obtain a significant improvement in labeling
accuracy. Specifically, we achieve the state-of-the-art performance in AS and
CityU corpora with F1 scores of 96.9 and 97.3, respectively, without leveraging
any external lexical resources.
| 2,019 | Computation and Language |
Exploiting Deep Learning for Persian Sentiment Analysis | The rise of social media is enabling people to freely express their opinions
about products and services. The aim of sentiment analysis is to automatically
determine a subject's sentiment (e.g., positive, negative, or neutral) towards a
particular aspect such as a topic, product, movie, or news item. Deep learning has
recently emerged as a powerful machine learning technique to tackle the growing
demand for accurate sentiment analysis. However, limited work has been conducted
to apply deep learning algorithms to languages other than English, such as
Persian. In this work, two deep learning models (deep autoencoders and deep
convolutional neural networks (CNNs)) are developed and applied to a novel
Persian movie reviews dataset. The proposed deep learning models are analyzed
and compared with the state-of-the-art shallow multilayer perceptron (MLP)
based machine learning model. Simulation results demonstrate the enhanced
performance of deep learning over state-of-the-art MLP.
| 2,018 | Computation and Language |
SentiALG: Automated Corpus Annotation for Algerian Sentiment Analysis | Data annotation is an important but time-consuming and costly procedure. To
sort a text into two classes, the very first thing we need is a good annotation
guideline, establishing what is required to qualify for each class. In the
literature, the difficulties associated with appropriate data annotation have
been underestimated. In this paper, we present a novel approach to
automatically construct an annotated sentiment corpus for Algerian dialect (a
Maghrebi Arabic dialect). The construction of this corpus is based on an
Algerian sentiment lexicon that is also constructed automatically. The
presented work deals with the two widely used scripts on Arabic social media:
Arabic and Arabizi. The proposed approach automatically constructs a sentiment
corpus containing 8000 messages (where 4000 are dedicated to Arabic and 4000 to
Arabizi). The achieved F1-score is up to 72% and 78% for the Arabic and Arabizi
test sets, respectively. Ongoing work is aimed at integrating a transliteration
process for Arabizi messages to further improve the obtained results.
| 2,018 | Computation and Language |
Incorporating Consistency Verification into Neural Data-to-Document
Generation | Recent neural models for data-to-document generation have achieved remarkable
progress in producing fluent and informative texts. However, large proportions
of generated texts do not actually conform to the input data. To address this
issue, we propose a new training framework which attempts to verify the
consistency between the generated texts and the input data to guide the
training process. To measure the consistency, a relation extraction model is
applied to check information overlaps between the input data and the generated
texts. The non-differentiable consistency signal is optimized via reinforcement
learning. Experimental results on ROTOWIRE, a recently released challenging
dataset, show improvements from our framework on various metrics.
| 2,018 | Computation and Language |
Toward domain-invariant speech recognition via large scale training | Current state-of-the-art automatic speech recognition systems are trained to
work in specific `domains', defined based on factors like application, sampling
rate and codec. When such recognizers are used in conditions that do not match
the training domain, performance significantly drops. This work explores the
idea of building a single domain-invariant model for varied use-cases by
combining large scale training data from multiple application domains. Our
final system is trained using 162,000 hours of speech. Additionally, each
utterance is artificially distorted during training to simulate effects like
background noise, codec distortion, and sampling rates. Our results show that,
even at such a scale, a model thus trained works almost as well as those
fine-tuned to specific subsets: A single model can be robust to multiple
application domains, and variations like codecs and noise. More importantly,
such models generalize better to unseen conditions and allow for rapid
adaptation -- we show that by using as little as 10 hours of data from a new
domain, an adapted domain-invariant model can match performance of a
domain-specific model trained from scratch using 70 times as much data. We also
highlight some of the limitations of such models and areas that need addressing
in future work.
| 2,019 | Computation and Language |
SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense
Inference | Given a partial description like "she opened the hood of the car," humans can
reason about the situation and anticipate what might come next ("then, she
examined the engine"). In this paper, we introduce the task of grounded
commonsense inference, unifying natural language inference and commonsense
reasoning.
We present SWAG, a new dataset with 113k multiple choice questions about a
rich spectrum of grounded situations. To address the recurring challenges of
the annotation artifacts and human biases found in many existing datasets, we
propose Adversarial Filtering (AF), a novel procedure that constructs a
de-biased dataset by iteratively training an ensemble of stylistic classifiers,
and using them to filter the data. To account for the aggressive adversarial
filtering, we use state-of-the-art language models to massively oversample a
diverse set of potential counterfactuals. Empirical results demonstrate that
while humans can solve the resulting inference problems with high accuracy
(88%), various competitive models struggle on our task. We provide
comprehensive analysis that indicates significant opportunities for future
research.
| 2,018 | Computation and Language |
Computing Word Classes Using Spectral Clustering | Clustering a lexicon of words is a well-studied problem in natural language
processing (NLP). Word clusters are used to deal with sparse data in
statistical language processing, as well as features for solving various NLP
tasks (text categorization, question answering, named entity recognition and
others).
Spectral clustering is a widely used technique in the field of image
processing and speech recognition. However, it has scarcely been explored in
the context of NLP; specifically, the method used in this work (Meila and Shi,
2001) has never been used to cluster a general word lexicon.
We apply spectral clustering to a lexicon of words, evaluating the resulting
clusters by using them as features for solving two classical NLP tasks:
semantic role labeling and dependency parsing. We compare performance with
Brown clustering, a widely-used technique for word clustering, as well as with
other clustering methods. We show that spectral clusters produce similar
results to Brown clusters, and outperform other clustering methods. In
addition, we quantify the overlap between spectral and Brown clusters, showing
that each model captures some information which is uncaptured by the other.
| 2,018 | Computation and Language |
Sememe Prediction: Learning Semantic Knowledge from Unstructured Textual
Wiki Descriptions | Huge numbers of new words emerge every day, leading to a great need for
representing them with semantic meaning that is understandable to NLP systems.
Sememes are defined as the minimum semantic units of human languages, the
combination of which can represent the meaning of a word. Manual construction
of sememe based knowledge bases is time-consuming and labor-intensive.
Fortunately, communities are devoted to composing the descriptions of words in
the wiki websites. In this paper, we explore automatically predicting lexical
sememes based on the descriptions of the words in the wiki websites. We view
this problem as a weakly ordered multi-label task and propose a Label
Distributed seq2seq model (LD-seq2seq) with a novel soft loss function to solve
the problem. In the experiments, we take a real-world sememe knowledge base
HowNet and the corresponding descriptions of the words in Baidu Wiki for
training and evaluation. The results show that our LD-seq2seq model not only
beats all the baselines significantly on the test set, but also outperforms
amateur human annotators in a random subset of the test set.
| 2,018 | Computation and Language |
Linguistic data mining with complex networks: a stylometric-oriented
approach | By representing a text by a set of words and their co-occurrences, one
obtains a word-adjacency network, a reduced representation of a given
language sample. In this paper, the possibility of using network representation
to extract information about individual language styles of literary texts is
studied. By determining selected quantitative characteristics of the networks
and applying machine learning algorithms, it is possible to distinguish between
texts of different authors. Within the studied set of English and Polish texts,
properly rescaled weighted clustering coefficients and weighted
degrees of only a few nodes in the word-adjacency networks are sufficient to
obtain an authorship attribution accuracy of over 90%. A correspondence between
the text authorship and the word-adjacency network structure can therefore be
found. The network representation makes it possible to distinguish individual language
styles by comparing the way the authors use particular words and punctuation
marks. The presented approach can be viewed as a generalization of the
authorship attribution methods based on simple lexical features.
Additionally, other network parameters are studied, both local and global
ones, for both the unweighted and weighted networks. Their potential to capture
the writing style diversity is discussed; some differences between languages
are observed.
| 2,019 | Computation and Language |
Paraphrase Thought: Sentence Embedding Module Imitating Human Language
Recognition | Sentence embedding is an important research topic in natural language
processing. It is essential to generate a good embedding vector that fully
reflects the semantic meaning of a sentence in order to achieve an enhanced
performance for various natural language processing tasks, such as machine
translation and document classification. Thus far, various sentence embedding
models have been proposed, and their feasibility has been demonstrated through
good performances on tasks following embedding, such as sentiment analysis and
sentence classification. However, because the performances of sentence
classification and sentiment analysis can be enhanced by using a simple
sentence representation method, it is not sufficient to claim that these models
fully reflect the meanings of sentences based on good performances for such
tasks. In this paper, inspired by human language recognition, we propose the
following concept of semantic coherence, which should be satisfied for a good
sentence embedding method: similar sentences should be located close to each
other in the embedding space. Then, we propose the Paraphrase-Thought
(P-thought) model to pursue semantic coherence as much as possible.
Experimental results on two paraphrase identification datasets (MS COCO and STS
benchmark) show that the P-thought models outperform the benchmarked sentence
embedding methods.
| 2,018 | Computation and Language |
Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and
Verification of Political Claims. Task 1: Check-Worthiness | We present an overview of the CLEF-2018 CheckThat! Lab on Automatic
Identification and Verification of Political Claims, with focus on Task 1:
Check-Worthiness. The task asks to predict which claims in a political debate
should be prioritized for fact-checking. In particular, given a debate or a
political speech, the goal was to produce a ranked list of its sentences based
on their worthiness for fact checking. We offered the task in both English and
Arabic, based on debates from the 2016 US Presidential Campaign, as well as on
some speeches during and after the campaign. A total of 30 teams registered to
participate in the Lab and seven teams actually submitted systems for Task~1.
The most successful approaches used by the participants relied on recurrent and
multi-layer neural networks, as well as on combinations of distributional
representations, on matching claims' vocabulary against lexicons, and on
measures of syntactic dependency. The best systems achieved mean average
precision of 0.18 and 0.15 on the English and on the Arabic test datasets,
respectively. This leaves large room for further improvement, and thus we
release all datasets and the scoring scripts, which should enable further
research in check-worthiness estimation.
| 2,018 | Computation and Language |
Improving Conditional Sequence Generative Adversarial Networks by
Stepwise Evaluation | Sequence generative adversarial networks (SeqGAN) have been used to improve
conditional sequence generation tasks, for example, chit-chat dialogue
generation. To stabilize the training of SeqGAN, Monte Carlo tree search (MCTS)
or reward at every generation step (REGS) is used to evaluate the goodness of a
generated subsequence. MCTS is computationally intensive, but the performance
of REGS is worse than that of MCTS. In this paper, we propose stepwise GAN (StepGAN),
in which the discriminator is modified to automatically assign scores
quantifying the goodness of each subsequence at every generation step. StepGAN
has a significantly lower computational cost than MCTS. We demonstrate that
StepGAN outperforms previous GAN-based methods on both a synthetic experiment and
chit-chat dialogue generation.
| 2,019 | Computation and Language |
Learning Graph Embeddings from WordNet-based Similarity Measures | We present path2vec, a new approach for learning graph embeddings that relies
on structural measures of pairwise node similarities. The model learns
representations for nodes in a dense space that approximate a given
user-defined graph distance measure, such as the shortest path distance or
distance measures that take information beyond the graph structure into
account. Evaluation of the proposed model on semantic similarity and word sense
disambiguation tasks, using various WordNet-based similarity measures, shows
that our approach yields competitive results, outperforming strong graph
embedding baselines. The model is computationally efficient, being orders of
magnitude faster than the direct computation of graph-based distances.
| 2,019 | Computation and Language |
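The core idea above, learning dense node vectors whose pairwise dot products approximate a user-defined graph similarity, can be sketched as plain squared-error gradient descent. An illustrative toy, not the path2vec implementation:

```python
import numpy as np

def fit_graph_embeddings(pairs, sims, n_nodes, dim=32, lr=0.05, epochs=200, seed=0):
    """Learn vectors so that emb[i] . emb[j] approximates the given similarity."""
    rng = np.random.default_rng(seed)
    emb = rng.normal(scale=0.1, size=(n_nodes, dim))
    for _ in range(epochs):
        for (i, j), s in zip(pairs, sims):
            err = emb[i] @ emb[j] - s        # squared-error residual
            gi, gj = err * emb[j], err * emb[i]
            emb[i] -= lr * gi
            emb[j] -= lr * gj
    return emb

# Toy usage: pairs of node ids with a precomputed WordNet-style similarity.
pairs = [(0, 1), (1, 2), (0, 2)]
sims = [0.9, 0.7, 0.2]
emb = fit_graph_embeddings(pairs, sims, n_nodes=3, dim=8)
print(emb[0] @ emb[1], emb[0] @ emb[2])     # the first should end up larger
```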
Predicting Human Trustfulness from Facebook Language | Trustfulness -- one's general tendency to have confidence in unknown people
or situations -- predicts many important real-world outcomes such as mental
health and likelihood to cooperate with others such as clinicians. While
data-driven measures of interpersonal trust have previously been introduced,
here, we develop the first language-based assessment of the personality trait
of trustfulness by fitting one's language to an accepted questionnaire-based
trust score. Further, using trustfulness as a type of case study, we explore
the role of questionnaire size as well as word count in developing
language-based predictive models of users' psychological traits. We find that
leveraging a longer questionnaire can yield greater test set accuracy, while,
for training, we find it beneficial to include users who took smaller
questionnaires, which offers more observations for training. Similarly, after
noting a decrease in individual prediction error as word count increased, we
found a word count-weighted training scheme was helpful when there were very
few users in the first place.
| 2,018 | Computation and Language |
Deep Bayesian Active Learning for Natural Language Processing: Results
of a Large-Scale Empirical Study | Several recent papers investigate Active Learning (AL) for mitigating the
data dependence of deep learning for natural language processing. However, the
applicability of AL to real-world problems remains an open question. While in
supervised learning, practitioners can try many different methods, evaluating
each against a validation set before selecting a model, AL affords no such
luxury. Over the course of one AL run, an agent annotates its dataset,
exhausting its labeling budget. Thus, given a new task, an active learner has
no opportunity to compare models and acquisition functions. This paper provides
a large scale empirical study of deep active learning, addressing multiple
tasks and, for each, multiple datasets, multiple models, and a full suite of
acquisition functions. We find that across all settings, Bayesian active
learning by disagreement, using uncertainty estimates provided either by
Dropout or Bayes-by-Backprop, significantly improves over i.i.d. baselines and
usually outperforms classic uncertainty sampling.
| 2,018 | Computation and Language |
Augmenting Statistical Machine Translation with Subword Translation of
Out-of-Vocabulary Words | Most statistical machine translation systems cannot translate words that are
unseen in the training data. However, humans can translate many classes of
out-of-vocabulary (OOV) words (e.g., novel morphological variants,
misspellings, and compounds) without context by using orthographic clues.
Following this observation, we describe and evaluate several general methods
for OOV translation that use only subword information. We pose the OOV
translation problem as a standalone task and intrinsically evaluate our
approaches on fourteen typologically diverse languages across varying resource
levels. Adding OOV translators to a statistical machine translation system
yields consistent BLEU gains (0.5 points on average, and up to 2.0) for all
fourteen languages, especially in low-resource scenarios.
| 2,018 | Computation and Language |
Read + Verify: Machine Reading Comprehension with Unanswerable Questions | Machine reading comprehension with unanswerable questions aims to abstain
from answering when no answer can be inferred. In addition to extracting answers,
previous works usually predict an additional "no-answer" probability to detect
unanswerable cases. However, they fail to validate the answerability of the
question by verifying the legitimacy of the predicted answer. To address this
problem, we propose a novel read-then-verify system, which not only utilizes a
neural reader to extract candidate answers and produce no-answer probabilities,
but also leverages an answer verifier to decide whether the predicted answer is
entailed by the input snippets. Moreover, we introduce two auxiliary losses to
help the reader better handle answer extraction as well as no-answer detection,
and investigate three different architectures for the answer verifier. Our
experiments on the SQuAD 2.0 dataset show that our system achieves a score of
74.2 F1 on the test set, achieving state-of-the-art results at the time of
submission (Aug. 28th, 2018).
| 2,018 | Computation and Language |
Story Disambiguation: Tracking Evolving News Stories across News and
Social Streams | Following a particular news story online is an important but difficult task,
as the relevant information is often scattered across different domains/sources
(e.g., news articles, blogs, comments, tweets), presented in various formats
and language styles, and may overlap with thousands of other stories. In this
work we join the areas of topic tracking and entity disambiguation, and propose
a framework named Story Disambiguation - a cross-domain story tracking approach
that builds on real-time entity disambiguation and a learning-to-rank framework
to represent and update the rich semantic structure of news stories. Given a
target news story, specified by a seed set of documents, the goal is to
effectively select new story-relevant documents from an incoming document
stream. We represent stories as entity graphs and we model the story tracking
problem as a learning-to-rank task. This enables us to track content with high
accuracy, from multiple domains, in real-time. We study a range of text, entity
and graph based features to understand which type of features are most
effective for representing stories. We further propose new semi-supervised
learning techniques to automatically update the story representation over time.
Our empirical study shows that we outperform the accuracy of state-of-the-art
methods for tracking mixed-domain document streams, while requiring fewer
labeled data to seed the tracked stories. This is particularly the case for
local news stories that are easily overshadowed by other trending stories, and
for complex news stories with ambiguous content in noisy stream environments.
| 2,018 | Computation and Language |
Syntree2Vec - An algorithm to augment syntactic hierarchy into word
embeddings | Word embeddings aim to map the senses of words into a lower-dimensional
vector space in order to reason over them. Training embeddings on
domain-specific data helps express concepts more relevant to their use case,
but comes at a cost in accuracy when data is scarce. Our effort is to minimise
this by infusing syntactic knowledge into the embeddings. We propose a graph-based
embedding algorithm inspired by node2vec. Experimental results show
that our algorithm improves syntactic strength and gives robust performance
on meagre data.
| 2,018 | Computation and Language |
Improved Language Modeling by Decoding the Past | Highly regularized LSTMs achieve impressive results on several benchmark
datasets in language modeling. We propose a new regularization method based on
decoding the last token in the context using the predicted distribution of the
next token. This biases the model towards retaining more contextual
information, in turn improving its ability to predict the next token. With
negligible overhead in the number of parameters and training time, our Past
Decode Regularization (PDR) method achieves a word level perplexity of 55.6 on
the Penn Treebank and 63.5 on the WikiText-2 datasets using a single softmax.
We also show gains by using PDR in combination with a mixture-of-softmaxes,
achieving a word level perplexity of 53.8 and 60.5 on these datasets. In
addition, our method achieves 1.169 bits-per-character on the Penn Treebank
Character dataset for character level language modeling. These results
constitute a new state-of-the-art in their respective settings.
| 2,019 | Computation and Language |
SeVeN: Augmenting Word Embeddings with Unsupervised Relation Vectors | We present SeVeN (Semantic Vector Networks), a hybrid resource that encodes
relationships between words in the form of a graph. Different from traditional
semantic networks, these relations are represented as vectors in a continuous
vector space. We propose a simple pipeline for learning such relation vectors,
which is based on word vector averaging in combination with an ad hoc
autoencoder. We show that by explicitly encoding relational information in a
dedicated vector space we can capture aspects of word meaning that are
complementary to what is captured by word embeddings. For example, by examining
clusters of relation vectors, we observe that relational similarities can be
identified at a more abstract level than with traditional word vector
differences. Finally, we test the effectiveness of semantic vector networks in
two tasks: measuring word similarity and neural text categorization. SeVeN is
available at bitbucket.org/luisespinosa/seven.
| 2,018 | Computation and Language |
Learning to Compose over Tree Structures via POS Tags | Recursive Neural Network (RecNN), a type of models which compose words or
phrases recursively over syntactic tree structures, has been proven to have
superior ability to obtain sentence representation for a variety of NLP tasks.
However, RecNN suffers from a thorny problem: a shared compositional
function for every tree node cannot capture complex semantic
compositionality, which limits the expressive power of the model. In this
paper, in order to address this problem, we propose Tag-Guided
HyperRecNN/TreeLSTM (TG-HRecNN/TreeLSTM), which introduces hypernetwork into
RecNNs to take as inputs Part-of-Speech (POS) tags of word/phrase and generate
the semantic composition parameters dynamically. Experimental results on five
datasets for two typical NLP tasks show that both proposed models consistently
obtain significant improvements over RecNN and TreeLSTM. Our TG-HTreeLSTM
outperforms all existing RecNN-based models and achieves or is competitive with
state-of-the-art on four sentence classification benchmarks. The effectiveness
of our models is also demonstrated by qualitative analysis.
| 2,018 | Computation and Language |
Emoji Sentiment Scores of Writers using Odds Ratio and Fisher Exact Test | The sentiment of a given emoji is traditionally calculated by averaging the
ratings {-1, 0 or +1} given by various users to a given context where the emoji
appears. However, using such a formula complicates the statistical significance
analysis particularly for low sample sizes. Here, we provide sentiment scores
using odds and a sentiment mapping to a 4-icon scale. We show how odds ratio
statistics leads to simpler sentiment analysis. Finally, we provide a list of
sentiment scores with the often-missing exact p-values and CI for the most
common emoji.
| 2,018 | Computation and Language |
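The odds-ratio approach with exact p-values described above maps directly onto a Fisher exact test on a 2x2 table of positive/negative occurrence counts, e.g. via SciPy. A minimal sketch with made-up counts:

```python
from scipy.stats import fisher_exact

# 2x2 contingency table for one emoji:
# rows = contexts containing / not containing the emoji,
# cols = positive / negative ratings by annotators.
table = [[120, 30],    # with emoji:    120 positive, 30 negative  (made-up counts)
         [400, 450]]   # without emoji: 400 positive, 450 negative

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, exact p-value = {p_value:.3g}")
# An odds ratio well above 1 with a small p-value suggests the emoji skews positive.
```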
A Recipe for Arabic-English Neural Machine Translation | In this paper, we present a recipe for building a good Arabic-English neural
machine translation system. We compare neural systems with traditional phrase-based
systems using various parallel corpora including UN, ISI and Ummah. We also
investigate the importance of special preprocessing of the Arabic script. The
presented results are based on test sets from NIST MT 2005 and 2012. The best
neural system produces a gain of +13 BLEU points compared to an equivalent
simple phrase-based system on the NIST MT12 test set. Unexpectedly, we find that
tuning a model trained on the whole data using a small high quality corpus like
Ummah gives a substantial improvement (+3 BLEU points). We also find that
training a neural system with a small Arabic-English corpus is competitive with a
traditional phrase-based system.
| 2,018 | Computation and Language |
Hierarchical Neural Networks for Sequential Sentence Classification in
Medical Scientific Abstracts | Prevalent models based on artificial neural networks (ANNs) for sentence
classification often classify sentences in isolation without considering the
context in which sentences appear. This hampers the traditional sentence
classification approaches to the problem of sequential sentence classification,
where structured prediction is needed for better overall classification
performance. In this work, we present a hierarchical sequential labeling
network to make use of the contextual information within surrounding sentences
to help classify the current sentence. Our model outperforms the
state-of-the-art results by 2%-3% on two benchmarking datasets for sequential
sentence classification in medical scientific abstracts.
| 2,018 | Computation and Language |
Source-Critical Reinforcement Learning for Transferring Spoken Language
Understanding to a New Language | To deploy a spoken language understanding (SLU) model to a new language,
language transfer is desirable to avoid the trouble of acquiring and labeling
a large new SLU corpus. Translating the original SLU corpus into the target
language is an attractive strategy. However, SLU corpora contain plenty of
semantic labels (slots), which general-purpose translators cannot handle well,
not to mention additional cultural differences. This paper focuses on the
language transfer task given a tiny in-domain parallel SLU corpus. The
in-domain parallel corpus can be used for a first adaptation of the general
translator. More importantly, we show how to use reinforcement learning
(RL) to further finetune the adapted translator, where translated sentences
with more proper slot tags receive higher rewards. We evaluate our approach on
Chinese to English language transferring for SLU systems. The experimental
results show that the generated English SLU corpus via adaptation and
reinforcement learning gives us over 97% in the slot F1 score and over 84%
accuracy in domain classification, demonstrating the effectiveness of the
proposed language transfer method. Compared with naive translation, our
proposed method improves domain classification accuracy by a relative 22%, and
the slot filling F1 score by a relative margin of more than 71%.
| 2,018 | Computation and Language |
Linked Recurrent Neural Networks | Recurrent Neural Networks (RNNs) have been proven to be effective in modeling
sequential data and they have been applied to boost a variety of tasks such as
document classification, speech recognition and machine translation. Most
existing RNN models have been designed for sequences assumed to be independently
and identically distributed (i.i.d.). However, in many real-world
applications, sequences are naturally linked. For example, web documents are
connected by hyperlinks; and genes interact with each other. On the one hand,
linked sequences are inherently not i.i.d., which poses tremendous challenges
to existing RNN models. On the other hand, linked sequences offer link
information in addition to the sequential information, which enables
unprecedented opportunities to build advanced RNN models. In this paper, we
study the problem of RNN for linked sequences. In particular, we introduce a
principled approach to capture link information and propose a linked Recurrent
Neural Network (LinkedRNN), which models sequential and link information
coherently. We conduct experiments on real-world datasets from multiple domains
and the experimental results validate the effectiveness of the proposed
framework.
| 2,018 | Computation and Language |
Adapting the Neural Encoder-Decoder Framework from Single to
Multi-Document Summarization | Generating a text abstract from a set of documents remains a challenging
task. The neural encoder-decoder framework has recently been exploited to
summarize single documents, but its success can in part be attributed to the
availability of large parallel data automatically acquired from the Web. In
contrast, parallel data for multi-document summarization are scarce and costly
to obtain. There is a pressing need to adapt an encoder-decoder model trained
on single-document summarization data to work with multiple-document input. In
this paper, we present an initial investigation into a novel adaptation method.
It exploits the maximal marginal relevance method to select representative
sentences from multi-document input, and leverages an abstractive
encoder-decoder model to fuse disparate sentences into an abstractive summary.
The adaptation method is robust and itself requires no training data. Our
system compares favorably to state-of-the-art extractive and abstractive
approaches judged by automatic metrics and human assessors.
| 2,018 | Computation and Language |
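The adaptation described above centers on maximal marginal relevance (MMR) sentence selection. As a rough illustration only, the sketch below implements generic MMR over TF-IDF vectors; the vectorizer, the query construction, and the lambda value are assumptions, not the paper's configuration.

```python
# Hedged sketch of maximal marginal relevance (MMR) sentence selection.
# Assumes sentences and the query are represented as TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mmr_select(sentences, query, k=10, lam=0.7):
    """Greedily pick k sentences balancing query relevance and novelty."""
    vec = TfidfVectorizer().fit(sentences + [query])
    S = vec.transform(sentences)           # sentence vectors
    q = vec.transform([query])             # query, e.g., concatenated titles
    rel = cosine_similarity(S, q).ravel()  # relevance to the query
    sim = cosine_similarity(S)             # sentence-sentence similarity
    selected, candidates = [], list(range(len(sentences)))
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max(sim[i, j] for j in selected) if selected else 0.0
            return lam * rel[i] - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return [sentences[i] for i in selected]
```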
Automatic Detection of Vague Words and Sentences in Privacy Policies | Website privacy policies represent the single most important source of
information for users to gauge how their personal data are collected, used and
shared by companies. However, privacy policies are often vague and people
struggle to understand the content. Their opaqueness poses a significant
challenge to both users and policy regulators. In this paper, we seek to
identify vague content in privacy policies. We construct the first corpus of
human-annotated vague words and sentences and present empirical studies on
automatic vagueness detection. In particular, we investigate context-aware and
context-agnostic models for predicting vague words, and explore
auxiliary-classifier generative adversarial networks for characterizing
sentence vagueness. Our experimental results demonstrate the effectiveness of
the proposed approaches. Finally, we provide suggestions for resolving vagueness
and improving the usability of privacy policies.
| 2,018 | Computation and Language |
SentencePiece: A simple and language independent subword tokenizer and
detokenizer for Neural Text Processing | This paper describes SentencePiece, a language-independent subword tokenizer
and detokenizer designed for Neural-based text processing, including Neural
Machine Translation. It provides open-source C++ and Python implementations for
subword units. While existing subword segmentation tools assume that the input
is pre-tokenized into word sequences, SentencePiece can train subword models
directly from raw sentences, which allows us to make a purely end-to-end and
language independent system. We perform a validation experiment of NMT on
English-Japanese machine translation, and find that it is possible to achieve
comparable accuracy to direct subword training from raw sentences. We also
compare the performance of subword training and segmentation with various
configurations. SentencePiece is available under the Apache 2 license at
https://github.com/google/sentencepiece.
| 2,018 | Computation and Language |
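Since SentencePiece trains directly on raw text, a typical use of its open-source Python bindings looks roughly like the following; the corpus path and vocabulary size are placeholders.

```python
import sentencepiece as spm

# Train a subword model directly on raw (untokenized) text;
# "corpus.txt" is a placeholder path with one sentence per line.
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="m", vocab_size=8000, model_type="unigram"
)

sp = spm.SentencePieceProcessor(model_file="m.model")
pieces = sp.encode("Hello world.", out_type=str)  # subword pieces
ids = sp.encode("Hello world.", out_type=int)     # integer ids
text = sp.decode(ids)                             # lossless detokenization
```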
Lexicosyntactic Inference in Neural Models | We investigate neural models' ability to capture lexicosyntactic inferences:
inferences triggered by the interaction of lexical and syntactic information.
We take the task of event factuality prediction as a case study and build a
factuality judgment dataset for all English clause-embedding verbs in various
syntactic contexts. We use this dataset, which we make publicly available, to
probe the behavior of current state-of-the-art neural systems, showing that
these systems make certain systematic errors that are clearly visible through
the lens of factuality prediction.
| 2,018 | Computation and Language |
XL-NBT: A Cross-lingual Neural Belief Tracking Framework | Task-oriented dialog systems are becoming pervasive, and many companies
heavily rely on them to complement human agents for customer service in call
centers. With globalization, the need for providing cross-lingual customer
support becomes more urgent than ever. However, cross-lingual support poses
great challenges---it requires a large amount of additional annotated data from
native speakers. In order to bypass the expensive human annotation and achieve
the first step towards the ultimate goal of building a universal dialog system,
we set out to build a cross-lingual state tracking framework. Specifically, we
assume that there exists a source language with dialog belief tracking
annotations while the target languages have no annotated dialog data of any
form. Then, we pre-train a state tracker for the source language as a teacher,
which is able to exploit easy-to-access parallel data. We then distill and
transfer its own knowledge to the student state tracker in target languages. We
specifically discuss two types of common parallel resources: bilingual corpus
and bilingual dictionary, and design different transfer learning strategies
accordingly. Experimentally, we successfully use the English state tracker as the
teacher to transfer its knowledge to both Italian and German trackers and
achieve promising results.
| 2,018 | Computation and Language |
Neural Machine Translation of Text from Non-Native Speakers | Neural Machine Translation (NMT) systems are known to degrade when confronted
with noisy data, especially when the system is trained only on clean data. In
this paper, we show that augmenting training data with sentences containing
artificially-introduced grammatical errors can make the system more robust to
such errors. In combination with an automatic grammar error correction system,
we can recover 1.5 BLEU out of 2.4 BLEU lost due to grammatical errors. We also
present a set of Spanish translations of the JFLEG grammar error correction
corpus, which allows for testing NMT robustness to real grammatical errors.
| 2,019 | Computation and Language |
Multi-Perspective Context Aggregation for Semi-supervised Cloze-style
Reading Comprehension | Cloze-style reading comprehension has been a popular task for measuring the
progress of natural language understanding in recent years. In this paper, we
design a novel multi-perspective framework, which can be seen as the joint
training of heterogeneous experts to aggregate context information from
different perspectives. Each perspective is modeled by a simple aggregation
module. The outputs of multiple aggregation modules are fed into a one-timestep
pointer network to get the final answer. At the same time, to tackle the
problem of insufficient labeled data, we propose an efficient sampling
mechanism to automatically generate more training examples by matching the
distribution of candidates between labeled and unlabeled data. We conduct our
experiments on a recently released cloze-test dataset CLOTH (Xie et al., 2017),
which consists of nearly 100k questions designed by professional teachers.
Results show that our method achieves new state-of-the-art performance over
previous strong baselines.
| 2,018 | Computation and Language |
Question Generation from SQL Queries Improves Neural Semantic Parsing | We study how to learn a semantic parser of state-of-the-art accuracy with
less supervised training data. We conduct our study on WikiSQL, the largest
hand-annotated semantic parsing dataset to date. First, we demonstrate that
question generation is an effective method that empowers us to learn a
state-of-the-art neural network based semantic parser with thirty percent of
the supervised training data. Second, we show that applying question generation
to the full supervised training data further improves the state-of-the-art
model. In addition, we observe that there is a logarithmic relationship between
the accuracy of a semantic parser and the amount of training data.
| 2,018 | Computation and Language |
Post-Processing of Word Representations via Variance Normalization and
Dynamic Embedding | Although embedded vector representations of words offer impressive
performance on many natural language processing (NLP) applications, the
information of ordered input sequences is lost to some extent if only
context-based samples are used in the training. For further performance
improvement, two new post-processing techniques, called post-processing via
variance normalization (PVN) and post-processing via dynamic embedding (PDE),
are proposed in this work. The PVN method normalizes the variance of principal
components of word vectors while the PDE method learns orthogonal latent
variables from ordered input sequences. The PVN and the PDE methods can be
integrated to achieve better performance. We apply these post-processing
techniques to two popular word embedding methods (i.e., word2vec and GloVe) to
yield their post-processed representations. Extensive experiments are conducted
to demonstrate the effectiveness of the proposed post-processing techniques.
| 2,019 | Computation and Language |
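As a loose illustration of the PVN idea, the sketch below rescales dominant principal components of an embedding matrix so that no component's variance exceeds that of a chosen reference component; the exact normalization rule in the paper may differ.

```python
# Minimal sketch of post-processing via variance normalization (PVN), under
# the assumption that "normalizing the variance of principal components"
# means shrinking the dominant components toward the variance of the
# (d+1)-th one; the paper's precise recipe may differ.
import numpy as np

def pvn(embeddings, d=10):
    X = embeddings - embeddings.mean(axis=0)       # center the vectors
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    sigma = s ** 2 / (X.shape[0] - 1)              # per-component variance
    cap = sigma[d]                                 # reference variance
    scale = np.sqrt(np.minimum(1.0, cap / sigma))  # shrink only dominant ones
    return (U * (s * scale)) @ Vt                  # reassemble embeddings
```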
State-of-the-art Chinese Word Segmentation with Bi-LSTMs | A wide variety of neural-network architectures have been proposed for the
task of Chinese word segmentation.
Surprisingly, we find that a bidirectional LSTM model, when combined with
standard deep learning techniques and best practices, can achieve better
accuracy on many of the popular datasets as compared to models based on more
complex neural-network architectures.
Furthermore, our error analysis shows that out-of-vocabulary words remain
challenging for neural-network models, and many of the remaining errors are
unlikely to be fixed through architecture changes.
Instead, more effort should be made on exploring resources for further
improvement.
| 2,018 | Computation and Language |
Adaptive Document Retrieval for Deep Question Answering | State-of-the-art systems in deep question answering proceed as follows: (1)
an initial document retrieval selects relevant documents, which (2) are then
processed by a neural network in order to extract the final answer. Yet the
exact interplay between both components is poorly understood, especially
concerning the number of candidate documents that should be retrieved. We show
that choosing a static number of documents -- as used in prior research --
suffers from a noise-information trade-off and yields suboptimal results. As a
remedy, we propose an adaptive document retrieval model. This learns the
optimal candidate number for document retrieval, conditional on the size of the
corpus and the query. We report extensive experimental results showing that our
adaptive approach outperforms state-of-the-art methods on multiple benchmark
datasets, as well as in the context of corpora with variable sizes.
| 2,018 | Computation and Language |
Detecting cognitive impairments by agreeing on interpretations of
linguistic features | Linguistic features have shown promising applications for detecting various
cognitive impairments. To improve detection accuracies, increasing the amount
of data or the number of linguistic features have been two applicable
approaches. However, acquiring additional clinical data can be expensive, and
hand-crafting features is burdensome. In this paper, we take a third approach,
proposing Consensus Networks (CNs), a framework to classify after reaching
agreements between modalities. We divide linguistic features into
non-overlapping subsets according to their modalities, and let neural networks
learn low-dimensional representations that agree with each other. These
representations are passed into a classifier network. All neural networks are
optimized iteratively.
In this paper, we also present two methods that improve the performance of
CNs. We then present ablation studies to illustrate the effectiveness of
modality division. To understand further what happens in CNs, we visualize the
representations during training. Overall, using all of the 413 linguistic
features, our models significantly outperform traditional classifiers, which
are used in state-of-the-art papers.
| 2,019 | Computation and Language |
Adversarial Removal of Demographic Attributes from Text Data | Recent advances in Representation Learning and Adversarial Training seem to
succeed in removing unwanted features from the learned representation. We show
that demographic information of authors is encoded in -- and can be recovered
from -- the intermediate representations learned by text-based neural
classifiers. The implication is that decisions of classifiers trained on
textual data are not agnostic to -- and likely condition on -- demographic
attributes. When attempting to remove such demographic information using
adversarial training, we find that while the adversarial component achieves
chance-level development-set accuracy during training, a post-hoc classifier,
trained on the encoded sentences from the first part, still manages to reach
substantially higher classification accuracies on the same data. This behavior
is consistent across several tasks, demographic properties and datasets. We
explore several techniques to improve the effectiveness of the adversarial
component. Our main conclusion is a cautionary one: do not rely on
adversarial training to achieve representations invariant to sensitive features.
| 2,018 | Computation and Language |
Watset: Local-Global Graph Clustering with Applications in Sense and
Frame Induction | We present a detailed theoretical and computational analysis of the Watset
meta-algorithm for fuzzy graph clustering, which has been found to be widely
applicable in a variety of domains. This algorithm creates an intermediate
representation of the input graph that reflects the "ambiguity" of its nodes.
Then, it uses hard clustering to discover clusters in this "disambiguated"
intermediate graph. After outlining the approach and analyzing its
computational complexity, we demonstrate that Watset shows competitive results
in three applications: unsupervised synset induction from a synonymy graph,
unsupervised semantic frame induction from dependency triples, and unsupervised
semantic class induction from a distributional thesaurus. Our algorithm is
generic and can also be applied to other networks of linguistic data.
| 2,019 | Computation and Language |
You Shall Know the Most Frequent Sense by the Company it Keeps | Identification of the most frequent sense of a polysemous word is an
important semantic task. We introduce two concepts that can benefit MFS
detection: companions, which are the most frequently co-occurring words, and
the most frequent translation in a bitext. We present two novel methods that
incorporate these new concepts, and show that they advance the state of the art
on MFS detection.
| 2,019 | Computation and Language |
Neural Relation Extraction via Inner-Sentence Noise Reduction and
Transfer Learning | Extracting relations is critical for knowledge base completion and
construction in which distant supervised methods are widely used to extract
relational facts automatically with the existing knowledge bases. However, the
automatically constructed datasets comprise large amounts of low-quality sentences
containing noisy words, a problem neglected by current distantly supervised
methods, resulting in unacceptable precision. To mitigate this problem, we
propose a novel word-level distantly supervised approach for relation extraction.
We first build a Sub-Tree Parse (STP) to remove noisy words that are irrelevant to
relations. Then we construct a neural network inputting the sub-tree while
applying the entity-wise attention to identify the important semantic features
of relational words in each instance. To make our model more robust against
noisy words, we initialize our network with a priori knowledge learned from the
relevant task of entity classification by transfer learning. We conduct
extensive experiments using the New York Times (NYT) corpus and Freebase.
Experiments show that our approach is effective and improves the area under the
Precision/Recall (PR) curve from 0.35 to 0.39 over the state-of-the-art work.
| 2,018 | Computation and Language |
Interactive Semantic Parsing for If-Then Recipes via Hierarchical
Reinforcement Learning | Given a text description, most existing semantic parsers synthesize a program
in one shot. However, it is quite challenging to produce a correct program
solely based on the description, which in reality is often ambiguous or
incomplete. In this paper, we investigate interactive semantic parsing, where
the agent can ask the user clarification questions to resolve ambiguities via a
multi-turn dialogue, on an important type of programs called "If-Then recipes."
We develop a hierarchical reinforcement learning (HRL) based agent that
significantly improves the parsing performance with minimal questions to the
user. Results under both simulation and human evaluation show that our agent
substantially outperforms non-interactive semantic parsers and rule-based
agents.
| 2,018 | Computation and Language |
Lessons from Natural Language Inference in the Clinical Domain | State of the art models using deep neural networks have become very good in
learning an accurate mapping from inputs to outputs. However, they still lack
generalization capabilities in conditions that differ from the ones encountered
during training. This is even more challenging in specialized and
knowledge-intensive domains, where training data is limited. To address this gap, we
introduce MedNLI - a dataset annotated by doctors, performing a natural
language inference task (NLI), grounded in the medical history of patients. We
present strategies to: 1) leverage transfer learning using datasets from the
open domain, (e.g. SNLI) and 2) incorporate domain knowledge from external data
and lexical sources (e.g. medical terminologies). Our results demonstrate
performance gains using both strategies.
| 2,018 | Computation and Language |
Semi-Supervised Learning for Neural Keyphrase Generation | We study the problem of generating keyphrases that summarize the key points
for a given document. While sequence-to-sequence (seq2seq) models have achieved
remarkable performance on this task (Meng et al., 2017), model training often
relies on large amounts of labeled data, which is only applicable to
resource-rich domains. In this paper, we propose semi-supervised keyphrase
generation methods by leveraging both labeled data and large-scale unlabeled
samples for learning. Two strategies are proposed. First, unlabeled documents
are tagged with synthetic keyphrases obtained from unsupervised keyphrase
extraction methods or a self-learning algorithm, and then combined with labeled
samples for training. Furthermore, we investigate a multi-task learning
framework to jointly learn to generate keyphrases as well as the titles of the
articles. Experimental results show that our semi-supervised learning-based
methods outperform a state-of-the-art model trained with labeled data only.
| 2,019 | Computation and Language |
The Influence of Down-Sampling Strategies on SVD Word Embedding
Stability | The stability of word embedding algorithms, i.e., the consistency of the word
representations they reveal when trained repeatedly on the same data set, has
recently raised concerns. We here compare word embedding algorithms on three
corpora of different sizes, and evaluate both their stability and accuracy. We
find strong evidence that down-sampling strategies (used as part of their
training procedures) are particularly influential for the stability of
SVDPPMI-type embeddings. This finding seems to explain diverging reports on
their stability and leads us to a simple modification which provides superior
stability as well as accuracy on par with skip-gram embeddings.
| 2,019 | Computation and Language |
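For readers unfamiliar with the down-sampling step in question, a common variant is the word2vec-style frequent-word subsampling rule (Mikolov et al., 2013), sketched below as background only; it is not claimed to be the paper's exact procedure.

```python
# Illustrative sketch of frequent-word down-sampling: a token of word w is
# discarded with probability 1 - sqrt(t / f(w)), where f(w) is the word's
# relative frequency and t is a small threshold. Implementations vary; this
# is the rule as commonly stated, shown only to make the term concrete.
import random
from collections import Counter

def downsample(tokens, t=1e-4, seed=0):
    rng = random.Random(seed)
    counts = Counter(tokens)
    total = len(tokens)
    kept = []
    for w in tokens:
        f = counts[w] / total
        p_discard = max(0.0, 1.0 - (t / f) ** 0.5)
        if rng.random() >= p_discard:
            kept.append(w)
    return kept
```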
Measuring Semantic Abstraction of Multilingual NMT with Paraphrase
Recognition and Generation Tasks | In this paper, we investigate whether multilingual neural translation models
learn stronger semantic abstractions of sentences than bilingual ones. We test
this hypothesis by measuring the perplexity of such models when applied to
paraphrases of the source language. The intuition is that an encoder produces
better representations if a decoder is capable of recognizing synonymous
sentences in the same language even though the model is never trained for that
task. In our setup, we add 16 different auxiliary languages to a bidirectional
bilingual baseline model (English-French) and test it with in-domain and
out-of-domain paraphrases in English. The results show that the perplexity is
significantly reduced in each of the cases, indicating that meaning can be
grounded in translation. This is further supported by a study on paraphrase
generation that we also include at the end of the paper.
| 2,019 | Computation and Language |
Analysis of Speeches in Indian Parliamentary Debates | With the increasing usage of the internet, more and more data is being
digitized, including parliamentary debates, but much of it is in an unstructured
format. There is a need to convert it into a structured format for linguistic
analysis. Much work has been done on parliamentary data such as Hansard and
American congressional floor-debate data on various aspects, but less on
pragmatics. In this paper, we provide a dataset of synopses of Indian
parliamentary debates and perform stance classification of speeches, i.e.,
identifying whether the speaker supports the bill/issue or is against it. We also
analyze the intention of the speeches beyond mere sentences, i.e., pragmatics in
the parliament. Based on a thorough manual analysis of the debates, we developed
an annotation scheme of 4 mutually exclusive categories to analyze the purpose
of the speeches: to find out ISSUES, to BLAME, to APPRECIATE and for CALL FOR
ACTION. We have annotated the dataset provided, with these 4 categories and
conducted preliminary experiments for automatic detection of the categories.
Our automated classification approach gave us promising results.
| 2,018 | Computation and Language |
Demonstrating PAR4SEM - A Semantic Writing Aid with Adaptive
Paraphrasing | In this paper, we present Par4Sem, a semantic writing aid tool based on
adaptive paraphrasing. Unlike many annotation tools that are primarily used to
collect training examples, Par4Sem is integrated into a real-world application,
in this case a writing aid tool, in order to collect training examples from
usage data. Par4Sem is a tool that supports an adaptive, iterative, and
interactive process where the underlying machine learning models are updated
for each iteration using new training examples from usage data. After
motivating the use of ever-learning tools in NLP applications, we evaluate
Par4Sem by adapting it to a text simplification task through mere usage.
| 2,018 | Computation and Language |
Adversarial training for multi-context joint entity and relation
extraction | Adversarial training (AT) is a regularization method that can be used to
improve the robustness of neural network methods by adding small perturbations
in the training data. We show how to use AT for the tasks of entity recognition
and relation extraction. In particular, we demonstrate that applying AT to a
general-purpose baseline model for jointly extracting entities and relations
improves the state-of-the-art effectiveness on several datasets in
different contexts (i.e., news, biomedical, and real estate data) and for
different languages (English and Dutch).
| 2,019 | Computation and Language |
Multi-Source Pointer Network for Product Title Summarization | In this paper, we study the product title summarization problem in E-commerce
applications for display on mobile devices. Compared with conventional
sentence summarization, product title summarization has some extra and
essential constraints. For example, factual errors or loss of the key
information are intolerable for E-commerce applications. Therefore, we abstract
two more constraints for product title summarization: (i) do not introduce
irrelevant information; (ii) retain the key information (e.g., brand name and
commodity name). To address these issues, we propose a novel multi-source
pointer network by adding a new knowledge encoder for pointer network. The
first constraint is handled by the pointer mechanism. For the second constraint, we
restore the key information by copying words from the knowledge encoder with
the help of the soft gating mechanism. For evaluation, we build a large
collection of real-world product titles along with human-written short titles.
Experimental results demonstrate that our model significantly outperforms the
other baselines. Finally, online deployment of our proposed model has yielded a
significant business impact, as measured by the click-through rate.
| 2,018 | Computation and Language |
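The soft gating mentioned above can be pictured as a learned interpolation between two copy distributions. The snippet below is a schematic sketch under the assumption that both attention distributions have already been projected onto a shared output space; it is not the authors' implementation.

```python
# Schematic sketch (not the authors' code) of blending copy distributions
# from two encoders: one over the source title and one over a knowledge
# encoder (e.g., brand/commodity names). Assumes both distributions are
# already aligned to a shared output space; gate_linear would be something
# like nn.Linear(hidden_size, 1).
import torch

def gated_copy(attn_source, attn_knowledge, decoder_state, gate_linear):
    # attn_source, attn_knowledge: (batch, vocab) copy distributions
    # decoder_state: (batch, hidden) current decoder state
    lam = torch.sigmoid(gate_linear(decoder_state))       # gate in (0, 1)
    return lam * attn_source + (1.0 - lam) * attn_knowledge
```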
A Skeleton-Based Model for Promoting Coherence Among Sentences in
Narrative Story Generation | Narrative story generation is a challenging problem because it requires the
generated sentences to have tight semantic connections, which has not been well
studied by most existing generative models. To address this problem, we propose
a skeleton-based model to promote the coherence of generated stories. Different
from traditional models that generate a complete sentence at a stroke, the
proposed model first generates the most critical phrases, called skeleton, and
then expands the skeleton to a complete and fluent sentence. The skeleton is
not manually defined, but learned by a reinforcement learning method. Compared
to the state-of-the-art models, our skeleton-based model can generate
significantly more coherent text according to human evaluation and automatic
evaluation. The G-score is improved by 20.1% in the human evaluation. The code
is available at https://github.com/lancopku/Skeleton-Based-Generation-Model
| 2,018 | Computation and Language |
Gaussian Word Embedding with a Wasserstein Distance Loss | Compared with word embedding based on point representation,
distribution-based word embedding shows more flexibility in expressing
uncertainty and therefore embeds richer semantic information when representing
words. The Wasserstein distance provides a natural notion of dissimilarity
between probability measures and has a closed-form solution when measuring the distance
between two Gaussian distributions. Therefore, with the aim of representing
words in a highly efficient way, we propose to operate a Gaussian word
embedding model with a loss function based on the Wasserstein distance. Also,
external information from ConceptNet will be used to semi-supervise the results
of the Gaussian word embedding. Thirteen datasets from the word similarity
task, together with one from the word entailment task, and six datasets from
the downstream document classification task will be evaluated in this paper to
test our hypothesis.
| 2,018 | Computation and Language |
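The closed-form solution referred to above is what makes the loss practical. For diagonal-covariance Gaussians it reduces to a simple expression, sketched below; the paper's covariance parameterization may differ.

```python
# 2-Wasserstein distance between Gaussians with diagonal covariances:
#   W2^2 = ||mu1 - mu2||^2 + ||sqrt(var1) - sqrt(var2)||^2
# This is the standard closed form; whether the model uses diagonal,
# spherical, or full covariances is an assumption here.
import numpy as np

def wasserstein2_diag(mu1, var1, mu2, var2):
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    return np.sum((mu1 - mu2) ** 2) + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2)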
QuAC : Question Answering in Context | We present QuAC, a dataset for Question Answering in Context that contains
14K information-seeking QA dialogs (100K questions in total). The dialogs
involve two crowd workers: (1) a student who poses a sequence of freeform
questions to learn as much as possible about a hidden Wikipedia text, and (2) a
teacher who answers the questions by providing short excerpts from the text.
QuAC introduces challenges not found in existing machine comprehension
datasets: its questions are often more open-ended, unanswerable, or only
meaningful within the dialog context, as we show in a detailed qualitative
evaluation. We also report results for a number of reference models, including
a recent state-of-the-art reading comprehension architecture extended to
model dialog context. Our best model underperforms humans by 20 F1, suggesting
that there is significant room for future work on this data. Dataset, baseline,
and leaderboard available at http://quac.ai.
| 2,018 | Computation and Language |
CoQA: A Conversational Question Answering Challenge | Humans gather information by engaging in conversations involving a series of
interconnected questions and answers. For machines to assist in information
gathering, it is therefore essential to enable them to answer conversational
questions. We introduce CoQA, a novel dataset for building Conversational
Question Answering systems. Our dataset contains 127k questions with answers,
obtained from 8k conversations about text passages from seven diverse domains.
The questions are conversational, and the answers are free-form text with their
corresponding evidence highlighted in the passage. We analyze CoQA in depth and
show that conversational questions have challenging phenomena not present in
existing reading comprehension datasets, e.g., coreference and pragmatic
reasoning. We evaluate strong conversational and reading comprehension models
on CoQA. The best system obtains an F1 score of 65.4%, which is 23.4 points
behind human performance (88.8%), indicating there is ample room for
improvement. We launch CoQA as a challenge to the community at
http://stanfordnlp.github.io/coqa/
| 2,019 | Computation and Language |
ISNA-Set: A novel English Corpus of Iran NEWS | News agencies publish news on their websites all over the world. Moreover,
creating novel corpora is necessary to bring natural language processing to new
domains. Textual processing of online news is challenging in terms of the
strategy of collecting data, the complex structure of news websites, and
selecting or designing suitable algorithms for processing these types of data.
In contrast to previous works, which focus on creating corpora for Iranian news in
Persian, in this paper we introduce a new corpus for English news of a
national news agency. ISNA-Set is a new dataset of English news of Iranian
Students News Agency (ISNA), as one of the most famous news agencies in Iran.
We statistically analyze the data and the sentiment of the news, and also perform
entity extraction and part-of-speech tagging.
| 2,018 | Computation and Language |
Has Machine Translation Achieved Human Parity? A Case for Document-level
Evaluation | Recent research suggests that neural machine translation achieves parity with
professional human translation on the WMT Chinese--English news translation
task. We empirically test this claim with alternative evaluation protocols,
contrasting the evaluation of single sentences and entire documents. In a
pairwise ranking experiment, human raters assessing adequacy and fluency show a
stronger preference for human over machine translation when evaluating
documents as compared to isolated sentences. Our findings emphasise the need to
shift towards document-level evaluation as machine translation improves to the
degree that errors which are hard or impossible to spot at the sentence-level
become decisive in discriminating quality of different translation outputs.
| 2,018 | Computation and Language |
Language Identification in Code-Mixed Data using Multichannel Neural
Networks and Context Capture | An accurate language identification tool is an absolute necessity for
building complex NLP systems to be used on code-mixed data. A lot of work has
recently been done on this task, but there is still room for improvement.
Inspired by the recent advancements in neural network architectures for
computer vision tasks, we have implemented multichannel neural networks
combining CNN and LSTM for word level language identification of code-mixed
data. Combining this with a Bi-LSTM-CRF context capture module, accuracies of
93.28% and 93.32% are achieved on our two test sets.
| 2,018 | Computation and Language |
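A minimal PyTorch sketch of the multichannel idea, with a CNN channel and a BiLSTM channel over character embeddings, is given below; layer sizes are illustrative and the Bi-LSTM-CRF context-capture module is omitted.

```python
# Illustrative multichannel word-level language identifier: character
# embeddings feed a CNN channel and a BiLSTM channel in parallel, and the
# pooled features are concatenated. Sizes are assumptions, not the paper's.
import torch
import torch.nn as nn

class MultiChannelLID(nn.Module):
    def __init__(self, n_chars, n_langs, emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb, padding_idx=0)
        self.cnn = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden + 2 * hidden, n_langs)

    def forward(self, char_ids):                     # (batch, word_len)
        x = self.emb(char_ids)                       # (batch, len, emb)
        c = torch.relu(self.cnn(x.transpose(1, 2)))  # (batch, hidden, len)
        c = c.max(dim=2).values                      # max-pool over characters
        h, _ = self.lstm(x)                          # (batch, len, 2*hidden)
        h = h.mean(dim=1)                            # average over characters
        return self.out(torch.cat([c, h], dim=1))    # per-word language logits
```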
Deciding the status of controversial phonemes using frequency
distributions; an application to semiconsonants in Spanish | Exploiting the fact that natural languages are complex systems, the present
exploratory article proposes a direct method based on frequency distributions
that may be useful when making a decision on the status of problematic
phonemes, an open problem in linguistics. The main notion is that natural
languages, which can be considered from a complex outlook as information
processing machines, and which somehow manage to set appropriate levels of
redundancy, already "made the choice" whether a linguistic unit is a phoneme or
not, and this would be reflected in a greater smoothness in a frequency versus
rank graph. For the particular case we chose to study, we conclude that it is
reasonable to consider the Spanish semiconsonant /w/ as a separate phoneme from
its vowel counterpart /u/, on the one hand, and possibly also the semiconsonant
/j/ as a separate phoneme from its vowel counterpart /i/, on the other. As
language has been so central a topic in the study of complexity, this
discussion grants us, in addition, an opportunity to gain insight into emerging
properties in the broader complex systems debate.
| 2,018 | Computation and Language |
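The frequency-versus-rank comparison can be prototyped in a few lines; the smoothness proxy below (summed squared second differences of log frequency) is an assumption for illustration, not the article's criterion.

```python
# Sketch of comparing rank-frequency curves under two competing phonemic
# analyses of the same corpus. analysis_a and analysis_b would be lists of
# phoneme tokens obtained with and without /w/ treated as a separate phoneme.
import numpy as np
from collections import Counter

def rank_frequency(units):
    return np.array(sorted(Counter(units).values(), reverse=True), dtype=float)

def roughness(freqs):
    logf = np.log(freqs)
    return float(np.sum(np.diff(logf, n=2) ** 2))  # smaller = smoother curve
```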
Keyphrase Generation with Correlation Constraints | In this paper, we study automatic keyphrase generation. Although conventional
approaches to this task show promising results, they neglect correlation among
keyphrases, resulting in duplication and coverage issues. To solve these
problems, we propose a new sequence-to-sequence architecture for keyphrase
generation named CorrRNN, which captures correlation among multiple keyphrases
in two ways. First, we employ a coverage vector to indicate whether the word in
the source document has been summarized by previous phrases to improve the
coverage for keyphrases. Second, preceding phrases are taken into account to
eliminate duplicate phrases and improve result coherence. Experimental results
show that our model significantly outperforms the state-of-the-art method on
benchmark datasets in terms of both accuracy and diversity.
| 2,018 | Computation and Language |
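The coverage vector mentioned above is typically the running sum of past attention weights, used to discourage re-attending to source words that earlier keyphrases already summarized. A schematic sketch, with an assumed subtractive penalty, follows; it is not CorrRNN's exact formulation.

```python
# Schematic coverage-aware attention step: the coverage vector accumulates
# past attention weights over source positions, and positions that are
# already well covered are penalized. Shapes and the penalty form are
# illustrative assumptions.
import torch

def attend_with_coverage(scores, coverage, penalty=1.0):
    # scores: raw attention logits over source positions, (batch, src_len)
    # coverage: running sum of past attention weights, same shape
    attn = torch.softmax(scores - penalty * coverage, dim=-1)
    new_coverage = coverage + attn   # remember what has been covered
    return attn, new_coverage
```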
Neural Latent Extractive Document Summarization | Extractive summarization models require sentence-level labels, which are
usually created heuristically (e.g., with rule-based methods) given that most
summarization datasets only have document-summary pairs. Since these labels
might be suboptimal, we propose a latent variable extractive model where
sentences are viewed as latent variables and sentences with activated variables
are used to infer gold summaries. During training the loss comes
\emph{directly} from gold summaries. Experiments on the CNN/Dailymail dataset
show that our model improves over a strong extractive baseline trained on
heuristically approximated labels and also performs competitively to several
recent models.
| 2,018 | Computation and Language |
Identifying High-Quality Chinese News Comments Based on Multi-Target
Text Matching Model | With the development of information technology, there is an explosive growth
in the number of online comments concerning news, blogs and so on. The massive
volume of comments is overwhelming, and comments often contain misleading and unwelcome
information. Therefore, it is necessary to identify high-quality comments and
filter out low-quality comments. In this work, we introduce a novel task:
high-quality comment identification (HQCI), which aims to automatically assess
the quality of online comments. First, we construct a news comment corpus,
which consists of news, comments, and the corresponding quality label. Second,
we analyze the dataset, and find the quality of comments can be measured in
three aspects: informativeness, consistency, and novelty. Finally, we propose a
novel multi-target text matching model, which can measure three aspects by
referring to the news and surrounding comments. Experimental results show that
our method can outperform various baselines by a large margin on the news
dataset.
| 2,018 | Computation and Language |
A Characterwise Windowed Approach to Hebrew Morphological Segmentation | This paper presents a novel approach to the segmentation of orthographic word
forms in contemporary Hebrew, focusing purely on splitting without carrying out
morphological analysis or disambiguation. Casting the analysis task as
character-wise binary classification and using adjacent character and
word-based lexicon-lookup features, this approach achieves over 98% accuracy on
the benchmark SPMRL shared task data for Hebrew, and 97% accuracy on a new out
of domain Wikipedia dataset, an improvement of ~4% and 5% over previous
state-of-the-art performance.
| 2,018 | Computation and Language |
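Character-wise binary segmentation of this kind can be sketched with windowed character features and a linear classifier; the feature templates below are illustrative and omit the paper's lexicon-lookup features.

```python
# Sketch of character-wise binary segmentation: each character position is
# classified as "a new segment starts here" or not, from a window of
# adjacent characters. Templates and classifier are illustrative only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def char_features(word, i, window=2):
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        feats[f"char[{off}]"] = word[j] if 0 <= j < len(word) else "<pad>"
    feats["position"] = i
    return feats

def make_training_data(words, boundaries):
    # boundaries[k] is the set of indices where a new segment starts in words[k]
    X, y = [], []
    for w, b in zip(words, boundaries):
        for i in range(len(w)):
            X.append(char_features(w, i))
            y.append(int(i in b))
    return X, y

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
```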
Hierarchical Neural Network for Extracting Knowledgeable Snippets and
Documents | In this study, we focus on extracting knowledgeable snippets and annotating
knowledgeable documents from Web corpus, consisting of the documents from
social media and We-media. Informally, knowledgeable snippets refer to the text
describing concepts, properties of entities, or relations among entities, while
knowledgeable documents are the ones with enough knowledgeable snippets. These
knowledgeable snippets and documents could be helpful in multiple applications,
such as knowledge base construction and knowledge-oriented service. Previous
studies extracted the knowledgeable snippets using the pattern-based method.
Here, we propose the semantic-based method for this task. Specifically, a CNN
based model is developed to extract knowledgeable snippets and annotate
knowledgeable documents simultaneously. Additionally, a "low-level sharing,
high-level splitting" structure of CNN is designed to handle the documents from
different content domains. Compared with building multiple domain-specific
CNNs, this joint model not only substantially reduces training time but also
visibly improves prediction accuracy. The superiority of the proposed
method is demonstrated on a real dataset from the WeChat public platform.
| 2,018 | Computation and Language |
Reducing Gender Bias in Abusive Language Detection | Abusive language detection models tend to have a problem of being biased
toward identity words of a certain group of people because of imbalanced
training datasets. For example, "You are a good woman" was considered "sexist"
when trained on an existing dataset. Such model bias is an obstacle for models
to be robust enough for practical use. In this work, we measure gender biases
on models trained with different abusive language datasets, while analyzing the
effect of different pre-trained word embeddings and model architectures. We
also experiment with three bias mitigation methods: (1) debiased word
embeddings, (2) gender swap data augmentation, and (3) fine-tuning with a
larger corpus. These methods can effectively reduce gender bias by 90-98% and
can be extended to correct model bias in other scenarios.
| 2,018 | Computation and Language |
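Of the three mitigation methods, gender-swap augmentation is the simplest to illustrate: each training sentence is duplicated with gendered terms exchanged, so identity words appear in both classes' contexts. The word-pair list below is a tiny placeholder, not the lexicon used in the paper.

```python
# Sketch of gender-swap data augmentation. SWAP is a toy placeholder
# lexicon; real lexicons handle many more pairs and ambiguous forms
# (e.g., "her" -> "him"/"his") more carefully.
SWAP = {"he": "she", "she": "he", "him": "her", "her": "him",
        "man": "woman", "woman": "man", "boy": "girl", "girl": "boy"}

def gender_swap(sentence):
    out = []
    for tok in sentence.split():
        low = tok.lower()
        swapped = SWAP.get(low, low)
        out.append(swapped.capitalize() if tok[0].isupper() else swapped)
    return " ".join(out)

def augment(dataset):
    # dataset: list of (sentence, label); the swapped copy keeps the label
    return dataset + [(gender_swap(s), y) for s, y in dataset]
```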
Finding Good Representations of Emotions for Text Classification | It is important for machines to interpret human emotions properly for better
human-machine communications, as emotion is an essential part of human-to-human
communications. One aspect of emotion is reflected in the language we use. How
to represent emotions in texts is a challenge in natural language processing
(NLP). Although continuous vector representations like word2vec have become the
new norm for NLP problems, their limitations are that they do not take emotions
into consideration and can unintentionally contain bias toward certain
identities like different genders.
This thesis focuses on improving existing representations in both word and
sentence levels by explicitly taking emotions inside text and model bias into
account in their training process. Our improved representations can help to
build more robust machine learning models for affect-related text
classification like sentiment/emotion analysis and abusive language detection.
We first propose representations called emotional word vectors (EVEC), which
are learned from a convolutional neural network model with an emotion-labeled
corpus constructed using hashtags. Secondly, we extend this to learning
sentence-level representations with a huge corpus of texts with the pseudo task
of recognizing emojis. Our results show that, with the representations trained
from millions of tweets with weakly supervised labels such as hashtags and
emojis, we can solve sentiment/emotion analysis tasks more effectively.
Lastly, as examples of model bias in representations of existing approaches,
we explore a specific problem of automatic detection of abusive language. We
address the issue of gender bias in various neural network models by conducting
experiments to measure and reduce those biases in the representations in order
to build more robust classification models.
| 2,018 | Computation and Language |
Improving Matching Models with Hierarchical Contextualized
Representations for Multi-turn Response Selection | In this paper, we study context-response matching with pre-trained
contextualized representations for multi-turn response selection in
retrieval-based chatbots. Existing models, such as Cove and ELMo, are trained
with limited context (often a single sentence or paragraph), and may not work
well on multi-turn conversations, due to the hierarchical nature, informal
language, and domain-specific words. To address the challenges, we propose
pre-training hierarchical contextualized representations, including contextual
word-level and sentence-level representations, by learning a dialogue
generation model from large-scale conversations with a hierarchical
encoder-decoder architecture. Then the two levels of representations are
blended into the input and output layer of a matching model respectively.
Experimental results on two benchmark conversation datasets indicate that the
proposed hierarchical contextualized representations can bring significant
and consistent improvement to existing matching models for response
selection.
| 2,019 | Computation and Language |
The Gap of Semantic Parsing: A Survey on Automatic Math Word Problem
Solvers | Solving mathematical word problems (MWPs) automatically is challenging,
primarily due to the semantic gap between human-readable words and
machine-understandable logics. Despite the long history dating back to the 1960s,
MWPs have regained intensive attention in the past few years with the
advancement of Artificial Intelligence (AI). Solving MWPs successfully is
considered as a milestone towards general AI. Many systems have claimed
promising results in self-crafted and small-scale datasets. However, when
applied on large and diverse datasets, none of the proposed methods in the
literature achieves high precision, revealing that current MWP solvers still
have much room for improvement. This motivated us to present a comprehensive
survey to deliver a clear and complete picture of automatic math problem
solvers. In this survey, we focus on algebraic word problems, summarize
their extracted features and proposed techniques to bridge the semantic gap and
compare their performance in the publicly accessible datasets. We also cover
automatic solvers for other types of math problems such as geometric problems
that require the understanding of diagrams. Finally, we identify several
emerging research directions for the readers with interests in MWPs.
| 2,019 | Computation and Language |
Learning Sentiment Memories for Sentiment Modification without Parallel
Data | The task of sentiment modification requires reversing the sentiment of the
input and preserving the sentiment-independent content. However, aligned
sentences with the same content but different sentiments are usually
unavailable. Due to the lack of such parallel data, it is hard to extract
sentiment independent content and reverse the sentiment in an unsupervised way.
Previous work usually cannot reconcile sentiment transformation and content
preservation. In this paper, motivated by the fact that the non-emotional context
(e.g., "staff") provides strong cues for the occurrence of emotional words
(e.g., "friendly"), we propose a novel method that automatically extracts
appropriate sentiment information from learned sentiment memories according to
specific context. Experiments show that our method substantially improves the
content preservation degree and achieves the state-of-the-art performance.
| 2,018 | Computation and Language |
An Attention-Gated Convolutional Neural Network for Sentence
Classification | The classification of sentences is very challenging, since sentences contain
limited contextual information. In this paper, we propose an
Attention-Gated Convolutional Neural Network (AGCNN) for sentence
classification, which generates attention weights from the feature's context
windows of different sizes by using specialized convolution encoders. It makes
full use of limited contextual information to extract and enhance the influence
of important features in predicting the sentence's category. Experimental
results demonstrated that our model can achieve up to 3.1% higher accuracy than
standard CNN models, and gain competitive results over the baselines on four
out of the six tasks. Besides, we designed an activation function, namely,
Natural Logarithm rescaled Rectified Linear Unit (NLReLU). Experiments showed
that NLReLU can outperform ReLU and is comparable to other well-known
activation functions on AGCNN.
| 2,018 | Computation and Language |
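A plausible reading of the NLReLU name is a natural-logarithm rescaling of the ReLU output, i.e. NLReLU(x) = ln(beta * max(0, x) + 1); this is an assumption from the name, and the paper's exact definition should be checked.

```python
# Sketch of NLReLU under the assumed definition ln(beta * relu(x) + 1);
# beta defaulting to 1.0 is also an assumption.
import torch

def nlrelu(x, beta=1.0):
    return torch.log(beta * torch.relu(x) + 1.0)
```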
Expansional Retrofitting for Word Vector Enrichment | Retrofitting techniques, which inject external resources into word
representations, have compensated for the weakness of distributed representations
in semantic and relational knowledge between words. Implicitly retrofitting
word vectors with an expansional technique outperforms retrofitting on word
similarity tasks with word vector generalization. In this paper, we propose
unsupervised extrofitting: expansional retrofitting (extrofitting) without
external semantic lexicons. We also propose deep extrofitting: in-depth
stacking of extrofitting and further combinations of extrofitting with
retrofitting. When experimenting with GloVe, we show that our methods
outperform the previous methods on most word similarity tasks while
requiring only synonyms as an external resource. Lastly, we show the effect of
word vector enrichment on a downstream text classification task.
| 2,019 | Computation and Language |