Titles | Abstracts | Years | Categories
---|---|---|---|
Preparing Bengali-English Code-Mixed Corpus for Sentiment Analysis of
Indian Languages | Analysis of informative contents and sentiments of social users has been
attempted quite intensively in the recent past. Most of these systems are usable
only for monolingual data and fail or give poor results when used on data
with the code-mixing property. To draw attention and encourage researchers to
work on this problem, we prepared gold standard Bengali-English code-mixed data
with language and polarity tag for sentiment analysis purposes. In this paper,
we discuss the systems we prepared to collect and filter raw Twitter data. In
order to reduce manual work while annotation, hybrid systems combining rule
based and supervised models were developed for both language and sentiment
tagging. The final corpus was annotated by a group of annotators following a
few guidelines. The gold standard corpus thus obtained has impressive
inter-annotator agreement in terms of Kappa values. Various metrics, such as
the Code-Mixed Index (CMI) and Code-Mixed Factor (CF), along with language and
emotion aspects, were also used to qualitatively assess the code-mixing and
sentiment properties of the corpus.
| 2018 | Computation and Language |
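The Code-Mixed Index mentioned in the abstract above is usually computed from per-token language tags, following Das and Gambäck (2014). A minimal sketch is shown below; the tag inventory ("BN", "EN", "UNIV" for language-independent tokens) is an illustrative assumption, not part of the corpus description.

```python
# Sketch of the Code-Mixed Index: CMI = 100 * (1 - max_lang / (n - u)),
# where max_lang is the token count of the dominant language, n the total
# number of tokens and u the number of language-independent tokens.
def code_mixed_index(tags):
    lang_tags = [t for t in tags if t != "UNIV"]   # drop language-independent tokens
    if not lang_tags:
        return 0.0                                 # no language-tagged words at all
    counts = {}
    for t in lang_tags:
        counts[t] = counts.get(t, 0) + 1
    max_lang = max(counts.values())
    return 100.0 * (1.0 - max_lang / len(lang_tags))

print(code_mixed_index(["EN", "EN", "BN", "BN", "BN", "UNIV"]))  # 40.0
```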
Entity-Aware Language Model as an Unsupervised Reranker | In language modeling, it is difficult to incorporate entity relationships
from a knowledge-base. One solution is to use a reranker trained with global
features, in which global features are derived from n-best lists. However,
training such a reranker requires manually annotated n-best lists, which is
expensive to obtain. We propose a method based on the contrastive estimation
method that alleviates the need for such data. Experiments in the music domain
demonstrate that global features, as well as features extracted from an
external knowledge-base, can be incorporated into our reranker. Our final
model, a simple ensemble of a language model and reranker, achieves a 0.44%
absolute word error rate improvement over an LSTM language model on the blind
test data.
| 2018 | Computation and Language |
Semantic Parsing Natural Language into SPARQL: Improving Target Language
Representation with Neural Attention | Semantic parsing is the process of mapping a natural language sentence into a
formal representation of its meaning. In this work we use a neural network
approach to transform a natural language sentence into a query to an ontology
database in the SPARQL language. This method does not rely on handcrafted
rules, high-quality lexicons, manually-built templates or other handmade
complex structures. Our approach is based on a vector space model and neural
networks. The proposed model consists of two learning steps. The first step
generates a vector representation for the natural language sentence and the SPARQL query.
The second step uses this vector representation as input to a neural network
(LSTM with attention mechanism) to generate a model able to encode natural
language and decode SPARQL.
| 2018 | Computation and Language |
A Feature-Rich Vietnamese Named-Entity Recognition Model | In this paper, we present a feature-based named-entity recognition (NER)
model that achieves state-of-the-art accuracy for the Vietnamese language. We
combine word, word-shape features, PoS, chunk, Brown-cluster-based features,
and word-embedding-based features in the Conditional Random Fields (CRF) model.
We also explore the effects of word segmentation, PoS tagging, and chunking
results of many popular Vietnamese NLP toolkits on the accuracy of the proposed
feature-based NER model. To date, ours is the first work that
systematically performs an extrinsic evaluation of basic Vietnamese NLP
toolkits on the downstream NER task. Experimental results show that while
automatically-generated word segmentation is useful, the PoS and chunking
information generated by Vietnamese NLP tools does not show clear benefits for
the proposed feature-based NER model.
| 2018 | Computation and Language |
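Word-shape features of the kind listed in the abstract above are typically built by mapping characters to classes and collapsing repeats; the sketch below illustrates one common variant and a minimal CRF feature function. The exact feature templates used by the authors are not specified here, so treat the details as assumptions.

```python
import re

def word_shape(token):
    """Map characters to classes (X upper, x lower, d digit) and collapse runs,
    e.g. 'Hà-Nội' -> 'Xx-Xx', '2018' -> 'd'."""
    classes = []
    for ch in token:
        if ch.isupper():
            classes.append("X")
        elif ch.islower():
            classes.append("x")
        elif ch.isdigit():
            classes.append("d")
        else:
            classes.append(ch)
    return re.sub(r"(.)\1+", r"\1", "".join(classes))

def token_features(tokens, i):
    """Illustrative per-token feature dict; PoS, chunk, cluster and embedding
    features would be added in the same way."""
    return {
        "word.lower": tokens[i].lower(),
        "word.shape": word_shape(tokens[i]),
        "word.istitle": tokens[i].istitle(),
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }
```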
Concept2vec: Metrics for Evaluating Quality of Embeddings for
Ontological Concepts | Although there is an emerging trend towards generating embeddings for
primarily unstructured data and, recently, for structured data, no systematic
suite for measuring the quality of embeddings has been proposed yet. This
deficiency is further sensed with respect to embeddings generated for
structured data because there are no concrete evaluation metrics measuring the
quality of the encoded structure as well as semantic patterns in the embedding
space. In this paper, we introduce a framework containing three distinct tasks
concerned with the individual aspects of ontological concepts: (i) the
categorization aspect, (ii) the hierarchical aspect, and (iii) the relational
aspect. Then, in the scope of each task, a number of intrinsic metrics are
proposed for evaluating the quality of the embeddings. Furthermore, w.r.t. this
framework, multiple experimental studies were run to compare the quality of the
available embedding models. Employing this framework in future research can
reduce misjudgment and provide greater insight about quality comparisons of
embeddings for ontological concepts. Our sampled data and code are available at
https://github.com/alshargi/Concept2vec under GNU General Public License v3.0.
| 2019 | Computation and Language |
Automatic Detection of Online Jihadist Hate Speech | We have developed a system that automatically detects online jihadist hate
speech with over 80% accuracy, by using techniques from Natural Language
Processing and Machine Learning. The system is trained on a corpus of 45,000
subversive Twitter messages collected from October 2014 to December 2016. We
present a qualitative and quantitative analysis of the jihadist rhetoric in the
corpus, examine the network of Twitter users, outline the technical procedure
used to train the system, and discuss examples of use.
| 2018 | Computation and Language |
Monitoring Targeted Hate in Online Environments | Hateful comments, swearwords and sometimes even death threats are becoming a
reality for many people today in online environments. This is especially true
for journalists, politicians, artists, and other public figures. This paper
describes how hate directed towards individuals can be measured in online
environments using a simple dictionary-based approach. We present a case study
on Swedish politicians, and use examples from this study to discuss
shortcomings of the proposed dictionary-based approach. We also outline
possibilities for potential refinements of the proposed approach.
| 2018 | Computation and Language |
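A dictionary-based measure of targeted hate, as described in the abstract above, can be as simple as counting lexicon hits in messages that mention a target. The sketch below is an illustrative reading; the lexicon, the mention matching and the scoring are assumptions rather than the authors' exact procedure.

```python
# Toy lexicon; a real study would use a curated dictionary of abusive terms.
HATE_TERMS = {"idiot", "traitor", "liar"}

def targeted_hate_score(messages, target_name):
    """Fraction of messages mentioning the target that contain a lexicon term."""
    target = target_name.lower()
    mentioning = [m.lower() for m in messages if target in m.lower()]
    if not mentioning:
        return 0.0
    hateful = sum(any(term in m.split() for term in HATE_TERMS) for m in mentioning)
    return hateful / len(mentioning)
```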
Enhanced Word Representations for Bridging Anaphora Resolution | Most current models of word representations (e.g., GloVe) have successfully
captured fine-grained semantics. However, semantic similarity exhibited in
these word embeddings is not suitable for resolving bridging anaphora, which
requires the knowledge of associative similarity (i.e., relatedness) instead of
semantic similarity information between synonyms or hypernyms. We create word
embeddings (embeddings_PP) to capture such relatedness by exploring the
syntactic structure of noun phrases. We demonstrate that using embeddings_PP
alone achieves around 30% of accuracy for bridging anaphora resolution on the
ISNotes corpus. Furthermore, we achieve a substantial gain over the
state-of-the-art system (Hou et al., 2013) for bridging antecedent selection.
| 2018 | Computation and Language |
Neural Lattice Language Models | In this work, we propose a new language modeling paradigm that has the
ability to perform both prediction and moderation of information flow at
multiple granularities: neural lattice language models. These models construct
a lattice of possible paths through a sentence and marginalize across this
lattice to calculate sequence probabilities or optimize parameters. This
approach allows us to seamlessly incorporate linguistic intuitions - including
polysemy and the existence of multi-word lexical items - into our language model.
Experiments on multiple language modeling tasks show that English neural
lattice language models that utilize polysemous embeddings are able to improve
perplexity by 9.95% relative to a word-level baseline, and that a Chinese model
that handles multi-character tokens is able to improve perplexity by 20.94%
relative to a character-level baseline.
| 2018 | Computation and Language |
How to evaluate sentiment classifiers for Twitter time-ordered data? | Social media are becoming an increasingly important source of information
about the public mood regarding issues such as elections, Brexit, stock market,
etc. In this paper we focus on sentiment classification of Twitter data.
Construction of sentiment classifiers is a standard text mining task, but here
we address the question of how to properly evaluate them as there is no settled
way to do so. Sentiment classes are ordered and unbalanced, and Twitter
produces a stream of time-ordered data. The problem we address concerns the
procedures used to obtain reliable estimates of performance measures, and
whether the temporal ordering of the training and test data matters. We
collected a large set of 1.5 million tweets in 13 European languages. We
created 138 sentiment models and out-of-sample datasets, which are used as a
gold standard for evaluations. The corresponding 138 in-sample datasets are
used to empirically compare six different estimation procedures: three variants
of cross-validation, and three variants of sequential validation (where test
set always follows the training set). We find no significant difference between
the best cross-validation and sequential validation. However, we observe that
all cross-validation variants tend to overestimate the performance, while the
sequential methods tend to underestimate it. Standard cross-validation with
random selection of examples is significantly worse than the blocked
cross-validation, and should not be used to evaluate classifiers in
time-ordered data scenarios.
| 2018 | Computation and Language |
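Two of the estimation procedures compared above differ only in how the time-ordered examples are split; the sketch below contrasts blocked cross-validation (contiguous folds, no shuffling) with sequential validation (the test block always follows its training block). The fold count is an arbitrary illustrative choice.

```python
def blocked_cv_splits(n_examples, n_folds=10):
    """Yield (train_idx, test_idx) pairs with contiguous, unshuffled folds."""
    fold = n_examples // n_folds
    for k in range(n_folds):
        test = list(range(k * fold, (k + 1) * fold))
        test_set = set(test)
        train = [i for i in range(n_examples) if i not in test_set]
        yield train, test

def sequential_splits(n_examples, n_folds=10):
    """Yield splits in which the test block always follows the training block."""
    fold = n_examples // n_folds
    for k in range(1, n_folds):
        yield list(range(0, k * fold)), list(range(k * fold, (k + 1) * fold))
```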
MCScript: A Novel Dataset for Assessing Machine Comprehension Using
Script Knowledge | We introduce a large dataset of narrative texts and questions about these
texts, intended to be used in a machine comprehension task that requires
reasoning using commonsense knowledge. Our dataset complements similar datasets
in that we focus on stories about everyday activities, such as going to the
movies or working in the garden, and that the questions require commonsense
knowledge, or more specifically, script knowledge, to be answered. We show that
our mode of data collection via crowdsourcing results in a substantial amount
of such inference questions. The dataset forms the basis of a shared task on
commonsense and script knowledge organized at SemEval 2018 and provides
challenging test cases for the broader natural language understanding
community.
| 2018 | Computation and Language |
FEVER: a large-scale dataset for Fact Extraction and VERification | In this paper we introduce a new publicly available dataset for verification
against textual sources, FEVER: Fact Extraction and VERification. It consists
of 185,445 claims generated by altering sentences extracted from Wikipedia and
subsequently verified without knowledge of the sentence they were derived from.
The claims are classified as Supported, Refuted or NotEnoughInfo by annotators
achieving 0.6841 in Fleiss $\kappa$. For the first two classes, the annotators
also recorded the sentence(s) forming the necessary evidence for their
judgment. To characterize the challenge of the dataset presented, we develop a
pipeline approach and compare it to suitably designed oracles. The best
accuracy we achieve on labeling a claim accompanied by the correct evidence is
31.87%, while if we ignore the evidence we achieve 50.91%. Thus we believe that
FEVER is a challenging testbed that will help stimulate progress on claim
verification against textual sources.
| 2018 | Computation and Language |
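The agreement figure quoted above is Fleiss' kappa; for reference, a minimal implementation is sketched below. It assumes a matrix counts[i][j] giving the number of annotators who assigned claim i to class j (Supported, Refuted, NotEnoughInfo), with the same number of annotators per claim; that data layout is an assumption for illustration.

```python
def fleiss_kappa(counts):
    """Fleiss' kappa for counts[i][j] = #annotators labelling item i with class j."""
    n_items = len(counts)
    n_raters = sum(counts[0])                      # assumes equal raters per item
    n_classes = len(counts[0])
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(n_classes)]              # marginal class proportions
    p_i = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]                      # per-item agreement
    p_bar = sum(p_i) / n_items
    p_e = sum(p * p for p in p_j)                  # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```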
SentEval: An Evaluation Toolkit for Universal Sentence Representations | We introduce SentEval, a toolkit for evaluating the quality of universal
sentence representations. SentEval encompasses a variety of tasks, including
binary and multi-class classification, natural language inference and sentence
similarity. The set of tasks was selected based on what appears to be the
community consensus regarding the appropriate evaluations for universal
sentence representations. The toolkit comes with scripts to download and
preprocess datasets, and an easy interface to evaluate sentence encoders. The
aim is to provide a fairer, less cumbersome and more centralized way for
evaluating sentence representations.
| 2018 | Computation and Language |
Challenges in Discriminating Profanity from Hate Speech | In this study we approach the problem of distinguishing general profanity
from hate speech in social media, something which has not been widely
considered. Using a new dataset annotated specifically for this task, we employ
supervised classification along with a set of features that includes n-grams,
skip-grams and clustering-based word representations. We apply approaches based
on single classifiers as well as more advanced ensemble classifiers and stacked
generalization, achieving the best result of 80% accuracy for this 3-class
classification task. Analysis of the results reveals that discriminating hate
speech and profanity is not a simple task, which may require features that
capture a deeper understanding of the text not always possible with surface
n-grams. The variability of gold labels in the annotated data, due to
differences in the subjective adjudications of the annotators, is also an
issue. Other directions for future work are discussed.
| 2018 | Computation and Language |
A Simple and Effective Approach to the Story Cloze Test | In the Story Cloze Test, a system is presented with a 4-sentence prompt to a
story, and must determine which one of two potential endings is the 'right'
ending to the story. Previous work has shown that ignoring the training set and
training a model on the validation set can achieve high accuracy on this task
due to stylistic differences between the story endings in the training set and
validation and test sets. Following this approach, we present a simpler
fully-neural approach to the Story Cloze Test using skip-thought embeddings of
the stories in a feed-forward network that achieves close to state-of-the-art
performance on this task without any feature engineering. We also find that
considering just the last sentence of the prompt instead of the whole prompt
yields higher accuracy with our approach.
| 2018 | Computation and Language |
Advancing Connectionist Temporal Classification With Attention Modeling | In this study, we propose advancing all-neural speech recognition by directly
incorporating attention modeling within the Connectionist Temporal
Classification (CTC) framework. In particular, we derive new context vectors
using time convolution features to model attention as part of the CTC network.
To further improve attention modeling, we utilize content information extracted
from a network representing an implicit language model. Finally, we introduce
vector based attention weights that are applied on context vectors across both
time and their individual components. We evaluate our system on a 3400-hour
Microsoft Cortana voice assistant task and demonstrate that our proposed model
consistently outperforms the baseline model, achieving about a 20% relative
reduction in word error rate.
| 2018 | Computation and Language |
Advancing Acoustic-to-Word CTC Model | The acoustic-to-word model based on the connectionist temporal classification
(CTC) criterion was shown as a natural end-to-end (E2E) model directly
targeting words as output units. However, the word-based CTC model suffers from
the out-of-vocabulary (OOV) issue, as it can only model a limited number of words
in the output layer and maps all the remaining words into an OOV output node.
Hence, such a word-based CTC model can only recognize the frequent words
modeled by the network output nodes. Our first attempt to improve the
acoustic-to-word model is a hybrid CTC model which consults a letter-based CTC
when the word-based CTC model emits OOV tokens during testing time. Then, we
propose a much better solution by training a mixed-unit CTC model which
decomposes all the OOV words into sequences of frequent words and multi-letter
units. Evaluated on a 3400-hour Microsoft Cortana voice assistant task, the
final acoustic-to-word solution improves over the baseline word-based CTC by a
relative 12.09% word error rate (WER) reduction when combined with our proposed
attention CTC. Such an E2E model, without using any language model (LM) or
complex decoder, outperforms the traditional context-dependent phoneme CTC,
which has a strong LM and decoder, by a relative 6.79%.
| 2018 | Computation and Language |
Achieving Human Parity on Automatic Chinese to English News Translation | Machine translation has made rapid advances in recent years. Millions of
people are using it today in online translation systems and mobile applications
in order to communicate across language barriers. The question naturally arises
whether such systems can approach or achieve parity with human translations. In
this paper, we first address the problem of how to define and accurately
measure human parity in translation. We then describe Microsoft's machine
translation system and measure the quality of its translations on the widely
used WMT 2017 news translation task from Chinese to English. We find that our
latest neural machine translation system has reached a new state-of-the-art,
and that the translation quality is at human parity when compared to
professional human translations. We also find that it significantly exceeds the
quality of crowd-sourced non-professional translations.
| 2018 | Computation and Language |
Word2Bits - Quantized Word Vectors | Word vectors require significant amounts of memory and storage, posing issues
to resource-limited devices like mobile phones and GPUs. We show that high
quality quantized word vectors using 1-2 bits per parameter can be learned by
introducing a quantization function into Word2Vec. We furthermore show that
training with the quantization function acts as a regularizer. We train word
vectors on English Wikipedia (2017) and evaluate them on standard word
similarity and analogy tasks and on question answering (SQuAD). Our quantized
word vectors not only take 8-16x less space than full precision (32 bit) word
vectors but also outperform them on word similarity tasks and question
answering.
| 2018 | Computation and Language |
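The quantization function mentioned above maps each parameter to one of a few discrete values during training. The sketch below shows a 1-bit variant; the specific constant (1/3) and the implied straight-through treatment of gradients when this is used inside training are assumptions here, not a restatement of the paper's exact recipe.

```python
import numpy as np

def quantize_1bit(vec, scale=1.0 / 3.0):
    """Map every parameter to +scale or -scale, i.e. one bit of information each."""
    return np.where(vec >= 0.0, scale, -scale)

v = np.array([0.12, -0.70, 0.03, -0.01])
print(quantize_1bit(v))   # [ 0.333 -0.333  0.333 -0.333]
```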
HFL-RC System at SemEval-2018 Task 11: Hybrid Multi-Aspects Model for
Commonsense Reading Comprehension | This paper describes the system that achieved state-of-the-art results at
SemEval-2018 Task 11: Machine Comprehension using Commonsense Knowledge. In
this paper, we present a neural network called Hybrid Multi-Aspects (HMA)
model, which mimics human intuitions in dealing with multiple-choice
reading comprehension. In this model, we aim to produce predictions over
multiple aspects by calculating attention among the text, question and choices,
and combine these results for the final predictions. Experimental results show that
our HMA model gives substantial improvements over the baseline system and
won first place on the final test set leaderboard with an accuracy of
84.13%.
| 2018 | Computation and Language |
Structure Regularized Neural Network for Entity Relation Classification
for Chinese Literature Text | Relation classification is an important semantic processing task in the field
of natural language processing. In this paper, we propose the task of relation
classification for Chinese literature text. A new dataset of Chinese literature
text is constructed to facilitate the study in this task. We present a novel
model, named Structure Regularized Bidirectional Recurrent Convolutional Neural
Network (SR-BRCNN), to identify the relation between entities. The proposed
model learns relation representations along the shortest dependency path (SDP)
extracted from the structure regularized dependency tree, which has the
benefits of reducing the complexity of the whole model. Experimental results
show that the proposed method significantly improves the F1 score by 10.3, and
outperforms the state-of-the-art approaches on Chinese literature text.
| 2018 | Computation and Language |
RUSSE'2018: A Shared Task on Word Sense Induction for the Russian
Language | The paper describes the results of the first shared task on word sense
induction (WSI) for the Russian language. While similar shared tasks were
conducted in the past for some Romance and Germanic languages, we explore the
performance of sense induction and disambiguation methods for a Slavic language
that shares many features with other Slavic languages, such as rich morphology
and virtually free word order. The participants were asked to group contexts of
a given word in accordance with its senses that were not provided beforehand.
For instance, given a word "bank" and a set of contexts for this word, e.g.
"bank is a financial institution that accepts deposits" and "river bank is a
slope beside a body of water", a participant was asked to cluster such contexts
into a number of clusters, not known in advance, corresponding to, in this case,
the "company" and the "area" senses of the word "bank". For the purpose of this
evaluation campaign, we developed three new evaluation datasets based on sense
inventories that have different sense granularity. The contexts in these
datasets were sampled from texts of Wikipedia, the academic corpus of Russian,
and an explanatory dictionary of Russian. Overall, 18 teams participated in the
competition submitting 383 models. Multiple teams managed to substantially
outperform competitive state-of-the-art baselines from the previous years based
on sense embeddings.
| 2018 | Computation and Language |
RUSSE: The First Workshop on Russian Semantic Similarity | The paper gives an overview of the Russian Semantic Similarity Evaluation
(RUSSE) shared task held in conjunction with the Dialogue 2015 conference.
There exist a lot of comparative studies on semantic similarity, yet no
analysis of such measures was ever performed for the Russian language.
Exploring this problem for the Russian language is even more interesting,
because this language has features, such as rich morphology and free word
order, which make it significantly different from English, German, and other
well-studied languages. We attempt to bridge this gap by proposing a shared
task on the semantic similarity of Russian nouns. Our key contribution is an
evaluation methodology based on four novel benchmark datasets for the Russian
language. Our analysis of the 105 submissions from 19 teams reveals that
successful approaches for English, such as distributional and skip-gram models,
are directly applicable to Russian as well. On the one hand, the best results
in the contest were obtained by sophisticated supervised models that combine
evidence from different sources. On the other hand, completely unsupervised
approaches, such as a skip-gram model estimated on a large-scale corpus, were
able to score among the top 5 systems.
| 2018 | Computation and Language |
Enriching Frame Representations with Distributionally Induced Senses | We introduce a new lexical resource that enriches the Framester knowledge
graph, which links FrameNet, WordNet, VerbNet and other resources, with semantic
features from text corpora. These features are extracted from distributionally
induced sense inventories and subsequently linked to the manually-constructed
frame representations to boost the performance of frame disambiguation in
context. Since Framester is a frame-based knowledge graph, which enables
full-fledged OWL querying and reasoning, our resource paves the way for the
development of novel, deeper semantic-aware applications that could benefit
from the combination of knowledge from text and complex symbolic
representations of events and participants. Together with the resource we also
provide the software we developed for the evaluation in the task of Word Frame
Disambiguation (WFD).
| 2018 | Computation and Language |
RankME: Reliable Human Ratings for Natural Language Generation | Human evaluation for natural language generation (NLG) often suffers from
inconsistent user ratings. While previous research tends to attribute this
problem to individual user preferences, we show that the quality of human
judgements can also be improved by experimental design. We present a novel
rank-based magnitude estimation method (RankME), which combines the use of
continuous scales and relative assessments. We show that RankME significantly
improves the reliability and consistency of human ratings compared to
traditional evaluation methods. In addition, we show that it is possible to
evaluate NLG systems according to multiple, distinct criteria, which is
important for error analysis. Finally, we demonstrate that RankME, in
combination with Bayesian estimation of system quality, is a cost-effective
alternative for ranking multiple NLG systems.
| 2018 | Computation and Language |
Corpus Statistics in Text Classification of Online Data | Transformation of Machine Learning (ML) from a boutique science to a
generally accepted technology has increased the importance of reproducibility
and transportability of ML studies. In the current work, we investigate how corpus
characteristics of textual data sets correspond to text classification results.
We work with two data sets gathered from sub-forums of an online health-related
forum. Our empirical results are obtained for a multi-class sentiment analysis
application.
| 2018 | Computation and Language |
Deep learning for affective computing: text-based emotion recognition in
decision support | Emotions widely affect human decision-making. This fact is taken into account
by affective computing with the goal of tailoring decision support to the
emotional states of individuals. However, the accurate recognition of emotions
within narrative documents presents a challenging undertaking due to the
complexity and ambiguity of language. Performance improvements can be achieved
through deep learning; yet, as demonstrated in this paper, the specific nature
of this task requires the customization of recurrent neural networks with
regard to bidirectional processing, dropout layers as a means of
regularization, and weighted loss functions. In addition, we propose
sent2affect, a tailored form of transfer learning for affective computing: here
the network is pre-trained for a different task (i.e. sentiment analysis),
while the output layer is subsequently tuned to the task of emotion
recognition. The resulting performance is evaluated in a holistic setting
across 6 benchmark datasets, where we find that both recurrent neural networks
and transfer learning consistently outperform traditional machine learning.
Altogether, the findings have considerable implications for the use of
affective computing.
| 2018 | Computation and Language |
Experiments with Neural Networks for Small and Large Scale Authorship
Verification | We propose two models for a special case of authorship verification problem.
The task is to investigate whether the two documents of a given pair are
written by the same author. We consider the authorship verification problem for
both small and large scale datasets. The underlying small-scale problem has two
main challenges: First, the authors of the documents are unknown to us because
no previous writing samples are available. Second, the two documents are short
(a few hundred to a few thousand words) and may differ considerably in the
genre and/or topic. To solve it, we propose a transformation encoder that
transforms one document of the pair into the other. This document transformation
generates a loss which is used as a recognizable feature to verify whether the
authors of the pair are identical. For the large-scale problem, where many
authors are involved and more, longer examples are available, a parallel
recurrent neural network is proposed. It compares the language models of the
two documents. We evaluate our methods on various types of datasets including
Authorship Identification datasets of PAN competition, Amazon reviews, and
machine learning articles. Experiments show that both methods achieve stable
and competitive performance compared to the baselines.
| 2018 | Computation and Language |
Argumentation theory for mathematical argument | To adequately model mathematical arguments the analyst must be able to
represent the mathematical objects under discussion and the relationships
between them, as well as inferences drawn about these objects and relationships
as the discourse unfolds. We introduce a framework with these properties, which
has been used to analyse mathematical dialogues and expository texts. The
framework can recover salient elements of discourse at, and within, the
sentence level, as well as the way mathematical content connects to form larger
argumentative structures. We show how the framework might be used to support
computational reasoning, and argue that it provides a more natural way to
examine the process of proving theorems than do Lamport's structured proofs.
| 2018 | Computation and Language |
Dear Sir or Madam, May I introduce the GYAFC Dataset: Corpus, Benchmarks
and Metrics for Formality Style Transfer | Style transfer is the task of automatically transforming a piece of text in
one particular style into another. A major barrier to progress in this field
has been a lack of training and evaluation datasets, as well as benchmarks and
automatic metrics. In this work, we create the largest corpus for a particular
stylistic transfer (formality) and show that techniques from the machine
translation community can serve as strong baselines for future work. We also
discuss challenges of using automatic metrics.
| 2018 | Computation and Language |
The Web as a Knowledge-base for Answering Complex Questions | Answering complex questions is a time-consuming activity for humans that
requires reasoning and integration of information. Recent work on reading
comprehension made headway in answering simple questions, but tackling complex
questions is still an ongoing research challenge. Conversely, semantic parsers
have been successful at handling compositionality, but only when the
information resides in a target knowledge-base. In this paper, we present a
novel framework for answering broad and complex questions, assuming answering
simple questions is possible using a search engine and a reading comprehension
model. We propose to decompose complex questions into a sequence of simple
questions, and compute the final answer from the sequence of answers. To
illustrate the viability of our approach, we create a new dataset of complex
questions, ComplexWebQuestions, and present a model that decomposes questions
and interacts with the web to compute an answer. We empirically demonstrate
that question decomposition improves performance from 20.8 precision@1 to 27.5
precision@1 on this new dataset.
| 2018 | Computation and Language |
Sentiment Analysis of Code-Mixed Indian Languages: An Overview of
SAIL_Code-Mixed Shared Task @ICON-2017 | Sentiment analysis is essential in many real-world applications such as
stance detection, review analysis, recommendation system, and so on. Sentiment
analysis becomes more difficult when the data is noisy and collected from
social media. India is a multilingual country; people use more than one
language to communicate among themselves. Switching between languages is
called code-switching or code-mixing, depending upon the type of mixing. This
paper presents an overview of the shared task on sentiment analysis of
code-mixed data pairs of Hindi-English and Bengali-English collected from
different social media platforms. The paper describes the task, dataset,
evaluation, baselines and participants' systems.
| 2018 | Computation and Language |
Acoustic feature learning using cross-domain articulatory measurements | Previous work has shown that it is possible to improve speech recognition by
learning acoustic features from paired acoustic-articulatory data, for example
by using canonical correlation analysis (CCA) or its deep extensions. One
limitation of this prior work is that the learned feature models are difficult
to port to new datasets or domains, and articulatory data is not available for
most speech corpora. In this work we study the problem of acoustic feature
learning in the setting where we have access to an external, domain-mismatched
dataset of paired speech and articulatory measurements, either with or without
labels. We develop methods for acoustic feature learning in these settings,
based on deep variational CCA and extensions that use both source and target
domain data and labels. Using this approach, we improve phonetic recognition
accuracies on both TIMIT and Wall Street Journal and analyze a number of design
choices.
| 2018 | Computation and Language |
Polyglot Semantic Parsing in APIs | Traditional approaches to semantic parsing (SP) work by training individual
models for each available parallel dataset of text-meaning pairs. In this
paper, we explore the idea of polyglot semantic translation, or learning
semantic parsing models that are trained on multiple datasets and natural
languages. In particular, we focus on translating text to code signature
representations using the software component datasets of Richardson and Kuhn
(2017a,b). The advantage of such models is that they can be used for parsing a
wide variety of input natural languages and output programming languages, or
mixed input languages, using a single unified model. To facilitate modeling of
this type, we develop a novel graph-based decoding framework that achieves
state-of-the-art performance on the above datasets, and apply this method to
two other benchmark SP tasks.
| 2018 | Computation and Language |
Controlling Decoding for More Abstractive Summaries with Copy-Based
Networks | Attention-based neural abstractive summarization systems equipped with copy
mechanisms have shown promising results. Despite this success, it has been
noticed that such a system generates a summary by mostly, if not entirely,
copying over phrases, sentences, and sometimes multiple consecutive sentences
from an input paragraph, effectively performing extractive summarization. In
this paper, we verify this behavior using the latest neural abstractive
summarization system - a pointer-generator network. We propose a simple
baseline method that allows us to control the amount of copying without
retraining. Experiments indicate that the method provides a strong baseline for
abstractive systems looking to obtain high ROUGE scores while minimizing
overlap with the source article, substantially reducing the n-gram overlap with
the original article while keeping within 2 points of the original model's
ROUGE score.
| 2018 | Computation and Language |
Learning to Generate Wikipedia Summaries for Underserved Languages from
Wikidata | While Wikipedia exists in 287 languages, its content is unevenly distributed
among them. In this work, we investigate the generation of open domain
Wikipedia summaries in underserved languages using structured data from
Wikidata. To this end, we propose a neural network architecture equipped with
copy actions that learns to generate single-sentence and comprehensible textual
summaries from Wikidata triples. We demonstrate the effectiveness of the
proposed approach by evaluating it against a set of baselines on two languages
of different natures: Arabic, a morphologically rich language with a larger
vocabulary than English, and Esperanto, a constructed language known for its
easy acquisition.
| 2018 | Computation and Language |
Neural Text Generation: Past, Present and Beyond | This paper presents a systematic survey on recent development of neural text
generation models. Specifically, we start from recurrent neural network
language models with the traditional maximum likelihood estimation training
scheme and point out its shortcoming for text generation. We thus introduce the
recently proposed methods for text generation based on reinforcement learning,
re-parametrization tricks and generative adversarial nets (GAN) techniques. We
compare different properties of these models and the corresponding techniques
to handle their common problems such as gradient vanishing and generation
diversity. Finally, we conduct a benchmarking experiment with different types
of neural text generation models on two well-known datasets and discuss the
empirical results along with the aforementioned model properties.
| 2018 | Computation and Language |
Dynamic Natural Language Processing with Recurrence Quantification
Analysis | Writing and reading are dynamic processes. As an author composes a text, a
sequence of words is produced. This sequence is one that, the author hopes,
causes a revisitation of certain thoughts and ideas in others. These processes
of composition and revisitation by readers are ordered in time. This means that
text itself can be investigated under the lens of dynamical systems. A common
technique for analyzing the behavior of dynamical systems, known as recurrence
quantification analysis (RQA), can be used as a method for analyzing sequential
structure of text. RQA treats text as a sequential measurement, much like a
time series, and can thus be seen as a kind of dynamic natural language
processing (NLP). The extension has several benefits. Because it is part of a
suite of time series analysis tools, many measures can be extracted in one
common framework. Secondly, the measures have a close relationship with some
commonly used measures from natural language processing. Finally, using
recurrence analysis offers an opportunity to expand the analysis of text by developing
theoretical descriptions derived from complex dynamic systems. We showcase an
example analysis on 8,000 texts from the Gutenberg Project, compare it to
well-known NLP approaches, and describe an R package (crqanlp) that can be used
in conjunction with the R library crqa.
| 2018 | Computation and Language |
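Categorical recurrence quantification of the kind described above starts from a recurrence plot over the token sequence; the sketch below computes only the recurrence rate. The cited crqa/crqanlp packages provide this and further measures (determinism, laminarity); this standalone version is for illustration only.

```python
import numpy as np

def recurrence_rate(tokens):
    """Density of off-diagonal points in the recurrence plot of a token sequence."""
    arr = np.array(tokens)
    rp = arr[:, None] == arr[None, :]      # recurrence plot: same word at (i, j)
    n = len(arr)
    off_diag = int(rp.sum()) - n           # exclude the trivial main diagonal
    return off_diag / (n * n - n)

print(recurrence_rate("the cat sat on the mat".split()))  # ~0.067, 'the' recurs once
```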
English-Catalan Neural Machine Translation in the Biomedical Domain
through the cascade approach | This paper describes the methodology followed to build a neural machine
translation system in the biomedical domain for the English-Catalan language
pair. This task can be considered a low-resourced task from the point of view
of the domain and the language pair. To face this task, this paper reports
experiments on a cascade pivot strategy through Spanish for the neural machine
translation using the English-Spanish SCIELO and Spanish-Catalan El Periódico
database. To test the final performance of the system, we have created a new
test data set for English-Catalan in the biomedical domain which is freely
available on request.
| 2018 | Computation and Language |
Why not be Versatile? Applications of the SGNMT Decoder for Machine
Translation | SGNMT is a decoding platform for machine translation which allows pairing
various modern neural models of translation with different kinds of constraints
and symbolic models. In this paper, we describe three use cases in which SGNMT
is currently playing an active role: (1) teaching as SGNMT is being used for
course work and student theses in the MPhil in Machine Learning, Speech and
Language Technology at the University of Cambridge, (2) research as most of the
research work of the Cambridge MT group is based on SGNMT, and (3) technology
transfer as we show how SGNMT is helping to transfer research findings from the
laboratory to industry, e.g. into a product of SDL plc.
| 2018 | Computation and Language |
eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing | Training models for the automatic correction of machine-translated text
usually relies on data consisting of (source, MT, human post-edit) triplets
providing, for each source sentence, examples of translation errors with the
corresponding corrections made by a human post-editor. Ideally, a large amount
of data of this kind should allow the model to learn reliable correction
patterns and effectively apply them at test stage on unseen (source, MT) pairs.
In practice, however, their limited availability calls for solutions that also
integrate in the training process other sources of knowledge. Along this
direction, state-of-the-art results have been recently achieved by systems
that, in addition to a limited amount of available training data, exploit
artificial corpora that approximate elements of the "gold" training instances
with automatic translations. Following this idea, we present eSCAPE, the
largest freely-available Synthetic Corpus for Automatic Post-Editing released
so far. eSCAPE consists of millions of entries in which the MT element of the
training triplets has been obtained by translating the source side of
publicly-available parallel corpora, and using the target side as an artificial
human post-edit. Translations are obtained both with phrase-based and neural
models. For each MT paradigm, eSCAPE contains 7.2 million triplets for
English-German and 3.3 million for English-Italian, resulting in totals of
14.4 and 6.6 million instances respectively. The usefulness of eSCAPE is proved
through experiments in a general-domain scenario, the most challenging one for
automatic post-editing. For both language directions, the models trained on our
artificial data always improve MT quality with statistically significant gains.
The current version of eSCAPE can be freely downloaded from:
http://hltshare.fbk.eu/QT21/eSCAPE.html.
| 2018 | Computation and Language |
Expressivity in TTS from Semantics and Pragmatics | In this paper we present ongoing work to produce an expressive TTS reader
that can be used both in text and dialogue applications. The system called
SPARSAR has been used to read (English) poetry so far but it can now be applied
to any text. The text is fully analyzed both at phonetic and phonological
level, and at syntactic and semantic level. In addition, the system has access
to a restricted list of typical pragmatically marked phrases and expressions
that are used to convey specific discourse function and speech acts and need
specialized intonational contours. The text is transformed into a poem-like
structure, where each line corresponds to a Breath Group that is semantically
and syntactically consistent. Stanzas correspond to paragraph boundaries.
Analogical parameters are related to ToBI theoretical indices but their number
is doubled. In this paper, we concentrate on short stories and fables.
| 2018 | Computation and Language |
Multimodal Sentiment Analysis: Addressing Key Issues and Setting up the
Baselines | We compile baselines, along with dataset split, for multimodal sentiment
analysis. In this paper, we explore three different deep-learning based
architectures for multimodal sentiment classification, each improving upon the
previous. Further, we evaluate these architectures with multiple datasets with
fixed train/test partition. We also discuss some major issues, frequently
ignored in multimodal sentiment analysis research, e.g., role of
speaker-exclusive models, importance of different modalities, and
generalizability. This framework illustrates the different facets of analysis
to be considered while performing multimodal sentiment analysis and, hence,
serves as a new benchmark for future research in this emerging field.
| 2019 | Computation and Language |
UnibucKernel: A kernel-based learning method for complex word
identification | In this paper, we present a kernel-based learning approach for the 2018
Complex Word Identification (CWI) Shared Task. Our approach is based on
combining multiple low-level features, such as character n-grams, with
high-level semantic features that are either automatically learned using word
embeddings or extracted from a lexical knowledge base, namely WordNet. After
feature extraction, we employ a kernel method for the learning phase. The
feature matrix is first transformed into a normalized kernel matrix. For the
binary classification task (simple versus complex), we employ Support Vector
Machines. For the regression task, in which we have to predict the complexity
level of a word (a word is more complex if it is labeled as complex by more
annotators), we employ v-Support Vector Regression. We applied our approach
only on the three English data sets containing documents from Wikipedia,
WikiNews and News domains. Our best result during the competition was the third
place on the English Wikipedia data set. However, in this paper, we also report
better post-competition results.
| 2018 | Computation and Language |
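The kernel normalization step mentioned above can be written as K'(x, y) = K(x, y) / sqrt(K(x, x) K(y, y)); a small sketch follows. The choice of a linear kernel over the feature matrix is an assumption for illustration; the abstract does not state which kernel was used.

```python
import numpy as np

def normalized_kernel(X):
    """Build a linear kernel from feature matrix X and normalize it so that
    every sample has unit self-similarity."""
    K = X @ X.T
    d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
    return K / np.outer(d, d)

X = np.random.rand(5, 20)            # 5 samples, 20 features
K_norm = normalized_kernel(X)
assert np.allclose(np.diag(K_norm), 1.0)
```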
AllenNLP: A Deep Semantic Natural Language Processing Platform | This paper describes AllenNLP, a platform for research on deep learning
methods in natural language understanding. AllenNLP is designed to support
researchers who want to build novel language understanding models quickly and
easily. It is built on top of PyTorch, allowing for dynamic computation graphs,
and provides (1) a flexible data API that handles intelligent batching and
padding, (2) high-level abstractions for common operations in working with
text, and (3) a modular and extensible experiment framework that makes doing
good science easy. It also includes reference implementations of high quality
approaches for both core semantic problems (e.g. semantic role labeling (Palmer
et al., 2005)) and language understanding applications (e.g. machine
comprehension (Rajpurkar et al., 2016)). AllenNLP is an ongoing open-source
effort maintained by engineers and researchers at the Allen Institute for
Artificial Intelligence.
| 2018 | Computation and Language |
InfyNLP at SMM4H Task 2: Stacked Ensemble of Shallow Convolutional
Neural Networks for Identifying Personal Medication Intake from Twitter | This paper describes Infosys's participation in the "2nd Social Media Mining
for Health Applications Shared Task at AMIA, 2017, Task 2". Mining social media
messages for health and drug related information has received significant
interest in pharmacovigilance research. The task targets the development of
automated classification models for identifying tweets containing descriptions
of personal intake of medicines. Towards this objective we train a stacked
ensemble of shallow convolutional neural network (CNN) models on an annotated
dataset provided by the organizers. We use random search for tuning the
hyper-parameters of the CNN and submit an ensemble of best models for the
prediction task. Our system secured first place among 9 teams, with a
micro-averaged F-score of 0.693.
| 2018 | Computation and Language |
Attention on Attention: Architectures for Visual Question Answering
(VQA) | Visual Question Answering (VQA) is an increasingly popular topic in deep
learning research, requiring coordination of natural language processing and
computer vision modules into a single architecture. We build upon the model
which placed first in the VQA Challenge by developing thirteen new attention
mechanisms and introducing a simplified classifier. We performed 300 GPU hours
of extensive hyperparameter and architecture searches and were able to achieve
an evaluation score of 64.78%, outperforming the existing state-of-the-art
single model's validation score of 63.15%.
| 2018 | Computation and Language |
$\rho$-hot Lexicon Embedding-based Two-level LSTM for Sentiment Analysis | Sentiment analysis is a key component in various text mining applications.
Numerous sentiment classification techniques, including conventional and deep
learning-based methods, have been proposed in the literature. In most existing
methods, a high-quality training set is assumed to be given. Nevertheless,
constructing a high-quality training set that consists of highly accurate
labels is challenging in real applications. This difficulty stems from the fact
that text samples usually contain complex sentiment representations, and their
annotation is subjective. We address this challenge in this study by leveraging
a new labeling strategy and utilizing a two-level long short-term memory
network to construct a sentiment classifier. Lexical cues are useful for
sentiment analysis, and they have been utilized in conventional studies. For
example, polar and privative words play important roles in sentiment analysis.
A new encoding strategy, that is, $\rho$-hot encoding, is proposed to alleviate
the drawbacks of one-hot encoding and thus effectively incorporate useful
lexical cues. We compile three Chinese data sets on the basis of our labeling
strategy and the proposed methodology. Experiments on the three data sets
demonstrate that the proposed method outperforms state-of-the-art algorithms.
| 2018 | Computation and Language |
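The abstract above introduces ρ-hot encoding without giving its definition here; the sketch below is one plausible reading in which lexicon-cued dimensions receive a graded weight controlled by ρ instead of a hard 0/1 value. Treat the formula, the ρ value and the lexicon as assumptions, not the paper's actual construction.

```python
import numpy as np

def rho_hot(token, vocab, lexicon_weights, rho=0.5):
    """One-hot-like vector whose active dimension is softened by a lexical cue."""
    vec = np.zeros(len(vocab))
    if token not in vocab:
        return vec
    weight = 1.0
    if token in lexicon_weights:                 # e.g. polarity or privative cue
        weight = 1.0 - rho + rho * lexicon_weights[token]
    vec[vocab[token]] = weight
    return vec

vocab = {"good": 0, "not": 1, "movie": 2}
lexicon = {"good": 0.9, "not": 0.2}              # toy cue strengths in [0, 1]
print(rho_hot("good", vocab, lexicon))           # [0.95 0.   0.  ]
```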
Expeditious Generation of Knowledge Graph Embeddings | Knowledge Graph Embedding methods aim at representing entities and relations
in a knowledge base as points or vectors in a continuous vector space. Several
approaches using embeddings have shown promising results on tasks such as link
prediction, entity recommendation, question answering, and triplet
classification. However, only a few methods can compute low-dimensional
embeddings of very large knowledge bases without needing state-of-the-art
computational resources. In this paper, we propose KG2Vec, a simple and fast
approach to Knowledge Graph Embedding based on the skip-gram model. Instead of
using a predefined scoring function, we learn it relying on Long Short-Term
Memories. We show that our embeddings achieve results comparable with the most
scalable approaches on knowledge graph completion as well as on a new metric.
Yet, KG2Vec can embed large graphs in less time, processing more than 250
million triples in less than 7 hours on common hardware.
| 2018 | Computation and Language |
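The skip-gram step of a KG2Vec-style pipeline can be approximated by serializing triples into short token sequences and training a standard skip-gram model, so that entities and relations share one embedding space. The serialization scheme and the use of the gensim 4.x API are assumptions; the learned LSTM scoring function described above is not reproduced here.

```python
from gensim.models import Word2Vec

triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Germany", "locatedIn", "Europe"),
]
sentences = [[s, p, o] for (s, p, o) in triples]   # one "sentence" per triple

# Skip-gram (sg=1) over the serialized triples; tiny numbers for illustration.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("Germany", topn=2))
```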
Olive Oil is Made of Olives, Baby Oil is Made for Babies: Interpreting
Noun Compounds using Paraphrases in a Neural Model | Automatic interpretation of the relation between the constituents of a noun
compound, e.g. olive oil (source) and baby oil (purpose) is an important task
for many NLP applications. Recent approaches are typically based on either
noun-compound representations or paraphrases. While the former has initially
shown promising results, recent work suggests that the success stems from
memorizing single prototypical words for each relation. We explore a neural
paraphrasing approach that demonstrates superior performance when such
memorization is not possible.
| 2018 | Computation and Language |
An Analysis of Neural Language Modeling at Multiple Scales | Many of the leading approaches in language modeling introduce novel, complex
and specialized architectures. We take existing state-of-the-art word level
language models based on LSTMs and QRNNs and extend them to both larger
vocabularies as well as character-level granularity. When properly tuned, LSTMs
and QRNNs achieve state-of-the-art results on character-level (Penn Treebank,
enwik8) and word-level (WikiText-103) datasets, respectively. Results are
obtained in only 12 hours (WikiText-103) to 2 days (enwik8) using a single
modern GPU.
| 2018 | Computation and Language |
Learning Eligibility in Cancer Clinical Trials using Deep Neural
Networks | Interventional cancer clinical trials are generally too restrictive, and some
patients are often excluded on the basis of comorbidity, past or concomitant
treatments, or the fact that they are over a certain age. The efficacy and
safety of new treatments for patients with these characteristics are,
therefore, not defined. In this work, we built a model to automatically predict
whether short clinical statements were considered inclusion or exclusion
criteria. We used protocols from cancer clinical trials that were available in
public registries from the last 18 years to train word-embeddings, and we
constructed a dataset of 6M short free-texts labeled as eligible or not
eligible. A text classifier was trained using deep neural networks, with
pre-trained word-embeddings as inputs, to predict whether or not short
free-text statements describing clinical information were considered eligible.
We additionally analyzed the semantic reasoning of the word-embedding
representations obtained and were able to identify equivalent treatments for a
type of tumor analogous with the drugs used to treat other tumors. We show that
representation learning using deep neural networks can be successfully
leveraged to extract the medical knowledge from clinical trial protocols for
potentially assisting practitioners when prescribing treatments.
| 2018 | Computation and Language |
Quality expectations of machine translation | Machine Translation (MT) is being deployed for a range of use-cases by
millions of people on a daily basis. There should, therefore, be no doubt as to
the utility of MT. However, not everyone is convinced that MT can be useful,
especially as a productivity enhancer for human translators. In this chapter, I
address this issue, describing how MT is currently deployed, how its output is
evaluated and how this could be enhanced, especially as MT quality itself
improves. Central to these issues is the acceptance that there is no longer a
single 'gold standard' measure of quality, such that the situation in which MT
is deployed needs to be borne in mind, especially with respect to the expected
'shelf-life' of the translation itself.
| 2018 | Computation and Language |
A Feature-Based Model for Nested Named-Entity Recognition at VLSP-2018
NER Evaluation Campaign | In this report, we describe our participating named-entity recognition system
at the VLSP 2018 evaluation campaign. We formalized the task as a sequence labeling
problem using BIO encoding scheme. We applied a feature-based model which
combines word, word-shape features, Brown-cluster-based features, and
word-embedding-based features. We compare several methods to deal with nested
entities in the dataset. We showed that combining tags of entities at all
levels for training a sequence labeling model (joint-tag model) improved the
accuracy of nested named-entity recognition.
| 2018 | Computation and Language |
Word sense induction using word embeddings and community detection in
complex networks | Word Sense Induction (WSI) is the ability to automatically induce word senses
from corpora. The WSI task was first proposed to overcome the limitations of
the manually annotated corpora that are required by word sense disambiguation
systems. Even though several works have been proposed to induce word senses,
existing systems are still very limited in the sense that they make use of
structured, domain-specific knowledge sources. In this paper, we devise a
method that leverages recent findings in word embeddings research to generate
context embeddings, which are embeddings containing information about the
semantic context of a word. In order to induce senses, we modeled the set of
ambiguous words as a complex network. In the generated network, two instances
(nodes) are connected if the respective context embeddings are similar. Upon
using well-established community detection methods to cluster the obtained
context embeddings, we found that the proposed method yields excellent
performance for the WSI task. Our method outperformed competing algorithms and
baselines, in a completely unsupervised manner and without the need of any
additional structured knowledge source.
| 2019 | Computation and Language |
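The graph construction and clustering described above can be sketched as follows: each occurrence of an ambiguous word becomes a node, nodes are linked when their context embeddings are similar enough, and communities are read off as induced senses. The similarity threshold and the particular community-detection algorithm (greedy modularity) are illustrative assumptions; the abstract does not commit to these exact choices.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def induce_senses(context_embeddings, threshold=0.7):
    """Cluster context embeddings of one ambiguous word into sense communities."""
    E = np.asarray(context_embeddings, dtype=float)
    E = E / np.linalg.norm(E, axis=1, keepdims=True)   # unit-normalize rows
    sims = E @ E.T                                     # cosine similarity matrix
    g = nx.Graph()
    g.add_nodes_from(range(len(E)))
    for i in range(len(E)):
        for j in range(i + 1, len(E)):
            if sims[i, j] >= threshold:
                g.add_edge(i, j, weight=float(sims[i, j]))
    return [sorted(c) for c in greedy_modularity_communities(g)]
```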
Contextual Salience for Fast and Accurate Sentence Vectors | Unsupervised vector representations of sentences or documents are a major
building block for many language tasks such as sentiment classification.
However, current methods are uninterpretable and slow or require large training
datasets. Recent word vector-based proposals implicitly assume that distances
in a word embedding space are equally important, regardless of context. We
introduce contextual salience (CoSal), a measure of word importance that uses
the distribution of context vectors to normalize distances and weights. CoSal
relies on the insight that unusual word vectors disproportionately affect
phrase vectors. A bag-of-words model with CoSal-based weights produces accurate
unsupervised sentence or document representations for classification, requiring
little computation to evaluate and only a single covariance calculation to
"train." CoSal supports small contexts, out-of-context words, and outperforms
SkipThought on most benchmarks, beats tf-idf on all benchmarks, and is
competitive with the unsupervised state-of-the-art.
| 2,020 | Computation and Language |
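As a rough illustration of the weighting idea in this entry, one can score each word by how atypical its vector is under a context distribution summarized by a single mean and covariance, then build a weighted bag-of-words vector. This is an assumed reading of the abstract, not the published CoSal code.

```python
# Assumed sketch: one covariance "training" step, Mahalanobis-style word weights.
import numpy as np

def fit_context_stats(context_vectors):
    """Single mean/covariance estimate over context word vectors."""
    mu = context_vectors.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(context_vectors, rowvar=False))
    return mu, cov_inv

def salience_weight(vec, mu, cov_inv):
    """Unusual word vectors (far from the context mean) get larger weights."""
    d = vec - mu
    return float(np.sqrt(max(float(d @ cov_inv @ d), 0.0)))

def sentence_vector(word_vecs, mu, cov_inv):
    """Weighted bag-of-words sentence embedding."""
    word_vecs = np.asarray(word_vecs)
    weights = np.array([salience_weight(v, mu, cov_inv) for v in word_vecs])
    weights /= weights.sum() + 1e-9
    return (weights[:, None] * word_vecs).sum(axis=0)
```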
MultiBooked: A Corpus of Basque and Catalan Hotel Reviews Annotated for
Aspect-level Sentiment Classification | While sentiment analysis has become an established field in the NLP
community, research into languages other than English has been hindered by the
lack of resources. Although much research in multi-lingual and cross-lingual
sentiment analysis has focused on unsupervised or semi-supervised approaches,
these still require a large number of resources and do not reach the
performance of supervised approaches. With this in mind, we introduce two
datasets for supervised aspect-level sentiment analysis in Basque and Catalan,
both of which are under-resourced languages. We provide high-quality
annotations and benchmarks with the hope that they will be useful to the
growing community of researchers working on these languages.
| 2,018 | Computation and Language |
Studio Ousia's Quiz Bowl Question Answering System | In this chapter, we describe our question answering system, which was the
winning system at the Human-Computer Question Answering (HCQA) Competition at
the Thirty-first Annual Conference on Neural Information Processing Systems
(NIPS). The competition requires participants to address a factoid question
answering task referred to as quiz bowl. To address this task, we use two novel
neural network models and combine these models with conventional information
retrieval models using a supervised machine learning model. Our system achieved
the best performance among the systems submitted in the competition and won a
match against six top human quiz experts by a wide margin.
| 2,018 | Computation and Language |
Multilingual bottleneck features for subword modeling in zero-resource
languages | How can we effectively develop speech technology for languages where no
transcribed data is available? Many existing approaches use no annotated
resources at all, yet it makes sense to leverage information from large
annotated corpora in other languages, for example in the form of multilingual
bottleneck features (BNFs) obtained from a supervised speech recognition
system. In this work, we evaluate the benefits of BNFs for subword modeling
(feature extraction) in six unseen languages on a word discrimination task.
First we establish a strong unsupervised baseline by combining two existing
methods: vocal tract length normalisation (VTLN) and the correspondence
autoencoder (cAE). We then show that BNFs trained on a single language already
beat this baseline; including up to 10 languages results in additional
improvements which cannot be matched by just adding more data from a single
language. Finally, we show that the cAE can improve further on the BNFs if
high-quality same-word pairs are available.
| 2,018 | Computation and Language |
On the difficulty of a distributional semantics of spoken language | In the domain of unsupervised learning most work on speech has focused on
discovering low-level constructs such as phoneme inventories or word-like
units. In contrast, for written language there is a large body of work
on unsupervised induction of semantic representations of words, whole sentences
and longer texts. In this study we examine the challenges of adapting these
approaches from written to spoken language. We conjecture that unsupervised
learning of the semantics of spoken language becomes feasible if we abstract
from the surface variability. We simulate this setting with a dataset of
utterances spoken by a realistic but uniform synthetic voice. We evaluate two
simple unsupervised models which, to varying degrees of success, learn semantic
representations of speech fragments. Finally we present inconclusive results on
human speech, and discuss the challenges inherent in learning distributional
semantic representations on unrestricted natural spoken language.
| 2,018 | Computation and Language |
Stance Detection on Tweets: An SVM-based Approach | Stance detection is a subproblem of sentiment analysis where the stance of
the author of a piece of natural language text for a particular target (either
explicitly stated in the text or not) is explored. The stance output is usually
given as Favor, Against, or Neither. In this paper, we target stance
detection on sports-related tweets and present the performance results of our
SVM-based stance classifiers on such tweets. First, we describe three versions
of our proprietary tweet data set annotated with stance information, all of
which are made publicly available for research purposes. Next, we evaluate SVM
classifiers using different feature sets for stance detection on this data set.
The employed features are based on unigrams, bigrams, hashtags, external links,
emoticons, and lastly, named entities. The results indicate that joint use of
the features based on unigrams, hashtags, and named entities by SVM classifiers
is a plausible approach to the stance detection problem on sports-related tweets.
| 2,018 | Computation and Language |
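An illustrative baseline in the same spirit, not the authors' exact feature set or data: a linear SVM over word unigrams plus hashtag tokens with scikit-learn. The example tweets and labels are placeholders.

```python
# Hedged sketch of an SVM stance classifier over unigram + hashtag features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion, Pipeline
from sklearn.svm import LinearSVC

word_unigrams = TfidfVectorizer(lowercase=True, ngram_range=(1, 1))
hashtags = TfidfVectorizer(token_pattern=r"#\w+", lowercase=True)

stance_clf = Pipeline([
    ("features", FeatureUnion([("unigrams", word_unigrams),
                               ("hashtags", hashtags)])),
    ("svm", LinearSVC()),
])

# Placeholder data; real tweets would carry Favor / Against / Neither labels.
tweets = ["#GoTeam great win tonight!", "Refs ruined the match #robbed"]
labels = ["Favor", "Against"]
stance_clf.fit(tweets, labels)
print(stance_clf.predict(["what a game #GoTeam"]))
```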
Speech2Vec: A Sequence-to-Sequence Framework for Learning Word
Embeddings from Speech | In this paper, we propose a novel deep neural network architecture,
Speech2Vec, for learning fixed-length vector representations of audio segments
excised from a speech corpus, where the vectors contain semantic information
pertaining to the underlying spoken words, and are close to other vectors in
the embedding space if their corresponding underlying spoken words are
semantically similar. The proposed model can be viewed as a speech version of
Word2Vec. Its design is based on an RNN Encoder-Decoder framework, and borrows
the methodology of skipgrams or continuous bag-of-words for training. Learning
word embeddings directly from speech enables Speech2Vec to make use of the
semantic information carried by speech that does not exist in plain text. The
learned word embeddings are evaluated and analyzed on 13 widely used word
similarity benchmarks, and outperform word embeddings learned by Word2Vec from
the transcriptions.
| 2,018 | Computation and Language |
Automated Evaluation of Out-of-Context Errors | We present a new approach to evaluate computational models for the task of
text understanding by means of out-of-context error detection. Through the
novel design of our automated modification process, existing large-scale data
sources can be adopted for a vast number of text understanding tasks. The data
is thereby altered on a semantic level, allowing models to be tested against a
challenging set of modified text passages that require comprehension of a broader
narrative discourse. Our newly introduced task targets actual real-world
problems of transcription and translation systems by inserting authentic
out-of-context errors. The automated modification process is applied to the
2016 TEDTalk corpus. Entirely automating the process allows the adoption of
complete datasets at low cost, facilitating supervised learning procedures and
deeper networks to be trained and tested. To evaluate the quality of the
modification algorithm a language model and a supervised binary classification
model are trained and tested on the altered dataset. A human baseline
evaluation is examined to compare the results with human performance. The
outcome of the evaluation task indicates how difficult it is for both
machine-learning algorithms and humans to detect semantic errors, showing that
the errors cannot be identified when the context is limited to a single sentence.
| 2,018 | Computation and Language |
Leveraging translations for speech transcription in low-resource
settings | Recently proposed data collection frameworks for endangered language
documentation aim not only to collect speech in the language of interest, but
also to collect translations into a high-resource language that will render the
collected resource interpretable. We focus on this scenario and explore whether
we can improve transcription quality under these extremely low-resource
settings with the assistance of text translations. We present a neural
multi-source model and evaluate several variations of it on three low-resource
datasets. We find that our multi-source model with shared attention outperforms
the baselines, reducing transcription character error rate by up to 12.3%.
| 2,018 | Computation and Language |
WikiRank: Improving Keyphrase Extraction Based on Background Knowledge | Keyphrases are an efficient representation of the main ideas of documents. While
background knowledge can provide valuable information about documents, it is
rarely incorporated in keyphrase extraction methods. In this paper, we propose
WikiRank, an unsupervised method for keyphrase extraction based on the
background knowledge from Wikipedia. Firstly, we construct a semantic graph for
the document. Then we transform the keyphrase extraction problem into an
optimization problem on the graph. Finally, we get the optimal keyphrase set to
be the output. Our method obtains improvements over other state-of-the-art models
by more than 2% in F1-score.
| 2,018 | Computation and Language |
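One way to read the optimization step in this entry is as a coverage problem: pick the keyphrases that jointly cover the most important concepts in the document's semantic graph. The greedy routine below is a simplified, hypothetical stand-in for such an objective, not the exact WikiRank algorithm.

```python
# Greedy weighted-coverage sketch (illustrative only).
def greedy_keyphrases(candidates, concept_weight, k=5):
    """candidates: dict phrase -> set of concept ids it is linked to.
    concept_weight: dict concept id -> importance (e.g. frequency in the doc)."""
    chosen, covered = [], set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for phrase, concepts in candidates.items():
            if phrase in chosen:
                continue
            gain = sum(concept_weight[c] for c in concepts - covered)
            if gain > best_gain:
                best, best_gain = phrase, gain
        if best is None:
            break
        chosen.append(best)
        covered |= candidates[best]
    return chosen
```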
Style Tokens: Unsupervised Style Modeling, Control and Transfer in
End-to-End Speech Synthesis | In this work, we propose "global style tokens" (GSTs), a bank of embeddings
that are jointly trained within Tacotron, a state-of-the-art end-to-end speech
synthesis system. The embeddings are trained with no explicit labels, yet learn
to model a large range of acoustic expressiveness. GSTs lead to a rich set of
significant results. The soft interpretable "labels" they generate can be used
to control synthesis in novel ways, such as varying speed and speaking style -
independently of the text content. They can also be used for style transfer,
replicating the speaking style of a single audio clip across an entire
long-form text corpus. When trained on noisy, unlabeled found data, GSTs learn
to factorize noise and speaker identity, providing a path towards highly
scalable but robust speech synthesis.
| 2,018 | Computation and Language |
Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with
Tacotron | We present an extension to the Tacotron speech synthesis architecture that
learns a latent embedding space of prosody, derived from a reference acoustic
representation containing the desired prosody. We show that conditioning
Tacotron on this learned embedding space results in synthesized audio that
matches the prosody of the reference signal with fine time detail even when the
reference and synthesis speakers are different. Additionally, we show that a
reference prosody embedding can be used to synthesize text that is different
from that of the reference utterance. We define several quantitative and
subjective metrics for evaluating prosody transfer, and report results with
accompanying audio samples from single-speaker and 44-speaker Tacotron models
on a prosody transfer task.
| 2,018 | Computation and Language |
Near-lossless Binarization of Word Embeddings | Word embeddings are commonly used as a starting point in many NLP models to
achieve state-of-the-art performances. However, with a large vocabulary and
many dimensions, these floating-point representations are expensive both in
terms of memory and calculations which makes them unsuitable for use on
low-resource devices. The method proposed in this paper transforms real-valued
embeddings into binary embeddings while preserving semantic information,
requiring only 128 or 256 bits for each vector. This leads to a small memory
footprint and fast vector operations. The model is based on an autoencoder
architecture, which also allows reconstructing the original vectors from the binary
ones. Experimental results on semantic similarity, text classification and
sentiment analysis tasks show that the binarization of word embeddings only
leads to a loss of ~2% in accuracy while vector size is reduced by 97%.
Furthermore, a top-k benchmark demonstrates that using these binary vectors is
30 times faster than using real-valued vectors.
| 2,020 | Computation and Language |
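A minimal PyTorch sketch of autoencoder-style binarization using a straight-through estimator; the layer sizes, training loop, and choice of estimator are illustrative assumptions rather than the paper's exact model.

```python
# Hedged sketch: encode real vectors to binary codes, decode back, train on MSE.
import torch
import torch.nn as nn

class BinarizingAutoencoder(nn.Module):
    def __init__(self, dim=300, bits=256):
        super().__init__()
        self.encoder = nn.Linear(dim, bits)
        self.decoder = nn.Linear(bits, dim)

    def forward(self, x):
        h = torch.tanh(self.encoder(x))
        b = torch.sign(h)
        b = h + (b - h).detach()          # straight-through gradient trick
        return self.decoder(b), (b > 0)   # reconstruction, binary codes

model = BinarizingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
vectors = torch.randn(1024, 300)          # stand-in for real word embeddings
for _ in range(10):
    recon, codes = model(vectors)
    loss = nn.functional.mse_loss(recon, vectors)
    opt.zero_grad()
    loss.backward()
    opt.step()
```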
Multi-range Reasoning for Machine Comprehension | We propose MRU (Multi-Range Reasoning Units), a new fast compositional
encoder for machine comprehension (MC). Our proposed MRU encoders are
characterized by multi-ranged gating, executing a series of parameterized
contract-and-expand layers for learning gating vectors that benefit from long
and short-term dependencies. The aims of our approach are as follows: (1)
learning representations that are concurrently aware of long and short-term
context, (2) modeling relationships between intra-document blocks and (3) fast
and efficient sequence encoding. We show that our proposed encoder demonstrates
promising results both as a standalone encoder and as a complementary
building block. We conduct extensive experiments on three challenging MC
datasets, namely RACE, SearchQA and NarrativeQA, achieving highly competitive
performance on all. On the RACE benchmark, our model outperforms DFN (Dynamic
Fusion Networks) by 1.5%-6% without using any recurrent or convolution layers.
Similarly, we achieve competitive performance relative to AMANDA on the
SearchQA benchmark and BiDAF on the NarrativeQA benchmark without using any
LSTM/GRU layers. Finally, incorporating MRU encoders with standard BiLSTM
architectures further improves performance, achieving state-of-the-art results.
| 2,018 | Computation and Language |
Simple Large-scale Relation Extraction from Unstructured Text | Knowledge-based question answering relies on the availability of facts, the
majority of which cannot be found in structured sources (e.g. Wikipedia
info-boxes, Wikidata). One of the major components of extracting facts from
unstructured text is Relation Extraction (RE). In this paper we propose a novel
method for creating distant (weak) supervision labels for training a
large-scale RE system. We also provide new evidence about the effectiveness of
neural network approaches by decoupling the model architecture from the feature
design of a state-of-the-art neural network system. Surprisingly, a much
simpler classifier trained on similar features performs on par with the highly
complex neural network system (with a 75x reduction in training time),
suggesting that the features are a bigger contributor to the final performance.
| 2,018 | Computation and Language |
Machine Learning and Applied Linguistics | This entry introduces the topic of machine learning and provides an overview
of its relevance for applied linguistics and language learning. The discussion
will focus on giving an introduction to the methods and applications of machine
learning in applied linguistics, and will provide references for further study.
| 2,018 | Computation and Language |
Low-Resource Speech-to-Text Translation | Speech-to-text translation has many potential applications for low-resource
languages, but the typical approach of cascading speech recognition with
machine translation is often impossible, since the transcripts needed to train
a speech recognizer are usually not available for low-resource languages.
Recent work has found that neural encoder-decoder models can learn to directly
translate foreign speech in high-resource scenarios, without the need for
intermediate transcription. We investigate whether this approach also works in
settings where both data and computation are limited. To make the approach
efficient, we make several architectural changes, including a change from
character-level to word-level decoding. We find that this choice yields crucial
speed improvements that allow us to train with fewer computational resources,
yet the model still performs well on frequent words. We explore models trained on between
20 and 160 hours of data, and find that although models trained on less data
have considerably lower BLEU scores, they can still predict words with
relatively high precision and recall---around 50% for a model trained on 50
hours of data, versus around 60% for the full 160 hour model. Thus, they may
still be useful for some low-resource scenarios.
| 2,018 | Computation and Language |
Scene Graph Parsing as Dependency Parsing | In this paper, we study the problem of parsing structured knowledge graphs
from textual descriptions. In particular, we consider the scene graph
representation that considers objects together with their attributes and
relations: this representation has proven useful across a variety of
vision and language applications. We begin by introducing an alternative but
equivalent edge-centric view of scene graphs that connects to dependency parses.
Together with a careful redesign of label and action space, we combine the
two-stage pipeline used in prior work (generic dependency parsing followed by
simple post-processing) into one, enabling end-to-end training. The scene
graphs generated by our learned neural dependency parser achieve an F-score
similarity of 49.67% to ground truth graphs on our evaluation set, surpassing
best previous approaches by 5%. We further demonstrate the effectiveness of our
learned parser on image retrieval applications.
| 2,018 | Computation and Language |
Pay More Attention - Neural Architectures for Question-Answering | Machine comprehension is a representative task of natural language
understanding. Typically, we are given a context paragraph and the objective is
to answer a question that depends on the context. Such a problem requires
modeling the complex interactions between the context paragraph and the question.
Lately, attention mechanisms have been found to be quite successful at these
tasks and in particular, attention mechanisms with attention flow from both
context-to-question and question-to-context have been proven to be quite
useful. In this paper, we study two state-of-the-art attention mechanisms
called Bi-Directional Attention Flow (BiDAF) and Dynamic Co-Attention Network
(DCN) and propose a hybrid scheme combining these two architectures that gives
better overall performance. Moreover, we also suggest a new simpler attention
mechanism that we call Double Cross Attention (DCA) that provides better
results compared to both BiDAF and Co-Attention mechanisms while providing
similar performance as the hybrid scheme. The objective of our paper is to
focus particularly on the attention layer and to suggest improvements on that.
Our experimental evaluations show that both our proposed models achieve
superior results on the Stanford Question Answering Dataset (SQuAD) compared to
BiDAF and DCN attention mechanisms.
| 2,018 | Computation and Language |
The Geometry of Culture: Analyzing Meaning through Word Embeddings | We demonstrate the utility of a new methodological tool, neural-network word
embedding models, for large-scale text analysis, revealing how these models
produce richer insights into cultural associations and categories than possible
with prior methods. Word embeddings represent semantic relations between words
as geometric relationships between vectors in a high-dimensional space,
operationalizing a relational model of meaning consistent with contemporary
theories of identity and culture. We show that dimensions induced by word
differences (e.g. man - woman, rich - poor, black - white, liberal -
conservative) in these vector spaces closely correspond to dimensions of
cultural meaning, and the projection of words onto these dimensions reflects
widely shared cultural connotations when compared to surveyed responses and
labeled historical data. We pilot a method for testing the stability of these
associations, then demonstrate applications of word embeddings for
macro-cultural investigation with a longitudinal analysis of the coevolution of
gender and class associations in the United States over the 20th century and a
comparative analysis of historic distinctions between markers of gender and
class in the U.S. and Britain. We argue that the success of these
high-dimensional models motivates a move towards "high-dimensional theorizing"
of meanings, identities and cultural processes.
| 2,019 | Computation and Language |
Text Segmentation as a Supervised Learning Task | Text segmentation, the task of dividing a document into contiguous segments
based on its semantic structure, is a longstanding challenge in language
understanding. Previous work on text segmentation focused on unsupervised
methods such as clustering or graph search, due to the paucity of labeled data.
In this work, we formulate text segmentation as a supervised learning problem,
and present a large new dataset for text segmentation that is automatically
extracted and labeled from Wikipedia. Moreover, we develop a segmentation model
based on this dataset and show that it generalizes well to unseen natural text.
| 2,018 | Computation and Language |
StaQC: A Systematically Mined Question-Code Dataset from Stack Overflow | Stack Overflow (SO) has been a great source of natural language questions and
their code solutions (i.e., question-code pairs), which are critical for many
tasks including code retrieval and annotation. In most existing research,
question-code pairs were collected heuristically and tend to have low quality.
In this paper, we investigate a new problem of systematically mining
question-code pairs from Stack Overflow (in contrast to heuristically
collecting them). It is formulated as predicting whether or not a code snippet
is a standalone solution to a question. We propose a novel Bi-View Hierarchical
Neural Network which can capture both the programming content and the textual
context of a code snippet (i.e., two views) to make a prediction. On two
manually annotated datasets in Python and SQL domain, our framework
substantially outperforms heuristic methods with at least 15% higher F1 and
accuracy. Furthermore, we present StaQC (Stack Overflow Question-Code pairs),
the largest dataset to date of ~148K Python and ~120K SQL question-code pairs,
automatically mined from SO using our framework. Under various case studies, we
demonstrate that StaQC can greatly help develop data-hungry models for
associating natural language with programming language.
| 2,018 | Computation and Language |
Aggression-annotated Corpus of Hindi-English Code-mixed Data | As the interaction over the web has increased, incidents of aggression and
related events like trolling, cyberbullying, flaming, hate speech, etc. too
have increased manifold across the globe. While most of these behaviours, like
bullying or hate speech, predate the Internet, the reach and extent of the
Internet have given them unprecedented power and influence to affect the
lives of billions of people. It is therefore of the utmost importance that
preventive measures be taken to safeguard the people using the web, so that
the web remains a viable medium of communication and connection. In this
paper, we discuss the development of an
aggression tagset and an annotated corpus of Hindi-English code-mixed data from
two of the most popular social networking and social media platforms in India,
Twitter and Facebook. The corpus is annotated using a hierarchical tagset of 3
top-level tags and 10 level 2 tags. The final dataset contains approximately
18k tweets and 21k Facebook comments and is being released for further research
in the field.
| 2,018 | Computation and Language |
Automatic Identification of Closely-related Indian Languages: Resources
and Experiments | In this paper, we discuss an attempt to develop an automatic language
identification system for 5 closely-related Indo-Aryan languages of India,
Awadhi, Bhojpuri, Braj, Hindi and Magahi. We have compiled a comparable corpora
of varying length for these languages from various resources. We discuss the
method of creation of these corpora in detail. Using these corpora, a language
identification system was developed, which currently gives state-of-the-art
accuracy of 96.48\%. We also used these corpora to study the similarity between
the 5 languages at the lexical level, which is the first data-based study of
the extent of closeness of these languages.
| 2,018 | Computation and Language |
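A generic character n-gram baseline for this kind of closely-related language identification task; the features and classifier are illustrative assumptions and not necessarily those of the system above.

```python
# Hedged sketch of a char n-gram language-identification baseline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

lang_id = Pipeline([
    ("char_ngrams", TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4))),
    ("clf", LogisticRegression(max_iter=1000)),
])
# lang_id.fit(sentences, labels) with labels drawn from
# {"Awadhi", "Bhojpuri", "Braj", "Hindi", "Magahi"} would train the baseline.
```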
Self-Attentional Acoustic Models | Self-attention is a method of encoding sequences of vectors by relating these
vectors to each other based on pairwise similarities. These models have
recently shown promising results for modeling discrete sequences, but they are
non-trivial to apply to acoustic modeling due to computational and modeling
issues. In this paper, we apply self-attention to acoustic modeling, proposing
several improvements to mitigate these issues: First, self-attention memory
grows quadratically in the sequence length, which we address through a
downsampling technique. Second, we find that previous approaches to incorporate
position information into the model are unsuitable and explore other
representations and hybrid models to this end. Third, to stress the importance
of local context in the acoustic signal, we propose a Gaussian biasing approach
that allows explicit control over the context range. Experiments find that our
model approaches a strong baseline based on LSTMs with network-in-network
connections while being much faster to compute. Besides speed, we find that
interpretability is a strength of self-attentional acoustic models, and
demonstrate that self-attention heads learn a linguistically plausible division
of labor.
| 2,018 | Computation and Language |
Unsupervised Separation of Transliterable and Native Words for Malayalam | Differentiating intrinsic language words from transliterable words is a key
step aiding text processing tasks involving different natural languages. We
consider the problem of unsupervised separation of transliterable words from
native words for text in Malayalam language. Outlining a key observation on the
diversity of characters beyond the word stem, we develop an optimization method
to score words based on their nativeness. Our method relies on the usage of
probability distributions over character n-grams that are refined in step with
the nativeness scorings in an iterative optimization formulation. Using an
empirical evaluation, we illustrate that our method, DTIM, provides significant
improvements in nativeness scoring for Malayalam, establishing DTIM as the
preferred method for the task.
| 2,018 | Computation and Language |
CliCR: A Dataset of Clinical Case Reports for Machine Reading
Comprehension | We present a new dataset for machine comprehension in the medical domain. Our
dataset uses clinical case reports with around 100,000 gap-filling queries
about these cases. We apply several baselines and state-of-the-art neural
readers to the dataset, and observe a considerable gap in performance (20% F1)
between the best human and machine readers. We analyze the skills required for
successful answering and show how reader performance varies depending on the
applicable skills. We find that inferences using domain knowledge and object
tracking are the most frequently required skills, and that recognizing omitted
information and spatio-temporal reasoning are the most difficult for the
machines.
| 2,018 | Computation and Language |
English verb regularization in books and tweets | The English language has evolved dramatically throughout its lifespan, to the
extent that a modern speaker of Old English would be incomprehensible without
translation. One concrete indicator of this process is the movement from
irregular to regular (-ed) forms for the past tense of verbs. In this study we
quantify the extent of verb regularization using two vastly disparate datasets:
(1) Six years of published books scanned by Google (2003--2008), and (2) A
decade of social media messages posted to Twitter (2008--2017). We find that
the extent of verb regularization is greater on Twitter, taken as a whole, than
in English Fiction books. Regularization is also greater for tweets geotagged
in the United States relative to American English books, but the opposite is
true for tweets geotagged in the United Kingdom relative to British English
books. We also find interesting regional variations in regularization across
counties in the United States. However, once differences in population are
accounted for, we do not identify strong correlations with socio-demographic
variables such as education or income.
| 2,018 | Computation and Language |
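For concreteness, the quantity compared across corpora can be thought of as the fraction of regular (-ed) past-tense tokens among all past-tense tokens of a verb; the counts below are invented purely for illustration.

```python
# Toy illustration of a verb's regularization fraction; counts are made up.
counts = {"burned": 120, "burnt": 80}     # regular vs. irregular past forms
regular_fraction = counts["burned"] / (counts["burned"] + counts["burnt"])
print(f"regularization fraction: {regular_fraction:.2f}")   # 0.60
```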
Heat Kernel analysis of Syntactic Structures | We consider two different data sets of syntactic parameters and we discuss
how to detect relations between parameters through a heat kernel method
developed by Belkin and Niyogi, which produces low-dimensional representations of
the data, based on Laplace eigenfunctions, that preserve neighborhood
information. We analyze the different connectivity and clustering structures
that arise in the two datasets, and the regions of maximal variance in the
two-parameter space of the Belkin-Niyogi construction, which identify
preferable choices of independent variables. We compute clustering coefficients
and their variance.
| 2,018 | Computation and Language |
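A compact sketch of the Belkin-Niyogi style construction this entry refers to, assuming each language is represented as a vector of syntactic parameters: heat-kernel affinities, a graph Laplacian, and the low eigenvectors as coordinates. The kernel scale and the toy data are placeholders.

```python
# Hedged sketch: heat-kernel affinities + Laplacian eigenmaps.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import pdist, squareform

def laplacian_eigenmaps(X, t=1.0, n_components=2):
    """X: (languages x syntactic parameters) matrix."""
    d2 = squareform(pdist(X, metric="sqeuclidean"))
    W = np.exp(-d2 / t)                   # heat kernel weights
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                             # unnormalised graph Laplacian
    vals, vecs = eigh(L, D)               # generalised problem L v = lambda D v
    # Drop the trivial constant eigenvector (eigenvalue ~ 0).
    return vecs[:, 1:n_components + 1]

X = np.random.randint(0, 2, size=(20, 60)).astype(float)   # toy parameter data
coords = laplacian_eigenmaps(X, t=2.0)
```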
Mittens: An Extension of GloVe for Learning Domain-Specialized
Representations | We present a simple extension of the GloVe representation learning model that
begins with general-purpose representations and updates them based on data from
a specialized domain. We show that the resulting representations can lead to
faster learning and better results on a variety of tasks.
| 2,018 | Computation and Language |
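One reading of this kind of extension is a GloVe-style objective plus a penalty that keeps updated vectors close to their general-purpose counterparts; the loss sketch below encodes that reading with assumed hyper-parameters and is not the released implementation.

```python
# Hedged loss sketch: GloVe weighting plus an anchoring penalty to pretrained vectors.
import numpy as np

def domain_adapted_glove_loss(W, W_ctx, b, b_ctx, X, pretrained, mu=0.1,
                              x_max=100.0, alpha=0.75):
    """W, W_ctx: word/context embeddings; X: co-occurrence matrix;
    pretrained: dict mapping word index -> general-purpose vector."""
    i, j = np.nonzero(X)
    f = np.minimum((X[i, j] / x_max) ** alpha, 1.0)        # GloVe weighting
    diff = (W[i] * W_ctx[j]).sum(axis=1) + b[i] + b_ctx[j] - np.log(X[i, j])
    glove_term = float((f * diff ** 2).sum())
    anchor_term = sum(np.sum((W[k] - r) ** 2) for k, r in pretrained.items())
    return glove_term + mu * anchor_term
```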
Multi-Modal Data Augmentation for End-to-End ASR | We present a new end-to-end architecture for automatic speech recognition
(ASR) that can be trained using \emph{symbolic} input in addition to the
traditional acoustic input. This architecture utilizes two separate encoders:
one for acoustic input and another for symbolic input, both sharing the
attention and decoder parameters. We call this architecture a multi-modal data
augmentation network (MMDA), as it can support multi-modal (acoustic and
symbolic) input and enables seamless mixing of large text datasets with
significantly smaller transcribed speech corpora during training. We study
different ways of transforming large text corpora into a symbolic form suitable
for training our MMDA network. Our best MMDA setup obtains small improvements
on character error rate (CER), and as much as 7-10\% relative word error rate
(WER) improvement over a baseline both with and without an external language
model.
| 2,018 | Computation and Language |
Deep Communicating Agents for Abstractive Summarization | We present deep communicating agents in an encoder-decoder architecture to
address the challenges of representing a long document for abstractive
summarization. With deep communicating agents, the task of encoding a long text
is divided across multiple collaborating agents, each in charge of a subsection
of the input text. These encoders are connected to a single decoder, trained
end-to-end using reinforcement learning to generate a focused and coherent
summary. Empirical results demonstrate that multiple communicating encoders
lead to a higher quality summary compared to several strong baselines,
including those based on a single encoder or multiple non-communicating
encoders.
| 2,018 | Computation and Language |
Topic Modeling Based Multi-modal Depression Detection | Major depressive disorder is a common mental disorder that affects almost 7%
of the adult U.S. population. The 2017 Audio/Visual Emotion Challenge (AVEC)
asks participants to build a model to predict depression levels based on the
audio, video, and text of an interview ranging between 7-33 minutes. Since
averaging features over the entire interview will lose most temporal
information, how to discover, capture, and preserve useful temporal details for
such a long interview are significant challenges. Therefore, we propose a novel
topic modeling based approach to perform context-aware analysis of the
recording. Our experiments show that the proposed approach outperforms
context-unaware methods and the challenge baselines for all metrics.
| 2,018 | Computation and Language |
Handling Verb Phrase Anaphora with Dependent Types and Events | This paper studies how dependent typed events can be used to treat verb
phrase anaphora. We introduce a framework that extends Dependent Type Semantics
(DTS) with a new atomic type for neo-Davidsonian events and an extended
@-operator that can return new events that share properties of events
referenced by verb phrase anaphora. The proposed framework, along with
illustrative examples of its use, are presented after a brief overview of the
necessary background and of the major challenges posed by verb phrase anaphora.
| 2,018 | Computation and Language |
Machine Speech Chain with One-shot Speaker Adaptation | In previous work, we developed a closed-loop speech chain model based on deep
learning, in which the architecture enabled the automatic speech recognition
(ASR) and text-to-speech synthesis (TTS) components to mutually improve their
performance. This was accomplished by the two parts teaching each other using
both labeled and unlabeled data. This approach could significantly improve
model performance within a single-speaker speech dataset, but only a slight
increase could be gained in multi-speaker tasks. Furthermore, the model is
still unable to handle unseen speakers. In this paper, we present a new speech
chain mechanism by integrating a speaker recognition model inside the loop. We
also propose extending the capability of TTS to handle unseen speakers by
implementing one-shot speaker adaptation. This enables TTS to mimic voice
characteristics from one speaker to another with only a one-shot speaker
sample, even from a text without any speaker information. In the speech chain
loop mechanism, ASR also benefits from the ability to further learn an
arbitrary speaker's characteristics from the generated speech waveform,
resulting in a significant improvement in the recognition rate.
| 2,018 | Computation and Language |
Neural Network Architecture for Credibility Assessment of Textual Claims | Text articles with false claims, especially news, have recently become
aggravating for Internet users. These articles are in wide circulation and
readers face difficulty discerning fact from fiction. Previous work on
credibility assessment has focused on factual analysis and linguistic features.
The task's main challenge is the distinction between the features of true and
false articles. In this paper, we propose a novel approach called Credibility
Outcome (CREDO) which aims at scoring the credibility of an article in an open
domain setting.
CREDO consists of different modules for capturing various features
responsible for the credibility of an article. These features include the
credibility of the article's source and author, semantic similarity between the
article and related credible articles retrieved from a knowledge base, and
sentiments conveyed by the article. A neural network architecture learns the
contribution of each of these modules to the overall credibility of an article.
Experiments on the Snopes dataset reveal that CREDO outperforms the
state-of-the-art approaches based on linguistic features.
| 2,024 | Computation and Language |
Meta-Learning a Dynamical Language Model | We consider the task of word-level language modeling and study the
possibility of combining hidden-states-based short-term representations with
medium-term representations encoded in dynamical weights of a language model.
Our work extends recent experiments on language models with dynamically
evolving weights by casting the language modeling problem into an online
learning-to-learn framework in which a meta-learner is trained by
gradient descent to continuously update a language model's weights.
| 2,018 | Computation and Language |
Towards Unsupervised Automatic Speech Recognition Trained by Unaligned
Speech and Text only | Automatic speech recognition (ASR) has been widely researched with supervised
approaches, while many low-resourced languages lack audio-text aligned data,
and supervised methods cannot be applied to them.
In this work, we propose a framework to achieve unsupervised ASR on a read
English speech dataset, where audio and text are unaligned. In the first stage,
each word-level audio segment in the utterances is represented by a vector
representation extracted by a sequence-to-sequence autoencoder, in which
phonetic information and speaker information are disentangled.
Secondly, semantic embeddings of audio segments are trained from the vector
representations using a skip-gram model. Last but not least, an
unsupervised method is utilized to transform semantic embeddings of audio
segments to text embedding space, and finally the transformed embeddings are
mapped to words.
With the above framework, we move towards unsupervised ASR trained on
unaligned speech and text only.
| 2,018 | Computation and Language |
Computer-Assisted Text Analysis for Social Science: Topic Models and
Beyond | Topic models are a family of statistical algorithms to summarize,
explore and index large collections of text documents. After a decade of
research led by computer scientists, topic models have spread to social science
as a new generation of data-driven social scientists have searched for tools to
explore large collections of unstructured text. Recently, social scientists
have contributed to topic model literature with developments in causal
inference and tools for handling the problem of multi-modality. In this paper,
I provide a literature review on the evolution of topic modeling including
extensions for document covariates, methods for evaluation and interpretation,
and advances in interactive visualizations along with each aspect's relevance
and application for social science research.
| 2,018 | Computation and Language |
Actor-Critic based Training Framework for Abstractive Summarization | We present a training framework for neural abstractive summarization based on
actor-critic approaches from reinforcement learning. In the traditional neural
network based methods, the objective is only to maximize the likelihood of the
predicted summaries; no other assessment constraints are considered, which may
lead to low-quality summaries or even incorrect sentences. To alleviate this
problem, we employ an actor-critic framework to enhance the training procedure.
For the actor, we employ the typical attention based sequence-to-sequence
(seq2seq) framework as the policy network for summary generation. For the
critic, we combine the maximum likelihood estimator with a well-designed global
summary quality estimator, which is a neural-network-based binary classifier
aiming to make the generated summaries indistinguishable from the human-written
ones. A policy gradient method is used to conduct the parameter learning. An
alternating training strategy is proposed to conduct the joint training of the
actor and critic models. Extensive experiments on some benchmark datasets in
different languages show that our framework achieves improvements over the
state-of-the-art methods.
| 2,018 | Computation and Language |
Identifying Semantic Divergences in Parallel Text without Annotations | Recognizing that even correct translations are not always semantically
equivalent, we automatically detect meaning divergences in parallel sentence
pairs with a deep neural model of bilingual semantic similarity which can be
trained for any parallel corpus without any manual annotation. We show that our
semantic model detects divergences more accurately than models based on surface
features derived from word alignments, and that these divergences matter for
neural machine translation.
| 2,018 | Computation and Language |
Colorless green recurrent networks dream hierarchically | Recurrent neural networks (RNNs) have achieved impressive results in a
variety of linguistic processing tasks, suggesting that they can induce
non-trivial properties of language. We investigate here to what extent RNNs
learn to track abstract hierarchical syntactic structure. We test whether RNNs
trained with a generic language modeling objective in four languages (Italian,
English, Hebrew, Russian) can predict long-distance number agreement in various
constructions. We include in our evaluation nonsensical sentences where RNNs
cannot rely on semantic or lexical cues ("The colorless green ideas I ate with
the chair sleep furiously"), and, for Italian, we compare model performance to
human intuitions. Our language-model-trained RNNs make reliable predictions
about long-distance agreement, and do not lag much behind human performance. We
thus bring support to the hypothesis that RNNs are not just shallow-pattern
extractors, but they also acquire deeper grammatical competence.
| 2,018 | Computation and Language |
Universal Sentence Encoder | We present models for encoding sentences into embedding vectors that
specifically target transfer learning to other NLP tasks. The models are
efficient and result in accurate performance on diverse transfer tasks. Two
variants of the encoding models allow for trade-offs between accuracy and
compute resources. For both variants, we investigate and report the
relationship between model complexity, resource consumption, the availability
of transfer task training data, and task performance. Comparisons are made with
baselines that use word level transfer learning via pretrained word embeddings
as well as baselines that do not use any transfer learning. We find that transfer
learning using sentence embeddings tends to outperform word level transfer.
With transfer learning via sentence embeddings, we observe surprisingly good
performance with minimal amounts of supervised training data for a transfer
task. We obtain encouraging results on Word Embedding Association Tests (WEAT)
targeted at detecting model bias. Our pre-trained sentence encoding models are
made freely available for download and on TF Hub.
| 2,018 | Computation and Language |
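A typical usage sketch for loading a publicly released sentence encoder from TF Hub; the module URL is the commonly cited one and may not correspond to the exact models described above.

```python
# Hedged usage sketch: embed sentences with a TF Hub sentence-encoder module.
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
embeddings = embed(["The quick brown fox.", "Sentence embeddings are handy."])
print(embeddings.shape)   # e.g. (2, 512)
```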
Deep Recurrent Neural Networks for Product Attribute Extraction in
eCommerce | Extracting accurate attribute qualities from product titles is a vital
component in delivering eCommerce customers with a rewarding online shopping
experience via an enriched faceted search. We demonstrate the potential of Deep
Recurrent Networks in this domain, primarily models such as Bidirectional LSTMs
and Bidirectional LSTM-CRF with or without an attention mechanism. These have
improved overall F1 scores, as compared to the previous benchmarks (More et
al.) by at least 0.0391, showcasing an overall precision of 97.94%, recall of
94.12% and an F1 score of 0.9599. This has enabled us to achieve significant
coverage of important facets or attributes of products which not only shows the
efficacy of deep recurrent models over previous machine learning benchmarks but
also greatly enhances the overall customer experience while shopping online.
| 2,018 | Computation and Language |
Robust Cross-lingual Hypernymy Detection using Dependency Context | Cross-lingual Hypernymy Detection involves determining if a word in one
language ("fruit") is a hypernym of a word in another language ("pomme" i.e.
apple in French). The ability to detect hypernymy cross-lingually can aid in
solving cross-lingual versions of tasks such as textual entailment and event
coreference. We propose BISPARSE-DEP, a family of unsupervised approaches for
cross-lingual hypernymy detection, which learns sparse, bilingual word
embeddings based on dependency contexts. We show that BISPARSE-DEP can
significantly improve performance on this task, compared to approaches based
only on lexical context. Our approach is also robust, showing promise for
low-resource settings: our dependency-based embeddings can be learned using a
parser trained on related languages, with negligible loss in performance. We
also crowd-source a challenging dataset for this task on four languages --
Russian, French, Arabic, and Chinese. Our embeddings and datasets are publicly
available.
| 2,018 | Computation and Language |