Titles | Abstracts | Years | Categories
---|---|---|---|
SPEECH-COCO: 600k Visually Grounded Spoken Captions Aligned to MSCOCO
Data Set | This paper presents an augmentation of the MSCOCO dataset in which speech is added
to image and text. Speech captions are generated using text-to-speech (TTS)
synthesis, resulting in 616,767 spoken captions (more than 600h) paired with
images. Disfluencies and speed perturbation are added to the signal so that it
sounds more natural. Each speech signal (WAV) is paired with a JSON file
containing exact timecodes for each word/syllable/phoneme in the spoken caption.
Such a corpus could be used for Language and Vision (LaVi) tasks including
speech input or output instead of text. Investigating multimodal learning
schemes for unsupervised speech pattern discovery is also possible with this
corpus, as demonstrated by a preliminary study conducted on a subset of the
corpus (10h, 10k spoken captions). The dataset is available on Zenodo:
https://zenodo.org/record/4282267
| 2020 | Computation and Language |
All that is English may be Hindi: Enhancing language identification
through automatic ranking of likeliness of word borrowing in social media | In this paper, we present a set of computational methods to identify the
likeliness of a word being borrowed, based on the signals from social media. In
terms of Spearman correlation coefficient values, our methods perform more than
two times better (nearly 0.62) in predicting the borrowing likeliness compared
to the best performing baseline (nearly 0.26) reported in the literature. Based on
this likeliness estimate we asked annotators to re-annotate the language tags
of foreign words in predominantly native contexts. In 88 percent of cases the
annotators felt that the foreign language tag should be replaced by a native
language tag, indicating substantial scope for improvement in automatic
language identification systems.
| 2017 | Computation and Language |
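As a concrete reference for the metric reported above, the following minimal sketch shows how a predicted borrowing-likeliness ranking can be scored against a gold ranking with the Spearman correlation coefficient; the scores are invented placeholders, not the paper's data.

```python
# Score a hypothetical system ranking against a gold ranking with Spearman's rho.
from scipy.stats import spearmanr

gold_likeliness = [0.9, 0.7, 0.6, 0.3, 0.1]    # placeholder gold scores
system_scores = [0.8, 0.75, 0.5, 0.35, 0.2]    # placeholder system scores

rho, p_value = spearmanr(gold_likeliness, system_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```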
Men Are from Mars, Women Are from Venus: Evaluation and Modelling of
Verbal Associations | We present a quantitative analysis of human word association pairs and study
the types of relations presented in the associations. We put our main focus on
the correlation between response types and respondent characteristics such as
occupation and gender by contrasting syntagmatic and paradigmatic associations.
Finally, we propose a personalised distributed word association model and show
the importance of incorporating demographic factors into the models commonly
used in natural language processing.
| 2017 | Computation and Language |
Implicit Entity Linking in Tweets | Over the years, Twitter has become one of the largest communication platforms
providing key data to various applications such as brand monitoring and trend
detection. Entity linking is one of the major tasks in natural
language understanding from tweets and it associates entity mentions in text to
corresponding entries in knowledge bases in order to provide unambiguous
interpretation and additional context. State-of-the-art techniques have
focused on linking explicitly mentioned entities in tweets with reasonable
success. However, we argue that in addition to explicit mentions, e.g. "The
movie Gravity was more expensive than the Mars Orbiter Mission", entities (the
movie Gravity) can also be mentioned implicitly, e.g. "This new space movie is
crazy. You must watch it!". This paper introduces the problem of implicit entity
linking in tweets. We propose an approach that models the entities by
exploiting their factual and contextual knowledge. We demonstrate how to use
these models to perform implicit entity linking on a ground truth dataset with
397 tweets from two domains, namely, Movie and Book. Specifically, we show: 1)
the importance of linking implicit entities and its value addition to the
standard entity linking task, and 2) the importance of exploiting contextual
knowledge associated with an entity for linking their implicit mentions. We
also make the ground truth dataset publicly available to foster research in
this new area.
| 2017 | Computation and Language |
Video Highlight Prediction Using Audience Chat Reactions | Sports channel video portals offer an exciting domain for research on
multimodal, multilingual analysis. We present methods addressing the problem of
automatic video highlight prediction based on joint visual features and textual
analysis of the real-world audience discourse with complex slang, in both
English and traditional Chinese. We present a novel dataset based on League of
Legends championships recorded from North American and Taiwanese Twitch.tv
channels (will be released for further research), and demonstrate strong
results on these using multimodal, character-level CNN-RNN model architectures.
| 2017 | Computation and Language |
Self-organized Hierarchical Softmax | We propose a new self-organizing hierarchical softmax formulation for
neural-network-based language models over large vocabularies. Instead of using
a predefined hierarchical structure, our approach is capable of learning word
clusters with clear syntactical and semantic meaning during the language model
training process. We provide experiments on standard benchmarks for language
modeling and sentence compression tasks. We find that this approach is as fast
as other efficient softmax approximations, while achieving comparable or even
better performance relative to similar full softmax models.
| 2017 | Computation and Language |
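To make the factorization concrete, here is a minimal sketch of a conventional two-level, class-based softmax in PyTorch, where log P(w|h) decomposes into a cluster term and a within-cluster term. This is the fixed-hierarchy baseline, not the paper's model; the self-organizing approach above learns the cluster assignments during training, and the cluster sizes below are arbitrary.

```python
import torch
import torch.nn as nn

class TwoLevelSoftmax(nn.Module):
    """Class-based softmax: log P(w|h) = log P(c|h) + log P(w|c, h)."""
    def __init__(self, hidden_dim, cluster_sizes):
        super().__init__()
        self.cluster_logits = nn.Linear(hidden_dim, len(cluster_sizes))
        # One small output layer per cluster, over the words it contains.
        self.word_logits = nn.ModuleList(
            nn.Linear(hidden_dim, size) for size in cluster_sizes
        )

    def log_prob(self, h, cluster_id, word_id):
        log_p_cluster = torch.log_softmax(self.cluster_logits(h), dim=-1)[cluster_id]
        log_p_word = torch.log_softmax(self.word_logits[cluster_id](h), dim=-1)[word_id]
        return log_p_cluster + log_p_word

model = TwoLevelSoftmax(hidden_dim=64, cluster_sizes=[10, 10, 10, 10])
h = torch.randn(64)
print(model.log_prob(h, cluster_id=2, word_id=5))
```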
Gradient-based Inference for Networks with Output Constraints | Practitioners apply neural networks to increasingly complex problems in
natural language processing, such as syntactic parsing and semantic role
labeling that have rich output structures. Many such structured-prediction
problems require deterministic constraints on the output values; for example,
in sequence-to-sequence syntactic parsing, we require that the sequential
outputs encode valid trees. While hidden units might capture such properties,
the network is not always able to learn such constraints from the training data
alone, and practitioners must then resort to post-processing. In this paper, we
present an inference method for neural networks that enforces deterministic
constraints on outputs without performing rule-based post-processing or
expensive discrete search. Instead, in the spirit of gradient-based training,
we enforce constraints with gradient-based inference (GBI): for each input at
test-time, we nudge continuous model weights until the network's unconstrained
inference procedure generates an output that satisfies the constraints. We
study the efficacy of GBI on three tasks with hard constraints: semantic role
labeling, syntactic parsing, and sequence transduction. In each case, the
algorithm not only satisfies constraints but improves accuracy, even when the
underlying network is state-of-the-art.
| 2019 | Computation and Language |
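The following is a minimal sketch of the GBI loop described above: at test time, copy the trained network and nudge the copy's weights down the gradient of a constraint loss until decoding satisfies the constraint. The model and the toy constraint (sigmoid outputs summing to one) are invented for illustration.

```python
import copy
import torch

def gbi_decode(model, x, constraint_loss, max_steps=50, lr=1e-2):
    test_model = copy.deepcopy(model)      # leave the trained weights untouched
    opt = torch.optim.SGD(test_model.parameters(), lr=lr)
    for _ in range(max_steps):
        y = torch.sigmoid(test_model(x))
        loss = constraint_loss(y)
        if loss.item() < 1e-4:             # constraint approximately satisfied
            break
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(test_model(x)).detach()

model = torch.nn.Linear(8, 3)              # stand-in for a trained network
x = torch.randn(8)
y = gbi_decode(model, x, constraint_loss=lambda y: (y.sum() - 1.0) ** 2)
print(y, y.sum())
```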
Temporal dynamics of semantic relations in word embeddings: an
application to predicting armed conflict participants | This paper deals with using word embedding models to trace the temporal
dynamics of semantic relations between pairs of words. The set-up is similar to
the well-known analogies task, but expanded with a time dimension. To this end,
we apply incremental updating of the models with new training texts, including
incremental vocabulary expansion, coupled with learned transformation matrices
that let us map between members of the relation. The proposed approach is
evaluated on the task of predicting insurgent armed groups based on
geographical locations. The gold standard data for the time span 1994--2010 is
extracted from the UCDP Armed Conflicts dataset. The results show that the
method is feasible and outperforms the baselines, but also that important work
still remains to be done.
| 2017 | Computation and Language |
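A minimal sketch of the learned-transformation step described above: given embedding pairs from one time slice (location vectors and the vectors of the groups active there), fit a linear map with least squares and apply it to a new location. The random vectors stand in for trained word embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))   # source members of the relation (locations)
Y = rng.normal(size=(20, 50))   # target members (insurgent groups)

# Fit W to minimize ||XW - Y||_F, i.e. a linear map between relation members.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

new_location = rng.normal(size=50)
predicted_group_vec = new_location @ W   # rank real group vectors by similarity next
print(predicted_group_vec.shape)
```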
Determining Semantic Textual Similarity using Natural Deduction Proofs | Determining semantic textual similarity is a core research subject in natural
language processing. Since vector-based models for sentence representation
often use shallow information, capturing accurate semantics is difficult. By
contrast, logical semantic representations capture deeper levels of sentence
semantics, but their symbolic nature does not offer graded notions of textual
similarity. We propose a method for determining semantic textual similarity by
combining shallow features with features extracted from natural deduction
proofs of bidirectional entailment relations between sentence pairs. For the
natural deduction proofs, we use ccg2lambda, a higher-order automatic inference
system, which converts Combinatory Categorial Grammar (CCG) derivation trees
into semantic representations and conducts natural deduction proofs.
Experiments show that our system was able to outperform other logic-based
systems and that features derived from the proofs are effective for learning
textual similarity.
| 2017 | Computation and Language |
Analysis of Italian Word Embeddings | In this work we analyze the performances of two of the most used word
embeddings algorithms, skip-gram and continuous bag of words on Italian
language. These algorithms have many hyper-parameter that have to be carefully
tuned in order to obtain accurate word representation in vectorial space. We
provide an accurate analysis and an evaluation, showing what are the best
configuration of parameters for specific tasks.
| 2,017 | Computation and Language |
Detecting and Explaining Causes From Text For a Time Series Event | Explaining underlying causes or effects about events is a challenging but
valuable task. We define a novel problem of generating explanations of a time
series event by (1) searching cause and effect relationships of the time series
with textual data and (2) constructing a connecting chain between them to
generate an explanation. To detect causal features from text, we propose a
novel method based on the Granger causality of time series between features
extracted from text such as N-grams, topics, sentiments, and their composition.
The generation of the sequence of causal entities requires a commonsense
causative knowledge base with efficient reasoning. To ensure good
interpretability and appropriate lexical usage we combine symbolic and neural
representations, using a neural reasoning algorithm trained on commonsense
causal tuples to predict the next cause step. Our quantitative and human
analyses provide empirical evidence that our method successfully extracts
meaningful causal relationships between time series and textual features
and generates appropriate explanations linking them.
| 2018 | Computation and Language |
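As a sketch of the causality test underlying the feature detection above, the snippet below asks whether a text-derived feature series (here, a synthetic "sentiment" signal) Granger-causes a target time series, using the standard test from statsmodels. Both series are synthetic placeholders.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
sentiment = rng.normal(size=200)
# The target depends on lagged sentiment, so the test should reject the
# null hypothesis that sentiment does not Granger-cause the target.
target = 0.8 * np.roll(sentiment, 1) + 0.2 * rng.normal(size=200)

# Column order: [series being predicted, candidate causal series].
data = np.column_stack([target, sentiment])
results = grangercausalitytests(data, maxlag=3)
```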
Deep Residual Learning for Weakly-Supervised Relation Extraction | Deep residual learning (ResNet) is a new method for training very deep neural
networks using identity mappings for shortcut connections. ResNet won the
ImageNet ILSVRC 2015 classification task and achieved state-of-the-art
performance in many computer vision tasks. However, the effect of residual
learning on noisy natural language processing tasks is still not well
understood. In this paper, we design a novel convolutional neural network (CNN)
with residual learning, and investigate its impacts on the task of distantly
supervised noisy relation extraction. Contrary to the popular belief that
ResNet only works well for very deep networks, we find that even with 9 layers
of CNNs, using identity mapping significantly improves performance on
distantly-supervised relation extraction.
| 2017 | Computation and Language |
Strawman: an Ensemble of Deep Bag-of-Ngrams for Sentiment Analysis | This paper describes a builder entry, named "strawman", to the sentence-level
sentiment analysis task of the "Build It, Break It" shared task of the First
Workshop on Building Linguistically Generalizable NLP Systems. The goal of a
builder is to provide an automated sentiment analyzer that would serve as a
target for breakers whose goal is to find pairs of minimally-differing
sentences that break the analyzer.
| 2017 | Computation and Language |
Effective Inference for Generative Neural Parsing | Generative neural models have recently achieved state-of-the-art results for
constituency parsing. However, without a feasible search procedure, their use
has so far been limited to reranking the output of external parsers in which
decoding is more tractable. We describe an alternative to the conventional
action-level beam search used for discriminative neural models that enables us
to decode directly in these generative models. We then show that by improving
our basic candidate selection strategy and using a coarse pruning function, we
can improve accuracy while exploring significantly less of the search space.
Applied to the model of Choe and Charniak (2016), our inference procedure
obtains 92.56 F1 on section 23 of the Penn Treebank, surpassing prior
state-of-the-art results for single-model systems.
| 2017 | Computation and Language |
ASDA : Analyseur Syntaxique du Dialecte Algérien dans un but
d'analyse sémantique | Opinion mining and sentiment analysis in social media is a research issue
of great interest to the scientific community. However, before beginning this
analysis, we face a set of problems, in particular the richness of the
languages and dialects used on these media. To address this
problem, we propose in this paper an approach for constructing and
implementing a syntactic analyzer named ASDA. This tool is a parser
for the Algerian dialect that labels the terms of a given corpus. We
construct a labeling table containing, for each term, its stem and its
different prefixes and suffixes, allowing us to determine the different
grammatical parts, a sort of POS tagging. This labeling will later serve the
semantic processing of the Algerian dialect, such as automatic translation of
this dialect or sentiment analysis.
| 2017 | Computation and Language |
A Shared Task on Bandit Learning for Machine Translation | We introduce and describe the results of a novel shared task on bandit
learning for machine translation. The task was organized jointly by Amazon and
Heidelberg University for the first time at the Second Conference on Machine
Translation (WMT 2017). The goal of the task is to encourage research on
learning machine translation from weak user feedback instead of human
references or post-edits. On each of a sequence of rounds, a machine
translation system is required to propose a translation for an input, and
receives a real-valued estimate of the quality of the proposed translation for
learning. This paper describes the shared task's learning and evaluation setup,
using services hosted on Amazon Web Services (AWS), the data and evaluation
metrics, and the results of various machine translation architectures and
learning protocols.
| 2017 | Computation and Language |
Adapting Sequence Models for Sentence Correction | In a controlled experiment of sequence-to-sequence approaches for the task of
sentence correction, we find that character-based models are generally more
effective than word-based models and models that encode subword information via
convolutions, and that modeling the output data as a series of diffs improves
effectiveness over standard approaches. Our strongest sequence-to-sequence
model improves over our strongest phrase-based statistical machine translation
model, with access to the same data, by 6 M2 (0.5 GLEU) points. Additionally,
in the data environment of the standard CoNLL-2014 setup, we demonstrate that
modeling (and tuning against) diffs yields similar or better M2 scores with
simpler models and/or significantly less data than previous
sequence-to-sequence approaches.
| 2017 | Computation and Language |
Learning to Predict Charges for Criminal Cases with Legal Basis | The charge prediction task is to determine appropriate charges for a given
case, which is helpful for legal assistant systems where the user input is a
fact description. We argue that relevant law articles play an important role in this
task, and therefore propose an attention-based neural network method to jointly
model the charge prediction task and the relevant article extraction task in a
unified framework. The experimental results show that, besides providing legal
basis, the relevant articles can also clearly improve the charge prediction
results, and our full model can effectively predict appropriate charges for
cases with different expression styles.
| 2018 | Computation and Language |
Improving coreference resolution with automatically predicted prosodic
information | Adding manually annotated prosodic information, specifically pitch accents
and phrasing, to the typical text-based feature set for coreference resolution
has previously been shown to have a positive effect on German data. Practical
applications on spoken language, however, would rely on automatically predicted
prosodic information. In this paper we predict pitch accents (and phrase
boundaries) using a convolutional neural network (CNN) model from acoustic
features extracted from the speech signal. After an assessment of the quality
of these automatic prosodic annotations, we show that they also significantly
improve coreference resolution.
| 2017 | Computation and Language |
Online Deception Detection Refueled by Real World Data Collection | The lack of large realistic datasets presents a bottleneck in online
deception detection studies. In this paper, we apply a data collection method
based on social network analysis to quickly identify high-quality deceptive and
truthful online reviews from Amazon. The dataset contains more than 10,000
deceptive reviews and is diverse in product domains and reviewers. Using this
dataset, we explore effective general features for online deception detection
that perform well across domains. We demonstrate that with generalized features
- advertising speak and writing complexity scores - deception detection
performance can be further improved by adding additional deceptive reviews from
assorted domains in training. Finally, reviewer level evaluation gives an
interesting insight into different deceptive reviewers' writing styles.
| 2017 | Computation and Language |
A Weakly Supervised Approach to Train Temporal Relation Classifiers and
Acquire Regular Event Pairs Simultaneously | Capabilities of detecting temporal relations between two events can benefit
many applications. Most of existing temporal relation classifiers were trained
in a supervised manner. Instead, we explore the observation that regular event
pairs show a consistent temporal relation despite of their various contexts,
and these rich contexts can be used to train a contextual temporal relation
classifier, which can further recognize new temporal relation contexts and
identify new regular event pairs. We focus on detecting after and before
temporal relations and design a weakly supervised learning approach that
extracts thousands of regular event pairs and learns a contextual temporal
relation classifier simultaneously. Evaluation shows that the acquired regular
event pairs are of high quality and contain rich commonsense knowledge and
domain-specific knowledge. In addition, the weakly supervised temporal
relation classifier achieves performance comparable to state-of-the-art
supervised systems.
| 2017 | Computation and Language |
Bilingual Document Alignment with Latent Semantic Indexing | We apply cross-lingual Latent Semantic Indexing to the Bilingual Document
Alignment Task at WMT16. Reduced-rank singular value decomposition of a
bilingual term-document matrix derived from known English/French page pairs in
the training data allows us to map monolingual documents into a joint semantic
space. Two variants of cosine similarity between the vectors that place each
document into the joint semantic space are combined with a measure of string
similarity between corresponding URLs to produce 1:1 alignments of
English/French web pages in a variety of domains. The system achieves a recall
of ca. 88% if no in-domain data is used for building the latent semantic model,
and 93% if such data is included.
Analysing the system's errors on the training data, we argue that evaluating
aligner performance based on exact URL matches underestimates their true
performance, and propose an alternative that is able to account for duplicates
and near-duplicates in the underlying data.
| 2017 | Computation and Language |
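A minimal sketch of the pipeline described above: build a term-document matrix over concatenated English/French page pairs, reduce it with a truncated SVD, and compare monolingual pages by cosine similarity in the joint space. The toy documents are placeholders, and the real system additionally combines this with URL string similarity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Known page pairs: each training "document" concatenates both languages,
# so the SVD learns a joint English/French semantic space.
train_pairs = [
    "the cat sat le chat était assis",
    "machine translation traduction automatique",
    "web page alignment alignement de pages web",
]

vectorizer = TfidfVectorizer()
svd = TruncatedSVD(n_components=2)
svd.fit(vectorizer.fit_transform(train_pairs))

# Map monolingual documents into the joint space and compare.
english = svd.transform(vectorizer.transform(["machine translation"]))
french = svd.transform(vectorizer.transform(["traduction automatique"]))
print(cosine_similarity(english, french))
```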
Sentiment Analysis on Financial News Headlines using Training Dataset
Augmentation | This paper discusses the approach taken by the UWaterloo team to arrive at a
solution for the Fine-Grained Sentiment Analysis problem posed by Task 5 of
SemEval 2017. The paper describes the document vectorization and sentiment
score prediction techniques used, as well as the design and implementation
decisions taken while building the system for this task. The system uses text
vectorization models, such as N-gram, TF-IDF and paragraph embeddings, coupled
with regression model variants to predict the sentiment scores. Amongst the
methods examined, unigrams and bigrams coupled with simple linear regression
obtained the best baseline accuracy. The paper also explores data augmentation
methods to supplement the training dataset. This system was designed for
Subtask 2 (News Statements and Headlines).
| 2017 | Computation and Language |
Zero-Shot Activity Recognition with Verb Attribute Induction | In this paper, we investigate large-scale zero-shot activity recognition by
modeling the visual and linguistic attributes of action verbs. For example, the
verb "salute" has several properties, such as being a light movement, a social
act, and short in duration. We use these attributes as the internal mapping
between visual and textual representations to reason about a previously unseen
action. In contrast to much prior work that assumes access to gold standard
attributes for zero-shot classes and focuses primarily on object attributes,
our model uniquely learns to infer action attributes from dictionary
definitions and distributed word representations. Experimental results confirm
that action attributes inferred from language can provide a predictive signal
for zero-shot prediction of previously unseen activities.
| 2017 | Computation and Language |
Topology Analysis of International Networks Based on Debates in the
United Nations | In complex, high dimensional and unstructured data it is often difficult to
extract meaningful patterns. This is especially the case when dealing with
textual data. Recent studies in machine learning, information theory and
network science have developed several novel instruments to extract the
semantics of unstructured data, and harness it to build a network of relations.
Such approaches serve as an efficient tool for dimensionality reduction and
pattern detection. This paper applies semantic network science to extract
ideological proximity in the international arena, by focusing on the data from
General Debates in the UN General Assembly on topics of high salience to the
international community. The UN General Debate corpus (UNGDC) covers all high-level
debates in the UN General Assembly from 1970 to 2014, covering all UN member
states. The research proceeds in three main steps. First, Latent Dirichlet
Allocation (LDA) is used to extract the topics of the UN speeches, and
therefore semantic information. Each country is then assigned a vector
specifying the exposure to each of the topics identified. This intermediate
output is then used to construct a network of countries based on
information-theoretic metrics, where the links capture similar vectorial patterns
in the topic distributions. The topology of the networks is then analyzed through network
properties like density, path length and clustering. Finally, we identify
specific topological features of our networks using the map equation framework
to detect communities in our networks of countries.
| 2017 | Computation and Language |
Curriculum Learning and Minibatch Bucketing in Neural Machine
Translation | We examine the effects of particular orderings of sentence pairs on the
on-line training of neural machine translation (NMT). We focus on two types of
such orderings: (1) ensuring that each minibatch contains sentences similar in
some aspect and (2) gradual inclusion of some sentence types as the training
progresses (so-called "curriculum learning"). In our English-to-Czech
experiments, the internal homogeneity of minibatches has no effect on the
training, but some of our "curricula" achieve a small improvement over the
baseline.
| 2017 | Computation and Language |
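A minimal sketch of one of the orderings studied above, length-based minibatch bucketing: sort sentence pairs by source length so that each minibatch is internally homogeneous. The toy corpus is a placeholder.

```python
def length_bucketed_batches(pairs, batch_size):
    # Sort by source-sentence length, then cut into consecutive minibatches.
    ordered = sorted(pairs, key=lambda p: len(p[0].split()))
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

corpus = [
    ("a cat", "kočka"),
    ("the dog sleeps", "pes spí"),
    ("hi", "ahoj"),
    ("we study curriculum learning here", "..."),
]
for batch in length_bucketed_batches(corpus, batch_size=2):
    print(batch)
```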
Learning Language Representations for Typology Prediction | One central mystery of neural NLP is what neural models "know" about their
subject matter. When a neural machine translation system learns to translate
from one language to another, does it learn the syntax or semantics of the
languages? Can this knowledge be extracted from the system to fill holes in
human scientific knowledge? Existing typological databases contain relatively
full feature specifications for only a few hundred languages. Exploiting the
existence of parallel texts in more than a thousand languages, we build a
massive many-to-one neural machine translation (NMT) system from 1017 languages
into English, and use this to predict information missing from typological
databases. Experiments show that the proposed method is able to infer not only
syntactic, but also phonological and phonetic inventory features, and improves
over a baseline that has access to information about the languages' geographic
and phylogenetic neighbors.
| 2017 | Computation and Language |
Joint Named Entity Recognition and Stance Detection in Tweets | Named entity recognition (NER) is a well-established task of information
extraction which has been studied for decades. More recently, studies reporting
NER experiments on social media texts have emerged. On the other hand, stance
detection is a considerably new research topic usually considered within the
scope of sentiment analysis. Stance detection studies are mostly applied to
texts of online debates where the stance of the text owner for a particular
target, either explicitly or implicitly mentioned in text, is explored. In this
study, we investigate the possible contribution of named entities to the stance
detection task in tweets. We report the evaluation results of NER experiments
as well as that of the subsequent stance detection experiments using named
entities, on a publicly-available stance-annotated data set of tweets. Our
results indicate that named entities obtained with a high-performance NER
system can contribute to stance detection performance on tweets.
| 2017 | Computation and Language |
Skill2vec: Machine Learning Approach for Determining the Relevant Skills
from Job Description | Unsupervise learned word embeddings have seen tremendous success in numerous
Natural Language Processing (NLP) tasks in recent years. The main contribution
of this paper is to develop a technique called Skill2vec, which applies machine
learning techniques in recruitment to enhance the search strategy to find
candidates possessing the appropriate skills. Skill2vec is a neural network
architecture inspired by Word2vec, developed by Mikolov et al. in 2013. It
transforms skills into a new vector space that supports vector arithmetic and
captures relationships between skills. We conducted a manual experimental
evaluation with a recruitment company's domain experts to demonstrate
the effectiveness of our approach.
| 2019 | Computation and Language |
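A minimal sketch of the Skill2vec idea using gensim's Word2vec implementation: treat each job posting's skill list as a "sentence" so that co-occurring skills receive nearby vectors. The skill lists are invented placeholders.

```python
from gensim.models import Word2Vec

skill_sequences = [
    ["python", "machine-learning", "sql"],
    ["python", "deep-learning", "machine-learning"],
    ["excel", "accounting", "sql"],
]

model = Word2Vec(sentences=skill_sequences, vector_size=16,
                 window=3, min_count=1, sg=1, epochs=50)
print(model.wv.most_similar("python", topn=2))
```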
Low-Resource Neural Headline Generation | Recent neural headline generation models have shown great results, but are
generally trained on very large datasets. We focus our efforts on improving
headline quality on smaller datasets by means of pretraining. We propose
new methods that enable pre-training all the parameters of the model and
utilizing all available text, resulting in improvements of up to 32.4% relative
in perplexity and 2.84 points in ROUGE.
| 2017 | Computation and Language |
Combining Thesaurus Knowledge and Probabilistic Topic Models | In this paper we present the approach of introducing thesaurus knowledge into
probabilistic topic models. The main idea of the approach is based on the
assumption that the frequencies of semantically related words and phrases
that co-occur in the same texts should be enhanced: this leads to their
larger contribution to the topics found in these texts. We have conducted
experiments with several thesauri and found that to improve topic models, it
is useful to utilize domain-specific knowledge. If a general thesaurus, such as
WordNet, is used, the thesaurus-based improvement of topic models can be
achieved by excluding hyponymy relations in combined topic models.
| 2017 | Computation and Language |
Reporting Score Distributions Makes a Difference: Performance Study of
LSTM-networks for Sequence Tagging | In this paper we show that reporting a single performance score is
insufficient to compare non-deterministic approaches. We demonstrate for common
sequence tagging tasks that the seed value for the random number generator can
result in statistically significant (p < 10^-4) differences for
state-of-the-art systems. For two recent systems for NER, we observe an
absolute difference of one percentage point F1-score depending on the selected
seed value, making these systems perceived either as state-of-the-art or
mediocre. Instead of publishing and reporting single performance scores, we
propose to compare score distributions based on multiple executions. Based on
the evaluation of 50,000 LSTM networks for five sequence tagging tasks, we
present network architectures that both yield superior performance and are
more stable with respect to the remaining hyperparameters.
| 2017 | Computation and Language |
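A minimal sketch of the practice recommended above: run the same system with several seeds and compare the resulting score distributions instead of single numbers. Here train_and_score is a hypothetical stand-in for a full training run; the distribution comparison itself uses a standard nonparametric test.

```python
import random
import statistics
from scipy.stats import mannwhitneyu

def train_and_score(seed):            # hypothetical stand-in: returns an F1 score
    random.seed(seed)
    return 0.90 + random.gauss(0, 0.005)

scores_a = [train_and_score(s) for s in range(20)]
scores_b = [train_and_score(s + 100) * 1.003 for s in range(20)]

print(f"A: {statistics.mean(scores_a):.4f} +/- {statistics.stdev(scores_a):.4f}")
print(f"B: {statistics.mean(scores_b):.4f} +/- {statistics.stdev(scores_b):.4f}")
print("Mann-Whitney U p-value:", mannwhitneyu(scores_a, scores_b).pvalue)
```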
Linguistically Motivated Vocabulary Reduction for Neural Machine
Translation from Turkish to English | The necessity of using a fixed-size word vocabulary in order to control the
model complexity in state-of-the-art neural machine translation (NMT) systems
is an important bottleneck on performance, especially for morphologically rich
languages. Conventional methods that aim to overcome this problem by using
sub-word or character-level representations solely rely on statistics and
disregard the linguistic properties of words, which leads to interruptions in
the word structure and causes semantic and syntactic losses. In this paper, we
propose a new vocabulary reduction method for NMT, which can reduce the
vocabulary of a given input corpus at any rate while also considering the
morphological properties of the language. Our method is based on unsupervised
morphology learning and can be, in principle, used for pre-processing any
language pair. We also present an alternative word segmentation method based on
supervised morphological analysis, which aids us in measuring the accuracy of
our model. We evaluate our method on the Turkish-to-English NMT task, where the
input language is morphologically rich and agglutinative. We analyze different
representation methods in terms of translation accuracy as well as the semantic
and syntactic properties of the generated output. Our method obtains a
significant improvement of 2.3 BLEU points over the conventional vocabulary
reduction technique, showing that it can provide better accuracy in open
vocabulary translation of morphologically rich languages.
| 2017 | Computation and Language |
Regularization techniques for fine-tuning in neural machine translation | We investigate techniques for supervised domain adaptation for neural machine
translation where an existing model trained on a large out-of-domain dataset is
adapted to a small in-domain dataset. In this scenario, overfitting is a major
challenge. We investigate a number of techniques to reduce overfitting and
improve transfer learning, including regularization techniques such as dropout
and L2-regularization towards an out-of-domain prior. In addition, we introduce
tuneout, a novel regularization technique inspired by dropout. We apply these
techniques, alone and in combination, to neural machine translation, obtaining
improvements on IWSLT datasets for English->German and English->Russian. We
also investigate the amounts of in-domain training data needed for domain
adaptation in NMT, and find a logarithmic relationship between the amount of
training data and gain in BLEU score.
| 2017 | Computation and Language |
SemEval-2017 Task 1: Semantic Textual Similarity - Multilingual and
Cross-lingual Focused Evaluation | Semantic Textual Similarity (STS) measures the meaning similarity of
sentences. Applications include machine translation (MT), summarization,
generation, question answering (QA), short answer grading, semantic search,
dialog and conversational systems. The STS shared task is a venue for assessing
the current state-of-the-art. The 2017 task focuses on multilingual and
cross-lingual pairs with one sub-track exploring MT quality estimation (MTQE)
data. The task obtained strong participation from 31 teams, with 17
participating in all language tracks. We summarize performance and review a
selection of well performing methods. Analysis highlights common errors,
providing insight into the limitations of existing models. To support ongoing
work on semantic representations, the STS Benchmark is introduced as a new
shared training and evaluation set carefully selected from the corpus of
English STS shared task data (2012-2017).
| 2017 | Computation and Language |
The Code2Text Challenge: Text Generation in Source Code Libraries | We propose a new shared task for tactical data-to-text generation in the
domain of source code libraries. Specifically, we focus on text generation of
function descriptions from example software projects. Data is drawn from
existing resources used for studying the related problem of semantic parser
induction (Richardson and Kuhn, 2017b; Richardson and Kuhn, 2017a), and spans a
wide variety of both natural languages and programming languages. In this
paper, we describe these existing resources, which will serve as training and
development data for the task, and discuss plans for building new independent
test sets.
| 2018 | Computation and Language |
Learned in Translation: Contextualized Word Vectors | Computer vision has benefited from initializing multiple deep layers with
weights pretrained on large supervised training sets like ImageNet. Natural
language processing (NLP) typically sees initialization of only the lowest
layer of deep models with pretrained word vectors. In this paper, we use a deep
LSTM encoder from an attentional sequence-to-sequence model trained for machine
translation (MT) to contextualize word vectors. We show that adding these
context vectors (CoVe) improves performance over using only unsupervised word
and character vectors on a wide variety of common NLP tasks: sentiment analysis
(SST, IMDb), question classification (TREC), entailment (SNLI), and question
answering (SQuAD). For fine-grained sentiment analysis and entailment, CoVe
improves performance of our baseline models to the state of the art.
| 2018 | Computation and Language |
Grounding Language for Transfer in Deep Reinforcement Learning | In this paper, we explore the utilization of natural language to drive
transfer for reinforcement learning (RL). Despite the widespread application
of deep RL techniques, learning generalized policy representations that work
across domains remains a challenging problem. We demonstrate that textual
descriptions of environments provide a compact intermediate channel to
facilitate effective policy transfer. Specifically, by learning to ground the
meaning of text to the dynamics of the environment such as transitions and
rewards, an autonomous agent can effectively bootstrap policy learning on a new
domain given its description. We employ a model-based RL approach consisting of
a differentiable planning module, a model-free component and a factorized state
representation to effectively use entity descriptions. Our model outperforms
prior work on both transfer and multi-task scenarios in a variety of different
environments. For instance, we achieve up to 14% and 11.5% absolute improvement
over previously existing models in terms of average and initial rewards,
respectively.
| 2018 | Computation and Language |
Neural Rating Regression with Abstractive Tips Generation for
Recommendation | Recently, some E-commerce sites launch a new interaction box called Tips on
their mobile apps. Users can express their experience and feelings or provide
suggestions using short texts typically several words or one sentence. In
essence, writing some tips and giving a numerical rating are two facets of a
user's product assessment action, expressing the user experience and feelings.
Jointly modeling these two facets is helpful for designing a better
recommendation system. While some existing models integrate text information
such as item specifications or user reviews into user and item latent factors
for improving rating prediction, no existing work considers tips for
improving recommendation quality. We propose a deep learning based framework
named NRT which can simultaneously predict precise ratings and generate
abstractive tips with good linguistic quality simulating user experience and
feelings. For abstractive tips generation, gated recurrent neural networks are
employed to "translate" user and item latent representations into a concise
sentence. Extensive experiments on benchmark datasets from different domains
show that NRT achieves significant improvements over the state-of-the-art
methods. Moreover, the generated tips can vividly predict the user experience
and feelings.
| 2017 | Computation and Language |
Using Linguistic Features to Improve the Generalization Capability of
Neural Coreference Resolvers | Coreference resolution is an intermediate step for text understanding. It is
used in tasks and domains for which we do not necessarily have coreference
annotated corpora. Therefore, generalization is of special importance for
coreference resolution. However, while recent coreference resolvers have
notable improvements on the CoNLL dataset, they struggle to generalize properly
to new domains or datasets. In this paper, we investigate the role of
linguistic features in building more generalizable coreference resolvers. We
show that generalization improves only slightly by merely using a set of
additional linguistic features. However, employing features and subsets of
their values that are informative for coreference resolution, considerably
improves generalization. Thanks to better generalization, our system achieves
state-of-the-art results in out-of-domain evaluations, e.g., on WikiCoref, our
system, which is trained on CoNLL, achieves on-par performance with a system
designed for this dataset.
| 2018 | Computation and Language |
An Investigation into the Pedagogical Features of Documents | Characterizing the content of a technical document in terms of its learning
utility can be useful for applications related to education, such as generating
reading lists from large collections of documents. We refer to this learning
utility as the "pedagogical value" of the document to the learner. While
pedagogical value is an important concept that has been studied extensively
within the education domain, there has been little work exploring it from a
computational, i.e., natural language processing (NLP), perspective. To allow a
computational exploration of this concept, we introduce the notion of
"pedagogical roles" of documents (e.g., Tutorial and Survey) as an intermediary
component for the study of pedagogical value. Given the lack of available
corpora for our exploration, we create the first annotated corpus of
pedagogical roles and use it to test baseline techniques for automatic
prediction of such roles.
| 2017 | Computation and Language |
Natural Language Processing with Small Feed-Forward Networks | We show that small and shallow feed-forward neural networks can achieve near
state-of-the-art results on a range of unstructured and structured language
processing tasks while being considerably cheaper in memory and computational
requirements than deep recurrent models. Motivated by resource-constrained
environments like mobile phones, we showcase simple techniques for obtaining
such small neural network models, and investigate different tradeoffs when
deciding how to allocate a small memory budget.
| 2017 | Computation and Language |
Improving Part-of-Speech Tagging for NLP Pipelines | This paper outlines the results of sentence level linguistics based rules for
improving part-of-speech tagging. It is well known that the performance of
complex NLP systems is negatively affected if one of the preliminary stages is
less than perfect. Errors in the initial stages in the pipeline have a
snowballing effect on the pipeline's end performance. We have created a set of
linguistics-based rules at the sentence level which adjust part-of-speech tags
from state-of-the-art taggers. Comparisons with state-of-the-art taggers on
widely used benchmarks demonstrate significant improvements in tagging accuracy
and consequently in the quality and accuracy of NLP systems.
| 2017 | Computation and Language |
SenGen: Sentence Generating Neural Variational Topic Model | We present a new topic model that generates documents by sampling a topic for
one whole sentence at a time, and generating the words in the sentence using an
RNN decoder that is conditioned on the topic of the sentence. We argue that
this novel formalism will help us not only visualize and model the topical
discourse structure in a document better, but also potentially lead to more
interpretable topics since we can now illustrate topics by sampling
representative sentences instead of bags of words or phrases. We present a
variational auto-encoder approach for learning in which we use a factorized
variational encoder that independently models the posterior over topical
mixture vectors of documents using a feed-forward network, and the posterior
over topic assignments to sentences using an RNN. Our preliminary experiments
on two different datasets indicate early promise, but also expose many
challenges that remain to be addressed.
| 2017 | Computation and Language |
A Continuously Growing Dataset of Sentential Paraphrases | A major challenge in paraphrase research is the lack of parallel corpora. In
this paper, we present a new method to collect large-scale sentential
paraphrases from Twitter by linking tweets through shared URLs. The main
advantage of our method is its simplicity, as it removes the classifier or
human in the loop that previous work needed to select data before annotation
and subsequent application of paraphrase identification algorithms. We
present the largest human-labeled paraphrase corpus to date of 51,524 sentence
pairs and the first cross-domain benchmarking for automatic paraphrase
identification. In addition, we show that more than 30,000 new sentential
paraphrases can be easily and continuously captured every month at ~70%
precision, and demonstrate their utility for downstream NLP tasks through
phrasal paraphrase extraction. We make our code and data freely available.
| 2017 | Computation and Language |
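A minimal sketch of the collection step described above: tweets sharing a URL are grouped, and sentences within a group become candidate paraphrase pairs for annotation. The tweet records are invented placeholders.

```python
from collections import defaultdict
from itertools import combinations

tweets = [
    ("Big win for the home team tonight!", "http://example.com/game1"),
    ("Home team pulls off a big victory", "http://example.com/game1"),
    ("New phone announced today", "http://example.com/phone"),
]

by_url = defaultdict(list)
for text, url in tweets:
    by_url[url].append(text)

candidate_pairs = [pair for texts in by_url.values()
                   for pair in combinations(texts, 2)]
print(candidate_pairs)
```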
A Generative Parser with a Discriminative Recognition Algorithm | Generative models defining joint distributions over parse trees and sentences
are useful for parsing and language modeling, but impose restrictions on the
scope of features and are often outperformed by discriminative models. We
propose a framework for parsing and language modeling which marries a
generative model with a discriminative recognition model in an encoder-decoder
setting. We provide interpretations of the framework based on expectation
maximization and variational inference, and show that it enables parsing and
language modeling within a single implementation. On the English Penn
Treebank, our framework obtains competitive performance on constituency
parsing while matching the state-of-the-art single-model language modeling
score.
| 2017 | Computation and Language |
Deriving Verb Predicates By Clustering Verbs with Arguments | Hand-built verb clusters such as the widely used Levin classes (Levin, 1993)
have proved useful, but have limited coverage. Verb classes automatically
induced from corpus data such as those from VerbKB (Wijaya, 2016), on the other
hand, can give clusters with much larger coverage, and can be adapted to
specific corpora such as Twitter. We present a method for clustering the
outputs of VerbKB: verbs with their multiple argument types, e.g.
"marry(person, person)", "feel(person, emotion)." We make use of a novel
low-dimensional embedding of verbs and their arguments to produce high quality
clusters in which the same verb can be in different clusters depending on its
argument type. The resulting verb clusters do a better job than hand-built
clusters of predicting sarcasm, sentiment, and locus of control in tweets.
| 2017 | Computation and Language |
A Lightweight Front-end Tool for Interactive Entity Population | Entity population, a task of collecting entities that belong to a particular
category, has attracted attention from vertical domains. There is still a high
demand for creating entity dictionaries in vertical domains, which are not
covered by existing knowledge bases. We develop a lightweight front-end tool
for facilitating interactive entity population. We implement key components
necessary for effective interactive entity population: 1) GUI-based dashboards
to quickly modify an entity dictionary, and 2) entity highlighting on documents
for quickly viewing the current progress. We aim to reduce user cost from
beginning to end, including package installation and maintenance. The
implementation enables users to use this tool on their web browsers without any
additional packages --- users can focus on their missions to create entity
dictionaries. Moreover, an entity expansion module is implemented as external
APIs. This design makes it easy to continuously improve interactive entity
population pipelines. We are making our demo publicly available
(http://bit.ly/luwak-demo).
| 2017 | Computation and Language |
End-to-End Neural Segmental Models for Speech Recognition | Segmental models are an alternative to frame-based models for sequence
prediction, where hypothesized path weights are based on entire segment scores
rather than a single frame at a time. Neural segmental models are segmental
models that use neural network-based weight functions. Neural segmental models
have achieved competitive results for speech recognition, and their end-to-end
training has been explored in several studies. In this work, we review neural
segmental models, which can be viewed as consisting of a neural network-based
acoustic encoder and a finite-state transducer decoder. We study end-to-end
segmental models with different weight functions, including ones based on
frame-level neural classifiers and on segmental recurrent neural networks. We
study how reducing the search space size impacts performance under different
weight functions. We also compare several loss functions for end-to-end
training. Finally, we explore training approaches, including multi-stage vs.
end-to-end training and multitask training that combines segmental and
frame-level losses.
| 2018 | Computation and Language |
Improved Representation Learning for Predicting Commonsense Ontologies | Recent work in learning ontologies (hierarchical and partially-ordered
structures) has leveraged the intrinsic geometry of spaces of learned
representations to make predictions that automatically obey complex structural
constraints. We explore two extensions of one such model, the order-embedding
model for hierarchical relation learning, with an aim towards improved
performance on text data for commonsense knowledge representation. Our first
model jointly learns ordering relations and non-hierarchical knowledge in the
form of raw text. Our second extension exploits the partial order structure of
the training data to find long-distance triplet constraints among embeddings
which are poorly enforced by the pairwise training procedure. We find that both
incorporating free text and augmented training constraints improve over the
original order-embedding model and other strong baselines.
| 2017 | Computation and Language |
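For reference, a minimal sketch of the order-embedding penalty that the base model above builds on: E(u, v) = ||max(0, v - u)||^2 is zero exactly when u dominates v coordinate-wise, so a zero energy means the pair respects the partial order (which direction encodes the hypernym is a matter of convention). The vectors are toy placeholders.

```python
import numpy as np

def order_violation(u, v):
    """Penalty for asserting that u sits above v in the partial order."""
    return np.sum(np.maximum(0.0, v - u) ** 2)

u = np.array([1.0, 2.0, 3.0])
v = np.array([0.5, 1.5, 2.0])

print(order_violation(u, v))   # 0.0: the ordering constraint holds
print(order_violation(v, u))   # > 0: the ordering constraint is violated
```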
Low-Rank Hidden State Embeddings for Viterbi Sequence Labeling | In textual information extraction and other sequence labeling tasks it is now
common to use recurrent neural networks (such as LSTM) to form rich embedded
representations of long-term input co-occurrence patterns. Representation of
output co-occurrence patterns is typically limited to a hand-designed graphical
model, such as a linear-chain CRF representing short-term Markov dependencies
among successive labels. This paper presents a method that learns embedded
representations of latent output structure in sequence data. Our model takes
the form of a finite-state machine with a large number of latent states per
label (a latent variable CRF), where the state-transition matrix is
factorized---effectively forming an embedded representation of
state-transitions capable of enforcing long-term label dependencies, while
supporting exact Viterbi inference over output labels. We demonstrate accuracy
improvements and interpretable latent structure in a synthetic but complex task
based on CoNLL named entity recognition.
| 2017 | Computation and Language |
Analyzing Neural MT Search and Model Performance | In this paper, we offer an in-depth analysis about the modeling and search
performance. We address the question if a more complex search algorithm is
necessary. Furthermore, we investigate the question if more complex models
which might only be applicable during rescoring are promising.
By separating the search space and the modeling using $n$-best list
reranking, we analyze the influence of both parts of an NMT system
independently. By comparing differently performing NMT systems, we show that
the better translation is already in the search space of the translation
systems with less performance. This results indicate that the current search
algorithms are sufficient for the NMT systems. Furthermore, we could show that
even a relatively small $n$-best list of $50$ hypotheses already contain
notably better translations.
| 2,017 | Computation and Language |
Deep Recurrent Generative Decoder for Abstractive Text Summarization | We propose a new framework for abstractive text summarization based on a
sequence-to-sequence oriented encoder-decoder model equipped with a deep
recurrent generative decoder (DRGN).
Latent structure information implied in the target summaries is learned based
on a recurrent latent random model for improving the summarization quality.
Neural variational inference is employed to address the intractable posterior
inference for the recurrent latent variables.
Abstractive summaries are generated based on both the generative latent
variables and the discriminative deterministic states.
Extensive experiments on some benchmark datasets in different languages show
that DRGN achieves improvements over the state-of-the-art methods.
| 2017 | Computation and Language |
Dynamic Data Selection for Neural Machine Translation | Intelligent selection of training data has proven a successful technique to
simultaneously increase training efficiency and translation performance for
phrase-based machine translation (PBMT). With the recent increase in popularity
of neural machine translation (NMT), we explore in this paper to what extent
and how NMT can also benefit from data selection. While state-of-the-art data
selection (Axelrod et al., 2011) consistently performs well for PBMT, we show
that gains are substantially lower for NMT. Next, we introduce dynamic data
selection for NMT, a method in which we vary the selected subset of training
data between different training epochs. Our experiments show that the best
results are achieved when applying a technique we call gradual fine-tuning,
with improvements up to +2.6 BLEU over the original data selection approach and
up to +3.1 BLEU over a general baseline.
| 2017 | Computation and Language |
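A minimal sketch of cross-entropy-difference selection (the Axelrod et al., 2011 style criterion referenced above) made dynamic by shrinking the selected subset each epoch, in the spirit of gradual fine-tuning. The two scoring functions are hypothetical stand-ins for in-domain and general-domain language models.

```python
import random

random.seed(0)
corpus = [f"sentence {i}" for i in range(1000)]

def in_domain_xent(sentence):    # hypothetical in-domain LM cross-entropy
    return random.uniform(2.0, 6.0)

def general_xent(sentence):      # hypothetical general-domain LM cross-entropy
    return random.uniform(2.0, 6.0)

# Lower score = more in-domain-like (Moore-Lewis style ranking).
ranked = sorted(corpus, key=lambda s: in_domain_xent(s) - general_xent(s))

for epoch, keep_fraction in enumerate([1.0, 0.5, 0.25, 0.125]):
    subset = ranked[: int(len(ranked) * keep_fraction)]
    print(f"epoch {epoch}: training on {len(subset)} sentences")
    # train_one_epoch(model, subset)   # placeholder for the actual NMT update
```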
The University of Edinburgh's Neural MT Systems for WMT17 | This paper describes the University of Edinburgh's submissions to the WMT17
shared news translation and biomedical translation tasks. We participated in 12
translation directions for news, translating between English and Czech, German,
Latvian, Russian, Turkish and Chinese. For the biomedical task we submitted
systems for English to Czech, German, Polish and Romanian. Our systems are
neural machine translation systems trained with Nematus, an attentional
encoder-decoder. We follow our setup from last year and build BPE-based models
with parallel and back-translated monolingual training data. Novelties this
year include the use of deep architectures, layer normalization, and more
compact models due to weight tying and improvements in BPE segmentations. We
perform extensive ablative experiments, reporting on the effectiveness of layer
normalization, deep architectures, and different ensembling techniques.
| 2017 | Computation and Language |
Dynamic Entity Representations in Neural Language Models | Understanding a long document requires tracking how entities are introduced
and evolve over time. We present a new type of language model, EntityNLM, that
can explicitly model entities, dynamically update their representations, and
contextually generate their mentions. Our model is generative and flexible; it
can model an arbitrary number of entities in context while generating each
entity mention at an arbitrary length. In addition, it can be used for several
different tasks such as language modeling, coreference resolution, and entity
prediction. Experimental results with all these tasks demonstrate that our
model consistently outperforms strong baselines and prior work.
| 2017 | Computation and Language |
Combining Generative and Discriminative Approaches to Unsupervised
Dependency Parsing via Dual Decomposition | Unsupervised dependency parsing aims to learn a dependency parser from
unannotated sentences. Existing work focuses on either learning generative
models using the expectation-maximization algorithm and its variants, or
learning discriminative models using the discriminative clustering algorithm.
In this paper, we propose a new learning strategy that learns a generative
model and a discriminative model jointly based on the dual decomposition
method. Our method is simple and general, yet effectively captures the
advantages of both models and improves their learning results. We tested our
method on the UD treebank and achieved state-of-the-art performance on thirty
languages.
| 2017 | Computation and Language |
Dependency Grammar Induction with Neural Lexicalization and Big Training
Data | We study the impact of big models (in terms of the degree of lexicalization)
and big data (in terms of the training corpus size) on dependency grammar
induction. We experimented with L-DMV, a lexicalized version of Dependency
Model with Valence and L-NDMV, our lexicalized extension of the Neural
Dependency Model with Valence. We find that L-DMV only benefits from very small
degrees of lexicalization and moderate sizes of training corpora. L-NDMV can
benefit from big training data and lexicalization of greater degrees,
especially when enhanced with good model initialization, and it achieves a
result that is competitive with the current state-of-the-art.
| 2017 | Computation and Language |
Enterprise to Computer: Star Trek chatbot | Human interactions and human-computer interactions are strongly influenced by
style as well as content. Adding a persona to a chatbot makes it more
human-like and contributes to a better and more engaging user experience. In
this work, we propose a design for a chatbot that captures the "style" of Star
Trek by incorporating references from the show along with peculiar tones of the
fictional characters therein. Our Enterprise to Computer bot (E2Cbot) treats
Star Trek dialog style and general dialog style differently, using two
recurrent neural network Encoder-Decoder models. The Star Trek dialog style
uses sequence to sequence (SEQ2SEQ) models (Sutskever et al., 2014; Bahdanau et
al., 2014) trained on Star Trek dialogs. The general dialog style uses Word
Graph to shift the response of the SEQ2SEQ model into the Star Trek domain. We
evaluate the bot objectively, in terms of perplexity and word overlap with Star
Trek vocabulary, and subjectively, using human evaluators.
| 2,017 | Computation and Language |
Towards Semantic Modeling of Contradictions and Disagreements: A Case
Study of Medical Guidelines | We introduce a formal distinction between contradictions and disagreements in
natural language texts, motivated by the need to formally reason about
contradictory medical guidelines. This is a novel and potentially very useful
distinction, one that has not been discussed so far in NLP or logic. We also
describe an NLP system capable of automatically finding contradictory medical
guidelines; the system uses a combination of text analysis and information
retrieval modules. We also report positive evaluation results on a small corpus
of contradictory medical recommendations.
| 2,017 | Computation and Language |
Domain Aware Neural Dialog System | We investigate the task of building a domain aware chat system which
generates intelligent responses in a conversation comprising different
domains. The domain, in this case, is the topic or theme of the conversation.
To achieve this, we present DOM-Seq2Seq, a domain aware neural network model
based on the novel technique of using domain-targeted sequence-to-sequence
models (Sutskever et al., 2014) and a domain classifier. The model captures
features from the current utterance and the domains of previous utterances to
facilitate the formation of relevant responses. We evaluate our model on
automatic metrics and compare our performance with the Seq2Seq model.
| 2,017 | Computation and Language |
Exploiting Linguistic Resources for Neural Machine Translation Using
Multi-task Learning | Linguistic resources such as part-of-speech (POS) tags have been extensively
used in statistical machine translation (SMT) frameworks and have yielded
better performances. However, usage of such linguistic annotations in neural
machine translation (NMT) systems has been left under-explored.
In this work, we show that multi-task learning is a successful and easy
approach for introducing additional knowledge into an end-to-end neural
attentional model. By jointly training several natural language processing
(NLP) tasks in one system, we are able to leverage common information and
improve the performance of each individual task.
We analyze the impact of three design decisions in multi-task learning: the
tasks used in training, the training schedule, and the degree of parameter
sharing across the tasks, which is defined by the network architecture. The
experiments are conducted on a German-to-English translation task. As
additional linguistic resources, we exploit POS information and named-entities
(NE). Experiments show that the translation quality can be improved by up to
1.5 BLEU points under the low-resource condition. The performance of the POS
tagger is also improved using the multi-task learning scheme.
| 2,017 | Computation and Language |
Revisiting Activation Regularization for Language RNNs | Recurrent neural networks (RNNs) serve as a fundamental building block for
many sequence tasks across natural language processing. Recent research has
focused on recurrent dropout techniques or custom RNN cells in order to improve
performance. Both of these can require substantial modifications to the machine
learning model or to the underlying RNN configurations. We revisit traditional
regularization techniques, specifically L2 regularization on RNN activations
and slowness regularization over successive hidden states, to improve the
performance of RNNs on the task of language modeling. Both of these techniques
require minimal modification to existing RNN architectures and result in
performance improvements comparable or superior to more complicated
regularization techniques or custom cell architectures. These regularization
techniques can be used without any modification on optimized LSTM
implementations such as the NVIDIA cuDNN LSTM.
| 2,017 | Computation and Language |
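A minimal PyTorch sketch of the two penalties described above; the coefficients are assumed, and the paper applies the L2 term to dropped-out activations while this sketch applies both terms to the same tensor for brevity:

```python
import torch

def activation_regularization(hidden, alpha=2.0, beta=1.0):
    """hidden: RNN outputs of shape (seq_len, batch, hidden_size)."""
    ar = alpha * hidden.pow(2).mean()                      # L2 on activations
    tar = beta * (hidden[1:] - hidden[:-1]).pow(2).mean()  # slowness penalty
    return ar + tar

# Usage: loss = cross_entropy(logits, targets) + activation_regularization(h)
```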
CRF Autoencoder for Unsupervised Dependency Parsing | Unsupervised dependency parsing, which tries to discover linguistic
dependency structures from unannotated data, is a very challenging task. Almost
all previous work on this task focuses on learning generative models. In this
paper, we develop an unsupervised dependency parsing model based on the CRF
autoencoder. The encoder part of our model is discriminative and globally
normalized, which allows us to use rich features as well as universal linguistic
priors. We propose an exact algorithm for parsing as well as a tractable
learning algorithm. We evaluated the performance of our model on eight
multilingual treebanks and found that our model achieved comparable performance
with state-of-the-art approaches.
| 2,017 | Computation and Language |
Reader-Aware Multi-Document Summarization: An Enhanced Model and The
First Dataset | We investigate the problem of reader-aware multi-document summarization
(RA-MDS) and introduce a new dataset for this problem. To tackle RA-MDS, we
extend a variational auto-encoder (VAE) based MDS framework by jointly
considering news documents and reader comments. To evaluate summarization
performance, we prepare a new dataset. We describe the methods
for data collection, aspect annotation, and summary writing as well as
scrutinizing by experts. Experimental results show that reader comments can
improve the summarization performance, which also demonstrates the usefulness
of the proposed dataset. The annotated dataset for RA-MDS is available online.
| 2,017 | Computation and Language |
The UMD Neural Machine Translation Systems at WMT17 Bandit Learning Task | We describe the University of Maryland machine translation systems submitted
to the WMT17 German-English Bandit Learning Task. The task is to adapt a
translation system to a new domain, using only bandit feedback: the system
receives a German sentence to translate, produces an English sentence, and only
gets a scalar score as feedback. Targeting these two challenges (adaptation and
bandit learning), we built a standard neural machine translation system and
extended it in two ways: (1) robust reinforcement learning techniques to learn
effectively from the bandit feedback, and (2) domain adaptation using data
selection from a large corpus of parallel data.
| 2,017 | Computation and Language |
Recurrent Neural Network-Based Sentence Encoder with Gated Attention for
Natural Language Inference | The RepEval 2017 Shared Task aims to evaluate natural language understanding
models for sentence representation, in which a sentence is represented as a
fixed-length vector with neural networks and the quality of the representation
is tested with a natural language inference task. This paper describes our
system (alpha) that is ranked among the top in the Shared Task, on both the
in-domain test set (obtaining a 74.9% accuracy) and on the cross-domain test
set (also attaining a 74.9% accuracy), demonstrating that the model generalizes
well to the cross-domain data. Our model is equipped with intra-sentence
gated-attention composition which helps achieve a better performance. In
addition to submitting our model to the Shared Task, we have also tested it on
the Stanford Natural Language Inference (SNLI) dataset. We obtain an accuracy
of 85.5%, which is the best reported result on SNLI when cross-sentence
attention is not allowed, the same condition enforced in RepEval 2017.
| 2,017 | Computation and Language |
Hashtag Healthcare: From Tweets to Mental Health Journals Using Deep
Transfer Learning | As the popularity of social media platforms continues to rise, an
ever-increasing amount of human communication and self-expression takes place
online. Most recent research has focused on mining social media for public user
opinion about external entities such as product reviews or sentiment towards
political news. However, less attention has been paid to analyzing users'
internalized thoughts and emotions from a mental health perspective. In this
paper, we quantify the semantic difference between public Tweets and private
mental health journals used in online cognitive behavioral therapy. We use
deep transfer learning techniques to analyze the semantic gap between the
two domains. We show that for the task of emotional valence prediction, social
media can be successfully harnessed to create more accurate, robust, and
personalized mental health models. Our results suggest that the semantic gap
between public and private self-expression is small, and that utilizing the
abundance of available social media is one way to overcome the small sample
sizes of mental health data, which are commonly limited by availability and
privacy concerns.
| 2,017 | Computation and Language |
The Argument Reasoning Comprehension Task: Identification and
Reconstruction of Implicit Warrants | Reasoning is a crucial part of natural language argumentation. To comprehend
an argument, one must analyze its warrant, which explains why its claim follows
from its premises. As arguments are highly contextualized, warrants are usually
presupposed and left implicit. Thus, comprehension requires not only language
understanding and logic skills but also common sense. In
this paper we develop a methodology for reconstructing warrants systematically.
We operationalize it in a scalable crowdsourcing process, resulting in a freely
licensed dataset with warrants for 2k authentic arguments from news comments.
On this basis, we present a new challenging task, the argument reasoning
comprehension task. Given an argument with a claim and a premise, the goal is
to choose the correct implicit warrant from two options. Both warrants are
plausible and lexically close, but lead to contradicting claims. A solution to
this task will define a substantial step towards automatic warrant
reconstruction. However, experiments with several neural attention and language
models reveal that current approaches do not suffice.
| 2,022 | Computation and Language |
Massively Multilingual Neural Grapheme-to-Phoneme Conversion | Grapheme-to-phoneme conversion (g2p) is necessary for text-to-speech and
automatic speech recognition systems. Most g2p systems are monolingual: they
require language-specific data or handcrafting of rules. Such systems are
difficult to extend to low resource languages, for which data and handcrafted
rules are not available. As an alternative, we present a neural
sequence-to-sequence approach to g2p which is trained on
spelling--pronunciation pairs in hundreds of languages. The system shares a
single encoder and decoder across all languages, allowing it to utilize the
intrinsic similarities between different writing systems. We show an 11%
improvement in phoneme error rate over an approach based on adapting
high-resource monolingual g2p models to low-resource languages. Our model is
also much more compact relative to previous approaches.
| 2,017 | Computation and Language |
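One common way to let a single shared encoder-decoder serve hundreds of languages is to mark each example with a language token; the sketch below shows that data-preparation step and is an assumption about the wiring, not necessarily the paper's exact scheme:

```python
def make_example(lang, graphemes, phonemes):
    """Prepend a language token so one shared model can condition on the
    source language (hypothetical preprocessing; tokens are illustrative)."""
    src = [f"<{lang}>"] + list(graphemes)   # e.g. ['<eng>', 'c', 'a', 't']
    tgt = list(phonemes)                    # e.g. ['K', 'AE', 'T']
    return src, tgt

src, tgt = make_example("eng", "cat", ["K", "AE", "T"])
```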
Language Design as Information Renormalization | Here we consider some well-known facts in syntax from a physics perspective,
allowing us to establish equivalences between both fields with many
consequences. Mainly, we observe that the operation MERGE, put forward by N.
Chomsky in 1995, can be interpreted as a physical information coarse-graining.
Thus, MERGE in linguistics entails information renormalization in physics,
according to different time scales. We make this point mathematically formal in
terms of language models. In this setting, MERGE amounts to a probability
tensor implementing a coarse-graining, akin to a probabilistic context-free
grammar. The probability vectors of meaningful sentences are given by
stochastic tensor networks (TN) built from diagonal tensors and which are
mostly loop-free, such as Tree Tensor Networks and Matrix Product States, thus
being computationally very efficient to manipulate. We show that this implies
the polynomially-decaying (long-range) correlations experimentally observed in
language, and also provides arguments in favour of certain types of neural
networks for language processing. Moreover, we show how to obtain such language
models from quantum states that can be efficiently prepared on a quantum
computer, and use this to find bounds on the perplexity of the probability
distribution of words in a sentence. Implications of our results are discussed
across several domains.
| 2,022 | Computation and Language |
Predicting the Law Area and Decisions of French Supreme Court Cases | In this paper, we investigate the application of text classification methods
to predict the law area and the decision of cases judged by the French Supreme
Court. We also investigate the influence of the time period in which a ruling
was made over the textual form of the case description and the extent to which
it is necessary to mask the judge's motivation for a ruling to emulate a
real-world test scenario. We report an F1 score of 96% in predicting a
case ruling, 90% in predicting the law area of a case, and 75.9% in estimating
the time span when a ruling was issued, using a linear
Support Vector Machine (SVM) classifier trained on lexical features.
| 2,017 | Computation and Language |
Automatic Question-Answering Using A Deep Similarity Neural Network | Automatic question-answering is a classical problem in natural language
processing, which aims at designing systems that can automatically answer a
question, in the same way a human does. In this work, we propose a deep
learning based model for automatic question-answering. First the questions and
answers are embedded using neural probabilistic modeling. Then a deep
similarity neural network is trained to find the similarity score of a pair of
answer and question. Then for each question, the best answer is found as the
one with the highest similarity score. We first train this model on a
large-scale public question-answering database, and then fine-tune it to
transfer to the customer-care chat data. We have also tested our framework on a
public question-answering database and achieved very good performance.
| 2,017 | Computation and Language |
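A minimal sketch of scoring question-answer pairs with a similarity network; the feed-forward scorer, layer sizes, and concatenation are assumptions, since the abstract specifies only a "deep similarity neural network":

```python
import torch
import torch.nn as nn

class SimilarityNet(nn.Module):
    """Scores a (question, answer) embedding pair; higher means more similar."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, q_emb, a_emb):
        return self.scorer(torch.cat([q_emb, a_emb], dim=-1)).squeeze(-1)

# For each question, pick the candidate answer with the highest score:
# best = max(candidates, key=lambda a_emb: model(q_emb, a_emb).item())
```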
Referenceless Quality Estimation for Natural Language Generation | Traditional automatic evaluation measures for natural language generation
(NLG) use costly human-authored references to estimate the quality of a system
output. In this paper, we propose a referenceless quality estimation (QE)
approach based on recurrent neural networks, which predicts a quality score for
a NLG system output by comparing it to the source meaning representation only.
Our method outperforms traditional metrics and a constant baseline in most
respects; we also show that synthetic data helps to increase correlation
results by 21% compared to the base system. Our results are comparable to
results obtained in similar QE tasks despite the more challenging setting.
| 2,017 | Computation and Language |
A Syllable-based Technique for Word Embeddings of Korean Words | Word embedding has become a fundamental component to many NLP tasks such as
named entity recognition and machine translation. However, popular models that
learn such embeddings are unaware of the morphology of words, so they are not
directly applicable to highly agglutinative languages such as Korean. We
propose a syllable-based learning model for Korean using a convolutional neural
network, in which word representation is composed of trained syllable vectors.
Our model successfully produces morphologically meaningful representation of
Korean words compared to the original Skip-gram embeddings. The results also
show that it is quite robust to the Out-of-Vocabulary problem.
| 2,017 | Computation and Language |
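A sketch of composing a Korean word vector from trained syllable vectors with a convolutional layer; the dimensions, kernel size, and max pooling are assumptions:

```python
import torch
import torch.nn as nn

class SyllableWordEncoder(nn.Module):
    """Builds a word representation from its syllable embeddings via a CNN."""
    def __init__(self, n_syllables, syl_dim=64, word_dim=128, kernel=3):
        super().__init__()
        self.syl_emb = nn.Embedding(n_syllables, syl_dim)
        self.conv = nn.Conv1d(syl_dim, word_dim, kernel, padding=kernel // 2)

    def forward(self, syllable_ids):           # (batch, n_syl)
        x = self.syl_emb(syllable_ids)          # (batch, n_syl, syl_dim)
        x = self.conv(x.transpose(1, 2))        # (batch, word_dim, n_syl)
        return x.max(dim=2).values              # pooled word vector
```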
Extractive Multi Document Summarization using Dynamical Measurements of
Complex Networks | Due to the large amount of textual information available on the Internet, it is
of paramount relevance to use techniques that find relevant and concise
content. A typical task devoted to the identification of informative sentences
in documents is the so called extractive document summarization task. In this
paper, we use complex network concepts to devise an extractive Multi Document
Summarization (MDS) method, which extracts the most central sentences from
several textual sources. In the proposed model, texts are represented as
networks, where nodes represent sentences and the edges are established based
on the number of shared words. Differently from previous works, the
identification of relevant terms is guided by the characterization of nodes via
dynamical measurements of complex networks, including symmetry, accessibility
and absorption time. The evaluation of the proposed system revealed that
excellent results were obtained with particular dynamical measurements,
including those based on the exploration of networks via random walks.
| 2,017 | Computation and Language |
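The graph construction is easy to reproduce; in this sketch PageRank stands in for the paper's dynamical measurements (symmetry, accessibility, absorption time), which are more involved random-walk statistics:

```python
import itertools
import networkx as nx

def summarize(sentences, top_k=3):
    """Nodes are sentences; edge weights count shared words (sketch)."""
    words = [set(s.lower().split()) for s in sentences]
    g = nx.Graph()
    g.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        shared = len(words[i] & words[j])
        if shared:
            g.add_edge(i, j, weight=shared)
    rank = nx.pagerank(g, weight="weight")   # proxy for dynamical measures
    best = sorted(rank, key=rank.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(best)]
```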
Neural Machine Translation with Word Predictions | In the encoder-decoder architecture for neural machine translation (NMT), the
hidden states of the recurrent structures in the encoder and decoder carry the
crucial information about the sentence. These vectors are generated by
parameters which are updated by back-propagation of translation errors through
time. We argue that propagating errors through the end-to-end recurrent
structures is not a direct way of controlling the hidden vectors. In this paper,
we propose to use word predictions as a mechanism for direct supervision. More
specifically, we require these vectors to be able to predict the vocabulary in
target sentence. Our simple mechanism ensures better representations in the
encoder and decoder without using any extra data or annotation. It is also
helpful in reducing the target side vocabulary and improving the decoding
efficiency. Experiments on Chinese-English and German-English machine
translation tasks show BLEU improvements of 4.53 and 1.3 points, respectively.
| 2,017 | Computation and Language |
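The word-prediction supervision can be sketched as an auxiliary bag-of-words loss on a hidden state; `proj` and the binary cross-entropy form are assumptions about one plausible realization:

```python
import torch
import torch.nn.functional as F

def word_prediction_loss(state, target_ids, proj, vocab_size):
    """state: (batch, hidden) vector required to predict the target words;
    proj: an assumed linear layer mapping hidden size to vocabulary size;
    target_ids: (batch, tgt_len) word ids (padding ignored for brevity)."""
    logits = proj(state)                                   # (batch, vocab)
    bow = torch.zeros(state.size(0), vocab_size, device=state.device)
    bow.scatter_(1, target_ids, 1.0)                       # multi-hot targets
    return F.binary_cross_entropy_with_logits(logits, bow)

# Total loss = translation cross-entropy + word_prediction_loss(...)
```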
A Comparison of Neural Models for Word Ordering | We compare several language models for the word-ordering task and propose a
new bag-to-sequence neural model based on attention-based sequence-to-sequence
models. We evaluate the model on a large German WMT data set where it
significantly outperforms existing models. We also describe a novel search
strategy for LM-based word ordering and report results on the English Penn
Treebank. Our best model setup outperforms prior work both in terms of speed
and quality.
| 2,017 | Computation and Language |
Translating Phrases in Neural Machine Translation | Phrases play an important role in natural language understanding and machine
translation (Sag et al., 2002; Villavicencio et al., 2005). However, it is
difficult to integrate them into current neural machine translation (NMT) which
reads and generates sentences word by word. In this work, we propose a method
to translate phrases in NMT by integrating a phrase memory storing target
phrases from a phrase-based statistical machine translation (SMT) system into
the encoder-decoder architecture of NMT. At each decoding step, the phrase
memory is first re-written by the SMT model, which dynamically generates
relevant target phrases with contextual information provided by the NMT model.
Then the proposed model reads the phrase memory to make probability estimations
for all phrases in the phrase memory. If phrase generation is triggered, the
NMT decoder selects an appropriate phrase from the memory to perform phrase
translation and updates its decoding state by consuming the words in the
selected phrase. Otherwise, the NMT decoder generates a word from the
vocabulary as the general NMT decoder does. Experiment results on the Chinese
to English translation show that the proposed model achieves significant
improvements over the baseline on various test sets.
| 2,017 | Computation and Language |
Memory-augmented Neural Machine Translation | Neural machine translation (NMT) has achieved notable success in recent
times, however it is also widely recognized that this approach has limitations
with handling infrequent words and word pairs. This paper presents a novel
memory-augmented NMT (M-NMT) architecture, which stores knowledge about how
words (usually infrequently encountered ones) should be translated in a memory
and then utilizes them to assist the neural model. We use this memory mechanism
to combine the knowledge learned from a conventional statistical machine
translation system and the rules learned by an NMT system, and also propose a
solution for out-of-vocabulary (OOV) words based on this framework. Our
experiments on two Chinese-English translation tasks demonstrated that the
M-NMT architecture outperformed the NMT baseline by $9.0$ and $2.7$ BLEU points
on the two tasks, respectively. Additionally, we found this architecture
resulted in a much more effective OOV treatment compared to competitive
methods.
| 2,017 | Computation and Language |
What is the Role of Recurrent Neural Networks (RNNs) in an Image Caption
Generator? | In neural image captioning systems, a recurrent neural network (RNN) is
typically viewed as the primary `generation' component. This view suggests that
the image features should be `injected' into the RNN. This is in fact the
dominant view in the literature. Alternatively, the RNN can instead be viewed
as only encoding the previously generated words. This view suggests that the
RNN should only be used to encode linguistic features and that only the final
representation should be `merged' with the image features at a later stage.
This paper compares these two architectures. We find that, in general, late
merging outperforms injection, suggesting that RNNs are better viewed as
encoders, rather than generators.
| 2,017 | Computation and Language |
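The two views can be contrasted in a few lines; this PyTorch sketch of the "merge" variant uses assumed sizes and feature names, with the "inject" alternative indicated in a comment:

```python
import torch
import torch.nn as nn

class MergeCaptioner(nn.Module):
    """'Merge' architecture: the RNN encodes only the word history, and the
    image features join just before the output layer."""
    def __init__(self, vocab, emb=256, hid=256, img=2048):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hid, batch_first=True)
        self.out = nn.Linear(hid + img, vocab)        # the merge point

    def forward(self, words, img_feats):               # (B, T), (B, img)
        h, _ = self.rnn(self.embed(words))             # (B, T, hid)
        v = img_feats.unsqueeze(1).expand(-1, h.size(1), -1)
        return self.out(torch.cat([h, v], dim=-1))     # (B, T, vocab)

# In the 'inject' variant, img_feats would instead initialize the RNN state
# or be concatenated to the word embedding at every input step.
```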
Multimodal Classification for Analysing Social Media | Classification of social media data is an important approach in understanding
user behavior on the Web. Although information on social media can be of
different modalities such as texts, images, audio or videos, traditional
approaches in classification usually leverage only one prominent modality.
Techniques that are able to leverage multiple modalities are often complex and
susceptible to the absence of some modalities. In this paper, we present simple
models that combine information from different modalities to classify social
media content and are able to handle the above-mentioned problems of existing
techniques. Our models combine information from different modalities using a
pooling layer, and an auxiliary learning task is used to learn a common feature
space. We demonstrate the performance of our models and their robustness to
the absence of some modalities in the emotion classification domain. Our
approaches, although simple, not only achieve significantly higher
accuracies than traditional fusion approaches but also give comparable results
when only one modality is available.
| 2,017 | Computation and Language |
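A minimal sketch of the pooling-based fusion; projection sizes are assumptions, and the auxiliary common-space task is indicated only in a comment:

```python
import torch
import torch.nn as nn

class PoolFusion(nn.Module):
    """Projects each modality to a common space and max-pools across the
    available modalities, so an absent modality can simply be omitted."""
    def __init__(self, text_dim, image_dim, common=256, n_classes=6):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, common)
        self.image_proj = nn.Linear(image_dim, common)
        self.cls = nn.Linear(common, n_classes)

    def forward(self, text=None, image=None):
        # An auxiliary task on the projected views (not shown) encourages the
        # modalities to align in the shared feature space.
        views = []
        if text is not None:
            views.append(self.text_proj(text))
        if image is not None:
            views.append(self.image_proj(image))
        fused = torch.stack(views).max(dim=0).values   # the pooling layer
        return self.cls(fused)
```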
Regularizing and Optimizing LSTM Language Models | Recurrent neural networks (RNNs), such as long short-term memory networks
(LSTMs), serve as a fundamental building block for many sequence learning
tasks, including machine translation, language modeling, and question
answering. In this paper, we consider the specific problem of word-level
language modeling and investigate strategies for regularizing and optimizing
LSTM-based models. We propose the weight-dropped LSTM which uses DropConnect on
hidden-to-hidden weights as a form of recurrent regularization. Further, we
introduce NT-ASGD, a variant of the averaged stochastic gradient method,
wherein the averaging trigger is determined using a non-monotonic condition as
opposed to being tuned by the user. Using these and other regularization
strategies, we achieve state-of-the-art word level perplexities on two data
sets: 57.3 on Penn Treebank and 65.8 on WikiText-2. In exploring the
effectiveness of a neural cache in conjunction with our proposed model, we
achieve an even lower state-of-the-art perplexity of 52.8 on Penn Treebank and
52.0 on WikiText-2.
| 2,017 | Computation and Language |
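The NT-ASGD trigger described above is simple enough to sketch directly; the window size `n` is an assumed value:

```python
def should_start_averaging(val_losses, n=5):
    """Non-monotonic trigger (sketch): switch from SGD to averaged SGD once
    the current validation loss is no better than the best loss recorded
    more than n evaluations ago."""
    return len(val_losses) > n and val_losses[-1] > min(val_losses[:-n])
```

The weight-dropped LSTM itself is harder to compress honestly: DropConnect is applied to the hidden-to-hidden weight matrix once per forward pass, which is why the fused cuDNN kernel can be used unchanged.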
Video Highlights Detection and Summarization with Lag-Calibration based
on Concept-Emotion Mapping of Crowd-sourced Time-Sync Comments | With the prevalence of video sharing, there are increasing demands for
automatic video digestion such as highlight detection. Recently, platforms with
crowdsourced time-sync video comments have emerged worldwide, providing a good
opportunity for highlight detection. However, this task is non-trivial: (1)
time-sync comments often lag behind their corresponding shot; (2) time-sync
comments are semantically sparse and noisy; (3) to determine which shots are
highlights is highly subjective. The present paper aims to tackle these
challenges by proposing a framework that (1) uses concept-mapped lexical-chains
for lag calibration; (2) models video highlights based on comment intensity and
combination of emotion and concept concentration of each shot; (3) summarizes
each detected highlight using improved SumBasic with emotion and concept
mapping. Experiments on large real-world datasets show that our highlight
detection method and summarization method both outperform other benchmarks with
considerable margins.
| 2,017 | Computation and Language |
Asking Too Much? The Rhetorical Role of Questions in Political Discourse | Questions play a prominent role in social interactions, performing rhetorical
functions that go beyond that of simple informational exchange. The surface
form of a question can signal the intention and background of the person asking
it, as well as the nature of their relation with the interlocutor. While the
informational nature of questions has been extensively examined in the context
of question-answering applications, their rhetorical aspects have been largely
understudied.
In this work we introduce an unsupervised methodology for extracting surface
motifs that recur in questions, and for grouping them according to their latent
rhetorical role. By applying this framework to the setting of question sessions
in the UK parliament, we show that the resulting typology encodes key aspects
of the political discourse---such as the bifurcation in questioning behavior
between government and opposition parties---and reveals new insights into the
effects of a legislator's tenure and political career ambitions.
| 2,017 | Computation and Language |
ISS-MULT: Intelligent Sample Selection for Multi-Task Learning in
Question Answering | Transferring knowledge from a source domain to another domain is useful,
especially when gathering new data is very expensive and time-consuming. Deep
networks have been well-studied for question answering tasks in recent years;
however, no prominent research for transfer learning through deep neural
networks exists in the question answering field. In this paper, two main
methods (INIT and MULT) in this field are examined. Then, a new method named
Intelligent sample selection (ISS-MULT) is proposed to improve the MULT method
for question answering tasks. Different datasets, specifically SQuAD, SelQA,
WikiQA, NewWikiQA and InforBoxQA, are used for evaluation. Moreover, two
different tasks of question answering - answer selection and answer triggering
- are evaluated to examine the effectiveness of transfer learning. The results
show that using transfer learning generally improves the performance if the
corpora are related and are based on the same policy. In addition, using
ISS-MULT can further improve the MULT method for question answering tasks, and
these improvements prove more significant in the answer triggering task.
| 2,019 | Computation and Language |
Corpus-level Fine-grained Entity Typing | This paper addresses the problem of corpus-level entity typing, i.e.,
inferring from a large corpus that an entity is a member of a class such as
"food" or "artist". The application of entity typing we are interested in is
knowledge base completion, specifically, to learn which classes an entity is a
member of. We propose FIGMENT to tackle this problem. FIGMENT is
embedding-based and combines (i) a global model that scores based on aggregated
contextual information of an entity and (ii) a context model that first scores
the individual occurrences of an entity and then aggregates the scores. Each of
the two proposed models has some specific properties. For the global model,
learning high quality entity representations is crucial because it is the only
source used for the predictions. Therefore, we introduce representations using
name and contexts of entities on the three levels of entity, word, and
character. We show each has complementary information and a multi-level
representation is the best. For the context model, we need to use distant
supervision since the context-level labels are not available for entities.
Distant supervised labels are noisy and this harms the performance of models.
Therefore, we introduce and apply new algorithms for noise mitigation using
multi-instance learning. We show the effectiveness of our models in a large
entity typing dataset, built from Freebase.
| 2,018 | Computation and Language |
Reinforced Video Captioning with Entailment Rewards | Sequence-to-sequence models have shown promising improvements on the temporal
task of video captioning, but they optimize word-level cross-entropy loss
during training. First, using policy gradient and mixed-loss methods for
reinforcement learning, we directly optimize sentence-level task-based metrics
(as rewards), achieving significant improvements over the baseline, based on
both automatic metrics and human evaluation on multiple datasets. Next, we
propose a novel entailment-enhanced reward (CIDEnt) that corrects
phrase-matching based metrics (such as CIDEr) to only allow for
logically-implied partial matches and avoid contradictions, achieving further
significant improvements over the CIDEr-reward model. Overall, our
CIDEnt-reward model achieves the new state-of-the-art on the MSR-VTT dataset.
| 2,017 | Computation and Language |
Shortcut-Stacked Sentence Encoders for Multi-Domain Inference | We present a simple sequential sentence encoder for multi-domain natural
language inference. Our encoder is based on stacked bidirectional LSTM-RNNs
with shortcut connections and fine-tuning of word embeddings. The overall
supervised model uses the above encoder to encode two input sentences into two
vectors, and then uses a classifier over the vector combination to label the
relationship between these two sentences as that of entailment, contradiction,
or neutral. Our Shortcut-Stacked sentence encoders achieve strong improvements
over existing encoders on matched and mismatched multi-domain natural language
inference (top non-ensemble single-model result in the EMNLP RepEval 2017
Shared Task (Nangia et al., 2017)). Moreover, they achieve the new
state-of-the-art encoding result on the original SNLI dataset (Bowman et al.,
2015).
| 2,017 | Computation and Language |
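A sketch of the shortcut-stacked encoder; the layer count and sizes are assumptions, and the shortcut pattern shown (each layer reads the word embeddings concatenated with all previous layers' outputs) is one plausible realization of the description above:

```python
import torch
import torch.nn as nn

class ShortcutStackedEncoder(nn.Module):
    """Stacked BiLSTMs with shortcut connections and max pooling over time."""
    def __init__(self, emb_dim=300, hid=256, layers=3):
        super().__init__()
        self.lstms = nn.ModuleList(
            nn.LSTM(emb_dim + i * 2 * hid, hid, bidirectional=True,
                    batch_first=True)
            for i in range(layers))

    def forward(self, emb):                        # (B, T, emb_dim)
        inputs = [emb]
        for lstm in self.lstms:
            out, _ = lstm(torch.cat(inputs, dim=-1))
            inputs.append(out)                     # shortcut to later layers
        return out.max(dim=1).values               # (B, 2 * hid)

# A classifier over [u, v, |u - v|, u * v] then labels the sentence pair (u, v).
```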
Learning how to Active Learn: A Deep Reinforcement Learning Approach | Active learning aims to select a small subset of data for annotation such
that a classifier learned on the data is highly accurate. This is usually done
using heuristic selection methods; however, the effectiveness of such methods is
limited and moreover, the performance of heuristics varies between datasets. To
address these shortcomings, we introduce a novel formulation by reframing
active learning as a reinforcement learning problem and explicitly learning a
data selection policy, where the policy takes the role of the active learning
heuristic. Importantly, our method allows the selection policy learned using
simulation on one language to be transferred to other languages. We demonstrate
our method using cross-lingual named entity recognition, observing uniform
improvements over traditional active learning.
| 2,017 | Computation and Language |
Mining fine-grained opinions on closed captions of YouTube videos with
an attention-RNN | Video reviews are the natural evolution of written product reviews. In this
paper we target this phenomenon and introduce the first dataset created from
closed captions of YouTube product review videos as well as a new attention-RNN
model for aspect extraction and joint aspect extraction and sentiment
classification. Our model provides state-of-the-art performance on aspect
extraction without requiring the usage of hand-crafted features on the SemEval
ABSA corpus, while it outperforms the baseline on the joint task. In our
dataset, the attention-RNN model outperforms the baseline for both tasks, but
we observe important performance drops for all models in comparison to SemEval.
These results, as well as further experiments on domain adaptation for aspect
extraction, suggest that differences between speech and written text, which
have been discussed extensively in the literature, also extend to the domain of
product reviews, where they are relevant for fine-grained opinion mining.
| 2,017 | Computation and Language |
Neural-based Context Representation Learning for Dialog Act
Classification | We explore context representation learning methods in neural-based models for
dialog act classification. We propose and compare extensively different methods
which combine recurrent neural network architectures and attention mechanisms
(AMs) at different context levels. Our experimental results on two benchmark
datasets show consistent improvements compared to the models without contextual
information and reveal that the most suitable AM in the architecture depends on
the nature of the dataset.
| 2,017 | Computation and Language |
Which Encoding is the Best for Text Classification in Chinese, English,
Japanese and Korean? | This article offers an empirical study on the different ways of encoding
Chinese, Japanese, Korean (CJK) and English languages for text classification.
Different encoding levels are studied, including UTF-8 bytes, characters,
words, romanized characters and romanized words. For all encoding levels,
whenever applicable, we provide comparisons with linear models, fastText and
convolutional networks. For convolutional networks, we compare between encoding
mechanisms using character glyph images, one-hot (or one-of-n) encoding, and
embedding. In total there are 473 models, using 14 large-scale text
classification datasets in 4 languages including Chinese, English, Japanese and
Korean. Some conclusions from these results include that byte-level one-hot
encoding based on UTF-8 consistently produces competitive results for
convolutional networks, that word-level n-gram linear models are competitive
even without perfect word segmentation, and that fastText provides the best
result using character-level n-gram encoding but can overfit when the features
are overly rich.
| 2,017 | Computation and Language |
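Byte-level one-hot encoding, the consistently competitive option named above, takes only a few lines; the sequence-length cap here is an arbitrary assumption:

```python
import numpy as np

def bytes_one_hot(text, max_len=512):
    """Encode text as a (max_len, 256) one-hot matrix of its UTF-8 bytes,
    so the same encoder handles Chinese, English, Japanese and Korean
    without any word or character segmentation."""
    data = text.encode("utf-8")[:max_len]
    x = np.zeros((max_len, 256), dtype=np.float32)
    for i, b in enumerate(data):
        x[i, b] = 1.0
    return x

x = bytes_one_hot("自然言語処理")   # works for any script
```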
Recent Trends in Deep Learning Based Natural Language Processing | Deep learning methods employ multiple processing layers to learn hierarchical
representations of data and have produced state-of-the-art results in many
domains. Recently, a variety of model designs and methods have blossomed in the
context of natural language processing (NLP). In this paper, we review
significant deep learning related models and methods that have been employed
for numerous NLP tasks and provide a walk-through of their evolution. We also
summarize, compare and contrast the various models and put forward a detailed
understanding of the past, present and future of deep learning in NLP.
| 2,018 | Computation and Language |
KeyXtract Twitter Model - An Essential Keywords Extraction Model for
Twitter Designed using NLP Tools | Since a tweet is limited to 140 characters, it is ambiguous and difficult for
traditional Natural Language Processing (NLP) tools to analyse. This research
presents KeyXtract which enhances the machine learning based Stanford CoreNLP
Part-of-Speech (POS) tagger with the Twitter model to extract essential
keywords from a tweet. The system was developed using rule-based parsers and
two corpora. The data for the research was obtained from a Twitter profile of a
telecommunication company. The system development consisted of two stages. At
the initial stage, a domain specific corpus was compiled after analysing the
tweets. The POS tagger extracted the Noun Phrases and Verb Phrases while the
parsers removed noise and extracted any other keywords missed by the POS
tagger. The system was evaluated using the Turing Test. After it was tested and
compared against Stanford CoreNLP, the second stage of the system was developed
addressing the shortcomings of the first stage. It was enhanced using Named
Entity Recognition and Lemmatization. The second stage was also tested using
the Turing test and its pass rate increased from 50.00% to 83.33%. The
performance of the final system output was measured using the F1 score.
Stanford CoreNLP with the Twitter model had an average F1 of 0.69 while the
improved system had an F1 of 0.77. The accuracy of the system could be improved
by using a complete domain specific corpus. Since the system used linguistic
features of a sentence, it could be applied to other NLP tools.
| 2,017 | Computation and Language |
Hierarchically-Attentive RNN for Album Summarization and Storytelling | We address the problem of end-to-end visual storytelling. Given a photo
album, our model first selects the most representative (summary) photos, and
then composes a natural language story for the album. For this task, we make
use of the Visual Storytelling dataset and a model composed of three
hierarchically-attentive Recurrent Neural Nets (RNNs) to encode the album
photos, select representative (summary) photos, and compose the story.
Automatic and human evaluations show our model achieves better performance on
selection, generation, and retrieval than baselines.
| 2,017 | Computation and Language |
Identifying Reference Spans: Topic Modeling and Word Embeddings help IR | The CL-SciSumm 2016 shared task introduced an interesting problem: given a
document D and a piece of text that cites D, how do we identify the text spans
of D being referenced by the piece of text? The shared task provided the first
annotated dataset for studying this problem. We present an analysis of our
continued work in improving our system's performance on this task. We
demonstrate how topic models and word embeddings can be used to surpass the
previously best performing system.
| 2,017 | Computation and Language |
Location Name Extraction from Targeted Text Streams using
Gazetteer-based Statistical Language Models | Extracting location names from informal and unstructured social media data
requires the identification of referent boundaries and partitioning compound
names. Variability, particularly systematic variability in location names
(Carroll, 1983), challenges the identification task. Some of this variability
can be anticipated as operations within a statistical language model, in this
case drawn from gazetteers such as OpenStreetMap (OSM), Geonames, and DBpedia.
This permits evaluation of an observed n-gram in Twitter targeted text as a
legitimate location name variant from the same location-context. Using n-gram
statistics and location-related dictionaries, our Location Name Extraction tool
(LNEx) handles abbreviations and automatically filters and augments the
location names in gazetteers (handling name contractions and auxiliary
contents) to help detect the boundaries of multi-word location names and
thereby delimit them in texts.
We evaluated our approach on 4,500 event-specific tweets from three targeted
streams to compare the performance of LNEx against that of ten state-of-the-art
taggers that rely on standard semantic, syntactic and/or orthographic features.
LNEx improved the average F-Score by 33-179%, outperforming all taggers.
Further, LNEx is capable of stream processing.
| 2,018 | Computation and Language |
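The boundary-detection core of such a gazetteer-based matcher can be sketched as greedy longest-match over n-grams; the statistical scoring, abbreviation handling, and gazetteer augmentation that LNEx adds are omitted, and the example entries are hypothetical:

```python
def extract_location_spans(tokens, gazetteer_ngrams, max_n=4):
    """Greedily match the longest token n-gram found in the gazetteer."""
    spans, i = [], 0
    while i < len(tokens):
        for n in range(min(max_n, len(tokens) - i), 0, -1):
            cand = " ".join(tokens[i:i + n]).lower()
            if cand in gazetteer_ngrams:
                spans.append((i, i + n, cand))
                i += n
                break
        else:
            i += 1   # no match starting here; advance one token
    return spans

# gazetteer_ngrams would be built from OSM, Geonames and DBpedia entries,
# e.g. {"new orleans", "orleans parish", "canal st"} (hypothetical).
```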
Towards Neural Speaker Modeling in Multi-Party Conversation: The Task,
Dataset, and Models | Neural network-based dialog systems are attracting increasing attention in
both academia and industry. Recently, researchers have begun to realize the
importance of speaker modeling in neural dialog systems, but established tasks
and datasets are lacking. In this paper, we propose speaker
classification as a surrogate task for general speaker modeling, and collect
massive data to facilitate research in this direction. We further investigate
temporal-based and content-based models of speakers, and propose several
hybrids of them. Experiments show that speaker classification is feasible, and
that hybrid models outperform each single component.
| 2,018 | Computation and Language |
Neural and Statistical Methods for Leveraging Meta-information in
Machine Translation | In this paper, we discuss different methods which use meta information and
richer context that may accompany source language input to improve machine
translation quality. We focus on category information of input text as meta
information, but the proposed methods can be extended to all textual and
non-textual meta information that might be available for the input text or
automatically predicted using the text content. The main novelty of this work
is to use state-of-the-art neural network methods to tackle this problem within
a statistical machine translation (SMT) framework. We observe translation
quality improvements up to 3% in terms of BLEU score in some text categories.
| 2,017 | Computation and Language |