Titles | Abstracts | Years | Categories
---|---|---|---|
BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern- and
Graph-based Information to Identify Discriminative Attributes | This paper describes BomJi, a supervised system for capturing discriminative
attributes in word pairs (e.g. yellow as discriminative for banana over
watermelon). The system relies on an XGB classifier trained on carefully
engineered graph-, pattern- and word embedding based features. It participated
in the SemEval-2018 Task 10 on Capturing Discriminative Attributes, achieving
an F1 score of 0.73 and ranking 2nd out of 26 participant systems.
| 2,018 | Computation and Language |
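To make the setup above concrete, here is a minimal sketch (my own, not the authors' code) of training an XGBoost classifier on pre-computed pair features; the feature matrix and labels are random placeholders standing in for the engineered graph-, pattern- and embedding-based features.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Placeholder features: one row per (pivot, comparison, attribute) triple,
# standing in for the engineered graph-, pattern- and embedding-based features.
X = np.random.rand(1000, 12)
y = np.random.randint(0, 2, size=1000)  # 1 = attribute is discriminative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```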
Inherent Biases in Reference based Evaluation for Grammatical Error
Correction and Text Simplification | The prevalent use of too few references for evaluating text-to-text
generation is known to bias estimates of their quality ({\it low coverage bias}
or LCB). This paper shows that overcoming LCB in Grammatical Error Correction
(GEC) evaluation cannot be attained by re-scaling or by increasing the number
of references in any feasible range, contrary to previous suggestions. This is
due to the long-tailed distribution of valid corrections for a sentence.
Concretely, we show that LCB incentivizes GEC systems to avoid correcting even
when they can generate a valid correction. Consequently, existing systems
obtain comparable or superior performance compared to humans, by making few but
targeted changes to the input. Similar effects on Text Simplification further
support our claims.
| 2,019 | Computation and Language |
Toward Diverse Text Generation with Inverse Reinforcement Learning | Text generation is a crucial task in NLP. Recently, several adversarial
generative models have been proposed to improve the exposure bias problem in
text generation. Though these models gain great success, they still suffer from
the problems of reward sparsity and mode collapse. In order to address these
two problems, in this paper, we employ inverse reinforcement learning (IRL) for
text generation. Specifically, the IRL framework learns a reward function on
training data, and then an optimal policy to maximize the expected total reward.
Similar to the adversarial models, the reward and policy function in IRL are
optimized alternately. Our method has two advantages: (1) the reward function
can produce denser reward signals, and (2) the generation policy, trained with
an "entropy regularized" policy gradient, encourages the generation of more diverse
texts. Experimental results demonstrate that our proposed method can generate
higher quality texts than the previous methods.
| 2,018 | Computation and Language |
Newsroom: A Dataset of 1.3 Million Summaries with Diverse Extractive
Strategies | We present NEWSROOM, a summarization dataset of 1.3 million articles and
summaries written by authors and editors in newsrooms of 38 major news
publications. Extracted from search and social media metadata between 1998 and
2017, these high-quality summaries demonstrate high diversity of summarization
styles. In particular, the summaries combine abstractive and extractive
strategies, borrowing words and phrases from articles at varying rates. We
analyze the extraction strategies used in NEWSROOM summaries against other
datasets to quantify the diversity and difficulty of our new data, and train
existing methods on the data to evaluate its utility and challenges.
| 2,020 | Computation and Language |
Sampling strategies in Siamese Networks for unsupervised speech
representation learning | Recent studies have investigated siamese network architectures for learning
invariant speech representations using same-different side information at the
word level. Here we investigate systematically an often ignored component of
siamese networks: the sampling procedure (how pairs of same vs. different
tokens are selected). We show that sampling strategies taking into account
Zipf's Law, the distribution of speakers and the proportions of same and
different pairs of words significantly impact the performance of the network.
In particular, we show that word frequency compression improves learning across
a large range of variations in number of training pairs. This effect does not
apply to the same extent to the fully unsupervised setting, where the pairs of
same-different words are obtained by spoken term discovery. We apply these
results to pairs of words discovered using an unsupervised algorithm and show
an improvement on state-of-the-art in unsupervised representation learning
using siamese networks.
| 2,018 | Computation and Language |
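A rough illustration of one ingredient mentioned above, word frequency compression when sampling same-word pairs: word types are drawn with probability proportional to a compressed frequency rather than the raw count. The corpus frequencies and the exponent are placeholder assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
word_freq = {"the": 50000, "speech": 800, "siamese": 40, "zipf": 12}  # toy counts
words = list(word_freq)
freqs = np.array([word_freq[w] for w in words], dtype=float)

alpha = 0.5                  # compression exponent (assumed value)
p = freqs ** alpha           # compressed frequencies
p /= p.sum()

# Word types for "same" pairs are drawn under the compressed distribution,
# so rare words are sampled far more often than under raw frequencies.
print(rng.choice(words, size=10, p=p))
```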
Accelerating NMT Batched Beam Decoding with LMBR Posteriors for
Deployment | We describe a batched beam decoding algorithm for NMT with LMBR n-gram
posteriors, showing that LMBR techniques still yield gains on top of the best
recently reported results with Transformers. We also discuss acceleration
strategies for deployment, and the effect of the beam size and batching on
memory and speed.
| 2,018 | Computation and Language |
A Portuguese Native Language Identification Dataset | In this paper we present NLI-PT, the first Portuguese dataset compiled for
Native Language Identification (NLI), the task of identifying an author's first
language based on their second language writing. The dataset includes 1,868
student essays written by learners of European Portuguese, native speakers of
the following L1s: Chinese, English, Spanish, German, Russian, French,
Japanese, Italian, Dutch, Tetum, Arabic, Polish, Korean, Romanian, and Swedish.
NLI-PT includes the original student text and four different types of
annotation: POS, fine-grained POS, constituency parses, and dependency parses.
NLI-PT can be used not only in NLI but also in research on several topics in
the field of Second Language Acquisition and educational NLP. We discuss
possible applications of this dataset and present the results obtained for the
first lexical baseline system for Portuguese NLI.
| 2,018 | Computation and Language |
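A hedged sketch of what a lexical baseline for Portuguese NLI could look like: bag-of-words features and a linear classifier over learner essays. The essays, labels, and classifier choice are placeholders; the paper's exact baseline may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder learner essays and the authors' L1 labels.
essays = ["O carro anda muito rapido na estrada.",
          "Eu gosto de estudar portugues todos os dias.",
          "A cidade onde moro fica longe do mar."]
l1_labels = ["Chinese", "Spanish", "German"]

baseline = make_pipeline(
    TfidfVectorizer(analyzer="word", ngram_range=(1, 2)),  # lexical features only
    LinearSVC(),
)
baseline.fit(essays, l1_labels)
print(baseline.predict(["Eu gosto muito do carro rapido."]))
```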
Syntactic Patterns Improve Information Extraction for Medical Search | Medical professionals search the published literature by specifying the type
of patients, the medical intervention(s) and the outcome measure(s) of
interest. In this paper we demonstrate how features encoding syntactic patterns
improve the performance of state-of-the-art sequence tagging models (both
linear and neural) for information extraction of these medically relevant
categories. We present an analysis of the type of patterns exploited, and the
semantic space induced for these, i.e., the distributed representations learned
for identified multi-token patterns. We show that these learned representations
differ substantially from those of the constituent unigrams, suggesting that
the patterns capture contextual information that is otherwise lost.
| 2,018 | Computation and Language |
Memory-augmented Dialogue Management for Task-oriented Dialogue Systems | Dialogue management (DM) decides the next action of a dialogue system
according to the current dialogue state, and thus plays a central role in
task-oriented dialogue systems. Since dialogue management requires access not
only to local utterances but also to the global semantics of the
entire dialogue session, modeling the long-range history information is a
critical issue. To this end, we propose a novel Memory-Augmented Dialogue
management model (MAD) which employs a memory controller and two additional
memory structures, i.e., a slot-value memory and an external memory. The
slot-value memory tracks the dialogue state by memorizing and updating the
values of semantic slots (for instance, cuisine, price, and location), and the
external memory augments the representation of hidden states of traditional
recurrent neural networks through storing more context information. To update
the dialogue state efficiently, we also propose slot-level attention on user
utterances to extract specific semantic information for each slot. Experiments
show that our model can obtain state-of-the-art performance and outperforms
existing baselines.
| 2,018 | Computation and Language |
Dynamic Sentence Sampling for Efficient Training of Neural Machine
Translation | Traditional neural machine translation (NMT) involves a fixed training
procedure where each sentence is sampled once during each epoch. In reality,
some sentences are well-learned during the initial few epochs; however, using
this approach, the well-learned sentences would continue to be trained along
with those sentences that were not well learned for 10-30 epochs, resulting
in wasted training time. Here, we propose an efficient method to dynamically
sample the sentences in order to accelerate the NMT training. In this approach,
a weight is assigned to each sentence based on the measured difference between
the training costs of two iterations. Further, in each epoch, a certain
percentage of sentences are dynamically sampled according to their weights.
Empirical results based on the NIST Chinese-to-English and the WMT
English-to-German tasks depict that the proposed method can significantly
accelerate the NMT training and improve the NMT performance.
| 2,019 | Computation and Language |
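A simplified sketch of the weighting idea as I read the abstract (not the authors' implementation): each sentence is weighted by the change in its training cost between two iterations, and a fixed fraction of sentences is sampled according to those weights for the next epoch.

```python
import numpy as np

def sample_sentences(prev_costs, curr_costs, keep_ratio=0.8, seed=0):
    """Pick sentence indices for the next epoch, favouring sentences whose
    cost is still changing (i.e. not yet well learned)."""
    weights = np.abs(prev_costs - curr_costs)
    weights = weights / weights.sum()
    n_keep = int(len(weights) * keep_ratio)
    rng = np.random.default_rng(seed)
    return rng.choice(len(weights), size=n_keep, replace=False, p=weights)

prev = np.array([2.1, 0.40, 1.8, 0.30, 2.5])
curr = np.array([1.2, 0.39, 1.0, 0.29, 1.6])
print(sample_sentences(prev, curr, keep_ratio=0.6))
```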
An Annotated Corpus for Machine Reading of Instructions in Wet Lab
Protocols | We describe an effort to annotate a corpus of natural language instructions
consisting of 622 wet lab protocols to facilitate automatic or semi-automatic
conversion of protocols into a machine-readable format and benefit biological
research. Experimental results demonstrate the utility of our corpus for
developing machine learning approaches to shallow semantic parsing of
instructional texts. We make our annotated Wet Lab Protocol Corpus available to
the research community.
| 2,018 | Computation and Language |
Nugget Proposal Networks for Chinese Event Detection | Neural network based models commonly regard event detection as a word-wise
classification task and thus suffer from the mismatch problem between words and
event triggers, especially in languages without natural word delimiters such as
Chinese. In this paper, we propose Nugget Proposal Networks (NPNs), which can
solve the word-trigger mismatch problem by directly proposing entire trigger
nuggets centered at each character regardless of word boundaries. Specifically,
NPNs perform event detection in a character-wise paradigm, where a hybrid
representation for each character is first learned to capture both structural
and semantic information from both characters and words. Then based on learned
representations, trigger nuggets are proposed and categorized by exploiting
character compositional structures of Chinese event triggers. Experiments on
both ACE2005 and TAC KBP 2017 datasets show that NPNs significantly outperform
the state-of-the-art methods.
| 2,018 | Computation and Language |
Adaptive Scaling for Sparse Detection in Information Extraction | This paper focuses on detection tasks in information extraction, where
positive instances are sparsely distributed and models are usually evaluated
using F-measure on positive classes. These characteristics often result in
deficient performance of neural network based detection models. In this paper,
we propose adaptive scaling, an algorithm which can handle the positive
sparsity problem and directly optimize over F-measure via dynamic
cost-sensitive learning. To this end, we borrow the idea of marginal utility
from economics and propose a theoretical framework for instance importance
measuring without introducing any additional hyper-parameters. Experiments show
that our algorithm leads to a more effective and stable training of neural
network based detection models.
| 2,018 | Computation and Language |
Joint Bootstrapping Machines for High Confidence Relation Extraction | Semi-supervised bootstrapping techniques for relationship extraction from
text iteratively expand a set of initial seed instances. Due to the lack of
labeled data, a key challenge in bootstrapping is semantic drift: if a false
positive instance is added during an iteration, then all following iterations
are contaminated. We introduce BREX, a new bootstrapping method that protects
against such contamination by highly effective confidence assessment. This is
achieved by using entity and template seeds jointly (as opposed to just one as
in previous work), by expanding entities and templates in parallel and in a
mutually constraining fashion in each iteration and by introducing
higher-quality similarity measures for templates. Experimental results show that
BREX achieves an F1 that is 0.13 (0.87 vs. 0.74) better than the state of the
art for four relationships.
| 2,018 | Computation and Language |
Capturing Ambiguity in Crowdsourcing Frame Disambiguation | FrameNet is a computational linguistics resource composed of semantic frames,
high-level concepts that represent the meanings of words. In this paper, we
present an approach to gather frame disambiguation annotations in sentences
using a crowdsourcing approach with multiple workers per sentence to capture
inter-annotator disagreement. We perform an experiment over a set of 433
sentences annotated with frames from the FrameNet corpus, and show that the
aggregated crowd annotations achieve an F1 score greater than 0.67 as compared
to expert linguists. We highlight cases where the crowd annotation was correct
even though the expert is in disagreement, arguing for the need to have
multiple annotators per sentence. Most importantly, we examine cases in which
crowd workers could not agree, and demonstrate that these cases exhibit
ambiguity, either in the sentence, frame, or the task itself, and argue that
collapsing such cases to a single, discrete truth value (i.e. correct or
incorrect) is inappropriate, creating arbitrary targets for machine learning.
| 2,018 | Computation and Language |
Multitask Parsing Across Semantic Representations | The ability to consolidate information of different types is at the core of
intelligence, and has tremendous practical value in allowing learning for one
task to benefit from generalizations learned for others. In this paper we
tackle the challenging task of improving semantic parsing performance, taking
UCCA parsing as a test case, and AMR, SDP and Universal Dependencies (UD)
parsing as auxiliary tasks. We experiment on three languages, using a uniform
transition-based system and learning architecture for all parsing tasks.
Despite notable conceptual, formal and domain differences, we show that
multitask learning significantly improves UCCA parsing in both in-domain and
out-of-domain settings.
| 2,018 | Computation and Language |
Word2Vec and Doc2Vec in Unsupervised Sentiment Analysis of Clinical
Discharge Summaries | In this study, we explored application of Word2Vec and Doc2Vec for sentiment
analysis of clinical discharge summaries. We applied unsupervised learning
since the data sets did not have sentiment annotations. Note that unsupervised
learning is a more realistic scenario than supervised learning, which requires
access to a training set of sentiment-annotated data. We aim to detect whether
there exists any underlying bias towards or against a certain disease. We used
SentiWordNet to establish a gold sentiment standard for the data sets and
evaluate the performance of the Word2Vec and Doc2Vec methods. We show that the
Word2Vec and Doc2Vec methods complement each other's results in the sentiment
analysis of the data sets.
| 2,018 | Computation and Language |
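A small sketch of how a SentiWordNet-based sentiment score for a note could be computed (a generic first-sense heuristic, not necessarily the exact procedure used in the study; requires the NLTK tokenizer, WordNet, and SentiWordNet data packages).

```python
from nltk import word_tokenize
from nltk.corpus import sentiwordnet as swn

def sentiwordnet_score(text):
    """Average (positive - negative) score over tokens with SentiWordNet entries."""
    scores = []
    for token in word_tokenize(text.lower()):
        synsets = list(swn.senti_synsets(token))
        if synsets:
            first = synsets[0]             # crude first-sense heuristic
            scores.append(first.pos_score() - first.neg_score())
    return sum(scores) / len(scores) if scores else 0.0

print(sentiwordnet_score("Patient recovered well with no complications"))
```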
Multi-representation Ensembles and Delayed SGD Updates Improve
Syntax-based NMT | We explore strategies for incorporating target syntax into Neural Machine
Translation. We specifically focus on syntax in ensembles containing multiple
sentence representations. We formulate beam search over such ensembles using
WFSTs, and describe a delayed SGD update training procedure that is especially
effective for long representations like linearized syntax. Our approach gives
state-of-the-art performance on a difficult Japanese-English task.
| 2,018 | Computation and Language |
Customized Image Narrative Generation via Interactive Visual Question
Generation and Answering | The image description task has invariably been examined in a static manner with
qualitative presumptions held to be universally applicable, regardless of the
scope or target of the description. In practice, however, different viewers may
pay attention to different aspects of the image, and yield different
descriptions or interpretations under various contexts. Such diversity in
perspectives is difficult to derive with conventional image description
techniques. In this paper, we propose a customized image narrative generation
task, in which the users are interactively engaged in the generation process by
providing answers to the questions. We further attempt to learn the user's
interest via repeating such interactive stages, and to automatically reflect
the interest in descriptions for new images. Experimental results demonstrate
that our model can generate a variety of descriptions from a single image that
cover a wider range of topics than conventional models, while being
customizable to the target user of interaction.
| 2,018 | Computation and Language |
Interactive Language Acquisition with One-shot Visual Concept Learning
through a Conversational Game | Building intelligent agents that can communicate with and learn from humans
in natural language is of great value. Supervised language learning is limited
because it captures mainly the statistics of the training data, and it is hardly
adaptive to new scenarios or flexible enough to acquire new knowledge
without inefficient retraining or catastrophic forgetting. We highlight the
perspective that conversational interaction serves as a natural interface both
for language learning and for novel knowledge acquisition and propose a joint
imitation and reinforcement approach for grounded language learning through an
interactive conversational game. The agent trained with this approach is able
to actively acquire information by asking questions about novel objects and use
the just-learned knowledge in subsequent conversations in a one-shot fashion.
Results compared with other methods verified the effectiveness of the proposed
approach.
| 2,018 | Computation and Language |
"I ain't tellin' white folks nuthin": A quantitative exploration of the
race-related problem of candour in the WPA slave narratives | From 1936-38, the Works Progress Administration interviewed thousands of
former slaves about their life experiences. While these interviews are crucial
to understanding the "peculiar institution" from the standpoint of the slave
himself, issues relating to bias cloud analyses of these interviews. The
problem I investigate is the problem of candour in the WPA slave narratives: it
is widely held in the historical community that the strict racial caste system
of the Deep South compelled black ex-slaves to tell white interviewers what
they thought they wanted to hear, suggesting that there was a significant
difference in candour depending on whether their interviewer was white or black.
In this work, I attempt to quantitatively characterise this race-related
problem of candour. Prior work has either been of an impressionistic,
qualitative nature, or utilised exceedingly simple quantitative methodology. In
contrast, I use more sophisticated statistical methods: in particular word
frequency and sentiment analysis and comparative topic modelling with LDA to
try and identify differences in the content and sentiment expressed by
ex-slaves in front of white interviewers versus black interviewers. While my
sentiment analysis methodology was ultimately unsuccessful due to the
complexity of the task, my word frequency analysis and comparative topic
modelling methods both showed strong evidence that the content expressed in
front of white interviewers was different from that of black interviewers. In
particular, I found that the ex-slaves spoke much more about unfavourable
aspects of slavery like whipping and slave patrollers in front of interviewers
of their own race. I hope that my more-sophisticated statistical methodology
helps improve the robustness of the argument for the existence of this problem
of candour in the slave narratives, which some would seek to deny for
revisionist purposes.
| 2,018 | Computation and Language |
Exploring Conversational Language Generation for Rich Content about
Hotels | Dialogue systems for hotel and tourist information have typically simplified
the richness of the domain, focusing system utterances on only a few selected
attributes such as price, location and type of rooms. However, much more
content is typically available for hotels, often as many as 50 distinct
instantiated attributes for an individual entity. New methods are needed to use
this content to generate natural dialogues for hotel information, and in
general for any domain with such rich complex content. We describe three
experiments aimed at collecting data that can inform an NLG system for hotel
dialogues, and show, not surprisingly, that the sentences in the original
written hotel descriptions provided on webpages for each hotel are
stylistically not a very good match for conversational interaction. We quantify
the stylistic features that characterize the differences between the original
textual data and the collected dialogic data. We plan to use these in stylistic
models for generation, and for scoring retrieved utterances for use in hotel
dialogues.
| 2,018 | Computation and Language |
Accelerating Neural Transformer via an Average Attention Network | With parallelizable attention networks, the neural Transformer is very fast
to train. However, due to the auto-regressive architecture and self-attention
in the decoder, the decoding procedure becomes slow. To alleviate this issue,
we propose an average attention network as an alternative to the self-attention
network in the decoder of the neural Transformer. The average attention network
consists of two layers, with an average layer that models dependencies on
previous positions and a gating layer that is stacked over the average layer to
enhance the expressiveness of the proposed attention network. We apply this
network on the decoder part of the neural Transformer to replace the original
target-side self-attention model. With masking tricks and dynamic programming,
our model enables the neural Transformer to decode sentences over four times
faster than its original version with almost no loss in training time and
translation performance. We conduct a series of experiments on WMT17
translation tasks, where on 6 different language pairs, we obtain robust and
consistent speed-ups in decoding.
| 2,018 | Computation and Language |
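A bare-bones numpy sketch of the average layer (a cumulative mean over previous positions) with a gating layer stacked on top, as described above; the exact gating parameterisation is an assumption, not the paper's formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def average_attention_layer(x, W_i, W_g):
    """x: (seq_len, d) decoder inputs; returns gated cumulative-average outputs."""
    seq_len, d = x.shape
    # Average layer: cumulative mean over positions 1..t (no future positions).
    cum_avg = np.cumsum(x, axis=0) / np.arange(1, seq_len + 1)[:, None]
    # Gating layer stacked on top (assumed input/forget-style form).
    i_gate = sigmoid(x @ W_i)
    g_gate = sigmoid(x @ W_g)
    return i_gate * x + g_gate * cum_avg

d = 8
x = np.random.rand(5, d)
print(average_attention_layer(x, np.random.rand(d, d), np.random.rand(d, d)).shape)
```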
Exploring Emoji Usage and Prediction Through a Temporal Variation Lens | The frequent use of Emojis on social media platforms has created a new form
of multimodal social interaction. Developing methods for the study and
representation of emoji semantics helps to improve future multimodal
communication systems. In this paper, we explore the usage and semantics of
emojis over time. We compare emoji embeddings trained on a corpus of different
seasons and show that some emojis are used differently depending on the time of
the year. Moreover, we propose a method to take into account the time
information for emoji prediction systems, outperforming state-of-the-art
systems. We show that, using the time information, the accuracy of some emojis
can be significantly improved.
| 2,018 | Computation and Language |
KNPTC: Knowledge and Neural Machine Translation Powered Chinese Pinyin
Typo Correction | Chinese pinyin input methods are very important for Chinese language
processing. In practice, users inevitably make typos when they input pinyin.
Moreover, pinyin typo correction has become an increasingly important task with
the popularity of smartphones and the mobile Internet. How to exploit the
knowledge of users' typing behaviors and support typo correction for acronym
pinyin remains a challenging problem. To tackle these challenges, we propose
KNPTC, a novel approach based on neural machine translation (NMT). In contrast
to previous work, KNPTC is able to integrate explicit knowledge into NMT for
pinyin typo correction, and is able to learn to correct a variety of typos
without the guidance of manually selected constraints or language-specific
features. In this approach, we first obtain the transition probabilities
between adjacent letters based on large-scale real-life datasets. Then, we
construct the "ground-truth" alignments of training sentence pairs by utilizing
these probabilities. Furthermore, these alignments are integrated into NMT to
capture sensible pinyin typo correction patterns. KNPTC is applied to correct
typos in real-life datasets, achieving an average improvement of 32.77% in
typo correction accuracy compared with the state-of-the-art system.
| 2,018 | Computation and Language |
Aspect Term Extraction with History Attention and Selective
Transformation | Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment
Analysis, aims to extract explicit aspect expressions from online user reviews.
We present a new framework for tackling ATE. It can exploit two useful clues,
namely opinion summary and aspect detection history. Opinion summary is
distilled from the whole input sentence, conditioned on each current token for
aspect prediction, and thus the tailor-made summary can help aspect prediction
on this token. The other clue, aspect detection history, is distilled from
previous aspect predictions so as to leverage coordinate structures and tagging
schema constraints and thereby improve the current aspect
prediction. Experimental results over four benchmark datasets clearly
demonstrate that our framework can outperform all state-of-the-art methods.
| 2,018 | Computation and Language |
Unsupervised Cross-Lingual Information Retrieval using Monolingual Data
Only | We propose a fully unsupervised framework for ad-hoc cross-lingual
information retrieval (CLIR) which requires no bilingual data at all. The
framework leverages shared cross-lingual word embedding spaces in which terms,
queries, and documents can be represented, irrespective of their actual
language. The shared embedding spaces are induced solely on the basis of
monolingual corpora in two languages through an iterative process based on
adversarial neural networks. Our experiments on the standard CLEF CLIR
collections for three language pairs of varying degrees of language similarity
(English-Dutch/Italian/Finnish) demonstrate the usefulness of the proposed
fully unsupervised approach. Our CLIR models with unsupervised cross-lingual
embeddings outperform baselines that utilize cross-lingual embeddings induced
relying on word-level and document-level alignments. We then demonstrate that
further improvements can be achieved by unsupervised ensemble CLIR models. We
believe that the proposed framework is the first step towards development of
effective CLIR models for language pairs and domains where parallel data are
scarce or non-existent.
| 2,018 | Computation and Language |
Tensorized Self-Attention: Efficiently Modeling Pairwise and Global
Dependencies Together | Neural networks equipped with self-attention have parallelizable computation,
light-weight structure, and the ability to capture both long-range and local
dependencies. Further, their expressive power and performance can be boosted by
using a vector to measure pairwise dependency, but this requires expanding the
alignment matrix into a tensor, which results in memory and computation
bottlenecks. In this paper, we propose a novel attention mechanism called
"Multi-mask Tensorized Self-Attention" (MTSA), which is as fast and as
memory-efficient as a CNN, but significantly outperforms previous
CNN-/RNN-/attention-based models. MTSA 1) captures both pairwise (token2token)
and global (source2token) dependencies by a novel compatibility function
composed of dot-product and additive attentions, 2) uses a tensor to represent
the feature-wise alignment scores for better expressive power but only requires
parallelizable matrix multiplications, and 3) combines multi-head with
multi-dimensional attentions, and applies a distinct positional mask to each
head (subspace), so the memory and computation can be distributed to multiple
heads, each with sequential information encoded independently. The experiments
show that a CNN/RNN-free model based on MTSA achieves state-of-the-art or
competitive performance on nine NLP benchmarks with compelling memory- and
time-efficiency.
| 2,019 | Computation and Language |
Split and Rephrase: Better Evaluation and a Stronger Baseline | Splitting and rephrasing a complex sentence into several shorter sentences
that convey the same meaning is a challenging problem in NLP. We show that
while vanilla seq2seq models can reach high scores on the proposed benchmark
(Narayan et al., 2017), they suffer from memorization of the training set which
contains more than 89% of the unique simple sentences from the validation and
test sets. To address this, we present a new train-development-test data split and
neural models augmented with a copy-mechanism, outperforming the best reported
baseline by 8.68 BLEU and fostering further progress on the task.
| 2,018 | Computation and Language |
Hypothesis Only Baselines in Natural Language Inference | We propose a hypothesis only baseline for diagnosing Natural Language
Inference (NLI). Especially when an NLI dataset assumes inference is occurring
based purely on the relationship between a context and a hypothesis, it follows
that assessing entailment relations while ignoring the provided context is a
degenerate solution. Yet, through experiments on ten distinct NLI datasets, we
find that this approach, which we refer to as a hypothesis-only model, is able
to significantly outperform a majority class baseline across a number of NLI
datasets. Our analysis suggests that statistical irregularities may allow a
model to perform NLI in some datasets beyond what should be achievable without
access to the context.
| 2,018 | Computation and Language |
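A minimal illustration of a hypothesis-only baseline: a classifier that never sees the premise yet still predicts an NLI label, exposing statistical irregularities in the hypotheses. The toy examples and classifier choice are placeholders.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Only the hypothesis of each NLI pair is used; the premise is discarded.
hypotheses = ["A man is sleeping",
              "Nobody is outside",
              "A dog runs in the park",
              "A woman is playing an instrument"]
labels = ["contradiction", "contradiction", "entailment", "entailment"]

hyp_only = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
hyp_only.fit(hypotheses, labels)
print(hyp_only.predict(["Nobody is playing"]))
```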
Constituency Parsing with a Self-Attentive Encoder | We demonstrate that replacing an LSTM encoder with a self-attentive
architecture can lead to improvements to a state-of-the-art discriminative
constituency parser. The use of attention makes explicit the manner in which
information is propagated between different locations in the sentence, which we
use to both analyze our model and propose potential improvements. For example,
we find that separating positional and content information in the encoder can
lead to improved parsing accuracy. Additionally, we evaluate different
approaches for lexical representation. Our parser achieves new state-of-the-art
results for single models trained on the Penn Treebank: 93.55 F1 without the
use of any external data, and 95.13 F1 when using pre-trained word
representations. Our parser also outperforms the previous best-published
accuracy figures on 8 of the 9 languages in the SPMRL dataset.
| 2,018 | Computation and Language |
Automatic Coding for Neonatal Jaundice From Free Text Data Using
Ensemble Methods | This study explores the creation of a machine learning model to automatically
identify whether a Neonatal Intensive Care Unit (NICU) patient was diagnosed
with neonatal jaundice during a particular hospitalization based on their
associated clinical notes. We develop a number of techniques for text
preprocessing and feature selection and compare the effectiveness of different
classification models. We show that using ensemble decision tree
classification, both with AdaBoost and with bagging, outperforms support vector
machines (SVM), the current state-of-the-art technique for neonatal jaundice
coding.
| 2,018 | Computation and Language |
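A compact sketch of the kind of comparison described above: ensemble decision trees (AdaBoost and bagging) versus an SVM over TF-IDF features from clinical notes. The notes and labels are invented placeholders, not data from the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

notes = ["infant with elevated bilirubin, phototherapy started",
         "jaundice noted on day two, bilirubin rising",
         "term newborn, routine feeding, no jaundice noted",
         "healthy infant discharged without complications"]
labels = [1, 1, 0, 0]   # 1 = neonatal jaundice coded

models = {"adaboost": AdaBoostClassifier(n_estimators=100),  # boosted stumps
          "bagging": BaggingClassifier(n_estimators=50),      # bagged trees
          "svm": SVC(kernel="linear")}
for name, clf in models.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    pipe.fit(notes, labels)
    print(name, pipe.predict(["jaundice treated with phototherapy"]))
```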
What you can cram into a single vector: Probing sentence embeddings for
linguistic properties | Although much effort has recently been devoted to training high-quality
sentence embeddings, we still have a poor understanding of what they are
capturing. "Downstream" tasks, often based on sentence classification, are
commonly used to evaluate the quality of sentence representations. However, the
complexity of the tasks makes it difficult to infer what kind of
information is present in the representations. We introduce here 10 probing
tasks designed to capture simple linguistic features of sentences, and we use
them to study embeddings generated by three different encoders trained in eight
distinct ways, uncovering intriguing properties of both encoders and training
methods.
| 2,018 | Computation and Language |
Transformation Networks for Target-Oriented Sentiment Classification | Target-oriented sentiment classification aims at classifying sentiment
polarities over individual opinion targets in a sentence. RNN with attention
seems a good fit for the characteristics of this task, and indeed it achieves
the state-of-the-art performance. After re-examining the drawbacks of attention
mechanism and the obstacles that block CNN to perform well in this
classification task, we propose a new model to overcome these issues. Instead
of attention, our model employs a CNN layer to extract salient features from
the transformed word representations originated from a bi-directional RNN
layer. Between the two layers, we propose a component to generate
target-specific representations of words in the sentence, meanwhile incorporate
a mechanism for preserving the original contextual information from the RNN
layer. Experiments show that our model achieves a new state-of-the-art
performance on a few benchmarks.
| 2,018 | Computation and Language |
Stack-Pointer Networks for Dependency Parsing | We introduce a novel architecture for dependency parsing: \emph{stack-pointer
networks} (\textbf{\textsc{StackPtr}}). Combining pointer
networks~\citep{vinyals2015pointer} with an internal stack, the proposed model
first reads and encodes the whole sentence, then builds the dependency tree
top-down (from root-to-leaf) in a depth-first fashion. The stack tracks the
status of the depth-first search and the pointer networks select one child for
the word at the top of the stack at each step. The \textsc{StackPtr} parser
benefits from the information of the whole sentence and all previously derived
subtree structures, and removes the left-to-right restriction in classical
transition-based parsers. Yet, the number of steps for building any (including
non-projective) parse tree is linear in the length of the sentence just as
other transition-based parsers, yielding an efficient decoding algorithm with
$O(n^2)$ time complexity. We evaluate our model on 29 treebanks spanning 20
languages and different dependency annotation schemas, and achieve
state-of-the-art performance on 21 of them.
| 2,018 | Computation and Language |
A Hierarchical End-to-End Model for Jointly Improving Text Summarization
and Sentiment Classification | Text summarization and sentiment classification both aim to capture the main
ideas of the text but at different levels. Text summarization is to describe
the text within a few sentences, while sentiment classification can be regarded
as a special type of summarization which "summarizes" the text in an even more
abstract fashion, i.e., a sentiment class. Based on this idea, we propose a
hierarchical end-to-end model for joint learning of text summarization and
sentiment classification, where the sentiment classification label is treated
as the further "summarization" of the text summarization output. Hence, the
sentiment classification layer is put upon the text summarization layer, and a
hierarchical structure is derived. Experimental results on Amazon online
reviews datasets show that our model achieves better performance than the
strong baseline systems on both abstractive summarization and sentiment
classification.
| 2,018 | Computation and Language |
Binarizer at SemEval-2018 Task 3: Parsing dependency and deep learning
for irony detection | In this paper, we describe the system submitted for the SemEval 2018 Task 3
(Irony detection in English tweets) Subtask A by the team Binarizer. Irony
detection is a key task in many natural language processing applications. Our
method treats ironic tweets as consisting of smaller parts that carry different
emotions. We break down tweets into separate phrases using a dependency parser.
We then embed those phrases using an LSTM-based neural network model which is
pre-trained to predict emoticons for tweets. Finally, we train a
fully-connected network to achieve classification.
| 2,018 | Computation and Language |
Improving a Neural Semantic Parser by Counterfactual Learning from Human
Bandit Feedback | Counterfactual learning from human bandit feedback describes a scenario where
user feedback on the quality of outputs of a historic system is logged and used
to improve a target system. We show how to apply this learning framework to
neural semantic parsing. From a machine learning perspective, the key challenge
lies in a proper reweighting of the estimator so as to avoid known degeneracies
in counterfactual learning, while still being applicable to stochastic gradient
optimization. To conduct experiments with human users, we devise an easy-to-use
interface to collect human feedback on semantic parses. Our work is the first
to show that semantic parsers can be improved significantly by counterfactual
learning from logged human feedback data.
| 2,018 | Computation and Language |
The Fine Line between Linguistic Generalization and Failure in
Seq2Seq-Attention Models | Seq2Seq based neural architectures have become the go-to architecture to
apply to sequence to sequence language tasks. Despite their excellent
performance on these tasks, recent work has noted that these models usually do
not fully capture the linguistic structure required to generalize beyond the
dense sections of the data distribution \cite{ettinger2017towards}, and as
such, are likely to fail on samples from the tail end of the distribution (such
as inputs that are noisy \citep{belkinovnmtbreak} or of different lengths
\citep{bentivoglinmtlength}). In this paper, we look at a model's ability to
generalize on a simple symbol rewriting task with a clearly defined structure.
We find that the model's ability to generalize this structure beyond the
training distribution depends greatly on the chosen random seed, even when
performance on the standard test set remains the same. This suggests that a
model's ability to capture generalizable structure is highly sensitive.
Moreover, this sensitivity may not be apparent when evaluating it on standard
test sets.
| 2,018 | Computation and Language |
Robustness of sentence length measures in written texts | Hidden structural patterns in written texts have been subject of considerable
research in the last decades. In particular, mapping a text into a time series
of sentence lengths is a natural way to investigate text structure. Typically,
sentence length has been quantified by using measures based on the number of
words and the number of characters, but other variations are possible. To
quantify the robustness of different sentence length measures, we analyzed a
database containing about five hundred books in English. For each book, we
extracted six distinct measures of sentence length, including number of words
and number of characters (taking into account lemmatization and stop words
removal). We compared these six measures for each book by using i) Pearson's
coefficient to investigate linear correlations; ii) Kolmogorov--Smirnov test to
compare distributions; and iii) detrended fluctuation analysis (DFA) to
quantify auto-correlations. We have found that all six measures exhibit very
similar behavior, suggesting that sentence length is a robust measure related
to text structure.
| 2,018 | Computation and Language |
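A short sketch of the kind of comparison described above: computing two of the sentence length measures for a toy text and comparing them with Pearson's correlation and a Kolmogorov-Smirnov test (the DFA step is omitted for brevity).

```python
import numpy as np
from scipy.stats import pearsonr, ks_2samp

sentences = ["Call me Ishmael.",
             "Some years ago, never mind how long precisely, I went to sea.",
             "It is a way I have of driving off the spleen.",
             "There is nothing surprising in this."]

len_words = np.array([len(s.split()) for s in sentences], dtype=float)
len_chars = np.array([len(s) for s in sentences], dtype=float)

r, _ = pearsonr(len_words, len_chars)                      # linear correlation
ks = ks_2samp(len_words / len_words.mean(), len_chars / len_chars.mean())
print("Pearson r =", round(r, 3), "| KS statistic =", round(ks.statistic, 3))
```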
Fast and Scalable Expansion of Natural Language Understanding
Functionality for Intelligent Agents | Fast expansion of natural language functionality of intelligent virtual
agents is critical for achieving engaging and informative interactions.
However, developing accurate models for new natural language domains is a time
and data intensive process. We propose efficient deep neural network
architectures that maximally re-use available resources through transfer
learning. Our methods are applied for expanding the understanding capabilities
of a popular commercial agent and are evaluated on hundreds of new domains,
designed by internal or external developers. We demonstrate that our proposed
methods significantly increase accuracy in low resource settings and enable
rapid development of accurate models with less data.
| 2,018 | Computation and Language |
A Reinforcement Learning Approach to Interactive-Predictive Neural
Machine Translation | We present an approach to interactive-predictive neural machine translation
that attempts to reduce human effort from three directions: Firstly, instead of
requiring humans to select, correct, or delete segments, we employ the idea of
learning from human reinforcements in the form of judgments on the quality of
partial translations. Secondly, human effort is further reduced by using the
entropy of word predictions as uncertainty criterion to trigger feedback
requests. Lastly, online updates of the model parameters after every
interaction allow the model to adapt quickly. We show in simulation experiments
that reward signals on partial translations significantly improve character
F-score and BLEU compared to feedback on full translations only, while human
effort can be reduced to an average of $5$ feedback requests for every
input.
| 2,018 | Computation and Language |
An End-to-end Approach for Handling Unknown Slot Values in Dialogue
State Tracking | We highlight a practical yet rarely discussed problem in dialogue state
tracking (DST), namely handling unknown slot values. Previous approaches
generally assume predefined candidate lists and thus are not designed to output
unknown values, especially when the spoken language understanding (SLU) module
is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E
architecture based on the pointer network (PtrNet) that can effectively extract
unknown slot values while still obtaining state-of-the-art accuracy on the
standard DSTC2 benchmark. We also provide extensive empirical evidence to show
that tracking unknown values can be challenging and our approach can bring
significant improvement with the help of an effective feature dropout
technique.
| 2,018 | Computation and Language |
Incorporating Chinese Radicals Into Neural Machine Translation: Deeper
Than Character Level | In neural machine translation (NMT), researchers face the challenge of
translating unseen (out-of-vocabulary, OOV) words. To address this, some
researchers propose splitting Western languages such as English and
German into sub-words or compounds. In this paper, we try to address this OOV
issue and improve NMT adequacy for Chinese, a harder language whose
characters are even more sophisticated in composition. We integrate the Chinese
radicals into the NMT model with different settings to address the unseen words
challenge in Chinese-to-English translation. This can also be considered a
semantic component of the MT system, since Chinese radicals usually carry the
essential meaning of the words they form.
Meaningful radicals and new characters can be integrated into the NMT systems
with our models. We use an attention-based NMT system as a strong baseline
system. The experiments on standard Chinese-to-English NIST translation shared
task data 2006 and 2008 show that our designed models outperform the baseline
model in a wide range of state-of-the-art evaluation metrics including LEPOR,
BEER, and CharacTER, in addition to BLEU and NIST scores, especially on the
adequacy-level translation. We also have some interesting findings from the
results of our various experiment settings about the performance of words and
characters in Chinese NMT, which behave differently than in other languages. For
instance, fully character-level NMT may perform well, or even at the state of the
art, in some other languages, as researchers have recently demonstrated; in the
Chinese NMT model, however, word boundary knowledge is important for model
learning.
| 2,019 | Computation and Language |
Cross-lingual Candidate Search for Biomedical Concept Normalization | Biomedical concept normalization links concept mentions in texts to a
semantically equivalent concept in a biomedical knowledge base. This task is
challenging as concepts can have different expressions in natural languages,
e.g. paraphrases, which are not necessarily all present in the knowledge base.
Concept normalization of non-English biomedical text is even more challenging
as non-English resources tend to be much smaller and contain fewer synonyms. To
overcome the limitations of non-English terminologies we propose a
cross-lingual candidate search for concept normalization using a
character-based neural translation model trained on a multilingual biomedical
terminology. Our model is trained with Spanish, French, Dutch and German
versions of UMLS. The evaluation of our model is carried out on the French
Quaero corpus, showing that it outperforms most teams of CLEF eHealth 2015 and
2016. Additionally, we compare performance to commercial translators on
Spanish, French, Dutch and German versions of Mantra. Our model performs
similarly well, but is free of charge and can be run locally. This is
particularly important for clinical NLP applications, as medical documents are
subject to strict privacy restrictions.
| 2,018 | Computation and Language |
Upping the Ante: Towards a Better Benchmark for Chinese-to-English
Machine Translation | There are many machine translation (MT) papers that propose novel approaches
and show improvements over their self-defined baselines. The experimental
settings often differ from one paper to another. As such, it is hard to
determine if a proposed approach is really useful and advances the state of the
art. Chinese-to-English translation is a common translation direction in MT
papers, although there is not one widely accepted experimental setting in
Chinese-to-English MT. Our goal in this paper is to propose a benchmark
evaluation setup for Chinese-to-English machine translation, such that the
effectiveness of a new proposed MT approach can be directly compared to
previous approaches. Towards this end, we also built a highly competitive
state-of-the-art MT system trained on a large-scale training set. Our system
outperforms reported results on NIST OpenMT test sets in almost all papers
published in major conferences and journals in computational linguistics and
artificial intelligence in the past 11 years. We argue that a standardized
benchmark on data and performance is important for meaningful comparison.
| 2,018 | Computation and Language |
Extreme Adaptation for Personalized Neural Machine Translation | Every person speaks or writes their own flavor of their native language,
influenced by a number of factors: the content they tend to talk about, their
gender, their social status, or their geographical origin.
When attempting to perform Machine Translation (MT), these variations have a
significant effect on how the system should perform translation, but this is
not captured well by standard one-size-fits-all models.
In this paper, we propose a simple and parameter-efficient adaptation
technique that only requires adapting the bias of the output softmax to each
particular user of the MT system, either directly or through a factored
approximation.
Experiments on TED talks in three languages demonstrate improvements in
translation accuracy, and better reflection of speaker traits in the target
text.
| 2,018 | Computation and Language |
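A tiny numpy sketch of the core idea: the output softmax receives an extra per-user bias over the vocabulary, and only that bias is adapted for each user (the factored approximation mentioned above is omitted; shapes and names are placeholders).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

vocab_size, hidden = 10, 4
W = np.random.rand(vocab_size, hidden)   # shared output projection (frozen)
b = np.zeros(vocab_size)                 # shared output bias (frozen)
user_bias = {u: np.zeros(vocab_size) for u in ["user_a", "user_b"]}

def output_distribution(h, user):
    # During adaptation only user_bias[user] would receive gradient updates.
    return softmax(W @ h + b + user_bias[user])

h = np.random.rand(hidden)
print(output_distribution(h, "user_a").round(3))
```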
A Rank-Based Similarity Metric for Word Embeddings | Word Embeddings have recently imposed themselves as a standard for
representing word meaning in NLP. Semantic similarity between word pairs has
become the most common evaluation benchmark for these representations, with
vector cosine being typically used as the only similarity metric. In this
paper, we report experiments with a rank-based metric for WE, which performs
comparably to vector cosine in similarity estimation and outperforms it in the
recently-introduced and challenging task of outlier detection, thus suggesting
that rank-based measures can improve clustering quality.
| 2,018 | Computation and Language |
Various Approaches to Aspect-based Sentiment Analysis | The problem of aspect-based sentiment analysis deals with classifying
sentiments (negative, neutral, positive) for a given aspect in a sentence. A
traditional sentiment classification task involves treating the entire sentence
as a text document and classifying sentiments based on all the words. Let us
assume we have a sentence such as "the acceleration of this car is fast, but
the reliability is horrible". This can be a difficult sentence because it has
two aspects with conflicting sentiments about the same entity. Considering
machine learning techniques (or deep learning), how do we encode the
information that we are interested in one aspect and its sentiment but not the
other? Let us explore various pre-processing steps, features, and methods used
to facilitate in solving this task.
| 2,018 | Computation and Language |
Chinese NER Using Lattice LSTM | We investigate a lattice-structured LSTM model for Chinese NER, which encodes
a sequence of input characters as well as all potential words that match a
lexicon. Compared with character-based methods, our model explicitly leverages
word and word sequence information. Compared with word-based methods, lattice
LSTM does not suffer from segmentation errors. Gated recurrent cells allow our
model to choose the most relevant characters and words from a sentence for
better NER results. Experiments on various datasets show that lattice LSTM
outperforms both word-based and character-based LSTM baselines, achieving the
best results.
| 2,018 | Computation and Language |
Compositional Representation of Morphologically-Rich Input for Neural
Machine Translation | Neural machine translation (NMT) models are typically trained with fixed-size
input and output vocabularies, which creates an important bottleneck on their
accuracy and generalization capability. As a solution, various studies proposed
segmenting words into sub-word units and performing translation at the
sub-lexical level. However, statistical word segmentation methods have recently
been shown to be prone to morphological errors, which can lead to inaccurate
translations. In this paper, we propose to overcome this problem by replacing
the source-language embedding layer of NMT with a bi-directional recurrent
neural network that generates compositional representations of the input at any
desired level of granularity. We test our approach in a low-resource setting
with five languages from different morphological typologies, and under
different composition assumptions. By training NMT to compose word
representations from character n-grams, our approach consistently outperforms
(from 1.71 to 2.48 BLEU points) NMT learning embeddings of statistically
generated sub-word units.
| 2,018 | Computation and Language |
Exploring Hyper-Parameter Optimization for Neural Machine Translation on
GPU Architectures | Neural machine translation (NMT) has been accelerated by deep learning neural
networks over statistical-based approaches, due to the plethora and
programmability of commodity heterogeneous computing architectures such as
FPGAs and GPUs, and the massive amount of training corpora generated from news
outlets, government agencies, and social media. Training a neural network
classifier entails tuning hyper-parameters to yield the best performance.
Unfortunately, the hyper-parameters for machine translation include discrete
categories as well as continuous options, which makes tuning a combinatorially
explosive problem. This research explores optimizing
hyper-parameters when training deep learning neural networks for machine
translation. Specifically, our work investigates training a language model with
Marian NMT. Results compare NMT under various hyper-parameter settings across a
variety of modern GPU architecture generations in single node and multi-node
settings, revealing insights on which hyper-parameters matter most in terms of
performance, such as words processed per second, convergence rates, and
translation accuracy, and providing insights on how best to achieve
high-performing NMT systems.
| 2,021 | Computation and Language |
Learning Patient Representations from Text | Mining electronic health records for patients who satisfy a set of predefined
criteria is known in medical informatics as phenotyping. Phenotyping has
numerous applications such as outcome prediction, clinical trial recruitment,
and retrospective studies. Supervised machine learning for phenotyping
typically relies on sparse patient representations such as bag-of-words. We
consider an alternative that involves learning patient representations. We
develop a neural network model for learning patient representations and show
that the learned representations are general enough to obtain state-of-the-art
performance on a standard comorbidity detection task.
| 2,018 | Computation and Language |
Dynamic and Static Topic Model for Analyzing Time-Series Document
Collections | For extracting meaningful topics from texts, their structures should be
considered properly. In this paper, we aim to analyze structured time-series
documents such as a collection of news articles and a series of scientific
papers, wherein topics evolve along time depending on multiple topics in the
past and are also related to each other at each time. To this end, we propose a
dynamic and static topic model, which simultaneously considers the dynamic
structures of the temporal topic evolution and the static structures of the
topic hierarchy at each time. We show the results of experiments on collections
of scientific papers, in which the proposed method outperformed conventional
models. Moreover, we show an example of extracted topic structures, which we
found helpful for analyzing research activities.
| 2,018 | Computation and Language |
Zero-shot Sequence Labeling: Transferring Knowledge from Sentences to
Tokens | Can attention- or gradient-based visualization techniques be used to infer
token-level labels for binary sequence tagging problems, using networks trained
only on sentence-level labels? We construct a neural network architecture based
on soft attention, train it as a binary sentence classifier and evaluate
against token-level annotation on four different datasets. Inferring token
labels from a network provides a method for quantitatively evaluating what the
model is learning, along with generating useful feedback in assistance systems.
Our results indicate that attention-based methods are able to predict
token-level labels more accurately, compared to gradient-based methods,
sometimes even rivaling the supervised oracle network.
| 2,018 | Computation and Language |
Multi-Passage Machine Reading Comprehension with Cross-Passage Answer
Verification | Machine reading comprehension (MRC) on real web data usually requires the
machine to answer a question by analyzing multiple passages retrieved by search
engine. Compared with MRC on a single passage, multi-passage MRC is more
challenging, since we are likely to get multiple confusing answer candidates
from different passages. To address this problem, we propose an end-to-end
neural model that enables those answer candidates from different passages to
verify each other based on their content representations. Specifically, we
jointly train three modules that can predict the final answer based on three
factors: the answer boundary, the answer content and the cross-passage answer
verification. The experimental results show that our method outperforms the
baseline by a large margin and achieves the state-of-the-art performance on the
English MS-MARCO dataset and the Chinese DuReader dataset, both of which are
designed for MRC in real-world settings.
| 2,018 | Computation and Language |
Russian word sense induction by clustering averaged word embeddings | The paper reports our participation in the shared task on word sense
induction and disambiguation for the Russian language (RUSSE-2018). Our team
was ranked 2nd for the wiki-wiki dataset (containing mostly homonyms) and 5th
for the bts-rnc and active-dict datasets (containing mostly polysemous words)
among all 19 participants.
The method we employed was extremely naive. It involved representing contexts
of ambiguous words as averaged word embedding vectors, using off-the-shelf
pre-trained distributional models. Then, these vector representations were
clustered with mainstream clustering techniques, thus producing the groups
corresponding to the ambiguous word senses. As a side result, we show that word
embedding models trained on small but balanced corpora can be superior to those
trained on large but noisy data - not only in intrinsic evaluation, but also in
downstream tasks like word sense induction.
| 2,018 | Computation and Language |
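The recipe described in this abstract is simple enough to sketch end to end: average the pre-trained embeddings of each context of the ambiguous word, then cluster the averaged vectors. The sketch below uses gensim and k-means with a placeholder model path and cluster count; the shared-task system may have used different pre-trained models and clustering settings.

import numpy as np
from gensim.models import KeyedVectors
from sklearn.cluster import KMeans

# "model.bin" is a placeholder for an off-the-shelf pre-trained embedding model.
wv = KeyedVectors.load_word2vec_format("model.bin", binary=True)

def induce_senses(contexts, n_senses=2):
    # contexts: list of tokenised contexts of one ambiguous word.
    vecs = []
    for tokens in contexts:
        known = [wv[t] for t in tokens if t in wv]
        vecs.append(np.mean(known, axis=0) if known else np.zeros(wv.vector_size))
    # Cluster the averaged context vectors; each cluster is treated as one sense.
    return KMeans(n_clusters=n_senses, random_state=0).fit_predict(np.vstack(vecs))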
Construction of the Literature Graph in Semantic Scholar | We describe a deployed scalable system for organizing published scientific
literature into a heterogeneous graph to facilitate algorithmic manipulation
and discovery. The resulting literature graph consists of more than 280M nodes,
representing papers, authors, entities and various interactions between them
(e.g., authorships, citations, entity mentions). We reduce literature graph
construction into familiar NLP tasks (e.g., entity extraction and linking),
point out research challenges due to differences from standard formulations of
these tasks, and report empirical results for each task. The methods described
in this paper are used to enable semantic features in www.semanticscholar.org.
| 2,018 | Computation and Language |
Breaking NLI Systems with Sentences that Require Simple Lexical
Inferences | We create a new NLI test set that shows the deficiency of state-of-the-art
models in inferences that require lexical and world knowledge. The new examples
are simpler than the SNLI test set, containing sentences that differ by at most
one word from sentences in the training set. Yet, the performance on the new
test set is substantially worse across systems trained on SNLI, demonstrating
that these systems are limited in their generalization ability, failing to
capture many simple inferences.
| 2,018 | Computation and Language |
Coherence Modeling of Asynchronous Conversations: A Neural Entity Grid
Approach | We propose a novel coherence model for written asynchronous conversations
(e.g., forums, emails), and show its applications in coherence assessment and
thread reconstruction tasks. We conduct our research in two steps. First, we
propose improvements to the recently proposed neural entity grid model by
lexicalizing its entity transitions. Then, we extend the model to asynchronous
conversations by incorporating the underlying conversational structure in the
entity grid representation and feature computation. Our model achieves
state-of-the-art results on standard coherence assessment tasks in monologues
and conversations, outperforming existing models. We also demonstrate its
effectiveness in reconstructing thread structures.
| 2,018 | Computation and Language |
Multi-Domain Neural Machine Translation | We present an approach to neural machine translation (NMT) that supports
multiple domains in a single model and allows switching between the domains
when translating. The core idea is to treat text domains as distinct languages
and use multilingual NMT methods to create multi-domain translation systems. We
show that this approach yields significant translation quality gains over
fine-tuning. We also explore whether knowledge of the pre-specified text domain
is necessary; it turns out that it is, but we also find that reasonably high
translation quality can be reached even when the domain is not known.
| 2,018 | Computation and Language |
Learning Matching Models with Weak Supervision for Response Selection in
Retrieval-based Chatbots | We propose a method that can leverage unlabeled data to learn a matching
model for response selection in retrieval-based chatbots. The method employs a
sequence-to-sequence (Seq2Seq) model as a weak annotator to judge
the matching degree of unlabeled pairs, and then performs learning with both
the weak signals and the unlabeled data. Experimental results on two public
data sets indicate that matching models get significant improvements when they
are learned with the proposed method.
| 2,018 | Computation and Language |
Multimodal Machine Translation with Reinforcement Learning | Multimodal machine translation is one of the applications that integrates
computer vision and language processing. It is a unique task given that in the
field of machine translation, many state-of-the-art algorithms still only
employ textual information. In this work, we explore the effectiveness of
reinforcement learning in multimodal machine translation. We present a novel
algorithm based on the Advantage Actor-Critic (A2C) algorithm that specifically
caters to the multimodal machine translation task of the EMNLP 2018 Third
Conference on Machine Translation (WMT18). We evaluate our proposed algorithm
on the Multi30K multilingual English-German image description dataset and the
Flickr30K image entity dataset. Our model takes two channels of inputs, image
and text, uses translation evaluation metrics as training rewards, and achieves
better results than supervised learning MLE baseline models. Furthermore, we
discuss the prospects and limitations of using reinforcement learning for
machine translation. Our experiment results suggest a promising reinforcement
learning solution to the general task of multimodal sequence to sequence
learning.
| 2,018 | Computation and Language |
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations | Revealing the implicit semantic relation between the constituents of a
noun-compound is important for many NLP applications. It has been addressed in
the literature either as a classification task to a set of pre-defined
relations or by producing free text paraphrases explicating the relations. Most
existing paraphrasing methods lack the ability to generalize, and have a hard
time interpreting infrequent or new noun-compounds. We propose a neural model
that generalizes better by representing paraphrases in a continuous space,
generalizing for both unseen noun-compounds and rare paraphrases. Our model
helps improve performance on both the noun-compound paraphrasing and
classification tasks.
| 2,018 | Computation and Language |
A Graph-to-Sequence Model for AMR-to-Text Generation | The problem of AMR-to-text generation is to recover a text representing the
same meaning as an input AMR graph. The current state-of-the-art method uses a
sequence-to-sequence model, leveraging LSTM for encoding a linearized AMR
structure. Although being able to model non-local semantic information, a
sequence LSTM can lose information from the AMR graph structure, and thus faces
challenges with large graphs, which result in long sequences. We introduce a
neural graph-to-sequence model, using a novel LSTM structure for directly
encoding graph-level semantics. On a standard benchmark, our model shows
superior results to existing methods in the literature.
| 2,018 | Computation and Language |
Sentence-State LSTM for Text Representation | Bi-directional LSTMs are a powerful tool for text representation. On the
other hand, they have been shown to suffer from various limitations due to their
sequential nature. We investigate an alternative LSTM structure for encoding
text, which consists of a parallel state for each word. Recurrent steps are
used to perform local and global information exchange between words
simultaneously, rather than incremental reading of a sequence of words. Results
on various classification and sequence labelling benchmarks show that the
proposed model has strong representation power, giving highly competitive
performances compared to stacked BiLSTM models with similar parameter numbers.
| 2,018 | Computation and Language |
Hierarchical Structured Model for Fine-to-coarse Manifesto Text Analysis | Election manifestos document the intentions, motives, and views of political
parties. They are often used for analysing a party's fine-grained position on a
particular issue, as well as for coarse-grained positioning of a party on the
left--right spectrum. In this paper we propose a two-stage model for
automatically performing both levels of analysis over manifestos. In the first
step we employ a hierarchical multi-task structured deep model to predict fine-
and coarse-grained positions, and in the second step we perform post-hoc
calibration of coarse-grained positions using probabilistic soft logic. We
empirically show that the proposed model outperforms state-of-the-art approaches at
both granularities using manifestos from twelve countries, written in ten
different languages.
| 2,018 | Computation and Language |
Reasoning with Sarcasm by Reading In-between | Sarcasm is a sophisticated speech act which commonly manifests on social
communities such as Twitter and Reddit. The prevalence of sarcasm on the social
web is highly disruptive to opinion mining systems due to not only its tendency
of polarity flipping but also usage of figurative language. Sarcasm commonly
manifests with a contrastive theme either between positive-negative sentiments
or between literal-figurative scenarios. In this paper, we revisit the notion
of modeling contrast in order to reason with sarcasm. More specifically, we
propose an attention-based neural model that looks in-between instead of
across, enabling it to explicitly model contrast and incongruity. We conduct
extensive experiments on six benchmark datasets from Twitter, Reddit and the
Internet Argument Corpus. Our proposed model not only achieves state-of-the-art
performance on all datasets but also enjoys improved interpretability.
| 2,018 | Computation and Language |
One "Ruler" for All Languages: Multi-Lingual Dialogue Evaluation with
Adversarial Multi-Task Learning | Automatically evaluating the performance of open-domain dialogue systems is a
challenging problem. Recent work in neural network-based metrics has shown
promising opportunities for automatic dialogue evaluation. However, existing
methods mainly focus on monolingual evaluation, in which the trained metric is
not flexible enough to transfer across different languages. To address this
issue, we propose an adversarial multi-task neural metric (ADVMT) for
multi-lingual dialogue evaluation, with shared feature extraction across
languages. We evaluate the proposed model in two different languages.
Experiments show that the adversarial multi-task neural metric achieves a high
correlation with human annotation, which yields better performance than
monolingual ones and various existing metrics.
| 2,018 | Computation and Language |
Improving Character-level Japanese-Chinese Neural Machine Translation
with Radicals as an Additional Input Feature | In recent years, Neural Machine Translation (NMT) has been shown to achieve
impressive results. While some additional linguistic features of input words
improve word-level NMT, no additional character features have so far been used
to improve character-level NMT. In this paper, we show that the radicals of
Chinese characters (or kanji), as character feature information, can easily
provide further improvements in character-level NMT. In experiments on the
WAT2016 Japanese-Chinese scientific paper excerpt corpus (ASPEC-JP), we find
that the proposed method improves translation quality in terms of two metrics:
perplexity and BLEU. The character-level NMT model with the radical input
feature achieved a state-of-the-art result of 40.61 BLEU points in the test
set, which is an improvement of about 8.6 BLEU points over the best system on
the WAT2016 Japanese-to-Chinese translation subtask with ASPEC-JP. The
improvements over the character-level NMT with no additional input feature are
up to about 1.5 and 1.4 BLEU points in the development-test set and the test
set of the corpus, respectively.
| 2,018 | Computation and Language |
Bleaching Text: Abstract Features for Cross-lingual Gender Prediction | Gender prediction has typically focused on lexical and social network
features, yielding good performance, but making systems highly language-,
topic-, and platform-dependent. Cross-lingual embeddings circumvent some of
these limitations, but capture gender-specific style less. We propose an
alternative: bleaching text, i.e., transforming lexical strings into more
abstract features. This study provides evidence that such features allow for
better transfer across languages. Moreover, we present a first study on the
ability of humans to perform cross-lingual gender prediction. We find that
human predictive power proves similar to that of our bleached models, and both
perform better than lexical models.
| 2,018 | Computation and Language |
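To make the idea of "bleaching" concrete, the toy function below replaces a token by language-independent abstractions such as length, character shape and vowel/consonant pattern. These particular features are illustrative choices and not necessarily the exact feature set used in the paper.

def bleach(token):
    # Map the lexical string to abstract, largely language-independent features.
    shape = "".join("X" if c.isupper() else "x" if c.islower()
                    else "0" if c.isdigit() else "#" for c in token)
    vowels = "".join("V" if c.lower() in "aeiou" else "C" if c.isalpha() else "#"
                     for c in token)
    return {"length": len(token), "shape": shape, "vc_pattern": vowels}

print(bleach("Amsterdam"))
# {'length': 9, 'shape': 'Xxxxxxxxx', 'vc_pattern': 'VCCCVCCVC'}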
Polite Dialogue Generation Without Parallel Data | Stylistic dialogue response generation, with valuable applications in
personality-based conversational agents, is a challenging task because the
response needs to be fluent, contextually-relevant, as well as
paralinguistically accurate. Moreover, parallel datasets for
regular-to-stylistic pairs are usually unavailable. We present three
weakly-supervised models that can generate diverse polite (or rude) dialogue
responses without parallel data. Our late fusion model (Fusion) merges the
decoder of an encoder-attention-decoder dialogue model with a language model
trained on stand-alone polite utterances. Our label-fine-tuning (LFT) model
prepends to each source sequence a politeness-score scaled label (predicted by
our state-of-the-art politeness classifier) during training, and at test time
is able to generate polite, neutral, and rude responses by simply scaling the
label embedding by the corresponding score. Our reinforcement learning model
(Polite-RL) encourages politeness generation by assigning rewards proportional
to the politeness classifier score of the sampled response. We also present two
retrieval-based polite dialogue model baselines. Human evaluation validates
that while the Fusion and the retrieval-based models achieve politeness with
poorer context-relevance, the LFT and Polite-RL models can produce
significantly more polite responses without sacrificing dialogue quality.
| 2,018 | Computation and Language |
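Of the three models, the label-fine-tuning idea is the easiest to sketch: a politeness label token is prepended to the source sequence and its embedding is scaled by the classifier's politeness score. The PyTorch snippet below is a simplified illustration of that mechanism only, with assumed names and shapes, not the full dialogue model.

import torch
import torch.nn as nn

class LabelFineTuning(nn.Module):
    # Simplified sketch: scale a politeness label embedding by the politeness
    # score and prepend it to the source token embeddings.
    def __init__(self, vocab_size, emb_dim, label_id):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.label_id = label_id

    def forward(self, src_ids, politeness_score):
        tok = self.embed(src_ids)                           # (batch, src_len, dim)
        label = self.embed.weight[self.label_id]            # (dim,)
        label = politeness_score.unsqueeze(-1) * label      # scale by a score in [0, 1]
        return torch.cat([label.unsqueeze(1), tok], dim=1)  # prepend the scaled label

# At test time the same label embedding can be scaled with a high, medium or low
# score to steer polite, neutral or rude responses; the exact scores used in the
# paper are not reproduced here.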
Post-Specialisation: Retrofitting Vectors of Words Unseen in Lexical
Resources | Word vector specialisation (also known as retrofitting) is a portable,
light-weight approach to fine-tuning arbitrary distributional word vector
spaces by injecting external knowledge from rich lexical resources such as
WordNet. By design, these post-processing methods only update the vectors of
words occurring in external lexicons, leaving the representations of all unseen
words intact. In this paper, we show that constraint-driven vector space
specialisation can be extended to unseen words. We propose a novel
post-specialisation method that: a) preserves the useful linguistic knowledge
for seen words; while b) propagating this external signal to unseen words in
order to improve their vector representations as well. Our post-specialisation
approach expresses a non-linear specialisation function explicitly as a deep
neural network, trained to predict specialised vectors from their original
distributional counterparts. The learned function is then used to specialise
vectors of unseen words. This approach, applicable to any post-processing
model, yields considerable gains over the initial specialisation models both in
intrinsic word similarity tasks, and in two downstream tasks: dialogue state
tracking and lexical text simplification. The positive effects persist across
three languages, demonstrating the importance of specialising the full
vocabulary of distributional word vector spaces.
| 2,018 | Computation and Language |
Multimodal Hierarchical Reinforcement Learning Policy for Task-Oriented
Visual Dialog | Creating an intelligent conversational system that understands vision and
language is one of the ultimate goals in Artificial Intelligence
(AI)~\cite{winograd1972understanding}. Extensive research has focused on
vision-to-language generation, however, limited research has touched on
combining these two modalities in a goal-driven dialog context. We propose a
multimodal hierarchical reinforcement learning framework that dynamically
integrates vision and language for task-oriented visual dialog. The framework
jointly learns the multimodal dialog state representation and the hierarchical
dialog policy to improve both dialog task success and efficiency. We also
propose a new technique, state adaptation, to integrate context awareness in
the dialog state representation. We evaluate the proposed framework and the
state adaptation technique in an image guessing game and achieve promising
results.
| 2,018 | Computation and Language |
Improved training of end-to-end attention models for speech recognition | Sequence-to-sequence attention-based models on subword units allow simple
open-vocabulary end-to-end speech recognition. In this work, we show that such
models can achieve competitive results on the Switchboard 300h and LibriSpeech
1000h tasks. In particular, we report the state-of-the-art word error rates
(WER) of 3.54% on the dev-clean and 3.82% on the test-clean evaluation subsets
of LibriSpeech. We introduce a new pretraining scheme by starting with a high
time reduction factor and lowering it during training, which is crucial both
for convergence and final performance. In some experiments, we also use an
auxiliary CTC loss function to help the convergence. In addition, we train long
short-term memory (LSTM) language models on subword units. By shallow fusion,
we report up to 27% relative improvements in WER over the attention baseline
without a language model.
| 2,019 | Computation and Language |
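Shallow fusion, mentioned at the end of the abstract above, combines the attention model's next-subword scores with an external language model at decoding time. A minimal sketch of the per-step score combination follows; the fusion weight is a placeholder that would be tuned on held-out data.

import numpy as np

def shallow_fusion_step(log_p_attention, log_p_lm, lam=0.3):
    # log_p_attention, log_p_lm: log-probabilities over the subword vocabulary
    # for the next output position; lam is an illustrative fusion weight.
    return log_p_attention + lam * log_p_lm

# Beam search would then expand hypotheses using these fused scores instead of
# the attention model's scores alone.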
Investor Reaction to Financial Disclosures Across Topics: An Application
of Latent Dirichlet Allocation | This paper provides a holistic study of how stock prices vary in their
response to financial disclosures across different topics. In doing so, we
specifically shed light on the large number of filings for which no a
priori categorization of their content exists. For this purpose, we utilize an
approach from data mining - namely, latent Dirichlet allocation - as a means of
topic modeling. This technique facilitates our task of automatically
categorizing, ex ante, the content of more than 70,000 regulatory 8-K filings
from U.S. companies. We then evaluate the subsequent stock market reaction. Our
empirical evidence suggests a considerable discrepancy among various types of
news stories in terms of their relevance and impact on financial markets. For
instance, we find a statistically significant abnormal return in response to
earnings results and credit rating, but also for disclosures regarding business
strategy, the health sector, as well as mergers and acquisitions. Our results
yield findings that benefit managers, investors and policy-makers by indicating
how regulatory filings should be structured and the topics most likely to
precede changes in stock valuations.
| 2,018 | Computation and Language |
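As a rough illustration of the pipeline described above, the snippet below fits an LDA topic model to a few toy stand-ins for 8-K filings with scikit-learn; the resulting per-filing topic proportions are the kind of ex-ante categorization the study relates to subsequent abnormal returns. The topic count, preprocessing and downstream event-study step are all simplified assumptions here.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

filings = [   # toy stand-ins for regulatory 8-K filing texts
    "company reports quarterly earnings results and declares dividend",
    "rating agency downgrades the company's long term credit rating",
    "completion of merger and acquisition of a health care business",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(filings)
lda = LatentDirichletAllocation(n_components=3, random_state=0)  # topic count is illustrative
topic_proportions = lda.fit_transform(counts)  # per-filing topic mixture, categorized ex ante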
Character-level Chinese-English Translation through ASCII Encoding | Character-level Neural Machine Translation (NMT) models have recently
achieved impressive results on many language pairs. They mainly do well for
Indo-European language pairs, where the languages share the same writing
system. However, for translating between Chinese and English, the gap between
the two different writing systems poses a major challenge because of a lack of
systematic correspondence between the individual linguistic units. In this
paper, we enable character-level NMT for Chinese, by breaking down Chinese
characters into linguistic units similar to that of Indo-European languages. We
use the Wubi encoding scheme, which preserves the original shape and semantic
information of the characters, while also being reversible. We show promising
results from training Wubi-based models on the character- and subword-level
with recurrent as well as convolutional models.
| 2,018 | Computation and Language |
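The preprocessing step is the key idea: each Chinese character is rewritten as its Wubi keystroke sequence so that character-level NMT operates on Latin-like units, and the mapping stays reversible. The sketch below shows the mechanics only; the two codes in the table are placeholders rather than verified Wubi codes, and in practice a complete mapping table would be loaded.

SEP = "\u2581"                       # boundary marker that keeps the encoding reversible
WUBI = {"我": "trnt", "们": "wun"}   # placeholder entries, not verified Wubi codes

def to_wubi(text):
    return "".join(WUBI.get(ch, ch) + SEP for ch in text)

def from_wubi(encoded):
    reverse = {v: k for k, v in WUBI.items()}
    return "".join(reverse.get(piece, piece) for piece in encoded.split(SEP) if piece)

assert from_wubi(to_wubi("我们")) == "我们"   # round-trip check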
Learning Word Embeddings for Low-resource Languages by PU Learning | Word embedding is a key component in many downstream applications in
processing natural languages. Existing approaches often assume the existence of
a large collection of text for learning effective word embedding. However, such
a corpus may not be available for some low-resource languages. In this paper,
we study how to effectively learn a word embedding model on a corpus with only
a few million tokens. In such a situation, the co-occurrence matrix is sparse
as the co-occurrences of many word pairs are unobserved. In contrast to
existing approaches, which often only sample a few unobserved word pairs as negative
samples, we argue that the zero entries in the co-occurrence matrix also
provide valuable information. We then design a Positive-Unlabeled Learning
(PU-Learning) approach to factorize the co-occurrence matrix and validate the
proposed approaches in four different languages.
| 2,018 | Computation and Language |
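One way to picture the proposal is as a weighted matrix factorization in which zero co-occurrence cells are kept as unlabeled evidence with a small weight instead of being ignored. The NumPy sketch below is only a caricature of that weighting idea; the paper's actual PU-learning objective and optimizer differ.

import numpy as np

def pu_style_factorization(cooc, dim=50, zero_weight=0.1, iters=200, lr=0.05, seed=0):
    # cooc: (V x V) word-context co-occurrence (or PMI-style) matrix.
    rng = np.random.default_rng(seed)
    V = cooc.shape[0]
    W = 0.1 * rng.standard_normal((V, dim))            # word factors (the embeddings)
    C = 0.1 * rng.standard_normal((V, dim))            # context factors
    weights = np.where(cooc != 0, 1.0, zero_weight)    # zeros are down-weighted, not ignored
    for _ in range(iters):
        resid = weights * (W @ C.T - cooc)             # weighted reconstruction error
        W -= lr * (resid @ C) / V
        C -= lr * (resid.T @ W) / V
    return W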
Opinion Fraud Detection via Neural Autoencoder Decision Forest | Online reviews play an important role in influencing buyers' daily purchase
decisions. However, fake and meaningless reviews, which cannot reflect users'
genuine purchase experience and opinions, widely exist on the Web and pose
great challenges for users to make the right choices. Therefore, it is desirable to
build a fair model that evaluates the quality of products by distinguishing
spamming reviews. We present an end-to-end trainable unified model to leverage
the appealing properties from Autoencoder and random forest. A stochastic
decision tree model is implemented to guide the global parameter learning
process. Extensive experiments were conducted on a large Amazon review dataset.
The proposed model consistently outperforms a series of compared methods.
| 2,018 | Computation and Language |
A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for
Abstractive Text Summarization | In this paper, we propose a deep learning approach to tackle the automatic
summarization tasks by incorporating topic information into the convolutional
sequence-to-sequence (ConvS2S) model and using self-critical sequence training
(SCST) for optimization. Through jointly attending to topics and word-level
alignment, our approach can improve coherence, diversity, and informativeness
of generated summaries via a biased probability generation mechanism. On the
other hand, reinforcement training, like SCST, directly optimizes the proposed
model with respect to the non-differentiable metric ROUGE, which also avoids
the exposure bias during inference. We carry out the experimental evaluation
with state-of-the-art methods over the Gigaword, DUC-2004, and LCSTS datasets.
The empirical results demonstrate the superiority of our proposed method in
abstractive summarization.
| 2,020 | Computation and Language |
On the Limitations of Unsupervised Bilingual Dictionary Induction | Unsupervised machine translation---i.e., not assuming any cross-lingual
supervision signal, whether a dictionary, translations, or comparable
corpora---seems impossible, but nevertheless, Lample et al. (2018) recently
proposed a fully unsupervised machine translation (MT) model. The model relies
heavily on an adversarial, unsupervised alignment of word embedding spaces for
bilingual dictionary induction (Conneau et al., 2018), which we examine here.
Our results identify the limitations of current unsupervised MT: unsupervised
bilingual dictionary induction performs much worse on morphologically rich
languages that are not dependent marking, when monolingual corpora from
different domains or different embedding algorithms are used. We show that a
simple trick, exploiting a weak supervision signal from identical words,
enables more robust induction, and establish a near-perfect correlation between
unsupervised bilingual dictionary induction performance and a previously
unexplored graph similarity metric.
| 2,018 | Computation and Language |
Adversarial Contrastive Estimation | Learning by contrasting positive and negative samples is a general strategy
adopted by many methods. Noise contrastive estimation (NCE) for word embeddings
and translating embeddings for knowledge graphs are examples in NLP employing
this approach. In this work, we view contrastive learning as an abstraction of
all such methods and augment the negative sampler into a mixture distribution
containing an adversarially learned sampler. The resulting adaptive sampler
finds harder negative examples, which forces the main model to learn a better
representation of the data. We evaluate our proposal on learning word
embeddings, order embeddings and knowledge graph embeddings and observe both
faster convergence and improved results on multiple metrics.
| 2,018 | Computation and Language |
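The core change relative to plain NCE is the negative sampler: a mixture of the usual fixed noise distribution and an adversarially trained sampler that proposes harder negatives. The sketch below shows only the mixture sampling step, with the adversary's distribution passed in as a given probability vector; how that sampler is trained against the main model is omitted.

import numpy as np

def sample_negatives(unigram_probs, adversary_probs, rho=0.5, k=5, rng=None):
    # With probability rho fall back to the fixed unigram noise distribution,
    # otherwise draw from the adversarially learned sampler (represented here
    # as a plain probability vector; its training is not shown).
    rng = rng or np.random.default_rng(0)
    negatives = []
    for _ in range(k):
        probs = unigram_probs if rng.random() < rho else adversary_probs
        negatives.append(int(rng.choice(len(probs), p=probs)))
    return negatives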
Three tree priors and five datasets: A study of the effect of tree
priors in Indo-European phylogenetics | The age of the root of the Indo-European language family has received much
attention since the application of Bayesian phylogenetic methods by Gray and
Atkinson (2003). The root age of the Indo-European family has tended to decrease
from an age that supported the Anatolian origin hypothesis to an age that
supports the Steppe origin hypothesis with the application of new models (Chang
et al., 2015). However, none of the published work in Indo-European
phylogenetics has studied the effect of tree priors on phylogenetic analyses of the
Indo-European family. In this paper, I intend to fill this gap by exploring the
effect of tree priors on different aspects of the Indo-European family's
phylogenetic inference. I apply three tree priors---Uniform, Fossilized
Birth-Death (FBD), and Coalescent---to five publicly available datasets of the
Indo-European language family. I evaluate the posterior distribution of the
trees from the Bayesian analysis using Bayes Factor, and find that there is
support for the Steppe origin hypothesis in the case of two tree priors. I
report the median and 95% highest posterior density (HPD) interval of the root
ages for all three tree priors. A model comparison suggested that either the
Uniform prior or the FBD prior is more suitable than the Coalescent prior for
the datasets belonging to the Indo-European language family.
| 2,018 | Computation and Language |
Automatic Article Commenting: the Task and Dataset | Comments of online articles provide extended views and improve user
engagement. Automatically making comments thus becomes a valuable functionality
for online forums, intelligent chatbots, etc. This paper proposes the new task
of automatic article commenting, and introduces a large-scale Chinese dataset
with millions of real comments and a human-annotated subset characterizing the
comments' varying quality. Incorporating the human bias of comment quality, we
further develop automatic metrics that generalize a broad set of popular
reference-based metrics and exhibit greatly improved correlations with human
evaluations.
| 2,018 | Computation and Language |
Statistical Analysis on E-Commerce Reviews, with Sentiment
Classification using Bidirectional Recurrent Neural Network (RNN) | Understanding customer sentiments is of paramount importance in marketing
strategies today. Not only will it give companies an insight as to how
customers perceive their products and/or services, but it will also give them
an idea of how to improve their offers. This paper attempts to understand the
correlation of different variables in customer reviews on a women's clothing
e-commerce platform, and to classify whether each review recommends the
reviewed product and whether it expresses positive, negative, or neutral
sentiment. To achieve these goals, we employed univariate and multivariate
analyses on dataset features except for review titles and review texts, and we
implemented a bidirectional recurrent neural network (RNN) with long short-term
memory (LSTM) units for recommendation and sentiment classification. Results
have shown that a recommendation is a strong indicator of a positive sentiment
score, and vice-versa. On the other hand, ratings in product reviews are fuzzy
indicators of sentiment scores. We also found that the bidirectional LSTM
was able to reach an F1-score of 0.88 for recommendation classification, and
0.93 for sentiment classification.
| 2,020 | Computation and Language |
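A bidirectional LSTM classifier of the kind described above can be set up in a few lines of Keras; the vocabulary size and layer widths below are illustrative guesses rather than the study's actual configuration, and the recommendation task would use a separate binary output head.

import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMB_DIM, UNITS = 20000, 128, 64   # illustrative hyperparameters

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM),
    layers.Bidirectional(layers.LSTM(UNITS)),   # reads the review in both directions
    layers.Dense(3, activation="softmax"),      # positive / neutral / negative
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])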
Incorporating Subword Information into Matrix Factorization Word
Embeddings | The positive effect of adding subword information to word embeddings has been
demonstrated for predictive models. In this paper we investigate whether
similar benefits can also be derived from incorporating subwords into counting
models. We evaluate the impact of different types of subwords (n-grams and
unsupervised morphemes), with results confirming the importance of subword
information in learning representations of rare and out-of-vocabulary words.
| 2,018 | Computation and Language |
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted
Sum | LSTMs were introduced to combat vanishing gradients in simple RNNs by
augmenting them with gated additive recurrent connections. We present an
alternative view to explain the success of LSTMs: the gates themselves are
versatile recurrent models that provide more representational power than
previously appreciated. We do this by decoupling the LSTM's gates from the
embedded simple RNN, producing a new class of RNNs where the recurrence
computes an element-wise weighted sum of context-independent functions of the
input. Ablations on a range of problems demonstrate that the gating mechanism
alone performs as well as an LSTM in most settings, strongly suggesting that
the gates are doing much more in practice than just alleviating vanishing
gradients.
| 2,018 | Computation and Language |
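The "element-wise weighted sum" reading follows from unrolling the memory-cell recurrence (assuming $c_0 = 0$):

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \;=\; \sum_{j=1}^{t} \Bigl( i_j \odot \prod_{k=j+1}^{t} f_k \Bigr) \odot \tilde{c}_j ,$$

so the cell state is a gate-weighted sum of the candidate updates $\tilde{c}_j$; in the ablations described above, each $\tilde{c}_j$ is restricted to a context-independent function of the input $x_j$.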
Neural Machine Translation Decoding with Terminology Constraints | Despite the impressive quality improvements yielded by neural machine
translation (NMT) systems, controlling their translation output to adhere to
user-provided terminology constraints remains an open problem. We describe our
approach to constrained neural decoding based on finite-state machines and
multi-stack decoding which supports target-side constraints as well as
constraints with corresponding aligned input text spans. We demonstrate the
performance of our framework on multiple translation tasks and motivate the
need for constrained decoding with attentions as a means of reducing
misplacement and duplication when translating user constraints.
| 2,018 | Computation and Language |
Discourse-Aware Neural Rewards for Coherent Text Generation | In this paper, we investigate the use of discourse-aware rewards with
reinforcement learning to guide a model to generate long, coherent text. In
particular, we propose to learn neural rewards to model cross-sentence ordering
as a means to approximate desired discourse structure. Empirical results
demonstrate that a generator trained with the learned reward produces more
coherent and less repetitive text than models trained with cross-entropy or
with reinforcement learning with commonly used scores as rewards.
| 2,018 | Computation and Language |
The Evolution of Popularity and Images of Characters in Marvel Cinematic
Universe Fanfictions | This analysis proposes a new topic model to study the yearly trends in Marvel
Cinematic Universe fanfictions on three levels: character popularity, character
images/topics, and vocabulary pattern of topics. It is found that character
appearances in fanfictions have become more diverse over the years thanks to
constant introduction of new characters in feature films, and in the case of
Captain America, multi-dimensional character development is well-received by
the fanfiction world.
| 2,018 | Computation and Language |
SlugNERDS: A Named Entity Recognition Tool for Open Domain Dialogue
Systems | In dialogue systems, the tasks of named entity recognition (NER) and named
entity linking (NEL) are vital preprocessing steps for understanding user
intent, especially in open domain interaction where we cannot rely on
domain-specific inference. UCSC's effort as one of the funded teams in the 2017
Amazon Alexa Prize Contest has yielded Slugbot, an open domain social bot,
aimed at casual conversation. We discovered several challenges specifically
associated with both NER and NEL when building Slugbot, such as that the NE
labels are too coarse-grained or the entity types are not linked to a useful
ontology. Moreover, we have discovered that traditional approaches do not
perform well in our context: even systems designed to operate on tweets or
other social media data do not work well in dialogue systems. In this paper, we
introduce Slugbot's Named Entity Recognition for dialogue Systems (SlugNERDS),
a NER and NEL tool which is optimized to address these issues. We describe two
new resources that we are building as part of this work: SlugEntityDB and
SchemaActuator. We believe these resources will be useful for the research
community.
| 2,018 | Computation and Language |
hyperdoc2vec: Distributed Representations of Hypertext Documents | Hypertext documents, such as web pages and academic papers, are of great
importance in delivering information in our daily life. Although being
effective on plain documents, conventional text embedding methods suffer from
information loss if directly adapted to hyper-documents. In this paper, we
propose a general embedding approach for hyper-documents, namely, hyperdoc2vec,
along with four criteria characterizing necessary information that
hyper-document embedding models should preserve. Systematic comparisons are
conducted between hyperdoc2vec and several competitors on two tasks, i.e.,
paper classification and citation recommendation, in the academic paper domain.
Analyses and experiments both validate the superiority of hyperdoc2vec to other
models w.r.t. the four criteria.
| 2,018 | Computation and Language |
Learning Domain-Sensitive and Sentiment-Aware Word Embeddings | Word embeddings have been widely used in sentiment classification because of
their efficacy for semantic representations of words. Given reviews from
different domains, some existing methods for word embeddings exploit sentiment
information, but they cannot produce domain-sensitive embeddings. On the other
hand, some other existing methods can generate domain-sensitive word
embeddings, but they cannot distinguish words with similar contexts but
opposite sentiment polarity. We propose a new method for learning
domain-sensitive and sentiment-aware embeddings that simultaneously capture the
information of sentiment semantics and domain sensitivity of individual words.
Our method can automatically determine and produce domain-common embeddings and
domain-specific embeddings. The differentiation of domain-common and
domain-specific words enables the model to benefit from data augmentation of
common semantics across multiple domains while capturing the varied semantics
of specific words from different domains at the same time. Experimental results show that
our model provides an effective way to learn domain-sensitive and
sentiment-aware word embeddings which benefit sentiment classification at both
sentence level and lexicon term level.
| 2,018 | Computation and Language |
Training Classifiers with Natural Language Explanations | Training accurate classifiers requires many labels, but each label provides
only limited information (one bit for binary classification). In this work, we
propose BabbleLabble, a framework for training classifiers in which an
annotator provides a natural language explanation for each labeling decision. A
semantic parser converts these explanations into programmatic labeling
functions that generate noisy labels for an arbitrary amount of unlabeled data,
which is used to train a classifier. On three relation extraction tasks, we
find that users are able to train classifiers with comparable F1 scores
5-100$\times$ faster by providing explanations instead of just labels.
Furthermore, given the inherent imperfection of labeling functions, we find
that a simple rule-based semantic parser suffices.
| 2,018 | Computation and Language |
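A toy version of the explanation-to-labeling-function step makes the framework concrete: an (extremely simplified, regex-based) parser turns a natural language justification into a function that noisily labels unlabeled candidate pairs. The grammar, label convention and explanation phrasing below are assumptions for illustration; the actual BabbleLabble semantic parser is far more general.

import re

def parse_explanation(explanation):
    # Handles a single toy pattern; returns a labeling function or None.
    m = re.match(r'true because the word "(.+)" appears between the two (entities|people)',
                 explanation.strip().lower())
    if not m:
        return None
    keyword = m.group(1)
    return lambda between_text: 1 if keyword in between_text.lower() else 0  # 1 = positive, 0 = no vote

lf = parse_explanation('True because the word "married" appears between the two people')
print(lf("and his wife , whom he married in 1990 ,"))   # noisy label: 1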
Towards Inference-Oriented Reading Comprehension: ParallelQA | In this paper, we investigate the tendency of end-to-end neural Machine
Reading Comprehension (MRC) models to match shallow patterns rather than
perform inference-oriented reasoning on RC benchmarks. We aim to test the
ability of these systems to answer questions which focus on referential
inference. We propose ParallelQA, a strategy to formulate such questions using
parallel passages. We also demonstrate that existing neural models fail to
generalize well to this setting.
| 2,018 | Computation and Language |
A comparable study of modeling units for end-to-end Mandarin speech
recognition | End-to-end speech recognition has become increasingly popular for Mandarin and
has achieved impressive performance. Mandarin is a tonal language that differs
from English and requires special treatment of the acoustic modeling units.
Several kinds of modeling units have been used for Mandarin, such as phonemes,
syllables and Chinese characters. In this work, we explore two major end-to-end
models: the connectionist temporal classification (CTC) model and the
attention-based encoder-decoder model for Mandarin speech recognition. We
compare the performance of three differently scaled modeling units:
context-dependent phonemes (CDP), syllables with tone, and Chinese characters.
We find that all types of modeling units achieve comparable character error
rates (CER) with the CTC model, and that the Chinese-character attention model
performs better than the syllable attention model. Furthermore, we find that
the Chinese character is a reasonable unit for Mandarin speech recognition. On
the DidiCallcenter task, the Chinese-character attention model achieves a CER
of 5.68% and the CTC model obtains a CER of 7.29%; on the DidiReading task, the
CERs are 4.89% and 5.79%, respectively. Moreover, the attention model achieves
better performance than the CTC model on both datasets.
| 2,018 | Computation and Language |
Hybrid semi-Markov CRF for Neural Sequence Labeling | This paper proposes hybrid semi-Markov conditional random fields (SCRFs) for
neural sequence labeling in natural language processing. Based on conventional
conditional random fields (CRFs), SCRFs have been designed for the tasks of
assigning labels to segments by extracting features from and describing
transitions between segments instead of words. In this paper, we improve the
existing SCRF methods by employing word-level and segment-level information
simultaneously. First, word-level labels are utilized to derive the segment
scores in SCRFs. Second, a CRF output layer and an SCRF output layer are
integrated into a unified neural network and trained jointly. Experimental
results on CoNLL 2003 named entity recognition (NER) shared task show that our
model achieves state-of-the-art performance when no external knowledge is used.
| 2,018 | Computation and Language |
Obligation and Prohibition Extraction Using Hierarchical RNNs | We consider the task of detecting contractual obligations and prohibitions.
We show that a self-attention mechanism improves the performance of a BILSTM
classifier, the previous state of the art for this task, by allowing it to
focus on indicative tokens. We also introduce a hierarchical BILSTM, which
converts each sentence to an embedding, and processes the sentence embeddings
to classify each sentence. Apart from being faster to train, the hierarchical
BILSTM outperforms the flat one, even when the latter considers surrounding
sentences, because the hierarchical model has a broader discourse view.
| 2,018 | Computation and Language |
Improv Chat: Second Response Generation for Chatbot | Existing research on response generation for chatbots focuses on \textbf{First
Response Generation} which aims to teach the chatbot to say the first response
(e.g. a sentence) appropriate to the conversation context (e.g. the user's
query). In this paper, we introduce a new task \textbf{Second Response
Generation}, termed as Improv chat, which aims to teach the chatbot to say the
second response after saying the first response with respect to the conversation
context, so as to lighten the burden on the user to keep the conversation
going. Specifically, we propose a general learning based framework and develop
a retrieval based system which can generate the second responses with the
users' query and the chatbot's first response as input. We present the approach
to building the conversation corpus for Improv chat from public forums and
social networks, as well as the neural networks based models for response
matching and ranking. We include the preliminary experiments and results in
this paper. This work could be further advanced with better deep matching
models for retrieval-based systems or generative models for generation-based
systems, as well as extensive evaluations in real-life applications.
| 2,018 | Computation and Language |
Automatic Academic Paper Rating Based on Modularized Hierarchical
Convolutional Neural Network | As more and more academic papers are being submitted to conferences and
journals, evaluating all these papers by professionals is time-consuming and
can cause inequality due to the personal factors of the reviewers. In this
paper, in order to assist professionals in evaluating academic papers, we
propose a novel task: automatic academic paper rating (AAPR), which
automatically determines whether to accept academic papers. We build a new
dataset for this task and propose a novel modularized hierarchical
convolutional neural network to achieve automatic academic paper rating.
Evaluation results show that the proposed model outperforms the baselines by a
large margin. The dataset and code are available at
\url{https://github.com/lancopku/AAPR}
| 2,018 | Computation and Language |