Titles | Abstracts | Years | Categories |
---|---|---|---|
Unsupervised Sentence Compression using Denoising Auto-Encoders | In sentence compression, the task of shortening sentences while retaining the
original meaning, models tend to be trained on large corpora containing pairs
of verbose and compressed sentences. To remove the need for paired corpora, we
emulate a summarization task and add noise to extend sentences and train a
denoising auto-encoder to recover the original, constructing an end-to-end
training regime without the need for any examples of compressed sentences. We
conduct a human evaluation of our model on a standard text summarization
dataset and show that it performs comparably to a supervised baseline in terms of
grammatical correctness and retention of meaning. Despite being exposed to no
target data, our unsupervised models learn to generate imperfect but reasonably
readable sentence summaries. Although we underperform supervised models on ROUGE
scores, our models are competitive with a supervised baseline in human evaluation
of grammatical correctness and retention of meaning.
| 2,018 | Computation and Language |
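The denoising set-up described in the abstract above can be sketched roughly as follows; the particular noise operations (random word insertion plus a light local shuffle) are illustrative assumptions rather than the paper's exact extension procedure.

```python
import random

def make_denoising_pair(tokens, vocab, insert_ratio=0.3, shuffle_window=3, rng=None):
    """Build one (noisy, clean) training pair for a denoising auto-encoder.

    Assumed noise model: insert randomly sampled vocabulary words, then apply
    a light local shuffle. The auto-encoder is trained to map noisy -> clean.
    """
    rng = rng or random.Random(0)
    noisy = list(tokens)
    # 1) Extend the sentence with randomly sampled filler words.
    for _ in range(max(1, int(len(tokens) * insert_ratio))):
        noisy.insert(rng.randrange(len(noisy) + 1), rng.choice(vocab))
    # 2) Lightly perturb word order within a small window.
    keys = [i + rng.uniform(0, shuffle_window) for i in range(len(noisy))]
    noisy = [tok for _, tok in sorted(zip(keys, noisy))]
    return noisy, list(tokens)

# Pairs like this feed a seq2seq auto-encoder; no compressed references needed.
pair = make_denoising_pair("the cat sat on the mat".split(),
                           vocab=["green", "quickly", "house", "very"])
```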
Coherence-Aware Neural Topic Modeling | Topic models are evaluated based on their ability to describe documents well
(i.e. low perplexity) and to produce topics that carry coherent semantic
meaning. In topic modeling so far, perplexity is a direct optimization target.
However, topic coherence, owing to its challenging computation, is not
optimized for and is only evaluated after training. In this work, under a
neural variational inference framework, we propose methods to incorporate a
topic coherence objective into the training process. We demonstrate that such a
coherence-aware topic model exhibits a similar level of perplexity as baseline
models but achieves substantially higher topic coherence.
| 2,018 | Computation and Language |
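A common way to score topic coherence at evaluation time is averaged NPMI over a topic's top words; a minimal sketch is below. How the paper folds such a score into a differentiable neural variational training objective is not shown here.

```python
import math
from itertools import combinations

def npmi_coherence(topic_words, docs):
    """Average NPMI over word pairs among a topic's top words.

    `docs` is a list of token sets; probabilities come from document-level
    co-occurrence counts.
    """
    n = len(docs)
    def p(*words):
        return sum(all(w in d for w in words) for d in docs) / n
    scores = []
    for wi, wj in combinations(topic_words, 2):
        pij, pi, pj = p(wi, wj), p(wi), p(wj)
        if pij == 0.0:
            scores.append(-1.0)  # words never co-occur: minimum NPMI
        else:
            pmi = math.log(pij / (pi * pj))
            denom = -math.log(pij)
            scores.append(pmi / denom if denom > 0 else 1.0)
    return sum(scores) / max(1, len(scores))

# e.g. npmi_coherence(["game", "team"], [{"game", "team"}, {"game"}, {"music"}])
```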
Neural Machine Translation of Logographic Languages Using Sub-character
Level Information | Recent neural machine translation (NMT) systems have been greatly improved by
encoder-decoder models with attention mechanisms and sub-word units. However,
important differences between languages with logographic and alphabetic writing
systems have long been overlooked. This study focuses on these differences and
uses a simple approach to improve the performance of NMT systems utilizing
decomposed sub-character level information for logographic languages. Our
results indicate that our approach not only improves the translation
capabilities of NMT systems between Chinese and English, but also further
improves NMT systems between Chinese and Japanese, because it utilizes the
shared information brought by similar sub-character units.
| 2,018 | Computation and Language |
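A rough sketch of the sub-character pre-processing step: each logographic character is replaced by its components before the usual sub-word segmentation. The `radical_table` mapping is a hypothetical stand-in for a real decomposition resource.

```python
def decompose(sentence, radical_table):
    """Replace each logographic character by its sub-character components.

    `radical_table` is a hypothetical character-to-components mapping; real
    systems would draw it from a decomposition resource. The decomposed text
    is then segmented into sub-word units as in a standard NMT pipeline.
    """
    return " ".join(
        " ".join(radical_table.get(ch, [ch])) for ch in sentence if not ch.isspace()
    )

# decompose("好", {"好": ["女", "子"]}) -> "女 子"
```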
Textual Analogy Parsing: What's Shared and What's Compared among
Analogous Facts | To understand a sentence like "whereas only 10% of White Americans live at or
below the poverty line, 28% of African Americans do" it is important not only
to identify individual facts, e.g., poverty rates of distinct demographic
groups, but also the higher-order relations between them, e.g., the disparity
between them. In this paper, we propose the task of Textual Analogy Parsing
(TAP) to model this higher-order meaning. The output of TAP is a frame-style
meaning representation which explicitly specifies what is shared (e.g., poverty
rates) and what is compared (e.g., White Americans vs. African Americans, 10%
vs. 28%) between its component facts. Such a meaning representation can enable
new applications that rely on discourse understanding such as automated chart
generation from quantitative text. We present a new dataset for TAP, baselines,
and a model that successfully uses an ILP to enforce the structural constraints
of the problem.
| 2,018 | Computation and Language |
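A toy version of the ILP step, using the example from the abstract: link each quantity to exactly one compared entity so as to maximize model scores. The specific constraints and this `pulp` formulation are simplifying assumptions, not the paper's full ILP.

```python
import pulp

# Toy ILP in the spirit of TAP's structural constraints: each value must be
# linked to exactly one compared entity (and vice versa), maximizing scores
# that a neural scorer would normally provide (hard-coded here).
values = ["10%", "28%"]
entities = ["White Americans", "African Americans"]
score = {("10%", "White Americans"): 0.9, ("10%", "African Americans"): 0.2,
         ("28%", "White Americans"): 0.1, ("28%", "African Americans"): 0.8}

prob = pulp.LpProblem("tap_linking", pulp.LpMaximize)
x = {(v, e): pulp.LpVariable(f"x_{i}_{j}", cat="Binary")
     for i, v in enumerate(values) for j, e in enumerate(entities)}
prob += pulp.lpSum(score[v, e] * x[v, e] for v in values for e in entities)
for v in values:      # each value attaches to exactly one entity
    prob += pulp.lpSum(x[v, e] for e in entities) == 1
for e in entities:    # each entity receives exactly one value
    prob += pulp.lpSum(x[v, e] for v in values) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=0))
links = [(v, e) for (v, e), var in x.items() if var.value() == 1]
```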
Trick Me If You Can: Human-in-the-loop Generation of Adversarial
Examples for Question Answering | Adversarial evaluation stress tests a model's understanding of natural
language. While past approaches expose superficial patterns, the resulting
adversarial examples are limited in complexity and diversity. We propose
human-in-the-loop adversarial generation, where human authors are guided to
break models. We aid the authors with interpretations of model predictions
through an interactive user interface. We apply this generation framework to a
question answering task called Quizbowl, where trivia enthusiasts craft
adversarial questions. The resulting questions are validated via live
human--computer matches: although the questions appear ordinary to humans, they
systematically stump neural and information retrieval models. The adversarial
questions cover diverse phenomena from multi-hop reasoning to entity type
distractors, exposing open challenges in robust question answering.
| 2,019 | Computation and Language |
What If We Simply Swap the Two Text Fragments? A Straightforward yet
Effective Way to Test the Robustness of Methods to Confounding Signals in
Natural Language Inference Tasks | Natural language inference (NLI) is the predictive task of determining the
inference relationship between a pair of natural language sentences. With the
increasing popularity of NLI, many state-of-the-art predictive models have been
proposed with impressive performance. However, several works have noticed
statistical irregularities in the collected NLI datasets that may result in an
over-estimated performance of these models, and have proposed remedies. In this
paper, we further investigate these statistical irregularities, which we refer
to as confounding factors, of the NLI datasets. With the belief that some NLI
labels should be preserved under swapping operations, we propose a simple yet
effective way (swapping the two text fragments) of evaluating NLI predictive
models that naturally mitigates the observed problems. Further, we continue to
train the predictive models with our swapping scheme and propose to use the
deviation of a model's evaluation performance under different percentages of
swapped training text fragments to describe the robustness of a predictive
model. Our evaluation metric leads to some interesting insights into recently
published NLI methods. Finally, we also apply the swapping operation to NLI
models to see the effectiveness of this straightforward method in mitigating
the confounding factor problems in training generic sentence embeddings for
other NLP transfer tasks.
| 2,018 | Computation and Language |
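A minimal sketch of the swap-based robustness probe, assuming that 'contradiction' labels should be preserved when premise and hypothesis are exchanged; `model.predict` is a hypothetical API returning an NLI label string.

```python
def swap_eval(model, labeled_pairs):
    """Robustness probe: swap premise and hypothesis, check label preservation.

    Only 'contradiction' is treated as swap-invariant here, which is an
    assumption in the spirit of the abstract; `model.predict(premise,
    hypothesis)` is a hypothetical API returning an NLI label string.
    """
    symmetric = [(p, h, y) for p, h, y in labeled_pairs if y == "contradiction"]
    preserved = sum(model.predict(h, p) == y for p, h, y in symmetric)
    return preserved / max(1, len(symmetric))
```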
Operations Guided Neural Networks for High Fidelity Data-To-Text
Generation | Recent neural models for data-to-text generation are mostly based on
data-driven end-to-end training over encoder-decoder networks. Even though the
generated texts are mostly fluent and informative, these models often produce
descriptions that are not consistent with the input structured data. This is a
critical issue especially in domains that require inference or calculations
over raw data. In this paper, we attempt to improve the fidelity of neural
data-to-text generation by utilizing pre-executed symbolic operations. We
propose a framework called Operation-guided Attention-based
sequence-to-sequence network (OpAtt), with a specifically designed gating
mechanism as well as a quantization module for operation results to utilize
information from pre-executed operations. Experiments on two sports datasets
show our proposed method clearly improves the fidelity of the generated texts
to the input structured data.
| 2,018 | Computation and Language |
Exploration on Grounded Word Embedding: Matching Words and Images with
Image-Enhanced Skip-Gram Model | Word embedding is designed to represent the semantic meaning of a word with
low dimensional vectors. The state-of-the-art methods of learning word
embeddings (word2vec and GloVe) only use the word co-occurrence information.
The learned embeddings are real-valued vectors that are opaque to humans. In
this paper, we propose an Image-Enhanced Skip-Gram Model to learn grounded word
embeddings by representing the word vectors in the same hyperplane as image
vectors. Experiments show that the image vectors and word embeddings learned by
our model are highly correlated, which indicates that our model is able to
provide a vivid image-based explanation to the word embeddings.
| 2,018 | Computation and Language |
Generating Distractors for Reading Comprehension Questions from Real
Examinations | We investigate the task of distractor generation for multiple choice reading
comprehension questions from examinations. In contrast to all previous works,
we do not aim at preparing word or short-phrase distractors; instead, we
endeavor to generate longer, semantically rich distractors that are closer to
distractors in real reading comprehension examinations. Taking a reading
comprehension article, a question, and its correct option as input, our goal is
to generate several distractors that are somehow related to the answer,
consistent with the semantic context of the question, and have some trace in
the article. We propose a hierarchical encoder-decoder framework with static
and dynamic attention mechanisms to tackle this task. Specifically, the dynamic
attention can combine sentence-level and word-level attention, varying at each
recurrent time step, to generate a more readable sequence. The static attention
modulates the dynamic attention so that it does not focus on question-irrelevant
sentences or sentences that contribute to the correct option. Our proposed
framework outperforms several strong baselines on the first prepared distractor
generation dataset of real reading comprehension questions. In a human
evaluation, compared with distractors generated by the baselines, our generated
distractors are more effective at confusing the annotators.
| 2,018 | Computation and Language |
Sentiment analysis for Arabic language: A brief survey of approaches and
techniques | With the emergence of Web 2.0 technology and the expansion of on-line social
networks, current Internet users have the ability to add their reviews, ratings
and opinions on social media and on commercial and news web sites. Sentiment
analysis aims to classify these reviews automatically. In the literature,
numerous approaches have been proposed for automatic sentiment analysis in
different language contexts. Each language has its own properties that make
sentiment analysis more challenging. In this regard, this work
presents a comprehensive survey of existing Arabic sentiment analysis studies,
and covers the various approaches and techniques proposed in the literature.
Moreover, we highlight the main difficulties and challenges of Arabic sentiment
analysis, and the techniques proposed in the literature to overcome these barriers.
| 2,018 | Computation and Language |
Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book
Question Answering | We present a new kind of question answering dataset, OpenBookQA, modeled
after open book exams for assessing human understanding of a subject. The open
book that comes with our questions is a set of 1329 elementary level science
facts. Roughly 6000 questions probe an understanding of these facts and their
application to novel situations. This requires combining an open book fact
(e.g., metals conduct electricity) with broad common knowledge (e.g., a suit of
armor is made of metal) obtained from other sources. While existing QA datasets
over documents or knowledge bases, being generally self-contained, focus on
linguistic understanding, OpenBookQA probes a deeper understanding of both the
topic---in the context of common knowledge---and the language it is expressed
in. Human performance on OpenBookQA is close to 92%, but many state-of-the-art
pre-trained QA methods perform surprisingly poorly, worse than several simple
neural baselines we develop. Our oracle experiments designed to circumvent the
knowledge retrieval bottleneck demonstrate the value of both the open book and
additional facts. We leave it as a challenge to solve the retrieval problem in
this multi-hop setting and to close the large gap to human performance.
| 2,018 | Computation and Language |
The Lower The Simpler: Simplifying Hierarchical Recurrent Models | To improve the training efficiency of hierarchical recurrent models without
compromising their performance, we propose a strategy named 'the lower the
simpler', which simplifies the baseline models by making the lower layers
simpler than the upper layers. We carry out this strategy to simplify two
typical hierarchical recurrent models, namely Hierarchical Recurrent
Encoder-Decoder (HRED) and R-NET, whose basic building block is GRU.
Specifically, we propose Scalar Gated Unit (SGU), which is a simplified variant
of GRU, and use it to replace the GRUs at the middle layers of HRED and R-NET.
Besides, we also use Fixed-size Ordinally-Forgetting Encoding (FOFE), which is
an efficient encoding method without any trainable parameters, to replace the
GRUs at the bottom layers of HRED and R-NET. The experimental results show that
the simplified HRED and the simplified R-NET contain significantly fewer
trainable parameters, consume significantly less training time, and achieve
slightly better performance than their baseline models.
| 2,019 | Computation and Language |
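FOFE itself is easy to state: the encoding of a prefix is z_t = alpha * z_{t-1} + e_t, with e_t a one-hot vector, so a whole sequence collapses into one fixed-size vector with no trainable parameters. A minimal sketch:

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.7):
    """Fixed-size Ordinally-Forgetting Encoding of a token-id sequence.

    z_t = alpha * z_{t-1} + e_t, with e_t the one-hot vector of token t.
    The forgetting factor alpha (0.7 is an arbitrary illustrative value) keeps
    the encoding sensitive to word order while using no trainable parameters.
    """
    z = np.zeros(vocab_size)
    for t in token_ids:
        z = alpha * z
        z[t] += 1.0
    return z

# fofe_encode([2, 5, 2], vocab_size=10) encodes the whole sequence in one vector.
```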
Explicit Contextual Semantics for Text Comprehension | Who did what to whom is a major focus in natural language understanding,
which is precisely the aim of the semantic role labeling (SRL) task. Despite
sharing many processing characteristics and even task purposes, it is
surprising that jointly considering these two related tasks has never been
formally reported in previous work. Thus, this paper makes the first attempt to let SRL enhance text
comprehension and inference through specifying verbal predicates and their
corresponding semantic roles. In terms of deep learning models, our embeddings
are enhanced by explicit contextual semantic role labels for more fine-grained
semantics. We show that the salient labels can be conveniently added to
existing models and significantly improve deep learning models in challenging
text comprehension tasks. Extensive experiments on benchmark machine reading
comprehension and inference datasets verify that the proposed semantic learning
helps our system reach a new state of the art over strong baselines that have
been enhanced by well-pretrained language models from the latest progress.
| 2,019 | Computation and Language |
Attentive Semantic Role Labeling with Boundary Indicator | The goal of semantic role labeling (SRL) is to discover the
predicate-argument structure of a sentence, which plays a critical role in deep
processing of natural language. This paper introduces simple yet effective
auxiliary tags for dependency-based SRL to enhance a syntax-agnostic model with
multi-hop self-attention. Our syntax-agnostic model achieves competitive
performance with state-of-the-art models on the CoNLL-2009 benchmarks both for
English and Chinese.
| 2,018 | Computation and Language |
Faithful Multimodal Explanation for Visual Question Answering | AI systems' ability to explain their reasoning is critical to their utility
and trustworthiness. Deep neural networks have enabled significant progress on
many challenging problems such as visual question answering (VQA). However,
most of them are opaque black boxes with limited explanatory capability. This
paper presents a novel approach to developing a high-performing VQA system that
can elucidate its answers with integrated textual and visual explanations that
faithfully reflect important aspects of its underlying reasoning while
capturing the style of comprehensible human explanations. Extensive
experimental evaluation demonstrates the advantages of this approach compared
to competing methods with both automatic evaluation metrics and human
evaluation metrics.
| 2,019 | Computation and Language |
Interpreting Neural Networks With Nearest Neighbors | Local model interpretation methods explain individual predictions by
assigning an importance value to each input feature. This value is often
determined by measuring the change in confidence when a feature is removed.
However, the confidence of neural networks is not a robust measure of model
uncertainty. This issue makes reliably judging the importance of the input
features difficult. We address this by changing the test-time behavior of
neural networks using Deep k-Nearest Neighbors. Without harming text
classification accuracy, this algorithm provides a more robust uncertainty
metric which we use to generate feature importance values. The resulting
interpretations better align with human perception than baseline methods.
Finally, we use our interpretation method to analyze model predictions on
dataset annotation artifacts.
| 2,018 | Computation and Language |
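A sketch of the kNN-based uncertainty that can replace softmax confidence when scoring feature importance: the conformity of a hidden representation is the fraction of its nearest training neighbors that share the predicted label. The leave-one-out loop over input words is assumed to live elsewhere.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_conformity(train_reps, train_labels, rep, label, k=10):
    """Fraction of the k nearest training representations sharing `label`.

    `train_reps` (n x d array) and `rep` (d-vector) are hidden representations
    taken from the classifier; `train_labels` is an array of class ids. A drop
    in conformity after deleting a word is read as that word's importance.
    """
    nn = NearestNeighbors(n_neighbors=k).fit(train_reps)
    _, idx = nn.kneighbors(rep.reshape(1, -1))
    return float(np.mean(train_labels[idx[0]] == label))
```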
Transforming Question Answering Datasets Into Natural Language Inference
Datasets | Existing datasets for natural language inference (NLI) have propelled
research on language understanding. We propose a new method for automatically
deriving NLI datasets from the growing abundance of large-scale question
answering datasets. Our approach hinges on learning a sentence transformation
model which converts question-answer pairs into their declarative forms.
Despite being primarily trained on a single QA dataset, we show that it can be
successfully applied to a variety of other QA resources. Using this system, we
automatically derive a new freely available dataset of over 500k NLI examples
(QA-NLI), and show that it exhibits a wide range of inference phenomena rarely
seen in previous NLI datasets.
| 2,018 | Computation and Language |
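The target of the learned transformation is a declarative hypothesis built from a question-answer pair. The crude template below only illustrates that target; the paper's model is a trained neural sentence transformer, not a rule set.

```python
def qa_to_hypothesis(question, answer):
    """Crude template-based stand-in for the learned transformation model."""
    q = question.rstrip("?").strip()
    for wh in ("who", "what", "where", "when", "which"):
        if q.lower().startswith(wh + " "):
            return q[len(wh):].strip() + " " + answer + "."
    return q + " " + answer + "."

# qa_to_hypothesis("Who wrote Hamlet?", "Shakespeare") -> "wrote Hamlet Shakespeare."
# (ungrammatical on purpose; the learned model produces fluent declaratives)
```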
Speeding Up Neural Machine Translation Decoding by Cube Pruning | Although neural machine translation has achieved promising results, it
suffers from slow translation speed. The direct consequence is that a trade-off
has to be made between translation quality and speed, so its performance cannot
come into full play. We apply cube pruning, a popular technique for speeding up
dynamic programming, to neural machine translation to accelerate decoding. To
construct the equivalence classes, similar target hidden states are combined,
leading to fewer RNN expansion operations on the target side and fewer
$\mathrm{softmax}$ operations over the large target vocabulary. The experiments
show that, at the same or even better translation quality, our method
translates faster than naive beam search by $3.3\times$ on GPUs and
$3.5\times$ on CPUs.
| 2,018 | Computation and Language |
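One way to picture the equivalence classes: beam hypotheses whose decoder hidden states are nearly identical share a single representative, so the RNN expansion and the large softmax run once per group. The greedy cosine-threshold grouping below is an illustrative assumption, not the paper's exact merging rule.

```python
import numpy as np

def merge_equivalent_states(hidden_states, threshold=0.99):
    """Greedily group beam hypotheses with near-identical decoder states.

    Each group shares one representative, so the RNN expansion and the large
    target-vocabulary softmax run once per group rather than once per
    hypothesis. `hidden_states` is a list of 1-D numpy vectors.
    """
    groups = []  # list of (representative_index, member_indices)
    for i, h in enumerate(hidden_states):
        for rep_idx, members in groups:
            rep = hidden_states[rep_idx]
            cos = h @ rep / (np.linalg.norm(h) * np.linalg.norm(rep) + 1e-12)
            if cos >= threshold:
                members.append(i)
                break
        else:
            groups.append((i, [i]))
    return groups
```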
Can Neural Generators for Dialogue Learn Sentence Planning and Discourse
Structuring? | Responses in task-oriented dialogue systems often realize multiple
propositions whose ultimate form depends on the use of sentence planning and
discourse structuring operations. For example a recommendation may consist of
an explicitly evaluative utterance e.g. Chanpen Thai is the best option, along
with content related by the justification discourse relation, e.g. It has great
food and service, that combines multiple propositions into a single phrase.
While neural generation methods integrate sentence planning and surface
realization in one end-to-end learning framework, previous work has not shown
that neural generators can: (1) perform common sentence planning and discourse
structuring operations; (2) make decisions as to whether to realize content in
a single sentence or over multiple sentences; (3) generalize sentence planning
and discourse relation operations beyond what was seen in training. We
systematically create large training corpora that exhibit particular sentence
planning operations and then test neural models to see what they learn. We
compare models without explicit latent variables for sentence planning with
ones that provide explicit supervision during training. We show that only the
models with additional supervision can reproduce sentence planning and discourse
operations and generalize to situations unseen in training.
| 2,018 | Computation and Language |
How clever is the FiLM model, and how clever can it be? | The FiLM model achieves close-to-perfect performance on the diagnostic CLEVR
dataset and is distinguished from other such models by having a comparatively
simple and easily transferable architecture. In this paper, we investigate in
more detail the ability of FiLM to learn various linguistic constructions. Our
main results show that (a) FiLM is not able to learn relational statements
straight away except for very simple instances, (b) training on a broader set
of instances as well as pretraining on simpler instance types can help
alleviate these learning difficulties, (c) mixing is less robust than
pretraining and very sensitive to the compositional structure of the dataset.
Overall, our results suggest that the approach of big all-encompassing datasets
and the paradigm of "the effectiveness of data" may have fundamental
limitations.
| 2,018 | Computation and Language |
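For reference, the mechanism FiLM adds to its visual pipeline is a per-channel affine modulation of CNN features by the question encoding; a minimal sketch with stand-in linear maps:

```python
import numpy as np

def film(features, condition, w_gamma, w_beta):
    """Feature-wise Linear Modulation: out = gamma(c) * x + beta(c).

    `features`: (channels, height, width) CNN feature map.
    `condition`: (d,) question encoding.
    `w_gamma`, `w_beta`: (channels, d) stand-ins for the learned conditioning net.
    """
    gamma = w_gamma @ condition  # per-channel scale
    beta = w_beta @ condition    # per-channel shift
    return gamma[:, None, None] * features + beta[:, None, None]
```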
Attentional Multi-Reading Sarcasm Detection | Recognizing sarcasm often requires a deep understanding of multiple sources
of information, including the utterance, the conversational context, and real
world facts. Most of the current sarcasm detection systems consider only the
utterance in isolation. There are some limited attempts toward taking into
account the conversational context. In this paper, we propose an interpretable
end-to-end model that combines information from both the utterance and the
conversational context to detect sarcasm, and demonstrate its effectiveness
through empirical evaluations. We also study the behavior of the proposed model
to provide explanations for the model's decisions. Importantly, our model is
capable of determining the impact of utterance and conversational context on
the model's decisions. Finally, we provide an ablation study to illustrate the
impact of different components of the proposed model.
| 2,018 | Computation and Language |
SHOMA at Parseme Shared Task on Automatic Identification of VMWEs:
Neural Multiword Expression Tagging with High Generalisation | This paper presents a language-independent deep learning architecture adapted
to the task of multiword expression (MWE) identification. We employ a neural
architecture comprising convolutional and recurrent layers with the addition
of an optional CRF layer at the top. This system participated in the open track
of the Parseme shared task on automatic identification of verbal MWEs, owing to
its use of pre-trained Wikipedia word embeddings. It outperformed all
participating systems in both the open and closed tracks, with an overall
macro-average MWE-based F1 score of 58.09 averaged over all languages. A
particular strength of the system is its superior performance on unseen data
entries.
| 2,018 | Computation and Language |
A case for deep learning in semantics | Pater's target article builds a persuasive case for establishing stronger
ties between theoretical linguistics and connectionism (deep learning). This
commentary extends his arguments to semantics, focusing in particular on issues
of learning, compositionality, and lexical meaning.
| 2,018 | Computation and Language |
Depth-bounding is effective: Improvements and evaluation of unsupervised
PCFG induction | There have been several recent attempts to improve the accuracy of grammar
induction systems by bounding the recursive complexity of the induction model
(Ponvert et al., 2011; Noji and Johnson, 2016; Shain et al., 2016; Jin et al.,
2018). Modern depth-bounded grammar inducers have been shown to be more
accurate than early unbounded PCFG inducers, but this technique has never been
compared against unbounded induction within the same system, in part because
most previous depth-bounding models are built around sequence models, the
complexity of which grows exponentially with the maximum allowed depth. The
present work instead applies depth bounds within a chart-based Bayesian PCFG
inducer (Johnson et al., 2007b), where bounding can be switched on and off, and
then samples trees with and without bounding. Results show that depth-bounding
is indeed significantly effective in limiting the search space of the inducer
and thereby increasing the accuracy of the resulting parsing model. Moreover,
parsing results on English, Chinese and German show that this bounded model
with a new inference technique is able to produce parse trees more accurately
than or competitively with state-of-the-art constituency-based grammar
induction models.
| 2,018 | Computation and Language |
A Deep Reinforced Sequence-to-Set Model for Multi-Label Text
Classification | Multi-label text classification (MLTC) aims to assign multiple labels to each
sample in the dataset. The labels usually have internal correlations. However,
traditional methods tend to ignore the correlations between labels. In order to
capture the correlations between labels, the sequence-to-sequence (Seq2Seq)
model views the MLTC task as a sequence generation problem, which achieves
excellent performance on this task. However, the Seq2Seq model is not
inherently suitable for the MLTC task, because it requires humans to
predefine the order of the output labels, while some of the output labels in
the MLTC task are essentially an unordered set rather than an ordered sequence.
This conflicts with the strict requirement of the Seq2Seq model for the label
order. In this paper, we propose a novel sequence-to-set framework utilizing
deep reinforcement learning, which not only captures the correlations between
labels, but also reduces the dependence on the label order. Extensive
experimental results show that our proposed method outperforms the competitive
baselines by a large margin.
| 2,018 | Computation and Language |
Greedy Search with Probabilistic N-gram Matching for Neural Machine
Translation | Neural machine translation (NMT) models are usually trained with the
word-level loss using the teacher forcing algorithm, which not only evaluates
the translation improperly but also suffers from exposure bias. Sequence-level
training under the reinforcement framework can mitigate the problems of the
word-level loss, but its performance is unstable due to the high variance of
the gradient estimation. On these grounds, we present a method with a
differentiable sequence-level training objective based on probabilistic n-gram
matching which can avoid the reinforcement framework. In addition, this method
performs greedy search during training, using the predicted words as context
just as at inference time, to alleviate the problem of exposure bias.
Experimental results on the NIST Chinese-to-English translation tasks show that
our method significantly outperforms the reinforcement-based algorithms and
achieves an improvement of 1.5 BLEU points on average over a strong baseline
system.
| 2,018 | Computation and Language |
Short-Term Meaning Shift: A Distributional Exploration | We present the first exploration of meaning shift over short periods of time
in online communities using distributional representations. We create a small
annotated dataset and use it to assess the performance of a standard model for
meaning shift detection on short-term meaning shift. We find that the model has
problems distinguishing meaning shift from referential phenomena, and propose a
measure of contextual variability to remedy this.
| 2,019 | Computation and Language |
Towards one-shot learning for rare-word translation with external
experts | Neural machine translation (NMT) has significantly improved the quality of
automatic translation models. One of the main challenges in current systems is
the translation of rare words. We present a generic approach to address this
weakness by having external models annotate the training data as Experts, and
control the model-expert interaction with a pointer network and reinforcement
learning. Our experiments using phrase-based models to simulate Experts to
complement neural machine translation models show that the model can be trained
to copy the annotations into the output consistently. We demonstrate the
benefit of our proposed framework in out-of-domain translation scenarios with
only lexical resources, improving by more than 1.0 BLEU point in both
translation directions, English to Spanish and German to English.
| 2,018 | Computation and Language |
Learning to Generate Structured Queries from Natural Language with
Indirect Supervision | Generating structured query language (SQL) from natural language is an
emerging research topic. This paper presents a new learning paradigm from
indirect supervision of the answers to natural language questions, instead of
SQL queries. This paradigm facilitates the acquisition of training data due to
the abundant resources of question-answer pairs for various domains on the
Internet, and avoids the difficult SQL annotation job. An end-to-end neural
model integrated with reinforcement learning is proposed to learn an SQL
generation policy within the answer-driven learning paradigm. The model is
evaluated on datasets of different domains, including movie and academic
publication. Experimental results show that our model outperforms the baseline
models.
| 2,018 | Computation and Language |
Towards JointUD: Part-of-speech Tagging and Lemmatization using
Recurrent Neural Networks | This paper describes our submission to CoNLL 2018 UD Shared Task. We have
extended an LSTM-based neural network designed for sequence tagging to
additionally generate character-level sequences. The network was jointly
trained to produce lemmas, part-of-speech tags and morphological features.
Sentence segmentation, tokenization and dependency parsing were handled by
UDPipe 1.2 baseline. The results demonstrate the viability of the proposed
multitask architecture, although its performance still remains far from
state-of-the-art.
| 2,018 | Computation and Language |
Multilingual Extractive Reading Comprehension by Runtime Machine
Translation | Despite recent work in Reading Comprehension (RC), progress has been mostly
limited to English due to the lack of large-scale datasets in other languages.
In this work, we introduce the first RC system for languages without RC
training data. Given a target language without RC training data and a pivot
language with RC training data (e.g. English), our method leverages existing RC
resources in the pivot language by combining a competitive RC model in the
pivot language with an attentive Neural Machine Translation (NMT) model. We
first translate the data from the target to the pivot language, and then obtain
an answer using the RC model in the pivot language. Finally, we recover the
corresponding answer in the original language using soft-alignment attention
scores from the NMT model. We create evaluation sets of RC data in two
non-English languages, namely Japanese and French, to evaluate our method.
Experimental results on these datasets show that our method significantly
outperforms a back-translation baseline of a state-of-the-art product-level
machine translation system.
| 2,018 | Computation and Language |
xSense: Learning Sense-Separated Sparse Representations and Textual
Definitions for Explainable Word Sense Networks | Despite the success achieved on various natural language processing tasks,
word embeddings are difficult to interpret due to the dense vector
representations. This paper focuses on interpreting the embeddings for various
aspects, including sense separation in the vector dimensions and definition
generation. Specifically, given a context together with a target word, our
algorithm first projects the target word embedding to a high-dimensional sparse
vector and picks the specific dimensions that can best explain the semantic
meaning of the target word by the encoded contextual information, where the
sense of the target word can be indirectly inferred. Finally, our algorithm
applies an RNN to generate the textual definition of the target word in the
human readable form, which enables direct interpretation of the corresponding
word embedding. This paper also introduces a large and high-quality
context-definition dataset that consists of sense definitions together with
multiple example sentences per polysemous word, which is a valuable resource
for definition modeling and word sense disambiguation. The conducted
experiments show superior performance in both BLEU score and a human
evaluation test.
| 2,018 | Computation and Language |
Toward a Standardized and More Accurate Indonesian Part-of-Speech
Tagging | Previous work on Indonesian part-of-speech (POS) tagging is hard to compare,
as it is not evaluated on a common dataset. Furthermore, in spite of the
success of neural network models for English POS tagging, they are rarely
explored for Indonesian. In this paper, we explored various techniques for
Indonesian POS tagging, including rule-based, CRF, and neural network-based
models. We evaluated our models on the IDN Tagged Corpus. A new
state-of-the-art of 97.47 F1 score is achieved with a recurrent neural network.
To provide a standard for future work, we release the dataset split that we
used publicly.
| 2,019 | Computation and Language |
Neural Latent Relational Analysis to Capture Lexical Semantic Relations
in a Vector Space | Capturing the semantic relations of words in a vector space contributes to
many natural language processing tasks. One promising approach exploits
lexico-syntactic patterns as features of word pairs. In this paper, we propose
a novel model of this pattern-based approach, neural latent relational analysis
(NLRA). NLRA can generalize co-occurrences of word pairs and lexico-syntactic
patterns, and obtain embeddings of the word pairs that do not co-occur. This
overcomes the critical data sparseness problem encountered in previous
pattern-based models. Our experimental results on measuring relational
similarity demonstrate that NLRA outperforms the previous pattern-based models.
In addition, when combined with a vector offset model, NLRA achieves a
performance comparable to that of the state-of-the-art model that exploits
additional semantic relational data.
| 2,018 | Computation and Language |
Beyond task success: A closer look at jointly learning to see, ask, and
GuessWhat | We propose a grounded dialogue state encoder which addresses a foundational
issue on how to integrate visual grounding with dialogue system components. As
a test-bed, we focus on the GuessWhat?! game, a two-player game where the goal
is to identify an object in a complex visual scene by asking a sequence of
yes/no questions. Our visually-grounded encoder leverages synergies between
guessing and asking questions, as it is trained jointly using multi-task
learning. We further enrich our model via a cooperative learning regime. We
show that the introduction of both the joint architecture and cooperative
learning lead to accuracy improvements over the baseline system. We compare our
approach to an alternative system which extends the baseline with reinforcement
learning. Our in-depth analysis shows that the linguistic skills of the two
models differ dramatically, despite approaching comparable performance levels.
This points at the importance of analyzing the linguistic output of competing
systems beyond numeric comparison solely based on task success.
| 2,019 | Computation and Language |
Filling Missing Paths: Modeling Co-occurrences of Word Pairs and
Dependency Paths for Recognizing Lexical Semantic Relations | Recognizing lexical semantic relations between word pairs is an important
task for many applications of natural language processing. One of the
mainstream approaches to this task is to exploit the lexico-syntactic paths
connecting two target words, which reflect the semantic relations of word
pairs. However, this method requires that the considered words co-occur in a
sentence. This requirement is hardly satisfied because of Zipf's law, which
states that most content words occur very rarely. In this paper, we propose
novel methods with a neural model of $P(path|w_1, w_2)$ to solve this problem.
Our proposed model of $P(path|w_1, w_2)$ can be learned in an unsupervised
manner and can generalize the co-occurrences of word pairs and dependency
paths. This model can be used to augment the path data of word pairs that do
not co-occur in the corpus, and extract features capturing relational
information from word pairs. Our experimental results demonstrate that our
methods improve on previous neural approaches based on dependency paths and
successfully solve the focused problem.
| 2,018 | Computation and Language |
Identifying Relationships Among Sentences in Court Case Transcripts
Using Discourse Relations | Case Law has a significant impact on the proceedings of legal cases.
Therefore, the information that can be obtained from previous court cases is
valuable to lawyers and other legal officials when performing their duties.
This paper describes a methodology of applying discourse relations between
sentences when processing text documents related to the legal domain. In this
study, we developed a mechanism to classify the relationships that can be
observed among sentences in transcripts of United States court cases. First, we
defined relationship types that can be observed between sentences in court case
transcripts. Then we classified pairs of sentences according to the
relationship type by combining a machine learning model and a rule-based
approach. The results obtained through our system were evaluated using human
judges. To the best of our knowledge, this is the first study where discourse
relationships between sentences have been used to determine relationships among
sentences in legal court case transcripts.
| 2,019 | Computation and Language |
Multi-view Models for Political Ideology Detection of News Articles | A news article's title, content and link structure often reveal its political
ideology. However, most existing works on automatic political ideology
detection only leverage textual cues. Drawing inspiration from recent advances
in neural inference, we propose a novel attention based multi-view model to
leverage cues from all of the above views to identify the ideology evinced by a
news article. Our model draws on advances in representation learning in natural
language processing and network science to capture cues from both textual
content and the network structure of news articles. We empirically evaluate our
model against a battery of baselines and show that our model outperforms the
state of the art by 10 percentage points in F1 score.
| 2,018 | Computation and Language |
Improving Question Answering by Commonsense-Based Pre-Training | Although neural network approaches achieve remarkable success on a variety of
NLP tasks, many of them struggle to answer questions that require commonsense
knowledge. We believe the main reason is the lack of commonsense
connections between concepts. To remedy this, we provide a simple and
effective method that leverages external commonsense knowledge bases such as
ConceptNet. We pre-train direct and indirect relational functions between
concepts, and show that these pre-trained functions can easily be added to
existing neural network models. Results show that incorporating the
commonsense-based functions improves the baseline on three question answering
tasks that require commonsense reasoning. Further analysis shows that our
system discovers and leverages useful evidence from an external commonsense
knowledge base, which is missing in existing neural network models, and helps
derive the correct answer.
| 2,019 | Computation and Language |
Learning Named Entity Tagger using Domain-Specific Dictionary | Recent advances in deep neural models allow us to build reliable named entity
recognition (NER) systems without handcrafting features. However, such methods
require large amounts of manually-labeled training data. There have been
efforts on replacing human annotations with distant supervision (in conjunction
with external dictionaries), but the generated noisy labels pose significant
challenges on learning effective neural models. Here we propose two neural
models to suit noisy distant supervision from the dictionary. First, under the
traditional sequence labeling framework, we propose a revised fuzzy CRF layer
to handle tokens with multiple possible labels. After identifying the nature of
noisy labels in distant supervision, we go beyond the traditional framework and
propose a novel, more effective neural model AutoNER with a new Tie or Break
scheme. In addition, we discuss how to refine distant supervision for better
NER performance. Extensive experiments on three benchmark datasets demonstrate
that AutoNER achieves the best performance when only using dictionaries with no
additional human effort, and delivers competitive results with state-of-the-art
supervised benchmarks.
| 2,018 | Computation and Language |
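The Tie or Break idea can be sketched as labels between adjacent tokens: 'Tie' if both tokens fall inside the same dictionary-matched mention, 'Break' otherwise. The sketch below omits the paper's additional 'Unknown' label for ambiguous spans.

```python
def tie_or_break_labels(tokens, dictionary_spans):
    """Distant labels between adjacent tokens under a Tie-or-Break scheme.

    `dictionary_spans` is a list of (start, end) token spans matched against
    the external dictionary. 'Tie' = both tokens inside the same matched
    mention; 'Break' = otherwise. The paper's extra 'Unknown' label is omitted.
    """
    mention_of = [None] * len(tokens)
    for span_id, (start, end) in enumerate(dictionary_spans):
        for i in range(start, end):
            mention_of[i] = span_id
    return ["Tie" if mention_of[i] is not None and mention_of[i] == mention_of[i + 1]
            else "Break"
            for i in range(len(tokens) - 1)]

# tie_or_break_labels(["prostaglandin", "synthesis", "inhibitors"], [(0, 3)])
# -> ["Tie", "Tie"]
```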
Detecting Gang-Involved Escalation on Social Media Using Context | Gang-involved youth in cities such as Chicago have increasingly turned to
social media to post about their experiences and intents online. In some
situations, when they experience the loss of a loved one, their online
expression of emotion may evolve into aggression towards rival gangs and
ultimately into real-world violence. In this paper, we present a novel system
for detecting Aggression and Loss in social media. Our system features the use
of domain-specific resources automatically derived from a large unlabeled
corpus, and contextual representations of the emotional and semantic content of
the user's recent tweets as well as their interactions with other users.
Incorporating context in our Convolutional Neural Network (CNN) leads to a
significant improvement.
| 2,018 | Computation and Language |
Unsupervised Cross-lingual Transfer of Word Embedding Spaces | Cross-lingual transfer of word embeddings aims to establish the semantic
mappings among words in different languages by learning the transformation
functions over the corresponding word embedding spaces. Successfully solving
this problem would benefit many downstream tasks, such as translating text
classification models from resource-rich languages (e.g. English) to
low-resource languages. Supervised methods for this problem rely on the
availability of cross-lingual supervision, either using parallel corpora or
bilingual lexicons as the labeled data for training, which may not be available
for many low resource languages. This paper proposes an unsupervised learning
approach that does not require any cross-lingual labeled data. Given two
monolingual word embedding spaces for any language pair, our algorithm
optimizes the transformation functions in both directions simultaneously based
on distributional matching as well as minimizing the back-translation losses.
We use a neural network implementation to calculate the Sinkhorn distance, a
well-defined distributional similarity measure, and optimize our objective
through back-propagation. Our evaluation on benchmark datasets for bilingual
lexicon induction and cross-lingual word similarity prediction shows stronger
or competitive performance of the proposed method compared to other
state-of-the-art supervised and unsupervised baseline methods over many
language pairs.
| 2,018 | Computation and Language |
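The distributional-matching ingredient is an entropy-regularized optimal-transport (Sinkhorn) distance between the two embedding spaces; the standard Sinkhorn-Knopp iteration is sketched below, with the network implementation and back-propagation left out.

```python
import numpy as np

def sinkhorn_distance(cost, a, b, reg=0.05, n_iters=200):
    """Entropy-regularized optimal transport between histograms a and b.

    `cost[i, j]` is the ground cost between point i of one embedding space and
    point j of the other. Standard Sinkhorn-Knopp iterations; the paper wraps
    such a computation in a network and optimizes through it.
    """
    K = np.exp(-cost / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    transport_plan = u[:, None] * K * v[None, :]
    return float(np.sum(transport_plan * cost))
```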
Topic Memory Networks for Short Text Classification | Many classification models work poorly on short texts due to data sparsity.
To address this issue, we propose topic memory networks for short text
classification with a novel topic memory mechanism to encode latent topic
representations indicative of class labels. Different from most prior work that
focuses on extending features with external knowledge or pre-trained topics,
our model jointly explores topic inference and text classification with memory
networks in an end-to-end manner. Experimental results on four benchmark
datasets show that our model outperforms state-of-the-art models on short text
classification, while also generating coherent topics.
| 2,018 | Computation and Language |
Learning Scripts as Hidden Markov Models | Scripts have been proposed to model the stereotypical event sequences found
in narratives. They can be applied to make a variety of inferences including
filling gaps in the narratives and resolving ambiguous references. This paper
proposes the first formal framework for scripts based on Hidden Markov Models
(HMMs). Our framework supports robust inference and learning algorithms, which
are lacking in previous clustering models. We develop an algorithm for
structure and parameter learning based on Expectation Maximization and evaluate
it on a number of natural datasets. The results show that our algorithm is
superior to several informed baselines for predicting missing events in partial
observation sequences.
| 2,018 | Computation and Language |
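Once script states and parameters are in place, inference over an event sequence reduces to standard HMM machinery; a scaled forward-algorithm sketch is below, with the EM learning of the parameters assumed done elsewhere.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of an observed event sequence under an HMM.

    `start[i]`, `trans[i, j]`, `emit[i, o]` are the usual HMM parameters over
    latent script states and observed event types (assumed already learned).
    Uses the scaled forward algorithm for numerical stability.
    """
    alpha = start * emit[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha = alpha / scale
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha = alpha / scale
    return loglik
```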
A Joint Model of Conversational Discourse and Latent Topics on
Microblogs | Conventional topic models are ineffective for topic extraction from microblog
messages, because the data sparseness exhibited in short messages lacking
structure and contexts results in poor message-level word co-occurrence
patterns. To address this issue, we organize microblog messages as conversation
trees based on their reposting and replying relations, and propose an
unsupervised model that jointly learns word distributions to represent: 1)
different roles of conversational discourse, 2) various latent topics in
reflecting content information. By explicitly distinguishing the probabilities
of messages with varying discourse roles in containing topical words, our model
is able to discover clusters of discourse words that are indicative of topical
content. In an automatic evaluation on large-scale microblog corpora, our joint
model yields topics with better coherence scores than competitive topic models
from previous studies. Qualitative analysis on model outputs indicates that our
model induces meaningful representations for both discourse and topics. We
further present an empirical study on microblog summarization based on the
outputs of our joint model. The results show that the jointly modeled discourse
and topic representations can effectively indicate summary-worthy content in
microblog conversations.
| 2,018 | Computation and Language |
Evaluating Multimodal Representations on Sentence Similarity: vSTS,
Visual Semantic Textual Similarity Dataset | In this paper we introduce vSTS, a new dataset for measuring textual
similarity of sentences using multimodal information. The dataset comprises
images along with their respective textual captions. We describe the dataset
both quantitatively and qualitatively, and claim that it is a valid gold
standard for measuring automatic multimodal textual similarity systems. We also
describe the initial experiments combining the multimodal information.
| 2,017 | Computation and Language |
How much should you ask? On the question structure in QA systems | Datasets that boosted state-of-the-art solutions for Question Answering (QA)
systems prove that it is possible to ask questions in a natural language
manner. However, users are still accustomed to query-like systems, where they
type in keywords to search for an answer. In this study we examine which parts
of questions are essential for obtaining a valid answer. To do so, we take
advantage of LIME, a framework that explains predictions by local
approximation. We find that grammar and natural language are largely
disregarded by QA: a state-of-the-art model can answer properly even if 'asked'
with only a few words that have high coefficients calculated with LIME. To the
best of our knowledge, this is the first time a QA model has been explained
with LIME.
| 2,018 | Computation and Language |
Does it care what you asked? Understanding Importance of Verbs in Deep
Learning QA System | In this paper we present the results of an investigation of the importance of
verbs in a deep learning QA system trained on the SQuAD dataset. We show that
main verbs in questions have little influence on the decisions made by the
system: in over 90% of the examined cases, swapping verbs for their antonyms
did not change the system's decision. We track this phenomenon down to the
insides of the net, analyzing the self-attention mechanism and the values
contained in the hidden layers of the RNN. Finally, we recognize the
characteristics of the SQuAD dataset
as the source of the problem. Our work refers to the recently popular topic of
adversarial examples in NLP, combined with investigating deep net structure.
| 2,018 | Computation and Language |
Studying the History of the Arabic Language: Language Technology and a
Large-Scale Historical Corpus | Arabic is a widely-spoken language with a long and rich history, but existing
corpora and language technology focus mostly on modern Arabic and its
varieties. Therefore, studying the history of the language has so far been
mostly limited to manual analyses on a small scale. In this work, we present a
large-scale historical corpus of the written Arabic language, spanning 1400
years. We describe our efforts to clean and process this corpus using Arabic
NLP tools, including the identification of reused text. We study the history of
the Arabic language using a novel automatic periodization algorithm, as well as
other techniques. Our findings confirm the established division of written
Arabic into Modern Standard and Classical Arabic, and confirm other established
periodizations, while suggesting that written Arabic may be divisible into
still further periods of development.
| 2,018 | Computation and Language |
Multilingual Cross-domain Perspectives on Online Hate Speech | In this report, we present a study of eight corpora of online hate speech, by
demonstrating the NLP techniques that we used to collect and analyze the
jihadist, extremist, racist, and sexist content. Analysis of the multilingual
corpora shows that the different contexts share certain characteristics in
their hateful rhetoric. To expose the main features, we have focused on text
classification, text profiling, keyword and collocation extraction, along with
manual annotation and qualitative study.
| 2,018 | Computation and Language |
On The Alignment Problem In Multi-Head Attention-Based Neural Machine
Translation | This work investigates the alignment problem in state-of-the-art multi-head
attention models based on the transformer architecture. We demonstrate that
alignment extraction in transformer models can be improved by augmenting an
additional alignment head to the multi-head source-to-target attention
component. This is used to compute sharper attention weights. We describe how
to use the alignment head to achieve competitive performance. To study the
effect of adding the alignment head, we simulate a dictionary-guided
translation task, where the user wants to guide translation using pre-defined
dictionary entries. Using the proposed approach, we achieve up to $3.8$ % BLEU
improvement when using the dictionary, in comparison to $2.4$ % BLEU in the
baseline case. We also propose alignment pruning to speed up decoding in
alignment-based neural machine translation (ANMT), which speeds up translation
by a factor of $1.8$ without loss in translation performance. We carry out
experiments on the shared WMT 2016 English$\to$Romanian news task and the BOLT
Chinese$\to$English discussion forum task.
| 2,018 | Computation and Language |
Evaluating Semantic Rationality of a Sentence: A Sememe-Word-Matching
Neural Network based on HowNet | Automatic evaluation of semantic rationality is an important yet challenging
task, and current automatic techniques cannot reliably identify whether a sentence
is semantically rational. The methods based on the language model do not
measure the sentence by rationality but by commonness. The methods based on the
similarity with human written sentences will fail if human-written references
are not available. In this paper, we propose a novel model called
Sememe-Word-Matching Neural Network (SWM-NN) to tackle semantic rationality
evaluation by taking advantage of sememe knowledge base HowNet. The advantage
is that our model can utilize a proper combination of sememes to represent the
fine-grained semantic meanings of a word within the specific contexts. We use
the fine-grained semantic representation to help the model learn the semantic
dependency among words. To evaluate the effectiveness of the proposed model, we
build a large-scale rationality evaluation dataset. Experimental results on
this dataset show that the proposed model outperforms the competitive baselines
with a 5.4% improvement in accuracy.
| 2,018 | Computation and Language |
Can LSTM Learn to Capture Agreement? The Case of Basque | Sequential neural network models are powerful tools in a variety of Natural
Language Processing (NLP) tasks. The sequential nature of these models raises
the questions: to what extent can these models implicitly learn hierarchical
structures typical to human language, and what kind of grammatical phenomena
can they acquire?
We focus on the task of agreement prediction in Basque, as a case study for a
task that requires implicit understanding of sentence structure and the
acquisition of a complex but consistent morphological system. Analyzing
experimental results from two syntactic prediction tasks -- verb number
prediction and suffix recovery -- we find that sequential models perform worse
on agreement prediction in Basque than one might expect on the basis of
previous agreement prediction work in English. Tentative findings based on
diagnostic classifiers suggest the network makes use of local heuristics as a
proxy for the hierarchical structure of the sentence. We propose the Basque
agreement prediction task as a challenging benchmark for models that attempt to
learn regularities in human language.
| 2,018 | Computation and Language |
AWE: Asymmetric Word Embedding for Textual Entailment | Textual entailment is a fundamental task in natural language processing. It
refers to the directional relation between text fragments such that the
"premise" can infer "hypothesis". In recent years deep learning methods have
achieved great success in this task. Many of them have considered the
inter-sentence word-word interactions between the premise-hypothesis pairs,
however, few of them considered the "asymmetry" of these interactions.
Different from paraphrase identification or sentence similarity evaluation,
textual entailment is essentially determining a directional (asymmetric)
relation between the premise and the hypothesis. In this paper, we propose a
simple but effective way to enhance existing textual entailment algorithms by
using asymmetric word embeddings. Experimental results on SciTail and SNLI
datasets show that the learned asymmetric word embeddings could significantly
improve the word-word interaction based textual entailment models. It is
noteworthy that the proposed AWE-DeIsTe model can get 2.1% accuracy improvement
over prior state-of-the-art on SciTail.
| 2,018 | Computation and Language |
On learning an interpreted language with recurrent models | Can recurrent neural nets, inspired by human sequential data processing,
learn to understand language? We construct simplified datasets reflecting core
properties of natural language as modeled in formal syntax and semantics:
recursive syntactic structure and compositionality. We find LSTM and GRU
networks to generalise to compositional interpretation well, but only in the
most favorable learning settings, with a well-paced curriculum, extensive
training data, and left-to-right (but not right-to-left) composition.
| 2,021 | Computation and Language |
Adversarial Propagation and Zero-Shot Cross-Lingual Transfer of Word
Vector Specialization | Semantic specialization is the process of fine-tuning pre-trained
distributional word vectors using external lexical knowledge (e.g., WordNet) to
accentuate a particular semantic relation in the specialized vector space.
While post-processing specialization methods are applicable to arbitrary
distributional vectors, they are limited to updating only the vectors of words
occurring in external lexicons (i.e., seen words), leaving the vectors of all
other words unchanged. We propose a novel approach to specializing the full
distributional vocabulary. Our adversarial post-specialization method
propagates the external lexical knowledge to the full distributional space. We
exploit words seen in the resources as training examples for learning a global
specialization function. This function is learned by combining a standard
L2-distance loss with an adversarial loss: the adversarial component produces
more realistic output vectors. We show the effectiveness and robustness of the
proposed method across three languages and on three tasks: word similarity,
dialog state tracking, and lexical simplification. We report consistent
improvements over distributional word vectors and vectors specialized by other
state-of-the-art specialization frameworks. Finally, we also propose a
cross-lingual transfer method for zero-shot specialization which successfully
specializes a full target distributional space without any lexical knowledge in
the target language and without any bilingual data.
| 2,018 | Computation and Language |
What can linguistics and deep learning contribute to each other? | Joe Pater's target article calls for greater interaction between neural
network research and linguistics. I expand on this call and show how such
interaction can benefit both fields. Linguists can contribute to research on
neural networks for language technologies by clearly delineating the linguistic
capabilities that can be expected of such systems, and by constructing
controlled experimental paradigms that can determine whether those desiderata
have been met. In the other direction, neural networks can benefit the
scientific study of language by providing infrastructure for modeling human
sentence processing and for evaluating the necessity of particular innate
constraints on language acquisition.
| 2,018 | Computation and Language |
Multimodal neural pronunciation modeling for spoken languages with
logographic origin | Graphemes of most languages encode pronunciation, though some are more
explicit than others. Languages like Spanish have a straightforward mapping
between their graphemes and phonemes, while this mapping is more convoluted for
languages like English. Spoken languages such as Cantonese present even more
challenges in pronunciation modeling: (1) they do not have a standard written
form, and (2) the closest graphemic origins are logographic Han characters, of
which only a subset implicitly encodes pronunciation. In this work, we propose
a multimodal approach to predict the
pronunciation of Cantonese logographic characters, using neural networks with a
geometric representation of logographs and pronunciation of cognates in
historically related languages. The proposed framework improves performance by
18.1% and 25.0% over unimodal and multimodal baselines, respectively.
| 2,018 | Computation and Language |
Automatic, Personalized, and Flexible Playlist Generation using
Reinforcement Learning | Songs can be well arranged by professional music curators to form a riveting
playlist that creates engaging listening experiences. However, it is
time-consuming for curators to rearrange these playlists in a timely manner to
fit future trends. By exploiting the techniques of deep learning and
reinforcement learning, in this paper, we consider music playlist generation as
a language modeling problem and solve it by the proposed attention language
model with policy gradient. We develop a systematic and interactive approach so
that the resulting playlists can be tuned flexibly according to user
preferences. Considering a playlist as a sequence of words, we first train our
attention RNN language model on baseline recommended playlists. By optimizing
suitably imposed reward functions, the model is then refined for the corresponding
preferences. The experimental results demonstrate that our approach not only
generates coherent playlists automatically but is also able to flexibly
recommend personalized playlists for diversity, novelty and freshness.
| 2,018 | Computation and Language |
Generalizing Word Embeddings using Bag of Subwords | We approach the problem of generalizing pre-trained word embeddings beyond
fixed-size vocabularies without using additional contextual information. We
propose a subword-level word vector generation model that views words as bags
of character $n$-grams. The model is simple, fast to train and provides good
vectors for rare or unseen words. Experiments show that our model achieves
state-of-the-art performance on the English word similarity task and on joint
prediction of part-of-speech tags and morphosyntactic attributes in 23
languages, suggesting our model's ability to capture the relationship between
words' textual representations and their embeddings.
| 2,018 | Computation and Language |
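As a rough, hypothetical sketch of the bag-of-character-n-grams idea described in the abstract above (not the authors' released model; the `ngram_vectors` table and all function names are illustrative assumptions), a vector for a rare or unseen word can be composed by averaging the vectors of its character n-grams:

```python
import numpy as np

def char_ngrams(word, n_min=3, n_max=5):
    """Enumerate character n-grams of a word, with boundary markers added."""
    padded = f"<{word}>"
    return [padded[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(padded) - n + 1)]

def embed_word(word, ngram_vectors, dim=300):
    """Average the vectors of the word's known character n-grams.

    ngram_vectors: dict mapping n-gram string -> np.ndarray of shape (dim,),
    assumed to have been trained so that this average approximates a
    pre-trained word embedding.
    """
    vecs = [ngram_vectors[g] for g in char_ngrams(word) if g in ngram_vectors]
    if not vecs:                      # no known subwords at all
        return np.zeros(dim)
    return np.mean(vecs, axis=0)
```

Under this sketch, an out-of-vocabulary word still receives a sensible vector as long as it shares character n-grams with words seen during training.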
Knowledge Based Machine Reading Comprehension | Machine reading comprehension (MRC) requires reasoning about both the
knowledge involved in a document and knowledge about the world. However,
existing datasets are typically dominated by questions that can be well solved
by context matching, which fail to test this capability. To encourage the
progress on knowledge-based reasoning in MRC, we present knowledge-based MRC in
this paper, and build a new dataset consisting of 40,047 question-answer pairs.
The annotation of this dataset is designed so that successfully answering the
questions requires understanding and using the knowledge involved in a document.
We implement a framework consisting of both a question answering model and a
question generation model, both of which take as input the knowledge extracted
from the document as well as relevant facts from an external knowledge base
such as Freebase, ProBase, Reverb, or NELL. Results show that incorporating
side information from the external KB improves the accuracy of the baseline
question answering system. We compare it with a standard MRC model, BiDAF,
analyze the difficulty of the dataset, and lay out the remaining challenges.
| 2,018 | Computation and Language |
Knowledge-Aware Conversational Semantic Parsing Over Web Tables | Conversational semantic parsing over tables requires knowledge acquisition and
reasoning abilities, which have not been well explored by current
state-of-the-art approaches. Motivated by this fact, we propose a
knowledge-aware semantic parser to improve parsing performance by integrating
various types of knowledge. In this paper, we consider three types of
knowledge, including grammar knowledge, expert knowledge, and external resource
knowledge. First, grammar knowledge empowers the model to effectively replicate
previously generated logical forms, which handles the co-reference and ellipsis
phenomena in conversation. Second, based on expert knowledge, we propose a
decomposable model, which is more controllable than traditional end-to-end
models that put the entire burden of learning on trial-and-error. Third,
external resource knowledge, i.e., knowledge
provided by a pre-trained language model or an entity typing model, is used to
improve the representation of question and table for a better semantic
understanding. We conduct experiments on the SequentialQA dataset. Results show
that our knowledge-aware model outperforms the state-of-the-art approaches.
Incremental experimental results also prove the usefulness of various
knowledge. Further analysis shows that our approach has the ability to derive
the meaning representation of a context-dependent utterance by leveraging
previously generated outcomes.
| 2,018 | Computation and Language |
Retrieval-Enhanced Adversarial Training for Neural Response Generation | Dialogue systems are usually built on either generation-based or
retrieval-based approaches, yet neither benefits from the advantages of the
other. In this paper, we propose a Retrieval-Enhanced Adversarial
Training (REAT) method for neural response generation. Distinct from existing
approaches, the REAT method leverages an encoder-decoder framework in terms of
an adversarial training paradigm, while taking advantage of N-best response
candidates from a retrieval-based system to construct the discriminator. An
empirical study on a large-scale, publicly available benchmark dataset shows that
the REAT method significantly outperforms the vanilla Seq2Seq model as well as
the conventional adversarial training approach.
| 2,019 | Computation and Language |
Incorporating Syntactic and Semantic Information in Word Embeddings
using Graph Convolutional Networks | Word embeddings have been widely adopted across several NLP applications.
Most existing word embedding methods utilize sequential context of a word to
learn its embedding. While there have been some attempts at utilizing syntactic
context of a word, such methods result in an explosion of the vocabulary size.
In this paper, we overcome this problem by proposing SynGCN, a flexible Graph
Convolution based method for learning word embeddings. SynGCN utilizes the
dependency context of a word without increasing the vocabulary size. Word
embeddings learned by SynGCN outperform existing methods on various intrinsic
and extrinsic tasks and provide an advantage when used with ELMo. We also
propose SemGCN, an effective framework for incorporating diverse semantic
knowledge for further enhancing learned word representations. We make the
source code of both models available to encourage reproducible research.
| 2,019 | Computation and Language |
Neural Melody Composition from Lyrics | In this paper, we study the novel task of composing music from natural
language. Given the lyrics as input, we propose a melody composition
model that generates lyrics-conditional melody as well as the exact alignment
between the generated melody and the given lyrics simultaneously. More
specifically, we develop the melody composition model based on the
sequence-to-sequence framework. It consists of two neural encoders to encode
the current lyrics and the context melody respectively, and a hierarchical
decoder to jointly produce musical notes and the corresponding alignment.
Experimental results on lyrics-melody pairs of 18,451 pop songs demonstrate the
effectiveness of our proposed methods. In addition, we apply singing voice
synthesizer software to synthesize the "singing" of the lyrics and melodies for
human evaluation. Results indicate that our generated melodies are more
melodious and tuneful compared with the baseline method.
| 2,018 | Computation and Language |
Hate Speech Dataset from a White Supremacy Forum | Hate speech is commonly defined as any communication that disparages a target
group of people based on some characteristic such as race, colour, ethnicity,
gender, sexual orientation, nationality, religion, or other characteristic. Due
to the massive rise of user-generated web content on social media, the amount
of hate speech is also steadily increasing. Over the past years, interest in
online hate speech detection and, particularly, the automation of this task has
continuously grown, along with the societal impact of the phenomenon. This
paper describes a hate speech dataset composed of thousands of sentences
manually labelled as containing hate speech or not. The sentences have been
extracted from Stormfront, a white supremacist forum. A custom annotation tool
has been developed to carry out the manual labelling task which, among other
things, allows the annotators to choose whether to read the context of a
sentence before labelling it. The paper also provides a thoughtful qualitative
and quantitative study of the resulting dataset and several baseline
experiments with different classification models. The dataset is publicly
available.
| 2,018 | Computation and Language |
Emo2Vec: Learning Generalized Emotion Representation by Multi-task
Training | In this paper, we propose Emo2Vec which encodes emotional semantics into
vectors. We train Emo2Vec by multi-task learning on six different emotion-related
tasks, including emotion/sentiment analysis, sarcasm classification, stress
detection, abusive language classification, insult detection, and personality
recognition. Our evaluation of Emo2Vec shows that it outperforms existing
affect-related representations, such as Sentiment-Specific Word Embedding and
DeepMoji embeddings with much smaller training corpora. When concatenated with
GloVe, Emo2Vec achieves performance competitive with state-of-the-art results on
several tasks using a simple logistic regression classifier.
| 2,018 | Computation and Language |
End-to-end Audiovisual Speech Activity Detection with Bimodal Recurrent
Neural Models | Speech activity detection (SAD) plays an important role in current speech
processing systems, including automatic speech recognition (ASR). SAD is
particularly difficult in environments with acoustic noise. A practical
solution is to incorporate visual information, increasing the robustness of the
SAD approach. An audiovisual system has the advantage of being robust to
different speech modes (e.g., whisper speech) or background noise. Recent
advances in audiovisual speech processing using deep learning have opened
opportunities to capture in a principled way the temporal relationships between
acoustic and visual features. This study explores this idea, proposing a
\emph{bimodal recurrent neural network} (BRNN) framework for SAD. The approach
models the temporal dynamics of the sequential audiovisual data, improving the
accuracy and robustness of the proposed SAD system. Instead of estimating
hand-crafted features, the study investigates an end-to-end training approach,
where acoustic and visual features are directly learned from the raw data
during training. The experimental evaluation considers a large audiovisual
corpus with over 60.8 hours of recordings, collected from 105 speakers. The
results demonstrate that the proposed framework leads to absolute improvements
of up to 1.2% under practical scenarios over an audio-only VAD baseline
implemented with a deep neural network (DNN). The proposed approach achieves a
92.7% F1-score when evaluated using the sensors from a portable tablet in a
noisy acoustic environment, which is only 1.0% lower than the performance
obtained under ideal conditions (e.g., clean speech obtained with a high
definition camera and a close-talking microphone).
| 2,019 | Computation and Language |
Unsupervised Controllable Text Formalization | We propose a novel framework for controllable natural language
transformation. Realizing that the requirement of a parallel corpus is
practically unsustainable for controllable generation tasks, an unsupervised
training scheme is introduced. The crux of the framework is a deep neural
encoder-decoder that is reinforced with text-transformation knowledge through
auxiliary modules (called scorers). The scorers, based on off-the-shelf
language processing tools, decide the learning scheme of the encoder-decoder
based on its actions. We apply this framework for the text-transformation task
of formalizing an input text by improving its readability grade; the degree of
required formalization can be controlled by the user at run-time. Experiments
on public datasets demonstrate the efficacy of our model towards: (a)
transforming a given text to a more formal style, and (b) introducing
an appropriate amount of formalness in the output text pertaining to the input
control. Our code and datasets are released for academic use.
| 2,019 | Computation and Language |
Solving Sinhala Language Arithmetic Problems using Neural Networks | A methodology is presented to solve arithmetic problems in the Sinhala language
using a neural network. The system comprises (a) keyword identification, (b)
question identification, and (c) mathematical operation identification, which
are combined using a neural network. Naive Bayes classification is used to
identify keywords, and a Conditional Random Field to identify the question and
the operation that should be performed on the identified keywords to achieve
the expected result. One-vs.-all classification of sentences is performed
using a neural network. All components are combined through the neural
network, which builds an equation to solve the problem. The paper compares the
methodologies of ARIS and Mahoshadha to the method presented here. Mahoshadha2
learns to solve arithmetic problems with an accuracy of 76%.
| 2,018 | Computation and Language |
Game-Based Video-Context Dialogue | Current dialogue systems focus more on textual and speech context knowledge
and are usually based on two speakers. Some recent work has investigated static
image-based dialogue. However, several real-world human interactions also
involve dynamic visual context (similar to videos) as well as dialogue
exchanges among multiple speakers. To move closer towards such multimodal
conversational skills and visually-situated applications, we introduce a new
video-context, many-speaker dialogue dataset based on live-broadcast soccer
game videos and chats from Twitch.tv. This challenging testbed allows us to
develop visually-grounded dialogue models that should generate relevant
temporal and spatial event language from the live video, while also being
relevant to the chat history. For strong baselines, we also present several
discriminative and generative models, e.g., based on tridirectional attention
flow (TriDAF). We evaluate these models via retrieval ranking-recall, automatic
phrase-matching metrics, as well as human evaluation studies. We also present
dataset analyses, model ablations, and visualizations to understand the
contribution of different modalities and model components.
| 2,018 | Computation and Language |
Closed-Book Training to Improve Summarization Encoder Memory | A good neural sequence-to-sequence summarization model should have a strong
encoder that can distill and memorize the important information from long input
texts so that the decoder can generate salient summaries based on the encoder's
memory. In this paper, we aim to improve the memorization capabilities of the
encoder of a pointer-generator model by adding an additional 'closed-book'
decoder without attention and pointer mechanisms. Such a decoder forces the
encoder to be more selective in the information encoded in its memory state
because the decoder can't rely on the extra information provided by the
attention and possibly copy modules, and hence improves the entire model. On
the CNN/Daily Mail dataset, our 2-decoder model outperforms the baseline
significantly in terms of ROUGE and METEOR metrics, for both cross-entropy and
reinforced setups (and on human evaluation). Moreover, our model also achieves
higher scores in a test-only DUC-2002 generalizability setup. We further
present a memory ability test, two saliency metrics, as well as several
sanity-check ablations (based on fixed-encoder, gradient-flow cut, and model
capacity) to prove that the encoder of our 2-decoder model does in fact learn
stronger memory representations than the baseline encoder.
| 2,018 | Computation and Language |
Jump to better conclusions: SCAN both left and right | Lake and Baroni (2018) recently introduced the SCAN data set, which consists
of simple commands paired with action sequences and is intended to test the
strong generalization abilities of recurrent sequence-to-sequence models. Their
initial experiments suggested that such models may fail because they lack the
ability to extract systematic rules. Here, we take a closer look at SCAN and
show that it does not always capture the kind of generalization that it was
designed for. To mitigate this we propose a complementary dataset, which
requires mapping actions back to the original commands, called NACS. We show
that models that do well on SCAN do not necessarily do well on NACS, and that
NACS exhibits properties more closely aligned with realistic use-cases for
sequence-to-sequence models.
| 2,020 | Computation and Language |
Semantic WordRank: Generating Finer Single-Document Summarizations | We present Semantic WordRank (SWR), an unsupervised method for generating an
extractive summary of a single document. Built on a weighted word graph with
semantic and co-occurrence edges, SWR scores sentences using an
article-structure-biased PageRank algorithm with a Softplus function
adjustment, and promotes topic diversity using spectral subtopic clustering
under the Word Mover's Distance metric. We evaluate SWR on the DUC-02 and
SummBank datasets and show that SWR produces better summaries than the
state-of-the-art algorithms over DUC-02 under common ROUGE measures. We then
show that, under the same measures over SummBank, SWR outperforms each of the
three human annotators (aka. judges) and compares favorably with the combined
performance of all judges.
| 2,018 | Computation and Language |
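The abstract above builds on a graph-ranking backbone; the following minimal sketch shows only a generic power-iteration PageRank over a weighted word graph (the article-structure bias, Softplus adjustment, and subtopic clustering described above are not reproduced, and all names are illustrative assumptions):

```python
import numpy as np

def pagerank(weights, damping=0.85, iters=50):
    """Plain power-iteration PageRank over a weighted word graph.

    weights: (n, n) nonnegative matrix of edge weights
             (e.g. co-occurrence and semantic-similarity edges).
    Returns one score per word node; sentence scores could then be
    aggregated from the scores of the words they contain.
    """
    n = weights.shape[0]
    col_sums = weights.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against sink nodes
    transition = weights / col_sums        # column-stochastic transition matrix
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * transition @ scores
    return scores
```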
Zero-Shot Cross-lingual Classification Using Multilingual Neural Machine
Translation | Transferring representations from large supervised tasks to downstream tasks
has shown promising results in AI fields such as Computer Vision and Natural
Language Processing (NLP). In parallel, the recent progress in Machine
Translation (MT) has enabled one to train multilingual Neural MT (NMT) systems
that can translate between multiple languages and are also capable of
performing zero-shot translation. However, little attention has been paid to
leveraging representations learned by a multilingual NMT system to enable
zero-shot multilinguality in other NLP tasks. In this paper, we demonstrate a
simple framework, a multilingual Encoder-Classifier, for cross-lingual transfer
learning by reusing the encoder from a multilingual NMT system and stitching it
with a task-specific classifier component. Our proposed model achieves
significant improvements in the English setup on three benchmark tasks - Amazon
Reviews, SST and SNLI. Further, our system can perform classification in a new
language for which no classification data was seen during training, showing
that zero-shot classification is possible and remarkably competitive. In order
to understand the underlying factors contributing to this finding, we conducted
a series of analyses on the effect of the shared vocabulary, the training data
type for NMT, classifier complexity, encoder representation power, and model
generalization on zero-shot performance. Our results provide strong evidence
that the representations learned from multilingual NMT systems are widely
applicable across languages and tasks.
| 2,018 | Computation and Language |
Learning to Summarize Radiology Findings | The Impression section of a radiology report summarizes crucial radiology
findings in natural language and plays a central role in communicating these
findings to physicians. However, the process of generating impressions by
summarizing findings is time-consuming for radiologists and prone to errors. We
propose to automate the generation of radiology impressions with neural
sequence-to-sequence learning. We further propose a customized neural model for
this task which learns to encode the study background information and use this
information to guide the decoding process. On a large dataset of radiology
reports collected from actual hospital studies, our model outperforms existing
non-neural and neural baselines under the ROUGE metrics. In a blind experiment,
a board-certified radiologist indicated that 67% of sampled system summaries
are at least as good as the corresponding human-written summaries, suggesting
significant clinical validity. To our knowledge our work represents the first
attempt in this direction.
| 2,018 | Computation and Language |
SafeCity: Understanding Diverse Forms of Sexual Harassment Personal
Stories | With the recent rise of #MeToo, an increasing number of personal stories
about sexual harassment and sexual abuse have been shared online. In order to
push forward the fight against such harassment and abuse, we present the task
of automatically categorizing and analyzing various forms of sexual harassment,
based on stories shared on the online forum SafeCity. For the labels of
groping, ogling, and commenting, our single-label CNN-RNN model achieves an
accuracy of 86.5%, and our multi-label model achieves a Hamming score of 82.5%.
Furthermore, we present analysis using LIME, first-derivative saliency
heatmaps, activation clustering, and embedding visualization to interpret
neural model predictions and demonstrate how this extracts features that can
help automatically fill out incident reports, identify unsafe areas, avoid
unsafe practices, and 'pin the creeps'.
| 2,018 | Computation and Language |
T\"ubingen-Oslo system: Linear regression works the best at Predicting
Current and Future Psychological Health from Childhood Essays in the CLPsych
2018 Shared Task | This paper describes our efforts in predicting current and future
psychological health from childhood essays within the scope of the CLPsych-2018
Shared Task. We experimented with a number of different models, including
recurrent and convolutional networks, Poisson regression, support vector
regression, and L1 and L2 regularized linear regression. We obtained the best
results on the training/development data with L2 regularized linear regression
(ridge regression), which also got the best scores on the main metrics in the
official testing for task A (predicting psychological health from essays
written at the age of 11 years) and task B (predicting later psychological
health from essays written at the age of 11).
| 2,018 | Computation and Language |
LiveBot: Generating Live Video Comments Based on Visual and Textual
Contexts | We introduce the task of automatic live commenting. Live commenting, which is
also called `video barrage', is an emerging feature on online video sites that
allows real-time comments from viewers to fly across the screen like bullets or
roll at the right side of the screen. The live comments are a mixture of
opinions about the video and chit-chat with other commenters. Automatic live
commenting requires AI agents to comprehend the videos and interact with human
viewers who also make comments, so it is a good testbed of an AI agent's
ability to deal with both dynamic vision and language. In this work, we
construct a large-scale live comment dataset with 2,361 videos and 895,929 live
comments. Then, we introduce two neural models to generate live comments based
on the visual and textual contexts, which achieve better performance than
previous neural baselines such as the sequence-to-sequence model. Finally, we
provide a retrieval-based evaluation protocol for automatic live commenting
where the model is asked to sort a set of candidate comments based on the
log-likelihood score, and is evaluated on metrics such as mean reciprocal rank.
Putting it all together, we demonstrate the first `LiveBot'.
| 2,018 | Computation and Language |
Unsupervised Machine Commenting with Neural Variational Topic Model | Article comments can provide supplementary opinions and facts for readers,
thereby increasing the attractiveness and engagement of articles. Therefore,
automatic commenting is helpful in improving the activeness of communities
such as online forums and news websites. Previous work shows that
training an automatic commenting system requires large parallel corpora.
Although some articles are naturally paired with comments on some
websites, most articles and comments are unpaired on the Internet. To fully
exploit the unpaired data, we completely remove the need for parallel data and
propose a novel unsupervised approach to train an automatic article commenting
model, relying on nothing but unpaired articles and comments. Our model is
based on a retrieval-based commenting framework, which uses news to retrieve
comments based on the similarity of their topics. The topic representation is
obtained from a neural variational topic model, which is trained in an
unsupervised manner. We evaluate our model on a news comment dataset.
Experiments show that our proposed topic-based approach significantly
outperforms previous lexicon-based models. The model also profits from paired
corpora and achieves state-of-the-art performance under semi-supervised
scenarios.
| 2,018 | Computation and Language |
XNLI: Evaluating Cross-lingual Sentence Representations | State-of-the-art natural language processing systems rely on supervision in
the form of annotated data to learn competent models. These models are
generally trained on data in a single language (usually English), and cannot be
directly used beyond that language. Since collecting data in every language is
not realistic, there has been a growing interest in cross-lingual language
understanding (XLU) and low-resource cross-language transfer. In this work, we
construct an evaluation set for XLU by extending the development and test sets
of the Multi-Genre Natural Language Inference Corpus (MultiNLI) to 15
languages, including low-resource languages such as Swahili and Urdu. We hope
that our dataset, dubbed XNLI, will catalyze research in cross-lingual sentence
understanding by providing an informative standard evaluation task. In
addition, we provide several baselines for multilingual sentence understanding,
including two based on machine translation systems, and two that use parallel
data to train aligned multilingual bag-of-words and LSTM encoders. We find that
XNLI represents a practical and challenging evaluation suite, and that directly
translating the test data yields the best performance among available
baselines.
| 2,018 | Computation and Language |
IncSQL: Training Incremental Text-to-SQL Parsers with Non-Deterministic
Oracles | We present a sequence-to-action parsing approach for the natural language to
SQL task that incrementally fills the slots of a SQL query with feasible
actions from a pre-defined inventory. To account for the fact that typically
there are multiple correct SQL queries with the same or very similar semantics,
we draw inspiration from syntactic parsing techniques and propose to train our
sequence-to-action models with non-deterministic oracles. We evaluate our
models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the
test set, a 2.1% absolute improvement over the models trained with traditional
static oracles assuming a single correct target SQL query. When further
combined with the execution-guided decoding strategy, our model sets a new
state-of-the-art performance at an execution accuracy of 87.1%.
| 2,018 | Computation and Language |
On the Strength of Character Language Models for Multilingual Named
Entity Recognition | Character-level patterns have been widely used as features in English Named
Entity Recognition (NER) systems. However, to date there has been no direct
investigation of the inherent differences between name and non-name tokens in
text, nor whether this property holds across multiple languages. This paper
analyzes the capabilities of corpus-agnostic Character-level Language Models
(CLMs) in the binary task of distinguishing name tokens from non-name tokens.
We demonstrate that CLMs provide a simple and powerful model for capturing
these differences, identifying named entity tokens in a diverse set of
languages at close to the performance of full NER systems. Moreover, by adding
very simple CLM-based features we can significantly improve the performance of
an off-the-shelf NER system for multiple languages.
| 2,018 | Computation and Language |
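As a toy illustration of the kind of corpus-agnostic character-level language model the abstract above refers to (a character bigram model with add-one smoothing, far simpler than the paper's CLMs; the class, data, and variable names are assumptions), one could train one model on name tokens and one on non-name tokens and compare their per-character scores:

```python
import math
from collections import Counter

class CharBigramLM:
    """Tiny character bigram LM with add-one smoothing, for scoring tokens."""

    def __init__(self, tokens):
        self.bigrams, self.unigrams = Counter(), Counter()
        self.vocab = set()
        for tok in tokens:
            chars = f"^{tok.lower()}$"        # ^ and $ mark token boundaries
            self.vocab.update(chars)
            for a, b in zip(chars, chars[1:]):
                self.bigrams[(a, b)] += 1
                self.unigrams[a] += 1

    def score(self, token):
        """Average log-probability per character transition."""
        chars = f"^{token.lower()}$"
        logp = 0.0
        for a, b in zip(chars, chars[1:]):
            num = self.bigrams[(a, b)] + 1                 # add-one smoothing
            den = self.unigrams[a] + len(self.vocab)
            logp += math.log(num / den)
        return logp / (len(chars) - 1)

# Hypothetical usage: flag a token as a name if the name-trained LM scores it higher.
name_lm = CharBigramLM(["Jakarta", "Nairobi", "Katrina"])
word_lm = CharBigramLM(["the", "walked", "quickly", "house"])
is_name = name_lm.score("Nakuru") > word_lm.score("Nakuru")
```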
Freezing Subnetworks to Analyze Domain Adaptation in Neural Machine
Translation | To better understand the effectiveness of continued training, we analyze the
major components of a neural machine translation system (the encoder, decoder,
and each embedding space) and consider each component's contribution to, and
capacity for, domain adaptation. We find that freezing any single component
during continued training has minimal impact on performance, and that
performance is surprisingly good when a single component is adapted while
holding the rest of the model fixed. We also find that continued training does
not move the model very far from the out-of-domain model, compared to a
sensitivity analysis metric, suggesting that the out-of-domain model can
provide a good generic initialization for the new domain.
| 2,018 | Computation and Language |
Automatic Catchphrase Extraction from Legal Case Documents via Scoring
using Deep Neural Networks | In this paper, we present a method for automatic catchphrase extraction from
legal case documents. We utilize deep neural networks to construct the scoring
model of our extraction system. We achieve performance comparable to systems
that use corpus-wide and citation information, which we do not use in our system.
| 2,018 | Computation and Language |
Unsupervised Abstractive Sentence Summarization using Length Controlled
Variational Autoencoder | In this work we present an unsupervised approach to summarizing sentences in
an abstractive way using a Variational Autoencoder (VAE). VAEs are known to
learn a semantically rich latent variable representing the high-dimensional
input. VAEs are trained by learning to reconstruct the input from the
probabilistic latent variable. Explicitly providing information about the
output length during training discourages the VAE from encoding this
information in the latent variable, so the length can be manipulated during
inference. Instructing the decoder to produce a shorter output sequence leads
to expressing the input sentence with fewer words. We show on different
summarization datasets that these shorter sentences cannot beat a simple
baseline but yield higher ROUGE scores than trying to reconstruct the whole
sentence.
| 2,018 | Computation and Language |
SQL-to-Text Generation with Graph-to-Sequence Model | Previous work approaches the SQL-to-text generation task using vanilla
Seq2Seq models, which may not fully capture the inherent graph-structured
information in the SQL query. In this paper, we first introduce a strategy to
represent the SQL query as a directed graph and then employ a graph-to-sequence
model to encode the global structure information into node embeddings. This
model can effectively learn the correlation between the SQL query pattern and
its interpretation. Experimental results on the WikiSQL dataset and
Stackoverflow dataset show that our model significantly outperforms the Seq2Seq
and Tree2Seq baselines, achieving the state-of-the-art performance.
| 2,019 | Computation and Language |
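To make concrete what "representing the SQL query as a directed graph" can look like, here is a hand-built toy example (the actual graph schema and node types used in the paper may differ; the query and edge conventions below are purely illustrative):

```python
def toy_sql_graph():
    """Directed graph for: SELECT name FROM city WHERE population > 1000000.

    Nodes are query tokens/clauses; edges point from a clause or operator to
    its arguments, so a graph encoder can aggregate query structure rather
    than a flat token sequence.
    """
    nodes = ["SELECT", "name", "FROM", "city", "WHERE", ">", "population", "1000000"]
    edges = [
        ("SELECT", "name"),        # selected column
        ("SELECT", "FROM"),        # clause linkage
        ("FROM", "city"),          # source table
        ("SELECT", "WHERE"),       # clause linkage
        ("WHERE", ">"),            # filter condition
        (">", "population"),       # left operand
        (">", "1000000"),          # right operand
    ]
    return nodes, edges
```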
Supervised Machine Learning for Extractive Query Based Summarisation of
Biomedical Data | The automation of text summarisation of biomedical publications is a pressing
need due to the plethora of information available on-line. This paper explores
the impact of several supervised machine learning approaches for extracting
multi-document summaries for given queries. In particular, we compare
classification and regression approaches for query-based extractive
summarisation using data provided by the BioASQ Challenge. We tackled the
problem of annotating sentences for training classification systems and show
that a simple annotation approach outperforms regression-based summarisation.
| 2,018 | Computation and Language |
Macquarie University at BioASQ 6b: Deep learning and deep reinforcement
learning for query-based multi-document summarisation | This paper describes Macquarie University's contribution to the BioASQ
Challenge (BioASQ 6b, Phase B). We focused on the extraction of the ideal
answers, and the task was approached as an instance of query-based
multi-document summarisation. In particular, this paper focuses on the
experiments related to the deep learning and reinforcement learning approaches
used in the submitted runs. The best run used a deep learning model under a
regression-based framework. The deep learning architecture used features
derived from the output of LSTM chains on word embeddings, plus features based
on similarity with the query, and sentence position. The reinforcement learning
approach was a proof-of-concept prototype that trained a global policy using
REINFORCE. The global policy was implemented as a neural network that used
$tf.idf$ features encoding the candidate sentence, question, and context.
| 2,018 | Computation and Language |
Characterizing Variation in Crowd-Sourced Data for Training Neural
Language Generators to Produce Stylistically Varied Outputs | One of the biggest challenges of end-to-end language generation from meaning
representations in dialogue systems is making the outputs more natural and
varied. Here we take a large corpus of 50K crowd-sourced utterances in the
restaurant domain and develop text analysis methods that systematically
characterize types of sentences in the training data. We then automatically
label the training data to allow us to conduct two kinds of experiments with a
neural generator. First, we test the effect of training the system with
different stylistic partitions and quantify the effect of smaller, but more
stylistically controlled training data. Second, we propose a method of labeling
the style variants during training, and show that we can modify the style of
the generated utterances using our stylistic labels. We contrast and compare
these methods that can be used with any existing large corpus, showing how they
vary in terms of semantic quality and stylistic control.
| 2,018 | Computation and Language |
Skeleton-to-Response: Dialogue Generation Guided by Retrieval Memory | For dialogue response generation, traditional generative models generate
responses solely from input queries. Such models rely on insufficient
information for generating a specific response since a certain query could be
answered in multiple ways. Consequently, those models tend to output generic
and dull responses, impeding the generation of informative utterances.
Recently, researchers have attempted to fill the information gap by exploiting
information retrieval techniques. When generating a response for a current
query, similar dialogues retrieved from the entire training data are considered
as an additional knowledge source. While this may harvest massive information,
the generative models could be overwhelmed, leading to undesirable performance.
In this paper, we propose a new framework which exploits retrieval results via
a skeleton-then-response paradigm. At first, a skeleton is generated by
revising the retrieved responses. Then, a novel generative model uses both the
generated skeleton and the original query for response generation. Experimental
results show that our approaches significantly improve the diversity and
informativeness of the generated responses.
| 2,020 | Computation and Language |
Numeral Understanding in Financial Tweets for Fine-grained Crowd-based
Forecasting | Numerals, which carry much of the information in financial documents, are crucial for
financial decision making. They play different roles in financial analysis
processes. This paper is aimed at understanding the meanings of numerals in
financial tweets for fine-grained crowd-based forecasting. We propose a
taxonomy that classifies the numerals in financial tweets into 7 categories,
and further extend some of these categories into several subcategories. Neural
network-based models with word and character-level encoders are proposed for
7-way classification and 17-way classification. We perform a backtest to confirm
the effectiveness of the numeric opinions made by the crowd. This work is the
first attempt to understand numerals in financial social media data, and we
provide the first comparison of fine-grained opinion of individual investors
and analysts based on their forecast price. The numeral corpus used in our
experiments, called FinNum 1.0, is available for research purposes.
| 2,019 | Computation and Language |
Ground Truth for training OCR engines on historical documents in German
Fraktur and Early Modern Latin | In this paper we describe a dataset of German and Latin \textit{ground truth}
(GT) for historical OCR in the form of printed text line images paired with
their transcription. This dataset, called \textit{GT4HistOCR}, consists of
313,173 line pairs covering a wide period of printing dates from incunabula
from the 15th century to 19th century books printed in Fraktur types and is
openly available under a CC-BY 4.0 license. The special form of GT as line
image/transcription pairs makes it directly usable to train state-of-the-art
recognition models for OCR software employing recurrent neural networks in LSTM
architecture such as Tesseract 4 or OCRopus. We also provide some pretrained
OCRopus models for subcorpora of our dataset yielding between 95\% (early
printings) and 98\% (19th century Fraktur printings) character accuracy rates
on unseen test cases, a Perl script to harmonize GT produced by different
transcription rules, and give hints on how to construct GT for OCR purposes
which has requirements that may differ from linguistically motivated
transcriptions.
| 2,018 | Computation and Language |
Extending Neural Generative Conversational Model using External
Knowledge Sources | The use of connectionist approaches in conversational agents has been
progressing rapidly due to the availability of large corpora. However, current
generative dialogue models often lack coherence and are content-poor. This work
proposes an architecture to incorporate unstructured knowledge sources to
enhance the next utterance prediction in chit-chat type of generative dialogue
models. We focus on Sequence-to-Sequence (Seq2Seq) conversational agents
trained with the Reddit News dataset, and consider incorporating external
knowledge from Wikipedia summaries as well as from the NELL knowledge base. Our
experiments show faster training time and improved perplexity when leveraging
external knowledge.
| 2,018 | Computation and Language |
Events Beyond ACE: Curated Training for Events | We explore a human-driven approach to annotation, curated training (CT), in
which annotation is framed as teaching the system by using interactive search
to identify informative snippets of text to annotate, unlike traditional
approaches which either annotate preselected text or use active learning. A
trained annotator performed 80 hours of CT for the thirty event types of the
NIST TAC KBP Event Argument Extraction evaluation. Combining this annotation
with ACE results in a 6% reduction in error and the learning curve of CT
plateaus more slowly than for full-document annotation. Three NLP researchers
performed CT for one event type and showed much sharper learning curves with
all three exceeding ACE performance in less than ninety minutes, suggesting
that CT can provide further benefits when the annotator deeply understands the
system.
| 2,018 | Computation and Language |
Geo-Text Data and Data-Driven Geospatial Semantics | Many datasets nowadays contain links between geographic locations and natural
language texts. These links can be geotags, such as geotagged tweets or
geotagged Wikipedia pages, in which location coordinates are explicitly
attached to texts. These links can also be place mentions, such as those in
news articles, travel blogs, or historical archives, in which texts are
implicitly connected to the mentioned places. This kind of data is referred to
as geo-text data. The availability of large amounts of geo-text data brings
both challenges and opportunities. On the one hand, it is challenging to
automatically process this kind of data due to the unstructured texts and the
complex spatial footprints of some places. On the other hand, geo-text data
offers unique research opportunities through the rich information contained in
texts and the special links between texts and geography. As a result, geo-text
data facilitates various studies especially those in data-driven geospatial
semantics. This paper discusses geo-text data and related concepts. With a
focus on data-driven research, this paper systematically reviews a large number
of studies that have discovered multiple types of knowledge from geo-text data.
Based on the literature review, a generalized workflow is extracted and key
challenges for future work are discussed.
| 2,018 | Computation and Language |
Graph Convolutional Networks for Text Classification | Text classification is an important and classical problem in natural language
processing. There have been a number of studies that applied convolutional
neural networks (convolution on regular grid, e.g., sequence) to
classification. However, only a limited number of studies have explored the
more flexible graph convolutional neural networks (convolution on non-grid,
e.g., arbitrary graph) for the task. In this work, we propose to use graph
convolutional networks for text classification. We build a single text graph
for a corpus based on word co-occurrence and document word relations, then
learn a Text Graph Convolutional Network (Text GCN) for the corpus. Our Text
GCN is initialized with one-hot representations for words and documents; it then
jointly learns the embeddings for both words and documents, as supervised by
the known class labels for documents. Our experimental results on multiple
benchmark datasets demonstrate that a vanilla Text GCN without any external
word embeddings or knowledge outperforms state-of-the-art methods for text
classification. On the other hand, Text GCN also learns predictive word and
document embeddings. In addition, experimental results show that the
improvement of Text GCN over state-of-the-art comparison methods becomes more
prominent as we lower the percentage of training data, suggesting the
robustness of Text GCN to less training data in text classification.
| 2,018 | Computation and Language |
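A minimal sketch of the graph-convolution step that the abstract above relies on, assuming the heterogeneous text graph (word-word and document-word edges) has already been assembled into an adjacency matrix; this shows the standard GCN propagation rule in NumPy, not the authors' implementation, and the activation choice is an assumption:

```python
import numpy as np

def gcn_layer(adj, features, weight, activation=lambda x: np.maximum(x, 0)):
    """One graph-convolution step: activation(D^-1/2 (A + I) D^-1/2 X W).

    adj:      (n, n) adjacency over word and document nodes
              (e.g. PMI word-word edges and TF-IDF document-word edges).
    features: (n, d) node features (e.g. one-hot node identities).
    weight:   (d, d_out) trainable weight matrix.
    """
    a = adj + np.eye(adj.shape[0])              # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt         # symmetric normalization
    return activation(a_hat @ features @ weight)

# A two-layer, Text-GCN-style classifier would stack two such layers and apply
# a softmax over the document nodes' outputs.
```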
CLUSE: Cross-Lingual Unsupervised Sense Embeddings | This paper proposes a modularized sense induction and representation learning
model that jointly learns bilingual sense embeddings that align well in the
vector space, where the cross-lingual signal in the English-Chinese parallel
corpus is exploited to capture the collocation and distributed characteristics
in the language pair. The model is evaluated on the Stanford Contextual Word
Similarity (SCWS) dataset to ensure the quality of monolingual sense
embeddings. In addition, we introduce Bilingual Contextual Word Similarity
(BCWS), a large and high-quality dataset for evaluating cross-lingual sense
embeddings, which is the first attempt at measuring whether the learned
embeddings are indeed aligned well in the vector space. The proposed approach
shows the superior quality of sense embeddings evaluated in both monolingual
and bilingual spaces.
| 2,018 | Computation and Language |
Abstractive Dialogue Summarization with Sentence-Gated Modeling
Optimized by Dialogue Acts | Neural abstractive summarization has been increasingly studied, where the
prior work mainly focused on summarizing single-speaker documents (news,
scientific publications, etc). In dialogues, there are different interactions
between speakers, which are usually defined as dialogue acts. The interactive
signals may provide informative cues for better summarizing dialogues. This
paper proposes to explicitly leverage dialogue acts in a neural summarization
model, where a sentence-gated mechanism is designed for modeling the
relationship between dialogue acts and the summary. The experiments show that
our proposed model significantly improves the abstractive summarization
performance compared to the state-of-the-art baselines on the AMI meeting corpus,
demonstrating the usefulness of the interactive signal provided by dialogue
acts.
| 2,018 | Computation and Language |
Neural Networks and Quantifier Conservativity: Does Data Distribution
Affect Learnability? | All known natural language determiners are conservative. Psycholinguistic
experiments indicate that children exhibit a corresponding learnability bias
when faced with the task of learning new determiners. However, recent work
indicates that this bias towards conservativity is not observed during the
training stage of artificial neural networks. In this work, we investigate
whether the learnability bias exhibited by children is in part due to the
distribution of quantifiers in natural language. We share results of five
experiments, contrasted by the distribution of conservative vs.
non-conservative determiners in the training data. We demonstrate that the
acquisitional issues with non-conservative quantifiers cannot be explained by
the distribution of natural language data, which favors conservative
quantifiers. This finding indicates that the bias observed in language
acquisition might be innate or representational.
| 2,018 | Computation and Language |