Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64) | Categories (string, 1 class)
---|---|---|---|
Zero-Shot Dual Machine Translation | Neural Machine Translation (NMT) systems rely on large amounts of parallel
data. This is a major challenge for low-resource languages. Building on recent
work on unsupervised and semi-supervised methods, we present an approach that
combines zero-shot and dual learning. The latter relies on reinforcement
learning, to exploit the duality of the machine translation task, and requires
only monolingual data for the target language pair. Experiments show that a
zero-shot dual system, trained on English-French and English-Spanish,
outperforms by large margins a standard NMT system in zero-shot translation
performance on Spanish-French (both directions). The zero-shot dual method
comes within 2.2 BLEU points of a comparable supervised setting. Our method
also yields improvements when a small amount of parallel data for the
zero-shot language pair is available. When we add Russian, extending our
experiments to jointly model 6 zero-shot translation directions, all
directions improve by 4 to 15 BLEU points, again reaching performance near
that of the supervised setting.
| 2018 | Computation and Language |
Mixed-Precision Training for NLP and Speech Recognition with OpenSeq2Seq | We present OpenSeq2Seq - a TensorFlow-based toolkit for training
sequence-to-sequence models that features distributed and mixed-precision
training. Benchmarks on machine translation and speech recognition tasks show
that models built using OpenSeq2Seq achieve state-of-the-art performance with
1.5-3x less training time. OpenSeq2Seq currently provides building blocks for
models that solve a wide range of tasks, including neural machine translation,
automatic speech recognition, and speech synthesis.
| 2018 | Computation and Language |
A Study of Question Effectiveness Using Reddit "Ask Me Anything" Threads | Asking effective questions is a powerful social skill. In this paper we seek
to build computational models that learn to discriminate effective questions
from ineffective ones. Armed with such a capability, future advanced systems
can evaluate the quality of questions and provide suggestions for effective
question wording. We create a large-scale, real-world dataset that contains
over 400,000 questions collected from Reddit "Ask Me Anything" threads. Each
thread resembles an online press conference where questions compete with each
other for attention from the host. This dataset enables the development of a
class of computational models for predicting whether a question will be
answered. We develop a new convolutional neural network architecture with
variable-length context and demonstrate the efficacy of the model by comparing
it with state-of-the-art baselines and human judges.
| 2018 | Computation and Language |
Toward Extractive Summarization of Online Forum Discussions via
Hierarchical Attention Networks | Forum threads are lengthy and rich in content. Concise thread summaries will
benefit both newcomers seeking information and those who participate in the
discussion. Few studies, however, have examined the task of forum thread
summarization. In this work we make the first attempt to adapt the hierarchical
attention networks for thread summarization. The model draws on the recent
development of neural attention mechanisms to build sentence and thread
representations and uses them for summarization. Our results indicate that the
proposed approach can outperform a range of competitive baselines. Further, a
redundancy removal step is crucial for achieving outstanding results.
| 2018 | Computation and Language |
Reinforced Extractive Summarization with Question-Focused Rewards | We investigate a new training paradigm for extractive summarization.
Traditionally, human abstracts are used to derive gold-standard labels for
extraction units. However, the labels are often inaccurate, because human
abstracts and source documents cannot be easily aligned at the word level. In
this paper we convert human abstracts to a set of Cloze-style comprehension
questions. System summaries are encouraged to preserve salient source content
useful for answering questions and share common words with the abstracts. We
use reinforcement learning to explore the space of possible extractive
summaries and introduce a question-focused reward function to promote concise,
fluent, and informative summaries. Our experiments show that the proposed
method is effective. It surpasses state-of-the-art systems on the standard
summarization dataset.
| 2018 | Computation and Language |
Modeling Language Vagueness in Privacy Policies using Deep Neural
Networks | Website privacy policies are too long to read and difficult to understand.
The over-sophisticated language makes privacy notices less effective than
they should be. People become even less willing to share their personal
information when they perceive the privacy policy as vague. This paper focuses
on decoding vagueness from a natural language processing perspective. While
thoroughly identifying the vague terms and their linguistic scope remains an
elusive challenge, in this work we seek to learn vector representations of
words in privacy policies using deep neural networks. The vector
representations are fed to an interactive visualization tool (LSTMVis) to test
their ability to discover syntactically and semantically related vague
terms. The approach holds promise for modeling and understanding language
vagueness.
| 2018 | Computation and Language |
Automatic Summarization of Student Course Feedback | Student course feedback is generated daily in both classrooms and online
course discussion forums. Traditionally, instructors analyze these responses
manually, which is costly. In this work, we propose a new approach to
summarizing student course feedback based on the integer linear programming
(ILP) framework. Our approach allows different student responses to share
co-occurrence statistics and alleviates sparsity issues. Experimental results
on a student feedback corpus show that our approach outperforms a range of
baselines in terms of both ROUGE scores and human evaluation.
| 2018 | Computation and Language |
An Improved Phrase-based Approach to Annotating and Summarizing Student
Course Responses | Teaching large classes remains a great challenge, primarily because it is
difficult to attend to all the student needs in a timely manner. Automatic text
summarization systems can be leveraged to summarize the student feedback,
submitted immediately after each lecture, but it remains to be discovered what
makes a good summary of student responses. In this work we explore a new
methodology that effectively extracts summary phrases from the student
responses. Each phrase is tagged with the number of students who raise the
issue. The phrases are evaluated along two dimensions: with respect to text
content, they should be informative and well-formed, measured by the ROUGE
metric; additionally, they should attend to the most pressing student needs,
measured by a newly proposed metric. This work is enabled by a phrase-based
annotation and highlighting scheme, which is new to the summarization task. The
phrase-based framework allows us to summarize the student responses into a set
of bullet points and present them to the instructor promptly.
| 2018 | Computation and Language |
Toward Abstractive Summarization Using Semantic Representations | We present a novel abstractive summarization framework that draws on the
recent development of a treebank for the Abstract Meaning Representation (AMR).
In this framework, the source text is parsed to a set of AMR graphs, the graphs
are transformed into a summary graph, and then text is generated from the
summary graph. We focus on the graph-to-graph transformation that reduces the
source semantic graph into a summary graph, making use of an existing AMR
parser and assuming the eventual availability of an AMR-to-text generator. The
framework is data-driven, trainable, and not specifically designed for a
particular domain. Experiments on gold-standard AMR annotations and system
parses show promising results. Code is available at:
https://github.com/summarization
| 2018 | Computation and Language |
Connecting Distant Entities with Induction through Conditional Random
Fields for Named Entity Recognition: Precursor-Induced CRF | This paper presents a method for designing a specific high-order dependency
factor on linear-chain conditional random fields (CRFs) for named entity
recognition (NER). Named entities tend to be separated from each other by
multiple outside tokens in a text, and thus the first-order CRF, as well as the
second-order CRF, may innately lose transition information between distant
named entities. The proposed design uses the outside label in NER as a
transmission medium for preceding entity information on the CRF. Empirical
results demonstrate that long-distance label dependencies can be exploited
within the original first-order linear-chain CRF structure for NER, at lower
computational cost than in the second-order CRF.
| 2018 | Computation and Language |
SJTU-NLP at SemEval-2018 Task 9: Neural Hypernym Discovery with Term
Embeddings | This paper describes a hypernym discovery system for our participation in the
SemEval-2018 Task 9, which aims to discover the best (set of) candidate
hypernyms for input concepts or entities, given the search space of a
pre-defined vocabulary. We introduce a neural network architecture for the
concerned task and empirically study various neural network models to build the
representations in latent space for words and phrases. The evaluated models
include convolutional neural networks, long short-term memory networks, gated
recurrent units, and recurrent convolutional neural networks. We also explore
different embedding methods, including word embeddings and sense embeddings,
for better performance.
| 2018 | Computation and Language |
Dependent Gated Reading for Cloze-Style Question Answering | We present a novel deep learning architecture to address the cloze-style
question answering task. Existing approaches employ reading mechanisms that do
not fully exploit the interdependency between the document and the query. In
this paper, we propose a novel \emph{dependent gated reading} bidirectional GRU
network (DGR) to efficiently model the relationship between the document and
the query during encoding and decision making. Our evaluation shows that DGR
obtains highly competitive performance on well-known machine comprehension
benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who Did
What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate
our model by ablation and attention studies.
| 2018 | Computation and Language |
Generating Fine-Grained Open Vocabulary Entity Type Descriptions | While large-scale knowledge graphs provide vast amounts of structured facts
about entities, a short textual description can often be useful to succinctly
characterize an entity and its type. Unfortunately, many knowledge graph
entities lack such textual descriptions. In this paper, we introduce a dynamic
memory-based network that generates a short open vocabulary description of an
entity by jointly leveraging induced fact embeddings as well as the dynamic
context of the generated sequence of words. We demonstrate the ability of our
architecture to discern relevant information for more accurate generation of
type descriptions by pitting the system against several strong baselines.
| 2018 | Computation and Language |
Convolutional neural networks for chemical-disease relation extraction
are improved with character-based word embeddings | We investigate the incorporation of character-based word representations into
a standard CNN-based relation extraction model. We experiment with two common
neural architectures, CNN and LSTM, to learn word vector representations from
character embeddings. Through a task on the BioCreative-V CDR corpus,
extracting relationships between chemicals and diseases, we show that models
exploiting the character-based word representations improve on models that do
not use this information, obtaining state-of-the-art results relative to
previous neural approaches.
| 2018 | Computation and Language |
Reliability and Learnability of Human Bandit Feedback for
Sequence-to-Sequence Reinforcement Learning | We present a study on reinforcement learning (RL) from human bandit feedback
for sequence-to-sequence learning, exemplified by the task of bandit neural
machine translation (NMT). We investigate the reliability of human bandit
feedback, and analyze the influence of reliability on the learnability of a
reward estimator, and the effect of the quality of reward estimates on the
overall RL task. Our analysis of cardinal (5-point ratings) and ordinal
(pairwise preferences) feedback shows that their intra- and inter-annotator
$\alpha$-agreement is comparable. Best reliability is obtained for standardized
cardinal feedback, and cardinal feedback is also easiest to learn and
generalize from. Finally, improvements of over 1 BLEU can be obtained by
integrating a regression-based reward estimator trained on cardinal feedback
for 800 translations into RL for NMT. This shows that RL is possible even from
small amounts of fairly reliable human feedback, pointing to a great potential
for applications at larger scale.
| 2018 | Computation and Language |
Convolutional neural network compression for natural language processing | Convolutional neural networks are modern models that are very effective in
many classification tasks. They were originally created for image processing
purposes and were later applied to other domains, such as natural language
processing. Artificial intelligence systems (like humanoid robots) are very
often based on embedded systems with constraints on memory, power consumption,
etc. A convolutional neural network must therefore be reduced in its memory
footprint to be mapped onto the given hardware. In this paper, we present
results of compressing efficient convolutional neural networks for sentiment
analysis. The main steps are quantization and pruning. We also present the
method for mapping the compressed network onto an FPGA, along with the results
of this implementation. The described simulations show that a 5-bit width is
enough to avoid any drop in accuracy relative to the floating-point version of
the network. Additionally, a significant memory footprint reduction was
achieved (from 85% up to 93%).
| 2018 | Computation and Language |
UG18 at SemEval-2018 Task 1: Generating Additional Training Data for
Predicting Emotion Intensity in Spanish | The present study describes our submission to SemEval 2018 Task 1: Affect in
Tweets. Our Spanish-only approach aimed to demonstrate that it is beneficial to
automatically generate additional training data by (i) translating training
data from other languages and (ii) applying a semi-supervised learning method.
We find strong support for both approaches, with those models outperforming our
regular models in all subtasks. However, creating a stepwise ensemble of
different models as opposed to simply averaging did not result in an increase
in performance. We placed second (EI-Reg), second (EI-Oc), fourth (V-Reg) and
fifth (V-Oc) in the four Spanish subtasks we participated in.
| 2018 | Computation and Language |
Inducing Grammars with and for Neural Machine Translation | Machine translation systems require semantic knowledge and grammatical
understanding. Neural machine translation (NMT) systems often assume this
information is captured by an attention mechanism and a decoder that ensures
fluency. Recent work has shown that incorporating explicit syntax alleviates
the burden of modeling both types of knowledge. However, requiring parses is
expensive and does not explore the question of what syntax a model needs during
translation. To address both of these issues we introduce a model that
simultaneously translates while inducing dependency trees. In this way, we
leverage the benefits of structure while investigating what syntax NMT must
induce to maximize performance. We show that our induced dependency trees (1)
are language-pair dependent and (2) improve translation quality.
| 2018 | Computation and Language |
Temporal Event Knowledge Acquisition via Identifying Narratives | Inspired by the double temporality characteristic of narrative texts, we
propose a novel approach for acquiring rich temporal "before/after" event
knowledge across sentences in narrative stories. The double temporality states
that a narrative story often describes a sequence of events in chronological
order, and therefore the temporal order of events matches their textual order.
We explored narratology principles and built a weakly supervised approach that
identifies 287k narrative paragraphs from three large text corpora. We then
extracted rich temporal event knowledge from these narrative paragraphs. Such
event knowledge is shown to be useful for improving temporal relation
classification and outperforms several recent neural network models on the
narrative cloze task.
| 2018 | Computation and Language |
Denoising Distant Supervision for Relation Extraction via Instance-Level
Adversarial Training | Existing neural relation extraction (NRE) models rely on distant supervision
and suffer from the wrong-labeling problem. In this paper, we propose a novel
adversarial training mechanism over instances for relation extraction to
alleviate the noise issue. Compared with previous denoising methods, our
proposed method can better discriminate informative instances from noisy ones.
Our method is also efficient and flexible enough to be applied to various NRE
architectures. As shown in experiments on a large-scale benchmark dataset for
relation extraction, our denoising method can effectively filter out noisy
instances and achieve significant improvements over state-of-the-art models.
| 2018 | Computation and Language |
GLAC Net: GLocal Attention Cascading Networks for Multi-image Cued Story
Generation | The task of multi-image cued story generation, such as visual storytelling
dataset (VIST) challenge, is to compose multiple coherent sentences from a
given sequence of images. The main difficulty is how to generate image-specific
sentences within the context of the overall image sequence. Here we propose a deep learning
network model, GLAC Net, that generates visual stories by combining
global-local (glocal) attention and context cascading mechanisms. The model
incorporates two levels of attention, i.e., overall encoding level and image
feature level, to construct image-dependent sentences. While standard attention
configuration needs a large number of parameters, the GLAC Net implements them
in a very simple way via hard connections from the outputs of encoders or image
features onto the sentence generators. The coherency of the generated story is
further improved by conveying (cascading) the information of the previous
sentence to the next sentence serially. We evaluate the performance of the GLAC
Net on the visual storytelling dataset (VIST) and achieve very competitive
results compared to the state-of-the-art techniques. Our code and pre-trained
models are available here.
| 2019 | Computation and Language |
Resolving Event Coreference with Supervised Representation Learning and
Clustering-Oriented Regularization | We present an approach to event coreference resolution by developing a
general framework for clustering that uses supervised representation learning.
We propose a neural network architecture with novel Clustering-Oriented
Regularization (CORE) terms in the objective function. These terms encourage
the model to create embeddings of event mentions that are amenable to
clustering. We then use agglomerative clustering on these embeddings to build
event coreference chains. For both within- and cross-document coreference on
the ECB+ corpus, our model obtains better results than models that require
significantly more pre-annotated information. This work provides insight and
motivating results for a new general approach to solving coreference and
clustering problems with representation learning.
| 2018 | Computation and Language |
Soft Layer-Specific Multi-Task Summarization with Entailment and
Question Generation | An accurate abstractive summary of a document should contain all its salient
information and should be logically entailed by the input document. We improve
these important aspects of abstractive summarization via multi-task learning
with the auxiliary tasks of question generation and entailment generation,
where the former teaches the summarization model how to look for salient
questioning-worthy details, and the latter teaches the model how to rewrite a
summary which is a directed-logical subset of the input document. We also
propose novel multi-task architectures with high-level (semantic)
layer-specific sharing across multiple encoder and decoder layers of the three
tasks, as well as soft-sharing mechanisms (and show performance ablations and
analysis examples of each contribution). Overall, we achieve statistically
significant improvements over the state-of-the-art on both the CNN/DailyMail
and Gigaword datasets, as well as on the DUC-2002 transfer setup. We also
present several quantitative and qualitative analysis studies of our model's
learned saliency and entailment skills.
| 2018 | Computation and Language |
Think Visually: Question Answering through Virtual Imagery | In this paper, we study the problem of geometric reasoning in the context of
question-answering. We introduce Dynamic Spatial Memory Network (DSMN), a new
deep network architecture designed for answering questions that admit latent
visual representations. DSMN learns to generate and reason over such
representations. Further, we propose two synthetic benchmarks, FloorPlanQA and
ShapeIntersection, to evaluate the geometric reasoning capability of QA
systems. Experimental results validate the effectiveness of our proposed DSMN
for visual thinking tasks.
| 2018 | Computation and Language |
Fast Abstractive Summarization with Reinforce-Selected Sentence
Rewriting | Inspired by how humans summarize long documents, we propose an accurate and
fast summarization model that first selects salient sentences and then rewrites
them abstractively (i.e., compresses and paraphrases) to generate a concise
overall summary. We use a novel sentence-level policy gradient method to bridge
the non-differentiable computation between these two neural networks in a
hierarchical way, while maintaining language fluency. Empirically, we achieve
the new state-of-the-art on all metrics (including human evaluation) on the
CNN/Daily Mail dataset, as well as significantly higher abstractiveness scores.
Moreover, by first operating at the sentence-level and then the word-level, we
enable parallel decoding of our neural generative model that results in
substantially faster (10-20x) inference speed as well as 4x faster training
convergence than previous long-paragraph encoder-decoder models. We also
demonstrate the generalization of our model on the test-only DUC-2002 dataset,
where we achieve higher scores than a state-of-the-art model.
| 2018 | Computation and Language |
Core Conflictual Relationship: Text Mining to Discover What and When | Following a detailed presentation of the Core Conflictual Relationship Theme
(CCRT), our objective is to find relevant methods for what has been described
as the verbalization and visualization of data; such work is also termed data
mining, text mining, and knowledge discovery in data. The Correspondence
Analysis methodology, also termed Geometric Data Analysis, is shown in a case
study to be comprehensive and revealing. Computational efficiency depends on
how the analysis process is structured. For both the illustrative and
revealing aspects of the case study here, relatively extensive dream reports
are used. This Geometric Data Analysis confirms the validity of the CCRT
method.
| 2018 | Computation and Language |
Refining Source Representations with Relation Networks for Neural
Machine Translation | Although neural machine translation with the encoder-decoder framework has
achieved great success recently, it still suffers from two drawbacks: it
forgets distant information, an inherent disadvantage of the recurrent neural
network structure, and it disregards relationships between source words during
the encoding step, even though in practice such information and relationships
are often useful at the current step. We aim to solve these problems and thus
introduce relation networks to learn better representations of the source. The
relation networks facilitate the memorization capability of the recurrent
neural network by associating source words with each other, which also helps
retain their relationships. The source representations and all the relations
are then fed into the attention component together during decoding, with the
main encoder-decoder framework unchanged. Experiments on several datasets show
that our method can improve translation performance significantly over the
conventional encoder-decoder model and even outperform an approach that uses
supervised syntactic knowledge.
| 2018 | Computation and Language |
A visual approach for age and gender identification on Twitter | The goal of Author Profiling (AP) is to identify demographic aspects (e.g.,
age, gender) from a given set of authors by analyzing their written texts.
Recently, the AP task has gained interest in many problems related to computer
forensics, psychology, and marketing, but especially in those related to
social media. Social media data are shared through a wide range of modalities
(e.g., text, images, and audio), representing information that can be
exploited to extract valuable insights about users. Nevertheless, most current
work in AP using social media data has been devoted to analyzing textual
information only, and very few works have started exploring gender
identification using visual information. In contrast, this paper focuses on
exploiting the visual modality to perform both age and gender identification
in social media, specifically on Twitter. Our goal is to evaluate the
pertinence of using visual information to solve the AP task. Accordingly, we
have extended the Twitter corpus from PAN 2014 by incorporating posted images
from all the users, distinguishing between tweeted and retweeted images. Our
experiments provide interesting evidence of the usefulness of visual
information in comparison with traditional textual representations for the AP
task.
| 2018 | Computation and Language |
Graph-based Filtering of Out-of-Vocabulary Words for Encoder-Decoder
Models | Encoder-decoder models typically only employ words that are frequently used
in the training corpus to reduce the computational costs and exclude noise.
However, this vocabulary set may still include words that interfere with
learning in encoder-decoder models. This paper proposes a method for selecting
more suitable words for learning encoders by utilizing not only frequency, but
also co-occurrence information, which we capture using the HITS algorithm. We
apply our proposed method to two tasks: machine translation and grammatical
error correction. For Japanese-to-English translation, this method achieves a
BLEU score 0.56 points higher than that of a baseline. It also outperforms the
baseline method for English grammatical error correction, with an
F0.5-measure that is 1.48 points higher.
| 2018 | Computation and Language |
Bi-Directional Neural Machine Translation with Synthetic Parallel Data | Despite impressive progress in high-resource settings, Neural Machine
Translation (NMT) still struggles in low-resource and out-of-domain scenarios,
often failing to match the quality of phrase-based translation. We propose a
novel technique that combines back-translation and multilingual NMT to improve
performance in these difficult cases. Our technique trains a single model for
both directions of a language pair, allowing us to back-translate source or
target monolingual data without requiring an auxiliary model. We then continue
training on the augmented parallel data, enabling a cycle of improvement for a
single model that can incorporate any source, target, or parallel data to
improve both translation directions. As a byproduct, these models can reduce
training and deployment costs significantly compared to uni-directional models.
Extensive experiments show that our technique outperforms standard
back-translation in low-resource scenarios, improves quality on cross-domain
tasks, and effectively reduces costs across the board.
| 2018 | Computation and Language |
Distilling Knowledge for Search-based Structured Prediction | Many natural language processing tasks can be modeled as structured
prediction and solved as a search problem. In this paper, we distill an
ensemble of multiple models trained with different initializations into a
single model. In addition to learning to match the ensemble's probability
output on the reference states, we also use the ensemble to explore the search
space and learn from the states encountered during exploration. Experimental
results on two typical search-based structured prediction tasks --
transition-based dependency parsing and neural machine translation -- show
that distillation can effectively improve the single model's performance. The
final model achieves improvements of 1.32 LAS and 2.65 BLEU points on these
two tasks, respectively, over strong baselines, and it outperforms the greedy
structured prediction models in the previous literature.
| 2018 | Computation and Language |
Table-to-Text: Describing Table Region with Natural Language | In this paper, we present a generative model to generate a natural language
sentence describing a table region, e.g., a row. The model maps a row from a
table to a continuous vector and then generates a natural language sentence by
leveraging the semantics of a table. To deal with rare words appearing in a
table, we develop a flexible copying mechanism that selectively replicates
contents from the table in the output sequence. Extensive experiments
demonstrate the accuracy of the model and the power of the copying mechanism.
On two synthetic datasets, WIKIBIO and SIMPLEQUESTIONS, our model improves the
current state-of-the-art BLEU-4 score from 34.70 to 40.26 and from 33.32 to
39.12, respectively. Furthermore, we introduce an open-domain dataset
WIKITABLETEXT including 13,318 explanatory sentences for 4,962 tables. Our
model achieves a BLEU-4 score of 38.23, which outperforms template-based and
language-model-based approaches.
| 2018 | Computation and Language |
Multi-hop Inference for Sentence-level TextGraphs: How Challenging is
Meaningfully Combining Information for Science Question Answering? | Question Answering for complex questions is often modeled as a graph
construction or traversal task, where a solver must build or traverse a graph
of facts that answer and explain a given question. This "multi-hop" inference
has been shown to be extremely challenging, with few models able to aggregate
more than two facts before being overwhelmed by "semantic drift", or the
tendency for long chains of facts to quickly drift off topic. This is a major
barrier to current inference models, as even elementary science questions
require an average of 4 to 6 facts to answer and explain. In this work we
empirically characterize the difficulty of building or traversing a graph of
sentences connected by lexical overlap, by evaluating chance sentence
aggregation quality through 9,784 manually-annotated judgments across knowledge
graphs built from three free-text corpora (including study guides and Simple
Wikipedia). We demonstrate that semantic drift tends to be high and aggregation
quality low, at between 0.04% and 3%, and highlight scenarios that maximize the
likelihood of meaningfully combining information.
| 2018 | Computation and Language |
Unsupervised detection of diachronic word sense evolution | Most words have several senses and connotations which evolve in time due to
semantic shift, so that closely related words may gain different or even
opposite meanings over the years. This evolution is very relevant to the study
of language and of cultural changes, but the tools currently available for
diachronic semantic analysis have significant, inherent limitations and are not
suitable for real-time analysis. In this article, we demonstrate how the
linearity of random vectors techniques enables building time series of
congruent word embeddings (or semantic spaces) which can then be compared and
combined linearly without loss of precision over any time period to detect
diachronic semantic shifts. We show how this approach yields time trajectories
of polysemous words such as amazon or apple, enables following semantic drifts
and gender bias across time, and reveals the shifting instantiations of stable
concepts such as hurricane or president. This very fast, linear approach can
easily be distributed over many processors to follow, in real time, streams of
social media such as Twitter or Facebook; the resulting time-dependent
semantic spaces can then be combined at will by simple additions or
subtractions.
| 2018 | Computation and Language |
Fully Statistical Neural Belief Tracking | This paper proposes an improvement to the existing data-driven Neural Belief
Tracking (NBT) framework for Dialogue State Tracking (DST). The existing NBT
model uses a hand-crafted belief state update mechanism which involves an
expensive manual retuning step whenever the model is deployed to a new dialogue
domain. We show that this update mechanism can be learned jointly with the
semantic decoding and context modelling parts of the NBT model, eliminating the
last rule-based module from this DST framework. We propose two different
statistical update mechanisms and show that dialogue dynamics can be modelled
with a very small number of additional model parameters. In our DST evaluation
over three languages, we show that this model achieves competitive performance
and provides a robust framework for building resource-light DST models.
| 2018 | Computation and Language |
Quantum-inspired Complex Word Embedding | A challenging task for word embeddings is to capture the emergent meaning or
polarity of a combination of individual words. For example, existing approaches
in word embeddings will assign high probabilities to the words "Penguin" and
"Fly" if they frequently co-occur, but they fail to capture the fact that they
occur in an opposite sense - penguins do not fly. We hypothesize that humans do
not associate a single polarity or sentiment to each word. The word contributes
to the overall polarity of a combination of words depending upon which other
words it is combined with. This is analogous to the behavior of microscopic
particles which exist in all possible states at the same time and interfere
with each other to give rise to new states depending upon their relative
phases. We make use of the Hilbert space representation of such particles in
quantum mechanics, where we ascribe a relative phase, a complex number, to each
word, and investigate two such quantum-inspired models to derive the
meaning of a combination of words. The proposed models achieve better
performances than state-of-the-art non-quantum models on the binary sentence
classification task.
| 2018 | Computation and Language |
Semantic Sentence Matching with Densely-connected Recurrent and
Co-attentive Information | Sentence matching is widely used in various natural language tasks such as
natural language inference, paraphrase identification, and question answering.
For these tasks, understanding the logical and semantic relationship between
two sentences is required, yet this remains challenging. Although attention
mechanisms are useful for capturing the semantic relationship and properly
aligning the elements of two sentences, previous attention methods simply use
a summation operation, which does not sufficiently retain the original
features. Inspired by
densely-connected co-attentive recurrent neural network, each layer of which
uses concatenated information of attentive features as well as hidden features
of all the preceding recurrent layers. It enables preserving the original and
the co-attentive feature information from the bottommost word embedding layer
to the uppermost recurrent layer. To alleviate the problem of an
ever-increasing size of feature vectors due to dense concatenation operations,
we also propose to use an autoencoder after dense concatenation. We evaluate
our proposed architecture on highly competitive benchmark datasets related to
sentence matching. Experimental results show that our architecture, which
retains recurrent and attentive features, achieves state-of-the-art
performance on most of the tasks.
| 2018 | Computation and Language |
Syntactic Dependency Representations in Neural Relation Classification | We investigate the use of different syntactic dependency representations in a
neural relation classification task and compare the CoNLL, Stanford Basic and
Universal Dependencies schemes. We further compare with a syntax-agnostic
approach and perform an error analysis in order to gain a better understanding
of the results.
| 2018 | Computation and Language |
OpenNMT: Neural Machine Translation Toolkit | OpenNMT is an open-source toolkit for neural machine translation (NMT). The
system prioritizes efficiency, modularity, and extensibility with the goal of
supporting NMT research into model architectures, feature representations, and
source modalities, while maintaining competitive performance and reasonable
training requirements. The toolkit consists of modeling and translation
support, as well as detailed pedagogical documentation about the underlying
techniques. OpenNMT has been used in several production MT systems, modified
for numerous research papers, and is implemented across several deep learning
frameworks.
| 2018 | Computation and Language |
AMR Dependency Parsing with a Typed Semantic Algebra | We present a semantic parser for Abstract Meaning Representations which
learns to parse strings into tree representations of the compositional
structure of an AMR graph. This allows us to use standard neural techniques for
supertagging and dependency tree parsing, constrained by a linguistically
principled type system. We present two approximative decoding algorithms, which
achieve state-of-the-art accuracy and outperform strong baselines.
| 2018 | Computation and Language |
Entity Linking in 40 Languages using MAG | A plethora of Entity Linking (EL) approaches has recently been developed.
While many claim to be multilingual, the MAG (Multilingual AGDISTIS) approach
has been shown recently to outperform the state of the art in multilingual EL
on 7 languages. With this demo, we extend MAG to support EL in 40 different
languages, including especially low-resource languages such as Ukrainian,
Greek, Hungarian, Croatian, Portuguese, Japanese and Korean. Our demo relies on
online web services which allow easy access to our entity linking
approaches and can disambiguate against DBpedia and Wikidata. During the demo,
we will show how to use MAG by means of POST requests as well as using its
user-friendly web interface. All data used in the demo is available at
https://hobbitdata.informatik.uni-leipzig.de/agdistis/
| 2018 | Computation and Language |
Human vs Automatic Metrics: on the Importance of Correlation Design | This paper discusses two existing approaches to the correlation analysis
between automatic evaluation metrics and human scores in the area of natural
language generation. Our experiments show that depending on the usage of a
system- or sentence-level correlation analysis, correlation results between
automatic scores and human judgments are inconsistent.
| 2021 | Computation and Language |
CoupleNet: Paying Attention to Couples with Coupled Attention for
Relationship Recommendation | Dating and romantic relationships not only play a huge role in our personal
lives but also collectively influence and shape society. Today, many romantic
partnerships originate from the Internet, signifying the importance of
technology and the web in modern dating. In this paper, we present a text-based
computational approach for estimating the relationship compatibility of two
users on social media. Unlike many previous works that propose reciprocal
recommender systems for online dating websites, we devise a distant supervision
heuristic to obtain real world couples from social platforms such as Twitter.
Our approach, CoupleNet, is an end-to-end deep learning based estimator that
analyzes the social profiles of two users and subsequently performs a
similarity match between the users. Intuitively, our approach performs both
user profiling and match-making within a unified end-to-end framework.
CoupleNet utilizes hierarchical recurrent neural models for learning
representations of user profiles and subsequently coupled attention mechanisms
to fuse information aggregated from two users. To the best of our knowledge,
our approach is the first data-driven deep learning approach for our novel
relationship recommendation problem. We benchmark our CoupleNet against several
machine learning and deep learning baselines. Experimental results show that
our approach outperforms all approaches significantly in terms of precision.
Qualitative analysis shows that our model is also capable of producing
explainable results for users.
| 2018 | Computation and Language |
Lightly-supervised Representation Learning with Global Interpretability | We propose a lightly-supervised approach for information extraction, in
particular named entity classification, which combines the benefits of
traditional bootstrapping, i.e., use of limited annotations and
interpretability of extraction patterns, with the robust learning approaches
proposed in representation learning. Our algorithm iteratively learns custom
embeddings for both the multi-word entities to be extracted and the patterns
that match them from a few example entities per category. We demonstrate that
this representation-based approach outperforms three other state-of-the-art
bootstrapping approaches on two datasets: CoNLL-2003 and OntoNotes.
Additionally, using these embeddings, our approach outputs a
globally-interpretable model consisting of a decision list, by ranking patterns
based on their proximity to the average entity embedding in a given class. We
show that this interpretable model performs close to our complete bootstrapping
model, proving that representation learning can be used to produce
interpretable models with small loss in performance.
| 2018 | Computation and Language |
Like a Baby: Visually Situated Neural Language Acquisition | We examine the benefits of visual context in training neural language models
to perform next-word prediction. A multi-modal neural architecture is
introduced that outperforms its equivalent trained on language alone, with a 2\%
decrease in perplexity, even when no visual context is available at test time.
Fine-tuning the embeddings of a pre-trained state-of-the-art bidirectional
language model (BERT) in the language modeling framework yields a 3.5\%
improvement. The advantage for training with visual context when testing
without is robust across different languages (English, German and Spanish) and
different models (GRU, LSTM, $\Delta$-RNN, as well as those that use BERT
embeddings). Thus, language models perform better when they learn like a baby,
i.e., in a multi-modal environment. This finding is compatible with the theory
of situated cognition: language is inseparable from its physical context.
| 2019 | Computation and Language |
Entrainment profiles: Comparison by gender, role, and feature set | We examine prosodic entrainment in cooperative game dialogs for new feature
sets describing register, pitch accent shape, and rhythmic aspects of
utterances. For these as well as for established features we present
entrainment profiles to detect within- and across-dialog entrainment by the
speakers' gender and role in the game. It turns out that feature sets undergo
entrainment in different quantitative and qualitative ways, which can partly be
attributed to their different functions. Furthermore, interactions between
speaker gender and role (describer vs. follower) suggest gender-dependent
strategies in cooperative solution-oriented interactions: female describers
entrain most, male describers least. Our data suggests a slight advantage of
the latter strategy for task success.
| 2018 | Computation and Language |
Polyglot Semantic Role Labeling | Previous approaches to multilingual semantic dependency parsing treat
languages independently, without exploiting the similarities between semantic
structures across languages. We experiment with a new approach where we combine
resources from a pair of languages in the CoNLL 2009 shared task to build a
polyglot semantic role labeler. Notwithstanding the absence of parallel data,
and the dissimilarity in annotations between languages, our approach results in
an improvement in SRL performance on multiple languages over a monolingual
baseline. Analysis of the polyglot model shows it to be advantageous in
lower-resource settings.
| 2018 | Computation and Language |
Automatic Identification of Arabic expressions related to future events
in Lebanon's economy | In this paper, we propose a method to automatically identify future events in
Lebanon's economy from Arabic texts. Challenges are threefold: first, we need
to build a corpus of Arabic texts that covers Lebanon's economy; second, we
need to study how future events are expressed linguistically in these texts;
and third, we need to automatically identify the relevant textual segments
accordingly. We validate this method on a corpus constructed from the web and
show that it yields very promising results. To do so, we use SLCSAS, a system
for semantic analysis based on the Contextual Explorer method, and the
"AlKhalil Morpho Sys" system for morpho-syntactic analysis.
| 2018 | Computation and Language |
Semantically-informed distance and similarity measures for paraphrase
plagiarism identification | Paraphrase plagiarism identification represents a very complex task given
that plagiarized texts are intentionally modified through several rewording
techniques. Accordingly, this paper introduces two new measures for evaluating
the relatedness of two given texts: a semantically-informed similarity measure
and a semantically-informed edit distance. Both measures are able to extract
semantic information from either an external resource or a distributed
representation of words, resulting in informative features for training a
supervised classifier for detecting paraphrase plagiarism. The obtained results
indicate that the proposed measures are consistently good at detecting
different types of paraphrase plagiarism. In addition, the results are very
competitive against state-of-the-art methods, while having the advantage of
representing a much simpler but equally effective solution.
| 2018 | Computation and Language |
Splitting source code identifiers using Bidirectional LSTM Recurrent
Neural Network | Programmers make rich use of natural language in the source code they write
through identifiers and comments. Source code identifiers are selected from a
pool of tokens which are strongly related to the meaning, naming conventions,
and context. These tokens are often combined to produce more precise and
obvious designations. Such multi-part identifiers account for 97% of all naming
tokens in the Public Git Archive - the largest dataset of Git repositories to
date. We introduce a bidirectional LSTM recurrent neural network to detect
subtokens in source code identifiers. We trained that network on 41.7 million
distinct splittable identifiers collected from 182,014 open source projects in
Public Git Archive, and show that it outperforms several other machine learning
models. The proposed network can be used to improve the upstream models which
are based on source code identifiers, as well as to improve the developer
experience by allowing code to be written without switching the keyboard case.
| 2018 | Computation and Language |
LSTMs Exploit Linguistic Attributes of Data | While recurrent neural networks have found success in a variety of natural
language processing applications, they are general models of sequential data.
We investigate how the properties of natural language data affect an LSTM's
ability to learn a nonlinguistic task: recalling elements from its input. We
find that models trained on natural language data are able to recall tokens
from much longer sequences than models trained on non-language sequential data.
Furthermore, we show that the LSTM learns to solve the memorization task by
explicitly using a subset of its neurons to count timesteps in the input. We
hypothesize that the patterns and structure in natural language data enable
LSTMs to learn by providing approximate ways of reducing loss, but
understanding the effect of different training data on the learnability of
LSTMs remains an open question.
| 2019 | Computation and Language |
Unsupervised Text Style Transfer using Language Models as Discriminators | Binary classifiers are often employed as discriminators in GAN-based
unsupervised style transfer systems to ensure that transferred sentences are
similar to sentences in the target domain. One difficulty with this approach is
that the error signal provided by the discriminator can be unstable and is
sometimes insufficient to train the generator to produce fluent language. In
this paper, we propose a new technique that uses a target domain language model
as the discriminator, providing richer and more stable token-level feedback
during the learning process. We train the generator to minimize the negative
log likelihood (NLL) of generated sentences, evaluated by the language model.
By using a continuous approximation of discrete sampling under the generator,
our model can be trained using back-propagation in an end-to-end fashion.
Moreover, our empirical results show that when using a language model as a
structured discriminator, it is possible to forgo adversarial steps during
training, making the process more stable. We compare our model with previous
work using convolutional neural networks (CNNs) as discriminators and show that
our approach leads to improved performance on three tasks: word substitution
decipherment, sentiment modification, and related language translation.
| 2019 | Computation and Language |
Multi-turn Dialogue Response Generation in an Adversarial Learning
Framework | We propose an adversarial learning approach for generating multi-turn
dialogue responses. Our proposed framework, hredGAN, is based on conditional
generative adversarial networks (GANs). The GAN's generator is a modified
hierarchical recurrent encoder-decoder network (HRED) and the discriminator is
a word-level bidirectional RNN that shares context and word embeddings with the
generator. During inference, noise samples conditioned on the dialogue history
are used to perturb the generator's latent space to generate several possible
responses. The final response is the one ranked best by the discriminator. The
hredGAN shows improved performance over existing methods: (1) it generalizes
better than networks trained using only the log-likelihood criterion, and (2)
it generates longer, more informative and more diverse responses with high
utterance and topic relevance even with limited training data. This improvement
is demonstrated on the Movie triples and Ubuntu dialogue datasets using both
automatic and human evaluations.
| 2019 | Computation and Language |
Adversarial Learning of Task-Oriented Neural Dialog Models | In this work, we propose an adversarial learning method for reward estimation
in reinforcement learning (RL) based task-oriented dialog models. Most of the
current RL based task-oriented dialog systems require access to a reward
signal from either user feedback or user ratings. Such user ratings, however,
may not always be consistent or available in practice. Furthermore, online
dialog policy learning with RL typically requires a large number of queries to
users and suffers from poor sample efficiency. To address these challenges,
we propose an adversarial learning method to learn dialog rewards directly from
dialog samples. Such rewards are further used to optimize the dialog policy
with policy gradient based RL. In an evaluation in a restaurant search domain,
we show that the proposed adversarial dialog learning method achieves an
improved dialog success rate compared to strong baseline methods. We further discuss
the covariate shift problem in online adversarial dialog learning and show how
we can address that with partial access to user feedback.
| 2018 | Computation and Language |
Planning, Inference and Pragmatics in Sequential Language Games | We study sequential language games in which two players, each with private
information, communicate to achieve a common goal. In such games, a successful
player must (i) infer the partner's private information from the partner's
messages, (ii) generate messages that are most likely to help with the goal,
and (iii) reason pragmatically about the partner's strategy. We propose a model
that captures all three characteristics and demonstrate their importance in
capturing human behavior on a new goal-oriented dataset we collected using
crowdsourcing.
| 2018 | Computation and Language |
Visual Referring Expression Recognition: What Do Systems Actually Learn? | We present an empirical analysis of the state-of-the-art systems for
referring expression recognition -- the task of identifying the object in an
image referred to by a natural language expression -- with the goal of gaining
insight into how these systems reason about language and vision. Surprisingly,
we find strong evidence that even sophisticated and linguistically-motivated
models for this task may ignore the linguistic structure, instead relying on
shallow correlations introduced by unintended biases in the data selection and
annotation process. For example, we show that a system trained and tested on
the input image $\textit{without the input referring expression}$ can achieve a
precision of 71.2% in top-2 predictions. Furthermore, a system that predicts
only the object category given the input can achieve a precision of 84.2% in
top-2 predictions. These surprisingly positive results for what should be
deficient prediction scenarios suggest that careful analysis of what our models
are learning -- and further, how our data is constructed -- is critical as we
seek to make substantive progress on grounded language tasks.
| 2018 | Computation and Language |
Anaphora and Coreference Resolution: A Review | Entity resolution aims at resolving repeated references to an entity in a
document and forms a core component of natural language processing (NLP)
research. This field possesses immense potential to improve the performance of
other NLP fields like machine translation, sentiment analysis, paraphrase
detection, summarization, etc. The area of entity resolution in NLP has seen a
proliferation of research in two separate sub-areas, namely anaphora resolution
and coreference resolution. Through this review article, we aim to clarify
the scope of these two tasks in entity resolution. We also carry out a detailed
analysis of the datasets, evaluation metrics and research methods that have
been adopted to tackle this NLP problem. This survey is motivated by the aim
of providing the reader with a clear understanding of what constitutes this NLP
problem and the issues that require attention.
| 2018 | Computation and Language |
Using Inter-Sentence Diverse Beam Search to Reduce Redundancy in Visual
Storytelling | Visual storytelling includes two important parts: coherence between the story
and images as well as the story structure. For image-to-text neural network
models, similar images in a sequence provide similar information to the story
generator, often yielding nearly identical sentences. However, repeatedly
narrating the same objects or events undermines the story structure. In this
paper, we propose an inter-sentence diverse beam search to generate more
expressive stories. Compared to recent visual storytelling models, which
generate each sentence without considering the sentence generated for the
previous picture, our method avoids generating identical sentences even when
given a sequence of similar pictures.
| 2,018 | Computation and Language |
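The abstract does not give the diversity mechanism in detail; a minimal sketch of the core idea, assuming a fixed log-probability penalty on tokens that already appeared in sentences generated for earlier images, could look like this:

```python
# Minimal sketch of inter-sentence diversity for story generation.
# Assumption (not from the paper): candidate tokens that already appeared
# in sentences generated for earlier images are down-weighted.

def penalize_repeats(log_probs, used_tokens, penalty=2.0):
    """Subtract a fixed penalty from tokens used in previous sentences.

    log_probs   -- dict mapping token -> log-probability for the next step
    used_tokens -- set of tokens emitted for earlier images in the sequence
    penalty     -- hypothetical strength; the paper does not specify one
    """
    return {tok: lp - (penalty if tok in used_tokens else 0.0)
            for tok, lp in log_probs.items()}

# Toy usage: the second image's decoder is discouraged from re-describing
# what the first image's sentence already mentioned.
step_scores = {"dog": -0.2, "park": -0.5, "ball": -1.0}
previous_sentence_tokens = {"dog", "park"}
print(penalize_repeats(step_scores, previous_sentence_tokens))
# "ball" now outscores the repeated "dog" and "park".
```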
An English-Hindi Code-Mixed Corpus: Stance Annotation and Baseline
System | Social media has become one of the main channels for people to communicate
and share their views with society. We can often detect from these views
whether a person is in favor of, against, or neutral towards a given topic.
These opinions from social media are very useful for various companies. We
present a new dataset consisting of 3545 English-Hindi code-mixed tweets with
opinions towards the demonetisation implemented in India in 2016, which was
followed by a large countrywide debate. We present a baseline supervised
classification system for stance detection, developed on the same dataset,
that uses various machine learning techniques and achieves an accuracy of
58.7% under 10-fold cross-validation.
| 2,018 | Computation and Language |
A Corpus of English-Hindi Code-Mixed Tweets for Sarcasm Detection | Social media platforms like Twitter and Facebook have become two of the
largest media used by people to express their views towards different topics.
The generation of such large amounts of user data has made NLP tasks like
sentiment analysis and opinion mining much more important. Using sarcasm in
texts on social media has lately become a popular trend. Sarcasm reverses the
meaning and polarity of what is implied by the text, which poses a challenge
for many NLP tasks. The task of sarcasm detection in text is gaining more and
more importance for both commercial and security services. We present the
first English-Hindi code-mixed dataset of tweets marked for the presence of
sarcasm and irony, where each token is also annotated with a language tag. We
present a baseline supervised classification system developed on the same
dataset, which achieves an average F-score of 78.4 using a random forest
classifier with 10-fold cross-validation.
| 2,018 | Computation and Language |
Character-Level Models versus Morphology in Semantic Role Labeling | Character-level models have become a popular approach especially for their
accessibility and ability to handle unseen data. However, little is known
about their ability to reveal the underlying morphological structure of a
word, which is a crucial skill for high-level semantic analysis tasks such as
semantic role labeling (SRL). In this work, we train various types of SRL
models that use word-, character- and morphology-level information and analyze
how the performance of character-level models compares to that of word- and
morphology-level models for several
languages. We conduct an in-depth error analysis for each morphological
typology and analyze the strengths and limitations of character-level models
that relate to out-of-domain data, training data size, long range dependencies
and model complexity. Our exhaustive analyses shed light on important
characteristics of character-level models and their semantic capability.
| 2,018 | Computation and Language |
Identifying and Understanding User Reactions to Deceptive and Trusted
Social News Sources | In the age of social news, it is important to understand the types of
reactions that are evoked from news sources with various levels of credibility.
In the present work we seek to better understand how users react to trusted and
deceptive news sources across two popular, and very different, social media
platforms. To that end, (1) we develop a model to classify user reactions into
one of nine types, such as answer, elaboration, and question, and (2) we
measure the speed and the type of reaction for trusted and deceptive news
sources for 10.8M Twitter posts and 6.2M Reddit comments. We show that there
are significant differences in the speed and the type of reactions between
trusted and deceptive news sources on Twitter, but far smaller differences on
Reddit.
| 2,018 | Computation and Language |
End-to-end named entity extraction from speech | Named entity recognition (NER) is among the SLU tasks that usually extract
semantic information from textual documents. Until now, NER from speech has
been performed through a pipeline process that first applies automatic speech
recognition (ASR) to the audio and then applies NER to the ASR outputs. Such
an approach has several disadvantages (error propagation, ASR tuning metrics
that are sub-optimal with regard to the final task, a reduced search space at
the ASR output level, etc.), and it is known that more integrated approaches
outperform sequential ones when they can be applied. In this paper, we present
a first study of an end-to-end approach that directly extracts named entities
from speech through a single neural architecture. In this way, ASR and NER can
be jointly optimized. Experiments are carried out on easily accessible French
data drawn from several evaluation campaigns. Experimental results show that
this end-to-end approach provides better results (F-measure = 0.69 on test
data) than a classical pipeline approach to detecting named entity categories
(F-measure = 0.65).
| 2,018 | Computation and Language |
Bilingual Character Representation for Efficiently Addressing
Out-of-Vocabulary Words in Code-Switching Named Entity Recognition | We propose an LSTM-based model with a hierarchical architecture for named
entity recognition on code-switched Twitter data. Our model uses bilingual
character representation and transfer learning to address out-of-vocabulary
words. In order to mitigate data noise, we propose to use token replacement and
normalization. In the 3rd Workshop on Computational Approaches to Linguistic
Code-Switching Shared Task, we achieved second place with 62.76% harmonic mean
F1-score for English-Spanish language pair without using any gazetteer and
knowledge-based information.
| 2,019 | Computation and Language |
Code-Switching Language Modeling using Syntax-Aware Multi-Task Learning | Lack of text data has been the major issue in code-switching language
modeling. In this paper, we introduce a multi-task learning based language
model that shares the syntactic representation of languages to leverage
linguistic information and tackle the low-resource data issue. Our model
jointly learns language modeling and Part-of-Speech tagging on code-switched
utterances. In this way, the model is able to identify the location of
code-switching points and improve the prediction of the next word. Our
approach outperforms a standard LSTM-based language model, with improvements
of 9.7% and 7.4% in perplexity on the SEAME Phase I and Phase II datasets,
respectively.
| 2,018 | Computation and Language |
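As a rough illustration of the shared-encoder setup this abstract describes, here is a hedged PyTorch sketch with one LSTM feeding both a next-word head and a POS-tagging head; the dimensions and the unweighted loss sum are illustrative assumptions, not the paper's configuration:

```python
# Sketch of syntax-aware multi-task learning: a shared LSTM encoder with
# a language-modeling head and a POS-tagging head, trained jointly.
import torch
import torch.nn as nn

class MultiTaskLM(nn.Module):
    def __init__(self, vocab_size=1000, n_pos_tags=20, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)  # shared layer
        self.lm_head = nn.Linear(dim, vocab_size)   # predicts the next word
        self.pos_head = nn.Linear(dim, n_pos_tags)  # predicts the POS tag

    def forward(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))
        return self.lm_head(hidden), self.pos_head(hidden)

model = MultiTaskLM()
tokens = torch.randint(0, 1000, (4, 12))      # batch of token ids
next_words = torch.randint(0, 1000, (4, 12))  # LM targets (shifted inputs)
pos_tags = torch.randint(0, 20, (4, 12))      # POS targets
lm_logits, pos_logits = model(tokens)
ce = nn.CrossEntropyLoss()
loss = ce(lm_logits.reshape(-1, 1000), next_words.reshape(-1)) \
     + ce(pos_logits.reshape(-1, 20), pos_tags.reshape(-1))
loss.backward()  # both tasks update the shared encoder
```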
Marian: Cost-effective High-Quality Neural Machine Translation in C++ | This paper describes the submissions of the "Marian" team to the WNMT 2018
shared task. We investigate combinations of teacher-student training,
low-precision matrix products, auto-tuning and other methods to optimize the
Transformer model on GPU and CPU. By further integrating these methods with the
new averaging attention networks, a recently introduced faster Transformer
variant, we create a number of high-quality, high-performance models on the GPU
and CPU, dominating the Pareto frontier for this shared task.
| 2,018 | Computation and Language |
Amnestic Forgery: an Ontology of Conceptual Metaphors | This paper presents Amnestic Forgery, an ontology for metaphor semantics,
based on MetaNet, which is inspired by the theory of Conceptual Metaphor.
Amnestic Forgery reuses and extends the Framester schema, as an ideal ontology
design framework to deal with both semiotic and referential aspects of frames,
roles, mappings, and eventually blending. The description of the resource is
supplied by a discussion of its applications, with examples taken from metaphor
generation, and the referential problems of metaphoric mappings. Both schema
and data are available from the Framester SPARQL endpoint.
| 2,021 | Computation and Language |
What the Vec? Towards Probabilistically Grounded Embeddings | Word2Vec (W2V) and GloVe are popular, fast and efficient word embedding
algorithms. Their embeddings are widely used and perform well on a variety of
natural language processing tasks. Moreover, W2V has recently been adopted in
the field of graph embedding, where it underpins several leading algorithms.
However, despite their ubiquity and relatively simple model architecture, a
theoretical understanding of what the embedding parameters of W2V and GloVe
learn and why that is useful in downstream tasks has been lacking. We show that
different interactions between PMI vectors reflect semantic word relationships,
such as similarity and paraphrasing, that are encoded in low dimensional word
embeddings under a suitable projection, theoretically explaining why embeddings
of W2V and GloVe work. As a consequence, we also reveal an interesting
mathematical interconnection between the considered semantic relationships
themselves.
| 2,019 | Computation and Language |
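For background on the PMI vectors mentioned above: skip-gram with negative sampling is known (Levy and Goldberg, 2014) to implicitly factorize a shifted PMI matrix, which is the usual starting point for analyses of this kind, with $k$ the number of negative samples:

```latex
% SGNS factorization (Levy & Goldberg, 2014), the standard entry point
% for PMI-vector analyses of W2V-style embeddings:
\[
  \mathbf{w}^\top \mathbf{c} \;=\; \mathrm{PMI}(w, c) - \log k,
  \qquad
  \mathrm{PMI}(w, c) = \log \frac{P(w, c)}{P(w)\,P(c)} .
\]
```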
A Web-scale system for scientific knowledge exploration | To enable efficient exploration of Web-scale scientific knowledge, it is
necessary to organize scientific publications into a hierarchical concept
structure. In this work, we present a large-scale system to (1) identify
hundreds of thousands of scientific concepts, (2) tag these identified concepts
to hundreds of millions of scientific publications by leveraging both text and
graph structure, and (3) build a six-level concept hierarchy with a
subsumption-based model. The system builds the most comprehensive cross-domain
scientific concept ontology published to date, with more than 200 thousand
concepts and over one million relationships.
| 2,018 | Computation and Language |
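The abstract only names a subsumption-based model; one common formulation, assumed here purely for illustration, treats concept A as a parent of B when publications tagged with B are mostly also tagged with A but not vice versa:

```python
# Hedged sketch of subsumption-based hierarchy building: A is taken as a
# parent of B if P(A|B) is high while P(B|A) is low. The threshold and
# the rule itself are assumptions, not the system's documented model.
from itertools import permutations

def subsumes(docs_a, docs_b, threshold=0.8):
    overlap = len(docs_a & docs_b)
    return (overlap / len(docs_b) >= threshold and
            overlap / len(docs_a) < threshold)

# Toy mapping: concept -> set of publication ids tagged with it.
tagged = {
    "machine learning": {1, 2, 3, 4, 5, 6},
    "neural networks": {2, 3, 4},
    "speech synthesis": {7, 8},
}
for a, b in permutations(tagged, 2):
    if subsumes(tagged[a], tagged[b]):
        print(f"{a!r} subsumes {b!r}")
# -> 'machine learning' subsumes 'neural networks'
```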
On the Impact of Various Types of Noise on Neural Machine Translation | We examine how various types of noise in the parallel training data impact
the quality of neural machine translation systems. We create five types of
artificial noise and analyze how they degrade performance in neural and
statistical machine translation. We find that neural models are generally more
harmed by noise than statistical models. For one especially egregious type of
noise, they learn to simply copy the input sentence.
| 2,020 | Computation and Language |
Empirical Evaluation of Character-Based Model on Neural Named-Entity
Recognition in Indonesian Conversational Texts | Despite the long history of named-entity recognition (NER) task in the
natural language processing community, previous work rarely studied the task on
conversational texts. Such texts are challenging because they contain a lot of
word variations which increase the number of out-of-vocabulary (OOV) words. The
high number of OOV words poses a difficulty for word-based neural models.
Meanwhile, there is plenty of evidence of the effectiveness of character-based
neural models in mitigating this OOV problem. We report an empirical evaluation
of neural sequence labeling models with character embedding to tackle NER task
in Indonesian conversational texts. Our experiments show that (1) character
models outperform word embedding-only models by up to 4 $F_1$ points, (2)
character models perform better in OOV cases with an improvement of as high as
15 $F_1$ points, and (3) character models are robust against a very high OOV
rate.
| 2,018 | Computation and Language |
Attention-Based LSTM for Psychological Stress Detection from Spoken
Language Using Distant Supervision | We propose a Long Short-Term Memory (LSTM) with attention mechanism to
classify psychological stress from self-conducted interview transcriptions. We
apply distant supervision by automatically labeling tweets based on their
hashtag content, which complements and expands the size of our corpus. This
additional data is used to initialize the model parameters, after which the
model is fine-tuned using the interview data. This improves the model's robustness,
especially by expanding the vocabulary size. The bidirectional LSTM model with
attention is found to be the best model in terms of accuracy (74.1%) and
f-score (74.3%). Furthermore, we show that distant supervision fine-tuning
enhances the model's performance by 1.6% accuracy and 2.1% f-score. The
attention mechanism helps the model to select informative words.
| 2,018 | Computation and Language |
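A minimal sketch of the distant-supervision step, with hypothetical hashtag lists (the paper's actual seed hashtags are not given in the abstract):

```python
# Sketch of distant supervision: tweets are auto-labeled from hashtags.
# The hashtag lists below are made-up examples, not the paper's seeds.
STRESS_TAGS = {"#stressed", "#anxiety", "#overwhelmed"}
RELAXED_TAGS = {"#relaxed", "#calm", "#blessed"}

def distant_label(tweet: str):
    tags = {w.lower() for w in tweet.split() if w.startswith("#")}
    if tags & STRESS_TAGS and not tags & RELAXED_TAGS:
        return 1  # stressed
    if tags & RELAXED_TAGS and not tags & STRESS_TAGS:
        return 0  # not stressed
    return None   # ambiguous: excluded from pretraining

corpus = [
    "deadline tomorrow and nothing works #stressed",
    "long walk on the beach #calm",
    "mixed feelings today #stressed #blessed",
]
labeled = [(t, distant_label(t)) for t in corpus]
print([pair for pair in labeled if pair[1] is not None])
```

The resulting weakly labeled tweets pretrain the LSTM, which is then fine-tuned on the interview transcriptions as described above.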
DialogWAE: Multimodal Response Generation with Conditional Wasserstein
Auto-Encoder | Variational autoencoders~(VAEs) have shown promise in data-driven
conversation modeling. However, most VAE conversation models match the
approximate posterior distribution over the latent variables to a simple prior
such as standard normal distribution, thereby restricting the generated
responses to a relatively simple (e.g., unimodal) scope. In this paper, we
propose DialogWAE, a conditional Wasserstein autoencoder~(WAE) specially
designed for dialogue modeling. Unlike VAEs that impose a simple distribution
over the latent variables, DialogWAE models the distribution of data by
training a GAN within the latent variable space. Specifically, our model
samples from the prior and posterior distributions over the latent variables by
transforming context-dependent random noise using neural networks and minimizes
the Wasserstein distance between the two distributions. We further develop a
Gaussian mixture prior network to enrich the latent space. Experiments on two
popular datasets show that DialogWAE outperforms the state-of-the-art
approaches in generating more coherent, informative and diverse responses.
| 2,019 | Computation and Language |
SemEval 2019 Shared Task: Cross-lingual Semantic Parsing with UCCA -
Call for Participation | We announce a shared task on UCCA parsing in English, German and French, and
call for participants to submit their systems. UCCA is a cross-linguistically
applicable framework for semantic representation, which builds on extensive
typological work and supports rapid annotation. UCCA poses a challenge for
existing parsing techniques, as it exhibits reentrancy (resulting in DAG
structures), discontinuous structures and non-terminal nodes corresponding to
complex semantic units. Given the success of recent semantic parsing shared
tasks (on SDP and AMR), we expect the task to have a significant contribution
to the advancement of UCCA parsing in particular, and semantic parsing in
general. Furthermore, existing applications for semantic evaluation that are
based on UCCA will greatly benefit from better automatic methods for UCCA
parsing. The competition website is
https://competitions.codalab.org/competitions/19160
| 2,021 | Computation and Language |
Neural Network Acceptability Judgments | This paper investigates the ability of artificial neural networks to judge
the grammatical acceptability of a sentence, with the goal of testing their
linguistic competence. We introduce the Corpus of Linguistic Acceptability
(CoLA), a set of 10,657 English sentences labeled as grammatical or
ungrammatical from published linguistics literature. As baselines, we train
several recurrent neural network models on acceptability classification, and
find that our models outperform the unsupervised models of Lau et al. (2016)
on CoLA. Error analysis on specific grammatical phenomena reveals that both Lau et
al.'s models and ours learn systematic generalizations like subject-verb-object
order. However, all models we test perform far below human level on a wide
range of grammatical constructions.
| 2,019 | Computation and Language |
Multi-Label Transfer Learning for Multi-Relational Semantic Similarity | Multi-relational semantic similarity datasets define the semantic relations
between two short texts in multiple ways, e.g., similarity, relatedness, and so
on. Yet, all the systems to date designed to capture such relations target one
relation at a time. We propose a multi-label transfer learning approach based
on LSTM to make predictions for several relations simultaneously and aggregate
the losses to update the parameters. This multi-label regression approach
jointly learns the information provided by the multiple relations, rather than
treating them as separate tasks. Not only does this approach outperform the
single-task approach and the traditional multi-task learning approach, but it
also achieves state-of-the-art performance on all but one relation of the Human
Activity Phrase dataset.
| 2,019 | Computation and Language |
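A hedged PyTorch sketch of the multi-label regression idea: one shared sentence encoder, one scalar head per relation, and the per-relation losses aggregated into a single update. Sizes and the plain loss sum are illustrative assumptions:

```python
# Sketch of multi-label regression over semantic relations: a shared
# LSTM encoder with one regression head per relation, losses summed.
import torch
import torch.nn as nn

class MultiRelationScorer(nn.Module):
    def __init__(self, vocab=5000, dim=100, n_relations=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.LSTM(dim, dim, batch_first=True)
        # One scalar head per relation (similarity, relatedness, ...).
        self.heads = nn.ModuleList(nn.Linear(2 * dim, 1)
                                   for _ in range(n_relations))

    def encode(self, ids):
        _, (h, _) = self.encoder(self.embed(ids))
        return h[-1]

    def forward(self, ids_a, ids_b):
        pair = torch.cat([self.encode(ids_a), self.encode(ids_b)], dim=-1)
        return torch.cat([head(pair) for head in self.heads], dim=-1)

model = MultiRelationScorer()
a = torch.randint(0, 5000, (8, 10))
b = torch.randint(0, 5000, (8, 10))
gold = torch.rand(8, 3)  # one gold score per relation
loss = nn.functional.mse_loss(model(a, b), gold,
                              reduction="none").sum(-1).mean()
loss.backward()          # all relations update the shared encoder
```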
Incremental Natural Language Processing: Challenges, Strategies, and
Evaluation | Incrementality is ubiquitous in human-human interaction and beneficial for
human-computer interaction. It has been a topic of research in different parts
of the NLP community, mostly with a focus on the specific topic at hand even
though incremental systems have to deal with similar challenges regardless of
domain. In this survey, I consolidate and categorize the approaches,
identifying similarities and differences in the computation and data, and show
trade-offs that have to be considered. A particular focus lies on evaluating
incremental systems, because standard metrics often fail to capture the
incremental properties of a system, and coming up with a suitable evaluation
scheme is non-trivial.
| 2,018 | Computation and Language |
Text normalization using memory augmented neural networks | We perform text normalization, i.e. the transformation of words from the
written to the spoken form, using a memory augmented neural network. With the
addition of a dynamic memory access and storage mechanism, we present a neural
architecture that will serve as a language-agnostic text normalization system
while avoiding the kind of unacceptable errors made by the LSTM-based recurrent
neural networks. By successfully reducing the frequency of such mistakes, we
show that this novel architecture is indeed a better alternative. Our proposed
system requires significantly less data, training time and compute
resources. Additionally, we perform data up-sampling, circumventing the data
sparsity problem in some semiotic classes, to show that sufficient examples in
any particular class can improve the performance of our text normalization
system. Although a few occurrences of these errors still remain in certain
semiotic classes, we demonstrate that memory augmented networks with
meta-learning capabilities can open many doors to a superior text normalization
system.
| 2,019 | Computation and Language |
Scaling Neural Machine Translation | Sequence to sequence learning models still require several days to reach
state of the art performance on large benchmark datasets using a single
machine. This paper shows that reduced precision and large batch training can
speedup training by nearly 5x on a single 8-GPU machine with careful tuning and
implementation. On WMT'14 English-German translation, we match the accuracy of
Vaswani et al. (2017) in under 5 hours when training on 8 GPUs and we obtain a
new state of the art of 29.3 BLEU after training for 85 minutes on 128 GPUs. We
further improve these results to 29.8 BLEU by training on the much larger
Paracrawl dataset. On the WMT'14 English-French task, we obtain a
state-of-the-art BLEU of 43.2 in 8.5 hours on 128 GPUs.
| 2,018 | Computation and Language |
A Survey of Domain Adaptation for Neural Machine Translation | Neural machine translation (NMT) is a deep learning based approach for
machine translation, which yields the state-of-the-art translation performance
in scenarios where large-scale parallel corpora are available. Although
high-quality, domain-specific translation is crucial in the real world,
domain-specific corpora are usually scarce or nonexistent, and thus vanilla NMT
performs poorly in such scenarios. Domain adaptation that leverages both
out-of-domain parallel corpora as well as monolingual corpora for in-domain
translation, is very important for domain-specific translation. In this paper,
we give a comprehensive survey of the state-of-the-art domain adaptation
techniques for NMT.
| 2,018 | Computation and Language |
Some of Them Can be Guessed! Exploring the Effect of Linguistic Context
in Predicting Quantifiers | We study the role of linguistic context in predicting quantifiers (`few',
`all'). We collect crowdsourced data from human participants and test various
models in a local (single-sentence) and a global context (multi-sentence)
condition. Models significantly outperform humans in the former setting and
are only slightly better in the latter. While human performance improves with
more linguistic context (especially on proportional quantifiers), model
performance suffers. Models are very effective in exploiting lexical and
morpho-syntactic patterns; humans are better at genuinely understanding the
meaning of the (global) context.
| 2,018 | Computation and Language |
Improving Dialogue Act Classification for Spontaneous Arabic Speech and
Instant Messages at Utterance Level | The ability to model and automatically detect dialogue acts is an important
step toward understanding spontaneous speech and instant messages. However, it
has been difficult to infer a dialogue act from a surface utterance, because
it depends strongly on the context of the utterance and on the speaker's
linguistic knowledge, especially in Arabic dialects. This paper proposes a
statistical dialogue analysis model that recognizes utterance-level dialogue
acts using a multi-class hierarchical structure. The model can automatically
acquire probabilistic discourse knowledge from a dialogue corpus that was
collected and annotated manually from multi-genre Egyptian call-centers.
Extensive experiments were conducted using a Support Vector Machine classifier
to evaluate system performance. The model attains an average F-measure score
of 0.912, showing that the proposed approach improves F-measure by
approximately 20%.
| 2,018 | Computation and Language |
Audio Visual Scene-Aware Dialog (AVSD) Challenge at DSTC7 | Scene-aware dialog systems will be able to have conversations with users
about the objects and events around them. Progress on such systems can be made
by integrating state-of-the-art technologies from multiple research areas,
including end-to-end dialog systems, visual dialog, and video description. We
introduce the Audio Visual Scene-Aware Dialog (AVSD) challenge and dataset. In
this challenge, which is one track of the 7th Dialog System Technology
Challenges (DSTC7) workshop, the task is to build a system that generates
responses in a dialog about an input video.
| 2,018 | Computation and Language |
Fast Locality Sensitive Hashing for Beam Search on GPU | We present a GPU-based Locality Sensitive Hashing (LSH) algorithm to speed up
beam search for sequence models. We utilize the winner-take-all (WTA) hash,
which is based on relative ranking order of hidden dimensions and thus
resilient to perturbations in numerical values. Our algorithm is designed by
fully considering the underlying architecture of CUDA-enabled GPUs
(Algorithm/Architecture Co-design): 1) A parallel Cuckoo hash table is applied
for LSH code lookup (guaranteed O(1) lookup time); 2) Candidate lists are
shared across beams to maximize the parallelism; 3) Top frequent words are
merged into candidate lists to improve performance. Experiments on 4
large-scale neural machine translation models demonstrate that our algorithm
can achieve up to a 4x speedup on the softmax module, and a 2x overall speedup without
hurting BLEU on GPU.
| 2,018 | Computation and Language |
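For readers unfamiliar with the WTA hash, a small NumPy sketch of the ranking-based code it computes; the band count and window size k are illustrative choices, not the paper's settings:

```python
# Sketch of the winner-take-all (WTA) hash: each band permutes the
# vector and records the argmax position among the first k entries, so
# the code depends only on relative ranking, not on magnitudes.
import numpy as np

def wta_hash(vec, perms, k=4):
    """Return one small integer code per stored permutation."""
    return [int(np.argmax(vec[p[:k]])) for p in perms]

rng = np.random.default_rng(0)
dim, n_bands = 16, 8
perms = [rng.permutation(dim) for _ in range(n_bands)]

h = rng.normal(size=dim)                    # a decoder hidden state
h_noisy = h + 0.01 * rng.normal(size=dim)   # small numerical perturbation

print(wta_hash(h, perms))
print(wta_hash(h_noisy, perms))  # codes usually match: rank order survives
```

This rank-order stability is what makes the hash "resilient to perturbations in numerical values", as the abstract puts it.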
Does the brain represent words? An evaluation of brain decoding studies
of language understanding | Language decoding studies have identified word representations which can be
used to predict brain activity in response to novel words and sentences
(Anderson et al., 2016; Pereira et al., 2018). The unspoken assumption of these
studies is that, during processing, linguistic information is transformed into
some shared semantic space, and those semantic representations are then used
for a variety of linguistic and non-linguistic tasks. We claim that current
studies vastly underdetermine the content of these representations, the
algorithms which the brain deploys to produce and consume them, and the
computational tasks which they are designed to solve. We illustrate this
indeterminacy with an extension of the sentence-decoding experiment of Pereira
et al. (2018), showing how standard evaluations fail to distinguish between
language processing models which deploy different mechanisms and which are
optimized to solve very different tasks. We conclude by suggesting changes to
the brain decoding paradigm which can support stronger claims of neural
representation.
| 2,018 | Computation and Language |
Multiplex Communities and the Emergence of International Conflict | Advances in community detection reveal new insights into multiplex and
multilayer networks. Less work, however, investigates the relationship between
these communities and outcomes in social systems. We leverage these advances to
shed light on the relationship between the cooperative mesostructure of the
international system and the onset of interstate conflict. We detect
communities based upon weaker signals of affinity expressed in United Nations
votes and speeches, as well as stronger signals observed across multiple layers
of bilateral cooperation. Communities of diplomatic affinity display an
expected negative relationship with conflict onset. Ties in communities based
upon observed cooperation, however, display no effect under a standard model
specification and a positive relationship with conflict under an alternative
specification. These results align with some extant hypotheses but also point
to a paucity in our understanding of the relationship between community
structure and behavioral outcomes in networks.
| 2,019 | Computation and Language |
AP18-OLR Challenge: Three Tasks and Their Baselines | The third oriental language recognition (OLR) challenge AP18-OLR is
introduced in this paper, including the data profile, the tasks and the
evaluation principles. Following the events in the last two years, namely
AP16-OLR and AP17-OLR, the challenge this year focuses on more challenging
tasks, including (1) short-duration utterances, (2) confusing languages, and
(3) open-set recognition. As in the previous events, the data of AP18-OLR
is also provided by SpeechOcean and the NSFC M2ASR project. Baselines based on
both the i-vector model and neural networks are constructed for the
participants' reference. We report the baseline results on the three tasks and
demonstrate that the three tasks are truly challenging. All the data is free
for participants, and the Kaldi recipes for the baselines have been published
online.
| 2,018 | Computation and Language |
Emotion Detection in Text: a Review | In recent years, emotion detection in text has become more popular due to its
vast potential applications in marketing, political science, psychology,
human-computer interaction, artificial intelligence, etc. Access to a huge
amount of textual data, especially opinionated and self-expression text also
played a special role to bring attention to this field. In this paper, we
review the work that has been done in identifying emotion expressions in text
and argue that although many techniques, methodologies, and models have been
created to detect emotion in text, there are various reasons that make these
methods insufficient. Although there is an essential need to improve the
design and architecture of current systems, factors such as the complexity of
human emotions and the use of implicit and metaphorical language in expressing
them lead us to think that merely re-purposing standard methodologies will not
be enough to capture these complexities; it is important to pay attention to
the linguistic intricacies of emotion expression.
| 2,018 | Computation and Language |
Stress Test Evaluation for Natural Language Inference | Natural language inference (NLI) is the task of determining if a natural
language hypothesis can be inferred from a given premise in a justifiable
manner. NLI was proposed as a benchmark task for natural language
understanding. Existing models perform well at standard datasets for NLI,
achieving impressive results across different genres of text. However, the
extent to which these models understand the semantic content of sentences is
unclear. In this work, we propose an evaluation methodology consisting of
automatically constructed "stress tests" that allow us to examine whether
systems have the ability to make real inferential decisions. Our evaluation of
six sentence-encoder models on these stress tests reveals strengths and
weaknesses of these models with respect to challenging linguistic phenomena,
and suggests important directions for future work in this area.
| 2,018 | Computation and Language |
Quantifying the dynamics of topical fluctuations in language | The availability of large diachronic corpora has provided the impetus for a
growing body of quantitative research on language evolution and meaning change.
The central quantities in this research are token frequencies of linguistic
elements in texts, with changes in frequency taken to reflect the popularity or
selective fitness of an element. However, corpus frequencies may change for a
wide variety of reasons, including purely random sampling effects, or because
corpora are composed of contemporary media and fiction texts within which the
underlying topics ebb and flow with cultural and socio-political trends. In
this work, we introduce a simple model for controlling for topical fluctuations
in corpora - the topical-cultural advection model - and demonstrate how it
provides a robust baseline of variability in word frequency changes over time.
We validate the model on a diachronic corpus spanning two centuries, and a
carefully-controlled artificial language change scenario, and then use it to
correct for topical fluctuations in historical time series. Finally, we use the
model to show that the emergence of new words typically corresponds with the
rise of a trending topic. This suggests that some lexical innovations occur due
to growing communicative need in a subspace of the lexicon, and that the
topical-cultural advection model can be used to quantify this.
| 2,020 | Computation and Language |
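A minimal sketch of the advection idea, under the assumption (consistent with the description above) that a word's topical frequency change is the weighted mean of the frequency changes of its top co-occurrence associates; all numbers are toy values:

```python
# Sketch of the topical-cultural advection baseline: the expected
# (topic-driven) frequency change of a word is the weighted average of
# the changes of its strongest associates; the residual is the
# topic-corrected change. Weights and numbers are illustrative only.
def advection(assoc_weights, freq_change):
    """assoc_weights: associate word -> co-occurrence weight (e.g., PPMI)."""
    total = sum(assoc_weights.values())
    return sum(w * freq_change[a] for a, w in assoc_weights.items()) / total

freq_change = {"tweet": 0.30, "hashtag": 0.25, "viral": 0.20}  # log change
associates = {"tweet": 3.0, "hashtag": 2.0, "viral": 1.0}  # for "retweet"

topical = advection(associates, freq_change)
observed = 0.28
print(f"advection={topical:.3f}, residual={observed - topical:.3f}")
# A small residual suggests the word rides a trending topic rather than
# being independently selected for.
```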
Dense Information Flow for Neural Machine Translation | Recently, neural machine translation has achieved remarkable progress by
introducing well-designed deep neural networks into its encoder-decoder
framework. From the optimization perspective, residual connections are adopted
to improve learning performance for both encoder and decoder in most of these
deep architectures, and advanced attention connections are applied as well.
Inspired by the success of the DenseNet model in computer vision problems, in
this paper, we propose a densely connected NMT architecture (DenseNMT) that is
able to train more efficiently for NMT. The proposed DenseNMT not only allows
dense connection in creating new features for both encoder and decoder, but
also uses the dense attention structure to improve attention quality. Our
experiments on multiple datasets show that the DenseNMT structure is more
competitive and efficient.
| 2,018 | Computation and Language |
Contextualize, Show and Tell: A Neural Visual Storyteller | We present a neural model for generating short stories from image sequences,
which extends the image description model by Vinyals et al. (Vinyals et al.,
2015). This extension relies on an encoder LSTM to compute a context vector of
each story from the image sequence. This context vector is used as the first
state of multiple independent decoder LSTMs, each of which generates the
portion of the story corresponding to each image in the sequence by taking the
image embedding as the first input. Our model showed competitive results with
the METEOR metric and human ratings in the internal track of the Visual
Storytelling Challenge 2018.
| 2,018 | Computation and Language |
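A hedged PyTorch sketch of the described wiring: the encoder LSTM's final state seeds each decoder, and each decoder takes its image embedding as the first input. For brevity a single decoder is reused across images, whereas the paper uses multiple independent decoders; all sizes are illustrative:

```python
# Sketch of Contextualize-Show-and-Tell wiring: encoder over image
# embeddings -> context vector -> initial state of per-image decoders.
import torch
import torch.nn as nn

dim, vocab, n_images = 64, 1000, 5
encoder = nn.LSTM(dim, dim, batch_first=True)
decoder = nn.LSTM(dim, dim, batch_first=True)  # reused here for brevity
to_vocab = nn.Linear(dim, vocab)

images = torch.randn(1, n_images, dim)  # sequence of image embeddings
_, context = encoder(images)            # story-level context (h, c)

story_logits = []
for i in range(n_images):
    first_input = images[:, i:i + 1, :]      # image embedding as 1st input
    out, _ = decoder(first_input, context)   # context as initial state
    story_logits.append(to_vocab(out))       # first-word logits per image
print(torch.cat(story_logits, dim=1).shape)  # (1, n_images, vocab)
```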
TI-CNN: Convolutional Neural Networks for Fake News Detection | With the development of social networks, fake news for various commercial and
political purposes has appeared in large numbers and spread widely in the
online world. With deceptive words, people can be misled by fake news very
easily and will share it without any fact-checking. For instance, during the
2016 US presidential election, various kinds of fake news about the candidates
spread widely through both official news media and online social networks.
Such fake news is usually released either to smear the opponents or to support
the candidate on the releaser's side. The erroneous information in fake news
is usually written to stir up the voters' irrational emotion and enthusiasm.
Fake news of this kind can sometimes have devastating effects, and an
important goal in improving the credibility of online social networks is to
identify fake news in a timely manner. In this paper, we propose to study the
fake news detection problem. Automatic fake news identification is extremely
hard, since purely model-based fact-checking for news is still an open
problem, and few existing models can be applied to solve it. Through a
thorough investigation of a fake news dataset, many useful explicit features
are identified from both the text and the images used in fake news. Besides
the explicit features, there also exist hidden patterns in the words and
images used in fake news, which can be captured by a set of latent features
extracted via the multiple convolutional layers in our model. We propose a
model named TI-CNN (Text and Image information based Convolutional Neural
Network). By projecting the explicit and latent features into a unified
feature space, TI-CNN is trained on the text and image information
simultaneously. Extensive experiments on real-world fake news datasets
demonstrate the effectiveness of TI-CNN.
| 2,023 | Computation and Language |
Psychological State in Text: A Limitation of Sentiment Analysis | Starting with the idea that sentiment analysis models should be able to
predict not only positive or negative but also other psychological states of a
person, we implement a sentiment analysis model to investigate the relationship
between the model and emotional state. We first examine psychological
measurements of 64 participants and ask them to write a book report about a
story. After that, we train our sentiment analysis model using crawled movie
review data. We finally evaluate participants' writings, using the pretrained
model as a form of transfer learning. The results show that the sentiment
analysis model performs well at predicting a score, but the score shows no
correlation with the participants' self-reported sentiment.
| 2,018 | Computation and Language |
Multi-Cast Attention Networks for Retrieval-based Question Answering and
Response Prediction | Attention is typically used to select informative sub-phrases that are used
for prediction. This paper investigates the novel use of attention as a form of
feature augmentation, i.e., casted attention. We propose Multi-Cast Attention
Networks (MCAN), a new attention mechanism and general model architecture for a
potpourri of ranking tasks in the conversational modeling and question
answering domains. Our approach performs a series of soft attention operations,
each time casting a scalar feature upon the inner word embeddings. The key idea
is to provide a real-valued hint (feature) to a subsequent encoder layer and is
targeted at improving the representation learning process. There are several
advantages to this design, e.g., it allows an arbitrary number of attention
mechanisms to be casted, allowing for multiple attention types (e.g.,
co-attention, intra-attention) and attention variants (e.g., alignment-pooling,
max-pooling, mean-pooling) to be executed simultaneously. This not only
eliminates the costly need to tune the nature of the co-attention layer, but
also provides greater extents of explainability to practitioners. Via extensive
experiments on four well-known benchmark datasets, we show that MCAN achieves
state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms
existing state-of-the-art models by $9\%$. MCAN also achieves the best
performing score to date on the well-studied TrecQA dataset.
| 2,018 | Computation and Language |
Building Advanced Dialogue Managers for Goal-Oriented Dialogue Systems | Goal-Oriented (GO) Dialogue Systems, colloquially known as goal-oriented
chatbots, help users achieve a predefined goal (e.g. book a movie ticket)
within a closed domain. A first step is to understand the user's goal by using
natural language understanding techniques. Once the goal is known, the bot must
manage a dialogue to achieve that goal, which is conducted with respect to a
learnt policy. The success of the dialogue system depends on the quality of the
policy, which is in turn reliant on the availability of high-quality training
data for the policy learning method, for instance Deep Reinforcement Learning.
Due to the domain specificity, the amount of available data is typically too
low to allow the training of good dialogue policies. In this master thesis we
introduce a transfer learning method to mitigate the effects of the low
in-domain data availability. Our transfer learning based approach improves the
bot's success rate by $20\%$ in relative terms for distant domains and we more
than double it for close domains, compared to the model without transfer
learning. Moreover, the transfer learning chatbots learn the policy up to 5 to
10 times faster. Finally, as the transfer learning approach is complementary to
additional processing such as warm-starting, we show that their joint
application gives the best outcomes.
| 2,018 | Computation and Language |
Transfer Topic Labeling with Domain-Specific Knowledge Base: An Analysis
of UK House of Commons Speeches 1935-2014 | Topic models are widely used in natural language processing, allowing
researchers to estimate the underlying themes in a collection of documents.
Most topic models use unsupervised methods and hence require the additional
step of attaching meaningful labels to estimated topics. This process of manual
labeling is not scalable and suffers from human bias. We present a
semi-automatic transfer topic labeling method that seeks to remedy these
problems. Domain-specific codebooks form the knowledge-base for automated topic
labeling. We demonstrate our approach with a dynamic topic model analysis of
the complete corpus of UK House of Commons speeches 1935-2014, using the coding
instructions of the Comparative Agendas Project to label topics. We show that
our method works well for a majority of the topics we estimate; but we also
find that institution-specific topics, in particular on subnational governance,
require manual input. We validate our results using human expert coding.
| 2,018 | Computation and Language |
Learning Semantic Sentence Embeddings using Sequential Pair-wise
Discriminator | In this paper, we propose a method for obtaining sentence-level embeddings.
While the problem of obtaining word-level embeddings is very well studied,
sentence-level embeddings are far less explored; we obtain them with a simple
method in the context of solving the paraphrase generation task. If we use a
sequential encoder-decoder model for generating paraphrases,
we would like the generated paraphrase to be semantically close to the original
sentence. One way to ensure this is by adding constraints for true paraphrase
embeddings to be close and unrelated paraphrase candidate sentence embeddings
to be far. This is ensured by using a sequential pair-wise discriminator that
shares weights with the encoder that is trained with a suitable loss function.
Our loss function penalizes paraphrase sentence embedding distances from being
too large. This loss is used in combination with a sequential encoder-decoder
network. We also validated our method by evaluating the obtained embeddings for
a sentiment analysis task. The proposed method results in semantic embeddings
and outperforms the state-of-the-art on the paraphrase generation and sentiment
analysis task on standard datasets. These results are also shown to be
statistically significant.
| 2,019 | Computation and Language |
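The exact loss is not given in the abstract; a plausible sketch of the pair-wise discriminative constraint, assuming a hinge on the distance gap between true-paraphrase and unrelated pairs:

```python
# Hedged sketch of the pair-wise constraint: true paraphrase embeddings
# are pulled together and unrelated candidates pushed apart via a margin.
# The hinge form and margin value are assumptions; the paper's
# discriminator shares weights with the encoder, mimicked here by using
# one encoder's outputs for all three inputs.
import torch
import torch.nn.functional as F

def pairwise_discriminator_loss(enc_src, enc_para, enc_neg, margin=1.0):
    d_pos = F.pairwise_distance(enc_src, enc_para)  # true paraphrase pair
    d_neg = F.pairwise_distance(enc_src, enc_neg)   # unrelated candidate
    return F.relu(d_pos - d_neg + margin).mean()    # hinge on the gap

src, para, neg = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
print(pairwise_discriminator_loss(src, para, neg))
# In training, this term is added to the encoder-decoder generation loss.
```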
Latent Tree Learning with Differentiable Parsers: Shift-Reduce Parsing
and Chart Parsing | Latent tree learning models represent sentences by composing their words
according to an induced parse tree, all based on a downstream task. These
models often outperform baselines which use (externally provided) syntax trees
to drive the composition order. This work contributes (a) a new latent tree
learning model based on shift-reduce parsing, with competitive downstream
performance and non-trivial induced trees, and (b) an analysis of the trees
learned by our shift-reduce model and by a chart-based model.
| 2,018 | Computation and Language |
An unsupervised and customizable misspelling generator for mining noisy
health-related text sources | In this paper, we present a customizable datacentric system that
automatically generates common misspellings for complex health-related terms.
The spelling variant generator relies on a dense vector model learned from
large unlabeled text, which is used to find semantically close terms to the
original/seed keyword, followed by the filtering of terms that are lexically
dissimilar beyond a given threshold. The process is executed recursively,
converging when no new terms similar (lexically and semantically) to the seed
keyword are found. Weighting of intra-word character sequence similarities
allows further problem-specific customization of the system. On a dataset
prepared for this study, our system outperforms the current state-of-the-art
for medication name variant generation with best F1-score of 0.69 and
F1/4-score of 0.78. Extrinsic evaluation of the system on a set of
cancer-related terms showed an increase of over 67% in retrieval rate from
Twitter posts when the generated variants are included. Our proposed spelling
variant generator has several advantages over the current state-of-the-art and
other types of variant generators: (i) it is capable of filtering out lexically
similar but semantically dissimilar terms, (ii) the number of variants
generated is low, as many low-frequency and ambiguous misspellings are filtered
out, and (iii) the system is fully automatic, customizable and easily
executable. While the base system is fully unsupervised, we show how
supervision may be employed to adjust weights for task-specific customization.
The performance and significant relative simplicity of our proposed approach
makes it a much needed misspelling generation resource for health-related text
mining from noisy sources. The source code for the system has been made
publicly available for research purposes.
| 2,018 | Computation and Language |
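A toy sketch of the recursive generate-and-filter loop described above, with made-up vectors and a 0.75 lexical cutoff standing in for the learned dense vector model and the tuned threshold:

```python
# Sketch of recursive misspelling-variant generation: take the semantic
# neighbors of the seed, keep only those that are also lexically close,
# and repeat until no new terms appear. Vectors and the cutoff are toys.
from difflib import SequenceMatcher
import numpy as np

vectors = {                    # stand-in for a model learned on large text
    "metformin":  np.array([0.90, 0.10, 0.00]),
    "metphormin": np.array([0.88, 0.12, 0.02]),
    "metforman":  np.array([0.85, 0.15, 0.05]),
    "insulin":    np.array([0.10, 0.90, 0.30]),
}

def semantic_neighbors(term, topn=3):
    v = vectors[term]
    sims = {w: float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))
            for w, u in vectors.items() if w != term}
    return sorted(sims, key=sims.get, reverse=True)[:topn]

def generate_variants(seed, lex_threshold=0.75):
    found, frontier = {seed}, [seed]
    while frontier:              # converges when nothing new is added
        term = frontier.pop()
        for cand in semantic_neighbors(term):
            lexical = SequenceMatcher(None, seed, cand).ratio()
            if lexical >= lex_threshold and cand not in found:
                found.add(cand)
                frontier.append(cand)  # recurse from the new variant
    return found - {seed}

print(generate_variants("metformin"))  # {'metphormin', 'metforman'}
```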