Titles | Abstracts | Years | Categories
---|---|---|---
Modeling Naive Psychology of Characters in Simple Commonsense Stories | Understanding a narrative requires reading between the lines and reasoning
about the unspoken but obvious implications about events and people's mental
states - a capability that is trivial for humans but remarkably hard for
machines. To facilitate research addressing this challenge, we introduce a new
annotation framework to explain the naive psychology of story characters as
fully-specified chains of mental states with respect to motivations and
emotional reactions. Our work presents a new large-scale dataset with rich
low-level annotations and establishes baseline performance on several new
tasks, suggesting avenues for future research.
| 2018 | Computation and Language |
Are BLEU and Meaning Representation in Opposition? | One possible way of obtaining continuous-space sentence representations
is to train neural machine translation (NMT) systems. However, the recent
attention mechanism removes the single point in the neural network from which the
source sentence representation can be extracted. We propose several variations
of the attentive NMT architecture bringing this meeting point back. Empirical
evaluation suggests that the better the translation quality, the worse the
learned sentence representations serve in a wide range of classification and
similarity tasks.
| 2018 | Computation and Language |
A Deep Ensemble Model with Slot Alignment for Sequence-to-Sequence
Natural Language Generation | Natural language generation lies at the core of generative dialogue systems
and conversational agents. We describe an ensemble neural language generator,
and present several novel methods for data representation and augmentation that
yield improved results in our model. We test the model on three datasets in the
restaurant, TV and laptop domains, and report both objective and subjective
evaluations of our best model. Using a range of automatic metrics, as well as
human evaluators, we show that our approach achieves better results than
state-of-the-art models on the same datasets.
| 2018 | Computation and Language |
Extending a Parser to Distant Domains Using a Few Dozen Partially
Annotated Examples | We revisit domain adaptation for parsers in the neural era. First we show
that recent advances in word representations greatly diminish the need for
domain adaptation when the target domain is syntactically similar to the source
domain. As evidence, we train a parser on the Wall Street Journal alone that
achieves over 90% F1 on the Brown corpus. For more syntactically distant
domains, we provide a simple way to adapt a parser using only dozens of partial
annotations. For instance, we increase the percentage of error-free
geometry-domain parses in a held-out set from 45% to 73% using approximately
five dozen training examples. In the process, we demonstrate a new
state-of-the-art single model result on the Wall Street Journal test set of
94.3%. This is an absolute increase of 1.7% over the previous state-of-the-art
of 92.6%.
| 2018 | Computation and Language |
Content-based Popularity Prediction of Online Petitions Using a Deep
Regression Model | Online petitions are a cost-effective way for citizens to collectively engage
with policy-makers in a democracy. Predicting the popularity of a petition,
commonly measured by its signature count, based on its textual content has
utility for policy-makers as well as those posting the petition. In this work,
we model this task using CNN regression with an auxiliary ordinal regression
objective. We demonstrate the effectiveness of our proposed approach using UK
and US government petition datasets.
| 2018 | Computation and Language |
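As an illustration of pairing a regression head with an auxiliary ordinal objective, here is a minimal sketch assuming signature counts are binned and encoded as cumulative-threshold targets; the layer sizes, bin scheme, and loss weighting are assumptions for illustration, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PetitionCNN(nn.Module):
    """CNN regressor with an auxiliary ordinal (binned-popularity) head."""
    def __init__(self, vocab_size, emb_dim=100, n_filters=128, n_bins=5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.regress = nn.Linear(n_filters, 1)           # e.g. log signature count
        self.ordinal = nn.Linear(n_filters, n_bins - 1)  # cumulative thresholds

    def forward(self, token_ids):
        x = self.emb(token_ids).transpose(1, 2)          # (B, emb_dim, T)
        h = F.relu(self.conv(x)).max(dim=2).values       # max-over-time pooling
        return self.regress(h).squeeze(-1), self.ordinal(h)

def joint_loss(pred, ord_logits, target, bin_ids, n_bins=5, alpha=0.5):
    # Ordinal target: bit k is 1 iff the true popularity bin exceeds threshold k.
    levels = (bin_ids.unsqueeze(1) >
              torch.arange(n_bins - 1, device=bin_ids.device)).float()
    return (F.mse_loss(pred, target) +
            alpha * F.binary_cross_entropy_with_logits(ord_logits, levels))
```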
Cross-Target Stance Classification with Self-Attention Networks | In stance classification, the target on which the stance is made defines the
boundary of the task, and a classifier is usually trained for prediction on the
same target. In this work, we explore the potential for generalizing
classifiers between different targets, and propose a neural model that can
apply what has been learned from a source target to a destination target. We
show that our model can find useful information shared between relevant targets
which improves generalization in certain scenarios.
| 2018 | Computation and Language |
Convolutional Attention Networks for Multimodal Emotion Recognition from
Speech and Text Data | Emotion recognition has become a popular topic of interest, especially in the
field of human-computer interaction. Previous works involve unimodal analysis
of emotion, while recent efforts focus on multi-modal emotion recognition from
vision and speech. In this paper, we propose a new method for learning hidden
representations from speech and text data alone, using convolutional
attention networks. Compared to the shallow model which employs simple
concatenation of feature vectors, the proposed attention model performs much
better in classifying emotion from speech and text data contained in the
CMU-MOSEI dataset.
| 2019 | Computation and Language |
Extrapolation in NLP | We argue that extrapolation to examples outside the training space will often
be easier for models that capture global structures, rather than just maximise
their local fit to the training data. We show that this is true for two popular
models: the Decomposable Attention Model and word2vec.
| 2018 | Computation and Language |
Classifying medical relations in clinical text via convolutional neural
networks | Deep learning research on relation classification has achieved solid
performance in the general domain. This study proposes a convolutional neural
network (CNN) architecture with a multi-pooling operation for medical relation
classification on clinical records and explores a loss function with a
category-level constraint matrix. Experiments using the 2010 i2b2/VA relation
corpus demonstrate that these models, which do not depend on any external features,
outperform previous single-model methods, and that our best model is competitive with
the existing ensemble-based method.
| 2018 | Computation and Language |
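A minimal sketch of the multi-pooling idea, assuming a piecewise max-pooling scheme keyed to the two entity positions (the segment boundaries and dimensions are illustrative assumptions, not the authors' exact design):

```python
import torch

def multi_pool(conv_out, e1_pos, e2_pos):
    """conv_out: (n_filters, T); e1_pos < e2_pos are entity token indices.
    Max-pool separately before, between, and after the two entities."""
    segments = [conv_out[:, :e1_pos + 1],
                conv_out[:, e1_pos + 1:e2_pos + 1],
                conv_out[:, e2_pos + 1:]]
    pooled = [s.max(dim=1).values if s.size(1) > 0
              else torch.zeros(conv_out.size(0))
              for s in segments]
    return torch.cat(pooled)  # (3 * n_filters,), fed to the relation classifier

conv_out = torch.randn(128, 20)            # 128 filters over a 20-token sentence
print(multi_pool(conv_out, 3, 11).shape)   # torch.Size([384])
```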
Annotating Electronic Medical Records for Question Answering | Our research is in the relatively unexplored area of question answering
technologies for patient-specific questions over their electronic health
records. A large dataset of expert-curated question and answer pairs is
an important prerequisite for developing, training and evaluating any question
answering system that is powered by machine learning. In this paper, we
describe a process for creating such a dataset of questions and answers. Our
methodology is replicable, can be conducted by medical students as annotators,
and results in high inter-annotator agreement (0.71 Cohen's kappa). Over the
course of 11 months, 11 medical students followed our annotation methodology,
resulting in a question answering dataset of 5696 questions over 71 patient
records, of which 1747 questions have corresponding answers generated by the
medical students.
| 2018 | Computation and Language |
Neural language representations predict outcomes of scientific research | Many research fields codify their findings in standard formats, often by
reporting correlations between quantities of interest. But the space of all
testable correlates is far larger than scientific resources can currently
address, so the ability to accurately predict correlations would be useful to
plan research and allocate resources. Using a dataset of approximately 170,000
correlational findings extracted from leading social science journals, we show
that a trained neural network can accurately predict the reported correlations
using only the text descriptions of the correlates. Accurate predictive models
such as these can guide scientists towards promising untested correlates,
better quantify the information gained from new findings, and have implications
for moving artificial intelligence systems from predicting structures to
predicting relationships in the real world.
| 2018 | Computation and Language |
Event2Mind: Commonsense Inference on Events, Intents, and Reactions | We investigate a new commonsense inference task: given an event described in
a short free-form text ("X drinks coffee in the morning"), a system reasons
about the likely intents ("X wants to stay awake") and reactions ("X feels
alert") of the event's participants. To support this study, we construct a new
crowdsourced corpus of 25,000 event phrases covering a diverse range of
everyday events and situations. We report baseline performance on this task,
demonstrating that neural encoder-decoder models can successfully compose
embedding representations of previously unseen events and reason about the
likely intents and reactions of the event participants. In addition, we
demonstrate how commonsense inference on people's intents and reactions can
help unveil the implicit gender inequality prevalent in modern movie scripts.
| 2019 | Computation and Language |
Ask No More: Deciding when to guess in referential visual dialogue | Our goal is to explore how the abilities brought in by a dialogue manager can
be included in end-to-end visually grounded conversational agents. We make
initial steps towards this general goal by augmenting a task-oriented visual
dialogue model with a decision-making component that decides whether to ask a
follow-up question to identify a target referent in an image, or to stop the
conversation to make a guess. Our analyses show that adding a decision-making
component produces dialogues that are less repetitive and that include fewer
unnecessary questions, thus potentially leading to more efficient and less
unnatural interactions.
| 2018 | Computation and Language |
Neural User Simulation for Corpus-based Policy Optimisation for Spoken
Dialogue Systems | User Simulators are one of the major tools that enable offline training of
task-oriented dialogue systems. For this task the Agenda-Based User Simulator
(ABUS) is often used. The ABUS is based on hand-crafted rules and its output is
in semantic form. Issues arise from both properties, such as limited diversity
and the inability to interface with a text-level belief tracker. This paper
introduces the Neural User Simulator (NUS), whose behaviour is learned from a
corpus and which generates natural language, hence requiring less labelled
data than simulators generating semantic output. In comparison to much of
the past work on this topic, which evaluates user simulators on corpus-based
metrics, we use the NUS to train the policy of a reinforcement learning based
Spoken Dialogue System. The NUS is compared to the ABUS by evaluating the
policies that were trained using the simulators. Cross-model evaluation is
performed, i.e., training on one simulator and testing on the other. Furthermore,
the trained policies are tested on real users. In both evaluation tasks the NUS
outperformed the ABUS.
| 2018 | Computation and Language |
Tracking State Changes in Procedural Text: A Challenge Dataset and
Models for Process Paragraph Comprehension | We present a new dataset and models for comprehending paragraphs about
processes (e.g., photosynthesis), an important genre of text describing a
dynamic world. The new dataset, ProPara, is the first to contain natural
(rather than machine-generated) text about a changing world along with a full
annotation of entity states (location and existence) during those changes (81k
datapoints). The end-task, tracking the location and existence of entities
through the text, is challenging because the causal effects of actions are
often implicit and need to be inferred. We find that previous models that have
worked well on synthetic data achieve only mediocre performance on ProPara, and
introduce two new neural models that exploit alternative mechanisms for state
prediction, in particular using LSTM input encoding and span prediction. The
new models improve accuracy by up to 19%. The dataset and models are available
to the community at http://data.allenai.org/propara.
| 2018 | Computation and Language |
Linear-Time Constituency Parsing with RNNs and Dynamic Programming | Recently, span-based constituency parsing has achieved competitive accuracies
with extremely simple models by using bidirectional RNNs to model "spans".
However, the minimal span parser of Stern et al. (2017a), which holds the current
state-of-the-art accuracy, is a chart parser running in cubic time, $O(n^3)$,
which is too slow for longer sentences and for applications beyond sentence
boundaries such as end-to-end discourse parsing and joint sentence boundary
detection and parsing. We propose a linear-time constituency parser with RNNs
and dynamic programming using graph-structured stack and beam search, which
runs in time $O(n b^2)$ where $b$ is the beam size. We further speed this up to
$O(n b\log b)$ by integrating cube pruning. Compared with chart parsing
baselines, this linear-time parser is substantially faster for long sentences
on the Penn Treebank and orders of magnitude faster for discourse parsing, and
achieves the highest F1 accuracy on the Penn Treebank among single model
end-to-end systems.
| 2018 | Computation and Language |
Gated Recurrent Unit Based Acoustic Modeling with Future Context | The use of future contextual information is typically shown to be helpful for
acoustic modeling. However, for recurrent neural networks (RNNs), it is not
easy to model future temporal context effectively while keeping model latency
low. In this paper, we attempt to design an RNN acoustic model that is capable
of utilizing future context effectively and directly, with
the model latency and computation cost as low as possible. The proposed model
is based on the minimal gated recurrent unit (mGRU) with an input projection
layer inserted in it. Two context modules, temporal encoding and temporal
convolution, are specifically designed for this architecture to model the
future context. Experimental results on the Switchboard task and an internal
Mandarin ASR task show that the proposed model performs much better than long
short-term memory (LSTM) and mGRU models while enabling online decoding with
a maximum latency of 170 ms. This model even outperforms a very strong
baseline, TDNN-LSTM, with smaller model latency and almost half as many
parameters.
| 2018 | Computation and Language |
Aspect Based Sentiment Analysis with Gated Convolutional Networks | Aspect based sentiment analysis (ABSA) can provide more detailed information
than general sentiment analysis, because it aims to predict the sentiment
polarities of the given aspects or entities in text. We summarize previous
approaches into two subtasks: aspect-category sentiment analysis (ACSA) and
aspect-term sentiment analysis (ATSA). Most previous approaches employ long
short-term memory and attention mechanisms to predict the sentiment polarity of
the concerned targets, which are often complicated and need more training time.
We propose a model based on convolutional neural networks and gating
mechanisms, which is more accurate and efficient. First, the novel Gated
Tanh-ReLU Units can selectively output the sentiment features according to the
given aspect or entity. This architecture is much simpler than the attention layers
used in existing models. Second, the computations of our model can be
easily parallelized during training, because convolutional layers do not have
time dependency as in LSTM layers, and gating units also work independently.
The experiments on SemEval datasets demonstrate the efficiency and
effectiveness of our models.
| 2018 | Computation and Language |
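A minimal sketch of a Gated Tanh-ReLU Unit over convolution outputs, under the assumption that a tanh path extracts sentiment features and a ReLU path, conditioned on an aspect embedding, gates them; the sizes and pooling step are illustrative, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedAspectConv(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, aspect_dim=300):
        super().__init__()
        self.conv_s = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.conv_a = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
        self.aspect_proj = nn.Linear(aspect_dim, n_filters)

    def forward(self, x, aspect):
        # x: (B, emb_dim, T) word embeddings; aspect: (B, aspect_dim)
        s = torch.tanh(self.conv_s(x))                      # sentiment features
        g = F.relu(self.conv_a(x) +
                   self.aspect_proj(aspect).unsqueeze(2))   # aspect-aware gate
        return (s * g).max(dim=2).values                    # max-over-time
```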
SNU_IDS at SemEval-2018 Task 12: Sentence Encoder with Contextualized
Vectors for Argument Reasoning Comprehension | We present a novel neural architecture for the Argument Reasoning
Comprehension task of SemEval 2018. It is a simple neural network consisting of
three parts, collectively judging whether the logic built on a set of given
sentences (a claim, reason, and warrant) is plausible or not. The model
utilizes contextualized word vectors pre-trained on large machine translation
(MT) datasets as a form of transfer learning, which can help to mitigate the
lack of training data. Quantitative analysis shows that simply leveraging LSTMs
trained on MT datasets outperforms several baselines and non-transferred
models, achieving accuracies of about 70% on the development set and about 60%
on the test set.
| 2018 | Computation and Language |
Combining Advanced Methods in Japanese-Vietnamese Neural Machine
Translation | Neural machine translation (NMT) systems have recently achieved
state-of-the-art results between many popular language pairs because of the
availability of data. For low-resource language pairs, there has been little
research in this field due to the lack of bilingual data. In this paper, we
attempt to build the first NMT systems for a low-resource language
pair: Japanese-Vietnamese. We also show significant improvements when
combining advanced methods to reduce the adverse impacts of data sparsity and
improve the quality of NMT systems. In addition, we propose a variant of the
Byte-Pair Encoding algorithm to perform effective word segmentation for
Vietnamese texts and alleviate the rare-word problem that persists in NMT
systems.
| 2018 | Computation and Language |
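For reference, a minimal sketch of the vanilla Byte-Pair Encoding merge-learning loop that such a variant builds on (the paper's Vietnamese-specific modification is not reproduced here; input words are pre-split into space-joined symbols):

```python
import re
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    merges, vocab = [], dict(word_freqs)
    for _ in range(num_merges):
        pairs = Counter()
        for word, freq in vocab.items():
            symbols = word.split()
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        # Merge the winning pair everywhere it occurs as whole symbols.
        pat = re.compile(r'(?<!\S)' + re.escape(' '.join(best)) + r'(?!\S)')
        vocab = {pat.sub(''.join(best), w): f for w, f in vocab.items()}
    return merges

print(learn_bpe({'l o w </w>': 5, 'l o w e r </w>': 2, 'n e w e r </w>': 6}, 3))
```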
Style Obfuscation by Invariance | The task of obfuscating writing style using sequence models has previously
been investigated under the framework of obfuscation-by-transfer, where the
input text is explicitly rewritten in another style. These approaches also
often lead to major alterations to the semantic content of the input. In this
work, we propose obfuscation-by-invariance, and investigate to what extent
models trained to be explicitly style-invariant preserve semantics. We evaluate
our architectures on parallel and non-parallel corpora, and compare automatic
and human evaluations on the obfuscated sentences. Our experiments show that
style classifier performance can be reduced to chance level, whilst the
automatic evaluation of the output is seemingly equal to models applying
style-transfer. However, based on human evaluation we demonstrate a trade-off
between the level of obfuscation and the observed quality of the output in
terms of meaning preservation and grammaticality.
| 2018 | Computation and Language |
A Study on Dialog Act Recognition using Character-Level Tokenization | Dialog act recognition is an important step for dialog systems since it
reveals the intention behind the uttered words. Most approaches to the task use
word-level tokenization. In contrast, this paper explores the use of
character-level tokenization. This is relevant since there is information at
the sub-word level that is related to the function of the words and, thus,
their intention. We also explore the use of different context windows around
each token, which are able to capture important elements, such as affixes.
Furthermore, we assess the importance of punctuation and capitalization. We
performed experiments on both the Switchboard Dialog Act Corpus and the DIHANA
Corpus. In both cases, the experiments not only show that character-level
tokenization leads to better performance than the typical word-level
approaches, but also that both approaches are able to capture complementary
information. Thus, the best results are achieved by combining tokenization at
both levels.
| 2018 | Computation and Language |
Language Expansion In Text-Based Games | Text-based games are suitable test-beds for designing agents that can learn
by interaction with the environment in the form of natural language text. Very
recently, deep reinforcement learning based agents have been successfully
applied for playing text-based games. In this paper, we explore the possibility
of designing a single agent to play several text-based games and of expanding
the agent's vocabulary using the vocabulary of agents trained for multiple
games. To this end, we explore the application of a recently proposed policy
distillation method for video games to the text-based game setting. We also use
text-based games as a test-bed to analyze and hence understand the policy
distillation approach in detail.
| 2018 | Computation and Language |
Robust Handling of Polysemy via Sparse Representations | Words are polysemous and multi-faceted, with many shades of meanings. We
suggest that sparse distributed representations are more suitable than other,
commonly used, (dense) representations to express these multiple facets, and
present Category Builder, a working system that, as we show, makes use of
sparse representations to support multi-faceted lexical representations. We
argue that the set expansion task is well suited to study these meaning
distinctions since a word may belong to multiple sets with a different reason
for membership in each. We therefore exhibit the performance of Category
Builder on this task, while showing that our representation simultaneously
captures analogy problems such as "the Ganga of Egypt" or "the Voldemort of
Tolkien". Category Builder is shown to be a more expressive lexical
representation and to outperform dense representations such as Word2Vec in some
analogy classes despite being shown only two of the three input terms.
| 2018 | Computation and Language |
Multi-view Sentence Representation Learning | Multi-view learning can provide self-supervision when different views are
available of the same data. The distributional hypothesis provides another form
of useful self-supervision from adjacent sentences which are plentiful in large
unlabelled corpora. Motivated by the asymmetry in the two hemispheres of the
human brain as well as the observation that different learning architectures
tend to emphasise different aspects of sentence meaning, we create a unified
multi-view sentence representation learning framework in which one view
encodes the input sentence with a Recurrent Neural Network (RNN) while the other
encodes it with a simple linear model; the training objective is to
maximise the agreement, specified by the adjacent context information, between
the two views. We show that, after training, the vectors produced from our
multi-view training provide improved representations over the single-view
training, and the combination of different views gives further representational
improvement and demonstrates solid transferability on standard downstream
tasks.
| 2018 | Computation and Language |
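A hedged sketch of the two-view setup: an RNN view and a simple linear view encode the same sentence, and a contrastive loss scores true adjacent-context pairs above in-batch negatives. The contrastive form and all sizes are assumptions for illustration; the abstract specifies only that agreement between views is maximised.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoViewEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hid, batch_first=True)   # view 1: RNN
        self.lin = nn.Linear(emb_dim, hid)                  # view 2: linear

    def forward(self, token_ids):
        e = self.emb(token_ids)                 # (B, T, emb_dim)
        _, h = self.rnn(e)                      # h: (1, B, hid)
        return h.squeeze(0), self.lin(e.mean(dim=1))

def agreement_loss(rnn_sent, lin_context, temperature=0.1):
    """Score each sentence's RNN view against the linear view of its
    adjacent context; other rows in the batch act as negatives."""
    sim = F.normalize(rnn_sent, dim=1) @ F.normalize(lin_context, dim=1).t()
    targets = torch.arange(sim.size(0))
    return F.cross_entropy(sim / temperature, targets)
```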
Unsupervised Cross-Modal Alignment of Speech and Text Embedding Spaces | Recent research has shown that word embedding spaces learned from text
corpora of different languages can be aligned without any parallel data
supervision. Inspired by the success in unsupervised cross-lingual word
embeddings, in this paper we target learning a cross-modal alignment between
the embedding spaces of speech and text learned from corpora of their
respective modalities in an unsupervised fashion. The proposed framework learns
the individual speech and text embedding spaces, and attempts to align the two
spaces via adversarial training, followed by a refinement procedure. We show
how our framework could be used to perform spoken word classification and
translation, and the results on these two tasks demonstrate that the
performance of our unsupervised alignment approach is comparable to its
supervised counterpart. Our framework is especially useful for developing
automatic speech recognition (ASR) and speech-to-text translation systems for
low- or zero-resource languages, which have little parallel audio-text data for
training modern supervised ASR and speech-to-text translation models, but
account for the majority of the languages spoken across the world.
| 2018 | Computation and Language |
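A hedged sketch of the adversarial alignment step, assuming a MUSE-style linear map from the speech-embedding space into the text space and a small discriminator; the refinement procedure is omitted and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim = 300
W = nn.Linear(dim, dim, bias=False)                 # speech -> text space map
D = nn.Sequential(nn.Linear(dim, 512), nn.ReLU(),
                  nn.Linear(512, 1))                # discriminator
opt_w = torch.optim.SGD(W.parameters(), lr=0.1)
opt_d = torch.optim.SGD(D.parameters(), lr=0.1)

def train_step(speech_vecs, text_vecs):
    # 1) Discriminator: real text vectors -> 1, mapped speech vectors -> 0.
    logits = torch.cat([D(text_vecs), D(W(speech_vecs).detach())])
    labels = torch.cat([torch.ones(len(text_vecs), 1),
                        torch.zeros(len(speech_vecs), 1)])
    d_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Mapping: fool the discriminator into calling mapped speech "text".
    g_loss = F.binary_cross_entropy_with_logits(
        D(W(speech_vecs)), torch.ones(len(speech_vecs), 1))
    opt_w.zero_grad(); g_loss.backward(); opt_w.step()
    return d_loss.item(), g_loss.item()
```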
Metric for Automatic Machine Translation Evaluation based on Universal
Sentence Representations | Sentence representations can capture a wide range of information that cannot
be captured by local features based on character or word N-grams. This paper
examines the usefulness of universal sentence representations for evaluating
the quality of machine translation. Although it is difficult to train sentence
representations using small-scale translation datasets with manual evaluation,
sentence representations trained from large-scale data in other tasks can
improve the automatic evaluation of machine translation. Experimental results
on the WMT-2016 dataset show that the proposed method achieves state-of-the-art
performance with sentence representation features only.
| 2018 | Computation and Language |
Learning to Repair Software Vulnerabilities with Generative Adversarial
Networks | Motivated by the problem of automated repair of software vulnerabilities, we
propose an adversarial learning approach that maps from one discrete source
domain to another target domain without requiring paired labeled examples or
source and target domains to be bijections. We demonstrate that the proposed
adversarial learning approach is an effective technique for repairing software
vulnerabilities, performing close to seq2seq approaches that require labeled
pairs. The proposed Generative Adversarial Network approach is
application-agnostic in that it can be applied to other problems similar to
code repair, such as grammar correction or sentiment translation.
| 2018 | Computation and Language |
Diverse Few-Shot Text Classification with Multiple Metrics | We study few-shot learning in natural language domains. Compared to many
existing works that apply either metric-based or optimization-based
meta-learning to the image domain with low inter-task variance, we consider a more
realistic setting where tasks are diverse. However, this imposes tremendous
difficulties on existing state-of-the-art metric-based algorithms, since a
single metric is insufficient to capture complex task variations in the natural
language domain. To alleviate the problem, we propose an adaptive metric
learning approach that automatically determines the best weighted combination
from a set of metrics obtained from meta-training tasks for a newly seen
few-shot task. Extensive quantitative evaluations on real-world sentiment
analysis and dialog intent classification datasets demonstrate that the
proposed method performs favorably against state-of-the-art few-shot learning
algorithms in terms of predictive accuracy. We make our code and data available
for further study.
| 2018 | Computation and Language |
Fighting Offensive Language on Social Media with Unsupervised Text Style
Transfer | We introduce a new approach to tackle the problem of offensive language in
online social media. Our approach uses unsupervised text style transfer to
translate offensive sentences into non-offensive ones. We propose a new method
for training encoder-decoders using non-parallel data that combines a
collaborative classifier, attention and the cycle consistency loss.
Experimental results on data from Twitter and Reddit show that our method
outperforms a state-of-the-art text style transfer system in two out of three
quantitative metrics and produces reliable non-offensive transferred sentences.
| 2018 | Computation and Language |
The UN Parallel Corpus Annotated for Translation Direction | This work distinguishes between translated and original text in the UN
protocol corpus. By modeling the problem as a classification problem, we can
achieve up to 95% classification accuracy. We begin by deriving a parallel
corpus for different language-pairs annotated for translation direction, and
then classify the data by using various feature extraction methods. We compare
the different methods as well as the ability to distinguish between translated
and original texts in the different languages. The annotated corpus is publicly
available.
| 2018 | Computation and Language |
Generating High-Quality Surface Realizations Using Data Augmentation and
Factored Sequence Models | This work presents a new state of the art in reconstruction of surface
realizations from obfuscated text. We identify the lack of sufficient training
data as the major obstacle to training high-performing models, and solve this
issue by generating large amounts of synthetic training data. We also propose
preprocessing techniques which make the structure contained in the input
features more accessible to sequence models. Our models were ranked first on
all evaluation metrics in the English portion of the 2018 Surface Realization
shared task.
| 2018 | Computation and Language |
Abstractive Text Classification Using Sequence-to-convolution Neural
Networks | We propose a new deep neural network model and its training scheme for text
classification. Our model, Sequence-to-convolution Neural Networks (Seq2CNN),
consists of two blocks: a Sequential Block that summarizes input texts and a
Convolution Block that receives the summary of the input and classifies it to a label.
Seq2CNN is trained end-to-end to classify variable-length texts without
preprocessing inputs into fixed length. We also present the Gradual Weight
Shift (GWS) method, which stabilizes training. GWS is applied to our model's loss
function. We compared our model with word-based TextCNN trained with different
data preprocessing methods. We obtained significant improvement in
classification accuracy over word-based TextCNN without any ensemble or data
augmentation.
| 2020 | Computation and Language |
A Hierarchical Structured Self-Attentive Model for Extractive Document
Summarization (HSSAS) | Recent advances in neural network architectures and training algorithms
have shown the effectiveness of representation learning. Neural
network-based models generate better representations than traditional ones:
they have the ability to automatically learn distributed representations for
sentences and documents. To this end, we propose a novel model that addresses
several issues that are not adequately modeled by the previously proposed
models, such as the memory problem and incorporating the knowledge of document
structure. Our model uses a hierarchical structured self-attention mechanism to
create the sentence and document embeddings. This architecture mirrors the
hierarchical structure of the document and in turn enables us to obtain better
feature representation. The attention mechanism provides an extra source of
information to guide the summary extraction. The new model treats the
summarization task as a classification problem in which the model computes the
respective probabilities of sentence-summary membership. The model's predictions
are informed by several features such as information content, salience,
novelty and positional representation. The proposed model was evaluated on two
well-known datasets, the CNN / Daily Mail, and DUC 2002. The experimental
results show that our model outperforms the current extractive state-of-the-art
by a considerable margin.
| 2018 | Computation and Language |
Knowledge-enriched Two-layered Attention Network for Sentiment Analysis | We propose a novel two-layered attention network based on Bidirectional Long
Short-Term Memory for sentiment analysis. The novel two-layered attention
network takes advantage of the external knowledge bases to improve the
sentiment prediction. It uses Knowledge Graph Embeddings generated from
WordNet. We build our model by combining the two-layered attention network with
the supervised model based on Support Vector Regression using a Multilayer
Perceptron network for sentiment analysis. We evaluate our model on the
benchmark dataset of SemEval 2017 Task 5. Experimental results show that the
proposed model surpasses the top system of SemEval 2017 Task 5. The model
performs significantly better by improving the state-of-the-art system at
SemEval 2017 Task 5 by 1.7 and 3.7 points for sub-tracks 1 and 2 respectively.
| 2018 | Computation and Language |
Validating WordNet Meronymy Relations using Adimen-SUMO | In this paper, we report on the practical application of a novel approach for
validating the knowledge of WordNet using Adimen-SUMO. In particular, this
paper focuses on cross-checking the WordNet meronymy relations against the
knowledge encoded in Adimen-SUMO. Our validation approach tests a large set of
competency questions (CQs), which are derived (semi)-automatically from the
knowledge encoded in WordNet, SUMO and their mapping, by applying efficient
first-order logic automated theorem provers. Unfortunately, despite being
created manually, these knowledge resources are not free of errors and
discrepancies. In consequence, some of the resulting CQs are not plausible
according to the knowledge included in Adimen-SUMO. Thus, first we focus on
(semi)-automatically improving the alignment between these knowledge resources,
and second, we perform a minimal set of corrections in the ontology. Our aim is
to minimize the manual effort required for an extensive validation process. We
report on the strategies followed, the changes made, the effort needed and its
impact when validating the WordNet meronymy relations using improved versions
of the mapping and the ontology. Based on the new results, we discuss the
implications of the appropriate corrections and the need for future
enhancements.
| 2018 | Computation and Language |
Knowledgeable Reader: Enhancing Cloze-Style Reading Comprehension with
External Commonsense Knowledge | We introduce a neural reading comprehension model that integrates external
commonsense knowledge, encoded as a key-value memory, in a cloze-style setting.
Instead of relying only on document-to-question interaction or discrete
features as in prior work, our model attends to relevant external knowledge and
combines this knowledge with the context representation before inferring the
answer. This allows the model to draw on and apply knowledge from an external
knowledge source that is not explicitly stated in the text, but that is
relevant for inferring the answer. Our model improves results over a very
strong baseline on a hard Common Nouns dataset, making it a strong competitor
of much more complex models. By including knowledge explicitly, our model can
also provide evidence about the background knowledge used in the RC process.
| 2018 | Computation and Language |
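A minimal sketch of a key-value memory read, assuming retrieved commonsense facts are encoded as key and value vectors and fused with the context representation by addition (the fusion scheme is an illustrative assumption, not the authors' exact design):

```python
import torch
import torch.nn.functional as F

def kv_memory_read(query, keys, values):
    """query: (B, d) context/question vector; keys, values: (B, n_facts, d)."""
    scores = torch.bmm(keys, query.unsqueeze(2)).squeeze(2)   # (B, n_facts)
    attn = F.softmax(scores, dim=1)                           # attend to facts
    read = torch.bmm(attn.unsqueeze(1), values).squeeze(1)    # (B, d)
    return query + read   # fuse retrieved knowledge with the context

q = torch.randn(2, 64)
k = v = torch.randn(2, 10, 64)
print(kv_memory_read(q, k, v).shape)   # torch.Size([2, 64])
```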
Sentence Modeling via Multiple Word Embeddings and Multi-level
Comparison for Semantic Textual Similarity | Different word embedding models capture different aspects of linguistic
properties. This inspired us to propose a model (M-MaxLSTM-CNN) for employing
multiple sets of word embeddings for evaluating sentence similarity/relation.
Representing each word by multiple word embeddings, the MaxLSTM-CNN encoder
generates a novel sentence embedding. We then learn the similarity/relation
between our sentence embeddings via Multi-level comparison. Our method
M-MaxLSTM-CNN consistently shows strong performance in several tasks (i.e.,
measuring textual similarity, identifying paraphrases, recognizing textual entailment).
According to the experimental results on STS Benchmark dataset and SICK dataset
from SemEval, M-MaxLSTM-CNN outperforms the state-of-the-art methods for
textual similarity tasks. Our model does not use hand-crafted features (e.g.,
alignment features, N-gram overlaps, dependency features), nor does it
require pre-trained word embeddings to have the same dimension.
| 2018 | Computation and Language |
Improving Aspect Term Extraction with Bidirectional Dependency Tree
Representation | Aspect term extraction is one of the important subtasks in aspect-based
sentiment analysis. Previous studies have shown that using dependency tree
structure representation is promising for this task. However, most existing
approaches propagate information in only one direction over the dependency
tree. In this paper, we first propose a novel bidirectional dependency tree
network to extract dependency structure features from the given sentences. The
key idea is to explicitly incorporate both representations gained separately
from the bottom-up and top-down propagation on the given dependency syntactic
tree. An end-to-end framework is then developed to integrate the embedded
representations and BiLSTM plus CRF to learn both tree-structured and
sequential features to solve the aspect term extraction problem. Experimental
results demonstrate that the proposed model outperforms state-of-the-art
baseline models on four benchmark SemEval datasets.
| 2019 | Computation and Language |
Morphological analysis using a sequence decoder | We introduce Morse, a recurrent encoder-decoder model that produces
morphological analyses of each word in a sentence. The encoder turns the
relevant information about the word and its context into a fixed size vector
representation and the decoder generates the sequence of characters for the
lemma followed by a sequence of individual morphological features. We show that
generating morphological features individually rather than as a combined tag
allows the model to handle rare or unseen tags and outperform whole-tag models.
In addition, generating morphological features as a sequence rather than e.g.\
an unordered set allows our model to produce an arbitrary number of features
that represent multiple inflectional groups in morphologically complex
languages. We obtain state-of-the-art results in nine languages of different
morphological complexity under low-resource, high-resource and transfer
learning settings. We also introduce TrMor2018, a new high accuracy Turkish
morphology dataset. Our Morse implementation and the TrMor2018 dataset are
available online to support future research\footnote{See
\url{https://github.com/ai-ku/Morse.jl} for a Morse implementation in
Julia/Knet \cite{knet2016mlsys} and \url{https://github.com/ai-ku/TrMor2018}
for the new Turkish dataset.}.
| 2019 | Computation and Language |
A new dataset and model for learning to understand navigational
instructions | In this paper, we present a state-of-the-art model and introduce a new
dataset for grounded language learning. Our goal is to develop a model that can
learn to follow new instructions given prior instruction-perception-action
examples. We based our work on the SAIL dataset which consists of navigational
instructions and actions in a maze-like environment. The new model we propose
achieves the best results to date on the SAIL dataset by using an improved
perceptual component that can represent relative positions of objects. We also
analyze the problems with the SAIL dataset regarding its size and balance. We
argue that performance on a small, fixed-size dataset is no longer a good
measure to differentiate state-of-the-art models. We introduce SAILx, a
synthetic dataset generator, and perform experiments where the size and balance
of the dataset are controlled.
| 2018 | Computation and Language |
Aff2Vec: Affect--Enriched Distributional Word Representations | Human communication includes information, opinions, and reactions. Reactions
are often captured by the affective messages in written as well as verbal
communications. While there has been work in affect modeling and, to some extent,
affective content generation, the area of affective word distributions is not
well studied. Synsets and lexica capture semantic relationships across words.
These models, however, fall short in encoding affective or emotional word
interpretations. Our proposed model, Aff2Vec, provides a method for enriched
word embeddings that are representative of affective interpretations of words.
Aff2Vec outperforms the state-of-the-art in intrinsic word-similarity tasks.
Further, the use of Aff2Vec representations outperforms baseline embeddings in
downstream natural language understanding tasks including sentiment analysis,
personality detection, and frustration prediction.
| 2018 | Computation and Language |
Incorporating Glosses into Neural Word Sense Disambiguation | Word Sense Disambiguation (WSD) aims to identify the correct meaning of
polysemous words in a particular context. Lexical resources like WordNet
have proved to be of great help for WSD in knowledge-based methods.
However, previous neural networks for WSD always rely on massive labeled data
(context), ignoring lexical resources like glosses (sense definitions). In this
paper, we integrate the context and glosses of the target word into a unified
framework in order to make full use of both labeled data and lexical knowledge.
Therefore, we propose GAS: a gloss-augmented WSD neural network which jointly
encodes the context and glosses of the target word. GAS models the semantic
relationship between the context and the gloss in an improved memory network
framework, which breaks the barriers of the previous supervised methods and
knowledge-based methods. We further extend the original gloss of word sense via
its semantic relations in WordNet to enrich the gloss information. The
experimental results show that our model outperforms the state-of-the-art
systems on several English all-words WSD datasets.
| 2018 | Computation and Language |
A Talker Ensemble: the University of Wroc{\l}aw's Entry to the NIPS 2017
Conversational Intelligence Challenge | We present Poetwannabe, a chatbot submitted by the University of Wroc{\l}aw
to the NIPS 2017 Conversational Intelligence Challenge, in which it ranked
first ex aequo. It is able to conduct a conversation with a user in natural
language. The primary functionality of our dialogue system is context-aware
question answering (QA), while its secondary function is maintaining user
engagement. The chatbot is composed of a number of sub-modules, which
independently prepare replies to user's prompts and assess their own
confidence. To answer questions, our dialogue system relies heavily on factual
data, sourced mostly from Wikipedia and DBpedia, data of real user interactions
in public forums, as well as data concerning general literature. Where
applicable, modules are trained on large datasets using GPUs. However, to
comply with the competition's requirements, the final system is compact and
runs on commodity hardware.
| 2018 | Computation and Language |
Efficient and Robust Question Answering from Minimal Context over
Documents | Neural models for question answering (QA) over documents have achieved
significant performance improvements. Although effective, these models do not
scale to large corpora due to their complex modeling of interactions between
the document and the question. Moreover, recent work has shown that such models
are sensitive to adversarial inputs. In this paper, we study the minimal
context required to answer the question, and find that most questions in
existing datasets can be answered with a small set of sentences. Inspired by
this observation, we propose a simple sentence selector to select the minimal
set of sentences to feed into the QA model. Our overall system achieves
significant reductions in training (up to 15 times) and inference times (up to
13 times), with accuracy comparable to or better than the state-of-the-art on
SQuAD, NewsQA, TriviaQA and SQuAD-Open. Furthermore, our experimental results
and analyses show that our approach is more robust to adversarial inputs.
| 2018 | Computation and Language |
NeuralREG: An end-to-end approach to referring expression generation | Traditionally, Referring Expression Generation (REG) models first decide on
the form and then on the content of references to discourse entities in text,
typically relying on features such as salience and grammatical function. In
this paper, we present a new approach (NeuralREG), relying on deep neural
networks, which makes decisions about form and content in one go without
explicit feature extraction. Using a delexicalized version of the WebNLG
corpus, we show that the neural model substantially improves over two strong
baselines. Data and models are publicly available.
| 2018 | Computation and Language |
Computational Historical Linguistics | Computational approaches to historical linguistics have been proposed for
half a century. Within the last decade, this line of research has received a
major boost, owing both to the transfer of ideas and software from
computational biology and to the release of several large electronic data
resources suitable for systematic comparative work.
In this article, some of the central research topics of this new wave of
computational historical linguistics are introduced and discussed. These are
automatic assessment of genetic relatedness, automatic cognate detection,
phylogenetic inference and ancestral state reconstruction. They will be
demonstrated by means of a case study of automatically reconstructing a
Proto-Romance word list from lexical data of 50 modern Romance languages and
dialects.
| 2018 | Computation and Language |
Numeracy for Language Models: Evaluating and Improving their Ability to
Predict Numbers | Numeracy is the ability to understand and work with numbers. It is a
necessary skill for composing and understanding documents in clinical,
scientific, and other technical domains. In this paper, we explore different
strategies for modelling numerals with language models, such as memorisation
and digit-by-digit composition, and propose a novel neural architecture that
uses a continuous probability density function to model numerals from an open
vocabulary. Our evaluation on clinical and scientific datasets shows that using
hierarchical models to distinguish numerals from words improves a perplexity
metric on the subset of numerals by 2 and 4 orders of magnitude, respectively,
over non-hierarchical models. A combination of strategies can further improve
perplexity. Our continuous probability density function model reduces mean
absolute percentage errors by 18% and 54% in comparison to the second best
strategy for each dataset, respectively.
| 2021 | Computation and Language |
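A hedged sketch of scoring numerals with a continuous density, assuming the language model's hidden state parameterises a mixture of Gaussians over (possibly log-transformed) numeral values; the component count and parameterisation are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class NumeralDensityHead(nn.Module):
    def __init__(self, hid=512, n_components=8):
        super().__init__()
        # Predict mixture weights, means, and log-stds from the hidden state.
        self.params = nn.Linear(hid, 3 * n_components)

    def log_prob(self, h, value):
        # h: (B, hid); value: (B,) numeral magnitudes
        logits, mu, log_sigma = self.params(h).chunk(3, dim=1)
        comp = torch.distributions.Normal(mu, log_sigma.exp())
        log_pdf = comp.log_prob(value.unsqueeze(1))       # (B, k) per component
        log_w = torch.log_softmax(logits, dim=1)          # mixture weights
        return torch.logsumexp(log_w + log_pdf, dim=1)    # (B,) log density
```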
Party Matters: Enhancing Legislative Embeddings with Author Attributes
for Vote Prediction | Predicting how Congressional legislators will vote is important for
understanding their past and future behavior. However, previous work on
roll-call prediction has been limited to single-session settings and thus did not
consider generalization across sessions. In this paper, we show that metadata
is crucial for modeling voting outcomes in new contexts, as changes between
sessions lead to changes in the underlying data generation process. We show how
augmenting bill text with the sponsors' ideologies in a neural network model
can achieve an average of a 4% boost in accuracy over the previous
state-of-the-art.
| 2018 | Computation and Language |
Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive
Token Encodings | The rise of neural networks, and particularly recurrent neural networks, has
produced significant advances in part-of-speech tagging accuracy. One
characteristic common among these models is the presence of rich initial word
encodings. These encodings typically are composed of a recurrent
character-based representation with learned and pre-trained word embeddings.
However, these encodings do not consider a context wider than a single word and
it is only through subsequent recurrent layers that word or sub-word
information interacts. In this paper, we investigate models that use recurrent
neural networks with sentence-level context for initial character and
word-based representations. In particular we show that optimal results are
obtained by integrating these context sensitive representations through
synchronized training with a meta-model that learns to combine their states. We
present results on part-of-speech and morphological tagging with
state-of-the-art performance on a number of languages.
| 2018 | Computation and Language |
Sparse and Constrained Attention for Neural Machine Translation | In NMT, words are sometimes dropped from the source or generated repeatedly
in the translation. We explore novel strategies to address the coverage problem
that change only the attention transformation. Our approach allocates
fertilities to source words, used to bound the attention each word can receive.
We experiment with various sparse and constrained attention transformations and
propose a new one, constrained sparsemax, shown to be differentiable and
sparse. Empirical evaluation is provided for three language pairs.
| 2018 | Computation and Language |
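For reference, a minimal sketch of the unconstrained sparsemax transformation (Martins & Astudillo, 2016) that the constrained variant extends: a Euclidean projection onto the probability simplex that can assign exactly zero attention to some source words. The per-word fertility upper bounds of constrained sparsemax are not implemented here.

```python
import numpy as np

def sparsemax(z):
    """Project a score vector z onto the probability simplex."""
    z_sorted = np.sort(z)[::-1]                 # scores in decreasing order
    cumsum = np.cumsum(z_sorted)
    ks = np.arange(1, len(z) + 1)
    support = 1 + ks * z_sorted > cumsum        # which coordinates stay nonzero
    k = ks[support][-1]
    tau = (cumsum[k - 1] - 1) / k               # threshold
    return np.maximum(z - tau, 0.0)

print(sparsemax(np.array([1.0, 0.8, -1.0])))    # [0.6, 0.4, 0.0]: sparse output
```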
Halo: Learning Semantics-Aware Representations for Cross-Lingual
Information Extraction | Cross-lingual information extraction (CLIE) is an important and challenging
task, especially in low resource scenarios. To tackle this challenge, we
propose a training method, called Halo, which enforces the local region of each
hidden state of a neural model to only generate target tokens with the same
semantic structure tag. This simple but powerful technique enables a neural
model to learn semantics-aware representations that are robust to noise,
without introducing any extra parameter, thus yielding better generalization in
both high and low resource settings.
| 2018 | Computation and Language |
Character-based Neural Networks for Sentence Pair Modeling | Sentence pair modeling is critical for many NLP tasks, such as paraphrase
identification, semantic textual similarity, and natural language inference.
Most state-of-the-art neural models for these tasks rely on pretrained word
embeddings and compose sentence-level semantics in varied ways; however, few
works have attempted to verify whether we really need pretrained embeddings in
these tasks. In this paper, we study how effective subword-level (character and
character n-gram) representations are in sentence pair modeling. Though it is
well-known that subword models are effective in tasks with single sentence
input, including language modeling and machine translation, they have not been
systematically studied in sentence pair modeling tasks where the semantic and
string similarities between texts matter. Our experiments show that subword
models without any pretrained word embedding can achieve new state-of-the-art
results on two social media datasets and competitive results on news data for
paraphrase identification.
| 2018 | Computation and Language |
Controlling Personality-Based Stylistic Variation with Neural Natural
Language Generators | Natural language generators for task-oriented dialogue must effectively
realize system dialogue actions and their associated semantics. In many
applications, it is also desirable for generators to control the style of an
utterance. To date, work on task-oriented neural generation has primarily
focused on semantic fidelity rather than achieving stylistic goals, while work
on style has been done in contexts where it is difficult to measure content
preservation. Here we present three different sequence-to-sequence models and
carefully test how well they disentangle content and style. We use a
statistical generator, Personage, to synthesize a new corpus of over 88,000
restaurant domain utterances whose style varies according to models of
personality, giving us total control over both the semantic content and the
stylistic variation in the training data. We then vary the amount of explicit
stylistic supervision given to the three models. We show that our most explicit
model can simultaneously achieve high fidelity to both semantic and stylistic
goals: this model adds a context vector of 36 stylistic parameters as input to
the hidden state of the encoder at each time step, showing the benefits of
explicit stylistic supervision, even when the amount of training data is large.
| 2018 | Computation and Language |
Learning sentence embeddings using Recursive Networks | Learning sentence vectors that generalise well is a challenging task. In this
paper we compare three methods of learning phrase embeddings: 1) using LSTMs,
2) using recursive nets, and 3) a variant of method 2 using the POS information
of the phrase. We train our models on dictionary definitions of words to obtain
a reverse dictionary application similar to Felix et al. [1]. To see if our
embeddings can be transferred to a new task we also train and test on the
Rotten Tomatoes dataset [2]. We train keeping the sentence embeddings fixed as
well as with fine tuning.
| 2018 | Computation and Language |
Joint Image Captioning and Question Answering | Answering visual questions requires acquiring everyday common knowledge and modeling the
semantic connections among different parts of images, which is difficult for
VQA systems to learn from images with only the supervision from answers.
Meanwhile, image captioning systems with a beam search strategy tend to generate
similar captions and fail to describe images diversely. To address the
aforementioned issues, we present a system that has these two tasks complement
each other, and which is capable of jointly producing image captions and
answering visual questions. In particular, we utilize question and image
features to generate question-related captions and use the generated captions
as additional features to provide new knowledge to the VQA system. For image
captioning, our system attains more informative results in terms of the relative
improvements on VQA tasks as well as competitive results using automated
metrics. Applying our system to the VQA tasks, our results on VQA v2 dataset
achieve 65.8% using generated captions and 69.1% using annotated captions in
validation set and 68.4% in the test-standard set. Further, an ensemble of 10
models results in 69.7% in the test-standard split.
| 2018 | Computation and Language |
Estimating the Rating of Reviewers Based on the Text | User-generated texts such as reviews and social media are valuable sources of
information. Online reviews are important assets for users to buy a product,
see a movie, or make a decision. Therefore, the rating of a review is one of the
reliable factors for all users to read and trust the reviews. This paper
analyzes the texts of the reviews to evaluate and predict the ratings.
Moreover, we study the effect of lexical features generated from text as well
as sentiment words on the accuracy of rating prediction. Our analysis shows
that words with a high information gain score are more effective than
words with high TF-IDF value. In addition, we explore the best number of
features for predicting the ratings of the reviews.
| 2018 | Computation and Language |
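A minimal sketch of scoring a word feature by information gain, the criterion the abstract reports as more effective than TF-IDF; the binary presence encoding is an illustrative assumption.

```python
import numpy as np

def information_gain(presence, labels):
    """presence: binary vector, whether a word occurs in each review;
    labels: the reviews' ratings. Returns H(labels) - H(labels | presence)."""
    def entropy(y):
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()

    gain = entropy(labels)
    for v in (0, 1):
        mask = presence == v
        if mask.any():
            gain -= mask.mean() * entropy(labels[mask])
    return gain

# Example: a word appearing mostly in high-rated reviews scores highly.
labels = np.array([5, 5, 4, 1, 2, 1])
word = np.array([1, 1, 1, 0, 0, 0])
print(information_gain(word, labels))
```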
Paracompositionality, MWEs and Argument Substitution | Multi-word expressions, verb-particle constructions, idiomatically combining
phrases, and phrasal idioms have something in common: not all of their elements
contribute to the argument structure of the predicate implicated by the
expression.
Radically lexicalized theories of grammar that avoid string-, term-, logical
form-, and tree-writing, and categorial grammars that avoid wrap operation,
make predictions about the categories involved in verb-particles and phrasal
idioms. They may require singleton types, which can only substitute for one
value, not just for one kind of value. These types are asymmetric: they can be
arguments only. They also narrowly constrain the kind of semantic value that
can correspond to such syntactic categories. Idiomatically combining phrases do
not subcategorize for singleton types, and they exploit another locally
computable and compositional property of a correspondence, that every syntactic
expression can project its head word. Such MWEs can be seen as empirically
realized categorial possibilities, rather than lacunae in a theory of
lexicalizable syntactic categories.
| 2018 | Computation and Language |
Context-Aware Sequence-to-Sequence Models for Conversational Systems | This work proposes a novel approach based on sequence-to-sequence (seq2seq)
models for context-aware conversational systems. Existing seq2seq models have
been shown to be good for generating natural responses in a data-driven
conversational system. However, they still lack mechanisms to incorporate
previous conversation turns. We investigate RNN-based methods that efficiently
integrate previous turns as a context for generating responses. Overall, our
experimental results based on human judgment demonstrate the feasibility and
effectiveness of the proposed approach.
| 2018 | Computation and Language |
Sentiment Analysis of Arabic Tweets: Feature Engineering and A Hybrid
Approach | Sentiment Analysis in Arabic is a challenging task due to the rich morphology
of the language. Moreover, the task is further complicated when applied to
Twitter data that is known to be highly informal and noisy. In this paper, we
develop a hybrid method for sentiment analysis of Arabic tweets in a specific
Arabic dialect, namely the Saudi dialect. Several features were engineered and
evaluated using a feature backward selection method. Then a hybrid method that
combines a corpus-based and lexicon-based method was developed for several
classification models (two-way, three-way, four-way). The best F1-scores for
these models were 69.9, 61.63, and 55.07, respectively.
| 2,018 | Computation and Language |
Multimodal Affective Analysis Using Hierarchical Attention Strategy with
Word-Level Alignment | Multimodal affective computing, learning to recognize and interpret human
affects and subjective information from multiple data sources, is still
challenging because: (i) it is hard to extract informative features to
represent human affects from heterogeneous inputs; (ii) current fusion
strategies only fuse different modalities at an abstract level, ignoring
time-dependent interactions between modalities. Addressing such issues, we
introduce a hierarchical multimodal architecture with attention and word-level
fusion to classify utterance-level sentiment and emotion from text and audio
data. Our model outperforms state-of-the-art approaches on published datasets,
and we demonstrate that it is able to visualize and interpret the synchronized
attention over modalities.
| 2,018 | Computation and Language |
COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval | This paper contributes to cross-lingual image annotation and retrieval in
terms of data and baseline methods. We propose COCO-CN, a novel dataset
enriching MS-COCO with manually written Chinese sentences and tags. For more
effective annotation acquisition, we develop a recommendation-assisted
collective annotation system, automatically providing an annotator with several
tags and sentences deemed to be relevant with respect to the pictorial content.
Having 20,342 images annotated with 27,218 Chinese sentences and 70,993 tags,
COCO-CN is currently the largest Chinese-English dataset that provides a
unified and challenging platform for cross-lingual image tagging, captioning
and retrieval. We develop conceptually simple yet effective methods per task
for learning from cross-lingual resources. Extensive experiments on the three
tasks justify the viability of the proposed dataset and methods. Data and code
are publicly available at https://github.com/li-xirong/coco-cn
| 2,019 | Computation and Language |
Normalization of Transliterated Words in Code-Mixed Data Using Seq2Seq
Model & Levenshtein Distance | Building tools for code-mixed data is rapidly gaining popularity in the NLP
research community as such data is exponentially rising on social media.
Working with code-mixed data poses several challenges, especially due to
grammatical inconsistencies and spelling variations, in addition to all the
previously known challenges of social media scenarios. In this article, we
present a novel architecture focusing on normalizing phonetic typing
variations, which is commonly seen in code-mixed data. One of the main features
of our architecture is that in addition to normalizing, it can also be utilized
for back-transliteration and word identification in some cases. Our model
achieved an accuracy of 90.27% on the test data.
| 2,018 | Computation and Language |
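Since the title pairs the seq2seq model with Levenshtein distance, a standard dynamic-programming implementation of that distance is sketched below; using it to rank candidate normalizations, as in the final lines, is an illustrative assumption rather than the paper's exact pipeline.

```python
# Standard Levenshtein (edit) distance via dynamic programming.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))  # distances for the empty prefix of `a`
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical use: rank candidate normalizations of a noisy romanized token.
candidates = ["kyaa", "kya", "kia"]
print(sorted(candidates, key=lambda c: levenshtein("kyaaa", c)))
```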
Enhancing Chinese Intent Classification by Dynamically Integrating
Character Features into Word Embeddings with Ensemble Techniques | Intent classification has been widely researched on English data with deep
learning approaches that are based on neural networks and word embeddings. The
challenge for Chinese intent classification stems from the fact that, unlike
English, where most words are built from 26 phonological alphabet letters, Chinese
is logographic: a Chinese character is a more basic semantic unit that
can be informative, and its meaning does not vary much across contexts. Chinese
word embeddings alone can be inadequate for representing words, and pre-trained
embeddings can suffer from not aligning well with the task at hand. To account
for the inadequacy and leverage Chinese character information, we propose a
low-effort and generic way to dynamically integrate character embedding based
feature maps with word embedding based inputs, whose resulting word-character
embeddings are stacked with a contextual information extraction module to
further incorporate context information for predictions. On top of the proposed
model, we employ an ensemble method to combine single models and obtain the
final result. The approach is data-independent without relying on external
sources like pre-trained word embeddings. The proposed model outperforms
baseline models and existing methods.
| 2,018 | Computation and Language |
Learning to Mine Aligned Code and Natural Language Pairs from Stack
Overflow | For tasks like code synthesis from natural language, code retrieval, and code
summarization, data-driven models have shown great promise. However, creating
these models requires parallel data between natural language (NL) and code with
fine-grained alignments. Stack Overflow (SO) is a promising source to create
such a data set: the questions are diverse and most of them have corresponding
answers with high-quality code snippets. However, existing heuristic methods
(e.g., pairing the title of a post with the code in the accepted answer) are
limited both in their coverage and the correctness of the NL-code pairs
obtained. In this paper, we propose a novel method to mine high-quality aligned
data from SO using two sets of features: hand-crafted features considering the
structure of the extracted snippets, and correspondence features obtained by
training a probabilistic model to capture the correlation between NL and code
using neural networks. These features are fed into a classifier that determines
the quality of mined NL-code pairs. Experiments using Python and Java as test
beds show that the proposed method greatly expands coverage and accuracy over
existing mining methods, even when using only a small number of labeled
examples. Further, we find that reasonable results are achieved even when
training the classifier on one language and testing on another, showing promise
for scaling NL-code mining to a wide variety of programming languages beyond
those for which we are able to annotate data.
| 2,018 | Computation and Language |
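The mining method above feeds hand-crafted structural features and a model-based correspondence score into a classifier over candidate NL-code pairs. A minimal sketch with logistic regression follows; the feature names and values are invented placeholders, and the real system's feature set is richer.

```python
# Sketch: classify candidate NL-code pairs from structural features plus a
# correspondence score. Feature names and values are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: [is_full_code_block, starts_with_import, num_lines, correspondence]
X = np.array([[1, 0, 3, 0.92],
              [0, 1, 15, 0.31],
              [1, 0, 5, 0.85],
              [0, 0, 40, 0.12]])
y = np.array([1, 0, 1, 0])  # 1 = correctly aligned NL-code pair

clf = LogisticRegression().fit(X, y)
print("P(aligned) =", clf.predict_proba([[1, 0, 4, 0.77]])[0, 1])
```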
Self-Attention-Based Message-Relevant Response Generation for Neural
Conversation Model | Using a sequence-to-sequence framework, many neural conversation models for
chit-chat succeed in producing natural responses. Nevertheless, neural
conversation models tend to give generic responses that are not specific to the
given messages, and this remains a challenge. To alleviate this tendency,
we propose a method to promote message-relevant and diverse responses for
neural conversation model by using self-attention, which is time-efficient as
well as effective. Furthermore, we investigate why and how effective
self-attention is through a detailed comparison with standard dialogue
generation. The experimental results show that the proposed method improves on
standard dialogue generation across various evaluation metrics.
| 2,018 | Computation and Language |
A Transition-based Algorithm for Unrestricted AMR Parsing | Non-projective parsing can be useful to handle cycles and reentrancy in AMR
graphs. We explore this idea and introduce a greedy left-to-right
non-projective transition-based parser. At each parsing configuration, an
oracle decides whether to create a concept or whether to connect a pair of
existing concepts. The algorithm handles reentrancy and arbitrary cycles
natively, i.e. within the transition system itself. The model is evaluated on
the LDC2015E86 corpus, obtaining results close to the state of the art,
including a Smatch of 64%, and showing good behavior on reentrant edges.
| 2,018 | Computation and Language |
Bilingual Sentiment Embeddings: Joint Projection of Sentiment Across
Languages | Sentiment analysis in low-resource languages suffers from a lack of annotated
corpora to estimate high-performing models. Machine translation and bilingual
word embeddings provide some relief through cross-lingual sentiment approaches.
However, they either require large amounts of parallel data or do not
sufficiently capture sentiment information. We introduce Bilingual Sentiment
Embeddings (BLSE), which jointly represent sentiment information in a source
and target language. This model only requires a small bilingual lexicon, a
source-language corpus annotated for sentiment, and monolingual word embeddings
for each language. We perform experiments on three language combinations
(Spanish, Catalan, Basque) for sentence-level cross-lingual sentiment
classification and find that our model significantly outperforms
state-of-the-art methods on four out of six experimental setups, as well as
capturing complementary information to machine translation. Our analysis of the
resulting embedding space provides evidence that it represents sentiment
information in the resource-poor target language without any annotated data in
that language.
| 2,018 | Computation and Language |
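BLSE jointly projects source and target embeddings into a shared sentiment-informed space using a small bilingual lexicon. The sketch below shows only a simplified version of the alignment step, learned as a least-squares map over lexicon pairs; the joint sentiment objective of the full model is omitted.

```python
# Simplified sketch: learn a linear map W from source to target embeddings
# over a small bilingual lexicon via least squares. The full BLSE model
# trains two projections jointly with a sentiment objective (omitted here).
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(300, 50))  # source vectors for lexicon entries
tgt = rng.normal(size=(300, 50))  # vectors of their translations

W, *_ = np.linalg.lstsq(src, tgt, rcond=None)  # min_W ||src @ W - tgt||^2

# A mapped source vector can now be compared directly with target vectors.
mapped = src[0] @ W
sims = tgt @ mapped / (np.linalg.norm(tgt, axis=1) * np.linalg.norm(mapped))
print("nearest lexicon entry on the target side:", int(np.argmax(sims)))
```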
Grounding the Semantics of Part-of-Day Nouns Worldwide using Twitter | The usage of part-of-day nouns, such as 'night', and their time-specific
greetings ('good night'), varies across languages and cultures. We show the
possibilities that Twitter offers for studying the semantics of these terms and
their variability across countries. We mine a worldwide sample of multilingual
tweets with temporal greetings, and study how their frequencies vary in
relation to local time. The results provide insights into the semantics of
these temporal expressions and the cultural and sociological factors
influencing their usage.
| 2,018 | Computation and Language |
Selecting Machine-Translated Data for Quick Bootstrapping of a Natural
Language Understanding System | This paper investigates the use of Machine Translation (MT) to bootstrap a
Natural Language Understanding (NLU) system for a new language for the use case
of a large-scale voice-controlled device. The goal is to decrease the cost and
time needed to get an annotated corpus for the new language, while still having
a large enough coverage of user requests. Different methods of filtering MT
data, in order to keep utterances that improve NLU performance, as well as
language-specific post-processing methods, are investigated. These methods are
tested in a large-scale NLU task by translating around 10 million training
utterances from English to German. The results show a large improvement for
using MT data over a grammar-based and over an in-house data collection
baseline, while reducing the manual effort greatly. Both filtering and
post-processing approaches improve results further.
| 2,018 | Computation and Language |
RDF2Vec-based Classification of Ontology Alignment Changes | When ontologies cover overlapping topics, the overlap can be represented
using ontology alignments. These alignments need to be continuously adapted to
changing ontologies. Especially for large ontologies this is a costly task
often consisting of manual work. Finding changes that do not lead to an
adaptation of the alignment can potentially make this process significantly
easier. This work presents an approach to finding these changes based on RDF
embeddings and common classification techniques. To examine the feasibility of
this approach, an evaluation on a real-world dataset is presented. In this
evaluation, the best classifiers reached a precision of 0.8.
| 2,018 | Computation and Language |
How much does a word weigh? Weighting word embeddings for word sense
induction | The paper describes our participation in the first shared task on word sense
induction and disambiguation for the Russian language RUSSE'2018 (Panchenko et
al., 2018). For each of several dozens of ambiguous words, the participants
were asked to group text fragments containing it according to the senses of
this word, which were not provided beforehand, hence the "induction" part
of the task. For instance, participants were given a word such as "bank" and a
set of text fragments (also known as "contexts") in which this word occurs,
e.g. "bank is a financial institution that accepts deposits" and "river bank is
a slope beside a body of water". A participant was then asked to cluster such
contexts into a number of clusters, not known in advance, corresponding to, in this case, the
"company" and the "area" senses of the word "bank". The organizers proposed
three evaluation datasets of varying complexity and text genres based
respectively on texts of Wikipedia, Web pages, and a dictionary of the Russian
language. We present two experiments: a positive and a negative one, based
respectively on clustering of contexts represented as a weighted average of
word embeddings and on machine translation using two state-of-the-art
production neural machine translation systems. Our team showed the second best
result on two datasets and the third best result on the remaining dataset
among 18 participating teams. We managed to substantially outperform
competitive state-of-the-art baselines from the previous years based on sense
embeddings.
| 2,018 | Computation and Language |
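The positive experiment above represents each context as a weighted average of word embeddings and clusters those vectors. A minimal sketch follows; the embeddings and IDF-style weights are toy stand-ins, and the cluster count is fixed for illustration even though the task leaves it unknown.

```python
# Sketch: contexts as weighted averages of word embeddings, then clustered.
# Embeddings and IDF-style weights are toy stand-ins.
import numpy as np
from sklearn.cluster import KMeans

emb = {"bank": np.array([0.9, 0.1]), "money": np.array([0.8, 0.2]),
       "river": np.array([0.1, 0.9]), "slope": np.array([0.2, 0.8])}
idf = {"bank": 0.5, "money": 2.0, "river": 2.0, "slope": 2.0}

def context_vector(tokens):
    vecs = [idf[t] * emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0)

contexts = [["bank", "money"], ["river", "bank"], ["bank", "slope"]]
X = np.stack([context_vector(c) for c in contexts])
print(KMeans(n_clusters=2, n_init=10).fit_predict(X))  # induced senses
```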
Working Memory Networks: Augmenting Memory Networks with a Relational
Reasoning Module | During the last years, there has been a lot of interest in achieving some
kind of complex reasoning using deep neural networks. To do that, models like
Memory Networks (MemNNs) have combined external memory storages and attention
mechanisms. These architectures, however, lack more complex reasoning
mechanisms that could allow, for instance, relational reasoning. Relation
Networks (RNs), on the other hand, have shown outstanding results in relational
reasoning tasks. Unfortunately, their computational cost grows quadratically
with the number of memories, something prohibitive for larger problems. To
solve these issues, we introduce the Working Memory Network, a MemNN
architecture with a novel working memory storage and reasoning module. Our
model retains the relational reasoning abilities of the RN while reducing its
computational complexity from quadratic to linear. We tested our model on the
text QA dataset bAbI and the visual QA dataset NLVR. In the jointly trained
bAbI-10k, we set a new state-of-the-art, achieving a mean error of less than
0.5%. Moreover, a simple ensemble of two of our models solves all 20 tasks in
the joint version of the benchmark.
| 2,018 | Computation and Language |
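The quadratic cost referred to above comes from applying the relation function g to every pair of memories. A toy numpy sketch of that all-pairs step makes the O(n^2) scaling explicit; g here is a placeholder for the learned MLP of a real Relation Network.

```python
# Sketch: the all-pairs relational step of a Relation Network. The number of
# g(...) evaluations grows quadratically with the number of memories; this is
# the cost the Working Memory Network reduces to linear.
import numpy as np

def g(oi, oj):
    return np.tanh(oi + oj)  # placeholder; a real RN uses a learned MLP

memories = np.random.randn(20, 32)       # n = 20 memory vectors
pairs = [g(memories[i], memories[j])
         for i in range(len(memories))
         for j in range(len(memories))]  # n^2 = 400 evaluations
relational_feature = np.sum(pairs, axis=0)
```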
Scoring Lexical Entailment with a Supervised Directional Similarity
Network | We present the Supervised Directional Similarity Network (SDSN), a novel
neural architecture for learning task-specific transformation functions on top
of general-purpose word embeddings. Relying on only a limited amount of
supervision from task-specific scores on a subset of the vocabulary, our
architecture is able to generalise and transform a general-purpose
distributional vector space to model the relation of lexical entailment.
Experiments show excellent performance on scoring graded lexical entailment,
raising the state-of-the-art on the HyperLex dataset by approximately 25%.
| 2,018 | Computation and Language |
Embedding Syntax and Semantics of Prepositions via Tensor Decomposition | Prepositions are among the most frequent words in English and play complex
roles in the syntax and semantics of sentences. Not surprisingly, they pose
well-known difficulties in automatic processing of sentences (prepositional
attachment ambiguities and idiosyncratic uses in phrases). Existing methods on
preposition representation treat prepositions no differently from content words
(e.g., word2vec and GloVe). In addition, recent studies aiming at solving
prepositional attachment and preposition selection problems depend heavily on
external linguistic resources and use dataset-specific word representations. In
this paper we use word-triple counts (one element of each triple being a preposition)
to capture a preposition's interaction with its attachment and complement. We
then derive preposition embeddings via tensor decomposition on a large
unlabeled corpus. We reveal a new geometry involving Hadamard products and
empirically demonstrate its utility in paraphrasing phrasal verbs. Furthermore,
our preposition embeddings are used as simple features in two challenging
downstream tasks: preposition selection and prepositional attachment
disambiguation. We achieve results comparable to or better than the
state-of-the-art on multiple standardized datasets.
| 2,018 | Computation and Language |
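The Hadamard-product geometry mentioned above can be made concrete: compose a verb and a preposition by element-wise product and compare the result to candidate single-word paraphrases by cosine similarity. The vectors in this sketch are random placeholders, so the scores are illustrative only.

```python
# Toy sketch: compose verb and preposition embeddings with a Hadamard
# (element-wise) product and score single-word paraphrase candidates by
# cosine similarity. Vectors are random placeholders.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

rng = np.random.default_rng(1)
vec = {w: rng.normal(size=100) for w in ["give", "up", "surrender", "donate"]}

phrasal = vec["give"] * vec["up"]  # Hadamard composition of "give up"
for cand in ["surrender", "donate"]:
    print(cand, round(cosine(phrasal, vec[cand]), 3))
```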
Modeling Interpersonal Influence of Verbal Behavior in Couples Therapy
Dyadic Interactions | Dyadic interactions among humans are marked by speakers continuously
influencing and reacting to each other in terms of responses and behaviors,
among others. Understanding how interpersonal dynamics affect behavior is
important for successful treatment in psychotherapy domains. Traditional
schemes that automatically identify behavior for this purpose have often looked
at only the target speaker. In this work, we propose a Markov model of how a
target speaker's behavior is influenced by their own past behavior as well as
their perception of their partner's behavior, based on lexical features. Apart
from incorporating additional potentially useful information, our model can
also control the degree to which the partner affects the target speaker. We
evaluate our proposed model on the task of classifying Negative behavior in
Couples Therapy and show that it is more accurate than the single-speaker
model. Furthermore, we investigate the degree to which the optimal influence
relates to how well a couple does in the long term, by relating it to
relationship outcomes.
| 2,018 | Computation and Language |
Native Language Cognate Effects on Second Language Lexical Choice | We present a computational analysis of cognate effects on the spontaneous
linguistic productions of advanced non-native speakers. Introducing a large
corpus of highly competent non-native English speakers, and using a set of
carefully selected lexical items, we show that the lexical choices of
non-natives are affected by cognates in their native language. This effect is
so powerful that we are able to reconstruct the phylogenetic language tree of
the Indo-European language family solely from the frequencies of specific
lexical items in the English of authors with various native languages. We
quantitatively analyze non-native lexical choice, highlighting cognate
facilitation as one of the important phenomena shaping the language of
non-native speakers.
| 2,018 | Computation and Language |
Crowd-Labeling Fashion Reviews with Quality Control | We present a new methodology for high-quality labeling in the fashion domain
with crowd workers instead of experts. We focus on the Aspect-Based Sentiment
Analysis task. Our methods filter out inaccurate input from crowd workers, but
we preserve differing worker labels to capture the inherently high variability
of opinions. We demonstrate the quality of the labeled data using Facebook's
FastText framework as a baseline.
| 2,018 | Computation and Language |
Global-Locally Self-Attentive Dialogue State Tracker | Dialogue state tracking, which estimates user goals and requests given the
dialogue context, is an essential part of task-oriented dialogue systems. In
this paper, we propose the Global-Locally Self-Attentive Dialogue State Tracker
(GLAD), which learns representations of the user utterance and previous system
actions with global-local modules. Our model uses global modules to share
parameters between estimators for different types (called slots) of dialogue
states, and uses local modules to learn slot-specific features. We show that
this significantly improves tracking of rare states and achieves
state-of-the-art performance on the WoZ and DSTC2 state tracking tasks. GLAD
obtains 88.1% joint goal accuracy and 97.1% request accuracy on WoZ,
outperforming prior work by 3.7% and 5.5%. On DSTC2, our model obtains 74.5%
joint goal accuracy and 97.5% request accuracy, outperforming prior work by
1.1% and 1.0%.
| 2,018 | Computation and Language |
Learning compositionally through attentive guidance | While neural network models have been successfully applied to domains that
require substantial generalisation skills, recent studies have implied that
they struggle when the task they are trained on requires inferring its
underlying compositional structure. In this paper, we introduce Attentive
Guidance, a mechanism to direct a sequence to sequence model equipped with
attention to find more compositional solutions. We test it on two tasks,
devised precisely to assess the compositional capabilities of neural models,
and we show that vanilla sequence to sequence models with attention overfit the
training distribution, while the guided versions come up with compositional
solutions that fit the training and testing distributions almost equally well.
Moreover, the learned solutions generalise even in cases where the training and
testing distributions strongly diverge. In this way, we demonstrate that
sequence to sequence models are capable of finding compositional solutions
without requiring extra components. These results help to disentangle the
causes for the lack of systematic compositionality in neural networks, which
can in turn fuel future work.
| 2,019 | Computation and Language |
Letting Emotions Flow: Success Prediction by Modeling the Flow of
Emotions in Books | Books have the power to make us feel happiness, sadness, pain, surprise, or
sorrow. An author's dexterity in the use of these emotions captivates readers
and makes it difficult for them to put the book down. In this paper, we model
the flow of emotions over a book using recurrent neural networks and quantify
its usefulness in predicting success in books. We obtained the best weighted
F1-score of 69% for predicting books' success in a multitask setting
(simultaneously predicting success and genre of books).
| 2,018 | Computation and Language |
A Corpus for Multilingual Document Classification in Eight Languages | Cross-lingual document classification aims at training a document classifier
on resources in one language and transferring it to a different language
without any additional resources. Several approaches have been proposed in the
literature and the current best practice is to evaluate them on a subset of the
Reuters Corpus Volume 2. However, this subset covers only a few languages
(English, German, French and Spanish), and almost all published works focus on
the transfer between English and German. In addition, we have observed that
the class prior distributions differ significantly between the languages. We
argue that this complicates the evaluation of multilinguality. In this
paper, we propose a new subset of the Reuters corpus with balanced class priors
for eight languages. By adding Italian, Russian, Japanese and Chinese, we cover
languages which are very different with respect to syntax, morphology, etc. We
provide strong baselines for all language transfer directions using
multilingual word and sentence embeddings respectively. Our goal is to offer a
freely available framework to evaluate cross-lingual document classification,
and we hope by these means to foster research in this important area.
| 2,018 | Computation and Language |
Filtering and Mining Parallel Data in a Joint Multilingual Space | We learn a joint multilingual sentence embedding and use the distance between
sentences in different languages to filter noisy parallel data and to mine for
parallel data in large news collections. We are able to improve a competitive
baseline on the WMT'14 English to German task by 0.3 BLEU by filtering out 25%
of the training data. The same approach is used to mine additional bitexts for
the WMT'14 system and to obtain competitive results on the BUCC shared task to
identify parallel sentences in comparable corpora. The approach is generic, it
can be applied to many language pairs and it is independent of the architecture
of the machine translation system.
| 2,018 | Computation and Language |
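With a joint embedding space in hand, filtering reduces to thresholding a similarity score. Below is a minimal sketch that drops the most distant 25% of sentence pairs, mirroring the filtering ratio reported above; the embeddings are random placeholders.

```python
# Sketch: filter noisy parallel data by cosine similarity between joint
# multilingual sentence embeddings, dropping the worst 25% of pairs.
# Embeddings are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 512))  # source-sentence embeddings
tgt = rng.normal(size=(1000, 512))  # target-sentence embeddings

sims = np.sum(src * tgt, axis=1) / (
    np.linalg.norm(src, axis=1) * np.linalg.norm(tgt, axis=1) + 1e-9)

keep = sims >= np.quantile(sims, 0.25)  # keep the most similar 75%
print(f"kept {keep.sum()} of {len(sims)} pairs")
```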
Baseline Needs More Love: On Simple Word-Embedding-Based Models and
Associated Pooling Mechanisms | Many deep learning architectures have been proposed to model the
compositionality in text sequences, requiring a substantial number of
parameters and expensive computations. However, there has not been a rigorous
evaluation regarding the added value of sophisticated compositional functions.
In this paper, we conduct a point-by-point comparative study between Simple
Word-Embedding-based Models (SWEMs), consisting of parameter-free pooling
operations, relative to word-embedding-based RNN/CNN models. Surprisingly,
SWEMs exhibit comparable or even superior performance in the majority of cases
considered. Based upon this understanding, we propose two additional pooling
strategies over learned word embeddings: (i) a max-pooling operation for
improved interpretability; and (ii) a hierarchical pooling operation, which
preserves spatial (n-gram) information within text sequences. We present
experiments on 17 datasets encompassing three tasks: (i) (long) document
classification; (ii) text sequence matching; and (iii) short text tasks,
including classification and tagging. The source code and datasets can be
obtained from https://github.com/dinghanshen/SWEM.
| 2,018 | Computation and Language |
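The two proposed pooling strategies are simple enough to state directly in code. Below is a sketch of max-pooling and hierarchical pooling over a matrix of word embeddings, following the description above; the window size is a free hyperparameter.

```python
# Sketch: SWEM-style parameter-free pooling over word embeddings.
# `E` is a (sequence_length, embedding_dim) matrix of word vectors.
import numpy as np

def swem_max(E):
    """Max-pooling: per-dimension maximum over all words."""
    return E.max(axis=0)

def swem_hier(E, window=3):
    """Hierarchical pooling: average within each n-gram window, then
    max-pool over the window averages, preserving local word order."""
    n = E.shape[0]
    if n <= window:
        return E.mean(axis=0)
    windows = np.stack([E[i:i + window].mean(axis=0)
                        for i in range(n - window + 1)])
    return windows.max(axis=0)

E = np.random.randn(10, 300)  # 10 words, 300-dim embeddings
doc_vec = swem_hier(E, window=3)
```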
Fast Neural Machine Translation Implementation | This paper describes the submissions to the efficiency track for GPUs at the
Workshop for Neural Machine Translation and Generation by members of the
University of Edinburgh, Adam Mickiewicz University, Tilde and University of
Alicante. We focus on efficient implementation of the recurrent deep-learning
model as implemented in Amun, the fast inference engine for neural machine
translation. We improve the performance with an efficient mini-batching
algorithm, and by fusing the softmax operation with the k-best extraction
algorithm. Submissions using Amun were first, second and third fastest in the
GPU efficiency track.
| 2,018 | Computation and Language |
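One way to understand the softmax/k-best fusion is that softmax preserves the ordering of logits, so the k best output words can be selected before normalization. The numpy sketch below illustrates that observation; Amun's actual implementation is a fused GPU kernel, not this Python code.

```python
# Sketch: softmax preserves the ordering of logits, so the k-best words can
# be selected before normalization; only k probabilities are then produced.
# A numpy illustration of the idea behind fusing softmax with k-best.
import numpy as np

def kbest_softmax(logits, k):
    top = np.argpartition(logits, -k)[-k:]   # O(V) selection, no full sort
    m = logits.max()                          # for numerical stability
    denom = np.exp(logits - m).sum()          # one reduction over the logits
    probs = np.exp(logits[top] - m) / denom   # normalize only the k winners
    order = np.argsort(-probs)
    return top[order], probs[order]

logits = np.random.randn(50000)               # vocabulary-sized scores
ids, probs = kbest_softmax(logits, k=5)
```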
Diffusion Maps for Textual Network Embedding | Textual network embedding leverages rich text information associated with the
network to learn low-dimensional vectorial representations of vertices. Rather
than using typical natural language processing (NLP) approaches, recent
research exploits the relationship of texts on the same edge to graphically
embed text. However, these models neglect to measure the complete level of
connectivity between any two texts in the graph. We present diffusion maps for
textual network embedding (DMTE), integrating global structural information of
the graph to capture the semantic relatedness between texts, with a
diffusion-convolution operation applied on the text inputs. In addition, a new
objective function is designed to efficiently preserve the high-order proximity
using the graph diffusion. Experimental results show that the proposed approach
outperforms state-of-the-art methods on the vertex-classification and
link-prediction tasks.
| 2,019 | Computation and Language |
Robust Distant Supervision Relation Extraction via Deep Reinforcement
Learning | Distant supervision has become the standard method for relation extraction.
However, even though it is an efficient method, it does not come without
cost---the resulting distantly supervised training samples are often very noisy.
To combat the noise, most of the recent state-of-the-art approaches focus on
selecting one-best sentence or calculating soft attention weights over the set
of the sentences of one specific entity pair. However, these methods are
suboptimal, and the false positive problem is still a key bottleneck for
performance. We argue that those incorrectly labeled candidate
sentences must be treated with a hard decision, rather than being handled with
soft attention weights. To do this, our paper describes a radical solution---we
explore a deep reinforcement learning strategy to generate the false-positive
indicator, where we automatically recognize false positives for each relation
type without any supervised information. Unlike the removal operation in the
previous studies, we redistribute them into the negative examples. The
experimental results show that the proposed strategy significantly improves the
performance of distant supervision compared to state-of-the-art systems.
| 2,018 | Computation and Language |
DSGAN: Generative Adversarial Training for Distant Supervision Relation
Extraction | Distant supervision can effectively label data for relation extraction, but
suffers from the noise labeling problem. Recent works mainly perform soft
bag-level noise reduction strategies to find the relatively better samples in a
sentence bag, which is suboptimal compared with making a hard decision of false
positive samples in sentence level. In this paper, we introduce an adversarial
learning framework, which we named DSGAN, to learn a sentence-level
true-positive generator. Inspired by Generative Adversarial Networks, we regard
the positive samples generated by the generator as the negative samples to
train the discriminator. The optimal generator is obtained when the
discriminator's ability to discriminate has declined the most. We use
the generator to filter the distant supervision training dataset and redistribute
the false positive instances into the negative set, thereby providing a
cleaned dataset for relation classification. The experimental results show that
the proposed strategy significantly improves the performance of distant
supervision relation extraction compared to state-of-the-art systems.
| 2,018 | Computation and Language |
A Sentiment Analysis of Breast Cancer Treatment Experiences and
Healthcare Perceptions Across Twitter | Background: Social media has the capacity to afford the healthcare industry
with valuable feedback from patients who reveal and express their medical
decision-making process, as well as self-reported quality of life indicators
both during and post treatment. In prior work [Crannell et al.], we have
studied an active cancer patient population on Twitter and compiled a set of
tweets describing their experience with this disease. We refer to these online
public testimonies as "Invisible Patient Reported Outcomes" (iPROs), because
they carry relevant indicators, yet are difficult to capture by conventional
means of self-report. Methods: Our present study aims to identify tweets
related to the patient experience as an additional informative tool for
monitoring public health. Using Twitter's public streaming API, we compiled
over 5.3 million "breast cancer" related tweets spanning September 2016 until
mid December 2017. We combined supervised machine learning methods with natural
language processing to sift tweets relevant to breast cancer patient
experiences. We analyzed a sample of 845 breast cancer patient and survivor
accounts, responsible for over 48,000 posts. We investigated tweet content with
a hedonometric sentiment analysis to quantitatively extract emotionally charged
topics. Results: We found that positive experiences were shared regarding
patient treatment, raising support, and spreading awareness. Further
discussions related to healthcare were prevalent and largely negative focusing
on fear of political legislation that could result in loss of coverage.
Conclusions: Social media can provide a positive outlet for patients to discuss
their needs and concerns regarding their healthcare coverage and treatment
needs. Capturing iPROs from online communication can help inform healthcare
professionals and lead to more connected and personalized treatment regimens.
| 2,018 | Computation and Language |
Phrase Table as Recommendation Memory for Neural Machine Translation | Neural Machine Translation (NMT) has drawn much attention due to its
promising translation performance recently. However, several studies indicate
that NMT often generates fluent but unfaithful translations. In this paper, we
propose a method to alleviate this problem by using a phrase table as
recommendation memory. The main idea is to add bonus to words worthy of
recommendation, so that NMT can make correct predictions. Specifically, we
first derive a prefix tree to accommodate all the candidate target phrases by
searching the phrase translation table according to the source sentence. Then,
we construct a recommendation word set by matching between candidate target
phrases and previously translated target words by NMT. After that, we determine
the specific bonus value for each recommendable word by using the attention
vector and phrase translation probability. Finally, we integrate this bonus
value into NMT to improve the translation results. The extensive experiments
demonstrate that the proposed methods obtain remarkable improvements over the
strong attention-based NMT.
| 2,018 | Computation and Language |
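The first step above stores candidate target phrases in a prefix tree so that a matched prefix of already-translated words yields the set of words worth a recommendation bonus. A minimal Python trie sketch of that lookup follows, as a simplified stand-in for the paper's data structure.

```python
# Minimal prefix tree (trie) over candidate target phrases: after matching a
# prefix of already-translated words, the children of the reached node are
# the words that deserve a recommendation bonus. A simplified stand-in.
class Trie:
    def __init__(self):
        self.children = {}

    def insert(self, phrase):
        node = self
        for word in phrase.split():
            node = node.children.setdefault(word, Trie())

    def recommend(self, prefix_words):
        """Words that can extend the given already-matched prefix."""
        node = self
        for word in prefix_words:
            if word not in node.children:
                return set()
            node = node.children[word]
        return set(node.children)

trie = Trie()
for phrase in ["world cup", "world war two", "world peace"]:
    trie.insert(phrase)
print(trie.recommend(["world"]))  # {'cup', 'war', 'peace'}
```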
Lifelong Domain Word Embedding via Meta-Learning | Learning high-quality domain word embeddings is important for achieving good
performance in many NLP tasks. General-purpose embeddings trained on
large-scale corpora are often sub-optimal for domain-specific applications.
However, domain-specific tasks often do not have large in-domain corpora for
training high-quality domain embeddings. In this paper, we propose a novel
lifelong learning setting for domain embedding. That is, when performing the
new domain embedding, the system has seen many past domains, and it tries to
expand the new in-domain corpus by exploiting the corpora from the past domains
via meta-learning. The proposed meta-learner characterizes the similarities of
the contexts of the same word in many domain corpora, which helps retrieve
relevant data from the past domains to expand the new domain corpus.
Experimental results show that domain embeddings produced from such a process
improve the performance of the downstream tasks.
| 2,018 | Computation and Language |
Japanese Predicate Conjugation for Neural Machine Translation | Neural machine translation (NMT) has a drawback in that it can generate only
high-frequency words owing to the computational costs of the softmax function
in the output layer.
In Japanese-English NMT, Japanese predicate conjugation causes an increase in
vocabulary size. For example, one verb can have as many as 19 surface
varieties. In this research, we focus on predicate conjugation for compressing
the vocabulary size in Japanese. The vocabulary list is filled with the various
forms of verbs. We propose methods using predicate conjugation information
without discarding linguistic information. The proposed methods can generate
low-frequency words and deal with unknown words. Two methods were considered to
introduce conjugation information: the first considers it as a token
(conjugation token) and the second considers it as an embedded vector
(conjugation feature).
The results using these methods demonstrate that the vocabulary size can be
compressed by approximately 86.1% (Tanaka corpus) and the NMT models can output
words not in the training data set. Furthermore, BLEU scores improved by
0.91 points in Japanese-to-English translation, and 0.32 points in
English-to-Japanese translation with ASPEC.
| 2,018 | Computation and Language |
Context-Aware Neural Machine Translation Learns Anaphora Resolution | Standard machine translation systems process sentences in isolation and hence
ignore extra-sentential information, even though extended context can both
prevent mistakes in ambiguous cases and improve translation coherence. We
introduce a context-aware neural machine translation model designed in such a way
that the flow of information from the extended context to the translation model
can be controlled and analyzed. We experiment with an English-Russian subtitles
dataset, and observe that much of what is captured by our model deals with
improving pronoun translation. We measure correspondences between induced
attention distributions and coreference relations and observe that the model
implicitly captures anaphora. It is consistent with gains for sentences where
pronouns need to be gendered in translation. Besides improvements in anaphoric
cases, the model also improves in overall BLEU, both over its context-agnostic
version (+0.7) and over simple concatenation of the context and source
sentences (+0.6).
| 2,018 | Computation and Language |
Recursive Neural Network Based Preordering for English-to-Japanese
Machine Translation | The word order between source and target languages significantly influences
the translation quality in machine translation. Preordering can effectively
address this problem. Previous preordering methods require a manual feature
design, making language-dependent design costly. In this paper, we propose a
preordering method with a recursive neural network that learns features from
raw inputs. Experiments show that the proposed method achieves comparable gain
in translation quality to the state-of-the-art method but without a manual
feature design.
| 2,018 | Computation and Language |
Snips Voice Platform: an embedded Spoken Language Understanding system
for private-by-design voice interfaces | This paper presents the machine learning architecture of the Snips Voice
Platform, a software solution to perform Spoken Language Understanding on
microprocessors typical of IoT devices. The embedded inference is fast and
accurate while enforcing privacy by design, as no personal user data is ever
collected. Focusing on Automatic Speech Recognition and Natural Language
Understanding, we detail our approach to training high-performance Machine
Learning models that are small enough to run in real-time on small devices.
Additionally, we describe a data generation procedure that provides sufficient,
high-quality training data without compromising user privacy.
| 2,018 | Computation and Language |
Situated Mapping of Sequential Instructions to Actions with Single-step
Reward Observation | We propose a learning approach for mapping context-dependent sequential
instructions to actions. We address the problem of discourse and state
dependencies with an attention-based model that considers both the history of
the interaction and the state of the world. To train from start and goal states
without access to demonstrations, we propose SESTRA, a learning algorithm that
takes advantage of single-step reward observations and immediate expected
reward maximization. We evaluate on the SCONE domains, and show absolute
accuracy improvements of 9.8%-25.3% across the domains over approaches that use
high-level logical representations.
| 2,018 | Computation and Language |
Neural Argument Generation Augmented with Externally Retrieved Evidence | High quality arguments are essential elements for human reasoning and
decision-making processes. However, effective argument construction is a
challenging task for both humans and machines. In this work, we study a novel
task on automatically generating arguments of a different stance for a given
statement. We propose an encoder-decoder style neural network-based argument
generation model enriched with externally retrieved evidence from Wikipedia.
Our model first generates a set of talking point phrases as intermediate
representation, followed by a separate decoder producing the final argument
based on both input and the keyphrases. Experiments on a large-scale dataset
collected from Reddit show that our model constructs arguments with more
topic-relevant content than a popular sequence-to-sequence generation model
according to both automatic evaluation and human assessments.
| 2,018 | Computation and Language |
Duluth UROP at SemEval-2018 Task 2: Multilingual Emoji Prediction with
Ensemble Learning and Oversampling | This paper describes the Duluth UROP systems that participated in
SemEval-2018 Task 2, Multilingual Emoji Prediction. We relied on a variety of
ensembles made up of classifiers using Naive Bayes, Logistic Regression, and
Random Forests. We used unigram and bigram features and tried to offset the
skewness of the data through the use of oversampling. Our task evaluation
results place us 19th of 48 systems in the English evaluation, and 5th of 21 in
the Spanish. After the evaluation we realized that some simple changes to
preprocessing could significantly improve our results. After making these
changes we attained results that would have placed us sixth in the English
evaluation, and second in the Spanish.
| 2,018 | Computation and Language |
UMDuluth-CS8761 at SemEval-2018 Task 9: Hypernym Discovery using Hearst
Patterns, Co-occurrence frequencies and Word Embeddings | Hypernym Discovery is the task of identifying potential hypernyms for a given
term. A hypernym is a more generalized word that is super-ordinate to more
specific words. This paper explores several approaches that rely on
co-occurrence frequencies of word pairs, Hearst Patterns based on regular
expressions, and word embeddings created from the UMBC corpus. Our system
Babbage participated in Subtask 1A for English and placed 6th of 19 systems
when identifying concept hypernyms, and 12th of 18 systems for entity
hypernyms.
| 2,018 | Computation and Language |
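Hearst patterns are the most transparent of the three signals: lexical templates such as "X such as Y" realized as regular expressions. A small sketch with two classic patterns follows; the actual pattern inventory of the Babbage system may differ.

```python
# Sketch: extract (hyponym, hypernym) candidates with two classic Hearst
# patterns. The actual system's pattern inventory may differ.
import re

PATTERNS = [
    # "NP_hyper such as NP_hypo" -> (hypo, hyper)
    (re.compile(r"(\w+) such as (\w+)"), lambda m: (m.group(2), m.group(1))),
    # "NP_hypo and other NP_hyper" -> (hypo, hyper)
    (re.compile(r"(\w+) and other (\w+)"), lambda m: (m.group(1), m.group(2))),
]

def hearst_pairs(text):
    pairs = []
    for pattern, extract in PATTERNS:
        for m in pattern.finditer(text):
            pairs.append(extract(m))
    return pairs

print(hearst_pairs("animals such as dogs were found, cats and other pets too"))
# -> [('dogs', 'animals'), ('cats', 'pets')]
```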
UMDSub at SemEval-2018 Task 2: Multilingual Emoji Prediction
Multi-channel Convolutional Neural Network on Subword Embedding | This paper describes the UMDSub system that participated in Task 2 of
SemEval-2018. We developed a system that predicts an emoji given the raw text
of an English tweet. The system is a Multi-channel Convolutional Neural Network
based on subword embeddings for the representation of tweets. This model
improves on character- or word-based methods by about 2%. Our system placed
21st of 48 participating systems in the official evaluation.
| 2,018 | Computation and Language |