Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (1 distinct value)
---|---|---|---|
Vicinity-Driven Paragraph and Sentence Alignment for Comparable Corpora | Parallel corpora have driven great progress in the field of Text
Simplification. However, most sentence alignment algorithms either support
only a limited range of alignment types or simply ignore valuable clues
present in comparable documents. We address this problem by introducing a new
set of flexible vicinity-driven paragraph and sentence alignment algorithms
that handle 1-N, N-1, N-N and long-distance null alignments without the need
for hard-to-replicate supervised models.
| 2016 | Computation and Language |
Information Extraction with Character-level Neural Networks and Free
Noisy Supervision | We present an architecture for information extraction from text that augments
an existing parser with a character-level neural network. The network is
trained using a measure of consistency of extracted data with existing
databases as a form of noisy supervision. Our architecture combines the ability
of constraint-based information extraction systems to easily incorporate domain
knowledge and constraints with the ability of deep neural networks to leverage
large amounts of data to learn complex features. Boosting the existing parser's
precision, the system led to large improvements over a mature and highly tuned
constraint-based production information extraction system used at Bloomberg for
financial language text.
| 2017 | Computation and Language |
Models of retrieval in sentence comprehension: A computational
evaluation using Bayesian hierarchical modeling | Research on interference has provided evidence that the formation of
dependencies between non-adjacent words relies on a cue-based retrieval
mechanism. Two different models can account for one of the main predictions of
interference, i.e., a slowdown at a retrieval site, when several items share a
feature associated with a retrieval cue: Lewis and Vasishth's (2005)
activation-based model and McElree's (2000) direct access model. Even though
these two models have been used almost interchangeably, they are based on
different assumptions and predict differences in the relationship between
reading times and response accuracy. The activation-based model follows the
assumptions of ACT-R, and its retrieval process behaves as a lognormal race
between accumulators of evidence with a single variance. Under this model,
accuracy of the retrieval is determined by the winner of the race and retrieval
time by its rate of accumulation. In contrast, the direct access model assumes
a model of memory where only the probability of retrieval varies between items;
in this model, differences in latencies are a by-product of the possibility of
repairing incorrect retrievals. We implemented both models in a Bayesian
hierarchical framework in order to evaluate and compare them. We show that
some aspects of the data are better fit under the direct access model than
under the activation-based model. We suggest that this finding does not rule
out the possibility that retrieval behaves as a race model with assumptions
that follow the ACT-R framework less closely. We show that, by introducing a
modification of the activation model, i.e., by assuming that the accumulation
of evidence for retrieval of incorrect items is not only slower but also
noisier (i.e., different variances for the correct and incorrect items), the
model can provide a fit as good as that of the direct access model.
| 2017 | Computation and Language |
Multi-Perspective Context Matching for Machine Comprehension | Previous machine comprehension (MC) datasets are either too small to train
end-to-end deep learning models, or not difficult enough to evaluate the
ability of current MC techniques. The newly released SQuAD dataset alleviates
these limitations, and gives us a chance to develop more realistic MC models.
Based on this dataset, we propose a Multi-Perspective Context Matching (MPCM)
model, which is an end-to-end system that directly predicts the answer
beginning and ending points in a passage. Our model first adjusts each
word-embedding vector in the passage by multiplying a relevancy weight computed
against the question. Then, we encode the question and weighted passage by
using bi-directional LSTMs. For each point in the passage, our model matches
the context of this point against the encoded question from multiple
perspectives and produces a matching vector. Given those matched vectors, we
employ another bi-directional LSTM to aggregate all the information and predict
the beginning and ending points. Experimental results on the SQuAD test set
show that our model achieves a competitive position on the leaderboard.
| 2016 | Computation and Language |
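The relevancy-weighting step in the MPCM abstract above is easy to sketch. The following is a minimal illustration, not the paper's exact formulation: each passage word vector is scaled by its maximum cosine similarity to any question word; all names and sizes are illustrative.

```python
import numpy as np

def relevancy_weight(passage_emb, question_emb):
    """Scale each passage word vector by its maximum cosine similarity
    to any question word (illustrative version of the MPCM filter)."""
    p = passage_emb / np.linalg.norm(passage_emb, axis=1, keepdims=True)
    q = question_emb / np.linalg.norm(question_emb, axis=1, keepdims=True)
    sim = p @ q.T                       # (n_passage, n_question) cosines
    r = sim.max(axis=1, keepdims=True)  # relevancy of each passage word
    return passage_emb * r

# toy usage: 5 passage words, 3 question words, 50-dim embeddings
weighted = relevancy_weight(np.random.randn(5, 50), np.random.randn(3, 50))
```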
Building Large Machine Reading-Comprehension Datasets using Paragraph
Vectors | We present a dual contribution to the task of machine reading-comprehension:
a technique for creating large-sized machine-comprehension (MC) datasets using
paragraph-vector models; and a novel, hybrid neural-network architecture that
combines the representation power of recurrent neural networks with the
discriminative power of fully-connected multi-layered networks. We use the
MC-dataset generation technique to build a dataset of around 2 million
examples, for which we empirically determine the high ceiling of human
performance (around 91% accuracy), as well as the performance of a variety of
computer models. Among all the models we have experimented with, our hybrid
neural-network architecture achieves the highest performance (83.2% accuracy).
The remaining gap to the human-performance ceiling provides enough room for
future model improvements.
| 2016 | Computation and Language |
Improving Neural Language Models with a Continuous Cache | We propose an extension to neural network language models to adapt their
prediction to the recent history. Our model is a simplified version of memory
augmented networks, which stores past hidden activations as memory and accesses
them through a dot product with the current hidden activation. This mechanism
is very efficient and scales to very large memory sizes. We also draw a link
between the use of external memory in neural networks and the cache models
used with count-based language models. We demonstrate on several language
modeling datasets
that our approach performs significantly better than recent memory augmented
networks.
| 2016 | Computation and Language |
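The cache mechanism in the continuous-cache abstract above reduces to a few lines. The sketch below is an assumption-laden illustration: `p_lm` is the base model's next-word distribution, `cache_h` and `cache_words` hold past hidden states and the words that followed them, and `theta`/`lam` are interpolation hyperparameters whose values are not specified here.

```python
import numpy as np

def cache_interpolate(h_t, cache_h, cache_words, p_lm, theta=0.3, lam=0.2):
    """Continuous-cache sketch: score past hidden states against the current
    one with a dot product, convert to a distribution over the cached words,
    and linearly interpolate with the base LM distribution."""
    scores = theta * (cache_h @ h_t)            # dot products with the cache
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    p_cache = np.zeros_like(p_lm)
    for w, word_id in zip(weights, cache_words):
        p_cache[word_id] += w                   # scatter mass onto cached words
    return (1 - lam) * p_lm + lam * p_cache
```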
Hypernyms under Siege: Linguistically-motivated Artillery for Hypernymy
Detection | The fundamental role of hypernymy in NLP has motivated the development of
many methods for the automatic identification of this relation, most of which
rely on word distribution. We investigate an extensive number of such
unsupervised measures, using several distributional semantic models that differ
by context type and feature weighting. We analyze the performance of the
different methods based on their linguistic motivation. Comparison to the
state-of-the-art supervised methods shows that while supervised methods
generally outperform the unsupervised ones, the former are sensitive to the
distribution of training instances, hurting their reliability. Being based on
general linguistic hypotheses and independent from training data, unsupervised
measures are more robust, and therefore are still useful artillery for
hypernymy detection.
| 2017 | Computation and Language |
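One family of the unsupervised measures surveyed above relies on the distributional inclusion hypothesis. As a concrete but hedged example, a WeedsPrec-style score can be computed from weighted context vectors like this (the dict representation and weights are illustrative):

```python
def weeds_prec(hypo_contexts, hyper_contexts):
    """Distributional inclusion sketch: the fraction of the hyponym's context
    weight that also appears among the candidate hypernym's contexts.
    Inputs are {context: weight} dicts (e.g. PPMI-weighted co-occurrences)."""
    total = sum(hypo_contexts.values())
    if not total:
        return 0.0
    shared = sum(w for c, w in hypo_contexts.items() if c in hyper_contexts)
    return shared / total

# toy example: 'cat' contexts should be largely included in 'animal' contexts
print(weeds_prec({"purr": 2.0, "pet": 1.0}, {"pet": 1.0, "eat": 0.5}))
```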
Mining Compatible/Incompatible Entities from Question and Answering via
Yes/No Answer Classification using Distant Label Expansion | Product Community Question Answering (PCQA) provides useful information about
products and their features (aspects) that may not be well addressed by product
descriptions and reviews. We observe that a product's compatibility issues with
other products are frequently discussed in PCQA and such issues are more
frequently addressed for accessories, e.g., via a yes/no question such as
"Does this mouse work with Windows 10?". In this paper, we address the problem of
extracting compatible and incompatible products from yes/no questions in PCQA.
This problem naturally lends itself to a two-stage framework: first, we perform
Complementary Entity (product) Recognition (CER) on yes/no questions; second,
we identify the polarities of yes/no answers to assign the complementary
entities a compatibility label (compatible, incompatible or unknown). We
leverage an existing unsupervised method for the first stage and a 3-class
classifier by combining a distant PU-learning method (learning from positive
and unlabeled examples) together with a binary classifier for the second stage.
The benefit of using distant PU-learning is that it can help label more
implicit yes/no answers without using any human-annotated data. We conduct
experiments on 4 products to show that the proposed method is effective.
| 2016 | Computation and Language |
Grammatical Constraints on Intra-sentential Code-Switching: From
Theories to Working Models | We make one of the first attempts to build working models for
intra-sentential code-switching based on the Equivalence-Constraint (Poplack
1980) and Matrix-Language (Myers-Scotton 1993) theories. We conduct a detailed
theoretical analysis, and a small-scale empirical study of the two models for
Hindi-English CS. Our analyses show that the models are neither sound nor
complete. Taking insights from the errors made by the models, we propose a new
model that combines features of both theories.
| 2016 | Computation and Language |
Neural Emoji Recommendation in Dialogue Systems | Emojis are an essential component of dialogue and are broadly used
on almost all social platforms. They can express subtle feelings beyond
plain text and thus smooth communication between users, making dialogue
systems more anthropomorphic and vivid. In this paper, we focus on
automatically recommending appropriate emojis given the contextual information
in multi-turn dialogue systems, where the challenge lies in understanding
whole conversations. More specifically, we propose the hierarchical long
short-term memory model (H-LSTM) to construct dialogue representations,
followed by a softmax classifier for emoji classification. We evaluate our
models on the task of emoji classification on a real-world dataset, with
further analyses of parameter sensitivity and a case study. Experimental
results demonstrate that our method achieves the best performance on all
evaluation metrics, indicating that our method captures well the
contextual information and emotion flow in dialogues, which is significant for
emoji recommendation.
| 2016 | Computation and Language |
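A rough PyTorch sketch of the hierarchical architecture named above: a word-level LSTM encodes each utterance, an utterance-level LSTM encodes the dialogue, and a linear layer produces emoji logits for a softmax classifier. Layer sizes, and the use of the final hidden state as the utterance vector, are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class HLSTM(nn.Module):
    """Hierarchical LSTM sketch for emoji recommendation (sizes illustrative)."""
    def __init__(self, vocab, emb=128, hid=256, n_emojis=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.word_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.utt_lstm = nn.LSTM(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, n_emojis)

    def forward(self, dialogue):              # (n_utts, n_words) token ids
        _, (h, _) = self.word_lstm(self.embed(dialogue))
        utt_vecs = h[-1].unsqueeze(0)         # (1, n_utts, hid) utterance vectors
        _, (h, _) = self.utt_lstm(utt_vecs)
        return self.out(h[-1])                # emoji logits for the dialogue
```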
How Grammatical is Character-level Neural Machine Translation? Assessing
MT Quality with Contrastive Translation Pairs | Analysing translation quality with regard to specific linguistic phenomena has
historically been difficult and time-consuming. Neural machine translation has
the attractive property that it can produce scores for arbitrary translations,
and we propose a novel method to assess how well NMT systems model specific
linguistic phenomena such as agreement over long distances, the production of
novel words, and the faithful translation of polarity. The core idea is that we
measure whether a reference translation is more probable under an NMT model than
a contrastive translation which introduces a specific type of error. We present
LingEval97, a large-scale data set of 97000 contrastive translation pairs based
on the WMT English->German translation task, with errors automatically created
with simple rules. We report results for a number of systems, and find that
recently introduced character-level NMT systems perform better at
transliteration than models with byte-pair encoding (BPE) segmentation, but
perform more poorly at morphosyntactic agreement and at translating discontiguous
units of meaning.
| 2017 | Computation and Language |
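The LingEval97-style evaluation described above needs nothing more than a scoring hook. In the sketch below, `score(source, translation)` is an assumed function returning a model's log-probability of the translation; a system is credited whenever the reference outscores the contrastive variant.

```python
def contrastive_accuracy(score, pairs):
    """pairs: iterable of (source, reference, contrastive) sentence triples;
    returns the fraction of pairs where the reference is preferred."""
    correct = sum(1 for src, ref, bad in pairs
                  if score(src, ref) > score(src, bad))
    return correct / len(pairs)
```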
Recurrent Deep Stacking Networks for Speech Recognition | This paper presents our work on applying Recurrent Deep Stacking Networks
(RDSNs) to Robust Automatic Speech Recognition (ASR) tasks. We also propose a
more efficient yet comparable substitute to the RDSN, the Bi-Pass Stacking
Network (BPSN). The main idea of these two models is to add phoneme-level
information into acoustic models, transforming an acoustic model into the
combination of an acoustic model and a phoneme-level N-gram model.
Experiments show that RDSNs and BPSNs can substantially improve performance
over conventional DNNs.
| 2020 | Computation and Language |
Unsupervised Clustering of Commercial Domains for Adaptive Machine
Translation | In this paper, we report on domain clustering in the ambit of an adaptive MT
architecture. A standard bottom-up hierarchical clustering algorithm has been
instantiated with five different distances, which have been compared, on an MT
benchmark built on 40 commercial domains, in terms of dendrograms, intrinsic
and extrinsic evaluations. The main outcome is that the most expensive distance
is also the only one able to allow the MT engine to guarantee good performance
even with few, but highly populated clusters of domains.
| 2016 | Computation and Language |
Multilingual Word Embeddings using Multigraphs | We present a family of neural-network-inspired models for computing
continuous word representations, specifically designed to exploit both
monolingual and multilingual text. This framework allows us to perform
unsupervised training of embeddings that exhibit higher accuracy on syntactic
and semantic compositionality, as well as multilingual semantic similarity,
compared to previous models trained in an unsupervised fashion. We also show
that such multilingual embeddings, optimized for semantic similarity, can
improve the performance of statistical machine translation with respect to how
it handles words not present in the parallel data.
| 2016 | Computation and Language |
Incorporating Language Level Information into Acoustic Models | This paper proposes a class of novel Deep Recurrent Neural Networks which can
incorporate language-level information into acoustic models. For simplicity, we
name these networks Recurrent Deep Language Networks (RDLNs). Multiple
variants of RDLNs are considered, including two kinds of context information,
two methods to process the context, and two methods to incorporate the
language-level information. RDLNs provide possible methods to fine-tune the
whole Automatic Speech Recognition (ASR) system in the acoustic modeling
process.
| 2020 | Computation and Language |
CoPaSul Manual -- Contour-based parametric and superpositional
intonation stylization | The purposes of the CoPaSul toolkit are (1) automatic prosodic annotation and
(2) prosodic feature extraction from syllable to utterance level. CoPaSul
stands for contour-based, parametric, superpositional intonation stylization.
In this framework intonation is represented as a superposition of global and
local contours that are described parametrically in terms of polynomial
coefficients. On the global level (usually associated with, but not
necessarily restricted to, intonation phrases) the stylization serves to represent register
in terms of time-varying F0 level and range. On the local level (e.g. accent
groups), local contour shapes are described. From this parameterization several
features related to prosodic boundaries and prominence can be derived.
Furthermore, by coefficient clustering prosodic contour classes can be obtained
in a bottom-up way. In addition to stylization-based feature extraction,
standard F0 and energy measures (e.g. mean and variance) as well as rhythmic
aspects can be calculated. At present, automatic annotation comprises:
segmentation into interpausal chunks, syllable nucleus extraction, and
unsupervised localization of prosodic phrase boundaries and prominent
syllables. F0 feature sets (and in part energy feature sets) can be derived
for: standard measurements (such as median and IQR), register in terms of F0 level and range,
prosodic boundaries, local contour shapes, bottom-up derived contour classes,
Gestalt of accent groups in terms of their deviation from higher level prosodic
units, as well as for rhythmic aspects quantifying the relation between F0 and
energy contours and prosodic event rates.
| 2023 | Computation and Language |
Interpretable Semantic Textual Similarity: Finding and explaining
differences between sentences | User acceptance of artificial intelligence agents might depend on their
ability to explain their reasoning, which requires adding an interpretability
layer that helps users understand their behavior. This paper focuses
on adding an interpretable layer on top of Semantic Textual Similarity (STS),
which measures the degree of semantic equivalence between two sentences. The
interpretability layer is formalized as the alignment between pairs of segments
across the two sentences, where the relation between the segments is labeled
with a relation type and a similarity score. We present a publicly available
dataset of sentence pairs annotated following the formalization. We then
develop a system trained on this dataset which, given a sentence pair, explains
what is similar and different, in the form of graded and typed segment
alignments. When evaluated on the dataset, the system performs better than an
informed baseline, showing that the dataset and task are well-defined and
feasible. Most importantly, two user studies show how the system output can be
used to automatically produce explanations in natural language. Users performed
better when they had access to the explanations, providing preliminary evidence
that our dataset and method for automatically producing explanations are useful in
real applications.
| 2016 | Computation and Language |
Learning through Dialogue Interactions by Asking Questions | A good dialogue agent should have the ability to interact with users by both
responding to questions and by asking questions, and importantly to learn from
both types of interaction. In this work, we explore this direction by designing
a simulator and a set of synthetic tasks in the movie domain that allow such
interactions between a learner and a teacher. We investigate how a learner can
benefit from asking questions in both offline and online reinforcement learning
settings, and demonstrate that the learner improves when asking questions.
Finally, real experiments with Mechanical Turk validate the approach. Our work
represents a first step in developing such end-to-end learned interactive
dialogue agents.
| 2017 | Computation and Language |
TeKnowbase: Towards Construction of a Knowledge-base of Technical
Concepts | In this paper, we describe the construction of TeKnowbase, a knowledge-base
of technical concepts in computer science. Our main information sources are
technical websites such as Webopedia and Techtarget as well as Wikipedia and
online textbooks. We divide the knowledge-base construction problem into two
parts -- the acquisition of entities and the extraction of relationships among
these entities. Our knowledge-base consists of approximately 100,000 triples.
We conducted an evaluation on a sample of triples and report an accuracy of a
little over 90%. We additionally conducted classification experiments on
StackOverflow data with features from TeKnowbase and achieved improved
classification accuracy.
| 2016 | Computation and Language |
Transition-based Parsing with Context Enhancement and Future Reward
Reranking | This paper presents a novel reranking model, future reward reranking, to
re-score the actions in a transition-based parser by using a global scorer.
Different to conventional reranking parsing, the model searches for the best
dependency tree in all feasible trees constraining by a sequence of actions to
get the future reward of the sequence. The scorer is based on a first-order
graph-based parser with bidirectional LSTM, which catches different parsing
view compared with the transition-based parser. Besides, since context
enhancement has shown substantial improvement in the arc-stand transition-based
parsing over the parsing accuracy, we implement context enhancement on an
arc-eager transition-base parser with stack LSTMs, the dynamic oracle and
dropout supporting and achieve further improvement. With the global scorer and
context enhancement, the results show that UAS of the parser increases as much
as 1.20% for English and 1.66% for Chinese, and LAS increases as much as 1.32%
for English and 1.63% for Chinese. Moreover, we get state-of-the-art LASs,
achieving 87.58% for Chinese and 93.37% for English.
| 2016 | Computation and Language |
Building a robust sentiment lexicon with (almost) no resource | Creating sentiment polarity lexicons is labor intensive. Automatically
translating them from resource-rich languages requires in-domain machine
translation systems, which rely on large quantities of bi-texts. In this paper,
we propose to replace machine translation by transferring words from the
lexicon through word embeddings aligned across languages with a simple linear
transform. The approach leads to no degradation, compared to machine
translation, when tested on sentiment polarity classification on tweets from
four languages.
| 2016 | Computation and Language |
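A minimal sketch of the transfer idea above: fit a linear map between embedding spaces on a small seed dictionary, then project each lexicon word and label its nearest target-language neighbour. The least-squares fit and all variable names are assumptions; the paper's exact alignment procedure may differ.

```python
import numpy as np

def transfer_lexicon(src_emb, tgt_emb, seed_idx, lexicon):
    """src_emb/tgt_emb: (vocab, dim) matrices; seed_idx: list of
    (src_row, tgt_row) seed translation pairs; lexicon: {src_row: polarity}."""
    X = src_emb[[i for i, _ in seed_idx]]
    Y = tgt_emb[[j for _, j in seed_idx]]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # linear map: X @ W ≈ Y
    tgt_norm = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    out = {}
    for src_row, polarity in lexicon.items():
        v = src_emb[src_row] @ W
        sims = tgt_norm @ (v / (np.linalg.norm(v) + 1e-9))
        out[int(np.argmax(sims))] = polarity    # nearest target word inherits label
    return out
```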
Neural Networks for Joint Sentence Classification in Medical Paper
Abstracts | Existing models based on artificial neural networks (ANNs) for sentence
classification often do not incorporate the context in which sentences appear,
and classify sentences individually. However, traditional sentence
classification approaches have been shown to greatly benefit from jointly
classifying subsequent sentences, such as with conditional random fields. In
this work, we present an ANN architecture that combines the effectiveness of
typical ANN models to classify sentences in isolation, with the strength of
structured prediction. Our model achieves state-of-the-art results on two
different datasets for sequential sentence classification in medical abstracts.
| 2016 | Computation and Language |
A Simple Approach to Multilingual Polarity Classification in Twitter | Recently, sentiment analysis has received a lot of attention due to the
interest in mining opinions of social media users. Sentiment analysis consists
in determining the polarity of a given text, i.e., its degree of positiveness
or negativeness. Traditionally, sentiment analysis algorithms have been
tailored to a specific language given the complexity of handling lexical
variations and errors introduced by the people generating content. In
this contribution, our aim is to provide a simple-to-implement and easy-to-use
multilingual framework that can serve as a baseline for sentiment analysis
contests and as a starting point for building new sentiment analysis systems.
We compare our approach across eight different languages, three of which have
major international contests, namely, SemEval (English), TASS (Spanish), and
SENTIPOLC (Italian). In these competitions our approach reaches medium
to high positions in the rankings, whereas in the remaining languages it
outperforms the reported results.
| 2016 | Computation and Language |
Modeling Trolling in Social Media Conversations | Social media websites, electronic newspapers and Internet forums allow
visitors to leave comments for others to read and interact with. This exchange
is not free from participants with malicious intentions, who troll others by
posting messages that are intended to be provocative, offensive, or menacing.
With the goal of facilitating the computational modeling of trolling, we
propose a trolling categorization that is novel in the sense that it allows
comment-based analysis from both the trolls' and the responders' perspectives,
characterizing these two perspectives using four aspects, namely, the troll's
intention and his intention disclosure, as well as the responder's
interpretation of the troll's intention and her response strategy. Using this
categorization, we annotate and release a dataset containing excerpts of Reddit
conversations involving suspected trolls and their interactions with other
users. Finally, we identify the difficult-to-classify cases in our corpus and
suggest potential solutions for them.
| 2016 | Computation and Language |
Automatic Labelling of Topics with Neural Embeddings | Topics generated by topic models are typically represented as a list of terms.
To reduce the cognitive overhead of interpreting these topics for end-users, we
propose labelling a topic with a succinct phrase that summarises its theme or
idea. Using Wikipedia document titles as label candidates, we compute neural
embeddings for documents and words to select the most relevant labels for
topics. Compared to a state-of-the-art topic labelling system, our methodology
is simpler, more efficient, and finds better topic labels.
| 2016 | Computation and Language |
A Two-Phase Approach Towards Identifying Argument Structure in Natural
Language | We propose a new approach for extracting argument structure from natural
language texts that contain an underlying argument. Our approach comprises
two phases: Score Assignment and Structure Prediction. The Score Assignment
phase trains models to classify relations between argument units (Support,
Attack or Neutral). To that end, different training strategies have been
explored. We identify different linguistic and lexical features for training
the classifiers. Through an ablation study, we observe that our novel use of
word-embedding features is most effective for this task. The Structure
Prediction phase makes use of the scores from the Score Assignment phase to
arrive at the optimal structure. We perform experiments on three argumentation
datasets, namely, AraucariaDB, Debatepedia and Wikipedia. We also propose two
baselines and observe that the proposed approach outperforms baseline systems
for the final task of Structure Prediction.
| 2016 | Computation and Language |
Neural Networks Classifier for Data Selection in Statistical Machine
Translation | We address the data selection problem in statistical machine translation
(SMT) as a classification task. The new data selection method is based on a
neural network classifier. We describe the new method and present empirical
results showing that our data selection method provides better translation
quality compared to a state-of-the-art method (i.e., cross-entropy selection). Moreover,
the empirical results reported are coherent across different language pairs.
| 2016 | Computation and Language |
Web-based Semantic Similarity for Emotion Recognition in Web Objects | In this project we propose a new approach for emotion recognition using
web-based similarity (e.g. confidence, PMI and PMING). We aim to extract basic
emotions from short sentences with emotional content (e.g. news titles, tweets,
captions), performing a web-based quantitative evaluation of semantic proximity
between each word of the analyzed sentence and each emotion of a psychological
model (e.g. Plutchik, Ekman, Lovheim). The phases of the extraction include:
text preprocessing (tokenization, stop words, filtering), search engine
automated query, HTML parsing of results (i.e. scraping), estimation of
semantic proximity, ranking of emotions according to proximity measures. The
main idea is that, since semantic similarity can be generalized under the
assumption that similar concepts co-occur in documents indexed by search
engines, emotions can be generalized in the same way, through the tags or
terms that express them in a particular language. Training results are
compared to human evaluation, and additional comparative tests are then
performed, both for the global ranking correlation (e.g. Kendall, Spearman,
Pearson) and for the evaluation of the emotion linked to each single word.
Different from sentiment analysis, our approach works at a
deeper level of abstraction, aiming at recognizing specific emotions and not
only the positive/negative sentiment, in order to predict emotions as semantic
data.
| 2016 | Computation and Language |
Neural Multi-Source Morphological Reinflection | We explore the task of multi-source morphological reinflection, which
generalizes the standard, single-source version. The input consists of (i) a
target tag and (ii) multiple pairs of source form and source tag for a lemma.
The motivation is that it is beneficial to have access to more than one source
form since different source forms can provide complementary information, e.g.,
different stems. We further present a novel extension to the encoder-decoder
recurrent neural architecture, consisting of multiple encoders, to better solve
the task. We show that our new architecture outperforms single-source
reinflection models and publish our dataset for multi-source morphological
reinflection to facilitate future research.
| 2017 | Computation and Language |
An Empirical Study of Adequate Vision Span for Attention-Based Neural
Machine Translation | Recently, the attention mechanism has played a key role in achieving high
performance in Neural Machine Translation models. However, as it computes a
score function for the encoder states in all positions at each decoding step,
the attention model greatly increases the computational complexity. In this
paper, we investigate the adequate vision span of attention models in the
context of machine translation, by proposing a novel attention framework that
is capable of reducing redundant score computation dynamically. The term
"vision span" means a window of the encoder states considered by the attention
model in one step. In our experiments, we found that the average window size of
vision span can be reduced by over 50% with modest loss in accuracy on
English-Japanese and German-English translation tasks. These results indicate
that the conventional attention mechanism performs a significant amount of
redundant computation.
| 2017 | Computation and Language |
Improving Tweet Representations using Temporal and User Context | In this work we propose a novel representation learning model which computes
semantic representations for tweets accurately. Our model systematically
exploits the chronologically adjacent tweets ('context') from users' Twitter
timelines for this task. Further, we make our model user-aware so that it can
do well in modeling the target tweet by exploiting rich knowledge about the
user, such as the way the user writes posts and the topics on which the user
writes. We empirically demonstrate that the proposed models
outperform the state-of-the-art models in predicting the user profile
attributes like spouse, education and job by 19.66%, 2.27% and 2.22%
respectively.
| 2016 | Computation and Language |
Boosting Neural Machine Translation | Training efficiency is one of the main problems for Neural Machine
Translation (NMT). Deep networks need very large amounts of data as well as many
training iterations to achieve state-of-the-art performance. This results in
very high computation costs, slowing down research and industrialisation. In
this paper, we propose to alleviate this problem with several training methods
based on data boosting and bootstrapping, with no modifications to the neural
network. Our approach imitates the learning process of humans, who typically
spend more time learning "difficult" concepts than easier ones. We experiment on an
English-French translation task showing accuracy improvements of up to 1.63
BLEU while saving 20% of training time.
| 2017 | Computation and Language |
Neural Machine Translation from Simplified Translations | Text simplification aims at reducing the lexical, grammatical and structural
complexity of a text while keeping the same meaning. In the context of machine
translation, we introduce the idea of simplified translations in order to boost
the learning ability of deep neural translation models. We conduct preliminary
experiments showing that translation complexity is actually reduced in a
translation of a source bi-text compared to the target reference of the bi-text
while using a neural machine translation (NMT) system learned on the exact same
bi-text. Based on the idea of knowledge distillation, we then train an NMT system
using the simplified bi-text, and show that it outperforms the initial system
that was built over the reference data set. Performance is further boosted when
both reference and automatic translations are used to learn the network. We
perform an elementary analysis of the translated corpus and report accuracy
results of the proposed approach on English-to-French and English-to-German
translation tasks.
| 2016 | Computation and Language |
Domain Control for Neural Machine Translation | Machine translation systems are very sensitive to the domains they were
trained on. Several domain adaptation techniques have been studied in depth. We
propose a new technique for neural machine translation (NMT) that we call
domain control which is performed at runtime using a unique neural network
covering multiple domains. The presented approach shows quality improvements
over dedicated single-domain models when translating on any of the covered
domains, and even on out-of-domain data. In addition, model parameters do not
need to be re-estimated for each domain, making the approach practical for real use cases.
Evaluation is carried out on English-to-French translation for two different
testing scenarios. We first consider the case where an end-user performs
translations on a known domain. Secondly, we consider the scenario where the
domain is not known and predicted at the sentence level before translating.
Results show consistent accuracy improvements for both conditions.
| 2017 | Computation and Language |
Domain specialization: a post-training domain adaptation for Neural
Machine Translation | Domain adaptation is a key feature in Machine Translation. It generally
encompasses terminology, domain and style adaptation, especially for human
post-editing workflows in Computer Assisted Translation (CAT). With Neural
Machine Translation (NMT), we introduce a new notion of domain adaptation that
we call "specialization", which shows promising results in both learning
speed and adaptation accuracy. In this paper, we propose to explore
this approach under several perspectives.
| 2016 | Computation and Language |
Span-Based Constituency Parsing with a Structure-Label System and
Provably Optimal Dynamic Oracles | Parsing accuracy using efficient greedy transition systems has improved
dramatically in recent years thanks to neural networks. Despite striking
results in dependency parsing, however, neural models have not surpassed
state-of-the-art approaches in constituency parsing. To remedy this, we
introduce a new shift-reduce system whose stack contains merely sentence spans,
represented by a bare minimum of LSTM features. We also design the first
provably optimal dynamic oracle for constituency parsing, which runs in
amortized O(1) time, compared to O(n^3) oracles for standard dependency
parsing. Training with this oracle, we achieve the best F1 scores on both
English and French of any parser that does not use reranking or external data.
| 2016 | Computation and Language |
Exploring Different Dimensions of Attention for Uncertainty Detection | Neural networks with attention have proven effective for many natural
language processing tasks. In this paper, we develop attention mechanisms for
uncertainty detection. In particular, we generalize commonly used attention
mechanisms by introducing external attention and sequence-preserving attention.
These novel architectures differ from standard approaches in that they use
external resources to compute attention weights and preserve sequence
information. We compare them to other configurations along different dimensions
of attention. Our novel architectures set the new state of the art on a
Wikipedia benchmark dataset and perform similarly to the state-of-the-art model
on a biomedical benchmark which uses a large set of linguistic features.
| 2017 | Computation and Language |
Unsupervised Dialogue Act Induction using Gaussian Mixtures | This paper introduces a new unsupervised approach for dialogue act induction.
Given the sequence of dialogue utterances, the task is to assign them the
labels representing their function in the dialogue.
Utterances are represented as real-valued vectors encoding their meaning. We
model the dialogue as a Hidden Markov model with emission probabilities estimated
by Gaussian mixtures. We use Gibbs sampling for posterior inference.
We present the results on the standard Switchboard-DAMSL corpus. Our
algorithm achieves promising results compared with strong supervised baselines
and outperforms other unsupervised algorithms.
| 2017 | Computation and Language |
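The generative setup above (an HMM over dialogue acts with Gaussian-mixture emissions over utterance vectors) can be approximated off the shelf. Note the paper performs inference with Gibbs sampling, whereas the hmmlearn package used below fits by EM, so this is a rough stand-in; all sizes and data are placeholders.

```python
import numpy as np
from hmmlearn.hmm import GMMHMM  # assumes the hmmlearn package is installed

# placeholder utterance embeddings: 200 utterances, 50-dim meaning vectors
utt_vecs = np.random.randn(200, 50)
lengths = [40, 60, 100]            # utterances per dialogue

hmm = GMMHMM(n_components=10, n_mix=3, covariance_type="diag", n_iter=50)
hmm.fit(utt_vecs, lengths)
acts = hmm.predict(utt_vecs, lengths)  # induced dialogue-act labels per utterance
```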
Grammar rules for the isiZulu complex verb | The isiZulu verb is known for its morphological complexity, which is a
subject of ongoing linguistics research, as well as for prospects of
computational use, such as controlled natural language interfaces, machine
translation, and spellcheckers. We seek to answer the question of what the
precise grammar rules for the isiZulu complex verb are (and, by extension,
for Bantu verb morphology). To this end, we iteratively specify the grammar
as a Context-Free Grammar and evaluate it computationally. The grammar
presented in this paper covers the subject and object concords, negation,
present tense, aspect, mood, and the causative, applicative, stative, and the
reciprocal verbal extensions, politeness, the wh-question modifiers, and aspect
doubling, ensuring their correct order as they appear in verbs. The grammar
conforms to specification.
| 2016 | Computation and Language |
Inferring the location of authors from words in their texts | For the purposes of computational dialectology or other geographically bound
text analysis tasks, texts must be annotated with their or their authors'
location. Many texts are locatable through explicit labels but most have no
explicit annotation of place. This paper describes a series of experiments to
determine how positionally annotated microblog posts can be used to learn
location-indicating words which then can be used to locate blog texts and their
authors. A Gaussian distribution is used to model the locational qualities of
words. We introduce the notion of placeness to describe how location-bound a
word is.
We find that modelling word distributions to account for several locations
and thus several Gaussian distributions per word, defining a filter which picks
out words with high placeness based on their local distributional context, and
aggregating locational information in a centroid for each text gives the most
useful results. The results are applied to data in the Swedish language.
| 2016 | Computation and Language |
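A toy version of the single-Gaussian variant described above (the abstract's preferred model uses several Gaussians per word): fit a per-word Gaussian over observed coordinates, score placeness by the inverse spread, and locate a text by the centroid of its high-placeness words. The threshold and all names are illustrative.

```python
import numpy as np

def fit_word_model(coords):
    """coords: (n_observations, 2) lat/lon array for one word."""
    mu = coords.mean(axis=0)
    spread = coords.var(axis=0).sum() + 1e-6
    return mu, 1.0 / spread          # (centroid, crude placeness score)

def locate_text(words, word_models, min_placeness=0.5):
    """Average the centroids of the text's high-placeness words."""
    centers = []
    for w in words:
        if w in word_models:
            mu, placeness = word_models[w]
            if placeness > min_placeness:
                centers.append(mu)
    return np.mean(centers, axis=0) if centers else None
```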
Stateology: State-Level Interactive Charting of Language, Feelings, and
Values | People's personality and motivations are manifest in their everyday language
usage. With the emergence of social media, ample examples of such usage are
procurable. In this paper, we aim to analyze the vocabulary used by close to
200,000 Blogger users in the U.S. with the purpose of geographically portraying
various demographic, linguistic, and psychological dimensions at the state
level. We give a description of a web-based tool for viewing maps that depict
various characteristics of the social media users as derived from this large
blog dataset of over two billion words.
| 2016 | Computation and Language |
SCDV : Sparse Composite Document Vectors using soft clustering over
distributional representations | We present a feature vector formation technique for documents - Sparse
Composite Document Vector (SCDV) - which overcomes several shortcomings of the
current distributional paragraph vector representations that are widely used
for text representation. In SCDV, word embeddings are clustered to capture
multiple semantic contexts in which words occur. They are then chained together
to form document topic-vectors that can express complex, multi-topic documents.
Through extensive experiments on multi-class and multi-label classification
tasks, we outperform the previous state-of-the-art method, NTSG (Liu et al.,
2015a). We also show that SCDV embeddings perform well on heterogeneous tasks
like Topic Coherence, context-sensitive Learning and Information Retrieval.
Moreover, we achieve significant reduction in training and prediction times
compared to other representation methods. SCDV achieves the best of both worlds -
better performance with lower time and space complexity.
| 2017 | Computation and Language |
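A condensed sketch of the SCDV pipeline described above; the cluster count, sparsity threshold, and idf-weighting details are assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def scdv(word_vecs, idf, docs, n_clusters=10, sparsity=0.04):
    """word_vecs: {word: (dim,) vector}; idf: {word: float};
    docs: list of token lists (assumed non-empty after filtering)."""
    words = list(word_vecs)
    X = np.stack([word_vecs[w] for w in words])
    probs = GaussianMixture(n_components=n_clusters).fit(X).predict_proba(X)
    # word-topic vector: idf-scaled embedding copied into each soft cluster
    wtv = {w: idf[w] * (probs[i][:, None] * X[i][None, :]).ravel()
           for i, w in enumerate(words)}
    out = []
    for doc in docs:
        v = np.mean([wtv[w] for w in doc if w in wtv], axis=0)
        v[np.abs(v) < sparsity * np.abs(v).max()] = 0.0  # threshold to sparsify
        out.append(v)
    return np.stack(out)
```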
User Bias Removal in Review Score Prediction | Review score prediction of text reviews has recently gained a lot of
attention in recommendation systems. A major problem in models for review score
prediction is the presence of noise due to user-bias in review scores. We
propose two simple statistical methods to remove such noise and improve review
score prediction. Compared to other methods that use multiple classifiers, one
for each user, our model uses a single global classifier to predict review
scores. We empirically evaluate our methods on two major categories
(Electronics, and Movies and TV) of the SNAP-published Amazon e-commerce
reviews dataset and on the Amazon Fine Food reviews dataset. We
obtain improved review score prediction for three commonly used text feature
representations.
| 2017 | Computation and Language |
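The simplest de-biasing scheme consistent with the abstract above is mean-centering per user; the sketch below assumes a pandas frame with `user` and `score` columns (column names are illustrative, and the paper's two statistical methods may differ in detail).

```python
import pandas as pd

# toy review data; in practice this would be the Amazon reviews frame
df = pd.DataFrame({
    "user": ["a", "a", "b", "b", "b"],
    "score": [5, 4, 2, 3, 1],
})
global_mean = df["score"].mean()
user_mean = df.groupby("user")["score"].transform("mean")
# shift each score by the gap between the user's mean and the global mean,
# then train a single global classifier/regressor on the de-biased scores
df["debiased_score"] = df["score"] - user_mean + global_mean
```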
Fast Domain Adaptation for Neural Machine Translation | Neural Machine Translation (NMT) is a new approach for automatic translation
of text from one human language into another. The basic concept in NMT is to
train a large Neural Network that maximizes the translation performance on a
given parallel corpus. NMT is gaining popularity in the research community
because it outperformed traditional SMT approaches in several translation tasks
at WMT and other evaluation tasks/benchmarks at least for some language pairs.
However, many of the enhancements in SMT over the years have not been
incorporated into the NMT framework. In this paper, we focus on one such
enhancement namely domain adaptation. We propose an approach for adapting a NMT
system to a new domain. The main idea behind domain adaptation is to exploit
the availability of large amounts of out-of-domain training data alongside a
small amount of in-domain training data. We report significant gains with our proposed method in both
automatic metrics and a human subjective evaluation metric on two language
pairs. With our adaptation method, we show large improvements on the new domain
while performance on the general domain degrades only slightly. In
addition, our approach is fast enough to adapt an already trained system to a
new domain within a few hours, without the need to retrain the NMT model on the
combined data which usually takes several days/weeks depending on the volume of
the data.
| 2016 | Computation and Language |
Sparse Coding of Neural Word Embeddings for Multilingual Sequence
Labeling | In this paper we propose and carefully evaluate a sequence labeling framework
which solely utilizes sparse indicator features derived from dense distributed
word representations. The proposed model obtains (near) state-of-the-art
performance for both part-of-speech tagging and named entity recognition for a
variety of languages. Our model relies only on a few thousand sparse
coding-derived features, without applying any modification of the word
representations employed for the different tasks. The proposed model has
favorable generalization properties as it retains over 89.8% of its average POS
tagging accuracy when trained at 1.2% of the total available training data,
i.e., 150 sentences per language.
| 2016 | Computation and Language |
Multi-Agent Cooperation and the Emergence of (Natural) Language | The current mainstream approach to train natural language systems is to
expose them to large amounts of text. This passive learning is problematic if
we are interested in developing interactive machines, such as conversational
agents. We propose a framework for language learning that relies on multi-agent
communication. We study this learning in the context of referential games. In
these games, a sender and a receiver see a pair of images. The sender is told
one of them is the target and is allowed to send a message from a fixed,
arbitrary vocabulary to the receiver. The receiver must rely on this message to
identify the target. Thus, the agents develop their own language interactively
out of the need to communicate. We show that two networks with simple
configurations are able to learn to coordinate in the referential game. We
further explore how to make changes to the game environment to cause the "word
meanings" induced in the game to better reflect intuitive semantic properties
of the images. In addition, we present a simple strategy for grounding the
agents' code into natural language. Both of these are necessary steps towards
developing machines that are able to communicate with humans productively.
| 2017 | Computation and Language |
Inverted Bilingual Topic Models for Lexicon Extraction from Non-parallel
Data | Topic models have been successfully applied in lexicon extraction. However,
most previous methods are limited to document-aligned data. In this paper, we
try to address two challenges of applying topic models to lexicon extraction in
non-parallel data: 1) the difficulty of modeling word relationships and 2) the
noisiness of the seed dictionary. To solve these two challenges, we propose two new bilingual topic
models to better capture the semantic information of each word while
discriminating the multiple translations in a noisy seed dictionary. We extend
the scope of topic models by inverting the roles of "word" and "document". In
addition, to solve the problem of noise in the seed dictionary, we incorporate the
probability of translation selection in our models. Moreover, we also propose
an effective measure to evaluate the similarity of words in different languages
and select the optimal translation pairs. Experimental results using real world
data demonstrate the utility and efficacy of the proposed models.
| 2017 | Computation and Language |
A Context-aware Attention Network for Interactive Question Answering | Neural network based sequence-to-sequence models in an encoder-decoder
framework have been successfully applied to solve Question Answering (QA)
problems, predicting answers from statements and questions. However, almost all
previous models have failed to consider detailed context information and
unknown states under which systems do not have enough information to answer
given questions. These scenarios with incomplete or ambiguous information are
very common in the setting of Interactive Question Answering (IQA). To address
this challenge, we develop a novel model, employing context-dependent
word-level attention for more accurate statement representations and
question-guided sentence-level attention for better context modeling. We also
generate unique IQA datasets to test our model, which will be made publicly
available. Employing these attention mechanisms, our model accurately
understands when it can output an answer or when it requires generating a
supplementary question for additional input depending on different contexts.
When available, the user's feedback is encoded and directly applied to update
sentence-level attention to infer an answer. Extensive experiments on QA and
IQA datasets quantitatively demonstrate the effectiveness of our model with
significant improvement over state-of-the-art conventional QA models.
| 2017 | Computation and Language |
Continuous multilinguality with language vectors | Most existing models for multilingual natural language processing (NLP) treat
language as a discrete category, and make predictions for either one language
or the other. In contrast, we propose using continuous vector representations
of language. We show that these can be learned efficiently with a
character-based neural language model, and used to improve inference about
language varieties not seen during training. In experiments with 1303 Bible
translations into 990 different languages, we empirically explore the capacity
of multilingual language models, and also show that the language vectors
capture genetic relationships between languages.
| 2017 | Computation and Language |
Noise Mitigation for Neural Entity Typing and Relation Extraction | In this paper, we address two different types of noise in information
extraction models: noise from distant supervision and noise from pipeline input
features. Our target tasks are entity typing and relation extraction. For the
first noise type, we introduce multi-instance multi-label learning algorithms
using neural network models, and apply them to fine-grained entity typing for
the first time. This gives our models comparable performance with the
state-of-the-art supervised approach which uses global embeddings of entities.
For the second noise type, we propose ways to improve the integration of noisy
entity type predictions into relation extraction. Our experiments show that
probabilistic predictions are more robust than discrete predictions and that
joint training of the two tasks performs best.
| 2017 | Computation and Language |
Re-evaluating Automatic Metrics for Image Captioning | The task of generating natural language descriptions from images has received
a lot of attention in recent years. Consequently, it is becoming increasingly
important to evaluate such image captioning approaches in an automatic manner.
In this paper, we provide an in-depth evaluation of the existing image
captioning metrics through a series of carefully designed experiments.
Moreover, we explore the utilization of the recently proposed Word Mover's
Distance (WMD) document metric for the purpose of image captioning. Our
findings outline the differences and/or similarities between metrics and their
relative robustness by means of extensive correlation, accuracy and distraction
based evaluations. Our results also demonstrate that WMD provides strong
advantages over other metrics.
| 2016 | Computation and Language |
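WMD scoring as explored above is available off the shelf; a sketch using gensim's KeyedVectors, where the embedding file path and the whitespace tokenization are assumptions:

```python
from gensim.models import KeyedVectors

# load pretrained word vectors (path is a placeholder)
wv = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin",
                                       binary=True)

reference = "a man rides a horse on the beach".split()
candidate = "a person riding a horse near the ocean".split()

# Word Mover's Distance between caption and reference: lower = more similar
distance = wv.wmdistance(reference, candidate)
```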
Understanding Image and Text Simultaneously: a Dual Vision-Language
Machine Comprehension Task | We introduce a new multi-modal task for computer systems, posed as a combined
vision-language comprehension challenge: identifying the most suitable text
describing a scene, given several similar options. Accomplishing the task
entails demonstrating comprehension beyond just recognizing "keywords" (or
key-phrases) and their corresponding visual concepts. Instead, it requires an
alignment between the representations of the two modalities that achieves a
visually-grounded "understanding" of various linguistic elements and their
dependencies. This new task also admits an easy-to-compute and well-studied
metric: the accuracy in detecting the true target among the decoys.
The paper makes several contributions: an effective and extensible mechanism
for generating decoys from (human-created) image captions; an instance of
applying this mechanism, yielding a large-scale machine comprehension dataset
(based on the COCO images and captions) that we make publicly available; human
evaluation results on this dataset, informing a performance upper-bound; and
several baseline and competitive learning approaches that illustrate the
utility of the proposed task and dataset in advancing both image and language
comprehension. We also show that, in a multi-task learning setting, the
performance on the proposed task is positively correlated with the end-to-end
task of image captioning.
| 2016 | Computation and Language |
"What is Relevant in a Text Document?": An Interpretable Machine
Learning Approach | Text documents can be described by a number of abstract concepts such as
semantic category, writing style, or sentiment. Machine learning (ML) models
have been trained to automatically map documents to these abstract concepts,
allowing to annotate very large text collections, more than could be processed
by a human in a lifetime. Besides predicting the text's category very
accurately, it is also highly desirable to understand how and why the
categorization process takes place. In this paper, we demonstrate that such
understanding can be achieved by tracing the classification decision back to
individual words using layer-wise relevance propagation (LRP), a recently
developed technique for explaining predictions of complex non-linear
classifiers. We train two word-based ML models, a convolutional neural network
(CNN) and a bag-of-words SVM classifier, on a topic categorization task and
adapt the LRP method to decompose the predictions of these models onto words.
Resulting scores indicate how much individual words contribute to the overall
classification decision. This enables one to distill relevant information from
text documents without an explicit semantic information extraction step. We
further use the word-wise relevance scores for generating novel vector-based
document representations which capture semantic information. Based on these
document vectors, we introduce a measure of model explanatory power and show
that, although the SVM and CNN models perform similarly in terms of
classification accuracy, the latter exhibits a higher level of explainability
which makes it more comprehensible for humans and potentially more useful for
other applications.
| 2017 | Computation and Language |
Supervised Opinion Aspect Extraction by Exploiting Past Extraction
Results | One of the key tasks of sentiment analysis of product reviews is to extract
product aspects or features that users have expressed opinions on. In this
work, we focus on using supervised sequence labeling as the base approach to
performing the task. Although several extraction methods using sequence
labeling methods such as Conditional Random Fields (CRF) and Hidden Markov
Models (HMM) have been proposed, we show that this supervised approach can be
significantly improved by exploiting the idea of concept sharing across
multiple domains. For example, "screen" is an aspect of the iPhone, but not
only the iPhone has a screen; many electronic devices have screens too. When "screen"
appears in a review of a new domain (or product), it is likely to be an aspect
too. Knowing this information enables us to do much better extraction in the
new domain. This paper proposes a novel extraction method exploiting this idea
in the context of supervised sequence labeling. Experimental results show that
it produces markedly better results than without using the past information.
| 2016 | Computation and Language |
A CRF Based POS Tagger for Code-mixed Indian Social Media Text | In this work, we describe a conditional random fields (CRF) based system for
Part-Of-Speech (POS) tagging of code-mixed Indian social media text as part of
our participation in the tool contest on POS tagging for codemixed Indian
social media text, held in conjunction with the 2016 International Conference
on Natural Language Processing, IIT(BHU), India. We participated only in the
constrained-mode contest for all three language pairs, Bengali-English,
Hindi-English and Telugu-English. Our system achieves an overall average F1
score of 79.99, which is the highest overall average F1 score among all 16
systems that participated in the constrained-mode contest.
| 2016 | Computation and Language |
Language Modeling with Gated Convolutional Networks | The predominant approach to language modeling to date is based on recurrent
neural networks. Their success on this task is often linked to their ability to
capture unbounded context. In this paper we develop a finite context approach
through stacked convolutions, which can be more efficient since they allow
parallelization over sequential tokens. We propose a novel simplified gating
mechanism that outperforms that of Oord et al. (2016) and investigate the impact of key
architectural decisions. The proposed approach achieves state-of-the-art on the
WikiText-103 benchmark, even though the benchmark features long-term dependencies, as well
as competitive results on the Google Billion Words benchmark. Our model reduces
the latency to score a sentence by an order of magnitude compared to a
recurrent baseline. To our knowledge, this is the first time a non-recurrent
approach is competitive with strong recurrent models on these large scale
language tasks.
| 2017 | Computation and Language |
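A single gated convolutional layer of the kind the abstract above builds on can be sketched in a few lines of PyTorch. Kernel size and channel counts are illustrative, and the causal left-padding stands in for the paper's exact masking; the gate itself is h = (X*W + b) ⊗ sigmoid(X*V + c).

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """One gated convolutional layer with a gated linear unit (sketch)."""
    def __init__(self, channels, kernel_size=4):
        super().__init__()
        self.pad = kernel_size - 1
        # one conv producing both the linear path and the gate path
        self.conv = nn.Conv1d(channels, 2 * channels, kernel_size)

    def forward(self, x):                        # x: (batch, channels, seq_len)
        x = nn.functional.pad(x, (self.pad, 0))  # left-pad only (causal)
        a, b = self.conv(x).chunk(2, dim=1)
        return a * torch.sigmoid(b)              # gated linear unit
```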
KS_JU@DPIL-FIRE2016:Detecting Paraphrases in Indian Languages Using
Multinomial Logistic Regression Model | In this work, we describe a system that detects paraphrases in Indian
Languages as part of our participation in the shared Task on detecting
paraphrases in Indian Languages (DPIL) organized by Forum for Information
Retrieval Evaluation (FIRE) in 2016. Our paraphrase detection method uses a
multinomial logistic regression model trained with a variety of features which
are basically lexical and semantic level similarities between two sentences in
a pair. The performance of the system has been evaluated against the test set
released for the FIRE 2016 shared task on DPIL. Our system achieves the highest
f-measure, 0.95, on task1 in Punjabi. On task1 in Hindi, our system achieves
an f-measure of 0.90. Of the 11 teams that participated in the shared task,
only four participated in all four languages (Hindi, Punjabi, Malayalam and
Tamil); the remaining 7 teams participated in just one of the four languages.
We participated in both task1 and task2 for all four Indian languages. The
overall average performance of our system across task1 and task2 over all
four languages is an F1-score of 0.81, the second highest among the four
systems that participated in all four languages.
| 2,016 | Computation and Language |
Predicting the Industry of Users on Social Media | Automatic profiling of social media users is an important task for supporting
a multitude of downstream applications. While a number of studies have used
social media content to extract and study collective social attributes, there
is a lack of substantial research that addresses the detection of a user's
industry. We frame this task as classification using both feature engineering
and ensemble learning. Our industry-detection system uses both posted content
and profile information to detect a user's industry with 64.3% accuracy,
significantly outperforming the majority baseline in a taxonomy of fourteen
industry classes. Our qualitative analysis suggests that a person's industry
not only affects the words used and their perceived meanings, but also the
number and type of emotions being expressed.
| 2,016 | Computation and Language |
Understanding Neural Networks through Representation Erasure | While neural networks have been successfully applied to many natural language
processing tasks, they come at the cost of interpretability. In this paper, we
propose a general methodology to analyze and interpret decisions from a neural
model by observing the effects on the model of erasing various parts of the
representation, such as input word-vector dimensions, intermediate hidden
units, or input words. We present several approaches to analyzing the effects
of such erasure, from computing the relative difference in evaluation metrics,
to using reinforcement learning to erase the minimum set of input words in
order to flip a neural model's decision. In a comprehensive analysis of
multiple NLP tasks, including linguistic feature classification, sentence-level
sentiment analysis, and document-level sentiment aspect prediction, we show
that the proposed methodology not only offers clear explanations about neural
model decisions, but also provides a way to conduct error analysis on neural
models.
| 2,017 | Computation and Language |
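As a sketch of the simplest erasure analysis described above: score each input word by the relative change in a model's confidence when that word is removed. The `model.score` interface here is hypothetical, standing in for any trained classifier:

```python
def erasure_importance(model, words, label):
    """Relative drop in the model's score for the correct label when each
    word is erased; `model` with a score(words, label) method is assumed."""
    base = model.score(words, label)
    return [(base - model.score(words[:i] + words[i + 1:], label)) / base
            for i in range(len(words))]
```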
Text Summarization using Deep Learning and Ridge Regression | We develop models and extract relevant features for automatic text
summarization and investigate the performance of different models on the DUC
2001 dataset. Two models were developed: a ridge regressor and a multi-layer
perceptron. We varied the hyperparameters and recorded the resulting
performance. We split the summarization task into two main steps: sentence
ranking and sentence selection. In the first step, given a document, we sort
the sentences by importance; in the second step, to obtain non-redundant
sentences, we discard sentences that have high similarity with the previously
selected ones.
| 2,017 | Computation and Language |
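A minimal sketch of the second (selection) step above, assuming sentence vectors and an importance ranking are already available; the similarity threshold and toy data are illustrative choices:

```python
import numpy as np

def select_sentences(ranked, vectors, k=3, max_sim=0.7):
    """Greedily take sentences in importance order, skipping any whose cosine
    similarity to an already selected sentence exceeds max_sim."""
    unit = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    chosen = []
    for i in ranked:
        if all(float(unit[i] @ unit[j]) < max_sim for j in chosen):
            chosen.append(i)
        if len(chosen) == k:
            break
    return chosen

vecs = np.random.randn(5, 20)                # toy sentence vectors
print(select_sentences([2, 0, 4, 1, 3], vecs))
```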
Abstractive Headline Generation for Spoken Content by Attentive
Recurrent Neural Networks with ASR Error Modeling | Headline generation for spoken content is important since spoken content is
difficult to display on a screen and browse. It is a special type of
abstractive summarization, in which the summary is generated word by word
from scratch, without reusing any part of the original content. Many deep
learning approaches for headline generation from text documents have been
proposed recently, all requiring huge quantities of training data, which are
difficult to obtain for spoken document summarization. In this paper, we
propose an ASR
error modeling approach to learn the underlying structure of ASR error patterns
and incorporate this model in an Attentive Recurrent Neural Network (ARNN)
architecture. In this way, the model for abstractive headline generation for
spoken content can be learned from abundant text data and the ASR data for some
recognizers. Experiments showed very encouraging results and verified that the
proposed ASR error model works well even when the input spoken content is
recognized by a recognizer very different from the one the model learned from.
| 2,016 | Computation and Language |
Shamela: A Large-Scale Historical Arabic Corpus | Arabic is a widely-spoken language with a rich and long history spanning more
than fourteen centuries. Yet existing Arabic corpora largely focus on the
modern period or lack sufficient diachronic information. We develop a
large-scale, historical corpus of Arabic of about 1 billion words from diverse
periods of time. We clean this corpus, process it with a morphological
analyzer, and enhance it by detecting parallel passages and automatically
dating undated texts. We demonstrate its utility with selected case studies in
which we show its application to the digital humanities.
| 2,016 | Computation and Language |
Here's My Point: Joint Pointer Architecture for Argument Mining | One of the major goals in automated argumentation mining is to uncover the
argument structure present in argumentative text. In order to determine this
structure, one must understand how different individual components of the
overall argument are linked. General consensus in this field dictates that the
argument components form a hierarchy of persuasion, which manifests itself in a
tree structure. This work provides the first neural network-based approach to
argumentation mining, focusing on the two tasks of extracting links between
argument components, and classifying types of argument components. In order to
solve this problem, we propose to use a joint model that is based on a Pointer
Network architecture. A Pointer Network is appealing for this task for the
following reasons: 1) It takes into account the sequential nature of argument
components; 2) By construction, it enforces certain properties of the tree
structure present in argument relations; 3) The hidden representations can be
applied to auxiliary tasks. In order to extend the contribution of the original
Pointer Network model, we construct a joint model that simultaneously attempts
to learn the type of argument component, as well as continuing to predict links
between argument components. The proposed joint model achieves state-of-the-art
results on two separate evaluation corpora, performing far better than a
regular Pointer Network model. Our results show that optimizing for both
tasks, and adding a fully-connected layer prior to recurrent neural network
input, is crucial for high performance.
| 2,017 | Computation and Language |
Deep Semi-Supervised Learning with Linguistically Motivated Sequence
Labeling Task Hierarchies | In this paper we present a novel Neural Network algorithm for conducting
semi-supervised learning for sequence labeling tasks arranged in a
linguistically motivated hierarchy. This relationship is exploited to
regularise the representations of supervised tasks by backpropagating the error
of the unsupervised task through the supervised tasks. We introduce a neural
network where lower layers are supervised by junior downstream tasks and the
final layer task is an auxiliary unsupervised task. The architecture shows
improvements of up to two percentage points in F1 for chunking compared to a
plausible baseline.
| 2,016 | Computation and Language |
Verifying Heaps' law using Google Books Ngram data | This article is devoted to verifying the empirical Heaps' law in
European languages using Google Books Ngram corpus data. The connection between
the word frequency distribution and the expected dependence of the number
of distinct words on text size is analysed in terms of a simple probability
model of text
generation. It is shown that the Heaps exponent varies significantly within
characteristic time intervals of 60-100 years.
| 2,013 | Computation and Language |
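Heaps' law posits V(n) = K * n**beta for the number of distinct words V as a function of text size n; a minimal sketch of estimating the exponent by a log-log least-squares fit (toy data, not the Google Books corpus):

```python
import numpy as np

def heaps_exponent(text_sizes, vocab_sizes):
    """Estimate beta and K in Heaps' law V(n) = K * n**beta by a
    least-squares fit in log-log space."""
    beta, log_k = np.polyfit(np.log(text_sizes), np.log(vocab_sizes), 1)
    return beta, np.exp(log_k)

n = np.array([1e3, 1e4, 1e5, 1e6])           # toy text sizes
v = 10 * n ** 0.5                            # toy vocabulary growth
print(heaps_exponent(n, v))                  # approximately (0.5, 10.0)
```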
Intelligent information extraction based on artificial neural network | A Question Answering System (QAS) combines information retrieval and natural
language processing (NLP) to reduce human effort. Numerous QAS that operate
over user documents exist today, but they are all limited to providing
objective answers and can process only simple questions. Complex questions
cannot be answered by existing QAS, as doing so requires interpreting current
and historical data as well as the question asked by the user. These
limitations can be overcome by using deep cases and a neural network. Hence
we propose a modified QAS in which we create a deep artificial neural network
with associative memory from text documents. The modified QAS processes the
contents of the text documents provided to it and finds answers to even
complex questions in the documents.
| 2,017 | Computation and Language |
A POS Tagger for Code Mixed Indian Social Media Text - ICON-2016 NLP
Tools Contest Entry from Surukam | Building Part-of-Speech (POS) taggers for code-mixed Indian languages is a
particularly challenging problem in computational linguistics due to a dearth
of accurately annotated training corpora. ICON, as part of its NLP tools
contest has organized this challenge as a shared task for the second
consecutive year to improve the state-of-the-art. This paper describes the POS
tagger built at Surukam to predict the coarse-grained and fine-grained POS tags
for three language pairs - Bengali-English, Telugu-English and Hindi-English,
with the text spanning three popular social media platforms - Facebook,
WhatsApp and Twitter. We employed Conditional Random Fields as the sequence
tagging algorithm and used a library called sklearn-crfsuite - a thin wrapper
around CRFsuite - for training our model. The features we used include
character n-grams, language information and patterns for emoji, numbers,
punctuation and web addresses. Our submissions in the constrained
environment, i.e., without making any use of monolingual POS taggers or the
like, obtained an overall average F1-score of 76.45%, which is comparable to
the 2015 winning score of 76.79%.
| 2,017 | Computation and Language |
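A minimal sketch of training a CRF tagger with the sklearn-crfsuite library named above; the feature set, hyperparameters and the romanized code-mixed example sentence are invented for illustration:

```python
import sklearn_crfsuite

def token_features(sent, i):
    """Per-token feature dict: character n-grams and simple shape cues,
    in the spirit of the feature set listed in the abstract."""
    w = sent[i]
    return {"lower": w.lower(), "prefix3": w[:3], "suffix3": w[-3:],
            "is_digit": w.isdigit(),
            "prev": sent[i - 1].lower() if i > 0 else "<s>"}

# Toy romanized code-mixed sentence (invented) with language-style tags.
sents = [["I", "khela", "dekhchi"]]
X = [[token_features(s, i) for i in range(len(s))] for s in sents]
y = [["EN", "BN", "BN"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=100)
crf.fit(X, y)
print(crf.predict(X))
```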
Cutting-off Redundant Repeating Generations for Neural Abstractive
Summarization | This paper tackles the reduction of redundant repeating generation that is
often observed in RNN-based encoder-decoder models. Our basic idea is to
jointly estimate the upper-bound frequency of each target vocabulary word in
the encoder and control the output words based on this estimate in the
decoder. Our method shows a significant improvement over a strong RNN-based
encoder-decoder baseline and achieves its best results on an abstractive
summarization benchmark.
| 2,017 | Computation and Language |
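One plausible reading of the frequency-control idea above, sketched as greedy decoding that masks a word once it reaches an estimated count bound; `step_logits` is a hypothetical decoder interface, not the paper's model:

```python
import numpy as np

def greedy_decode_capped(step_logits, max_counts, eos_id, max_len=50):
    """Greedy decoding that forbids any word whose emitted count has reached
    its estimated upper bound; assumes max_counts[eos_id] >= 1."""
    counts = np.zeros_like(max_counts)
    prefix = []
    for _ in range(max_len):
        logits = np.array(step_logits(prefix), dtype=float)
        logits[counts >= max_counts] = -np.inf    # mask over-used words
        w = int(np.argmax(logits))
        prefix.append(w)
        counts[w] += 1
        if w == eos_id:
            break
    return prefix
```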
Expanding Subjective Lexicons for Social Media Mining with Embedding
Subspaces | Recent approaches for sentiment lexicon induction have capitalized on
pre-trained word embeddings that capture latent semantic properties. However,
embeddings obtained by optimizing performance of a given task (e.g. predicting
contextual words) are sub-optimal for other applications. In this paper, we
address this problem by exploiting task-specific representations, induced via
embedding sub-space projection. This allows us to expand lexicons describing
multiple semantic properties. For each property, our model jointly learns
suitable representations and the concomitant predictor. Experiments conducted
over multiple subjective lexicons show that our model outperforms previous
work and other baselines, even in low-training-data regimes. Furthermore,
lexicon-based sentiment classifiers built on top of our lexicons outperform
similar resources and yield performances comparable to those of supervised
models.
| 2,017 | Computation and Language |
Social Media Argumentation Mining: The Quest for Deliberateness in
Raucousness | Argumentation mining from social media content has attracted increasing
attention. The task is both challenging and rewarding. The informal nature of
user-generated content makes the task dauntingly difficult. On the other hand,
the insights that could be gained by a large-scale analysis of social media
argumentation make it a very worthwhile task. In this position paper I discuss
the motivation for social media argumentation mining, as well as the tasks and
challenges involved.
| 2,017 | Computation and Language |
Aspect-augmented Adversarial Networks for Domain Adaptation | We introduce a neural method for transfer learning between two (source and
target) classification tasks or aspects over the same domain. Rather than
training on target labels, we use a few keywords pertaining to source and
target aspects, which indicate sentence relevance rather than document class labels.
Documents are encoded by learning to embed and softly select relevant sentences
in an aspect-dependent manner. A shared classifier is trained on the source
encoded documents and labels, and applied to target encoded documents. We
ensure transfer through aspect-adversarial training so that encoded documents
are, as sets, aspect-invariant. Experimental results demonstrate that our
approach outperforms different baselines and model variants on two datasets,
yielding an improvement of 27% on a pathology dataset and 5% on a review
dataset.
| 2,017 | Computation and Language |
Stance detection in online discussions | This paper describes our system created to detect stance in online
discussions. The goal is to identify whether the author of a comment is in
favor of or against the given target. Our approach is based on a maximum
entropy classifier, which uses surface-level, sentiment and domain-specific
features. The system was originally developed to detect stance in English
tweets. We adapted it to process Czech news commentaries.
| 2,017 | Computation and Language |
End-to-End Attention based Text-Dependent Speaker Verification | A new type of End-to-End system for text-dependent speaker verification is
presented in this paper. Previously, using the phonetically
discriminative/speaker discriminative DNNs as feature extractors for speaker
verification has shown promising results. The extracted frame-level (DNN
bottleneck, posterior or d-vector) features are equally weighted and aggregated
to compute an utterance-level speaker representation (d-vector or i-vector). In
this work we use speaker discriminative CNNs to extract the noise-robust
frame-level features. These features are smartly combined to form an
utterance-level speaker vector through an attention mechanism. The proposed
attention model takes the speaker discriminative information and the phonetic
information to learn the weights. The whole system, including the CNN and
attention model, is jointly optimized using an end-to-end criterion. The
training algorithm exactly imitates the evaluation process, directly mapping a
test utterance and a few target speaker utterances into a single verification
score. The algorithm can automatically select the most similar impostor for
each target speaker to train the network. We demonstrate the effectiveness of
the proposed end-to-end system on the Windows 10 "Hey Cortana" speaker
verification task.
| 2,017 | Computation and Language |
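A minimal numpy sketch of attention-weighted pooling of frame-level features into a single utterance-level vector, as described above; in the paper the scores come from a learned attention model, here they are random placeholders:

```python
import numpy as np

def attentive_pooling(frames, scores):
    """Collapse frame-level features (T x D) into a single utterance-level
    vector using softmax attention weights over per-frame scores (T,)."""
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ frames

frames = np.random.randn(100, 64)     # e.g., CNN frame-level features
scores = np.random.randn(100)         # stand-in for learned attention scores
print(attentive_pooling(frames, scores).shape)   # (64,)
```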
Shortcut Sequence Tagging | Deep stacked RNNs are usually hard to train. Adding shortcut connections
across different layers is a common way to ease the training of stacked
networks. However, extra shortcuts make the recurrent step more complicated. To
simplify the stacked architecture, we propose a framework called the shortcut block,
which is a marriage of the gating mechanism and shortcuts, while discarding the
self-connected part in LSTM cell. We present extensive empirical experiments
showing that this design makes training easy and improves generalization. We
propose various shortcut block topologies and compositions to explore its
effectiveness. Based on this architecture, we obtain a 6% relative
improvement over the state-of-the-art on the CCGbank supertagging dataset. We
also get comparable results on the POS tagging task.
| 2,017 | Computation and Language |
On (Commercial) Benefits of Automatic Text Summarization Systems in the
News Domain: A Case of Media Monitoring and Media Response Analysis | In this work, we present the results of a systematic study to investigate the
(commercial) benefits of automatic text summarization systems in a real world
scenario. More specifically, we define a use case in the context of media
monitoring and media response analysis and claim that even using a simple
query-based extractive approach can dramatically save the processing time of
the employees without significantly reducing the quality of their work.
| 2,017 | Computation and Language |
Fuzzy Based Implicit Sentiment Analysis on Quantitative Sentences | With the rapid growth of social media on the web, emotional polarity
computation has become a flourishing frontier in the text mining community.
However, it is challenging to understand the latest trends and summarize the
state or general opinions about products due to the big diversity and size of
social media data and this creates the need of automated and real time opinion
extraction and mining. On the other hand, the bulk of current research has been
devoted to study the subjective sentences which contain opinion keywords and
limited work has been reported for objective statements that imply sentiment.
In this paper, a fuzzy-based knowledge engineering model is developed for
sentiment classification of a special group of such sentences, namely those
describing a change or deviation from a desired range or value. Drug reviews
are a rich source of such statements. Therefore, in this research, experiments
were carried out on patients' reviews of several different cholesterol-lowering
drugs to determine their sentiment polarity. The main conclusion of this study
is that, in order to increase the accuracy of existing drug opinion mining
systems, objective sentences which imply opinion should be taken into account.
Our experimental results demonstrate that the proposed model achieves an F1
value of over 72 percent.
| 2,017 | Computation and Language |
Unsupervised neural and Bayesian models for zero-resource speech
processing | In settings where only unlabelled speech data is available, zero-resource
speech technology needs to be developed without transcriptions, pronunciation
dictionaries, or language modelling text. There are two central problems in
zero-resource speech processing: (i) finding frame-level feature
representations which make it easier to discriminate between linguistic units
(phones or words), and (ii) segmenting and clustering unlabelled speech into
meaningful units. In this thesis, we argue that a combination of top-down and
bottom-up modelling is advantageous in tackling these two problems.
To address the problem of frame-level representation learning, we present the
correspondence autoencoder (cAE), a neural network trained with weak top-down
supervision from an unsupervised term discovery system. By combining this
top-down supervision with unsupervised bottom-up initialization, the cAE yields
much more discriminative features than previous approaches. We then present our
unsupervised segmental Bayesian model that segments and clusters unlabelled
speech into hypothesized words. By imposing a consistent top-down segmentation
while also using bottom-up knowledge from detected syllable boundaries, our
system outperforms several others on multi-speaker conversational English and
Xitsonga speech data. Finally, we show that the clusters discovered by the
segmental Bayesian model can be made less speaker- and gender-specific by using
features from the cAE instead of traditional acoustic features.
In summary, the different models and systems presented in this thesis show
that both top-down and bottom-up modelling can improve representation learning,
segmentation and clustering of unlabelled speech data.
| 2,017 | Computation and Language |
Neural Probabilistic Model for Non-projective MST Parsing | In this paper, we propose a probabilistic parsing model, which defines a
proper conditional probability distribution over non-projective dependency
trees for a given sentence, using neural representations as inputs. The neural
network architecture is based on bi-directional LSTM-CNNs which benefits from
both word- and character-level representations automatically, by using
combination of bidirectional LSTM and CNN. On top of the neural network, we
introduce a probabilistic structured layer, defining a conditional log-linear
model over non-projective trees. We evaluate our model on 17 different
datasets, across 14 different languages. By exploiting Kirchhoff's Matrix-Tree
Theorem (Tutte, 1984), the partition functions and marginals can be computed
efficiently, leading to a straight-forward end-to-end model training procedure
via back-propagation. Our parser achieves state-of-the-art parsing performance
on nine datasets.
| 2,017 | Computation and Language |
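A minimal sketch of the Matrix-Tree computation mentioned above: with exponentiated edge scores, the partition function over spanning arborescences rooted at node 0 is the determinant of a Laplacian minor. This simple variant does not enforce a single root child, as some parser formulations do:

```python
import numpy as np

def log_partition(weights):
    """Log partition function over non-projective trees via the Matrix-Tree
    theorem; weights[h, m] = exp(score of edge head h -> modifier m), with
    index 0 as the artificial root."""
    A = weights.copy()
    np.fill_diagonal(A, 0.0)                       # no self-loops
    L = -A
    idx = np.arange(A.shape[0])
    L[idx, idx] = A.sum(axis=0)                    # weighted in-degrees
    sign, logdet = np.linalg.slogdet(L[1:, 1:])    # delete root row/column
    return logdet

toy = np.exp(np.random.randn(4, 4))                # root + 3 words
print(log_partition(toy))
```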
Joint Semantic Synthesis and Morphological Analysis of the Derived Word | Much like sentences are composed of words, words themselves are composed of
smaller units. For example, the English word questionably can be analyzed as
question+able+ly. However, this structural decomposition of the word does not
directly give us a semantic representation of the word's meaning. Since
morphology obeys the principle of compositionality, the semantics of the word
can be systematically derived from the meaning of its parts. In this work, we
propose a novel probabilistic model of word formation that captures both the
analysis of a word w into its constituent segments and the synthesis of the
meaning of w from the meanings of those segments. Our model jointly learns to
segment words into morphemes and compose distributional semantic vectors of
those morphemes. We experiment with the model on English CELEX data and German
DerivBase (Zeller et al., 2013) data. We show that jointly modeling semantics
increases both segmentation accuracy and morpheme F1 by between 3% and 5%.
Additionally, we investigate different models of vector composition, showing
that recurrent neural networks yield an improvement over simple additive
models. Finally, we study the degree to which the representations correspond to
a linguist's notion of morphological productivity.
| 2,018 | Computation and Language |
Textual Entailment with Structured Attentions and Composition | Deep learning techniques are increasingly popular in the textual entailment
task, overcoming the fragility of traditional discrete models with hard
alignments and logics. In particular, the recently proposed attention models
(Rockt\"aschel et al., 2015; Wang and Jiang, 2015) achieves state-of-the-art
accuracy by computing soft word alignments between the premise and hypothesis
sentences. However, there remains a major limitation: this line of work
completely ignores syntax and recursion, which is helpful in many traditional
efforts. We show that it is beneficial to extend the attention model to tree
nodes between premise and hypothesis. More importantly, this subtree-level
attention reveals information about entailment relation. We study the recursive
composition of this subtree-level entailment relation, which can be viewed as a
soft version of the Natural Logic framework (MacCartney and Manning, 2009).
Experiments show that our structured attention and entailment composition model
can correctly identify and infer entailment relations from the bottom up, and
bring significant improvements in accuracy.
| 2,017 | Computation and Language |
Crime Topic Modeling | The classification of crime into discrete categories entails a massive loss
of information. Crimes emerge out of a complex mix of behaviors and situations,
yet most of these details cannot be captured by singular crime type labels.
This information loss impacts our ability to not only understand the causes of
crime, but also how to develop optimal crime prevention strategies. We apply
machine learning methods to short narrative text descriptions accompanying
crime records with the goal of discovering ecologically more meaningful latent
crime classes. We term these latent classes "crime topics" in reference to
text-based topic modeling methods that produce them. We use topic distributions
to measure clustering among formally recognized crime types. Crime topics
replicate broad distinctions between violent and property crime, but also
reveal nuances linked to target characteristics, situational conditions and the
tools and methods of attack. Formal crime types are not discrete in topic
space. Rather, crime types are distributed across a range of crime topics.
Similarly, individual crime topics are distributed across a range of formal
crime types. Key ecological groups include identity theft, shoplifting,
burglary and theft, car crimes and vandalism, criminal threats and confidence
crimes, and violent crimes. Though not a replacement for formal legal crime
classifications, crime topics provide a unique window into the heterogeneous
causal processes underlying crime.
| 2,017 | Computation and Language |
Replication issues in syntax-based aspect extraction for opinion mining | Reproducing experiments is an important instrument to validate previous work
and build upon existing approaches. It has been tackled numerous times in
different areas of science. In this paper, we introduce an empirical
replicability study of three well-known algorithms for syntactic centric
aspect-based opinion mining. We show that reproducing results continues to be a
difficult endeavor, mainly due to the lack of details regarding preprocessing
and parameter setting, as well as due to the absence of available
implementations that clarify these details. We consider these important
threats to the validity of research in the field, specifically when compared to
other problems in NLP where public datasets and code availability are critical
validity components. We conclude by encouraging code-based research, which we
think has a key role in helping researchers to understand the meaning of the
state-of-the-art better and to generate continuous advances.
| 2,017 | Computation and Language |
Real Multi-Sense or Pseudo Multi-Sense: An Approach to Improve Word
Representation | Previous research has shown that learning multiple representations for
polysemous words can improve the performance of word embeddings on many tasks.
However, this leads to another problem. Several vectors of a word may actually
point to the same meaning, namely pseudo multi-sense. In this paper, we
introduce the concept of pseudo multi-sense, and then propose an algorithm to
detect such cases. With the consideration of the detected pseudo multi-sense
cases, we try to refine the existing word embeddings to eliminate the influence
of pseudo multi-sense. Moreover, we apply our algorithm to previously released
multi-sense word embeddings and test it on artificial word similarity tasks
and the analogy task. The experimental results show that diminishing
pseudo multi-sense improves the quality of word representations. Thus, our
method is an efficient way to reduce linguistic complexity.
| 2,017 | Computation and Language |
Enumeration of Extractive Oracle Summaries | To analyze the limitations and the future directions of the extractive
summarization paradigm, this paper proposes an Integer Linear Programming (ILP)
formulation to obtain extractive oracle summaries in terms of ROUGE-N. We also
propose an algorithm that enumerates all of the oracle summaries for a set of
reference summaries, in order to derive F-measures that evaluate how many
sentences of a system summary are also extracted in some oracle summary. Our
experimental results obtained from Document Understanding Conference (DUC)
corpora demonstrated the following: (1) room still exists to improve the
performance of extractive summarization; (2) the F-measures derived from the
enumerated oracle summaries have significantly stronger correlations with human
judgment than those derived from single oracle summaries.
| 2,017 | Computation and Language |
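A simplified sketch, using the PuLP library, of an ILP in the spirit of the oracle formulation above: select sentences to maximize the number of covered reference bigrams (a ROUGE-2-style objective that ignores clipped counts) under a length budget:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum

def oracle_summary(sent_bigrams, ref_bigrams, max_sents=3):
    """Select sentences maximizing the number of covered reference bigrams,
    subject to a budget of max_sents sentences."""
    prob = LpProblem("oracle", LpMaximize)
    x = [LpVariable(f"s{i}", cat="Binary") for i in range(len(sent_bigrams))]
    y = {b: LpVariable(f"b{k}", cat="Binary")
         for k, b in enumerate(ref_bigrams)}
    prob += lpSum(y.values())                       # objective: coverage
    for b in ref_bigrams:                           # a bigram counts only if
        prob += y[b] <= lpSum(x[i] for i, s in enumerate(sent_bigrams)
                              if b in s)            # a selected sentence has it
    prob += lpSum(x) <= max_sents                   # length budget
    prob.solve()
    return [i for i, v in enumerate(x) if v.value() == 1]

sents = [{"a b", "b c"}, {"c d"}, {"a b"}]          # toy bigram sets
print(oracle_summary(sents, ["a b", "c d"], max_sents=2))  # e.g. [0, 1]
```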
Cross-Lingual Dependency Parsing with Late Decoding for Truly
Low-Resource Languages | In cross-lingual dependency annotation projection, information is often lost
during transfer because of early decoding. We present an end-to-end graph-based
neural network dependency parser that can be trained to reproduce matrices of
edge scores, which can be directly projected across word alignments. We show
that our approach to cross-lingual dependency parsing is not only simpler, but
also achieves an absolute improvement of 2.25% averaged across 10 languages
compared to the previous state of the art.
| 2,017 | Computation and Language |
Structural Attention Neural Networks for improved sentiment analysis | We introduce a tree-structured attention neural network for sentences and
small phrases and apply it to the problem of sentiment classification. Our
model expands the current recursive models by incorporating structural
information around a node of a syntactic tree using both bottom-up and top-down
information propagation. Also, the model utilizes structural attention to
identify the most salient representations during the construction of the
syntactic tree. To our knowledge, the proposed models achieve state-of-the-art
performance on the Stanford Sentiment Treebank dataset.
| 2,017 | Computation and Language |
Neural Machine Translation on Scarce-Resource Condition: A case-study on
Persian-English | Neural Machine Translation (NMT) is a new approach for Machine Translation
(MT), and due to its success, it has attracted the attention of many
researchers in the field. In this paper, we study an NMT model on the
Persian-English language pair, to analyze the model and investigate its
appropriateness for scarce-resource scenarios, the situation that exists for
Persian-centered translation systems. We adjust the model for the Persian
language and find the best parameters and hyperparameters for two tasks:
translation and transliteration. We also apply preprocessing to the Persian
dataset, which yields an improvement of about one BLEU point. In addition, we
modify the loss function to enhance the word alignment of the model. This new
loss function yields a total improvement of 1.87 BLEU points in translation
quality.
| 2,017 | Computation and Language |
Sentence-level dialects identification in the greater China region | Identifying the different varieties of the same language is more challenging
than identifying unrelated languages. In this paper, we propose an approach
to discriminating language varieties, or dialects, of Mandarin Chinese for
Mainland China, Hong Kong, Taiwan, Macao, Malaysia and Singapore, a.k.a. the
Greater China Region (GCR). When applied to dialect identification in the
GCR, we find that the commonly used character-level or word-level unigram
features are not very effective, owing to specific problems such as the
ambiguity and context-dependent nature of words in the GCR dialects. To
overcome these challenges, we use not only general features such as
character-level n-grams, but also many new word-level features, including
PMI-based and word alignment-based features. A series of evaluation results on
both the news and open-domain dataset from Wikipedia show the effectiveness of
the proposed approach.
| 2,016 | Computation and Language |
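One plausible form of the word-level PMI features mentioned above: the PMI of a word with a language variety, PMI(w; A) = log p(w|A) / p(w), with add-one smoothing; the tokens in the usage line are toy examples:

```python
import math
from collections import Counter

def pmi_features(tokens_a, tokens_b):
    """PMI of each word w with variety A: log p(w|A) / p(w), with add-one
    smoothing over the joint vocabulary."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    na, nb = sum(ca.values()), sum(cb.values())
    vocab = set(ca) | set(cb)
    return {w: math.log(((ca[w] + 1) / (na + len(vocab))) /
                        ((ca[w] + cb[w] + 2) / (na + nb + 2 * len(vocab))))
            for w in vocab}

print(pmi_features("的 了 啦".split(), "的 了 咯".split()))
```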
Multi-level Representations for Fine-Grained Typing of Knowledge Base
Entities | Entities are essential elements of natural language. In this paper, we
present methods for learning multi-level representations of entities on three
complementary levels: character (character patterns in entity names extracted,
e.g., by neural networks), word (embeddings of words in entity names) and
entity (entity embeddings). We investigate state-of-the-art learning methods on
each level and find large differences, e.g., for deep learning models,
traditional ngram features and the subword model of fasttext (Bojanowski et
al., 2016) on the character level; for word2vec (Mikolov et al., 2013) on the
word level; and for the order-aware model wang2vec (Ling et al., 2015a) on the
entity level. We confirm experimentally that each level of representation
contributes complementary information and a joint representation of all three
levels improves the existing embedding based baseline for fine-grained entity
typing by a large margin. Additionally, we show that adding information from
entity descriptions further improves multi-level representations of entities.
| 2,017 | Computation and Language |
Neural Personalized Response Generation as Domain Adaptation | In this paper, we focus on the personalized response generation for
conversational systems. Based on the sequence to sequence learning, especially
the encoder-decoder framework, we propose a two-phase approach, namely
initialization then adaptation, to model the human responding style and then
generate personalized responses. For evaluation, we propose a novel human-aided
method to evaluate the performance of the personalized response generation
models through online real-time conversation and offline human judgement.
Moreover, the lexical divergence of the responses generated by the 5
personalized models indicates that the proposed two-phase approach achieves
good results in modeling the human responding style and generating
personalized responses for conversational systems.
| 2,019 | Computation and Language |
Task-Specific Attentive Pooling of Phrase Alignments Contributes to
Sentence Matching | This work studies comparatively two typical sentence matching tasks: textual
entailment (TE) and answer selection (AS), observing that weaker phrase
alignments are more critical in TE, while stronger phrase alignments deserve
more attention in AS. The key to reaching this observation lies in phrase
detection, phrase representation, phrase alignment, and more importantly how to
connect those aligned phrases of different matching degrees with the final
classifier. Prior work (i) has limitations in phrase generation and
representation, or (ii) conducts alignment at word and phrase levels by
handcrafted features or (iii) utilizes a single framework of alignment without
considering the characteristics of specific tasks, which limits the framework's
effectiveness across tasks. We propose an architecture based on Gated Recurrent
Unit that supports (i) representation learning of phrases of arbitrary
granularity and (ii) task-specific attentive pooling of phrase alignments
between two sentences. Experimental results on TE and AS match our observation
and show the effectiveness of our approach.
| 2,017 | Computation and Language |
Crowdsourcing Ground Truth for Medical Relation Extraction | Cognitive computing systems require human labeled data for evaluation, and
often for training. The standard practice used in gathering this data minimizes
disagreement between annotators, and we have found this results in data that
fails to account for the ambiguity inherent in language. We have proposed the
CrowdTruth method for collecting ground truth through crowdsourcing, which
reconsiders the role of people in machine learning based on the observation
that disagreement between annotators provides a useful signal for phenomena
such as ambiguity in the text. We report on using this method to build an
annotated data set for medical relation extraction for the $cause$ and $treat$
relations, and how this data performed in a supervised training experiment. We
demonstrate that by modeling ambiguity, labeled data gathered from crowd
workers can (1) reach the level of quality of domain experts for this task
while reducing the cost, and (2) provide better training data at scale than
distant supervision. We further propose and validate new weighted measures for
precision, recall, and F-measure, that account for ambiguity in both human and
machine performance on this task.
| 2,018 | Computation and Language |
Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic
Speech Recognition | Multi-task learning (MTL) involves the simultaneous training of two or more
related tasks over shared representations. In this work, we apply MTL to
audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn
a mapping between audio-visual fused features and frame labels obtained from
acoustic GMM/HMM model. This is combined with an auxiliary task which maps
visual features to frame labels obtained from a separate visual GMM/HMM model.
The MTL model is tested at various levels of babble noise and the results are
compared with a baseline hybrid DNN-HMM AV-ASR model. Our results indicate
that MTL is especially useful at higher levels of noise. Compared to the
baseline, up to 7% relative improvement in WER is reported at -3 dB SNR.
| 2,017 | Computation and Language |
Implicitly Incorporating Morphological Information into Word Embedding | In this paper, we propose three novel models to enhance word embedding by
implicitly using morphological information. Experiments on word similarity and
syntactic analogy show that the implicit models are superior to traditional
explicit ones. Our models outperform all state-of-the-art baselines and
significantly improve the performance on both tasks. Moreover, our performance
on the smallest corpus is similar to that of CBOW on a corpus five times the
size of ours. Parameter analysis indicates that the
implicit models can supplement semantic information during the word embedding
training process.
| 2,017 | Computation and Language |
A Simple and Accurate Syntax-Agnostic Neural Model for Dependency-based
Semantic Role Labeling | We introduce a simple and accurate neural model for dependency-based semantic
role labeling. Our model predicts predicate-argument dependencies relying on
states of a bidirectional LSTM encoder. The semantic role labeler achieves
competitive performance on English, even without any kind of syntactic
information and only using local inference. However, when automatically
predicted part-of-speech tags are provided as input, it substantially
outperforms all previous local models and approaches the best reported results
on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish
where our approach also achieves competitive results. Syntactic parsers are
unreliable on out-of-domain data, so standard (i.e., syntactically-informed)
SRL models are hindered when tested in this setting. Our syntax-agnostic model
appears more robust, resulting in the best reported results on standard
out-of-domain test sets.
| 2,017 | Computation and Language |
Towards End-to-End Speech Recognition with Deep Convolutional Neural
Networks | Convolutional Neural Networks (CNNs) are effective models for reducing
spectral variations and modeling spectral correlations in acoustic features for
automatic speech recognition (ASR). Hybrid speech recognition systems
incorporating CNNs with Hidden Markov Models/Gaussian Mixture Models
(HMMs/GMMs) have achieved the state-of-the-art in various benchmarks.
Meanwhile, Connectionist Temporal Classification (CTC) with Recurrent Neural
Networks (RNNs), which is proposed for labeling unsegmented sequences, makes it
feasible to train an end-to-end speech recognition system instead of hybrid
settings. However, RNNs are computationally expensive and sometimes difficult
to train. In this paper, inspired by the advantages of both CNNs and the CTC
approach, we propose an end-to-end speech framework for sequence labeling, by
combining hierarchical CNNs with CTC directly without recurrent connections. By
evaluating the approach on the TIMIT phoneme recognition task, we show that the
proposed model is not only computationally efficient, but also competitive with
the existing baseline systems. Moreover, we argue that CNNs have the capability
to model temporal correlations with appropriate context information.
| 2,017 | Computation and Language |
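A minimal PyTorch sketch of the CTC training objective used above; the shapes and sizes are illustrative, and in the paper the log-probabilities come from the stacked convolutional network rather than random tensors:

```python
import torch
import torch.nn as nn

# frames, batch, classes (class 0 reserved for the CTC blank)
T, N, C = 50, 4, 40
log_probs = torch.randn(T, N, C).log_softmax(2).requires_grad_()
targets = torch.randint(1, C, (N, 12), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 12, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()        # gradients would flow into the CNN producing log_probs
print(float(loss))
```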
Bidirectional American Sign Language to English Translation | We outline a bidirectional translation system that converts sentences from
American Sign Language (ASL) to English, and vice versa. To perform machine
translation between ASL and English, we utilize a generative approach.
Specifically, we employ an adjustment to the IBM word-alignment model 1 (IBM
WAM1), where we define language models for English and ASL, as well as a
translation model, and attempt to generate a translation that maximizes the
posterior distribution defined by these models. Then, using these models, we
are able to quantify the concepts of fluency and faithfulness of a translation
between languages.
| 2,017 | Computation and Language |
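A compact sketch of EM training for IBM Model 1, the alignment model the abstract adjusts; the gloss-style sentence pairs are invented toy data, not real ASL:

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """EM for IBM Model 1 over (source, target) sentence pairs; returns
    translation probabilities t[(f, e)] approximating p(f | e)."""
    t = defaultdict(lambda: 1.0)                 # uniform initialization
    for _ in range(iterations):
        count, total = defaultdict(float), defaultdict(float)
        for f_sent, e_sent in pairs:             # E-step: expected counts
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_sent)
                for e in e_sent:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():          # M-step: renormalize
            t[(f, e)] = c / total[e]
    return t

pairs = [("ME LIKE PIZZA".split(), "i like pizza".split()),
         ("ME READ BOOK".split(), "i read book".split())]
t = ibm_model1(pairs)
print(round(t[("ME", "i")], 2))
```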
OpenNMT: Open-Source Toolkit for Neural Machine Translation | We describe an open-source toolkit for neural machine translation (NMT). The
toolkit prioritizes efficiency, modularity, and extensibility with the goal of
supporting NMT research into model architectures, feature representations, and
source modalities, while maintaining competitive performance and reasonable
training requirements. The toolkit consists of modeling and translation
support, as well as detailed pedagogical documentation about the underlying
techniques.
| 2,017 | Computation and Language |
Towards Decoding as Continuous Optimization in Neural Machine
Translation | We propose a novel decoding approach for neural machine translation (NMT)
based on continuous optimisation. We convert decoding - basically a discrete
optimisation problem - into a continuous optimisation problem. The resulting
constrained continuous optimisation problem is then tackled using
gradient-based methods. Our powerful decoding framework enables decoding
intractable models such as the intersection of left-to-right and right-to-left
(bidirectional) as well as source-to-target and target-to-source (bilingual)
NMT models. Our empirical results show that our decoding framework is
effective, and leads to substantial improvements in translations generated from
the intersected models where the typical greedy or beam search is not feasible.
We also compare our framework against reranking, and analyse its advantages and
disadvantages.
| 2,017 | Computation and Language |
Generalisation in Named Entity Recognition: A Quantitative Analysis | Named Entity Recognition (NER) is a key NLP task, which is all the more
challenging on Web and user-generated content with their diverse and
continuously changing language. This paper aims to quantify how this diversity
impacts state-of-the-art NER methods, by measuring named entity (NE) and
context variability, feature sparsity, and their effects on precision and
recall. In particular, our findings indicate that NER approaches struggle to
generalise in diverse genres with limited training data. Unseen NEs, in
particular, play an important role; they have a higher incidence in diverse
genres such as social media than in more regular genres such as newswire.
Coupled with a higher incidence of unseen features more generally and the lack
of large training corpora, this leads to significantly lower F1 scores for
diverse genres as compared to more regular ones. We also find that leading
systems rely heavily on surface forms found in training data, having problems
generalising beyond these, and offer explanations for this observation.
| 2,017 | Computation and Language |