Titles (string) | Abstracts (string) | Years (int64) | Categories (string, 1 class) |
---|---|---|---|
Automatic Quality Assessment for Speech Translation Using Joint ASR and
MT Features | This paper addresses automatic quality assessment of spoken language
translation (SLT). This relatively new task is defined and formalized as a
sequence labeling problem where each word in the SLT hypothesis is tagged as
good or bad according to a large feature set. We propose several word
confidence estimators (WCE) based on our automatic evaluation of transcription
(ASR) quality, translation (MT) quality, or both (combined ASR+MT). This
research work is made possible by a dedicated corpus of 6.7k utterances, each
paired with a quintuplet of ASR output, verbatim transcript, text translation,
speech translation, and post-edited translation. The conclusion of our multiple
experiments using joint ASR and MT features for WCE is that MT features remain
the most influential, while ASR features can bring useful complementary
information. Our robust quality
estimators for SLT can be used for re-scoring speech translation graphs or for
providing feedback to the user in interactive speech translation or
computer-assisted speech-to-text scenarios.
| 2016 | Computation and Language |
Learning Robust Representations of Text | Deep neural networks have achieved remarkable results across many language
processing tasks; however, these methods are highly sensitive to noise and
adversarial attacks. We present a regularization-based method for limiting
network sensitivity to its inputs, inspired by ideas from computer vision, thus
learning models that are more robust. Empirical evaluation over a range of
sentiment datasets with a convolutional neural network shows that, compared to
a baseline model and the dropout method, our method achieves superior
performance over noisy inputs and out-of-domain data.
| 2016 | Computation and Language |
A framework for mining process models from email logs | Due to its wide use in personal, but most importantly, professional contexts,
email represents a valuable source of information that can be harvested for
understanding, reengineering and repurposing undocumented business processes of
companies and institutions. Towards this aim, a few researchers investigated
the problem of extracting process-oriented information from email logs in order
to take advantage of the many available process mining techniques and tools. In
this paper we go further in this direction by proposing a new method for
mining process models from email logs that leverages unsupervised machine
learning techniques with little human involvement. Moreover, our method
semi-automatically labels emails with activity names, which can be used for
activity recognition in new incoming emails. A use case demonstrates the
usefulness of the proposed solution using a modest-sized, yet real-world,
dataset containing emails that belong to two different process models.
| 2016 | Computation and Language |
Italy goes to Stanford: a collection of CoreNLP modules for Italian | In this paper we present Tint, an easy-to-use set of fast, accurate and
extendable Natural Language Processing modules for Italian. It is based on
Stanford CoreNLP and is freely available as standalone software or as a library
that can be integrated in an existing project.
| 2017 | Computation and Language |
Generating Politically-Relevant Event Data | Automatically generated political event data is an important part of the
social science data ecosystem. The approaches for generating this data, though,
have remained largely the same for two decades. During this time, the field of
computational linguistics has progressed tremendously. This paper presents an
overview of political event data, including methods and ontologies, and a set
of experiments to determine the applicability of deep neural networks to the
extraction of political events from news text.
| 2016 | Computation and Language |
Recognizing Implicit Discourse Relations via Repeated Reading: Neural
Networks with Multi-Level Attention | Recognizing implicit discourse relations is a challenging but important task
in the field of Natural Language Processing. For such a complex text processing
task, different from previous studies, we argue that it is necessary to
repeatedly read the arguments and dynamically exploit the efficient features
useful for recognizing discourse relations. To mimic the repeated reading
strategy, we propose the neural networks with multi-level attention (NNMA),
combining the attention mechanism and external memories to gradually fix the
attention on specific words that are helpful for judging the discourse relations.
Experiments on the PDTB dataset show that our proposed method achieves
state-of-the-art results. The visualization of the attention weights also
illustrates how our model examines the arguments at each level
and progressively locates the important words.
| 2016 | Computation and Language |
One Sentence One Model for Neural Machine Translation | Neural machine translation (NMT) has become the new state of the art and achieves
promising translation results using a simple encoder-decoder neural network.
This neural network is trained once on the parallel corpus and the fixed
network is used to translate all the test sentences. We argue that the general
fixed network cannot best fit the specific test sentences. In this paper, we
propose the dynamic NMT which learns a general network as usual, and then
fine-tunes the network for each test sentence. The fine-tuning is done on a
small set of the bilingual training data that is obtained through similarity
search according to the test sentence. Extensive experiments demonstrate that
this method can significantly improve the translation performance, especially
when highly similar sentences are available.
| 2016 | Computation and Language |
Weakly supervised spoken term discovery using cross-lingual side
information | Recent work on unsupervised term discovery (UTD) aims to identify and cluster
repeated word-like units from audio alone. These systems are promising for some
very low-resource languages where transcribed audio is unavailable, or where no
written form of the language exists. However, in some cases it may still be
feasible (e.g., through crowdsourcing) to obtain (possibly noisy) text
translations of the audio. If so, this information could be used as a source of
side information to improve UTD. Here, we present a simple method for rescoring
the output of a UTD system using text translations, and test it on a corpus of
Spanish audio with English translations. We show that it greatly improves the
average precision of the results over a wide range of system configurations and
data preprocessing methods.
| 2016 | Computation and Language |
Semi-supervised knowledge extraction for detection of drugs and their
effects | New Psychoactive Substances (NPS) are drugs that lie in a grey area of
legislation, since they are not internationally and officially banned, which may
make their trade not prosecutable. The phenomenon is exacerbated by the fact
that NPS can easily be sold and bought online. Here, we consider large corpora
of textual posts, published on online forums specialized on drug discussions,
plus a small set of known substances and associated effects, which we call
seeds. We propose a semi-supervised approach to knowledge extraction, applied
to the detection of drugs (comprising NPS) and effects from the corpora under
investigation. Based on the very small set of initial seeds, the work
highlights how a contrastive approach and context deduction are effective in
detecting substances and effects from the corpora. Our promising results, which
feature an F1 score close to 0.9, pave the way for shortening the detection time
of new psychoactive substances, once these are discussed and advertised on the
Internet.
| 2016 | Computation and Language |
Twitter Opinion Topic Model: Extracting Product Opinions from Tweets by
Leveraging Hashtags and Sentiment Lexicon | Aspect-based opinion mining is widely applied to review data to aggregate or
summarize opinions of a product, and the current state of the art is achieved
with Latent Dirichlet Allocation (LDA)-based models. Although social media data
like tweets are laden with opinions, their "dirty" nature (as natural language)
has discouraged researchers from applying LDA-based opinion models to product
review mining. Tweets are often informal, unstructured and lack labeled data
such as categories and ratings, posing challenges for product opinion
mining. In this paper, we propose an LDA-based opinion model named Twitter
Opinion Topic Model (TOTM) for opinion mining and sentiment analysis. TOTM
leverages hashtags, mentions, emoticons and strong sentiment words that are
present in tweets in its discovery process. It improves opinion prediction by
modeling the target-opinion interaction directly, thus discovering
target-specific opinion words, which are neglected in existing approaches. Moreover, we propose
a new formulation of incorporating sentiment prior information into a topic
model, by utilizing an existing public sentiment lexicon. This is novel in that
it learns and updates with the data. We conduct experiments on 9 million tweets
on electronic products, and demonstrate the improved performance of TOTM in
both quantitative evaluations and qualitative analysis. We show that
aspect-based opinion analysis on massive volume of tweets provides useful
opinions on products.
| 2014 | Computation and Language |
Gov2Vec: Learning Distributed Representations of Institutions and Their
Legal Text | We compare policy differences across institutions by embedding
representations of the entire legal corpus of each institution and the
vocabulary shared across all corpora into a continuous vector space. We apply
our method, Gov2Vec, to Supreme Court opinions, Presidential actions, and
official summaries of Congressional bills. The model discerns meaningful
differences between government branches. We also learn representations for more
fine-grained word sources: individual Presidents and (2-year) Congresses. The
similarities between learned representations of Congresses over time and
sitting Presidents are negatively correlated with the bill veto rate, and the
temporal ordering of Presidents and Congresses was implicitly learned from only
text. With the resulting vectors we answer questions such as: how do Obama
and the 113th House differ in addressing climate change, and how does this vary
from environmental or economic perspectives? Our work illustrates
vector-arithmetic-based investigations of complex relationships between word
sources based on their texts. We are extending this to create a more
comprehensive legal semantic map.
| 2016 | Computation and Language |
Minimally Supervised Written-to-Spoken Text Normalization | In speech applications such as text-to-speech (TTS) or automatic speech
recognition (ASR), \emph{text normalization} refers to the task of converting
from a \emph{written} representation into a representation of how the text is
to be \emph{spoken}. In all real-world speech applications, the text
normalization engine is developed---in large part---by hand. For example, a
hand-built grammar may be used to enumerate the possible ways of saying a given
token in a given language, and a statistical model used to select the most
appropriate pronunciation in context. In this study we examine the tradeoffs
associated with using more or less language-specific domain knowledge in a text
normalization engine. In the most data-rich scenario, we have access to a
carefully constructed hand-built normalization grammar that for any given token
will produce a set of all possible verbalizations for that token. We also
assume a corpus of aligned written-spoken utterances, from which we can train a
ranking model that selects the appropriate verbalization for the given context.
As a substitute for the carefully constructed grammar, we also consider a
scenario with a language-universal normalization \emph{covering grammar}, where
the developer merely needs to provide a set of lexical items particular to the
language. As a substitute for the aligned corpus, we also consider a scenario
where one only has the spoken side, and the corresponding written side is
"hallucinated" by composing the spoken side with the inverted normalization
grammar. We investigate the accuracy of a text normalization engine under each
of these scenarios. We report the results of experiments on English and
Russian.
| 2016 | Computation and Language |
Character-level and Multi-channel Convolutional Neural Networks for
Large-scale Authorship Attribution | Convolutional neural networks (CNNs) have demonstrated superior capability
for extracting information from raw signals in computer vision. Recently,
character-level and multi-channel CNNs have exhibited excellent performance for
sentence classification tasks. We apply CNNs to large-scale authorship
attribution, which aims to determine an unknown text's author among many
candidate authors, motivated by their ability to process character-level
signals and to differentiate between a large number of classes, while making
fast predictions in comparison to state-of-the-art approaches. We extensively
evaluate CNN-based approaches that leverage word and character channels and
compare them against state-of-the-art methods for a large range of author
numbers, shedding new light on traditional approaches. We show that
character-level CNNs outperform the state-of-the-art on four out of five
datasets in different domains. Additionally, we present the first application
of authorship attribution to reddit.
| 2016 | Computation and Language |
Joint CTC-Attention based End-to-End Speech Recognition using Multi-task
Learning | Recently, there has been an increasing interest in end-to-end speech
recognition that directly transcribes speech to text without any predefined
alignments. One approach is the attention-based encoder-decoder framework that
learns a mapping between variable-length input and output sequences in one step
using a purely data-driven method. The attention model has often been shown to
improve the performance over another end-to-end approach, the Connectionist
Temporal Classification (CTC), mainly because it explicitly uses the history of
the target character without any conditional independence assumptions. However,
we observed that the attention model performs poorly in
noisy conditions and is hard to train in the initial training stage with long
input sequences. This is because the attention model is too flexible to predict
proper alignments in such cases due to the lack of left-to-right constraints as
used in CTC. This paper presents a novel method for end-to-end speech
recognition to improve robustness and achieve fast convergence by using a joint
CTC-attention model within the multi-task learning framework, thereby
mitigating the alignment issue. An experiment on the WSJ and CHiME-4 tasks
demonstrates its advantages over both the CTC and attention-based
encoder-decoder baselines, showing 5.4-14.6% relative improvements in Character
Error Rate (CER).
| 2017 | Computation and Language |
Twitter-Network Topic Model: A Full Bayesian Treatment for Social
Network and Text Modeling | Twitter data is extremely noisy -- each tweet is short, unstructured and written
in informal language, which poses a challenge for current topic modeling. On the other hand,
tweets are accompanied by extra information such as authorship, hashtags and
the user-follower network. Exploiting this additional information, we propose
the Twitter-Network (TN) topic model to jointly model the text and the social
network in a full Bayesian nonparametric way. The TN topic model employs the
hierarchical Poisson-Dirichlet processes (PDP) for text modeling and a Gaussian
process random function model for social network modeling. We show that the TN
topic model significantly outperforms several existing nonparametric models due
to its flexibility. Moreover, the TN topic model enables additional informative
inference such as authors' interests, hashtag analysis, as well as leading to
further applications such as author recommendation, automatic topic labeling
and hashtag suggestion. Note that our general inference framework can readily be
applied to other topic models with embedded PDP nodes.
| 2013 | Computation and Language |
Generating Abstractive Summaries from Meeting Transcripts | Summaries of meetings are very important as they convey the essential content
of discussions in a concise form. Generally, it is time-consuming to read and
understand whole documents. Therefore, summaries play an important role, as
readers are interested in only the important content of discussions. In
this work, we address the task of meeting document summarization. Automatic
summarization systems on meeting conversations developed so far have been
primarily extractive, resulting in unacceptable summaries that are hard to
read. The extracted utterances contain disfluencies that affect the quality of
the extractive summaries. To make summaries much more readable, we propose an
approach to generating abstractive summaries by fusing important content from
several utterances. We first separate meeting transcripts into various topic
segments, and then identify the important utterances in each segment using a
supervised learning approach. The important utterances are then combined
together to generate a one-sentence summary. In the text generation step, the
dependency parses of the utterances in each segment are combined together to
create a directed graph. The most informative and well-formed sub-graph
obtained by integer linear programming (ILP) is selected to generate a
one-sentence summary for each topic segment. The ILP formulation reduces
disfluencies by leveraging grammatical relations that are more prominent in
non-conversational style of text, and therefore generates summaries that are
comparable to human-written abstractive summaries. Experimental results show
that our method can generate more informative summaries than the baselines. In
addition, readability assessments by human judges as well as log-likelihood
estimates obtained from the dependency parser show that our generated summaries
are highly readable and well-formed.
| 2016 | Computation and Language |
Multi-document abstractive summarization using ILP based multi-sentence
compression | Abstractive summarization is an ideal form of summarization since it can
synthesize information from multiple documents to create concise informative
summaries. In this work, we aim at developing an abstractive summarizer. First,
our proposed approach identifies the most important document in the
multi-document set. The sentences in the most important document are aligned to
sentences in other documents to generate clusters of similar sentences. Second,
we generate K-shortest paths from the sentences in each cluster using a
word-graph structure. Finally, we select sentences from the set of shortest
paths generated from all the clusters employing a novel integer linear
programming (ILP) model with the objective of maximizing information content
and readability of the final summary. Our ILP model represents the shortest
paths as binary variables and considers the length of the path, information
score and linguistic quality score in the objective function. Experimental
results on the DUC 2004 and 2005 multi-document summarization datasets show
that our proposed approach outperforms all the baselines and state-of-the-art
extractive summarizers as measured by the ROUGE scores. Our method also
outperforms a recent abstractive summarization technique. In manual evaluation,
our approach also achieves promising results on informativeness and
readability.
| 2016 | Computation and Language |
Abstractive Meeting Summarization Using Dependency Graph Fusion | Automatic summarization techniques on meeting conversations developed so far
have been primarily extractive, resulting in poor summaries. To improve this,
we propose an approach to generate abstractive summaries by fusing important
content from several utterances. A meeting generally comprises several
discussion topic segments. For each topic segment within a meeting
conversation, we aim to generate a one sentence summary from the most important
utterances using an integer linear programming-based sentence fusion approach.
Experimental results show that our method can generate more informative
summaries than the baselines.
| 2016 | Computation and Language |
Semantic Tagging with Deep Residual Networks | We propose a novel semantic tagging task, sem-tagging, tailored for the
purpose of multilingual semantic parsing, and present the first tagger using
deep residual networks (ResNets). Our tagger uses both word and character
representations and includes a novel residual bypass architecture. We evaluate
the tagset both intrinsically on the new task of semantic tagging, as well as
on Part-of-Speech (POS) tagging. Our system, consisting of a ResNet and an
auxiliary loss function predicting our semantic tags, significantly outperforms
prior results on English Universal Dependencies POS tagging (95.71% accuracy on
UD v1.2 and 95.67% accuracy on UD v1.3).
| 2016 | Computation and Language |
Knowledge Representation via Joint Learning of Sequential Text and
Knowledge Graphs | Textual information is considered a significant supplement to knowledge
representation learning (KRL). There are two main challenges for constructing
knowledge representations from plain texts: (1) how to take full advantage of
the sequential contexts of entities in plain texts for KRL, and (2) how to dynamically
select those informative sentences of the corresponding entities for KRL. In
this paper, we propose the Sequential Text-embodied Knowledge Representation
Learning to build knowledge representations from multiple sentences. Given each
reference sentence of an entity, we first utilize a recurrent neural network with
pooling or a long short-term memory network to encode the semantic information of
the sentence with respect to the entity. Then we further design an attention
model to measure the informativeness of each sentence, and build text-based
representations of entities. We evaluate our method on two tasks, including
triple classification and link prediction. Experimental results demonstrate
that our method outperforms other baselines on both tasks, which indicates that
our method is capable of selecting informative sentences and encoding the
textual information well into knowledge representations.
| 2016 | Computation and Language |
Annotating Derivations: A New Evaluation Strategy and Dataset for
Algebra Word Problems | We propose a new evaluation for automatic solvers for algebra word problems,
which can identify mistakes that existing evaluations overlook. Our proposal is
to evaluate such solvers using derivations, which reflect how an equation
system was constructed from the word problem. To accomplish this, we develop an
algorithm for checking the equivalence between two derivations, and show how
derivation annotations can be semi-automatically added to existing datasets.
To make our experiments more comprehensive, we include the derivation
annotation for DRAW-1K, a new dataset containing 1000 general algebra word
problems. In our experiments, we found that the annotated derivations enable a
more accurate evaluation of automatic solvers than previously used metrics. We
release derivation annotations for over 2300 algebra word problems for future
evaluations.
| 2017 | Computation and Language |
Deep Multi-Task Learning with Shared Memory | Neural network based models have achieved impressive results on various
specific tasks. However, in previous works, most models are learned separately
based on single-task supervised objectives, which often suffer from
insufficient training data. In this paper, we propose two deep architectures
which can be trained jointly on multiple related tasks. More specifically, we
augment a neural model with an external memory, which is shared by several tasks.
Experiments on two groups of text classification tasks show that our proposed
architectures can improve the performance of a task with the help of other
related tasks.
| 2016 | Computation and Language |
Language as a Latent Variable: Discrete Generative Models for Sentence
Compression | In this work we explore deep generative models of text in which the latent
representation of a document is itself drawn from a discrete language model
distribution. We formulate a variational auto-encoder for inference in this
model and apply it to the task of compressing sentences. In this application
the generative model first draws a latent summary sentence from a background
language model, and then subsequently draws the observed sentence conditioned
on this latent summary. In our empirical evaluation we show that generative
formulations of both abstractive and extractive compression yield
state-of-the-art results when trained on a large amount of supervised data.
Further, we explore semi-supervised compression scenarios where we show that it
is possible to achieve performance competitive with previously proposed
supervised models while training on a fraction of the supervised data.
| 2016 | Computation and Language |
AMR-to-text generation as a Traveling Salesman Problem | The task of AMR-to-text generation is to generate grammatical text that
sustains the semantic meaning of a given AMR graph. We attack the task by
first partitioning the AMR graph into smaller fragments, and then generating
the translation for each fragment, before finally deciding the order by solving
an asymmetric generalized traveling salesman problem (AGTSP). A Maximum Entropy
classifier is trained to estimate the traveling costs, and a TSP solver is used
to find the optimized solution. The final model reports a BLEU score of 22.44
on the SemEval-2016 Task8 dataset.
| 2016 | Computation and Language |
Incorporating Relation Paths in Neural Relation Extraction | Distantly supervised relation extraction has been widely used to find novel
relational facts from plain text. To predict the relation between a pair of
target entities, existing methods solely rely on those direct sentences
containing both entities. In fact, there are also many sentences containing
only one of the target entities, which provide rich and useful information for
relation extraction. To address this issue, we build inference chains between
two target entities via intermediate entities, and propose a path-based neural
relation extraction model to encode the relational semantics from both direct
sentences and inference chains. Experimental results on real-world datasets
show that, our model can make full use of those sentences containing only one
target entity, and achieves significant and consistent improvements on relation
extraction as compared with baselines. The source code of this paper can be
obtained from https://github.com/thunlp/PathNRE.
| 2017 | Computation and Language |
Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser | We introduce two first-order graph-based dependency parsers achieving a new
state of the art. The first is a consensus parser built from an ensemble of
independently trained greedy LSTM transition-based parsers with different
random initializations. We cast this approach as minimum Bayes risk decoding
(under the Hamming cost) and argue that weaker consensus within the ensemble is
a useful signal of difficulty or ambiguity. The second parser is a
"distillation" of the ensemble into a single model. We train the distillation
parser using a structured hinge loss objective with a novel cost that
incorporates ensemble uncertainty estimates for each possible attachment,
thereby avoiding the intractable cross-entropy computations required by
applying standard distillation objectives to problems with structured outputs.
The first-order distillation parser matches or surpasses the state of the art
on English, Chinese, and German.
| 2016 | Computation and Language |
A Character-level Convolutional Neural Network for Distinguishing
Similar Languages and Dialects | Discriminating between closely-related language varieties is considered a
challenging and important task. This paper describes our submission to the DSL
2016 shared-task, which included two sub-tasks: one on discriminating similar
languages and one on identifying Arabic dialects. We developed a
character-level neural network for this task. Given a sequence of characters,
our model embeds each character in vector space, runs the sequence through
multiple convolutions with different filter widths, and pools the convolutional
representations to obtain a hidden vector representation of the text that is
used for predicting the language or dialect. We primarily focused on the Arabic
dialect identification task and obtained an F1 score of 0.4834, ranking 6th out
of 18 participants. We also analyze errors made by our system on the Arabic
data in some detail, and point to challenges such an approach is faced with.
| 2016 | Computation and Language |
An Investigation of Recurrent Neural Architectures for Drug Name
Recognition | Drug name recognition (DNR) is an essential step in the Pharmacovigilance
(PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical
texts and classify them into predefined categories. State-of-the-art DNR
approaches heavily rely on hand-crafted features and domain-specific resources
which are difficult to collect and tune. For this reason, this paper
investigates the effectiveness of contemporary recurrent neural architectures -
the Elman and Jordan networks and the bidirectional LSTM with CRF decoding - at
performing DNR straight from the text. The experimental results achieved on the
authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional
LSTM-CRF ranks closely to highly-dedicated, hand-crafted systems.
| 2016 | Computation and Language |
Existence of Hierarchies and Human's Pursuit of Top Hierarchy Lead to
Power Law | The power law is ubiquitous in natural and social phenomena, and is
considered a universal relationship between frequency and rank across
diverse social systems. However, a general model is still lacking to interpret
why these seemingly unrelated systems share great similarity. Through a
detailed analysis of natural language texts and simulation experiments based on
the proposed 'Hierarchical Selection Model', we found that the existence of
hierarchies and human's pursuit of top hierarchy lead to the power law.
Further, the power law is a statistical and emergent performance of
hierarchies, and it is the universality of hierarchies that contributes to the
ubiquity of the power law.
| 2016 | Computation and Language |
The distribution of information content in English sentences | The sentence is a basic linguistic unit; however, little is known about how
information content is distributed across different positions of a sentence.
Based on authentic language data of English, the present study calculated the
entropy and other entropy-related statistics for different sentence positions.
The statistics indicate a three-step staircase-shaped distribution pattern,
with entropy in the initial position lower than the medial positions (positions
other than the initial and final), the medial positions lower than the final
position, and the medial positions showing no significant difference among themselves. The
results suggest that: (1) the hypotheses of Constant Entropy Rate and Uniform
Information Density do not hold for the sentence-medial positions; (2) the
context of a word in a sentence should not be simply defined as all the words
preceding it in the same sentence; and (3) the contextual information content
in a sentence does not accumulate incrementally but follows a pattern of "the
whole is greater than the sum of parts".
| 2016 | Computation and Language |
Large-Scale Machine Translation between Arabic and Hebrew: Available
Corpora and Initial Results | Machine translation between Arabic and Hebrew has so far been limited by a
lack of parallel corpora, despite the political and cultural importance of this
language pair. Previous work relied on manually-crafted grammars or pivoting
via English, both of which are unsatisfactory for building a scalable and
accurate MT system. In this work, we compare standard phrase-based and neural
systems on Arabic-Hebrew translation. We experiment with tokenization by
external tools and sub-word modeling by character-level neural models, and show
that both methods lead to improved translation performance, with a small
advantage to the neural models.
| 2016 | Computation and Language |
Lattice-Based Recurrent Neural Network Encoders for Neural Machine
Translation | Neural machine translation (NMT) heavily relies on word-level modelling to
learn semantic representations of input sentences. However, for languages
without natural word delimiters (e.g., Chinese) where input sentences have to
be tokenized first, conventional NMT is confronted with two issues: 1) it is
difficult to find an optimal tokenization granularity for source sentence
modelling, and 2) errors in 1-best tokenizations may propagate to the encoder
of NMT. To handle these issues, we propose word-lattice based Recurrent Neural
Network (RNN) encoders for NMT, which generalize the standard RNN to word
lattice topology. The proposed encoders take as input a word lattice that
compactly encodes multiple tokenizations, and learn to generate new hidden
states from arbitrarily many inputs and hidden states in preceding time steps.
As such, the word-lattice based encoders not only alleviate the negative impact
of tokenization errors but also are more expressive and flexible to embed input
sentences. Experimental results on Chinese-English translation demonstrate the
superiority of the proposed encoders over the conventional encoder.
| 2016 | Computation and Language |
A Factorized Model for Transitive Verbs in Compositional Distributional
Semantics | We present a factorized compositional distributional semantics model for the
representation of transitive verb constructions. Our model first produces
(subject, verb) and (verb, object) vector representations based on the
similarity of the nouns in the construction to each of the nouns in the
vocabulary and the tendency of these nouns to take the subject and object roles
of the verb. These vectors are then combined into a final (subject,verb,object)
representation through simple vector operations. On two established tasks for
the transitive verb construction our model outperforms recent previous work.
| 2016 | Computation and Language |
Pointer Sentinel Mixture Models | Recent neural network sequence models with softmax classifiers have achieved
their best language modeling performance only with very large hidden states and
large vocabularies. Even then they struggle to predict rare or unseen words
even if the context makes the prediction unambiguous. We introduce the pointer
sentinel mixture architecture for neural sequence models which has the ability
to either reproduce a word from the recent context or produce a word from a
standard softmax classifier. Our pointer sentinel-LSTM model achieves state of
the art language modeling performance on the Penn Treebank (70.9 perplexity)
while using far fewer parameters than a standard softmax LSTM. In order to
evaluate how well language models can exploit longer contexts and deal with
more realistic vocabularies and larger corpora we also introduce the freely
available WikiText corpus.
| 2016 | Computation and Language |
Lexicon-Free Fingerspelling Recognition from Video: Data, Models, and
Signer Adaptation | We study the problem of recognizing video sequences of fingerspelled letters
in American Sign Language (ASL). Fingerspelling comprises a significant but
relatively understudied part of ASL. Recognizing fingerspelling is challenging
for a number of reasons: It involves quick, small motions that are often highly
coarticulated; it exhibits significant variation between signers; and there has
been a dearth of continuous fingerspelling data collected. In this work we
collect and annotate a new data set of continuous fingerspelling videos,
compare several types of recognizers, and explore the problem of signer
variation. Our best-performing models are segmental (semi-Markov) conditional
random fields using deep neural network-based features. In the signer-dependent
setting, our recognizers achieve up to about 92% letter accuracy. The
multi-signer setting is much more challenging, but with neural network
adaptation we achieve up to 83% letter accuracies in this setting.
| 2016 | Computation and Language |
S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet
Entity Linking | Non-linear models have recently received a lot of attention as people are starting
to discover the power of statistical and embedding features. However,
tree-based models are seldom studied in the context of structured learning
despite their recent success on various classification and ranking tasks. In
this paper, we propose S-MART, a tree-based structured learning framework based
on multiple additive regression trees. S-MART is especially suitable for
handling tasks with dense features, and can be used to learn many different
structures under various loss functions.
We apply S-MART to the task of tweet entity linking --- a core component of
tweet information extraction, which aims to identify and link name mentions to
entities in a knowledge base. A novel inference algorithm is proposed to handle
the special structure of the task. The experimental results show that S-MART
significantly outperforms state-of-the-art tweet entity linking systems.
| 2016 | Computation and Language |
Toward Socially-Infused Information Extraction: Embedding Authors,
Mentions, and Entities | Entity linking is the task of identifying mentions of entities in text, and
linking them to entries in a knowledge base. This task is especially difficult
in microblogs, as there is little additional text to provide disambiguating
context; rather, authors rely on an implicit common ground of shared knowledge
with their readers. In this paper, we attempt to capture some of this implicit
context by exploiting the social network structure in microblogs. We build on
the theory of homophily, which implies that socially linked individuals share
interests, and are therefore likely to mention the same sorts of entities. We
implement this idea by encoding authors, mentions, and entities in a continuous
vector space, which is constructed so that socially-connected authors have
similar vector representations. These vectors are incorporated into a neural
structured prediction model, which captures structural constraints that are
inherent in the entity linking task. Together, these design decisions yield F1
improvements of 1%-5% on benchmark datasets, as compared to the previous
state-of-the-art.
| 2016 | Computation and Language |
Creating Causal Embeddings for Question Answering with Minimal
Supervision | A common model for question answering (QA) is that a good answer is one that
is closely related to the question, where relatedness is often determined using
general-purpose lexical models such as word embeddings. We argue that a better
approach is to look for answers that are related to the question in a relevant
way, according to the information need of the question, which may be determined
through task-specific embeddings. With causality as a use case, we implement
this insight in three steps. First, we generate causal embeddings
cost-effectively by bootstrapping cause-effect pairs extracted from free text
using a small set of seed patterns. Second, we train dedicated embeddings over
this data, by using task-specific contexts, i.e., the context of a cause is its
effect. Finally, we extend a state-of-the-art reranking approach for QA to
incorporate these causal embeddings. We evaluate the causal embedding models
both directly with a causal implication task, and indirectly, in a downstream
causal QA task using data from Yahoo! Answers. We show that explicitly modeling
causality improves performance in both tasks. In the QA task our best model
achieves 37.3% P@1, significantly outperforming a strong baseline by 7.7%
(relative).
| 2016 | Computation and Language |
An Unsupervised Probability Model for Speech-to-Translation Alignment of
Low-Resource Languages | For many low-resource languages, spoken language resources are more likely to
be annotated with translations than with transcriptions. Translated speech data
is potentially valuable for documenting endangered languages or for training
speech translation systems. A first step towards making use of such data would
be to automatically align spoken words with their translations. We present a
model that combines Dyer et al.'s reparameterization of IBM Model 2
(fast-align) and k-means clustering using Dynamic Time Warping as a distance
metric. The two components are trained jointly using expectation-maximization.
In an extremely low-resource scenario, our model performs significantly better
than both a neural model and a strong baseline.
| 2016 | Computation and Language |
Google's Neural Machine Translation System: Bridging the Gap between
Human and Machine Translation | Neural Machine Translation (NMT) is an end-to-end learning approach for
automated translation, with the potential to overcome many of the weaknesses of
conventional phrase-based translation systems. Unfortunately, NMT systems are
known to be computationally expensive both in training and in translation
inference. Also, most NMT systems have difficulty with rare words. These issues
have hindered NMT's use in practical deployments and services, where both
accuracy and speed are essential. In this work, we present GNMT, Google's
Neural Machine Translation system, which attempts to address many of these
issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder
layers using attention and residual connections. To improve parallelism and
therefore decrease training time, our attention mechanism connects the bottom
layer of the decoder to the top layer of the encoder. To accelerate the final
translation speed, we employ low-precision arithmetic during inference
computations. To improve handling of rare words, we divide words into a limited
set of common sub-word units ("wordpieces") for both input and output. This
method provides a good balance between the flexibility of "character"-delimited
models and the efficiency of "word"-delimited models, naturally handles
translation of rare words, and ultimately improves the overall accuracy of the
system. Our beam search technique employs a length-normalization procedure and
uses a coverage penalty, which encourages generation of an output sentence that
is most likely to cover all the words in the source sentence. On the WMT'14
English-to-French and English-to-German benchmarks, GNMT achieves results
competitive with the state of the art. Using a human side-by-side evaluation on a set of
isolated simple sentences, it reduces translation errors by an average of 60%
compared to Google's phrase-based production system.
| 2016 | Computation and Language |
Online Segment to Segment Neural Transduction | We introduce an online neural sequence to sequence model that learns to
alternate between encoding and decoding segments of the input as it is read. By
independently tracking the encoding and decoding representations our algorithm
permits exact polynomial marginalization of the latent segmentation during
training, and during decoding beam search is employed to find the best
alignment path together with the predicted output sequence. Our model tackles
the bottleneck of vanilla encoder-decoders that have to read and memorize the
entire input sequence in their fixed-length hidden states before producing any
output. It is different from previous attentive models in that, instead of
treating the attention weights as output of a deterministic function, our model
assigns attention weights to a sequential latent variable which can be
marginalized out and permits online generation. Experiments on abstractive
sentence summarization and morphological inflection show significant
performance gains over the baseline encoder-decoders.
| 2016 | Computation and Language |
Learning to Translate for Multilingual Question Answering | In multilingual question answering, either the question needs to be
translated into the document language, or vice versa. In addition to direction,
there are multiple methods to perform the translation, four of which we explore
in this paper: word-based, 10-best, context-based, and grammar-based. We build
a feature for each combination of translation direction and method, and train a
model that learns optimal feature weights. On a large forum dataset consisting
of posts in English, Arabic, and Chinese, our novel learn-to-translate approach
was more effective than a strong baseline (p<0.05): translating all text into
English, then training a classifier based only on English (original or
translated) text.
| 2016 | Computation and Language |
Aligning Coordinated Text Streams through Burst Information Network
Construction and Decipherment | Aligning coordinated text streams from multiple sources and multiple
languages has opened many new research venues on cross-lingual knowledge
discovery. In this paper we aim to advance the state of the art by (1) extending
coarse-grained topic-level knowledge mining to fine-grained information units
such as entities and events, and (2) following a novel
Data-to-Network-to-Knowledge (D2N2K) paradigm to construct and utilize network
structures to capture and propagate reliable evidence. We introduce a novel
Burst Information Network (BINet) representation that can display the most
important information and illustrate the connections among bursty entities,
events and keywords in the corpus. We propose an effective approach to
construct and decipher BINets, incorporating novel criteria based on
multi-dimensional clues from pronunciation, translation, burst, neighbor and
graph topological structure. The experimental results on Chinese and English
coordinated text streams show that our approach can accurately decipher the
nodes with high confidence in the BINets and that the algorithm can be
efficiently run in parallel, which makes it possible to apply it to huge
amounts of streaming data for never-ending language and information
decipherment.
| 2016 | Computation and Language |
The Effects of Data Size and Frequency Range on Distributional Semantic
Models | This paper investigates the effects of data size and frequency range on
distributional semantic models. We compare the performance of a number of
representative models for several test settings over data of varying sizes, and
over test items of various frequency. Our results show that neural
network-based models underperform when the data is small, and that the most
reliable model over data of varying sizes and frequency ranges is the inverted
factorized model.
| 2016 | Computation and Language |
Multi-task Recurrent Model for True Multilingual Speech Recognition | Research on multilingual speech recognition remains attractive yet
challenging. Recent studies focus on learning shared structures under the
multi-task paradigm, in particular a feature sharing structure. This approach
has been found effective to improve performance on each individual language.
However, this approach is only useful when the deployed system supports just
one language. In a true multilingual scenario where multiple languages are
allowed, performance will be significantly reduced due to the competition among
languages in the decoding space. This paper presents a multi-task recurrent
model that involves a multilingual speech recognition (ASR) component and a
language recognition (LR) component, where the ASR component is informed of the
language information by the LR component, leading to language-aware
recognition. We tested the approach on an English-Chinese bilingual recognition
task. The results show that the proposed multi-task recurrent model can improve
performance of multilingual recognition systems.
| 2016 | Computation and Language |
emoji2vec: Learning Emoji Representations from their Description | Many current natural language processing applications for social media rely
on representation learning and utilize pre-trained word embeddings. There
currently exist several publicly-available, pre-trained sets of word
embeddings, but they contain few or no emoji representations even as emoji
usage in social media has increased. In this paper we release emoji2vec,
pre-trained embeddings for all Unicode emoji which are learned from their
description in the Unicode emoji standard. The resulting emoji embeddings can
be readily used in downstream social natural language processing applications
alongside word2vec. We demonstrate, for the downstream task of sentiment
analysis, that emoji embeddings learned from short descriptions outperform a
skip-gram model trained on a large collection of tweets, while avoiding the
need for contexts in which emoji need to appear frequently in order to estimate
a representation.
| 2016 | Computation and Language |
A Hackathon for Classical Tibetan | We describe the course of a hackathon dedicated to the development of
linguistic tools for Tibetan Buddhist studies. Over a period of five days, a
group of seventeen scholars, scientists, and students developed and compared
algorithms for intertextual alignment and text classification, along with some
basic language tools, including a stemmer and word segmenter.
| 2019 | Computation and Language |
Modelling Radiological Language with Bidirectional Long Short-Term
Memory Networks | Motivated by the need to automate medical information extraction from
free-text radiological reports, we present a bi-directional long short-term
memory (BiLSTM) neural network architecture for modelling radiological
language. The model has been used to address two NLP tasks: medical
named-entity recognition (NER) and negation detection. We investigate whether
learning several types of word embeddings improves BiLSTM's performance on
those tasks. Using a large dataset of chest x-ray reports, we compare the
proposed model to a baseline dictionary-based NER system and a negation
detection system that leverages the hand-crafted rules of the NegEx algorithm
and the grammatical relations obtained from the Stanford Dependency Parser.
Compared to these more traditional rule-based systems, we argue that BiLSTM
offers a strong alternative for both our tasks.
| 2016 | Computation and Language |
OC16-CE80: A Chinese-English Mixlingual Database and A Speech
Recognition Baseline | We present the OC16-CE80 Chinese-English mixlingual speech database which was
released as a main resource for training, development and test for the
Chinese-English mixlingual speech recognition (MixASR-CHEN) challenge on
O-COCOSDA 2016. This database consists of 80 hours of speech signals recorded
from more than 1,400 speakers, where the utterances are in Chinese but each
involves one or several English words. Based on the database and another two
free data resources (THCHS30 and the CMU dictionary), a speech recognition
(ASR) baseline was constructed with the deep neural network-hidden Markov model
(DNN-HMM) hybrid system. We then report the baseline results following the
MixASR-CHEN evaluation rules and demonstrate that OC16-CE80 is a reasonable
data resource for mixlingual research.
| 2016 | Computation and Language |
AP16-OL7: A Multilingual Database for Oriental Languages and A Language
Recognition Baseline | We present the AP16-OL7 database which was released as the training and test
data for the oriental language recognition (OLR) challenge on APSIPA 2016.
Based on the database, a baseline system was constructed on the basis of the
i-vector model. We report the baseline results evaluated in various metrics
defined by the AP16-OLR evaluation plan and demonstrate that AP16-OL7 is a
reasonable data resource for multilingual research.
| 2016 | Computation and Language |
WS4A: a Biomedical Question and Answering System based on public Web
Services and Ontologies | This paper describes our system, dubbed WS4A (Web Services for All), that
participated in the fourth edition of the BioASQ challenge (2016). We used WS4A
to perform the Question Answering (QA) task 4b, which consisted of the
retrieval of relevant concepts, documents, snippets, RDF triples, exact answers
and ideal answers for each given question. The novelty in our approach consists
in the maximum exploitation of existing web services in each step of WS4A, such
as the annotation of text, and the retrieval of metadata for each annotation.
The information retrieved included concept identifiers, ontologies, ancestors,
and most importantly, PubMed identifiers. The paper describes the WS4A pipeline
and also presents the precision, recall and f-measure values obtained in task
4b. Our system achieved two second places in two subtasks on one of the five
batches.
| 2016 | Computation and Language |
Topic Modeling over Short Texts by Incorporating Word Embeddings | Inferring topics from the overwhelming amount of short texts becomes a
critical but challenging task for many content analysis applications, such as content
characterization, user interest profiling, and emerging topic detection. Existing
methods such as probabilistic latent semantic analysis (PLSA) and latent
Dirichlet allocation (LDA) cannot solve this problem very well since only
very limited word co-occurrence information is available in short texts. This
paper studies how to incorporate external word correlation knowledge into
short texts to improve the coherence of topic modeling. Based on recent results
in word embeddings that learn semantic representations for words from a
large corpus, we introduce a novel method, Embedding-based Topic Model (ETM),
to learn latent topics from short texts. ETM not only solves the problem of
very limited word co-occurrence information by aggregating short texts into
long pseudo-texts, but also utilizes a Markov Random Field regularized model
that gives correlated words a better chance to be put into the same topic.
Experiments on real-world datasets validate the effectiveness of our model
compared with state-of-the-art models.
| 2016 | Computation and Language |
Deep Reinforcement Learning for Mention-Ranking Coreference Models | Coreference resolution systems are typically trained with heuristic loss
functions that require careful tuning. In this paper we instead apply
reinforcement learning to directly optimize a neural mention-ranking model for
coreference evaluation metrics. We experiment with two approaches: the
REINFORCE policy gradient algorithm and a reward-rescaled max-margin objective.
We find the latter to be more effective, resulting in significant improvements
over the current state-of-the-art on the English and Chinese portions of the
CoNLL 2012 Shared Task.
| 2016 | Computation and Language |
Optimizing Neural Network Hyperparameters with Gaussian Processes for
Dialog Act Classification | Systems based on artificial neural networks (ANNs) have achieved
state-of-the-art results in many natural language processing tasks. Although
ANNs do not require manually engineered features, ANNs have many
hyperparameters to be optimized. The choice of hyperparameters significantly
impacts models' performances. However, the ANN hyperparameters are typically
chosen by manual, grid, or random search, which either requires expert
experiences or is computationally expensive. Recent approaches based on
Bayesian optimization using Gaussian processes (GPs) are a more systematic way
to automatically pinpoint optimal or near-optimal machine learning
hyperparameters. Using a previously published ANN model yielding
state-of-the-art results for dialog act classification, we demonstrate that
optimizing hyperparameters using GP further improves the results, and reduces
the computational time by a factor of 4 compared to a random search. Therefore
it is a useful technique for tuning ANN models to yield the best performances
for natural language processing tasks.
| 2016 | Computation and Language |
Character Sequence Models for Colorful Words | We present a neural network architecture to predict a point in color space
from the sequence of characters in the color's name. Using large scale
color--name pairs obtained from an online color design forum, we evaluate our
model on a "color Turing test" and find that, given a name, the colors
predicted by our model are preferred by annotators to color names created by
humans. Our datasets and demo system are available online at colorlab.us.
| 2,016 | Computation and Language |
Effective Combination of Language and Vision Through Model Composition
and the R-CCA Method | We address the problem of integrating textual and visual information in
vector space models for word meaning representation. We first present the
Residual CCA (R-CCA) method, that complements the standard CCA method by
representing, for each modality, the difference between the original signal and
the signal projected to the shared, max correlation, space. We then show that
constructing visual and textual representations and then post-processing them
through composition of common modeling motifs such as PCA, CCA, R-CCA and
linear interpolation (a.k.a sequential modeling) yields high quality models. On
five standard semantic benchmarks our sequential models outperform recent
multimodal representation learning alternatives, including ones that rely on
joint representation learning. For two of these benchmarks our R-CCA method is
part of the best configuration our algorithm yields.
| 2,016 | Computation and Language |
Equation Parsing: Mapping Sentences to Grounded Equations | Identifying mathematical relations expressed in text is essential to
understanding a broad range of natural language text, from election reports to
financial news, to sports commentaries, to mathematical word problems. This paper
focuses on identifying and understanding mathematical relations described
within a single sentence. We introduce the problem of Equation Parsing -- given
a sentence, identify noun phrases which represent variables, and generate the
mathematical equation expressing the relation described in the sentence. We
introduce the notion of projective equation parsing and provide an efficient
algorithm to parse text to projective equations. Our system makes use of a high
precision lexicon of mathematical expressions and a pipeline of structured
predictors, and generates correct equations in $70\%$ of the cases. In $60\%$
of the time, it also identifies the correct noun phrase $\rightarrow$ variables
mapping, significantly outperforming baselines. We also release a new annotated
dataset for task evaluation.
| 2,016 | Computation and Language |
Byte-based Language Identification with Deep Convolutional Networks | We report on our system for the shared task on discriminating between similar
languages (DSL 2016). The system uses only byte representations in a deep
residual network (ResNet). The system, named ResIdent, is trained only on the
data released with the task (closed training). We obtain 84.88% accuracy on
subtask A, 68.80% accuracy on subtask B1, and 69.80% accuracy on subtask B2. A
large difference in accuracy on development data can be observed with
relatively minor changes in our network's architecture and hyperparameters. We
therefore expect fine-tuning of these parameters to yield higher accuracies.
| 2,016 | Computation and Language |
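A toy sketch of the byte-level input representation with a single residual 1D-convolution block. PyTorch is an assumption (the abstract does not name a framework), and the real ResIdent network is much deeper than this.

```python
# Toy sketch: represent text as raw bytes and classify it with a small
# 1D conv net containing one residual block; a shallow stand-in for a
# deep byte-level ResNet.
import torch
import torch.nn as nn

def to_bytes(text, max_len=64):
    b = list(text.encode("utf-8"))[:max_len]
    return torch.tensor(b + [0] * (max_len - len(b)), dtype=torch.long)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(ch, ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU()
    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class ByteClassifier(nn.Module):
    def __init__(self, n_langs, ch=64):
        super().__init__()
        self.emb = nn.Embedding(256, ch)        # one embedding per byte value
        self.block = ResBlock(ch)
        self.out = nn.Linear(ch, n_langs)
    def forward(self, byte_ids):                # (batch, max_len)
        h = self.emb(byte_ids).transpose(1, 2)  # (batch, ch, max_len)
        h = self.block(h).mean(dim=2)           # global average pooling
        return self.out(h)

model = ByteClassifier(n_langs=6)
batch = torch.stack([to_bytes("ovo je primjer"), to_bytes("este es un ejemplo")])
print(model(batch).shape)   # torch.Size([2, 6])
```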
Unsupervised Neural Hidden Markov Models | In this work, we present the first results for neuralizing an Unsupervised
Hidden Markov Model. We evaluate our approach on tag induction. Our approach
outperforms existing generative models and is competitive with the
state-of-the-art, though with a simpler model that is easily extended to include
additional context.
| 2,016 | Computation and Language |
Psychologically Motivated Text Mining | Natural language processing techniques are increasingly applied to identify
social trends and predict behavior based on large text collections. Existing
methods typically rely on surface lexical and syntactic information. Yet,
research in psychology shows that patterns of human conceptualisation, such as
metaphorical framing, are reliable predictors of human expectations and
decisions. In this paper, we present a method to learn patterns of metaphorical
framing from large text collections, using statistical techniques. We apply the
method to data in three different languages and evaluate the identified
patterns, demonstrating their psychological validity.
| 2,016 | Computation and Language |
Stance Classification in Rumours as a Sequential Task Exploiting the
Tree Structure of Social Media Conversations | Rumour stance classification, the task that determines if each tweet in a
collection discussing a rumour is supporting, denying, questioning or simply
commenting on the rumour, has been attracting substantial interest. Here we
introduce a novel approach that makes use of the sequence of transitions
observed in tree-structured conversation threads in Twitter. The conversation
threads are formed by harvesting users' replies to one another, which results
in a nested tree-like structure. Previous work addressing the stance
classification task has treated each tweet as a separate unit. Here we analyse
tweets by virtue of their position in a sequence and test two sequential
classifiers, Linear-Chain CRF and Tree CRF, each of which makes different
assumptions about the conversational structure. We experiment with eight
Twitter datasets, collected during breaking news, and show that exploiting the
sequential structure of Twitter conversations achieves significant improvements
over the non-sequential methods. Our work is the first to model Twitter
conversations as a tree structure in this manner, introducing a novel way of
tackling NLP tasks on Twitter conversations.
| 2,016 | Computation and Language |
Empirical Evaluation of RNN Architectures on Sentence Classification
Task | Recurrent Neural Networks have achieved state-of-the-art results for many
problems in NLP, and the two most popular RNN architectures are the Tail Model and
the Pooling Model. In this paper, a hybrid architecture is proposed, and we present
the first empirical study using LSTMs to compare the performance of the three RNN
structures on the sentence classification task. Experimental results show that the
Max Pooling Model or Hybrid Max Pooling Model achieves the best performance on
most datasets, while Tail Model does not outperform other models.
| 2,016 | Computation and Language |
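A minimal PyTorch sketch contrasting the two representations compared above: the Tail Model (last hidden state) versus max pooling over all LSTM outputs. Vocabulary size, dimensions, and the random input batch are illustrative placeholders.

```python
# Tail representation (last hidden state) vs. element-wise max pooling
# over all LSTM time steps; dimensions are illustrative only.
import torch
import torch.nn as nn

class SentenceClassifier(nn.Module):
    def __init__(self, vocab=1000, emb=50, hid=64, n_classes=2, mode="max_pool"):
        super().__init__()
        self.mode = mode
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.fc = nn.Linear(hid, n_classes)

    def forward(self, token_ids):                 # (batch, seq_len)
        outputs, (h_n, _) = self.lstm(self.emb(token_ids))
        if self.mode == "tail":                   # last hidden state only
            sent = h_n[-1]
        else:                                     # max over all time steps
            sent = outputs.max(dim=1).values
        return self.fc(sent)

x = torch.randint(0, 1000, (4, 12))               # 4 toy sentences
print(SentenceClassifier(mode="tail")(x).shape,
      SentenceClassifier(mode="max_pool")(x).shape)
```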
Topic Browsing for Research Papers with Hierarchical Latent Tree
Analysis | Academic researchers often need to face with a large collection of research
papers in the literature. This problem may be even worse for postgraduate
students who are new to a field and may not know where to start. To address
this problem, we have developed an online catalog of research papers where the
papers have been automatically categorized by a topic model. The catalog
contains 7719 papers from the proceedings of two artificial intelligence
conferences from 2000 to 2015. Rather than the commonly used Latent Dirichlet
Allocation, we use a recently proposed method called hierarchical latent tree
analysis for topic modeling. The resulting topic model contains a hierarchy of
topics so that users can browse the topics from the top level to the bottom
level. The topic model contains a manageable number of general topics at the
top level and allows thousands of fine-grained topics at the bottom level. It
also can detect topics that have emerged recently.
| 2,016 | Computation and Language |
Learning Sentence Representation with Guidance of Human Attention | Recently, much progress has been made in learning general-purpose sentence
representations that can be used across domains. However, most of the existing
models typically treat each word in a sentence equally. In contrast, extensive
studies have proven that humans read sentences efficiently by making a sequence
of fixations and saccades. This motivates us to improve sentence representations
by assigning different weights to the vectors of the component words, which can
be treated as an attention mechanism on single sentences. To that end, we
propose two novel attention models, in which the attention weights are derived
using significant predictors of human reading time, i.e., Surprisal, POS tags
and CCG supertags. The extensive experiments demonstrate that the proposed
methods significantly improve upon the state-of-the-art sentence representation
models.
| 2,017 | Computation and Language |
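A tiny numpy sketch of the underlying idea: weight each word vector by a reading-time predictor such as surprisal before averaging into a sentence vector. The embeddings and surprisal values below are toy placeholders, not outputs of the paper's models.

```python
# Sentence vector as a weighted average of word vectors, with weights
# derived from surprisal (a predictor of human reading time).
import numpy as np

rng = np.random.default_rng(0)
words = ["the", "lawyer", "questioned", "the", "unreliable", "witness"]
vectors = {w: rng.normal(size=8) for w in set(words)}      # toy embeddings
surprisal = np.array([0.5, 4.2, 3.1, 0.5, 6.0, 3.8])       # toy -log p(w|context)

weights = surprisal / surprisal.sum()                        # attention from surprisal
sentence_vec = sum(wt * vectors[w] for wt, w in zip(weights, words))
print(sentence_vec.round(3))
```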
Training Dependency Parsers with Partial Annotation | Recently, there has been a surge of interest in studying how to obtain partially
annotated data for model supervision. However, there is still no systematic
study on how to train statistical models with partial annotation (PA). Taking
dependency parsing as our case study, this paper describes and compares two
straightforward approaches for three mainstream dependency parsers. The first
approach is previously proposed to directly train a log-linear graph-based
parser (LLGPar) with PA based on a forest-based objective. This work proposes, for the
first time, the second approach: directly training a linear
graph-based parser (LGPar) and a linear transition-based parser (LTPar) with PA
based on the idea of constrained decoding. We conduct extensive experiments on
Penn Treebank under three different settings for simulating PA, i.e., random
dependencies, most uncertain dependencies, and dependencies with divergent
outputs from the three parsers. The results show that LLGPar is most effective
in learning from PA, while LTPar lags behind its graph-based counterparts by a large
margin. Moreover, LGPar and LTPar can achieve the best performance by using LLGPar
to complete PA into full annotation (FA).
| 2,016 | Computation and Language |
Semantic Parsing with Semi-Supervised Sequential Autoencoders | We present a novel semi-supervised approach for sequence transduction and
apply it to semantic parsing. The unsupervised component is based on a
generative model in which latent sentences generate the unpaired logical forms.
We apply this method to a number of semantic parsing tasks focusing on domains
with limited access to labelled training data and extend those datasets with
synthetically generated logical forms.
| 2,016 | Computation and Language |
Inducing Multilingual Text Analysis Tools Using Bidirectional Recurrent
Neural Networks | This work focuses on the rapid development of linguistic annotation tools for
resource-poor languages. We experiment with several cross-lingual annotation
projection methods using Recurrent Neural Networks (RNN) models. The
distinctive feature of our approach is that our multilingual word
representation requires only a parallel corpus between the source and target
language. More precisely, our method has the following characteristics: (a) it
does not use word alignment information, (b) it does not assume any knowledge
about foreign languages, which makes it applicable to a wide range of
resource-poor languages, (c) it provides truly multilingual taggers. We
investigate both uni- and bi-directional RNN models and propose a method to
include external information (for instance low level information from POS) in
the RNN to train higher level taggers (for instance, super sense taggers). We
demonstrate the validity and genericity of our model by using parallel corpora
(obtained by manual or automatic translation). Our experiments are conducted to
induce cross-lingual POS and super sense taggers.
| 2,016 | Computation and Language |
Evaluating Induced CCG Parsers on Grounded Semantic Parsing | We compare the effectiveness of four different syntactic CCG parsers for a
semantic slot-filling task to explore how much syntactic supervision is
required for downstream semantic analysis. This extrinsic, task-based
evaluation provides a unique window to explore the strengths and weaknesses of
semantics captured by unsupervised grammar induction systems. We release a new
Freebase semantic parsing dataset called SPADES (Semantic PArsing of
DEclarative Sentences) containing 93K cloze-style questions paired with
answers. We evaluate all our models on this dataset. Our code and data are
available at https://github.com/sivareddyg/graph-parser.
| 2,017 | Computation and Language |
Controlling Output Length in Neural Encoder-Decoders | Neural encoder-decoder models have shown great success in many sequence
generation tasks. However, previous work has not investigated situations in
which we would like to control the length of encoder-decoder outputs. This
capability is crucial for applications such as text summarization, in which we
have to generate concise summaries with a desired length. In this paper, we
propose methods for controlling the output sequence length for neural
encoder-decoder models: two decoding-based methods and two learning-based
methods. Results show that our learning-based methods have the capability to
control length without degrading summary quality in a summarization task.
| 2,016 | Computation and Language |
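A sketch of one decoding-based strategy consistent with the abstract: mask the end-of-sequence token until a minimum length is reached and force termination at a maximum length. The next-token distribution is random, standing in for a real encoder-decoder; the paper's learning-based methods are not shown.

```python
# Decoding-based length control: forbid EOS before min_len, stop at max_len.
import numpy as np

VOCAB, EOS = 20, 0
rng = np.random.default_rng(0)

def next_token_logits(prefix):            # stand-in for an encoder-decoder step
    return rng.normal(size=VOCAB)

def decode(min_len=8, max_len=12):
    out = []
    while True:
        logits = next_token_logits(out)
        if len(out) < min_len:
            logits[EOS] = -np.inf         # EOS not allowed yet
        if len(out) >= max_len:
            return out                    # force stop at max length
        tok = int(np.argmax(logits))
        if tok == EOS:
            return out
        out.append(tok)

print(len(decode(min_len=8, max_len=12)))  # always between 8 and 12 tokens
```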
Referential Uncertainty and Word Learning in High-dimensional,
Continuous Meaning Spaces | This paper discusses lexicon word learning in high-dimensional meaning spaces
from the viewpoint of referential uncertainty. We investigate various
state-of-the-art Machine Learning algorithms and discuss the impact of scaling,
representation and meaning space structure. We demonstrate that current Machine
Learning techniques successfully deal with high-dimensional meaning spaces. In
particular, we show that exponentially increasing dimensions linearly impact
learner performance and that referential uncertainty from word sensitivity has
no impact.
| 2,016 | Computation and Language |
Modeling Language Change in Historical Corpora: The Case of Portuguese | This paper presents a number of experiments to model changes in a historical
Portuguese corpus composed of literary texts for the purpose of temporal text
classification. Algorithms were trained to classify texts with respect to their
publication date taking into account lexical variation represented as word
n-grams, and morphosyntactic variation represented by part-of-speech (POS)
distribution. We report results of 99.8% accuracy using word unigram features
with a Support Vector Machines classifier to predict the publication date of
documents in time intervals of both one century and half a century. A feature
analysis is performed to investigate the most informative features for this
task and how they are linked to language change.
| 2,016 | Computation and Language |
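A minimal scikit-learn sketch of the described setup: word unigram features with a linear SVM predicting a publication period. The toy Portuguese-like sentences and period labels are placeholders for the historical corpus.

```python
# Word-unigram features + linear SVM for temporal text classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "vossa merce ha de saber que",        # pseudo 18th-century phrasing
    "o fidalgo partiu para a corte",
    "o comboio chegou atrasado hoje",     # pseudo 19th/20th-century phrasing
    "ela comprou o jornal na estacao",
]
periods = ["1700-1750", "1700-1750", "1850-1900", "1850-1900"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 1)), LinearSVC())
clf.fit(texts, periods)
print(clf.predict(["o fidalgo leu o jornal"]))
```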
Discriminating Similar Languages: Evaluations and Explorations | We present an analysis of the performance of machine learning classifiers on
discriminating between similar languages and language varieties. We carried out
a number of experiments using the results of the two editions of the
Discriminating between Similar Languages (DSL) shared task. We investigate the
progress made between the two tasks, estimate an upper bound on possible
performance using ensemble and oracle combination, and provide learning curves
to help us understand which languages are more challenging. A number of
difficult sentences are identified and investigated further with human
annotation.
| 2,016 | Computation and Language |
Vocabulary Selection Strategies for Neural Machine Translation | Classical translation models constrain the space of possible outputs by
selecting a subset of translation rules based on the input sentence. Recent
work on improving the efficiency of neural translation models adopted a similar
strategy by restricting the output vocabulary to a subset of likely candidates
given the source. In this paper we experiment with context and embedding-based
selection methods and extend previous work by examining speed and accuracy
trade-offs in more detail. We show that decoding time on CPUs can be reduced by
up to 90% and training time by 25% on the WMT15 English-German and WMT16
English-Romanian tasks with the same or only a negligible change in accuracy. This
brings the time to decode with a state-of-the-art neural translation system to
just over 140 msec per sentence on a single CPU core for English-German.
| 2,016 | Computation and Language |
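A numpy sketch of the core mechanism: computing the output softmax only over a per-sentence candidate vocabulary rather than the full target vocabulary. The candidate-selection step itself (frequent words, lexical translations, or embedding-based choices) is stubbed out with a fixed subset.

```python
# Restrict the output softmax of a decoder to a candidate vocabulary.
import numpy as np

FULL_VOCAB = 50000

def candidate_vocab(source_sentence):
    # Stub: in practice, union frequent target words with likely translations
    # of the source words; here we simply return a fixed subset of ids.
    return np.arange(0, 600)

def restricted_softmax(hidden, output_weights, candidates):
    logits = output_weights[candidates] @ hidden      # only |candidates| dot products
    exp = np.exp(logits - logits.max())
    return candidates, exp / exp.sum()

rng = np.random.default_rng(0)
W = rng.normal(size=(FULL_VOCAB, 128))                # output embedding matrix
h = rng.normal(size=128)                              # decoder hidden state

cands = candidate_vocab("ein kleines beispiel")
ids, probs = restricted_softmax(h, W, cands)
print(ids[np.argmax(probs)], probs.max())
```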
Sentence Segmentation in Narrative Transcripts from Neuropsychological
Tests using Recurrent Convolutional Neural Networks | Automated discourse analysis tools based on Natural Language Processing (NLP)
aiming at the diagnosis of language-impairing dementias generally extract
several textual metrics of narrative transcripts. However, the absence of
sentence boundary segmentation in the transcripts prevents the direct
application of NLP methods which rely on these marks to function properly, such
as taggers and parsers. We present the first steps taken towards automatic
neuropsychological evaluation based on narrative discourse analysis, presenting
a new automatic sentence segmentation method for impaired speech. Our model
uses recurrent convolutional neural networks with prosodic and Part-of-Speech
(PoS) features, and word embeddings. It was evaluated intrinsically on
impaired, spontaneous speech as well as normal, prepared speech, and presents
better results for healthy elderly (CTL) (F1 = 0.74) and Mild Cognitive
Impairment (MCI) patients (F1 = 0.70) than the Conditional Random Fields method
(F1 = 0.55 and 0.53, respectively) used in the same context of our study. The
results suggest that our model is robust for impaired speech and can be used in
automated discourse analysis tools to differentiate narratives produced by MCI
and CTL.
| 2,017 | Computation and Language |
Very Deep Convolutional Neural Networks for Robust Speech Recognition | This paper describes the extension and optimization of our previous work on
very deep convolutional neural networks (CNNs) for effective recognition of
noisy speech in the Aurora 4 task. The appropriate number of convolutional
layers, the sizes of the filters, pooling operations and input feature maps are
all modified: the filter and pooling sizes are reduced and dimensions of input
feature maps are extended to allow adding more convolutional layers.
Furthermore, appropriate input padding and input feature map selection
strategies are developed. In addition, an adaptation framework using joint
training of the very deep CNN with auxiliary i-vector and fMLLR features
is developed. These modifications give substantial word error rate reductions
over the standard CNN used as baseline. Finally the very deep CNN is combined
with an LSTM-RNN acoustic model and it is shown that state-level weighted log
likelihood score combination in a joint acoustic model decoding scheme is very
effective. On the Aurora 4 task, the very deep CNN achieves a WER of 8.81%,
which improves to 7.99% with auxiliary-feature joint training, and to 7.09% with LSTM-RNN
joint decoding.
| 2,016 | Computation and Language |
Syntactic Structures and Code Parameters | We assign binary and ternary error-correcting codes to the data of syntactic
structures of world languages and we study the distribution of code points in
the space of code parameters. We show that, while most codes populate the lower
region approximating a superposition of Thomae functions, there is a
substantial presence of codes above the Gilbert-Varshamov bound and even above
the asymptotic bound and the Plotkin bound. We investigate the dynamics induced
on the space of code parameters by spin glass models of language change, and
show that, in the presence of entailment relations between syntactic parameters,
the dynamics can sometimes improve the code. For large sets of languages and
syntactic data, one can gain information on the spin glass dynamics from the
induced dynamics in the space of code parameters.
| 2,016 | Computation and Language |
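A small numpy sketch of how a set of binary syntactic-parameter vectors determines a code point: code length n, rate R = log2(M)/n, and relative minimum distance d/n. The vectors below are invented toy data, not actual syntactic parameters from a database.

```python
# Binary syntactic-parameter vectors (one per language) viewed as a code:
# compute length n, rate R and relative minimum distance delta.
import numpy as np
from itertools import combinations

langs = np.array([
    [1, 0, 1, 1, 0, 1, 0, 0],
    [1, 1, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 1, 1, 0, 0, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
])

n = langs.shape[1]                                   # code length
M = len({tuple(row) for row in langs})               # number of distinct codewords
d = min(int(np.sum(a != b)) for a, b in combinations(langs, 2))

R = np.log2(M) / n                                    # transmission rate
delta = d / n                                         # relative minimum distance
print(f"n={n}, R={R:.3f}, delta={delta:.3f}")
```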
Sentiment Analysis on Bangla and Romanized Bangla Text (BRBT) using Deep
Recurrent models | Sentiment Analysis (SA) is an active research area in the digital age. With
the rapid and constant growth of online social media sites and services, and the
increasing amount of textual data such as statuses, comments, and reviews
available on them, the application of automatic SA is on the rise. However, most
research on SA in natural language processing (NLP) focuses on the
English language. Despite being the sixth most widely spoken language in the
world, Bangla still does not have a large, standard dataset. Because of
this, recent research works in Bangla have failed to produce results that can
be both comparable to work done by others and reusable as stepping stones for
future researchers to progress in this field. Therefore, we first provide a
textual dataset that includes not just Bangla but also Romanized Bangla
texts, and that is substantial, post-processed, multiply validated, and ready to
be used in SA experiments. We tested this dataset with a deep recurrent model,
specifically Long Short-Term Memory (LSTM), using two types of loss functions
- binary cross-entropy and categorical cross-entropy - and also did some
experimental pre-training by using data from one validation to pre-train the
other and vice versa. Lastly, we documented the results, along with some
analysis of them, which were promising.
| 2,016 | Computation and Language |
Learning to Translate in Real-time with Neural Machine Translation | Translating in real-time, a.k.a. simultaneous translation, outputs
translation words before the input sentence ends, which is a challenging
problem for conventional machine translation methods. We propose a neural
machine translation (NMT) framework for simultaneous translation in which an
agent learns to make decisions on when to translate from the interaction with a
pre-trained NMT environment. To trade off quality and delay, we extensively
explore various targets for delay and design a method for beam-search
applicable in the simultaneous MT setting. Experiments against state-of-the-art
baselines on two language pairs demonstrate the efficacy of the proposed
framework both quantitatively and qualitatively.
| 2,017 | Computation and Language |
Nonsymbolic Text Representation | We introduce the first generic text representation model that is completely
nonsymbolic, i.e., it does not require the availability of a segmentation or
tokenization method that attempts to identify words or other symbolic units in
text. This applies to training the parameters of the model on a training corpus
as well as to applying it when computing the representation of a new text. We
show that our model performs better than prior work on an information
extraction and a text denoising task.
| 2,017 | Computation and Language |
FPGA-Based Low-Power Speech Recognition with Recurrent Neural Networks | In this paper, a neural network based real-time speech recognition (SR)
system is developed using an FPGA for very low-power operation. The implemented
system employs two recurrent neural networks (RNNs); one is a
speech-to-character RNN for acoustic modeling (AM) and the other is for
character-level language modeling (LM). The system also employs a statistical
word-level LM to improve the recognition accuracy. The results of the AM, the
character-level LM, and the word-level LM are combined using a fairly simple
N-best search algorithm instead of the hidden Markov model (HMM) based network.
The RNNs are implemented using massively parallel processing elements (PEs) for
low latency and high throughput. The weights are quantized to 6 bits to store
all of them in the on-chip memory of an FPGA. The proposed algorithm is
implemented on a Xilinx XC7Z045, and the system can operate much faster than
real-time.
| 2,016 | Computation and Language |
An Arabic-Hebrew parallel corpus of TED talks | We describe an Arabic-Hebrew parallel corpus of TED talks built upon WIT3,
the Web inventory that repurposes the original content of the TED website in a
way which is more convenient for MT researchers. The benchmark consists of
about 2,000 talks, whose subtitles in Arabic and Hebrew have been accurately
aligned and rearranged in sentences, for a total of about 3.5M tokens per
language. Talks have been partitioned in train, development and test sets
similarly in all respects to the MT tasks of the IWSLT 2016 evaluation
campaign. In addition to describing the benchmark, we list the problems
encountered in preparing it and the novel methods designed to solve them.
Baseline MT results and some measures on sentence length are provided as an
extrinsic evaluation of the quality of the benchmark.
| 2,016 | Computation and Language |
Multimodal Semantic Simulations of Linguistically Underspecified Motion
Events | In this paper, we describe a system for generating three-dimensional visual
simulations of natural language motion expressions. We use a rich formal model
of events and their participants to generate simulations that satisfy the
minimal constraints entailed by the associated utterance, relying on semantic
knowledge of physical objects and motion events. This paper outlines technical
considerations and discusses implementing the aforementioned semantic models
into such a system.
| 2,016 | Computation and Language |
Orthographic Syllable as basic unit for SMT between Related Languages | We explore the use of the orthographic syllable, a variable-length
consonant-vowel sequence, as a basic unit of translation between related
languages which use abugida or alphabetic scripts. We show that orthographic
syllable level translation significantly outperforms models trained over other
basic units (word, morpheme and character) when training over small parallel
corpora.
| 2,016 | Computation and Language |
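A toy sketch of orthographic-syllable segmentation for a Romanized alphabetic string, using a simple consonant*-vowel+ regex. Handling of abugida scripts and edge cases in the actual work is more involved than this approximation.

```python
# Orthographic syllables for a Romanized word: each unit is a (possibly
# empty) consonant run followed by a vowel run; any trailing consonants
# are attached to the last unit.
import re

VOWELS = "aeiou"

def orthographic_syllables(word):
    units = re.findall(rf"[^{VOWELS}]*[{VOWELS}]+", word)
    tail = word[len("".join(units)):]          # leftover final consonants
    if tail:
        if units:
            units[-1] += tail
        else:
            units = [tail]
    return units

print(orthographic_syllables("bharata"))    # ['bha', 'ra', 'ta']
print(orthographic_syllables("translate"))  # ['tra', 'nsla', 'te']
```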
Distributed Representations of Lexical Sets and Prototypes in Causal
Alternation Verbs | Lexical sets contain the words filling an argument slot of a verb, and are in
part determined by selectional preferences. The purpose of this paper is to
unravel the properties of lexical sets through distributional semantics. We
investigate 1) whether lexical sets behave as prototypical categories with a
centre and a periphery; 2) whether they are polymorphic, i.e. composed of
subcategories; 3) whether the distance between lexical sets of different
arguments is explanatory of verb properties. In particular, our case study is
lexical sets of causative-inchoative verbs in Italian. Having studied several
vector models, we find that 1) based on spatial distance from the centroid,
object fillers are scattered uniformly across the category, whereas
intransitive subject fillers lie on its edge; 2) a correlation exists between
the amount of verb senses and that of clusters discovered automatically,
especially for intransitive subjects; 3) the distance between the centroids of
object and intransitive subject is correlated with other properties of verbs,
such as their cross-lingual tendency to appear in the intransitive pattern
rather than the transitive one. This paper is noncommittal with respect to the
hypothesis that this connection is underpinned by a semantic reason, namely the
spontaneity of the event denoted by the verb.
| 2,020 | Computation and Language |
Chinese Event Extraction Using Deep Neural Network with Word Embedding | A lot of prior work on event extraction has exploited a variety of features
to represent events. Such methods have several drawbacks: 1) the features are
often specific for a particular domain and do not generalize well; 2) the
features are derived from various linguistic analyses and are error-prone; and
3) some features may be expensive and require domain expertise. In this paper, we
develop a Chinese event extraction system that uses word embedding vectors to
represent language, and deep neural networks to learn the abstract feature
representation in order to greatly reduce the effort of feature engineering. In
addition, in this framework, we leverage a large amount of unlabeled data, which
can address the problem of limited labeled corpus for this task. Our
experiments show that our proposed method performs better compared to the
system using rich language features, and using unlabeled data benefits the word
embeddings. This study suggests the potential of DNN and word embedding for the
event extraction task.
| 2,016 | Computation and Language |
A Computational Approach to Automatic Prediction of Drunk Texting | Alcohol abuse may lead to unsociable behavior such as crime, drunk driving,
or privacy leaks. We introduce automatic drunk-texting prediction as the task
of identifying whether a text was written when under the influence of alcohol.
We experiment with tweets labeled using hashtags as distant supervision. Our
classifiers use a set of N-gram and stylistic features to detect drunk tweets.
Our observations present the first quantitative evidence that text contains
signals that can be exploited to detect drunk-texting.
| 2,016 | Computation and Language |
Are Word Embedding-based Features Useful for Sarcasm Detection? | This paper makes a simple increment to state-of-the-art in sarcasm detection
research. Existing approaches are unable to capture subtle forms of context
incongruity which lies at the heart of sarcasm. We explore if prior work can be
enhanced using semantic similarity/discordance between word embeddings. We
augment word embedding-based features to four feature sets reported in the
past. We also experiment with four types of word embeddings. We observe an
improvement in sarcasm detection, irrespective of the word embedding used or
the original feature set to which our features are augmented. For example, this
augmentation results in an improvement in F-score of around 4\% for three out
of these four feature sets, and a minor degradation in case of the fourth, when
Word2Vec embeddings are used. Finally, a comparison of the four embeddings
shows that Word2Vec and dependency weight-based features outperform LSA and
GloVe, in terms of their benefit to sarcasm detection.
| 2,016 | Computation and Language |
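A small numpy sketch of the kind of feature the abstract adds: the maximum and minimum pairwise cosine similarity between word embeddings in a sentence, as a proxy for semantic incongruity. The random vectors stand in for Word2Vec embeddings.

```python
# Embedding-based incongruity features: most similar and most dissimilar
# word pair in a sentence, by cosine similarity of their embeddings.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
emb = {w: rng.normal(size=16) for w in
       ["love", "being", "ignored", "great", "monday", "morning"]}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def incongruity_features(words):
    sims = [cosine(emb[a], emb[b]) for a, b in combinations(words, 2)]
    return {"max_sim": max(sims), "min_sim": min(sims)}

print(incongruity_features(["love", "being", "ignored"]))
```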
Embracing data abundance: BookTest Dataset for Reading Comprehension | There is a practically unlimited amount of natural language data available.
Still, recent work in text comprehension has focused on datasets which are
small relative to current computing possibilities. This article makes a
case for the community to move to larger data and, as a step in that direction,
proposes the BookTest, a new dataset similar to the popular Children's
Book Test (CBT), however more than 60 times larger. We show that training on
the new data improves the accuracy of our Attention-Sum Reader model on the
original CBT test data by a much larger margin than many recent attempts to
improve the model architecture. On one version of the dataset our ensemble even
exceeds the human baseline provided by Facebook. We then show in our own human
study that there is still space for further improvement.
| 2,016 | Computation and Language |
Applications of Online Deep Learning for Crisis Response Using Social
Media Information | During natural or man-made disasters, humanitarian response organizations
look for useful information to support their decision-making processes. Social
media platforms such as Twitter have been considered as a vital source of
useful information for disaster response and management. Despite advances in
natural language processing techniques, processing short and informal Twitter
messages is a challenging task. In this paper, we propose to use Deep Neural
Network (DNN) to address two types of information needs of response
organizations: 1) identifying informative tweets and 2) classifying them into
topical classes. DNNs use distributed representation of words and learn the
representation as well as higher level features automatically for the
classification task. We propose a new online algorithm based on stochastic
gradient descent to train DNNs in an online fashion during disaster situations.
We test our models using a crisis-related real-world Twitter dataset.
| 2,016 | Computation and Language |
Is Neural Machine Translation Ready for Deployment? A Case Study on 30
Translation Directions | In this paper we provide the largest published comparison of translation
quality for phrase-based SMT and neural machine translation across 30
translation directions. For ten directions we also include hierarchical
phrase-based MT. Experiments are performed for the recently published United
Nations Parallel Corpus v1.0 and its large six-way sentence-aligned subcorpus.
In the second part of the paper we investigate aspects of translation speed,
introducing AmuNMT, our efficient neural machine translation decoder. We
demonstrate that current neural machine translation could already be used for
in-production systems when comparing words-per-second ratios.
| 2,016 | Computation and Language |
ECAT: Event Capture Annotation Tool | This paper introduces the Event Capture Annotation Tool (ECAT), a
user-friendly, open-source interface tool for annotating events and their
participants in video, capable of extracting the 3D positions and orientations
of objects in video captured by Microsoft's Kinect(R) hardware. The modeling
language VoxML (Pustejovsky and Krishnaswamy, 2016) underlies ECAT's object,
program, and attribute representations, although ECAT uses its own spec for
explicit labeling of motion instances. The demonstration will show the tool's
workflow and the options available for capturing event-participant relations
and browsing visual data. Mapping ECAT's output to VoxML will also be
addressed.
| 2,016 | Computation and Language |
Word2Vec vs DBnary: Augmenting METEOR using Vector Representations or
Lexical Resources? | This paper presents an approach combining lexico-semantic resources and
distributed representations of words applied to the evaluation in machine
translation (MT). This study is made through the enrichment of a well-known MT
evaluation metric: METEOR. This metric enables an approximate match (synonymy
or morphological similarity) between an automatic and a reference translation.
Our experiments are made in the framework of the Metrics task of WMT 2014. We
show that distributed representations are a good alternative to lexico-semantic
resources for MT evaluation and they can even bring interesting additional
information. The augmented versions of METEOR, using vector representations,
are made available on our Github page.
| 2,016 | Computation and Language |
Monaural Multi-Talker Speech Recognition using Factorial Speech
Processing Models | A Pascal challenge entitled monaural multi-talker speech recognition was
developed, targeting the problem of robust automatic speech recognition against
speech-like noises, which significantly degrade the performance of automatic
speech recognition systems. In this challenge, two competing speakers say a
simple command simultaneously and the objective is to recognize the speech of the
target speaker. Surprisingly, during the challenge, a team from IBM Research
achieved performance better than human listeners on this task. The
proposed method of the IBM team consists of an intermediate speech separation step
followed by single-talker speech recognition. This paper reconsiders the task of
this challenge based on gain-adapted factorial speech processing models. It
develops a joint-token passing algorithm for direct utterance decoding of both
target and masker speakers simultaneously. Compared to the challenge
winner, it retains maximum uncertainty during decoding, which is not possible in
the earlier two-phase method. It provides a detailed derivation of inference on
these models based on general inference procedures for probabilistic graphical
models. As another improvement, it uses deep neural networks for joint speaker
identification and gain estimation, which makes these two steps simpler than
before while producing competitive results. The proposed method of
this work outperforms past super-human results and even the results
achieved recently by Microsoft Research using deep neural networks. It
achieved 5.5% absolute task performance improvement compared to the first
super-human system and 2.7% absolute task performance improvement compared to
its recent competitor.
| 2,016 | Computation and Language |
A tentative model for dimensionless phoneme distance from binary
distinctive features | This work proposes a tentative model for the calculation of dimensionless
distances between phonemes; sounds are described with binary distinctive
features and distances show linear consistency in terms of such features. The
model can be used as a scoring function for local and global pairwise alignment
of phoneme sequences, and the distances can be used as prior probabilities for
Bayesian analyses on the phylogenetic relationship between languages,
particularly for cognate identification in cases where no empirical prior
probability is available.
| 2,016 | Computation and Language |
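A tiny numpy sketch of a feature-based phoneme distance as a normalized Hamming distance over binary distinctive features. The feature assignments below are illustrative, and the model in the abstract may weight or scale features differently.

```python
# Phoneme distance as normalized Hamming distance over binary features.
import numpy as np

#                voiced  nasal  labial  coronal  continuant
features = {
    "p": np.array([0, 0, 1, 0, 0]),
    "b": np.array([1, 0, 1, 0, 0]),
    "m": np.array([1, 1, 1, 0, 0]),
    "s": np.array([0, 0, 0, 1, 1]),
}

def phoneme_distance(x, y):
    fx, fy = features[x], features[y]
    return float(np.sum(fx != fy)) / len(fx)

print(phoneme_distance("p", "b"))   # 0.2  (differ only in voicing)
print(phoneme_distance("p", "s"))   # 0.6
```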
VoxML: A Visualization Modeling Language | We present the specification for a modeling language, VoxML, which encodes
semantic knowledge of real-world objects represented as three-dimensional
models, and of events and attributes related to and enacted over these objects.
VoxML is intended to overcome the limitations of existing 3D visual markup
languages by allowing for the encoding of a broad range of semantic knowledge
that can be exploited by a variety of systems and platforms, leading to
multimodal simulations of real-world scenarios using conceptual objects that
represent their semantic values.
| 2,016 | Computation and Language |
Comparative study of LSA vs Word2vec embeddings in small corpora: a case
study in dreams database | Word embeddings have been extensively studied in large text datasets.
However, only a few studies analyze semantic representations of small corpora,
particularly relevant in single-person text production studies. In the present
paper, we compare Skip-gram and LSA capabilities in this scenario, and we test
both techniques to extract relevant semantic patterns in single-series dreams
reports. LSA showed better performance than Skip-gram on small training
corpora in two semantic tests. As a case study, we show that LSA can capture
relevant word associations in dream report series, even with a small
number of dreams or low-frequency words. We propose that LSA can be used to
explore word associations in dream reports, which could bring new insight
into this classic research area of psychology.
| 2,017 | Computation and Language |
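A minimal scikit-learn sketch of LSA word vectors for a small corpus: a TF-IDF term-document matrix reduced with truncated SVD. The dream-report texts are placeholders, and the Skip-gram side of the comparison is omitted.

```python
# LSA word vectors for a small corpus via TF-IDF + truncated SVD.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

reports = [
    "i was flying over the city at night",
    "flying above water and falling suddenly",
    "i was late for an exam and lost my notes",
    "searching for the exam room in a maze",
]

vec = TfidfVectorizer()
X = vec.fit_transform(reports)                  # documents x terms
svd = TruncatedSVD(n_components=2, random_state=0).fit(X)
word_vectors = svd.components_.T                # terms x 2 LSA dimensions

vocab = list(vec.get_feature_names_out())

def similarity(w1, w2):
    a, b = word_vectors[vocab.index(w1)], word_vectors[vocab.index(w2)]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity("flying", "falling"), similarity("flying", "exam"))
```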
Conversational Recommendation System with Unsupervised Learning | We will demonstrate a conversational products recommendation agent. This
system shows how we combine research in personalized recommendation systems
with research in dialogue systems to build a virtual sales agent. Based on new
deep learning technologies we developed, the virtual agent is capable of
learning how to interact with users, how to answer user questions, what is the
next question to ask, and what to recommend when chatting with a human user.
Normally, a decent conversational agent for a particular domain requires tens
of thousands of hand-labeled conversational examples or hand-written rules. This is
a major barrier when launching a conversational agent for a new domain. We will
explore and demonstrate the effectiveness of the learning solution even when
there are no hand-written rules or hand-labeled training data.
| 2,016 | Computation and Language |
Neural Structural Correspondence Learning for Domain Adaptation | Domain adaptation, adapting models from domains rich in labeled training data
to domains poor in such data, is a fundamental NLP challenge. We introduce a
neural network model that marries together ideas from two prominent strands of
research on domain adaptation through representation learning: structural
correspondence learning (SCL, (Blitzer et al., 2006)) and autoencoder neural
networks. Particularly, our model is a three-layer neural network that learns
to encode the nonpivot features of an input example into a low-dimensional
representation, so that the existence of pivot features (features that are
prominent in both domains and convey useful information for the NLP task) in
the example can be decoded from that representation. The low-dimensional
representation is then employed in a learning algorithm for the task. Moreover,
we show how to inject pre-trained word embeddings into our model in order to
improve generalization across examples with similar pivot features. On the task
of cross-domain product sentiment classification (Blitzer et al., 2007),
consisting of 12 domain pairs, our model outperforms both the SCL and the
marginalized stacked denoising autoencoder (MSDA, (Chen et al., 2012)) methods
by 3.77% and 2.17% respectively, on average across domain pairs.
| 2,017 | Computation and Language |
Generating Simulations of Motion Events from Verbal Descriptions | In this paper, we describe a computational model for motion events in natural
language that maps from linguistic expressions, through a dynamic event
interpretation, into three-dimensional temporal simulations in a model.
Starting with the model from (Pustejovsky and Moszkowicz, 2011), we analyze
motion events using temporally-traced Labelled Transition Systems. We model the
distinction between path- and manner-motion in an operational semantics, and
further distinguish different types of manner-of-motion verbs in terms of the
mereo-topological relations that hold throughout the process of movement. From
these representations, we generate minimal models, which are realized as
three-dimensional simulations in software developed with the game engine,
Unity. The generated simulations act as a conceptual "debugger" for the
semantics of different motion verbs: that is, by testing for consistency and
informativeness in the model, simulations expose the presuppositions associated
with linguistic expressions and their compositions. Because the model
generation component is still incomplete, this paper focuses on an
implementation which maps directly from linguistic interpretations into the
Unity code snippets that create the simulations.
| 2,016 | Computation and Language |
Automatic Detection of Small Groups of Persons, Influential Members,
Relations and Hierarchy in Written Conversations Using Fuzzy Logic | Nowadays a lot of data is collected in online forums. One of the key tasks is
to determine the social structure of these online groups, for example the
identification of subgroups within a larger group. We will approach the
grouping of individual as a classification problem. The classifier will be
based on fuzzy logic. The input to the classifier will be linguistic features
and degree of relationships (among individuals). The output of the classifiers
are the groupings of individuals. We also incorporate a method that ranks the
members of the detected subgroup to identify the hierarchies in each subgroup.
Data from the HBO television show The Wire is used to analyze the efficacy and
usefulness of fuzzy logic based methods as alternative methods to classical
statistical methods usually used for these problems. The proposed methodology
could automatically detect the most influential members of each organization in
The Wire with 90% accuracy.
| 2,016 | Computation and Language |