Titles
stringlengths 6
220
| Abstracts
stringlengths 37
3.26k
| Years
int64 1.99k
2.02k
| Categories
stringclasses 1
value |
---|---|---|---|
Model Compression with Multi-Task Knowledge Distillation for Web-scale
Question Answering System | Deep pre-training and fine-tuning models (like BERT, OpenAI GPT) have
demonstrated excellent results in question answering areas. However, due to the
sheer amount of model parameters, the inference speed of these models is very
slow. How to apply these complex models to real business scenarios becomes a
challenging but practical problem. Previous works often leverage model
compression approaches to resolve this problem. However, these methods usually
induce information loss during the model compression procedure, leading to
incomparable results between compressed model and the original model. To tackle
this challenge, we propose a Multi-task Knowledge Distillation Model (MKDM for
short) for web-scale Question Answering system, by distilling knowledge from
multiple teacher models to a light-weight student model. In this way, more
generalized knowledge can be transferred. The experiment results show that our
method can significantly outperform the baseline methods and even achieve
comparable results with the original teacher models, along with significant
speedup of model inference.
| 2,019 | Computation and Language |
Dynamic Past and Future for Neural Machine Translation | Previous studies have shown that neural machine translation (NMT) models can
benefit from explicitly modeling translated (Past) and untranslated (Future) to
groups of translated and untranslated contents through parts-to-wholes
assignment. The assignment is learned through a novel variant of
routing-by-agreement mechanism (Sabour et al., 2017), namely {\em Guided
Dynamic Routing}, where the translating status at each decoding step {\em
guides} the routing process to assign each source word to its associated group
(i.e., translated or untranslated content) represented by a capsule, enabling
translation to be made from holistic context. Experiments show that our
approach achieves substantial improvements over both RNMT and Transformer by
producing more adequate translations. Extensive analysis demonstrates that our
method is highly interpretable, which is able to recognize the translated and
untranslated contents as expected.
| 2,019 | Computation and Language |
BERTScore: Evaluating Text Generation with BERT | We propose BERTScore, an automatic evaluation metric for text generation.
Analogously to common metrics, BERTScore computes a similarity score for each
token in the candidate sentence with each token in the reference sentence.
However, instead of exact matches, we compute token similarity using contextual
embeddings. We evaluate using the outputs of 363 machine translation and image
captioning systems. BERTScore correlates better with human judgments and
provides stronger model selection performance than existing metrics. Finally,
we use an adversarial paraphrase detection task to show that BERTScore is more
robust to challenging examples when compared to existing metrics.
| 2,020 | Computation and Language |
UniSent: Universal Adaptable Sentiment Lexica for 1000+ Languages | In this paper, we introduce UniSent universal sentiment lexica for $1000+$
languages. Sentiment lexica are vital for sentiment analysis in absence of
document-level annotations, a very common scenario for low-resource languages.
To the best of our knowledge, UniSent is the largest sentiment resource to date
in terms of the number of covered languages, including many low resource ones.
In this work, we use a massively parallel Bible corpus to project sentiment
information from English to other languages for sentiment analysis on Twitter
data. We introduce a method called DomDrift to mitigate the huge domain
mismatch between Bible and Twitter by a confidence weighting scheme that uses
domain-specific embeddings to compare the nearest neighbors for a candidate
sentiment word in the source (Bible) and target (Twitter) domain. We evaluate
the quality of UniSent in a subset of languages for which manually created
ground truth was available, Macedonian, Czech, German, Spanish, and French. We
show that the quality of UniSent is comparable to manually created sentiment
resources when it is used as the sentiment seed for the task of word sentiment
prediction on top of embedding representations. In addition, we show that
emoticon sentiments could be reliably predicted in the Twitter domain using
only UniSent and monolingual embeddings in German, Spanish, French, and
Italian. With the publication of this paper, we release the UniSent sentiment
lexica.
| 2,019 | Computation and Language |
Investigating Prior Knowledge for Challenging Chinese Machine Reading
Comprehension | Machine reading comprehension tasks require a machine reader to answer
questions relevant to the given document. In this paper, we present the first
free-form multiple-Choice Chinese machine reading Comprehension dataset (C^3),
containing 13,369 documents (dialogues or more formally written mixed-genre
texts) and their associated 19,577 multiple-choice free-form questions
collected from Chinese-as-a-second-language examinations.
We present a comprehensive analysis of the prior knowledge (i.e., linguistic,
domain-specific, and general world knowledge) needed for these real-world
problems. We implement rule-based and popular neural methods and find that
there is still a significant performance gap between the best performing model
(68.5%) and human readers (96.0%), especially on problems that require prior
knowledge. We further study the effects of distractor plausibility and data
augmentation based on translated relevant datasets for English on model
performance. We expect C^3 to present great challenges to existing systems as
answering 86.8% of questions requires both knowledge within and beyond the
accompanying document, and we hope that C^3 can serve as a platform to study
how to leverage various kinds of prior knowledge to better understand a given
written or orally oriented text. C^3 is available at https://dataset.org/c3/.
| 2,019 | Computation and Language |
Fine-Grained Argument Unit Recognition and Classification | Prior work has commonly defined argument retrieval from heterogeneous
document collections as a sentence-level classification task. Consequently,
argument retrieval suffers both from low recall and from sentence segmentation
errors making it difficult for humans and machines to consume the arguments. In
this work, we argue that the task should be performed on a more fine-grained
level of sequence labeling. For this, we define the task as Argument Unit
Recognition and Classification (AURC). We present a dataset of arguments from
heterogeneous sources annotated as spans of tokens within a sentence, as well
as with a corresponding stance. We show that and how such difficult argument
annotations can be effectively collected through crowdsourcing with high
interannotator agreement. The new benchmark, AURC-8, contains up to 15% more
arguments per topic as compared to annotations on the sentence level. We
identify a number of methods targeted at AURC sequence labeling, achieving
close to human performance on known domains. Further analysis also reveals
that, contrary to previous approaches, our methods are more robust against
sentence segmentation errors. We publicly release our code and the AURC-8
dataset.
| 2,019 | Computation and Language |
Exploring Unsupervised Pretraining and Sentence Structure Modelling for
Winograd Schema Challenge | Winograd Schema Challenge (WSC) was proposed as an AI-hard problem in testing
computers' intelligence on common sense representation and reasoning. This
paper presents the new state-of-theart on WSC, achieving an accuracy of 71.1%.
We demonstrate that the leading performance benefits from jointly modelling
sentence structures, utilizing knowledge learned from cutting-edge pretraining
models, and performing fine-tuning. We conduct detailed analyses, showing that
fine-tuning is critical for achieving the performance, but it helps more on the
simpler associative problems. Modelling sentence dependency structures,
however, consistently helps on the harder non-associative subset of WSC.
Analysis also shows that larger fine-tuning datasets yield better performances,
suggesting the potential benefit of future work on annotating more Winograd
schema sentences.
| 2,019 | Computation and Language |
Understanding Roles and Entities: Datasets and Models for Natural
Language Inference | We present two new datasets and a novel attention mechanism for Natural
Language Inference (NLI). Existing neural NLI models, even though when trained
on existing large datasets, do not capture the notion of entity and role well
and often end up making mistakes such as "Peter signed a deal" can be inferred
from "John signed a deal". The two datasets have been developed to mitigate
such issues and make the systems better at understanding the notion of
"entities" and "roles". After training the existing architectures on the new
dataset we observe that the existing architectures does not perform well on one
of the new benchmark. We then propose a modification to the "word-to-word"
attention function which has been uniformly reused across several popular NLI
architectures. The resulting architectures perform as well as their unmodified
counterparts on the existing benchmarks and perform significantly well on the
new benchmark for "roles" and "entities".
| 2,019 | Computation and Language |
SocialIQA: Commonsense Reasoning about Social Interactions | We introduce Social IQa, the first largescale benchmark for commonsense
reasoning about social situations. Social IQa contains 38,000 multiple choice
questions for probing emotional and social intelligence in a variety of
everyday situations (e.g., Q: "Jordan wanted to tell Tracy a secret, so Jordan
leaned towards Tracy. Why did Jordan do this?" A: "Make sure no one else could
hear"). Through crowdsourcing, we collect commonsense questions along with
correct and incorrect answers about social interactions, using a new framework
that mitigates stylistic artifacts in incorrect answers by asking workers to
provide the right answer to a different but related question. Empirical results
show that our benchmark is challenging for existing question-answering models
based on pretrained language models, compared to human performance (>20% gap).
Notably, we further establish Social IQa as a resource for transfer learning of
commonsense knowledge, achieving state-of-the-art performance on multiple
commonsense reasoning tasks (Winograd Schemas, COPA).
| 2,019 | Computation and Language |
Tetra-Tagging: Word-Synchronous Parsing with Linear-Time Inference | We present a constituency parsing algorithm that, like a supertagger, works
by assigning labels to each word in a sentence. In order to maximally leverage
current neural architectures, the model scores each word's tags in parallel,
with minimal task-specific structure. After scoring, a left-to-right
reconciliation phase extracts a tree in (empirically) linear time. Our parser
achieves 95.4 F1 on the WSJ test set while also achieving substantial speedups
compared to current state-of-the-art parsers with comparable accuracies.
| 2,020 | Computation and Language |
The Curious Case of Neural Text Degeneration | Despite considerable advancements with deep neural language models, the
enigma of neural text degeneration persists when these models are tested as
text generators. The counter-intuitive empirical observation is that even
though the use of likelihood as training objective leads to high quality models
for a broad range of language understanding tasks, using likelihood as a
decoding objective leads to text that is bland and strangely repetitive.
In this paper, we reveal surprising distributional differences between human
text and machine text. In addition, we find that decoding strategies alone can
dramatically effect the quality of machine text, even when generated from
exactly the same neural language model. Our findings motivate Nucleus Sampling,
a simple but effective method to draw the best out of neural generation. By
sampling text from the dynamic nucleus of the probability distribution, which
allows for diversity while effectively truncating the less reliable tail of the
distribution, the resulting text better demonstrates the quality of human text,
yielding enhanced diversity without sacrificing fluency and coherence.
| 2,020 | Computation and Language |
Judging Chemical Reaction Practicality From Positive Sample only
Learning | Chemical reaction practicality is the core task among all symbol intelligence
based chemical information processing, for example, it provides indispensable
clue for further automatic synthesis route inference. Considering that chemical
reactions have been represented in a language form, we propose a new solution
to generally judge the practicality of organic reaction without considering
complex quantum physical modeling or chemistry knowledge. While tackling the
practicality judgment as a machine learning task from positive and negative
(chemical reaction) samples, all existing studies have to carefully handle the
serious insufficiency issue on the negative samples. We propose an
auto-construction method to well solve the extensively existed long-term
difficulty. Experimental results show our model can effectively predict the
practicality of chemical reactions, which achieves a high accuracy of 99.76\%
on real large-scale chemical lab reaction practicality judgment.
| 2,019 | Computation and Language |
Multi-Task Learning for Argumentation Mining | Multi-task learning has recently become a very active field in deep learning
research. In contrast to learning a single task in isolation, multiple tasks
are learned at the same time, thereby utilizing the training signal of related
tasks to improve the performance on the respective machine learning tasks.
Related work shows various successes in different domains when applying this
paradigm and this thesis extends the existing empirical results by evaluating
multi-task learning in four different scenarios: argumentation mining,
epistemic segmentation, argumentation component segmentation, and
grapheme-to-phoneme conversion. We show that multi-task learning can, indeed,
improve the performance compared to single-task learning in all these
scenarios, but may also hurt the performance. Therefore, we investigate the
reasons for successful and less successful applications of this paradigm and
find that dataset properties such as entropy or the size of the label inventory
are good indicators for a potential multi-task learning success and that
multi-task learning is particularly useful if the task at hand suffers from
data sparsity, i.e. a lack of training data. Moreover, multi-task learning is
particularly effective for long input sequences in our experiments. We have
observed this trend in all evaluated scenarios. Finally, we develop a highly
configurable and extensible sequence tagging framework which supports
multi-task learning to conduct our empirical experiments and to aid future
research regarding the multi-task learning paradigm and natural language
processing.
| 2,019 | Computation and Language |
Empirical Evaluation of Leveraging Named Entities for Arabic Sentiment
Analysis | Social media reflects the public attitudes towards specific events. Events
are often related to persons, locations or organizations, the so-called Named
Entities. This can define Named Entities as sentiment-bearing components. In
this paper, we dive beyond Named Entities recognition to the exploitation of
sentiment-annotated Named Entities in Arabic sentiment analysis. Therefore, we
develop an algorithm to detect the sentiment of Named Entities based on the
majority of attitudes towards them. This enabled tagging Named Entities with
proper tags and, thus, including them in a sentiment analysis framework of two
models: supervised and lexicon-based. Both models were applied on datasets of
multi-dialectal content. The results revealed that Named Entities have no
considerable impact on the supervised model, while employing them in the
lexicon-based model improved the classification performance and outperformed
most of the baseline systems.
| 2,019 | Computation and Language |
GumDrop at the DISRPT2019 Shared Task: A Model Stacking Approach to
Discourse Unit Segmentation and Connective Detection | In this paper we present GumDrop, Georgetown University's entry at the DISRPT
2019 Shared Task on automatic discourse unit segmentation and connective
detection. Our approach relies on model stacking, creating a heterogeneous
ensemble of classifiers, which feed into a metalearner for each final task. The
system encompasses three trainable component stacks: one for sentence
splitting, one for discourse unit segmentation and one for connective
detection. The flexibility of each ensemble allows the system to generalize
well to datasets of different sizes and with varying levels of homogeneity.
| 2,019 | Computation and Language |
Natural Language Interactions in Autonomous Vehicles: Intent Detection
and Slot Filling from Passenger Utterances | Understanding passenger intents and extracting relevant slots are important
building blocks towards developing contextual dialogue systems for natural
interactions in autonomous vehicles (AV). In this work, we explored AMIE
(Automated-vehicle Multi-modal In-cabin Experience), the in-cabin agent
responsible for handling certain passenger-vehicle interactions. When the
passengers give instructions to AMIE, the agent should parse such commands
properly and trigger the appropriate functionality of the AV system. In our
current explorations, we focused on AMIE scenarios describing usages around
setting or changing the destination and route, updating driving behavior or
speed, finishing the trip and other use-cases to support various natural
commands. We collected a multi-modal in-cabin dataset with multi-turn dialogues
between the passengers and AMIE using a Wizard-of-Oz scheme via a realistic
scavenger hunt game activity. After exploring various recent Recurrent Neural
Networks (RNN) based techniques, we introduced our own hierarchical joint
models to recognize passenger intents along with relevant slots associated with
the action to be performed in AV scenarios. Our experimental results
outperformed certain competitive baselines and achieved overall F1 scores of
0.91 for utterance-level intent detection and 0.96 for slot filling tasks. In
addition, we conducted initial speech-to-text explorations by comparing
intent/slot models trained and tested on human transcriptions versus noisy
Automatic Speech Recognition (ASR) outputs. Finally, we compared the results
with single passenger rides versus the rides with multiple passengers.
| 2,019 | Computation and Language |
Condition-Transforming Variational AutoEncoder for Conversation Response
Generation | This paper proposes a new model, called condition-transforming variational
autoencoder (CTVAE), to improve the performance of conversation response
generation using conditional variational autoencoders (CVAEs). In conventional
CVAEs , the prior distribution of latent variable z follows a multivariate
Gaussian distribution with mean and variance modulated by the input conditions.
Previous work found that this distribution tends to become condition
independent in practical application. In our proposed CTVAE model, the latent
variable z is sampled by performing a non-lineartransformation on the
combination of the input conditions and the samples from a
condition-independent prior distribution N (0; I). In our objective
evaluations, the CTVAE model outperforms the CVAE model on fluency metrics and
surpasses a sequence-to-sequence (Seq2Seq) model on diversity metrics. In
subjective preference tests, our proposed CTVAE model performs significantly
better than CVAE and Seq2Seq models on generating fluency, informative and
topic relevant responses.
| 2,019 | Computation and Language |
Objective Assessment of Social Skills Using Automated Language Analysis
for Identification of Schizophrenia and Bipolar Disorder | Several studies have shown that speech and language features, automatically
extracted from clinical interviews or spontaneous discourse, have diagnostic
value for mental disorders such as schizophrenia and bipolar disorder. They
typically make use of a large feature set to train a classifier for
distinguishing between two groups of interest, i.e. a clinical and control
group. However, a purely data-driven approach runs the risk of overfitting to a
particular data set, especially when sample sizes are limited. Here, we first
down-select the set of language features to a small subset that is related to a
well-validated test of functional ability, the Social Skills Performance
Assessment (SSPA). This helps establish the concurrent validity of the selected
features. We use only these features to train a simple classifier to
distinguish between groups of interest. Linear regression reveals that a subset
of language features can effectively model the SSPA, with a correlation
coefficient of 0.75. Furthermore, the same feature set can be used to build a
strong binary classifier to distinguish between healthy controls and a clinical
group (AUC = 0.96) and also between patients within the clinical group with
schizophrenia and bipolar I disorder (AUC = 0.83).
| 2,019 | Computation and Language |
Better Automatic Evaluation of Open-Domain Dialogue Systems with
Contextualized Embeddings | Despite advances in open-domain dialogue systems, automatic evaluation of
such systems is still a challenging problem. Traditional reference-based
metrics such as BLEU are ineffective because there could be many valid
responses for a given context that share no common words with reference
responses. A recent work proposed Referenced metric and Unreferenced metric
Blended Evaluation Routine (RUBER) to combine a learning-based metric, which
predicts relatedness between a generated response and a given query, with
reference-based metric; it showed high correlation with human judgments. In
this paper, we explore using contextualized word embeddings to compute more
accurate relatedness scores, thus better evaluation metrics. Experiments show
that our evaluation metrics outperform RUBER, which is trained on static
embeddings.
| 2,019 | Computation and Language |
Who Blames Whom in a Crisis? Detecting Blame Ties from News Articles
Using Neural Networks | Blame games tend to follow major disruptions, be they financial crises,
natural disasters or terrorist attacks. To study how the blame game evolves and
shapes the dominant crisis narratives is of great significance, as sense-making
processes can affect regulatory outcomes, social hierarchies, and cultural
norms. However, it takes tremendous time and efforts for social scientists to
manually examine each relevant news article and extract the blame ties (A
blames B). In this study, we define a new task, Blame Tie Extraction, and
construct a new dataset related to the United States financial crisis
(2007-2010) from The New York Times, The Wall Street Journal and USA Today. We
build a Bi-directional Long Short-Term Memory (BiLSTM) network for contexts
where the entities appear in and it learns to automatically extract such blame
ties at the document level. Leveraging the large unsupervised model such as
GloVe and ELMo, our best model achieves an F1 score of 70% on the test set for
blame tie extraction, making it a useful tool for social scientists to extract
blame ties more efficiently.
| 2,019 | Computation and Language |
Detecting Machine-Translated Paragraphs by Matching Similar Words | Machine-translated text plays an important role in modern life by smoothing
communication from various communities using different languages. However,
unnatural translation may lead to misunderstanding, a detector is thus needed
to avoid the unfortunate mistakes. While a previous method measured the
naturalness of continuous words using a N-gram language model, another method
matched noncontinuous words across sentences but this method ignores such words
in an individual sentence. We have developed a method matching similar words
throughout the paragraph and estimating the paragraph-level coherence, that can
identify machine-translated text. Experiment evaluates on 2000 English
human-generated and 2000 English machine-translated paragraphs from German
showing that the coherence-based method achieves high performance (accuracy =
87.0%; equal error rate = 13.0%). It is efficiently better than previous
methods (best accuracy = 72.4%; equal error rate = 29.7%). Similar experiments
on Dutch and Japanese obtain 89.2% and 97.9% accuracy, respectively. The
results demonstrate the persistence of the proposed method in various languages
with different resource levels.
| 2,019 | Computation and Language |
End-to-End Spoken Language Translation | In this paper, we address the task of spoken language understanding. We
present a method for translating spoken sentences from one language into spoken
sentences in another language. Given spectrogram-spectrogram pairs, our model
can be trained completely from scratch to translate unseen sentences. Our
method consists of a pyramidal-bidirectional recurrent network combined with a
convolutional network to output sentence-level spectrograms in the target
language. Empirically, our model achieves competitive performance with
state-of-the-art methods on multiple languages and can generalize to unseen
speakers.
| 2,019 | Computation and Language |
Semantic Drift in Multilingual Representations | Multilingual representations have mostly been evaluated based on their
performance on specific tasks. In this article, we look beyond engineering
goals and analyze the relations between languages in computational
representations. We introduce a methodology for comparing languages based on
their organization of semantic concepts. We propose to conduct an adapted
version of representational similarity analysis of a selected set of concepts
in computational multilingual representations. Using this analysis method, we
can reconstruct a phylogenetic tree that closely resembles those assumed by
linguistic experts. These results indicate that multilingual distributional
representations which are only trained on monolingual text and bilingual
dictionaries preserve relations between languages without the need for any
etymological information. In addition, we propose a measure to identify
semantic drift between language families. We perform experiments on word-based
and sentence-based multilingual models and provide both quantitative results
and qualitative examples. Analyses of semantic drift in multilingual
representations can serve two purposes: they can indicate unwanted
characteristics of the computational models and they provide a quantitative
means to study linguistic phenomena across languages. The code is available at
https://github.com/beinborn/SemanticDrift.
| 2,020 | Computation and Language |
Listening between the Lines: Learning Personal Attributes from
Conversations | Open-domain dialogue agents must be able to converse about many topics while
incorporating knowledge about the user into the conversation. In this work we
address the acquisition of such knowledge, for personalization in downstream
Web applications, by extracting personal attributes from conversations. This
problem is more challenging than the established task of information extraction
from scientific publications or Wikipedia articles, because dialogues often
give merely implicit cues about the speaker. We propose methods for inferring
personal attributes, such as profession, age or family status, from
conversations using deep learning. Specifically, we propose several Hidden
Attribute Models, which are neural networks leveraging attention mechanisms and
embeddings. Our methods are trained on a per-predicate basis to output rankings
of object values for a given subject-predicate combination (e.g., ranking the
doctor and nurse professions high when speakers talk about patients, emergency
rooms, etc). Experiments with various conversational texts including Reddit
discussions, movie scripts and a collection of crowdsourced personal dialogues
demonstrate the viability of our methods and their superior performance
compared to state-of-the-art baselines.
| 2,019 | Computation and Language |
On the Contributions of Visual and Textual Supervision in Low-Resource
Semantic Speech Retrieval | Recent work has shown that speech paired with images can be used to learn
semantically meaningful speech representations even without any textual
supervision. In real-world low-resource settings, however, we often have access
to some transcribed speech. We study whether and how visual grounding is useful
in the presence of varying amounts of textual supervision. In particular, we
consider the task of semantic speech retrieval in a low-resource setting. We
use a previously studied data set and task, where models are trained on images
with spoken captions and evaluated on human judgments of semantic relevance. We
propose a multitask learning approach to leverage both visual and textual
modalities, with visual supervision in the form of keyword probabilities from
an external tagger. We find that visual grounding is helpful even in the
presence of textual supervision, and we analyze this effect over a range of
sizes of transcribed data sets. With ~5 hours of transcribed speech, we obtain
23% higher average precision when also using visual supervision.
| 2,019 | Computation and Language |
Assessing the Tolerance of Neural Machine Translation Systems Against
Speech Recognition Errors | Machine translation systems are conventionally trained on textual resources
that do not model phenomena that occur in spoken language. While the evaluation
of neural machine translation systems on textual inputs is actively researched
in the literature , little has been discovered about the complexities of
translating spoken language data with neural models. We introduce and motivate
interesting problems one faces when considering the translation of automatic
speech recognition (ASR) outputs on neural machine translation (NMT) systems.
We test the robustness of sentence encoding approaches for NMT encoder-decoder
modeling, focusing on word-based over byte-pair encoding. We compare the
translation of utterances containing ASR errors in state-of-the-art NMT
encoder-decoder systems against a strong phrase-based machine translation
baseline in order to better understand which phenomena present in ASR outputs
are better represented under the NMT framework than approaches that represent
translation as a linear model.
| 2,019 | Computation and Language |
Toponym Identification in Epidemiology Articles - A Deep Learning
Approach | When analyzing the spread of viruses, epidemiologists often need to identify
the location of infected hosts. This information can be found in public
databases, such as GenBank, however, information provided in these databases
are usually limited to the country or state level. More fine-grained
localization information requires phylogeographers to manually read relevant
scientific articles. In this work we propose an approach to automate the
process of place name identification from medical (epidemiology) articles. The
focus of this paper is to propose a deep learning based model for toponym
detection and experiment with the use of external linguistic features and
domain specific information. The model was evaluated using a collection of 105
epidemiology articles from PubMed Central provided by the recent SemEval task
12. Our best detection model achieves an F1 score of $80.13\%$, a significant
improvement compared to the state of the art of $69.84\%$. These results
underline the importance of domain specific embedding as well as specific
linguistic features in toponym detection in medical journals.
| 2,019 | Computation and Language |
Phonetically-Oriented Word Error Alignment for Speech Recognition Error
Analysis in Speech Translation | We propose a variation to the commonly used Word Error Rate (WER) metric for
speech recognition evaluation which incorporates the alignment of phonemes, in
the absence of time boundary information. After computing the Levenshtein
alignment on words in the reference and hypothesis transcripts, spans of
adjacent errors are converted into phonemes with word and syllable boundaries
and a phonetic Levenshtein alignment is performed. The aligned phonemes are
recombined into aligned words that adjust the word alignment labels in each
error region. We demonstrate that our Phonetically-Oriented Word Error Rate
(POWER) yields similar scores to WER with the added advantages of better word
alignments and the ability to capture one-to-many word alignments corresponding
to homophonic errors in speech recognition hypotheses. These improved
alignments allow us to better trace the impact of Levenshtein error types on
downstream tasks such as speech translation.
| 2,019 | Computation and Language |
The Zero Resource Speech Challenge 2019: TTS without T | We present the Zero Resource Speech Challenge 2019, which proposes to build a
speech synthesizer without any text or phonetic labels: hence, TTS without T
(text-to-speech without text). We provide raw audio for a target voice in an
unknown language (the Voice dataset), but no alignment, text or labels.
Participants must discover subword units in an unsupervised way (using the Unit
Discovery dataset) and align them to the voice recordings in a way that works
best for the purpose of synthesizing novel utterances from novel speakers,
similar to the target speaker's voice. We describe the metrics used for
evaluation, a baseline system consisting of unsupervised subword unit discovery
plus a standard TTS system, and a topline TTS using gold phoneme
transcriptions. We present an overview of the 19 submitted systems from 10
teams and discuss the main results.
| 2,019 | Computation and Language |
Terminologies augmented recurrent neural network model for clinical
named entity recognition | We aimed to enhance the performance of a supervised model for clinical
named-entity recognition (NER) using medical terminologies. In order to
evaluate our system in French, we built a corpus for 5 types of clinical
entities. We used a terminology-based system as baseline, built upon UMLS and
SNOMED. Then, we evaluated a biGRU-CRF, and an hybrid system using the
prediction of the terminology-based system as feature for the biGRU-CRF. In
English, we evaluated the NER systems on the i2b2-2009 Medication Challenge for
Drug name recognition, which contained 8,573 entities for 268 documents. In
French, we built APcNER, a corpus of 147 documents annotated for 5 entities
(drug name, sign or symptom, disease or disorder, diagnostic procedure or lab
test and therapeutic procedure). We evaluated each NER systems using exact and
partial match definition of F-measure for NER. The APcNER contains 4,837
entities which took 28 hours to annotate, the inter-annotator agreement was
acceptable for Drug name in exact match (85%) and acceptable for other entity
types in non-exact match (>70%). For drug name recognition on both i2b2-2009
and APcNER, the biGRU-CRF performed better that the terminology-based system,
with an exact-match F-measure of 91.1% versus 73% and 81.9% versus 75%
respectively. Moreover, the hybrid system outperformed the biGRU-CRF, with an
exact-match F-measure of 92.2% versus 91.1% (i2b2-2009) and 88.4% versus 81.9%
(APcNER). On APcNER corpus, the micro-average F-measure of the hybrid system on
the 5 entities was 69.5% in exact match, and 84.1% in non-exact match. APcNER
is a French corpus for clinical-NER of five type of entities which covers a
large variety of document types. Extending supervised model with terminology
allowed for an easy performance gain, especially in low regimes of entities,
and established near state of the art results on the i2b2-2009 corpus.
| 2,019 | Computation and Language |
Importance of Copying Mechanism for News Headline Generation | News headline generation is an essential problem of text summarization
because it is constrained, well-defined, and is still hard to solve. Models
with a limited vocabulary can not solve it well, as new named entities can
appear regularly in the news and these entities often should be in the
headline. News articles in morphologically rich languages such as Russian
require model modifications due to a large number of possible word forms. This
study aims to validate that models with a possibility of copying words from the
original article performs better than models without such an option. The
proposed model achieves a mean ROUGE score of 23 on the provided test dataset,
which is 8 points greater than the result of a similar model without a copying
mechanism. Moreover, the resulting model performs better than any known model
on the new dataset of Russian news.
| 2,019 | Computation and Language |
Probing What Different NLP Tasks Teach Machines about Function Word
Comprehension | We introduce a set of nine challenge tasks that test for the understanding of
function words. These tasks are created by structurally mutating sentences from
existing datasets to target the comprehension of specific types of function
words (e.g., prepositions, wh-words). Using these probing tasks, we explore the
effects of various pretraining objectives for sentence encoders (e.g., language
modeling, CCG supertagging and natural language inference (NLI)) on the learned
representations. Our results show that pretraining on language modeling
performs the best on average across our probing tasks, supporting its
widespread use for pretraining state-of-the-art NLP models, and CCG
supertagging and NLI pretraining perform comparably. Overall, no pretraining
objective dominates across the board, and our function word probing tasks
highlight several intuitive differences between pretraining objectives, e.g.,
that NLI helps the comprehension of negation.
| 2,019 | Computation and Language |
Neural Text Generation from Rich Semantic Representations | We propose neural models to generate high-quality text from structured
representations based on Minimal Recursion Semantics (MRS). MRS is a rich
semantic representation that encodes more precise semantic detail than other
representations such as Abstract Meaning Representation (AMR). We show that a
sequence-to-sequence model that maps a linearization of Dependency MRS, a
graph-based representation of MRS, to English text can achieve a BLEU score of
66.11 when trained on gold data. The performance can be improved further using
a high-precision, broad coverage grammar-based parser to generate a large
silver training corpus, achieving a final BLEU score of 77.17 on the full test
set, and 83.37 on the subset of test data most closely matching the silver data
domain. Our results suggest that MRS-based representations are a good choice
for applications that need both structured semantics and the ability to produce
natural language text as output.
| 2,019 | Computation and Language |
Look Who's Talking: Inferring Speaker Attributes from Personal
Longitudinal Dialog | We examine a large dialog corpus obtained from the conversation history of a
single individual with 104 conversation partners. The corpus consists of half a
million instant messages, across several messaging platforms. We focus our
analyses on seven speaker attributes, each of which partitions the set of
speakers, namely: gender; relative age; family member; romantic partner;
classmate; co-worker; and native to the same country. In addition to the
content of the messages, we examine conversational aspects such as the time
messages are sent, messaging frequency, psycholinguistic word categories,
linguistic mirroring, and graph-based features reflecting how people in the
corpus mention each other. We present two sets of experiments predicting each
attribute using (1) short context windows; and (2) a larger set of messages. We
find that using all features leads to gains of 9-14% over using message text
only.
| 2,019 | Computation and Language |
Transformers with convolutional context for ASR | The recent success of transformer networks for neural machine translation and
other NLP tasks has led to a surge in research work trying to apply it for
speech recognition. Recent efforts studied key research questions around ways
of combining positional embedding with speech features, and stability of
optimization for large scale learning of transformer networks. In this paper,
we propose replacing the sinusoidal positional embedding for transformers with
convolutionally learned input representations. These contextual representations
provide subsequent transformer blocks with relative positional information
needed for discovering long-range relationships between local concepts. The
proposed system has favorable optimization characteristics where our reported
results are produced with fixed learning rate of 1.0 and no warmup steps. The
proposed model achieves a competitive 4.7% and 12.9% WER on the Librispeech
``test clean'' and ``test other'' subsets when no extra LM text is provided.
| 2,020 | Computation and Language |
Fake News Early Detection: An Interdisciplinary Study | Massive dissemination of fake news and its potential to erode democracy has
increased the demand for accurate fake news detection. Recent advancements in
this area have proposed novel techniques that aim to detect fake news by
exploring how it propagates on social networks. Nevertheless, to detect fake
news at an early stage, i.e., when it is published on a news outlet but not yet
spread on social media, one cannot rely on news propagation information as it
does not exist. Hence, there is a strong need to develop approaches that can
detect fake news by focusing on news content. In this paper, a theory-driven
model is proposed for fake news detection. The method investigates news content
at various levels: lexicon-level, syntax-level, semantic-level and
discourse-level. We represent news at each level, relying on well-established
theories in social and forensic psychology. Fake news detection is then
conducted within a supervised machine learning framework. As an
interdisciplinary research, our work explores potential fake news patterns,
enhances the interpretability in fake news feature engineering, and studies the
relationships among fake news, deception/disinformation, and clickbaits.
Experiments conducted on two real-world datasets indicate the proposed method
can outperform the state-of-the-art and enable fake news early detection when
there is limited content information.
| 2,020 | Computation and Language |
Are We Consistently Biased? Multidimensional Analysis of Biases in
Distributional Word Vectors | Word embeddings have recently been shown to reflect many of the pronounced
societal biases (e.g., gender bias or racial bias). Existing studies are,
however, limited in scope and do not investigate the consistency of biases
across relevant dimensions like embedding models, types of texts, and different
languages. In this work, we present a systematic study of biases encoded in
distributional word vector spaces: we analyze how consistent the bias effects
are across languages, corpora, and embedding models. Furthermore, we analyze
the cross-lingual biases encoded in bilingual embedding spaces, indicative of
the effects of bias transfer encompassed in cross-lingual transfer of NLP
models. Our study yields some unexpected findings, e.g., that biases can be
emphasized or downplayed by different embedding models or that user-generated
content may be less biased than encyclopedic text. We hope our work catalyzes
bias research in NLP and informs the development of bias reduction techniques.
| 2,019 | Computation and Language |
Think Again Networks and the Delta Loss | This short paper introduces an abstraction called Think Again Networks
(ThinkNet) which can be applied to any state-dependent function (such as a
recurrent neural network).
| 2,019 | Computation and Language |
Contextualized Word Embeddings Enhanced Event Temporal Relation
Extraction for Story Understanding | Learning causal and temporal relationships between events is an important
step towards deeper story and commonsense understanding. Though there are
abundant datasets annotated with event relations for story comprehension, many
have no empirical results associated with them. In this work, we establish
strong baselines for event temporal relation extraction on two under-explored
story narrative datasets: Richer Event Description (RED) and Causal and
Temporal Relation Scheme (CaTeRS). To the best of our knowledge, these are the
first results reported on these two datasets. We demonstrate that neural
network-based models can outperform some strong traditional linguistic
feature-based models. We also conduct comparative studies to show the
contribution of adopting contextualized word embeddings (BERT) for event
temporal relation extraction from stories. Detailed analyses are offered to
better understand the results.
| 2,019 | Computation and Language |
Experiments in Cuneiform Language Identification | This paper presents methods to discriminate between languages and dialects
written in Cuneiform script, one of the first writing systems in the world. We
report the results obtained by the PZ team in the Cuneiform Language
Identification (CLI) shared task organized within the scope of the VarDial
Evaluation Campaign 2019. The task included two languages, Sumerian and
Akkadian. The latter is divided into six dialects: Old Babylonian, Middle
Babylonian peripheral, Standard Babylonian, Neo Babylonian, Late Babylonian,
and Neo Assyrian. We approach the task using a meta-classifier trained on
various SVM models and we show the effectiveness of the system for this task.
Our submission achieved 0.738 F1 score in discriminating between the seven
languages and dialects and it was ranked fourth in the competition among eight
teams.
| 2,019 | Computation and Language |
Several Experiments on Investigating Pretraining and Knowledge-Enhanced
Models for Natural Language Inference | Natural language inference (NLI) is among the most challenging tasks in
natural language understanding. Recent work on unsupervised pretraining that
leverages unsupervised signals such as language-model and sentence prediction
objectives has shown to be very effective on a wide range of NLP problems. It
would still be desirable to further understand how it helps NLI; e.g., if it
learns artifacts in data annotation or instead learn true inference knowledge.
In addition, external knowledge that does not exist in the limited amount of
NLI training data may be added to NLI models in two typical ways, e.g., from
human-created resources or an unsupervised pretraining paradigm. We runs
several experiments here to investigate whether they help NLI in the same way,
and if not,how?
| 2,019 | Computation and Language |
Understanding Dataset Design Choices for Multi-hop Reasoning | Learning multi-hop reasoning has been a key challenge for reading
comprehension models, leading to the design of datasets that explicitly focus
on it. Ideally, a model should not be able to perform well on a multi-hop
question answering task without doing multi-hop reasoning. In this paper, we
investigate two recently proposed datasets, WikiHop and HotpotQA. First, we
explore sentence-factored models for these tasks; by design, these models
cannot do multi-hop reasoning, but they are still able to solve a large number
of examples in both datasets. Furthermore, we find spurious correlations in the
unmasked version of WikiHop, which make it easy to achieve high performance
considering only the questions and answers. Finally, we investigate one key
difference between these datasets, namely span-based vs. multiple-choice
formulations of the QA task. Multiple-choice versions of both datasets can be
easily gamed, and two models we examine only marginally exceed a baseline in
this setting. Overall, while these datasets are useful testbeds,
high-performing models may not be learning as much multi-hop reasoning as
previously thought.
| 2,019 | Computation and Language |
HELP: A Dataset for Identifying Shortcomings of Neural Models in
Monotonicity Reasoning | Large crowdsourced datasets are widely used for training and evaluating
neural models on natural language inference (NLI). Despite these efforts,
neural models have a hard time capturing logical inferences, including those
licensed by phrase replacements, so-called monotonicity reasoning. Since no
large dataset has been developed for monotonicity reasoning, it is still
unclear whether the main obstacle is the size of datasets or the model
architectures themselves. To investigate this issue, we introduce a new
dataset, called HELP, for handling entailments with lexical and logical
phenomena. We add it to training data for the state-of-the-art neural models
and evaluate them on test sets for monotonicity phenomena. The results showed
that our data augmentation improved the overall accuracy. We also find that the
improvement is better on monotonicity inferences with lexical replacements than
on downward inferences with disjunction and modification. This suggests that
some types of inferences can be improved by our data augmentation while others
are immune to it.
| 2,019 | Computation and Language |
Towards Recognizing Phrase Translation Processes: Experiments on
English-French | When translating phrases (words or group of words), human translators,
consciously or not, resort to different translation processes apart from the
literal translation, such as Idiom Equivalence, Generalization,
Particularization, Semantic Modulation, etc. Translators and linguists (such as
Vinay and Darbelnet, Newmark, etc.) have proposed several typologies to
characterize the different translation processes. However, to the best of our
knowledge, there has not been effort to automatically classify these
fine-grained translation processes. Recently, an English-French parallel corpus
of TED Talks has been manually annotated with translation process categories,
along with established annotation guidelines. Based on these annotated
examples, we propose an automatic classification of translation processes at
subsentential level. Experimental results show that we can distinguish
non-literal translation from literal translation with an accuracy of 87.09%,
and 55.20% for classifying among five non-literal translation processes. This
work demonstrates that it is possible to automatically classify translation
processes. Even with a small amount of annotated examples, our experiments show
the directions that we can follow in future work. One of our long term
objectives is leveraging this automatic classification to better control
paraphrase extraction from bilingual parallel corpora.
| 2,019 | Computation and Language |
OPIEC: An Open Information Extraction Corpus | Open information extraction (OIE) systems extract relations and their
arguments from natural language text in an unsupervised manner. The resulting
extractions are a valuable resource for downstream tasks such as knowledge base
construction, open question answering, or event schema induction. In this
paper, we release, describe, and analyze an OIE corpus called OPIEC, which was
extracted from the text of English Wikipedia. OPIEC complements the available
OIE resources: It is the largest OIE corpus publicly available to date (over
340M triples) and contains valuable metadata such as provenance information,
confidence scores, linguistic annotations, and semantic annotations including
spatial and temporal information. We analyze the OPIEC corpus by comparing its
content with knowledge bases such as DBpedia or YAGO, which are also based on
Wikipedia. We found that most of the facts between entities present in OPIEC
cannot be found in DBpedia and/or YAGO, that OIE facts often differ in the
level of specificity compared to knowledge base facts, and that OIE open
relations are generally highly polysemous. We believe that the OPIEC corpus is
a valuable resource for future research on automated knowledge base
construction.
| 2,019 | Computation and Language |
Logician: A Unified End-to-End Neural Approach for Open-Domain
Information Extraction | In this paper, we consider the problem of open information extraction (OIE)
for extracting entity and relation level intermediate structures from sentences
in open-domain. We focus on four types of valuable intermediate structures
(Relation, Attribute, Description, and Concept), and propose a unified
knowledge expression form, SAOKE, to express them. We publicly release a data
set which contains more than forty thousand sentences and the corresponding
facts in the SAOKE format labeled by crowd-sourcing. To our knowledge, this is
the largest publicly available human labeled data set for open information
extraction tasks. Using this labeled SAOKE data set, we train an end-to-end
neural model using the sequenceto-sequence paradigm, called Logician, to
transform sentences into facts. For each sentence, different to existing
algorithms which generally focus on extracting each single fact without
concerning other possible facts, Logician performs a global optimization over
all possible involved facts, in which facts not only compete with each other to
attract the attention of words, but also cooperate to share words. An
experimental study on various types of open domain relation extraction tasks
reveals the consistent superiority of Logician to other states-of-the-art
algorithms. The experiments verify the reasonableness of SAOKE format, the
valuableness of SAOKE data set, the effectiveness of the proposed Logician
model, and the feasibility of the methodology to apply end-to-end learning
paradigm on supervised data sets for the challenging tasks of open information
extraction.
| 2,019 | Computation and Language |
Semantic Matching of Documents from Heterogeneous Collections: A Simple
and Transparent Method for Practical Applications | We present a very simple, unsupervised method for the pairwise matching of
documents from heterogeneous collections. We demonstrate our method with the
Concept-Project matching task, which is a binary classification task involving
pairs of documents from heterogeneous collections. Although our method only
employs standard resources without any domain- or task-specific modifications,
it clearly outperforms the more complex system of the original authors. In
addition, our method is transparent, because it provides explicit information
about how a similarity score was computed, and efficient, because it is based
on the aggregation of (pre-computable) word-level similarities.
| 2,019 | Computation and Language |
Towards Coherent and Engaging Spoken Dialog Response Generation Using
Automatic Conversation Evaluators | Encoder-decoder based neural architectures serve as the basis of
state-of-the-art approaches in end-to-end open domain dialog systems. Since
most of such systems are trained with a maximum likelihood~(MLE) objective they
suffer from issues such as lack of generalizability and the generic response
problem, i.e., a system response that can be an answer to a large number of
user utterances, e.g., "Maybe, I don't know." Having explicit feedback on the
relevance and interestingness of a system response at each turn can be a useful
signal for mitigating such issues and improving system quality by selecting
responses from different approaches. Towards this goal, we present a system
that evaluates chatbot responses at each dialog turn for coherence and
engagement. Our system provides explicit turn-level dialog quality feedback,
which we show to be highly correlated with human evaluation. To show that
incorporating this feedback in the neural response generation models improves
dialog quality, we present two different and complementary mechanisms to
incorporate explicit feedback into a neural response generation model:
reranking and direct modification of the loss function during training. Our
studies show that a response generation model that incorporates these combined
feedback mechanisms produce more engaging and coherent responses in an
open-domain spoken dialog setting, significantly improving the response quality
using both automatic and human evaluation.
| 2,019 | Computation and Language |
A self-attention based deep learning method for lesion attribute
detection from CT reports | In radiology, radiologists not only detect lesions from the medical image,
but also describe them with various attributes such as their type, location,
size, shape, and intensity. While these lesion attributes are rich and useful
in many downstream clinical applications, how to extract them from the
radiology reports is less studied. This paper outlines a novel deep learning
method to automatically extract attributes of lesions of interest from the
clinical text. Different from classical CNN models, we integrated the
multi-head self-attention mechanism to handle the long-distance information in
the sentence, and to jointly correlate different portions of sentence
representation subspaces in parallel. Evaluation on an in-house corpus
demonstrates that our method can achieve high performance with 0.848 in
precision, 0.788 in recall, and 0.815 in F-score. The new method and
constructed corpus will enable us to build automatic systems with a
higher-level understanding of the radiological world.
| 2,019 | Computation and Language |
Fine-grained Entity Recognition with Reduced False Negatives and Large
Type Coverage | Fine-grained Entity Recognition (FgER) is the task of detecting and
classifying entity mentions to a large set of types spanning diverse domains
such as biomedical, finance and sports. We observe that when the type set spans
several domains, detection of entity mention becomes a limitation for
supervised learning models. The primary reason being lack of dataset where
entity boundaries are properly annotated while covering a large spectrum of
entity types. Our work directly addresses this issue. We propose Heuristics
Allied with Distant Supervision (HAnDS) framework to automatically construct a
quality dataset suitable for the FgER task. HAnDS framework exploits the high
interlink among Wikipedia and Freebase in a pipelined manner, reducing
annotation errors introduced by naively using distant supervision approach.
Using HAnDS framework, we create two datasets, one suitable for building FgER
systems recognizing up to 118 entity types based on the FIGER type hierarchy
and another for up to 1115 entity types based on the TypeNet hierarchy. Our
extensive empirical experimentation warrants the quality of the generated
datasets. Along with this, we also provide a manually annotated dataset for
benchmarking FgER systems.
| 2,019 | Computation and Language |
English Broadcast News Speech Recognition by Humans and Machines | With recent advances in deep learning, considerable attention has been given
to achieving automatic speech recognition performance close to human
performance on tasks like conversational telephone speech (CTS) recognition. In
this paper we evaluate the usefulness of these proposed techniques on broadcast
news (BN), a similar challenging task. We also perform a set of recognition
measurements to understand how close the achieved automatic speech recognition
results are to human performance on this task. On two publicly available BN
test sets, DEV04F and RT04, our speech recognition system using LSTM and
residual network based acoustic models with a combination of n-gram and neural
network language models performs at 6.5% and 5.9% word error rate. By achieving
new performance milestones on these test sets, our experiments show that
techniques developed on other related tasks, like CTS, can be transferred to
achieve similar performance. In contrast, the best measured human recognition
performance on these test sets is much lower, at 3.6% and 2.8% respectively,
indicating that there is still room for new techniques and improvements in this
space, to reach human performance levels.
| 2,019 | Computation and Language |
Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word
Vectors | Recent literature suggests that averaged word vectors followed by simple
post-processing outperform many deep learning methods on semantic textual
similarity tasks. Furthermore, when averaged word vectors are trained
supervised on large corpora of paraphrases, they achieve state-of-the-art
results on standard STS benchmarks. Inspired by these insights, we push the
limits of word embeddings even further. We propose a novel fuzzy bag-of-words
(FBoW) representation for text that contains all the words in the vocabulary
simultaneously but with different degrees of membership, which are derived from
similarities between word vectors. We show that max-pooled word vectors are
only a special case of fuzzy BoW and should be compared via fuzzy Jaccard index
rather than cosine similarity. Finally, we propose DynaMax, a completely
unsupervised and non-parametric similarity measure that dynamically extracts
and max-pools good features depending on the sentence pair. This method is both
efficient and easy to implement, yet outperforms current baselines on STS tasks
by a large margin and is even competitive with supervised word vectors trained
to directly optimise cosine similarity.
| 2,019 | Computation and Language |
Very Deep Self-Attention Networks for End-to-End Speech Recognition | Recently, end-to-end sequence-to-sequence models for speech recognition have
gained significant interest in the research community. While previous
architecture choices revolve around time-delay neural networks (TDNN) and long
short-term memory (LSTM) recurrent neural networks, we propose to use
self-attention via the Transformer architecture as an alternative. Our analysis
shows that deep Transformer networks with high learning capacity are able to
exceed performance from previous end-to-end approaches and even match the
conventional hybrid systems. Moreover, we trained very deep models with up to
48 Transformer layers for both encoder and decoders combined with stochastic
residual connections, which greatly improve generalizability and training
efficiency. The resulting models outperform all previous end-to-end ASR
approaches on the Switchboard benchmark. An ensemble of these models achieve
9.9% and 17.7% WER on Switchboard and CallHome test sets respectively. This
finding brings our end-to-end models to competitive levels with previous hybrid
systems. Further, with model ensembling the Transformers can outperform certain
hybrid systems, which are more complicated in terms of both structure and
training procedure.
| 2,019 | Computation and Language |
FastContext: an efficient and scalable implementation of the ConText
algorithm | Objective: To develop and evaluate FastContext, an efficient, scalable
implementation of the ConText algorithm suitable for very large-scale clinical
natural language processing. Background: The ConText algorithm performs with
state-of-art accuracy in detecting the experiencer, negation status, and
temporality of concept mentions in clinical narratives. However, the speed
limitation of its current implementations hinders its use in big data
processing. Methods: We developed FastContext through hashing the ConText's
rules, then compared its speed and accuracy with JavaConText and
GeneralConText, two widely used Java implementations. Results: FastContext ran
two orders of magnitude faster and was less decelerated by rule increase than
the other two implementations used in this study for comparison. Additionally,
FastContext consistently gained accuracy improvement as the rules increased
(the desired outcome of adding new rules), while the other two implementations
did not. Conclusions: FastContext is an efficient, scalable implementation of
the popular ConText algorithm, suitable for natural language applications on
very large clinical corpora.
| 2,018 | Computation and Language |
Nested Variational Autoencoder for Topic Modeling on Microtexts with
Word Vectors | Most of the information on the Internet is represented in the form of
microtexts, which are short text snippets such as news headlines or tweets.
These sources of information are abundant, and mining these data could uncover
meaningful insights. Topic modeling is one of the popular methods to extract
knowledge from a collection of documents; however, conventional topic models
such as latent Dirichlet allocation (LDA) are unable to perform well on short
documents, mostly due to the scarcity of word co-occurrence statistics embedded
in the data. The objective of our research is to create a topic model that can
achieve great performances on microtexts while requiring a small runtime for
scalability to large datasets. To solve the lack of information of microtexts,
we allow our method to take advantage of word embeddings for additional
knowledge of relationships between words. For speed and scalability, we apply
autoencoding variational Bayes, an algorithm that can perform efficient
black-box inference in probabilistic models. The result of our work is a novel
topic model called the nested variational autoencoder, which is a distribution
that takes into account word vectors and is parameterized by a neural network
architecture. For optimization, the model is trained to approximate the
posterior distribution of the original LDA model. Experiments show the
improvements of our model on microtexts as well as its runtime advantage.
| 2,019 | Computation and Language |
Context-Dependent Semantic Parsing over Temporally Structured Data | We describe a new semantic parsing setting that allows users to query the
system using both natural language questions and actions within a graphical
user interface. Multiple time series belonging to an entity of interest are
stored in a database and the user interacts with the system to obtain a better
understanding of the entity's state and behavior, entailing sequences of
actions and questions whose answers may depend on previous factual or
navigational interactions. We design an LSTM-based encoder-decoder architecture
that models context dependency through copying mechanisms and multiple levels
of attention over inputs and previous outputs. When trained to predict tokens
using supervised learning, the proposed architecture substantially outperforms
standard sequence generation baselines. Training the architecture using policy
gradient leads to further improvements in performance, reaching a
sequence-level accuracy of 88.7% on artificial data and 74.8% on real data.
| 2,019 | Computation and Language |
Time-series Insights into the Process of Passing or Failing Online
University Courses using Neural-Induced Interpretable Student States | This paper addresses a key challenge in Educational Data Mining, namely to
model student behavioral trajectories in order to provide a means for
identifying students most at-risk, with the goal of providing supportive
interventions. While many forms of data including clickstream data or data from
sensors have been used extensively in time series models for such purposes, in
this paper we explore the use of textual data, which is sometimes available in
the records of students at large, online universities. We propose a time series
model that constructs an evolving student state representation using both
clickstream data and a signal extracted from the textual notes recorded by
human mentors assigned to each student. We explore how the addition of this
textual data improves both the predictive power of student states for the
purpose of identifying students at risk for course failure as well as for
providing interpretable insights about student course engagement processes.
| 2,019 | Computation and Language |
A system for the 2019 Sentiment, Emotion and Cognitive State Task of
DARPAs LORELEI project | During the course of a Humanitarian Assistance-Disaster Relief (HADR) crisis,
that can happen anywhere in the world, real-time information is often posted
online by the people in need of help which, in turn, can be used by different
stakeholders involved with management of the crisis. Automated processing of
such posts can considerably improve the effectiveness of such efforts; for
example, understanding the aggregated emotion from affected populations in
specific areas may help inform decision-makers on how to best allocate
resources for an effective disaster response. However, these efforts may be
severely limited by the availability of resources for the local language. The
ongoing DARPA project Low Resource Languages for Emergent Incidents (LORELEI)
aims to further language processing technologies for low resource languages in
the context of such a humanitarian crisis. In this work, we describe our
submission for the 2019 Sentiment, Emotion and Cognitive state (SEC) pilot task
of the LORELEI project. We describe a collection of sentiment analysis systems
included in our submission along with the features extracted. Our fielded
systems obtained the best results in both English and Spanish language
evaluations of the SEC pilot task.
| 2,019 | Computation and Language |
SuperGLUE: A Stickier Benchmark for General-Purpose Language
Understanding Systems | In the last year, new models and methods for pretraining and transfer
learning have driven striking performance improvements across a range of
language understanding tasks. The GLUE benchmark, introduced a little over one
year ago, offers a single-number metric that summarizes progress on a diverse
set of such tasks, but performance on the benchmark has recently surpassed the
level of non-expert humans, suggesting limited headroom for further research.
In this paper we present SuperGLUE, a new benchmark styled after GLUE with a
new set of more difficult language understanding tasks, a software toolkit, and
a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.
| 2,020 | Computation and Language |
Argument Identification in Public Comments from eRulemaking | Administrative agencies in the United States receive millions of comments
each year concerning proposed agency actions during the eRulemaking process.
These comments represent a diversity of arguments in support and opposition of
the proposals. While agencies are required to identify and respond to
substantive comments, they have struggled to keep pace with the volume of
information. In this work we address the tasks of identifying argumentative
text, classifying the type of argument claims employed, and determining the
stance of the comment. First, we propose a taxonomy of argument claims based on
an analysis of thousands of rules and millions of comments. Second, we collect
and semi-automatically bootstrap annotations to create a dataset of millions of
sentences with argument claim type annotation at the sentence level. Third, we
build a system for automatically determining argumentative spans and claim type
using our proposed taxonomy in a hierarchical classification model.
| 2,019 | Computation and Language |
KnowBias: A Novel AI Method to Detect Polarity in Online Content | We propose a novel training and inference method for detecting political bias
in long text content such as newspaper opinion articles. Obtaining long text
data and annotations at sufficient scale for training is difficult, but it is
relatively easy to extract political polarity from tweets through their
authorship; as such, we train on tweets and perform inference on articles.
Universal sentence encoders and other existing methods that aim to address this
domain-adaptation scenario deliver inaccurate and inconsistent predictions on
articles, which we show is due to a difference in opinion concentration between
tweets and articles. We propose a two-step classification scheme that utilizes
a neutral detector trained on tweets to remove neutral sentences from articles
in order to align opinion concentration and therefore improve accuracy on that
domain.
We evaluate our two-step approach using a variety of test suites, including a
set of tweets and long-form articles where annotations were crowd-sourced to
decrease label noise, measuring accuracy and Spearman-rho rank correlation. In
practice, KnowBias achieves a high accuracy of 86 (rho = 0.65) on these tweets
and 75 (rho = 0.69) on long-form articles. While we validate our method on
political bias, our scheme is general and can be readily applied to other
settings, where there exist such domain mismatches between source and target
domains. Our implementation is available for public use at https://knowbias.ml.
| 2,019 | Computation and Language |
A Topic-Agnostic Approach for Identifying Fake News Pages | Fake news and misinformation have been increasingly used to manipulate
popular opinion and influence political processes. To better understand fake
news, how they are propagated, and how to counter their effect, it is necessary
to first identify them. Recently, approaches have been proposed to
automatically classify articles as fake based on their content. An important
challenge for these approaches comes from the dynamic nature of news: as new
political events are covered, topics and discourse constantly change and thus,
a classifier trained using content from articles published at a given time is
likely to become ineffective in the future. To address this challenge, we
propose a topic-agnostic (TAG) classification strategy that uses linguistic and
web-markup features to identify fake news pages. We report experimental results
using multiple data sets which show that our approach attains high accuracy in
the identification of fake news, even as topics evolve over time.
| 2,019 | Computation and Language |
Context awareness and embedding for biomedical event extraction | Motivation: Biomedical event detection is fundamental for information
extraction in molecular biology and biomedical research. The detected events
form the central basis for comprehensive biomedical knowledge fusion,
facilitating the digestion of massive information influx from literature.
Limited by the feature context, the existing event detection models are mostly
applicable for a single task. A general and scalable computational model is
desiderated for biomedical knowledge management. Results: We consider and
propose a bottom-up detection framework to identify the events from recognized
arguments. To capture the relations between the arguments, we trained a
bi-directional Long Short-Term Memory (LSTM) network to model their context
embedding. Leveraging the compositional attributes, we further derived the
candidate samples for training event classifiers. We built our models on the
datasets from BioNLP Shared Task for evaluations. Our method achieved the
average F-scores of 0.81 and 0.92 on BioNLPST-BGI and BioNLPST-BB datasets
respectively. Comparing with 7 state-of-the-art methods, our method nearly
doubled the existing F-score performance (0.92 vs 0.56) on the BioNLPST-BB
dataset. Case studies were conducted to reveal the underlying reasons.
Availability: https://github.com/cskyan/evntextrc
| 2,019 | Computation and Language |
Effectiveness of Self Normalizing Neural Networks for Text
Classification | Self Normalizing Neural Networks(SNN) proposed on Feed Forward Neural
Networks(FNN) outperform regular FNN architectures in various machine learning
tasks. Particularly in the domain of Computer Vision, the activation function
Scaled Exponential Linear Units (SELU) proposed for SNNs, perform better than
other non linear activations such as ReLU. The goal of SNN is to produce a
normalized output for a normalized input. Established neural network
architectures like feed forward networks and Convolutional Neural Networks(CNN)
lack the intrinsic nature of normalizing outputs. Hence, requiring additional
layers such as Batch Normalization. Despite the success of SNNs, their
characteristic features on other network architectures like CNN haven't been
explored, especially in the domain of Natural Language Processing. In this
paper we aim to show the effectiveness of proposed, Self Normalizing
Convolutional Neural Networks(SCNN) on text classification. We analyze their
performance with the standard CNN architecture used on several text
classification datasets. Our experiments demonstrate that SCNN achieves
comparable results to standard CNN model with significantly fewer parameters.
Furthermore it also outperforms CNN with equal number of parameters.
| 2,019 | Computation and Language |
Contextualization of Morphological Inflection | Critical to natural language generation is the production of correctly
inflected text. In this paper, we isolate the task of predicting a fully
inflected sentence from its partially lemmatized version. Unlike traditional
morphological inflection or surface realization, our task input does not
provide ``gold'' tags that specify what morphological features to realize on
each lemmatized word; rather, such features must be inferred from sentential
context. We develop a neural hybrid graphical model that explicitly
reconstructs morphological features before predicting the inflected forms, and
compare this to a system that directly predicts the inflected forms without
relying on any morphological annotation. We experiment on several typologically
diverse languages from the Universal Dependencies treebanks, showing the
utility of incorporating linguistically-motivated latent variables into NLP
models.
| 2,019 | Computation and Language |
Learning to Denoise Distantly-Labeled Data for Entity Typing | Distantly-labeled data can be used to scale up training of statistical
models, but it is typically noisy and that noise can vary with the distant
labeling technique. In this work, we propose a two-stage procedure for handling
this type of data: denoise it with a learned model, then train our final model
on clean and denoised distant data with standard supervised training. Our
denoising approach consists of two parts. First, a filtering function discards
examples from the distantly labeled data that are wholly unusable. Second, a
relabeling function repairs noisy labels for the retained examples. Each of
these components is a model trained on synthetically-noised examples generated
from a small manually-labeled set. We investigate this approach on the
ultra-fine entity typing task of Choi et al. (2018). Our baseline model is an
extension of their model with pre-trained ELMo representations, which already
achieves state-of-the-art performance. Adding distant data that has been
denoised with our learned models gives further performance gains over this base
model, outperforming models trained on raw distant data or
heuristically-denoised distant data.
| 2,019 | Computation and Language |
A Typedriven Vector Semantics for Ellipsis with Anaphora using Lambek
Calculus with Limited Contraction | We develop a vector space semantics for verb phrase ellipsis with anaphora
using type-driven compositional distributional semantics based on the Lambek
calculus with limited contraction (LCC) of J\"ager (2006). Distributional
semantics has a lot to say about the statistical collocation-based meanings of
content words, but provides little guidance on how to treat function words.
Formal semantics on the other hand, has powerful mechanisms for dealing with
relative pronouns, coordinators, and the like. Type-driven compositional
distributional semantics brings these two models together. We review previous
compositional distributional models of relative pronouns, coordination and a
restricted account of ellipsis in the DisCoCat framework of Coecke et al.
(2010, 2013). We show how DisCoCat cannot deal with general forms of ellipsis,
which rely on copying of information, and develop a novel way of connecting
typelogical grammar to distributional semantics by assigning vector
interpretable lambda terms to derivations of LCC in the style of Muskens &
Sadrzadeh (2016). What follows is an account of (verb phrase) ellipsis in which
word meanings can be copied: the meaning of a sentence is now a program with
non-linear access to individual word embeddings. We present the theoretical
setting, work out examples, and demonstrate our results on a toy distributional
model motivated by data.
| 2,019 | Computation and Language |
BVS Corpus: A Multilingual Parallel Corpus of Biomedical Scientific
Texts | The BVS database (Health Virtual Library) is a centralized source of
biomedical information for Latin America and Carib, created in 1998 and
coordinated by BIREME (Biblioteca Regional de Medicina) in agreement with the
Pan American Health Organization (OPAS). Abstracts are available in English,
Spanish, and Portuguese, with a subset in more than one language, thus being a
possible source of parallel corpora. In this article, we present the
development of parallel corpora from BVS in three languages: English,
Portuguese, and Spanish. Sentences were automatically aligned using the
Hunalign algorithm for EN/ES and EN/PT language pairs, and for a subset of
trilingual articles also. We demonstrate the capabilities of our corpus by
training a Neural Machine Translation (OpenNMT) system for each language pair,
which outperformed related works on scientific biomedical articles. Sentence
alignment was also manually evaluated, presenting an average 96% of correctly
aligned sentences across all languages. Our parallel corpus is freely
available, with complementary information regarding article metadata.
| 2,019 | Computation and Language |
A Parallel Corpus of Theses and Dissertations Abstracts | In Brazil, the governmental body responsible for overseeing and coordinating
post-graduate programs, CAPES, keeps records of all theses and dissertations
presented in the country. Information regarding such documents can be accessed
online in the Theses and Dissertations Catalog (TDC), which contains abstracts
in Portuguese and English, and additional metadata. Thus, this database can be
a potential source of parallel corpora for the Portuguese and English
languages. In this article, we present the development of a parallel corpus
from TDC, which is made available by CAPES under the open data initiative.
Approximately 240,000 documents were collected and aligned using the Hunalign
tool. We demonstrate the capability of our developed corpus by training
Statistical Machine Translation (SMT) and Neural Machine Translation (NMT)
models for both language directions, followed by a comparison with Google
Translate (GT). Both translation models presented better BLEU scores than GT,
with NMT system being the most accurate one. Sentence alignment was also
manually evaluated, presenting an average of 82.30% correctly aligned
sentences. Our parallel corpus is freely available in TMX format, with
complementary information regarding document metadata
| 2,018 | Computation and Language |
HHMM at SemEval-2019 Task 2: Unsupervised Frame Induction using
Contextualized Word Embeddings | We present our system for semantic frame induction that showed the best
performance in Subtask B.1 and finished as the runner-up in Subtask A of the
SemEval 2019 Task 2 on unsupervised semantic frame induction (QasemiZadeh et
al., 2019). Our approach separates this task into two independent steps: verb
clustering using word and their context embeddings and role labeling by
combining these embeddings with syntactical features. A simple combination of
these steps shows very competitive results and can be extended to process other
datasets and languages.
| 2,019 | Computation and Language |
Anonymized BERT: An Augmentation Approach to the Gendered Pronoun
Resolution Challenge | We present our 7th place solution to the Gendered Pronoun Resolution
challenge, which uses BERT without fine-tuning and a novel augmentation
strategy designed for contextual embedding token-level tasks. Our method
anonymizes the referent by replacing candidate names with a set of common
placeholder names. Besides the usual benefits of effectively increasing
training data size, this approach diversifies idiosyncratic information
embedded in names. Using same set of common first names can also help the model
recognize names better, shorten token length, and remove gender and regional
biases associated with names. The system scored 0.1947 log loss in stage 2,
where the augmentation contributed to an improvements of 0.04. Post-competition
analysis shows that, when using different embedding layers, the system scores
0.1799 which would be third place.
| 2,019 | Computation and Language |
RSL19BD at DBDC4: Ensemble of Decision Tree-based and LSTM-based Models | RSL19BD (Waseda University Sakai Laboratory) participated in the Fourth
Dialogue Breakdown Detection Challenge (DBDC4) and submitted five runs to both
English and Japanese subtasks. In these runs, we utilise the Decision
Tree-based model and the Long Short-Term Memory-based (LSTM-based) model
following the approaches of RSL17BD and KTH in the Third Dialogue Breakdown
Detection Challenge (DBDC3) respectively. The Decision Tree-based model follows
the approach of RSL17BD but utilises RandomForestRegressor instead of
ExtraTreesRegressor. In addition, instead of predicting the mean and the
variance of the probability distribution of the three breakdown labels, it
predicts the probability of each label directly. The LSTM-based model follows
the approach of KTH with some changes in the architecture and utilises
Convolutional Neural Network (CNN) to perform text feature extraction. In
addition, instead of targeting the single breakdown label and minimising the
categorical cross entropy loss, it targets the probability distribution of the
three breakdown labels and minimises the mean squared error. Run 1 utilises a
Decision Tree-based model; Run 2 utilises an LSTM-based model; Run 3 performs
an ensemble of 5 LSTM-based models; Run 4 performs an ensemble of Run 1 and Run
2; Run 5 performs an ensemble of Run 1 and Run 3. Run 5 statistically
significantly outperformed all other runs in terms of MSE (NB, PB, B) for the
English data and all other runs except Run 4 in terms of MSE (NB, PB, B) for
the Japanese data (alpha level = 0.05).
| 2,019 | Computation and Language |
A Large Parallel Corpus of Full-Text Scientific Articles | The Scielo database is an important source of scientific information in Latin
America, containing articles from several research domains. A striking
characteristic of Scielo is that many of its full-text contents are presented
in more than one language, thus being a potential source of parallel corpora.
In this article, we present the development of a parallel corpus from Scielo in
three languages: English, Portuguese, and Spanish. Sentences were automatically
aligned using the Hunalign algorithm for all language pairs, and for a subset
of trilingual articles also. We demonstrate the capabilities of our corpus by
training a Statistical Machine Translation system (Moses) for each language
pair, which outperformed related works on scientific articles. Sentence
alignment was also manually evaluated, presenting an average of 98.8% correctly
aligned sentences across all languages. Our parallel corpus is freely available
in the TMX format, with complementary information regarding article metadata.
| 2,019 | Computation and Language |
UFRGS Participation on the WMT Biomedical Translation Shared Task | This paper describes the machine translation systems developed by the
Universidade Federal do Rio Grande do Sul (UFRGS) team for the biomedical
translation shared task. Our systems are based on statistical machine
translation and neural machine translation, using the Moses and OpenNMT
toolkits, respectively. We participated in four translation directions for the
English/Spanish and English/Portuguese language pairs. To create our training
data, we concatenated several parallel corpora, both from in-domain and
out-of-domain sources, as well as terminological resources from UMLS. Our
systems achieved the best BLEU scores according to the official shared task
evaluation.
| 2,019 | Computation and Language |
Distributional Semantics and Linguistic Theory | Distributional semantics provides multi-dimensional, graded, empirically
induced word representations that successfully capture many aspects of meaning
in natural languages, as shown in a large body of work in computational
linguistics; yet, its impact in theoretical linguistics has so far been
limited. This review provides a critical discussion of the literature on
distributional semantics, with an emphasis on methods and results that are of
relevance for theoretical linguistics, in three areas: semantic change,
polysemy and composition, and the grammar-semantics interface (specifically,
the interface of semantics with syntax and with derivational morphology). The
review aims at fostering greater cross-fertilization of theoretical and
computational approaches to language, as a means to advance our collective
knowledge of how it works.
| 2,020 | Computation and Language |
M2H-GAN: A GAN-based Mapping from Machine to Human Transcripts for
Speech Understanding | Deep learning is at the core of recent spoken language understanding (SLU)
related tasks. More precisely, deep neural networks (DNNs) drastically
increased the performances of SLU systems, and numerous architectures have been
proposed. In the real-life context of theme identification of telephone
conversations, it is common to hold both a human, manual (TRS) and an
automatically transcribed (ASR) versions of the conversations. Nonetheless, and
due to production constraints, only the ASR transcripts are considered to build
automatic classifiers. TRS transcripts are only used to measure the
performances of ASR systems. Moreover, the recent performances in term of
classification accuracy, obtained by DNN related systems are close to the
performances reached by humans, and it becomes difficult to further increase
the performances by only considering the ASR transcripts. This paper proposes
to distillates the TRS knowledge available during the training phase within the
ASR representation, by using a new generative adversarial network called
M2H-GAN to generate a TRS-like version of an ASR document, to improve the theme
identification performances.
| 2,019 | Computation and Language |
Text2Node: a Cross-Domain System for Mapping Arbitrary Phrases to a
Taxonomy | Electronic health record (EHR) systems are used extensively throughout the
healthcare domain. However, data interchangeability between EHR systems is
limited due to the use of different coding standards across systems. Existing
methods of mapping coding standards based on manual human experts mapping,
dictionary mapping, symbolic NLP and classification are unscalable and cannot
accommodate large scale EHR datasets.
In this work, we present Text2Node, a cross-domain mapping system capable of
mapping medical phrases to concepts in a large taxonomy (such as SNOMED CT).
The system is designed to generalize from a limited set of training samples and
map phrases to elements of the taxonomy that are not covered by training data.
As a result, our system is scalable, robust to wording variants between coding
systems and can output highly relevant concepts when no exact concept exists in
the target taxonomy. Text2Node operates in three main stages: first, the
lexicon is mapped to word embeddings; second, the taxonomy is vectorized using
node embeddings; and finally, the mapping function is trained to connect the
two embedding spaces. We compared multiple algorithms and architectures for
each stage of the training, including GloVe and FastText word embeddings, CNN
and Bi-LSTM mapping functions, and node2vec for node embeddings. We confirmed
the robustness and generalisation properties of Text2Node by mapping ICD-9-CM
Diagnosis phrases to SNOMED CT and by zero-shot training at comparable
accuracy.
This system is a novel methodological contribution to the task of normalizing
and linking phrases to a taxonomy, advancing data interchangeability in
healthcare. When applied, the system can use electronic health records to
generate an embedding that incorporates taxonomical medical knowledge to
improve clinical predictive models.
| 2,019 | Computation and Language |
Relation Discovery with Out-of-Relation Knowledge Base as Supervision | Unsupervised relation discovery aims to discover new relations from a given
text corpus without annotated data. However, it does not consider existing
human annotated knowledge bases even when they are relevant to the relations to
be discovered. In this paper, we study the problem of how to use
out-of-relation knowledge bases to supervise the discovery of unseen relations,
where out-of-relation means that relations to discover from the text corpus and
those in knowledge bases are not overlapped. We construct a set of constraints
between entity pairs based on the knowledge base embedding and then incorporate
constraints into the relation discovery by a variational auto-encoder based
algorithm. Experiments show that our new approach can improve the
state-of-the-art relation discovery performance by a large margin.
| 2,019 | Computation and Language |
Evaluating the Portability of an NLP System for Processing
Echocardiograms: A Retrospective, Multi-site Observational Study | While natural language processing (NLP) of unstructured clinical narratives
holds the potential for patient care and clinical research, portability of NLP
approaches across multiple sites remains a major challenge. This study
investigated the portability of an NLP system developed initially at the
Department of Veterans Affairs (VA) to extract 27 key cardiac concepts from
free-text or semi-structured echocardiograms from three academic medical
centers: Weill Cornell Medicine, Mayo Clinic and Northwestern Medicine. While
the NLP system showed high precision and recall measurements for four target
concepts (aortic valve regurgitation, left atrium size at end systole, mitral
valve regurgitation, tricuspid valve regurgitation) across all sites, we found
moderate or poor results for the remaining concepts and the NLP system
performance varied between individual sites.
| 2,019 | Computation and Language |
Harvey Mudd College at SemEval-2019 Task 4: The Clint Buchanan
Hyperpartisan News Detector | We investigate the recently developed Bidirectional Encoder Representations
from Transformers (BERT) model for the hyperpartisan news detection task. Using
a subset of hand-labeled articles from SemEval as a validation set, we test the
performance of different parameters for BERT models. We find that accuracy from
two different BERT models using different proportions of the articles is
consistently high, with our best-performing model on the validation set
achieving 85% accuracy and the best-performing model on the test set achieving
77%. We further determined that our model exhibits strong consistency, labeling
independent slices of the same article identically. Finally, we find that
randomizing the order of word pieces dramatically reduces validation accuracy
(to approximately 60%), but that shuffling groups of four or more word pieces
maintains an accuracy of about 80%, indicating the model mainly gains value
from local context.
| 2,019 | Computation and Language |
Neural Chinese Word Segmentation with Lexicon and Unlabeled Data via
Posterior Regularization | Existing methods for CWS usually rely on a large number of labeled sentences
to train word segmentation models, which are expensive and time-consuming to
annotate. Luckily, the unlabeled data is usually easy to collect and many
high-quality Chinese lexicons are off-the-shelf, both of which can provide
useful information for CWS. In this paper, we propose a neural approach for
Chinese word segmentation which can exploit both lexicon and unlabeled data.
Our approach is based on a variant of posterior regularization algorithm, and
the unlabeled data and lexicon are incorporated into model training as indirect
supervision by regularizing the prediction space of CWS models. Extensive
experiments on multiple benchmark datasets in both in-domain and cross-domain
scenarios validate the effectiveness of our approach.
| 2,019 | Computation and Language |
Neural Chinese Named Entity Recognition via CNN-LSTM-CRF and Joint
Training with Word Segmentation | Chinese named entity recognition (CNER) is an important task in Chinese
natural language processing field. However, CNER is very challenging since
Chinese entity names are highly context-dependent. In addition, Chinese texts
lack delimiters to separate words, making it difficult to identify the boundary
of entities. Besides, the training data for CNER in many domains is usually
insufficient, and annotating enough training data for CNER is very expensive
and time-consuming. In this paper, we propose a neural approach for CNER.
First, we introduce a CNN-LSTM-CRF neural architecture to capture both local
and long-distance contexts for CNER. Second, we propose a unified framework to
jointly train CNER and word segmentation models in order to enhance the ability
of CNER model in identifying entity boundaries. Third, we introduce an
automatic method to generate pseudo labeled samples from existing labeled data
which can enrich the training data. Experiments on two benchmark datasets show
that our approach can effectively improve the performance of Chinese named
entity recognition, especially when training data is insufficient.
| 2,019 | Computation and Language |
Arabic Text Diacritization Using Deep Neural Networks | Diacritization of Arabic text is both an interesting and a challenging
problem at the same time with various applications ranging from speech
synthesis to helping students learning the Arabic language. Like many other
tasks or problems in Arabic language processing, the weak efforts invested into
this problem and the lack of available (open-source) resources hinder the
progress towards solving this problem. This work provides a critical review for
the currently existing systems, measures and resources for Arabic text
diacritization. Moreover, it introduces a much-needed free-for-all cleaned
dataset that can be easily used to benchmark any work on Arabic diacritization.
Extracted from the Tashkeela Corpus, the dataset consists of 55K lines
containing about 2.3M words. After constructing the dataset, existing tools and
systems are tested on it. The results of the experiments show that the neural
Shakkala system significantly outperforms traditional rule-based approaches and
other closed-source tools with a Diacritic Error Rate (DER) of 2.88% compared
with 13.78%, which the best DER for the non-neural approach (obtained by the
Mishkal tool).
| 2,019 | Computation and Language |
Question Relatedness on Stack Overflow: The Task, Dataset, and
Corpus-inspired Models | Domain-specific community question answering is becoming an integral part of
professions. Finding related questions and answers in these communities can
significantly improve the effectiveness and efficiency of information seeking.
Stack Overflow is one of the most popular communities that is being used by
millions of programmers. In this paper, we analyze the problem of predicting
knowledge unit (question thread) relatedness in Stack Overflow. In particular,
we formulate the question relatedness task as a multi-class classification
problem with four degrees of relatedness. We present a large-scale dataset with
more than 300K pairs. To the best of our knowledge, this dataset is the largest
domain-specific dataset for Question-Question relatedness. We present the steps
that we took to collect, clean, process, and assure the quality of the dataset.
The proposed dataset Stack Overflow is a useful resource to develop novel
solutions, specifically data-hungry neural network models, for the prediction
of relatedness in technical community question-answering forums. We adopt a
neural network architecture and a traditional model for this task that
effectively utilize information from different parts of knowledge units to
compute the relatedness between them. These models can be used to benchmark
novel models, as they perform well in our task and in a closely similar task.
| 2,019 | Computation and Language |
PhonSenticNet: A Cognitive Approach to Microtext Normalization for
Concept-Level Sentiment Analysis | With the current upsurge in the usage of social media platforms, the trend of
using short text (microtext) in place of standard words has seen a significant
rise. The usage of microtext poses a considerable performance issue in
concept-level sentiment analysis, since models are trained on standard words.
This paper discusses the impact of coupling sub-symbolic (phonetics) with
symbolic (machine learning) Artificial Intelligence to transform the
out-of-vocabulary concepts into their standard in-vocabulary form. The phonetic
distance is calculated using the Sorensen similarity algorithm. The
phonetically similar invocabulary concepts thus obtained are then used to
compute the correct polarity value, which was previously being miscalculated
because of the presence of microtext. Our proposed framework increases the
accuracy of polarity detection by 6% as compared to the earlier model. This
also validates the fact that microtext normalization is a necessary
pre-requisite for the sentiment analysis task.
| 2,019 | Computation and Language |
Poly-encoders: Transformer Architectures and Pre-training Strategies for
Fast and Accurate Multi-sentence Scoring | The use of deep pre-trained bidirectional transformers has led to remarkable
progress in a number of applications (Devlin et al., 2018). For tasks that make
pairwise comparisons between sequences, matching a given input with a
corresponding label, two approaches are common: Cross-encoders performing full
self-attention over the pair and Bi-encoders encoding the pair separately. The
former often performs better, but is too slow for practical use. In this work,
we develop a new transformer architecture, the Poly-encoder, that learns global
rather than token level self-attention features. We perform a detailed
comparison of all three approaches, including what pre-training and fine-tuning
strategies work best. We show our models achieve state-of-the-art results on
three existing tasks; that Poly-encoders are faster than Cross-encoders and
more accurate than Bi-encoders; and that the best results are obtained by
pre-training on large datasets similar to the downstream tasks.
| 2,020 | Computation and Language |
An Evaluation of Transfer Learning for Classifying Sales Engagement
Emails at Large Scale | This paper conducts an empirical investigation to evaluate transfer learning
for classifying sales engagement emails arising from digital sales engagement
platforms. Given the complexity of content and context of sales engagement,
lack of standardized large corpora and benchmarks, limited labeled examples and
heterogenous context of intent, this real-world use case poses both a challenge
and an opportunity for adopting a transfer learning approach. We propose an
evaluation framework to assess a high performance transfer learning (HPTL)
approach in three key areas in addition to commonly used accuracy metrics: 1)
effective embeddings and pretrained language model usage, 2) minimum labeled
samples requirement and 3) transfer learning implementation strategies. We use
in-house sales engagement email samples as the experiment dataset, which
includes over 3000 emails labeled as positive, objection, unsubscribe, or
not-sure. We discuss our findings on evaluating BERT, ELMo, Flair and GloVe
embeddings with both feature-based and fine-tuning approaches and their
scalability on a GPU cluster with increasingly larger labeled samples. Our
results show that fine-tuning of the BERT model outperforms with as few as 300
labeled samples, but underperforms with fewer than 300 labeled samples,
relative to all the feature-based approaches using different embeddings.
| 2,019 | Computation and Language |
A Self-Attentive Emotion Recognition Network | Modern deep learning approaches have achieved groundbreaking performance in
modeling and classifying sequential data. Specifically, attention networks
constitute the state-of-the-art paradigm for capturing long temporal dynamics.
This paper examines the efficacy of this paradigm in the challenging task of
emotion recognition in dyadic conversations. In contrast to existing
approaches, our work introduces a novel attention mechanism capable of
inferring the immensity of the effect of each past utterance on the current
speaker emotional state. The proposed attention mechanism performs this
inference procedure without the need of a decoder network; this is achieved by
means of innovative self-attention arguments. Our self-attention networks
capture the correlation patterns among consecutive encoder network states, thus
allowing to robustly and effectively model temporal dynamics over arbitrary
long temporal horizons. Thus, we enable capturing strong affective patterns
over the course of long discussions. We exhibit the effectiveness of our
approach considering the challenging IEMOCAP benchmark. As we show, our devised
methodology outperforms state-of-the-art alternatives and commonly used
approaches, giving rise to promising new research directions in the context of
Online Social Network (OSN) analysis tasks.
| 2,019 | Computation and Language |
Who wrote this book? A challenge for e-commerce | Modern e-commerce catalogs contain millions of references, associated with
textual and visual information that is of paramount importance for the products
to be found via search or browsing. Of particular significance is the book
category, where the author name(s) field poses a significant challenge. Indeed,
books written by a given author (such as F. Scott Fitzgerald) might be listed
with different authors' names in a catalog due to abbreviations and spelling
variants and mistakes, among others. To solve this problem at scale, we design
a composite system involving open data sources for books as well as machine
learning components leveraging deep learning-based techniques for natural
language processing. In particular, we use Siamese neural networks for an
approximate match with known author names, and direct correction of the
provided author's name using sequence-to-sequence learning with neural
networks. We evaluate this approach on product data from the e-commerce website
Rakuten France, and find that the top proposal of the system is the normalized
author name with 72% accuracy.
| 2,019 | Computation and Language |
A Novel Task-Oriented Text Corpus in Silent Speech Recognition and its
Natural Language Generation Construction Method | Millions of people with severe speech disorders around the world may regain
their communication capabilities through techniques of silent speech
recognition (SSR). Using electroencephalography (EEG) as a biomarker for speech
decoding has been popular for SSR. However, the lack of SSR text corpus has
impeded the development of this technique. Here, we construct a novel
task-oriented text corpus, which is utilized in the field of SSR. In the
process of construction, we propose a task-oriented hybrid construction method
based on natural language generation algorithm. The algorithm focuses on the
strategy of data-to-text generation, and has two advantages including
linguistic quality and high diversity. These two advantages use template-based
method and deep neural networks respectively. In an SSR experiment with the
generated text corpus, analysis results show that the performance of our hybrid
construction method outperforms the pure method such as template-based natural
language generation or neural natural language generation models.
| 2,019 | Computation and Language |
Point-less: More Abstractive Summarization with Pointer-Generator
Networks | The Pointer-Generator architecture has shown to be a big improvement for
abstractive summarization seq2seq models. However, the summaries produced by
this model are largely extractive as over 30% of the generated sentences are
copied from the source text. This work proposes a multihead attention
mechanism, pointer dropout, and two new loss functions to promote more
abstractive summaries while maintaining similar ROUGE scores. Both the
multihead attention and dropout do not improve N-gram novelty, however, the
dropout acts as a regularizer which improves the ROUGE score. The new loss
function achieves significantly higher novel N-grams and sentences, at the cost
of a slightly lower ROUGE score.
| 2,019 | Computation and Language |
TextKD-GAN: Text Generation using KnowledgeDistillation and Generative
Adversarial Networks | Text generation is of particular interest in many NLP applications such as
machine translation, language modeling, and text summarization. Generative
adversarial networks (GANs) achieved a remarkable success in high quality image
generation in computer vision,and recently, GANs have gained lots of interest
from the NLP community as well. However, achieving similar success in NLP would
be more challenging due to the discrete nature of text. In this work, we
introduce a method using knowledge distillation to effectively exploit GAN
setup for text generation. We demonstrate how autoencoders (AEs) can be used
for providing a continuous representation of sentences, which is a smooth
representation that assign non-zero probabilities to more than one word. We
distill this representation to train the generator to synthesize similar smooth
representations. We perform a number of experiments to validate our idea using
different datasets and show that our proposed approach yields better
performance in terms of the BLEU score and Jensen-Shannon distance (JSD)
measure compared to traditional GAN-based text generation approaches without
pre-training.
| 2,019 | Computation and Language |
CraftAssist Instruction Parsing: Semantic Parsing for a Minecraft
Assistant | We propose a large scale semantic parsing dataset focused on
instruction-driven communication with an agent in Minecraft. We describe the
data collection process which yields additional 35K human generated
instructions with their semantic annotations. We report the performance of
three baseline models and find that while a dataset of this size helps us train
a usable instruction parser, it still poses interesting generalization
challenges which we hope will help develop better and more robust models.
| 2,019 | Computation and Language |
AI-Powered Text Generation for Harmonious Human-Machine Interaction:
Current State and Future Directions | In the last two decades, the landscape of text generation has undergone
tremendous changes and is being reshaped by the success of deep learning. New
technologies for text generation ranging from template-based methods to neural
network-based methods emerged. Meanwhile, the research objectives have also
changed from generating smooth and coherent sentences to infusing personalized
traits to enrich the diversification of newly generated content. With the rapid
development of text generation solutions, one comprehensive survey is urgent to
summarize the achievements and track the state of the arts. In this survey
paper, we present the general systematical framework, illustrate the widely
utilized models and summarize the classic applications of text generation.
| 2,019 | Computation and Language |
Semi-Unsupervised Lifelong Learning for Sentiment Classification: Less
Manual Data Annotation and More Self-Studying | Lifelong machine learning is a novel machine learning paradigm which can
continually accumulate knowledge during learning. The knowledge extracting and
reusing abilities enable the lifelong machine learning to solve the related
problems. The traditional approaches like Na\"ive Bayes and some neural network
based approaches only aim to achieve the best performance upon a single task.
Unlike them, the lifelong machine learning in this paper focuses on how to
accumulate knowledge during learning and leverage them for further tasks.
Meanwhile, the demand for labelled data for training also is significantly
decreased with the knowledge reusing. This paper suggests that the aim of the
lifelong learning is to use less labelled data and computational cost to
achieve the performance as well as or even better than the supervised learning.
| 2,019 | Computation and Language |
An Adversarial Learning Framework For A Persona-Based Multi-Turn
Dialogue Model | In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq)
neural network conversation model to a multi-turn dialogue scenario by
modifying the state-of-the-art hredGAN architecture to simultaneously capture
utterance attributes such as speaker identity, dialogue topic, speaker
sentiments and so on. The proposed system, phredGAN has a persona-based HRED
generator (PHRED) and a conditional discriminator. We also explore two
approaches to accomplish the conditional discriminator: (1) phredGAN_a, a
system that passes the attribute representation as an additional input into a
traditional adversarial discriminator, and (2) phredGAN_d, a dual discriminator
system which in addition to the adversarial discriminator, collaboratively
predicts the attribute(s) that generated the input utterance. To demonstrate
the superior performance of phredGAN over the persona Seq2Seq model, we
experiment with two conversational datasets, the Ubuntu Dialogue Corpus (UDC)
and TV series transcripts from the Big Bang Theory and Friends. Performance
comparison is made with respect to a variety of quantitative measures as well
as crowd-sourced human evaluation. We also explore the trade-offs from using
either variant of phredGAN on datasets with many but weak attribute modalities
(such as with Big Bang Theory and Friends) and ones with few but strong
attribute modalities (customer-agent interactions in Ubuntu dataset).
| 2,019 | Computation and Language |
Review-Driven Answer Generation for Product-Related Questions in
E-Commerce | The users often have many product-related questions before they make a
purchase decision in E-commerce. However, it is often time-consuming to examine
each user review to identify the desired information. In this paper, we propose
a novel review-driven framework for answer generation for product-related
questions in E-commerce, named RAGE. We develope RAGE on the basis of the
multi-layer convolutional architecture to facilitate speed-up of answer
generation with the parallel computation. For each question, RAGE first
extracts the relevant review snippets from the reviews of the corresponding
product. Then, we devise a mechanism to identify the relevant information from
the noise-prone review snippets and incorporate this information to guide the
answer generation. The experiments on two real-world E-Commerce datasets show
that the proposed RAGE significantly outperforms the existing alternatives in
producing more accurate and informative answers in natural language. Moreover,
RAGE takes much less time for both model training and answer generation than
the existing RNN based generation models.
| 2,019 | Computation and Language |
Using Context Information to Enhance Simple Question Answering | With the rapid development of knowledge bases(KBs),question
answering(QA)based on KBs has become a hot research issue. In this paper,we
propose two frameworks(i.e.,pipeline framework,an end-to-end framework)to focus
answering single-relation factoid question. In both of two frameworks,we study
the effect of context information on the quality of QA,such as the entity's
notable type,out-degree. In the end-to-end framework,we combine char-level
encoding and self-attention mechanisms,using weight sharing and multi-task
strategies to enhance the accuracy of QA. Experimental results show that
context information can get better results of simple QA whether it is the
pipeline framework or the end-to-end framework. In addition,we find that the
end-to-end framework achieves results competitive with state-of-the-art
approaches in terms of accuracy and take much shorter time than them.
| 2,019 | Computation and Language |
Neural Machine Translation with Recurrent Highway Networks | Recurrent Neural Networks have lately gained a lot of popularity in language
modelling tasks, especially in neural machine translation(NMT). Very recent NMT
models are based on Encoder-Decoder, where a deep LSTM based encoder is used to
project the source sentence to a fixed dimensional vector and then another deep
LSTM decodes the target sentence from the vector. However there has been very
little work on exploring architectures that have more than one layer in
space(i.e. in each time step). This paper examines the effectiveness of the
simple Recurrent Highway Networks(RHN) in NMT tasks. The model uses Recurrent
Highway Neural Network in encoder and decoder, with attention .We also explore
the reconstructor model to improve adequacy. We demonstrate the effectiveness
of all three approaches on the IWSLT English-Vietnamese dataset. We see that
RHN performs on par with LSTM based models and even better in some cases.We see
that deep RHN models are easy to train compared to deep LSTM based models
because of highway connections. The paper also investigates the effects of
increasing recurrent depth in each time step.
| 2,018 | Computation and Language |
A Persona-based Multi-turn Conversation Model in an Adversarial Learning
Framework | In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq)
neural network conversation model to multi-turn dialogue by modifying the
state-of-the-art hredGAN architecture. To achieve this, we introduce an
additional input modality into the encoder and decoder of hredGAN to capture
other attributes such as speaker identity, location, sub-topics, and other
external attributes that might be available from the corpus of human-to-human
interactions. The resulting persona hredGAN ($phredGAN$) shows better
performance than both the existing persona-based Seq2Seq and hredGAN models
when those external attributes are available in a multi-turn dialogue corpus.
This superiority is demonstrated on TV drama series with character consistency
(such as Big Bang Theory and Friends) and customer service interaction datasets
such as Ubuntu dialogue corpus in terms of perplexity, BLEU, ROUGE, and
Distinct n-gram scores.
| 2,019 | Computation and Language |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.