Titles (string, 6-220 chars) | Abstracts (string, 37-3.26k chars) | Years (int64, 1.99k-2.02k) | Categories (1 class) |
---|---|---|---|
SpanBERT: Improving Pre-training by Representing and Predicting Spans | We present SpanBERT, a pre-training method that is designed to better
represent and predict spans of text. Our approach extends BERT by (1) masking
contiguous random spans, rather than random tokens, and (2) training the span
boundary representations to predict the entire content of the masked span,
without relying on the individual token representations within it. SpanBERT
consistently outperforms BERT and our better-tuned baselines, with substantial
gains on span selection tasks such as question answering and coreference
resolution. In particular, with the same training data and model size as
BERT-large, our single model obtains 94.6% and 88.7% F1 on SQuAD 1.1 and 2.0,
respectively. We also achieve a new state of the art on the OntoNotes
coreference resolution task (79.6% F1), strong performance on the TACRED
relation extraction benchmark, and even show gains on GLUE.
| 2,020 | Computation and Language |
Investigating Evaluation of Open-Domain Dialogue Systems With Human
Generated Multiple References | The aim of this paper is to mitigate the shortcomings of automatic evaluation
of open-domain dialog systems through multi-reference evaluation. Existing
metrics have been shown to correlate poorly with human judgement, particularly
in open-domain dialog. One alternative is to collect human annotations for
evaluation, which can be expensive and time consuming. To demonstrate the
effectiveness of multi-reference evaluation, we augment the test set of
DailyDialog with multiple references. A series of experiments show that the use
of multiple references results in improved correlation between several
automatic metrics and human judgement for both the quality and the diversity of
system output.
| 2,019 | Computation and Language |
WinoGrande: An Adversarial Winograd Schema Challenge at Scale | The Winograd Schema Challenge (WSC) (Levesque, Davis, and Morgenstern 2011),
a benchmark for commonsense reasoning, is a set of 273 expert-crafted pronoun
resolution problems originally designed to be unsolvable for statistical models
that rely on selectional preferences or word associations. However, recent
advances in neural language models have already reached around 90% accuracy on
variants of WSC. This raises an important question of whether these models have
truly acquired robust commonsense capabilities or whether they rely on spurious
biases in the datasets that lead to an overestimation of the true capabilities
of machine commonsense. To investigate this question, we introduce WinoGrande,
a large-scale dataset of 44k problems, inspired by the original WSC design, but
adjusted to improve both the scale and the hardness of the dataset. The key
steps of the dataset construction consist of (1) a carefully designed
crowdsourcing procedure, followed by (2) systematic bias reduction using a
novel AfLite algorithm that generalizes human-detectable word associations to
machine-detectable embedding associations. The best state-of-the-art methods on
WinoGrande achieve 59.4-79.1%, which are 15-35% below human performance of
94.0%, depending on the amount of the training data allowed. Furthermore, we
establish new state-of-the-art results on five related benchmarks - WSC
(90.1%), DPR (93.1%), COPA (90.6%), KnowRef (85.6%), and Winogender (97.1%).
These results have dual implications: on one hand, they demonstrate the
effectiveness of WinoGrande when used as a resource for transfer learning. On
the other hand, they raise a concern that we are likely to be overestimating
the true capabilities of machine commonsense across all these benchmarks. We
emphasize the importance of algorithmic bias reduction in existing and future
benchmarks to mitigate such overestimation.
| 2,019 | Computation and Language |
SlugBot: Developing a Computational Model and Framework of a Novel
Dialogue Genre | One of the most interesting aspects of the Amazon Alexa Prize competition is
that the framing of the competition requires the development of new
computational models of dialogue and its structure. Traditional computational
models of dialogue are of two types: (1) task-oriented dialogue, supported by
AI planning models, or simplified planning models consisting of frames with
slots to be filled; or (2) search-oriented dialogue where every user turn is
treated as a search query that may elaborate and extend current search results.
Alexa Prize dialogue systems such as SlugBot must support conversational
capabilities that go beyond what these traditional models can do. Moreover,
while traditional dialogue systems rely on theoretical computational models,
there are no existing computational theories that circumscribe the expected
system and user behaviors in the intended conversational genre of the Alexa
Prize Bots. This paper describes how UCSC's SlugBot team has combined the
development of a novel computational theoretical model, Discourse Relation
Dialogue Model, with its implementation in a modular system in order to test
and refine it. We highlight how our novel dialogue model has led us to create a
novel ontological resource, UniSlug, and how the structure of UniSlug determines
how we curate and structure content so that our dialogue manager implements
and tests our novel computational dialogue model.
| 2,019 | Computation and Language |
Semantic Web for Machine Translation: Challenges and Directions | A large number of machine translation approaches have recently been developed
to facilitate the fluid migration of content across languages. However, the
literature suggests that many obstacles must still be dealt with to achieve
better automatic translations. One of these obstacles is lexical and syntactic
ambiguity. A promising way of overcoming this problem is using Semantic Web
technologies. This article is an extended abstract of our systematic review on
machine translation approaches that rely on Semantic Web technologies for
improving the translation of texts. Overall, we present the challenges and
opportunities in the use of Semantic Web technologies in Machine Translation.
Moreover, our research suggests that while Semantic Web technologies can
enhance the quality of machine translation outputs for various problems, the
combination of both is still in its infancy.
| 2,019 | Computation and Language |
Careful Selection of Knowledge to solve Open Book Question Answering | Open book question answering is a type of natural language based QA (NLQA)
where questions are expected to be answered with respect to a given set of open
book facts, and common knowledge about a topic. Recently a challenge involving
such QA, OpenBookQA, has been proposed. Unlike most other NLQA tasks that focus
on linguistic understanding, OpenBookQA requires deeper reasoning involving
linguistic understanding as well as reasoning with common knowledge. In this
paper we address QA with respect to the OpenBookQA dataset and combine state of
the art language models with abductive information retrieval (IR), information
gain based re-ranking, passage selection and weighted scoring to achieve 72.0%
accuracy, an 11.6% improvement over the current state of the art.
| 2,019 | Computation and Language |
Bilingual Lexicon Induction through Unsupervised Machine Translation | A recent research line has obtained strong results on bilingual lexicon
induction by aligning independently trained word embeddings in two languages
and using the resulting cross-lingual embeddings to induce word translation
pairs through nearest neighbor or related retrieval methods. In this paper, we
propose an alternative approach to this problem that builds on the recent work
on unsupervised machine translation. This way, instead of directly inducing a
bilingual lexicon from cross-lingual embeddings, we use them to build a
phrase-table, combine it with a language model, and use the resulting machine
translation system to generate a synthetic parallel corpus, from which we
extract the bilingual lexicon using statistical word alignment techniques. As
such, our method can work with any word embedding and cross-lingual mapping
technique, and it does not require any additional resource besides the
monolingual corpus used to train the embeddings. When evaluated on the exact
same cross-lingual embeddings, our proposed method obtains an average
improvement of 6 accuracy points over nearest neighbor and 4 points over CSLS
retrieval, establishing a new state-of-the-art in the standard MUSE dataset.
| 2,021 | Computation and Language |
INS: An Interactive Chinese News Synthesis System | Nowadays, we are surrounded by more and more online news articles. Tens or
hundreds of news articles need to be read if we wish to explore a hot news
event or topic. So it is of vital importance to automatically synthesize a
batch of news articles related to the event or topic into a new synthesis
article (or overview article) for readers' convenience. Making news synthesis
fully automatic is so challenging that no successful solution exists to date.
In this paper, we put forward a novel Interactive News Synthesis system
(i.e. INS), which can help generate news overview articles automatically or by
interacting with users. More importantly, INS can serve as a tool for editors
to help them finish their jobs. In our experiments, INS performs well on both
topic representation and synthesis article generation. A user study also
demonstrates the usefulness and users' satisfaction with the INS tool. A demo
video is available at https://youtu.be/7ItteKW3GEk.
| 2,019 | Computation and Language |
Summary Refinement through Denoising | We propose a simple method for post-processing the outputs of a text
summarization system in order to refine its overall quality. Our approach is to
train text-to-text rewriting models to correct information redundancy errors
that may arise during summarization. We train on synthetically generated noisy
summaries, testing three different types of noise that introduce out-of-context
information within each summary. When applied on top of extractive and
abstractive summarization baselines, our summary denoising models yield metric
improvements while reducing redundancy.
| 2,019 | Computation and Language |
Adaptive Noise Injection: A Structure-Expanding Regularization for RNN | The vanilla LSTM has become one of the most promising architectures for
word-level language modeling, but, like other recurrent neural networks,
overfitting is always a key barrier to its effectiveness. Existing
noise-injection regularizations introduce random noise of fixed intensity,
which inhibits the learning of the RNN throughout the training process. In this
paper, we propose a new structure-expanding regularization method called
Adaptive Noise Injection (ANI), which treats the output of an extra RNN branch
as a kind of adaptive noise and injects it into the main-branch RNN output.
Because the adaptive noise improves as training progresses, its negative
effects can be weakened and even transformed into a positive effect that
further improves the expressiveness of the main-branch RNN. As a result, ANI
can regularize the RNN in the early stage of training and further promote its
training performance in the later stage. We conduct experiments on three
widely used corpora: PTB, WT2, and WT103, and the results verify both the
regularization effect of ANI and its ability to promote training performance.
Furthermore, we design a series of simulation experiments to explore the
reasons behind the regularization effect of ANI, and we find that during
training, robustness against parameter-update errors is strengthened when the
LSTM is equipped with ANI.
| 2,021 | Computation and Language |
Grammatical Sequence Prediction for Real-Time Neural Semantic Parsing | While sequence-to-sequence (seq2seq) models achieve state-of-the-art
performance in many natural language processing tasks, they can be too slow for
real-time applications. One performance bottleneck is predicting the most
likely next token over a large vocabulary; methods to circumvent this
bottleneck are a current research topic. We focus specifically on using seq2seq
models for semantic parsing, where we observe that grammars often exist which
specify valid formal representations of utterance semantics. By developing a
generic approach for restricting the predictions of a seq2seq model to
grammatically permissible continuations, we arrive at a widely applicable
technique for speeding up semantic parsing. The technique leads to a 74%
speed-up on an in-house dataset with a large vocabulary, compared to the same
neural model without grammatical restrictions.
| 2,019 | Computation and Language |
HireNet: a Hierarchical Attention Model for the Automatic Analysis of
Asynchronous Video Job Interviews | New technologies drastically change recruitment techniques. Some research
projects aim at designing interactive systems that help candidates practice job
interviews. Other studies aim at the automatic detection of social signals
(e.g. smile, turn of speech, etc.) in videos of job interviews. These studies
are limited not only by the number of interviews they process, but also by
the fact that they only analyze simulated job interviews (e.g. students
pretending to apply for a fake position). Asynchronous video interviewing tools
have become mature products on the human resources market, and thus, a popular
step in the recruitment process. As part of a project to help recruiters, we
collected a corpus of more than 7000 candidates having asynchronous video job
interviews for real positions and recording videos of themselves answering a
set of questions. We propose a new hierarchical attention model called HireNet
that aims at predicting the hirability of the candidates as evaluated by
recruiters. In HireNet, an interview is considered as a sequence of questions
and answers containing salient social signals. Two contextual sources of
information are modeled in HireNet: the words contained in the question and in
the job position. Our model achieves better F1-scores than previous approaches
for each modality (verbal content, audio and video). Results from early and
late multimodal fusion suggest that more sophisticated fusion schemes are
needed to improve on the monomodal results. Finally, some examples of moments
captured by the attention mechanisms suggest our model could potentially be
used to help find key moments in an asynchronous job interview.
| 2,019 | Computation and Language |
DropAttention: A Regularization Method for Fully-Connected
Self-Attention Networks | Variants of dropout have been designed for the fully-connected layer,
convolutional layer and recurrent layer in neural networks, and have been shown
to be effective in avoiding overfitting. As an appealing alternative to recurrent and
convolutional layers, the fully-connected self-attention layer surprisingly
lacks a specific dropout method. This paper explores the possibility of
regularizing the attention weights in Transformers to prevent different
contextualized feature vectors from co-adaptation. Experiments on a wide range of
tasks show that DropAttention can improve performance and reduce overfitting.
| 2,019 | Computation and Language |
Cross-Lingual Transfer for Distantly Supervised and Low-resources
Indonesian NER | Manually annotated corpora for low-resource languages are usually small in
quantity (gold), or large but distantly supervised (silver). Inspired by recent
progress in applying pre-trained language models (LMs) to many Natural Language
Processing (NLP) tasks, we propose to fine-tune a pre-trained language model from
a high-resource language to a low-resource language to improve performance in
both scenarios. Our empirical experiments demonstrate significant improvement
when fine-tuning the pre-trained language model in cross-lingual transfer for
the small gold corpus, and competitive results on the large silver corpus
compared to supervised cross-lingual transfer, which is useful when there is no
parallel annotation for the same task to begin with. We compare our proposed
cross-lingual transfer method using a pre-trained LM to different sources of
transfer, such as a monolingual LM and Part-of-Speech (POS) tagging, on the
downstream NER task for both the large silver and small gold datasets,
exploiting the character-level input of a bi-directional language model.
| 2,019 | Computation and Language |
HEIDL: Learning Linguistic Expressions with Deep Learning and
Human-in-the-Loop | While the role of humans is increasingly recognized in the machine learning
community, representation of and interaction with models in current
human-in-the-loop machine learning (HITL-ML) approaches are too low-level and
far removed from humans' conceptual models. We demonstrate HEIDL, a prototype
HITL-ML system that exposes the machine-learned model through high-level,
explainable linguistic expressions formed of predicates representing semantic
structure of text. In HEIDL, the human's role is elevated from simply evaluating
model predictions to interpreting and even updating the model logic directly by
enabling interaction with rule predicates themselves. Raising the currency of
interaction to such semantic levels calls for new interaction paradigms between
humans and machines that result in improved productivity for text analytics
model development process. Moreover, by involving humans in the process, the
human-machine co-created models generalize better to unseen data as domain
experts are able to instill their expertise by extrapolating from what has been
learned by automated algorithms from a small amount of labelled data.
| 2,021 | Computation and Language |
Time Masking: Leveraging Temporal Information in Spoken Dialogue Systems | In a spoken dialogue system, dialogue state tracker (DST) components track
the state of the conversation by updating a distribution of values associated
with each of the slots being tracked for the current user turn, using the
interactions until then. Much of the previous work has relied on modeling the
natural order of the conversation, using distance based offsets as an
approximation of time. In this work, we hypothesize that leveraging the
wall-clock temporal difference between turns is crucial for finer-grained
control of dialogue scenarios. We develop a novel approach that applies a
time mask, based on the wall-clock time difference, to the associated slot
embeddings and empirically demonstrate that our proposed approach outperforms
existing approaches that leverage distance offsets, on both an internal
benchmark dataset as well as DSTC2.
| 2,019 | Computation and Language |
LINSPECTOR WEB: A Multilingual Probing Suite for Word Representations | We present LINSPECTOR WEB, an open source multilingual inspector to analyze
word representations. Our system provides researchers working in low-resource
settings with an easily accessible web based probing tool to gain quick
insights into their word embeddings especially outside of the English language.
To do this we employ 16 simple linguistic probing tasks such as gender, case
marking, and tense for a diverse set of 28 languages. We support probing of
static word embeddings along with pretrained AllenNLP models that are commonly
used for NLP downstream tasks such as named entity recognition, natural
language inference and dependency parsing. The results are visualized in a
polar chart and also provided as a table. LINSPECTOR WEB is available as an
offline tool or at https://linspector.ukp.informatik.tu-darmstadt.de.
| 2,019 | Computation and Language |
Weakly Supervised Domain Detection | In this paper we introduce domain detection as a new natural language
processing task. We argue that the ability to detect textual segments which are
domain-heavy, i.e., sentences or phrases which are representative of and
provide evidence for a given domain, could enhance the robustness and
portability of various text classification applications. We propose an
encoder-detector framework for domain detection and bootstrap classifiers with
multiple instance learning (MIL). The model is hierarchically organized and
suited to multilabel classification. We demonstrate that despite learning with
minimal supervision, our model can be applied to text spans of different
granularities, languages, and genres. We also showcase the potential of domain
detection for text summarization.
| 2,019 | Computation and Language |
Investigating Self-Attention Network for Chinese Word Segmentation | Neural networks have become the dominant method for Chinese word segmentation.
Most existing models cast the task as sequence labeling, using BiLSTM-CRF for
representing the input and making output predictions. Recently, attention-based
sequence models have emerged as a highly competitive alternative to LSTMs,
which allow better running speed by parallelization of computation. We
investigate the self-attention network (SAN) for Chinese word segmentation,
making comparisons with BiLSTM-CRF models. In addition, the influence of
contextualized character embeddings is investigated using BERT, and a method is
proposed for integrating word information into SAN segmentation. Results show
that SAN gives highly competitive results compared with BiLSTMs, with BERT and
word information further improving in-domain and cross-domain segmentation. Our
final models give the best results on 6 heterogeneous domain benchmarks.
| 2,019 | Computation and Language |
Deep Ranking Based Cost-sensitive Multi-label Learning for Distant
Supervision Relation Extraction | Knowledge bases provide a potential way to improve the intelligence of
information retrieval (IR) systems, since a knowledge base contains numerous
relations between entities which can help the IR systems conduct inference
from one entity to another. Relation extraction is one of the
fundamental techniques to construct a knowledge base. Distant supervision is a
semi-supervised learning method for relation extraction which learns with
labeled and unlabeled data. However, this approach suffers from the problem of
relation overlapping in which one entity tuple may have multiple relation
facts. We believe that relation types can have latent connections, which we
call class ties, and can be exploited to enhance relation extraction. However,
this property between relation classes has not been fully explored before. In
this paper, to exploit class ties between relations to improve relation
extraction, we propose a general ranking based multi-label learning framework
combined with convolutional neural networks, in which ranking based loss
functions with regularization technique are introduced to learn the latent
connections between relations. Furthermore, to deal with the problem of class
imbalance in distant supervision relation extraction, we further adopt
cost-sensitive learning to rescale the costs from the positive and negative
labels. Extensive experiments on a widely used dataset show the effectiveness
of our model to exploit class ties and to relieve the class imbalance problem.
| 2,019 | Computation and Language |
On the Use/Misuse of the Term 'Phoneme' | The term 'phoneme' lies at the heart of speech science and technology, and
yet it is not clear that the research community fully appreciates its meaning
and implications. In particular, it is suspected that many researchers use the
term in a casual sense to refer to the sounds of speech, rather than as a well
defined abstract concept. If true, this means that some sections of the
community may be missing an opportunity to understand and exploit the
implications of this important psychological phenomenon. Here we review the
correct meaning of the term 'phoneme' and report the results of an
investigation into its use/misuse in the accepted papers at INTERSPEECH-2018.
It is confirmed that a significant proportion of the community (i) may not be
aware of the critical difference between 'phonetic' and 'phonemic' levels of
description, (ii) may not fully understand the significance of 'phonemic
contrast', and as a consequence, (iii) consistently misuse the term 'phoneme'.
These findings are discussed, and recommendations are made as to how this
situation might be mitigated.
| 2,019 | Computation and Language |
RoBERTa: A Robustly Optimized BERT Pretraining Approach | Language model pretraining has led to significant performance gains but
careful comparison between different approaches is challenging. Training is
computationally expensive, often done on private datasets of different sizes,
and, as we will show, hyperparameter choices have significant impact on the
final results. We present a replication study of BERT pretraining (Devlin et
al., 2019) that carefully measures the impact of many key hyperparameters and
training data size. We find that BERT was significantly undertrained, and can
match or exceed the performance of every model published after it. Our best
model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results
highlight the importance of previously overlooked design choices, and raise
questions about the source of recently reported improvements. We release our
models and code.
| 2,019 | Computation and Language |
Automatically Learning Construction Injury Precursors from Text | In light of the increasing availability of digitally recorded safety reports
in the construction industry, it is important to develop methods to exploit
these data to improve our understanding of safety incidents and ability to
learn from them. In this study, we compare several approaches to automatically
learn injury precursors from raw construction accident reports. More precisely,
we experiment with two state-of-the-art deep learning architectures for Natural
Language Processing (NLP), Convolutional Neural Networks (CNN) and Hierarchical
Attention Networks (HAN), and with the established Term Frequency - Inverse
Document Frequency representation (TF-IDF) + Support Vector Machine (SVM)
approach. For each model, we provide a method to identify (after training) the
textual patterns that are, on average, the most predictive of each safety
outcome. We show that among those pieces of text, valid injury precursors can
be found. The proposed methods can also be used by the user to visualize and
understand the models' predictions.
| 2,020 | Computation and Language |
Supervised and Unsupervised Neural Approaches to Text Readability | We present a set of novel neural supervised and unsupervised approaches for
determining the readability of documents. In the unsupervised setting, we
leverage neural language models, whereas in the supervised setting, three
different neural classification architectures are tested. We show that the
proposed neural unsupervised approach is robust, transferable across languages
and allows adaptation to a specific readability task and data set. By
systematic comparison of several neural architectures on a number of benchmark
and new labelled readability datasets in two languages, this study also offers
a comprehensive analysis of different neural approaches to readability
classification. We expose their strengths and weaknesses, compare their
performance to current state-of-the-art classification approaches to
readability, which in most cases still rely on extensive feature engineering,
and propose possibilities for improvements.
| 2,021 | Computation and Language |
Analyzing Linguistic Complexity and Scientific Impact | The number of publications and the number of citations received have become
the most common indicators of scholarly success. In this context, scientific
writing increasingly plays an important role in scholars' scientific careers.
To understand the relationship between scientific writing and scientific
impact, this paper selected 12 variables of linguistic complexity as a proxy
for depicting scientific writing. We then analyzed these features from 36,400
full-text Biology articles and 1,797 full-text Psychology articles. These
features were compared to the scientific impact of articles, grouped into high,
medium, and low categories. The results suggested no practically significant
relationship between linguistic complexity and citation strata in either
discipline. This suggests that textual complexity plays little role in
scientific impact in our data sets.
| 2,019 | Computation and Language |
Towards Effective Rebuttal: Listening Comprehension using Corpus-Wide
Claim Mining | Engaging in a live debate requires, among other things, the ability to
effectively rebut arguments claimed by your opponent. In particular, this
requires identifying these arguments. Here, we suggest doing so by
automatically mining claims from a corpus of news articles containing billions
of sentences, and searching for them in a given speech. This raises the
question of whether such claims indeed correspond to those made in spoken
speeches. To this end, we collected a large dataset of 400 speeches in
English discussing 200 controversial topics, mined claims for each topic, and
asked annotators to identify the mined claims mentioned in each speech. Results
show that in the vast majority of speeches debaters indeed make use of such
claims. In addition, we present several baselines for the automatic detection
of mined claims in speeches, forming the basis for future work. All collected
data is freely available for research.
| 2,019 | Computation and Language |
Nefnir: A high accuracy lemmatizer for Icelandic | Lemmatization, finding the basic morphological form of a word in a corpus, is
an important step in many natural language processing tasks when working with
morphologically rich languages. We describe and evaluate Nefnir, a new open
source lemmatizer for Icelandic. Nefnir uses suffix substitution rules, derived
from a large morphological database, to lemmatize tagged text. Evaluation shows
that for correctly tagged text, Nefnir obtains an accuracy of 99.55%, and for
text tagged with a PoS tagger, the accuracy obtained is 96.88%.
| 2,019 | Computation and Language |
Is BERT Really Robust? A Strong Baseline for Natural Language Attack on
Text Classification and Entailment | Machine learning algorithms are often vulnerable to adversarial examples that
have imperceptible alterations from the original counterparts but can fool the
state-of-the-art models. It is helpful to evaluate or even improve the
robustness of these models by exposing the maliciously crafted adversarial
examples. In this paper, we present TextFooler, a simple but strong baseline to
generate natural adversarial text. By applying it to two fundamental natural
language tasks, text classification and textual entailment, we successfully
attacked three target models, including the powerful pre-trained BERT, and the
widely used convolutional and recurrent neural networks. We demonstrate the
advantages of this framework in three ways: (1) it is effective, outperforming
state-of-the-art attacks in terms of success rate and perturbation rate; (2) it
is utility-preserving, preserving semantic content and grammaticality and
remaining correctly classified by humans; and (3) it is efficient, generating
adversarial text with computational complexity linear in the text length. The
code, pre-trained target models, and test examples are available at
https://github.com/jind11/TextFooler.
| 2,020 | Computation and Language |
A Hybrid Neural Network Model for Commonsense Reasoning | This paper proposes a hybrid neural network (HNN) model for commonsense
reasoning. An HNN consists of two component models, a masked language model and
a semantic similarity model, which share a BERT-based contextual encoder but
use different model-specific input and output layers. HNN obtains new
state-of-the-art results on three classic commonsense reasoning tasks, pushing
the WNLI benchmark to 89%, the Winograd Schema Challenge (WSC) benchmark to
75.1%, and the PDP60 benchmark to 90.0%. An ablation study shows that language
models and semantic similarity models are complementary approaches to
commonsense reasoning, and HNN effectively combines the strengths of both. The
code and pre-trained models will be publicly available at
https://github.com/namisan/mt-dnn.
| 2,019 | Computation and Language |
Representation Degeneration Problem in Training Natural Language
Generation Models | We study an interesting problem in training neural network-based models for
natural language generation tasks, which we call the representation
degeneration problem. We observe that when training a model for natural
language generation tasks through likelihood maximization with the weight tying
trick, especially with big training datasets, most of the learnt word
embeddings tend to degenerate and be distributed into a narrow cone, which
largely limits the representation power of word embeddings. We analyze the
conditions and causes of this problem and propose a novel regularization method
to address it. Experiments on language modeling and machine translation show
that our method can largely mitigate the representation degeneration problem
and achieve better performance than baseline algorithms.
| 2,019 | Computation and Language |
What Should I Ask? Using Conversationally Informative Rewards for
Goal-Oriented Visual Dialog | The ability to engage in goal-oriented conversations has allowed humans to
gain knowledge, reduce uncertainty, and perform tasks more efficiently.
Artificial agents, however, are still far behind humans in having goal-driven
conversations. In this work, we focus on the task of goal-oriented visual
dialogue, aiming to automatically generate a series of questions about an image
with a single objective. This task is challenging since these questions must
not only be consistent with a strategy to achieve a goal, but also consider the
contextual information in the image. We propose an end-to-end goal-oriented
visual dialogue system, that combines reinforcement learning with regularized
information gain. Unlike previous approaches that have been proposed for the
task, our work is motivated by the Rational Speech Act framework, which models
the process of human inquiry to reach a goal. We test the two versions of our
model on the GuessWhat?! dataset, obtaining significant results that outperform
the current state-of-the-art models in the task of generating questions to find
an undisclosed object in an image.
| 2,019 | Computation and Language |
CAiRE: An Empathetic Neural Chatbot | In this paper, we present an end-to-end empathetic conversation agent CAiRE.
Our system adapts the TransferTransfo (Wolf et al., 2019) learning approach, which
fine-tunes a large-scale pre-trained language model with multi-task objectives:
response language modeling, response prediction and dialogue emotion detection.
We evaluate our model on the recently proposed empathetic-dialogues dataset
(Rashkin et al., 2019); the experimental results show that CAiRE achieves
state-of-the-art performance on dialogue emotion detection and empathetic
response generation.
| 2,020 | Computation and Language |
Hybrid Code Networks using a convolutional neural network as an input
layer achieves higher turn accuracy | Dialogue management is a task in conversational artificial intelligence.
The goal of the dialogue manager is to select the appropriate response to the
conversational partner, conditioned on the input message and recent dialogue
state. Hybrid Code Networks is one such dialogue manager model, which
uses an average of word embeddings and bag-of-words as input features. We
perform experiments on Dialogue bAbI Task 6 and Alquist Conversational Dataset.
The experiments show that the convolutional neural network used as an input
layer of the Hybrid Code Network improves the model's turn accuracy.
| 2,019 | Computation and Language |
Legal entity recognition in an agglutinating language and document
connection network for EU Legislation and EU/Hungarian Case Law | We have developed an application aiming at federated search for EU and
Hungarian legislation and jurisdiction. It now contains over 1 million
documents, with daily updates. The database holds documents downloaded from the
EU sources EUR-Lex and Curia Online as well as public jurisdiction documents
from the Constitutional Court of Hungary and The National Office for The
Judiciary. The application is termed Justeus. Justeus provides comprehensive
search possibilities. Besides free text and metadata (dropdown list) searches,
it features hierarchical data structures (concept hierarchy trees) of directory
codes and classification as well as subject terms. Justeus collects all links
of a particular document to other documents (court judgements citing other case
law documents as well as legislation, national court decisions referring to EU
regulation etc.) as tables and directed graph networks. Choosing a document,
its relations to other documents are visualized in real time as a network.
Network graphs help in identifying key documents that influence or are referred
to by many other documents (legislative and/or jurisdictive) and sets of documents
predominantly referring to each other (citation networks).
| 2,019 | Computation and Language |
A mathematical model for universal semantics | We characterize the meaning of words with language-independent numerical
fingerprints, through a mathematical analysis of recurring patterns in texts.
Approximating texts by Markov processes on a long-range time scale, we are able
to extract topics, discover synonyms, and sketch semantic fields from a
particular document of moderate length, without consulting an external
knowledge base or thesaurus. Our Markov semantic model allows us to represent
each topical concept by a low-dimensional vector, interpretable as algebraic
invariants in succinct statistical operations on the document, targeting local
environments of individual words. These language-independent semantic
representations enable a robot reader to both understand short texts in a given
language (automated question-answering) and match medium-length texts across
different languages (automated word translation). Our semantic fingerprints
quantify local meaning of words in 14 representative languages across 5 major
language families, suggesting a universal and cost-effective mechanism by which
human languages are processed at the semantic level. Our protocols and source
code are publicly available at
https://github.com/yajun-zhou/linguae-naturalis-principia-mathematica
| 2,022 | Computation and Language |
Hierarchical Multi-Label Dialog Act Recognition on Spanish Data | Dialog acts reveal the intention behind the uttered words. Thus, their
automatic recognition is important for a dialog system trying to understand its
conversational partner. The study presented in this article approaches that
task on the DIHANA corpus, whose three-level dialog act annotation scheme poses
problems which have not been explored in recent studies. In addition to the
hierarchical problem, the two lower levels pose multi-label classification
problems. Furthermore, each level in the hierarchy refers to a different aspect
concerning the intention of the speaker both in terms of the structure of the
dialog and the task. Also, since its dialogs are in Spanish, it allows us to
assess whether the state-of-the-art approaches on English data generalize to a
different language. More specifically, we compare the performance of different
segment representation approaches focusing on both sequences and patterns of
words and assess the importance of the dialog history and the relations between
the multiple levels of the hierarchy. Concerning the single-label
classification problem posed by the top level, we show that the conclusions
drawn on English data also hold on Spanish data. Furthermore, we show that the
approaches can be adapted to multi-label scenarios. Finally, by hierarchically
combining the best classifiers for each level, we achieve the best results
reported for this corpus.
| 2,019 | Computation and Language |
ERNIE 2.0: A Continual Pre-training Framework for Language Understanding | Recently, pre-trained models have achieved state-of-the-art results in
various language understanding tasks, which indicates that pre-training on
large-scale corpora may play a crucial role in natural language processing.
Current pre-training procedures usually focus on training the model with
several simple tasks to grasp the co-occurrence of words or sentences. However,
besides co-occurrence, there exists other valuable lexical, syntactic and
semantic information in training corpora, such as named entities, semantic
closeness and discourse relations. In order to extract the lexical, syntactic
and semantic information from training corpora to the fullest extent, we
propose a continual pre-training framework named ERNIE 2.0, which incrementally
builds and learns pre-training tasks through constant multi-task learning.
Experimental results demonstrate that ERNIE 2.0 outperforms BERT and XLNet on
16 tasks including English tasks on GLUE benchmarks and several common tasks in
Chinese. The source codes and pre-trained models have been released at
https://github.com/PaddlePaddle/ERNIE.
| 2,019 | Computation and Language |
VIANA: Visual Interactive Annotation of Argumentation | Argumentation Mining addresses the challenging tasks of identifying
boundaries of argumentative text fragments and extracting their relationships.
Fully automated solutions do not reach satisfactory accuracy due to their
insufficient incorporation of semantics and domain knowledge. Therefore,
experts currently rely on time-consuming manual annotations. In this paper, we
present a visual analytics system that augments the manual annotation process
by automatically suggesting which text fragments to annotate next. The accuracy
of those suggestions is improved over time by incorporating linguistic
knowledge and language modeling to learn a measure of argument similarity from
user interactions. Based on a long-term collaboration with domain experts, we
identify and model five high-level analysis tasks. We enable close reading and
note-taking, annotation of arguments, argument reconstruction, extraction of
argument relations, and exploration of argument graphs. To avoid context
switches, we transition between all views through seamless morphing, visually
anchoring all text- and graph-based layers. We evaluate our system with a
two-stage expert user study based on a corpus of presidential debates. The
results show that experts prefer our system over existing solutions due to the
speedup provided by the automatic suggestions and the tight integration between
text and graph views.
| 2,019 | Computation and Language |
A Baseline Neural Machine Translation System for Indian Languages | We present a simple, yet effective, Neural Machine Translation system for
Indian languages. We demonstrate the feasibility for multiple language pairs,
and establish a strong baseline for further research.
| 2,019 | Computation and Language |
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks | Unsupervised pre-training of large neural models has recently revolutionized
Natural Language Processing. By warm-starting from the publicly released
checkpoints, NLP practitioners have pushed the state-of-the-art on multiple
benchmarks while saving significant amounts of compute time. So far the focus
has been mainly on the Natural Language Understanding tasks. In this paper, we
demonstrate the efficacy of pre-trained checkpoints for Sequence Generation. We
developed a Transformer-based sequence-to-sequence model that is compatible
with publicly available pre-trained BERT, GPT-2 and RoBERTa checkpoints and
conducted an extensive empirical study on the utility of initializing our
model, both encoder and decoder, with these checkpoints. Our models result in
new state-of-the-art results on Machine Translation, Text Summarization,
Sentence Splitting, and Sentence Fusion.
| 2,022 | Computation and Language |
Joey NMT: A Minimalist NMT Toolkit for Novices | We present Joey NMT, a minimalist neural machine translation toolkit based on
PyTorch that is specifically designed for novices. Joey NMT provides many
popular NMT features in a small and simple code base, so that novices can
easily and quickly learn to use it and adapt it to their needs. Despite its
focus on simplicity, Joey NMT supports classic architectures (RNNs,
transformers), fast beam search, weight tying, and more, and achieves
performance comparable to more complex toolkits on standard benchmarks. We
evaluate the accessibility of our toolkit in a user study where novices with
general knowledge about PyTorch and NMT and experts work through a
self-contained Joey NMT tutorial, showing that novices perform almost as well
as experts in a subsequent code quiz. Joey NMT is available at
https://github.com/joeynmt/joeynmt .
| 2,019 | Computation and Language |
Neural Mention Detection | Mention detection is an important preprocessing step for annotation and
interpretation in applications such as NER and coreference resolution, but few
stand-alone neural models have been proposed that are able to handle the full range of
mentions. In this work, we propose and compare three neural network-based
approaches to mention detection. The first approach is based on the mention
detection part of a state-of-the-art coreference resolution system; the second
uses ELMo embeddings together with a bidirectional LSTM and a biaffine
classifier; the third approach uses the recently introduced BERT model. Our
best model (using a biaffine classifier) achieves gains of up to 1.8 percentage
points on mention recall when compared with a strong baseline in a HIGH RECALL
coreference annotation setting. The same model achieves improvements of up to
5.3 and 6.2 p.p. when compared with the best-reported mention detection F1 on
the CONLL and CRAC coreference data sets respectively in a HIGH F1 annotation
setting. We then evaluate our models for coreference resolution by using
mentions predicted by our best model in state-of-the-art coreference systems.
The enhanced model achieved absolute improvements of up to 1.7 and 0.7 p.p.
when compared with our strong baseline systems (pipeline system and end-to-end
system) respectively. For nested NER, the evaluation of our model on the GENIA
corpora shows that our model matches or outperforms state-of-the-art models
despite not being specifically designed for this task.
| 2,020 | Computation and Language |
CUNI Systems for the Unsupervised News Translation Task in WMT 2019 | In this paper we describe the CUNI translation system used for the
unsupervised news shared task of the ACL 2019 Fourth Conference on Machine
Translation (WMT19). We follow the strategy of Artetxe et al. (2018b), creating
a seed phrase-based system where the phrase table is initialized from
cross-lingual embedding mappings trained on monolingual data, followed by a
neural machine translation system trained on synthetic parallel data. The
synthetic corpus was produced from a monolingual corpus by a tuned PBMT model
refined through iterative back-translation. We further focus on the handling of
named entities, i.e. the part of vocabulary where the cross-lingual embedding
mapping suffers most. Our system reaches a BLEU score of 15.3 on the
German-Czech WMT19 shared task.
| 2,019 | Computation and Language |
Reinforced Dynamic Reasoning for Conversational Question Generation | This paper investigates a new task named Conversational Question Generation
(CQG) which is to generate a question based on a passage and a conversation
history (i.e., previous turns of question-answer pairs). CQG is a crucial task
for developing intelligent agents that can drive question-answering style
conversations or test user understanding of a given passage. Towards that end,
we propose a new approach named Reinforced Dynamic Reasoning (ReDR) network,
which is based on the general encoder-decoder framework but incorporates a
reasoning procedure in a dynamic manner to better understand what has been
asked and what to ask next about the passage. To encourage producing meaningful
questions, we leverage a popular question answering (QA) model to provide
feedback and fine-tune the question generator using a reinforcement learning
mechanism. Empirical results on the recently released CoQA dataset demonstrate
the effectiveness of our method in comparison with various baselines and model
variants. Moreover, to show the applicability of our method, we also apply it
to create multi-turn question-answering conversations for passages in SQuAD.
| 2,019 | Computation and Language |
One-to-X analogical reasoning on word embeddings: a case for diachronic
armed conflict prediction from news texts | We extend the well-known word analogy task to a one-to-X formulation,
including one-to-none cases, when no correct answer exists. The task is cast as
a relation discovery problem and applied to historical armed conflicts
datasets, attempting to predict new relations of type 'location:armed-group'
based on data about past events. As the source of semantic information, we use
diachronic word embedding models trained on English news texts. A simple
technique to improve diachronic performance in such a task is demonstrated, using
a threshold based on a function of cosine distance to decrease the number of
false positives; this approach is shown to be beneficial on two different
corpora. Finally, we publish a ready-to-use test set for one-to-X analogy
evaluation on historical armed conflicts data.
| 2,019 | Computation and Language |
Machine Translation Evaluation with BERT Regressor | We introduce a metric using BERT (Bidirectional Encoder Representations
from Transformers) (Devlin et al., 2019) for automatic machine translation
evaluation. The experimental results of the WMT-2017 Metrics Shared Task
dataset show that our metric achieves state-of-the-art performance in
the segment-level metrics task for all to-English language pairs.
| 2,019 | Computation and Language |
Dual-FOFE-net Neural Models for Entity Linking with PageRank | This paper presents a simple and computationally efficient approach for
entity linking (EL), compared with recurrent neural networks (RNNs) or
convolutional neural networks (CNNs), by making use of feedforward neural
networks (FFNNs) and the recent dual fixed-size ordinally forgetting encoding
(dual-FOFE) method to fully encode the sentence fragment and its left/right
contexts into a fixed-size representation. Furthermore, in this work, we
propose to incorporate PageRank based distillation in our candidate generation
module. Our neural linking models consist of three parts: a PageRank based
candidate generation module, a dual-FOFE-net neural ranking model and a simple
NIL entity clustering system. Experimental results have shown that our proposed
neural linking models achieved higher EL accuracy than state-of-the-art models
on the TAC2016 task dataset over the baseline system, without requiring any
in-house data or complicated handcrafted features. Moreover, it achieves a
competitive accuracy on the TAC2017 task dataset.
| 2,019 | Computation and Language |
English-Czech Systems in WMT19: Document-Level Transformer | We describe our NMT systems submitted to the WMT19 shared task in
English-Czech news translation. Our systems are based on the Transformer model
implemented in either Tensor2Tensor (T2T) or Marian framework.
We aimed at improving the adequacy and coherence of translated documents by
enlarging the context of the source and target. Instead of translating each
sentence independently, we split the document into possibly overlapping
multi-sentence segments. In the case of the T2T implementation, this
"document-level"-trained system achieves a $+0.6$ BLEU improvement ($p<0.05$)
relative to the same system applied on isolated sentences. To assess the
potential effect document-level models might have on lexical coherence, we
performed a semi-automatic analysis, which revealed only a few sentences
improved in this aspect. Thus, we cannot draw any conclusions from this weak
evidence.
| 2,019 | Computation and Language |
IPRE: a Dataset for Inter-Personal Relationship Extraction | Inter-personal relationships are the basis of human society. In order to
automatically identify the relations between persons from texts, we need
annotated data for training systems. However, a massive amount of such data has
so far been lacking. To address this situation, we introduce IPRE, a new
dataset for inter-personal relationship extraction which aims to facilitate
information extraction and knowledge graph construction research. In total,
IPRE has over 41,000 labeled sentences for 34 types of relations, including
about 9,000 sentences annotated by workers. Our data is the first dataset for
inter-personal relationship extraction. Additionally, we define three
evaluation tasks based on IPRE and provide the baseline systems for further
comparison in future work.
| 2,019 | Computation and Language |
Confirmatory Aspect-based Opinion Mining Processes | A new opinion extraction method is proposed to summarize unstructured,
user-generated content (i.e., online customer reviews) in the fixed topic
domains. To differentiate the current approach from other opinion extraction
approaches, which are often exposed to a sparsity problem and lack of sentiment
scores, a confirmatory aspect-based opinion mining framework is introduced
along with its practical algorithm called DiSSBUS. In this procedure, 1) each
customer review is disintegrated into a set of clauses; 2) each clause is
summarized to bi-terms (a topic word and an evaluation word) using a
part-of-speech (POS) tagger; and 3) each bi-term is matched to a pre-specified
topic relevant to a specific domain. The proposed processes have two primary
advantages over existing methods: 1) they can decompose a single review into a
set of bi-terms related to pre-specified topics in the domain of interest and,
therefore, 2) allow identification of the reviewer's opinions on the topics via
evaluation words within the set of bi-terms. The proposed aspect-based opinion
mining is applied to customer reviews of restaurants in Hawaii obtained from
TripAdvisor, and the empirical findings validate the effectiveness of the
method.
Keywords: Clause-based sentiment analysis, Customer review, Opinion mining,
Topic modeling, User-generated content.
| 2,019 | Computation and Language |
Deep Retrieval-Based Dialogue Systems: A Short Review | Building dialogue systems that naturally converse with humans is an
attractive and active research domain. Multiple systems are being designed
every day and several datasets are becoming available. For this reason, it is
hard to keep an up-to-date view of the state-of-the-art. In this work, we present the
latest and most relevant retrieval-based dialogue systems and the available
datasets used to build and evaluate them. We discuss their limitations and
provide insights and guidelines for future work.
| 2,019 | Computation and Language |
Zero-shot transfer for implicit discourse relation classification | Automatically classifying the relation between sentences in a discourse is a
challenging task, in particular when there is no overt expression of the
relation. The task is made even more challenging by the fact that annotated training
data exists only for a small number of languages, such as English and Chinese.
We present a new system using zero-shot transfer learning for implicit
discourse relation classification, where the only resource used for the target
language is unannotated parallel text. This system is evaluated on the
discourse-annotated TED-MDB parallel corpus, where it obtains good results for
all seven languages using only English training data.
| 2,019 | Computation and Language |
Reward Learning for Efficient Reinforcement Learning in Extractive
Document Summarisation | Document summarisation can be formulated as a sequential decision-making
problem, which can be solved by Reinforcement Learning (RL) algorithms. The
predominant RL paradigm for summarisation learns a cross-input policy, which
requires considerable time, data and parameter tuning due to the huge search
spaces and the delayed rewards. Learning input-specific RL policies is a more
efficient alternative but so far depends on handcrafted rewards, which are
difficult to design and yield poor performance. We propose RELIS, a novel RL
paradigm that learns a reward function with Learning-to-Rank (L2R) algorithms
at training time and uses this reward function to train an input-specific RL
policy at test time. We prove that RELIS guarantees to generate near-optimal
summaries with appropriate L2R and RL algorithms. Empirically, we evaluate our
approach on extractive multi-document summarisation. We show that RELIS reduces
the training time by two orders of magnitude compared to the state-of-the-art
models while performing on par with them.
| 2,019 | Computation and Language |
MaSS: A Large and Clean Multilingual Corpus of Sentence-aligned Spoken
Utterances Extracted from the Bible | The CMU Wilderness Multilingual Speech Dataset (Black, 2019) is a newly
published multilingual speech dataset based on recorded readings of the New
Testament. It provides data to build Automatic Speech Recognition (ASR) and
Text-to-Speech (TTS) models for potentially 700 languages. However, the fact
that the source content (the Bible) is the same for all the languages is not
exploited to date. Therefore, this article proposes to add multilingual links
between speech segments in different languages, and shares a large and clean
dataset of 8,130 parallel spoken utterances across 8 languages (56 language
pairs). We name this corpus MaSS (Multilingual corpus of Sentence-aligned
Spoken utterances). The covered languages (Basque, English, Finnish, French,
Hungarian, Romanian, Russian and Spanish) allow research on speech-to-speech
alignment as well as on translation for typologically different language pairs.
The quality of the final corpus is attested by human evaluation performed on a
corpus subset (100 utterances, 8 language pairs). Lastly, we showcase the
usefulness of the final product on a bilingual speech retrieval task.
| 2,020 | Computation and Language |
Abstractive Document Summarization without Parallel Data | Abstractive summarization typically relies on large collections of paired
articles and summaries. However, in many cases, parallel data is scarce and
costly to obtain. We develop an abstractive summarization system that relies
only on large collections of example summaries and non-matching articles. Our
approach consists of an unsupervised sentence extractor that selects salient
sentences to include in the final summary, as well as a sentence abstractor
that is trained on pseudo-parallel and synthetic data and paraphrases each of
the extracted sentences. We perform an extensive evaluation of our method: on
the CNN/DailyMail benchmark, on which we compare our approach to fully
supervised baselines, as well as on the novel task of automatically generating
a press release from a scientific journal article, which is well suited for our
system. We show promising performance on both tasks, without relying on any
article-summary pairs.
| 2,020 | Computation and Language |
DuTongChuan: Context-aware Translation Model for Simultaneous
Interpreting | In this paper, we present DuTongChuan, a novel context-aware translation
model for simultaneous interpreting. The model continuously reads
streaming text from the Automatic Speech Recognition (ASR) model and
simultaneously determines the boundaries of Information Units (IUs) one after
another. Each detected IU is then translated into a fluent translation with two
simple yet effective decoding strategies: partial decoding and context-aware
decoding. In practice, by controlling the granularity of IUs and the size of
the context, we can easily obtain a good trade-off between latency and translation
quality. Elaborate evaluation by human translators reveals that our
system achieves promising translation quality (85.71% for Chinese-English, and
86.36% for English-Chinese), especially in terms of surprisingly good
discourse coherence. In an end-to-end (speech-to-speech simultaneous
interpreting) evaluation, the model shows impressive performance in
reducing latency (to less than 3 seconds most of the time). Furthermore, we
successfully deploy this model in a variety of Baidu's products which have
hundreds of millions of users, and we release it as a service in our AI
platform.
| 2,019 | Computation and Language |
SenseFitting: Sense Level Semantic Specialization of Word Embeddings for
Word Sense Disambiguation | We introduce a neural network-based system of Word Sense Disambiguation (WSD)
for German that is based on SenseFitting, a novel method for optimizing WSD. We
outperform knowledge-based WSD methods by up to 25% F1-score and produce a new
state-of-the-art on the German sense-annotated dataset WebCAGe. Our method uses
three feature vectors consisting of a) sense, b) gloss, and c) relational
vectors to represent target senses and to compare them with the vector
centroids of sample contexts. Utilizing widely available word embeddings and
lexical resources, we are able to compensate for the lower resource
availability of German. SenseFitting builds upon the recently introduced
semantic specialization procedure Attract-Repel, and leverages sense level
semantic constraints from lexical-semantic networks (e.g. GermaNet) or online
social dictionaries (e.g. Wiktionary) to produce high-quality sense embeddings
from pre-trained word embeddings. We evaluate our sense embeddings with a new
SimLex-999 based similarity dataset, called SimSense, that we developed for
this work. We achieve results that outperform current lemma-based
specialization methods for German, making them comparable to results achieved
for English.
| 2,019 | Computation and Language |
Learning Question-Guided Video Representation for Multi-Turn Video
Question Answering | Understanding and conversing about dynamic scenes is one of the key
capabilities of AI agents that navigate the environment and convey useful
information to humans. Video question answering is a specific scenario of such
AI-human interaction where an agent generates a natural language response to a
question regarding the video of a dynamic scene. Incorporating features from
multiple modalities, which often provide supplementary information, is one of
the challenging aspects of video question answering. Furthermore, a question
often concerns only a small segment of the video, hence encoding the entire
video sequence using a recurrent neural network is not computationally
efficient. Our proposed question-guided video representation module efficiently
generates the token-level video summary guided by each word in the question.
The learned representations are then fused with the question to generate the
answer. Through empirical evaluation on the Audio Visual Scene-aware Dialog
(AVSD) dataset, our proposed models in single-turn and multi-turn question
answering achieve state-of-the-art performance on several automatic natural
language generation evaluation metrics.
| 2,019 | Computation and Language |
Lifelong and Interactive Learning of Factual Knowledge in Dialogues | Dialogue systems are increasingly using knowledge bases (KBs) storing
real-world facts to help generate quality responses. However, because the KBs are
inherently incomplete and remain fixed during a conversation, dialogue systems
are limited in their ability to answer questions and to handle questions involving entities
or relations that are not in the KB. In this paper, we propose
an engine for Continuous and Interactive Learning of Knowledge (CILK)
for dialogue systems to give them the ability to continuously and interactively
learn and infer new knowledge during conversations. With more knowledge
accumulated over time, they will be able to learn better and answer more
questions. Our empirical evaluation shows that CILK is promising.
| 2,019 | Computation and Language |
Simple Unsupervised Summarization by Contextual Matching | We propose an unsupervised method for sentence summarization using only
language modeling. The approach employs two language models, one that is
generic (i.e. pretrained), and the other that is specific to the target domain.
We show that, using a product-of-experts criterion, these two models are sufficient
for maintaining continuous contextual matching while preserving output fluency.
Experiments on both abstractive and extractive sentence summarization data sets
show promising results of our method without being exposed to any paired data.
| 2,019 | Computation and Language |
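The product-of-experts idea above can be sketched numerically. This is not the authors' system; the two toy distributions stand in for a pretrained generic language model and a domain-specific one, and the interpolation exponent is an assumption.

# Toy product-of-experts combination of two next-token distributions.
import numpy as np

vocab = ["the", "market", "cat", "rose"]
p_generic = np.array([0.50, 0.10, 0.30, 0.10])   # generic LM next-token probs
p_domain  = np.array([0.40, 0.35, 0.05, 0.20])   # in-domain LM next-token probs

lam = 0.5                                        # mixing exponent (assumed)
poe = (p_generic ** lam) * (p_domain ** (1.0 - lam))
poe /= poe.sum()                                 # renormalize the product

for token, p in zip(vocab, poe):
    print(f"{token:>8s}  {p:.3f}")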
Normalyzing Numeronyms -- A NLP approach | This paper presents a method to apply Natural Language Processing for
normalizing numeronyms to make them understandable by humans. We approach the
problem through a two-step mechanism: we first make use of the Levenshtein
distance between words, and then apply cosine similarity to select the
normalized text, reaching greater accuracy in solving the problem. Our
approach garners accuracy figures of 71\% and 72\% for Bengali and English,
respectively.
| 2,019 | Computation and Language |
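A small sketch of the two-step Levenshtein-plus-cosine idea follows. It is illustrative only: the digit map, candidate vocabulary, and tie-breaking heuristic are assumptions, not the authors' resources or exact method.

# Sketch: expand digits, rank candidates by Levenshtein distance, and break
# ties with cosine similarity over character bigrams.
from collections import Counter
import math

DIGIT_MAP = {"2": "to", "4": "for", "8": "eat", "1": "one", "0": "o"}

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def bigram_cosine(a, b):
    va = Counter(a[i:i + 2] for i in range(len(a) - 1))
    vb = Counter(b[i:i + 2] for i in range(len(b) - 1))
    dot = sum(va[k] * vb[k] for k in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def normalize(token, vocabulary):
    expanded = "".join(DIGIT_MAP.get(ch, ch) for ch in token.lower())
    # Step 1: Levenshtein distance ranks candidates; Step 2: cosine breaks ties.
    return min(vocabulary, key=lambda w: (levenshtein(expanded, w), -bigram_cosine(expanded, w)))

vocab = ["tomorrow", "forever", "great", "morning"]
print(normalize("2morrow", vocab), normalize("4ever", vocab))  # tomorrow forever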
On conducting better validation studies of automatic metrics in natural
language generation evaluation | Natural language generation (NLG) has received increasing attention, which
has highlighted evaluation as a central methodological concern. Since human
evaluations for these systems are costly, automatic metrics have broad appeal
in NLG. Research in language generation often finds situations where it is
appropriate to apply existing metrics or propose new ones. The application of
these metrics is entirely dependent on validation studies - studies that
determine a metric's correlation to human judgment. However, there are many
details and considerations in conducting strong validation studies. This
document is intended for those validating existing metrics or proposing new
ones in the broad context of NLG: we 1) begin with a write-up of best practices
in validation studies, 2) outline how to adopt these practices, 3) conduct
analyses in the WMT'17 metrics shared task\footnote{Our jupyter notebook
containing the analyses is available at \url{https://github.com}}, and 4)
highlight promising approaches to NLG metrics, and 5) conclude with our opinions on
the future of this area.
| 2,019 | Computation and Language |
Personalizing ASR for Dysarthric and Accented Speech with Limited Data | Automatic speech recognition (ASR) systems have dramatically improved over
the last few years. ASR systems are most often trained from 'typical' speech,
which means that underrepresented groups don't experience the same level of
improvement. In this paper, we present and evaluate finetuning techniques to
improve ASR for users with non-standard speech. We focus on two types of
non-standard speech: speech from people with amyotrophic lateral sclerosis
(ALS) and accented speech. We train personalized models that achieve 62% and
35% relative WER improvement on these two groups, bringing the absolute WER for
ALS speakers, on a test set of message bank phrases, down to 10% for mild
dysarthria and 20% for more serious dysarthria. We show that 71% of the
improvement comes from only 5 minutes of training data. Finetuning a particular
subset of layers (with many fewer parameters) often gives better results than
finetuning the entire model. This is the first step towards building state of
the art ASR models for dysarthric speech.
| 2,021 | Computation and Language |
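The layer-subset finetuning mentioned above can be sketched in PyTorch. This is illustrative only: the stand-in model, layer names, and choice of subset are assumptions, not the paper's ASR architecture.

# Sketch of finetuning only a subset of layers: freeze everything, then
# unfreeze the chosen parameter groups and pass only those to the optimizer.
import torch
import torch.nn as nn

model = nn.Sequential(                  # stand-in for a pretrained ASR encoder
    nn.Linear(80, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 42),                 # output layer (e.g., token logits)
)

for p in model.parameters():            # freeze all pretrained weights
    p.requires_grad = False

finetune_layers = [model[2], model[4]]  # hypothetical subset to personalize
for layer in finetune_layers:
    for p in layer.parameters():
        p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable), "trainable parameters")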
What BERT is not: Lessons from a new suite of psycholinguistic
diagnostics for language models | Pre-training by language modeling has become a popular and successful
approach to NLP tasks, but we have yet to understand exactly what linguistic
capacities these pre-training processes confer upon models. In this paper we
introduce a suite of diagnostics drawn from human language experiments, which
allow us to ask targeted questions about the information used by language
models for generating predictions in context. As a case study, we apply these
diagnostics to the popular BERT model, finding that it can generally
distinguish good from bad completions involving shared category or role
reversal, albeit with less sensitivity than humans, and it robustly retrieves
noun hypernyms, but it struggles with challenging inferences and role-based
event prediction -- and in particular, it shows clear insensitivity to the
contextual impacts of negation.
| 2,020 | Computation and Language |
GraphFlow: Exploiting Conversation Flow with Graph Neural Networks for
Conversational Machine Comprehension | Conversational machine comprehension (MC) has proven significantly more
challenging than traditional MC since it requires better utilization of
conversation history. However, most existing approaches do not effectively
capture conversation history and thus have trouble handling questions involving
coreference or ellipsis. Moreover, when reasoning over passage text, most of
them simply treat it as a word sequence without exploring rich semantic
relationships among words. In this paper, we first propose a simple yet
effective graph structure learning technique to dynamically construct a
question and conversation history aware context graph at each conversation
turn. Then we propose a novel Recurrent Graph Neural Network, and based on
that, we introduce a flow mechanism to model the temporal dependencies in a
sequence of context graphs. The proposed GraphFlow model can effectively
capture conversational flow in a dialog, and shows competitive performance
compared to existing state-of-the-art methods on CoQA, QuAC and DoQA
benchmarks. In addition, visualization experiments show that our proposed model
can offer good interpretability for the reasoning process.
| 2,020 | Computation and Language |
Simple and Effective Text Matching with Richer Alignment Features | In this paper, we present a fast and strong neural approach for general
purpose text matching applications. We explore what is sufficient to build a
fast and well-performing text matching model and propose to keep three key
features available for inter-sequence alignment: original point-wise features,
previous aligned features, and contextual features while simplifying all the
remaining components. We conduct experiments on four well-studied benchmark
datasets across tasks of natural language inference, paraphrase identification
and answer selection. The performance of our model is on par with the
state of the art on all datasets, with far fewer parameters, and its inference
speed is at least six times faster than that of similarly performing models.
| 2,019 | Computation and Language |
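The three alignment features described above can be illustrated with a toy numpy sketch; this is a schematic of the idea (point-wise, aligned, and contextual features concatenated per token), not the authors' released model, and the contextual feature here is a deliberately cheap stand-in.

# Toy sketch of inter-sequence alignment keeping three features per token.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(5, 8))             # token embeddings of sequence A (5 tokens)
b = rng.normal(size=(7, 8))             # token embeddings of sequence B (7 tokens)

def contextual(x, window=1):
    """Cheap contextual feature: mean over a sliding window of neighbors."""
    padded = np.pad(x, ((window, window), (0, 0)), mode="edge")
    return np.stack([padded[i:i + 2 * window + 1].mean(axis=0) for i in range(len(x))])

def soft_align(x, y):
    """Soft attention of every token in x over all tokens in y."""
    scores = x @ y.T                                  # (len_x, len_y)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ y                                # aligned features for x

aligned_a = soft_align(a, b)
features_a = np.concatenate([a, aligned_a, contextual(a)], axis=1)
print(features_a.shape)                               # (5, 24): three 8-dim feature blocks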
MSnet: A BERT-based Network for Gendered Pronoun Resolution | The pre-trained BERT model achieves a remarkable state of the art across a
wide range of tasks in natural language processing. To address gender bias
in the gendered pronoun resolution task, I propose a novel neural network model
based on the pre-trained BERT. This model is a type of mention score classifier
and uses an attention mechanism with no parameters to compute the contextual
representation of entity span, and a vector to represent the triple-wise
semantic similarity among the pronoun and the entities. In stage 1 of the
gendered pronoun resolution task, a variant of this model, trained in the
fine-tuning approach, reduced the multi-class logarithmic loss to 0.3033 in
5-fold cross-validation on the training set and to 0.2795 on the test set. In addition,
this variant won 2nd place with a score of 0.17289 in stage 2 of the task.
The code in this paper is available at:
https://github.com/ziliwang/MSnet-for-Gendered-PronounResolution
| 2,019 | Computation and Language |
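The parameter-free attention mentioned above can be illustrated with a small numpy sketch. The vectors below are placeholders standing in for BERT outputs; this is a schematic of the idea rather than the MSnet code.

# Sketch of a parameter-free attention: the pronoun vector attends over the
# token vectors of an entity span via plain dot products (no learned weights),
# producing a contextual span representation.
import numpy as np

rng = np.random.default_rng(1)
pronoun = rng.normal(size=8)            # contextual vector of the pronoun
span_tokens = rng.normal(size=(3, 8))   # contextual vectors of an entity span

scores = span_tokens @ pronoun          # dot-product scores, no parameters
weights = np.exp(scores - scores.max())
weights /= weights.sum()                # softmax over the span tokens
span_repr = weights @ span_tokens       # weighted sum = span representation

# A simple similarity feature between the pronoun and the span representation.
cosine = span_repr @ pronoun / (np.linalg.norm(span_repr) * np.linalg.norm(pronoun))
print(span_repr.shape, round(float(cosine), 3))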
Sentiment Analysis at SEPLN (TASS)-2019: Sentiment Analysis at Tweet
level using Deep Learning | This paper describes the system submitted to "Sentiment Analysis at SEPLN
(TASS)-2019" shared task. The task includes sentiment analysis of Spanish
tweets, where the tweets are in different dialects spoken in Spain, Peru, Costa
Rica, Uruguay and Mexico. The tweets are short (up to 240 characters) and the
language is informal, i.e., it contains misspellings, emojis, onomatopoeias, etc.
Sentiment analysis includes classification of the tweets into 4 classes, viz.,
Positive, Negative, Neutral and None. To build the proposed system, we use
deep learning networks such as LSTMs.
| 2,019 | Computation and Language |
JUCBNMT at WMT2018 News Translation Task: Character Based Neural Machine
Translation of Finnish to English | In the current work, we present a description of the system submitted to WMT
2018 News Translation Shared task. The system was created to translate news
text from Finnish to English. The system used a Character Based Neural Machine
Translation model to accomplish the given task. The current paper documents the
preprocessing steps, the description of the submitted system and the results
it produced. Our system garnered a BLEU score of 12.9.
| 2,019 | Computation and Language |
Dolphin: A Spoken Language Proficiency Assessment System for Elementary
Education | Spoken language proficiency is critically important for children's growth and
personal development. Due to the limited and imbalanced educational resources
in China, elementary students have few chances to improve their oral
language skills in classes. Verbal fluency tasks (VFTs) were invented to let
the students practice their spoken language proficiency after school. VFTs are
simple but concrete math-related questions that ask students not only to report
answers but also to speak out the entire thinking process. In spite of the great
success of VFTs, they bring a heavy grading burden to elementary teachers. To
alleviate this problem, we develop Dolphin, a spoken language proficiency
assessment system for Chinese elementary education. Dolphin is able to
automatically evaluate both phonological fluency and semantic relevance of
students' VFT answers. We conduct a wide range of offline and online
experiments to demonstrate the effectiveness of Dolphin. In our offline
experiments, we show that Dolphin improves both phonological fluency and
semantic relevance evaluation performance when compared to state-of-the-art
baselines on real-world educational data sets. In our online A/B experiments,
we test Dolphin with 183 teachers from 2 major cities (Hangzhou and Xi'an) in
China for 10 weeks, and the results show that the grading coverage of VFT
assignments is improved by 22\%.
| 2,020 | Computation and Language |
Visualizing RNN States with Predictive Semantic Encodings | Recurrent Neural Networks are an effective and prevalent tool used to model
sequential data such as natural language text. However, their deep nature and
massive number of parameters pose a challenge for those intending to study
precisely how they work. We present a visual technique that gives a high-level
intuition about the semantics of the hidden states within Recurrent Neural
Networks. This semantic encoding allows for hidden states to be compared
throughout the model independent of their internal details. The proposed
technique is demonstrated in a proof-of-concept visualization tool applied to
the natural language processing task of language modelling.
| 2,020 | Computation and Language |
Contrastive Reasons Detection and Clustering from Online Polarized
Debate | This work tackles the problem of unsupervised modeling and extraction of the
main contrastive sentential reasons conveyed by divergent viewpoints on
polarized issues. It proposes a pipeline approach centered around the detection
and clustering of phrases, assimilated to argument facets using a novel Phrase
Author Interaction Topic-Viewpoint model. The evaluation is based on the
informativeness, the relevance and the clustering accuracy of extracted
reasons. The pipeline approach shows a significant improvement over
state-of-the-art methods in contrastive summarization on online debate
datasets.
| 2,019 | Computation and Language |
Predicting Behavior in Cancer-Afflicted Patient and Spouse Interactions
using Speech and Language | Cancer impacts the quality of life of those diagnosed as well as their spouse
caregivers, in addition to potentially influencing their day-to-day behaviors.
There is evidence that effective communication between spouses can improve
well-being related to cancer but it is difficult to efficiently evaluate the
quality of daily life interactions using manual annotation frameworks.
Automated recognition of behaviors based on the interaction cues of speakers
can help analyze interactions in such couples and identify behaviors which are
beneficial for effective communication. In this paper, we present and detail a
dataset of dyadic interactions in 85 real-life cancer-afflicted couples and a
set of observational behavior codes pertaining to interpersonal communication
attributes. We describe and employ neural network-based systems for classifying
these behaviors based on turn-level acoustic and lexical speech patterns.
Furthermore, we investigate the effect of controlling for factors such as
gender, patient/caregiver role and conversation content on behavior
classification. Analysis of our preliminary results indicates the challenges in
this task due to the nature of the targeted behaviors and suggests that
techniques incorporating contextual processing might be better suited to tackle
this problem.
| 2,019 | Computation and Language |
A Speech Test Set of Practice Business Presentations with Additional
Relevant Texts | We present a test corpus of audio recordings and transcriptions of
presentations of students' enterprises together with their slides and
web-pages. The corpus is intended for evaluation of automatic speech
recognition (ASR) systems, especially in conditions where the prior
availability of in-domain vocabulary and named entities is beneficial. The
corpus consists of 39 presentations in English, each up to 90 seconds long. The
speakers are high school students from European countries with English as their
second language. We benchmark three baseline ASR systems on the corpus and show
their limitations.
| 2,019 | Computation and Language |
Multilingual Speech Recognition with Corpus Relatedness Sampling | Multilingual acoustic models have been successfully applied to low-resource
speech recognition. Most existing works have combined many small corpora
together and pretrained a multilingual model by sampling from each corpus
uniformly. The model is eventually fine-tuned on each target corpus. This
approach, however, fails to exploit the relatedness and similarity among
corpora in the training set. For example, the target corpus might benefit more
from a corpus in the same domain or a corpus from a close language. In this
work, we propose a simple but useful sampling strategy to take advantage of
this relatedness. We first compute the corpus-level embeddings and estimate the
similarity between each pair of corpora. Next, we start training the multilingual model
by sampling uniformly from each corpus, and then gradually increase the
probability of sampling from related corpora based on their similarity to the
target corpus. Finally, the model is fine-tuned automatically on the
target corpus. Our sampling strategy outperforms the baseline multilingual
model on 16 low-resource tasks. Additionally, we demonstrate that our corpus
embeddings capture the language and domain information of each corpus.
| 2,019 | Computation and Language |
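The sampling schedule described above, moving from uniform corpus sampling toward similarity-weighted sampling, can be sketched as follows. The similarity values and the linear annealing schedule are placeholders, not the paper's exact settings.

# Sketch of corpus relatedness sampling: start from uniform corpus sampling
# and gradually shift probability mass toward corpora similar to the target.
import numpy as np

corpora = ["target", "same-domain", "close-language", "unrelated"]
similarity = np.array([1.00, 0.80, 0.60, 0.10])   # similarity to the target corpus
sim_probs = similarity / similarity.sum()
uniform = np.full(len(corpora), 1.0 / len(corpora))

total_steps = 10
for step in range(total_steps + 1):
    alpha = step / total_steps                    # 0 -> uniform, 1 -> similarity-based
    probs = (1 - alpha) * uniform + alpha * sim_probs
    if step % 5 == 0:
        print(f"step {step:2d}:", np.round(probs, 3))
    # batch_corpus = np.random.choice(corpora, p=probs)  # pick the next batch's corpus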
SANTLR: Speech Annotation Toolkit for Low Resource Languages | While low resource speech recognition has attracted a lot of attention from
the speech community, few tools are available to facilitate low
resource speech collection. In this work, we present SANTLR: Speech Annotation
Toolkit for Low Resource Languages. It is a web-based toolkit which allows
researchers to easily collect and annotate a corpus of speech in a low resource
language. Annotators may use this toolkit for two purposes: transcription or
recording. In transcription, annotators would transcribe audio files provided
by the researchers; in recording, annotators would record their voice by
reading provided texts. We highlight two properties of this toolkit. First,
SANTLR has a very user-friendly User Interface (UI). Both researchers and
annotators may use this simple web interface to interact. There is no
requirement for the annotators to have any expertise in audio or text
processing. The toolkit would handle all preprocessing and postprocessing
steps. Second, we employ a multi-step ranking mechanism to facilitate the
annotation process. In particular, the toolkit would give higher priority to
utterances which are easier to annotate and are more beneficial to achieving
the goal of the annotation, e.g. quickly training an acoustic model.
| 2,019 | Computation and Language |
The TALP-UPC System for the WMT Similar Language Task: Statistical vs
Neural Machine Translation | Although the problem of similar language translation has been an area of
research interest for many years, it is still far from being solved. In
this paper, we study the performance of two popular approaches: statistical and
neural. We conclude that both methods yield similar results; however, the
performance varies depending on the language pair. While the statistical
approach outperforms the neural one by a difference of 6 BLEU points for the
Spanish-Portuguese language pair, the proposed neural model surpasses the
statistical one by a difference of 2 BLEU points for Czech-Polish. In the
former case, the language similarity (based on perplexity) is much higher than
in the latter case. Additionally, we report negative results for the system
combination with back-translation. Our TALP-UPC system submission won 1st place
for Czech-to-Polish and 2nd place for Spanish-to-Portuguese in the official
evaluation of the 1st WMT Similar Language Translation task.
| 2,020 | Computation and Language |
Word2vec to behavior: morphology facilitates the grounding of language
in machines | Enabling machines to respond appropriately to natural language commands could
greatly expand the number of people to whom they could be of service. Recently,
advances in neural network-trained word embeddings have empowered non-embodied
text-processing algorithms, and suggest they could be of similar utility for
embodied machines. Here we introduce a method that does so by training robots
to act similarly to semantically-similar word2vec encoded commands. We show
that this enables them, after training, to respond appropriately to
previously unheard commands. Finally, we show that inducing such an alignment
between motoric and linguistic similarities can be facilitated or hindered by
the mechanical structure of the robot. This points to future, large scale
methods that find and exploit relationships between action, language, and robot
structure.
| 2,020 | Computation and Language |
Semi-supervised Thai Sentence Segmentation Using Local and Distant Word
Representations | A sentence is typically treated as the minimal syntactic unit used for
extracting valuable information from a longer piece of text. However, in
written Thai, there are no explicit sentence markers. We propose a deep
learning model for the task of sentence segmentation that includes three main
contributions. First, we integrate n-gram embedding as a local representation
to capture word groups near sentence boundaries. Second, to focus on the
keywords of dependent clauses, we combine the model with a distant
representation obtained from self-attention modules. Finally, due to the
scarcity of labeled data, for which annotation is difficult and time-consuming,
we also investigate and adapt Cross-View Training (CVT) as a semi-supervised
learning technique, allowing us to utilize unlabeled data to improve the model
representations. In the Thai sentence segmentation experiments, our model
reduced the relative error by 7.4% and 10.5% compared with the baseline models
on the Orchid and UGWC datasets, respectively. We also applied our model to the
task of pronunciation recovery on the IWSLT English dataset. Our model
outperformed the prior sequence tagging models, achieving a relative error
reduction of 2.5%. Ablation studies revealed that utilizing n-gram
representations was the main contributing factor for Thai, while the
semi-supervised training helped the most for English.
| 2,019 | Computation and Language |
Automatic Fact-Checking Using Context and Discourse Information | We study the problem of automatic fact-checking, paying special attention to
the impact of contextual and discourse information. We address two related
tasks: (i) detecting check-worthy claims, and (ii) fact-checking claims. We
develop supervised systems based on neural networks, kernel-based support
vector machines, and combinations thereof, which make use of rich input
representations in terms of discourse cues and contextual features. For the
check-worthiness estimation task, we focus on political debates, and we model
the target claim in the context of the full intervention of a participant and
the previous and the following turns in the debate, taking into account
contextual meta information. For the fact-checking task, we focus on answer
verification in a community forum, and we model the veracity of the answer with
respect to the entire question--answer thread in which it occurs as well as
with respect to other related posts from the entire forum. We develop annotated
datasets for both tasks and we run extensive experimental evaluation,
confirming that both types of information, but especially contextual
features, play an important role.
| 2,019 | Computation and Language |
JUMT at WMT2019 News Translation Task: A Hybrid approach to Machine
Translation for Lithuanian to English | In the current work, we present a description of the system submitted to WMT
2019 News Translation Shared task. The system was created to translate news
text from Lithuanian to English. To accomplish the given task, our system used
a Word Embedding based Neural Machine Translation model to post edit the
outputs generated by a Statistical Machine Translation model. The current paper
documents the architecture of our model, descriptions of the various modules
and the results it produced. Our system garnered a BLEU score of
17.6.
| 2,019 | Computation and Language |
Separating Argument Structure from Logical Structure in AMR | The AMR (Abstract Meaning Representation) formalism for representing meaning
of natural language sentences was not designed to deal with scope and
quantifiers. By extending AMR with indices for contexts and formulating
constraints on these contexts, a formalism is derived that makes correct
predictions for inferences involving negation and bound variables. The
attractive core predicate-argument structure of AMR is preserved. The resulting
framework is similar to that of Discourse Representation Theory.
| 2,020 | Computation and Language |
Beyond English-Only Reading Comprehension: Experiments in Zero-Shot
Multilingual Transfer for Bulgarian | Recently, reading comprehension models achieved near-human performance on
large-scale datasets such as SQuAD, CoQA, MS MARCO, RACE, etc. This is largely
due to the release of pre-trained contextualized representations such as BERT
and ELMo, which can be fine-tuned for the target task. Despite those advances
and the creation of more challenging datasets, most of the work is still done
for English. Here, we study the effectiveness of multilingual BERT fine-tuned
on large-scale English datasets for reading comprehension (e.g., for RACE), and
we apply it to Bulgarian multiple-choice reading comprehension. We propose a
new dataset containing 2,221 questions from matriculation exams for twelfth
grade in various subjects (history, biology, geography and philosophy), and 412
additional questions from online quizzes in history. While the quiz authors
gave no relevant context, we incorporate knowledge from Wikipedia, retrieving
documents matching the combination of question + each answer option. Moreover,
we experiment with different indexing and pre-training strategies. The
evaluation results show accuracy of 42.23%, which is well above the baseline of
24.89%.
| 2,019 | Computation and Language |
Predicting Actions to Help Predict Translations | We address the task of text translation on the How2 dataset using a state of
the art transformer-based multimodal approach. The question we ask is
whether visual features can support the translation process. In particular,
given that this dataset is extracted from videos, we focus on the translation
of actions, which we believe are poorly captured in the static image-text
datasets currently used for multimodal translation. For that purpose, we
extract different types of action features from the videos and carefully
investigate how helpful this visual information is by testing whether it can
increase translation quality when used in conjunction with (i) the original
text and (ii) the original text where action-related words (or all verbs) are
masked out. The latter is a simulation that helps us assess the utility of the
image in cases where the text does not provide enough context about the action,
or in the presence of noise in the input text.
| 2,019 | Computation and Language |
Processamento de linguagem natural em Portugu\^es e aprendizagem
profunda para o dom\'inio de \'Oleo e G\'as | Over the last few decades, institutions around the world have been challenged
to deal with the sheer volume of information captured in unstructured formats,
especially in textual documents. The so-called Digital Transformation age,
characterized by important technological advances and the advent of disruptive
methods in Artificial Intelligence, offers opportunities to make better use of
this information. Recent techniques in Natural Language Processing (NLP) with
Deep Learning approaches make it possible to efficiently process large volumes of data
in order to obtain relevant information, identify patterns, and classify text,
among other applications. In this context, the highly technical vocabulary of the
Oil and Gas (O&G) domain represents a challenge for these NLP algorithms, since
terms can assume meanings very different from their common-sense
understanding. The search for suitable mathematical representations and
specific models requires a large amount of representative corpora in the O&G
domain. However, public access to this material is scarce in the scientific
literature, especially considering the Portuguese language. This paper presents
a literature review about the main techniques for deep learning NLP and their
major applications for O&G domain in Portuguese.
| 2,019 | Computation and Language |
Thoth: Improved Rapid Serial Visual Presentation using Natural Language
Processing | Thoth is a tool designed to combine many different types of speed reading
technology. The key insight is the use of natural language parsing for better
rapid serial visual presentation and more effective presentation of reading
information.
| 2,019 | Computation and Language |
Hybrid Neural Tagging Model for Open Relation Extraction | Open relation extraction (ORE) remains a challenge to obtain a semantic
representation by discovering arbitrary relation tuples from the unstructured
text. Conventional methods heavily depend on feature engineering or syntactic
parsing, they are inefficient or error-cascading. Recently, leveraging
supervised deep learning structures to address the ORE task is an
extraordinarily promising way. However, there are two main challenges: (1) The
lack of enough labeled corpus to support supervised training; (2) The
exploration of specific neural architecture that adapts to the characteristics
of open relation extracting. In this paper, to overcome these difficulties, we
build a large-scale, high-quality training corpus in a fully automated way, and
design a tagging scheme to assist in transforming the ORE task into a sequence
tagging process. Furthermore, we propose a hybrid neural network model
(HNN4ORT) for open relation tagging. The model employs the Ordered Neurons LSTM
to encode potential syntactic information for capturing the associations among
the arguments and relations. It also introduces a novel Dual Aware Mechanism,
combining Local-aware Attention and Global-aware Convolution. The two
mechanisms complement each other, so that the model can take sentence-level
semantics as a global perspective while exploiting salient local
features to achieve sparse annotation. Experimental results on various testing
sets show that our model can achieve state-of-the-art performances compared to
the conventional methods or other neural models.
| 2,020 | Computation and Language |
Exploring Neural Net Augmentation to BERT for Question Answering on
SQUAD 2.0 | Enhancing machine capabilities to answer questions has been a topic of
considerable focus in recent years of NLP research. Language models like
Embeddings from Language Models (ELMo)[1] and Bidirectional Encoder
Representations from Transformers (BERT) [2] have been very successful in
developing general purpose language models that can be optimized for a large
number of downstream language tasks. In this work, we focused on augmenting the
pre-trained BERT language model with different output neural net architectures
and compared their performance on question answering task posed by the Stanford
Question Answering Dataset 2.0 (SQUAD 2.0) [3]. Additionally, we also
fine-tuned the pre-trained BERT model parameters to demonstrate its
effectiveness in adapting to specialized language tasks. Our best output
network is the contextualized CNN, which performs on the unanswerable and
answerable question answering tasks with F1 scores of 75.32 and 64.85,
respectively.
| 2,020 | Computation and Language |
Pars-ABSA: an Aspect-based Sentiment Analysis dataset for Persian | Due to the increased availability of online reviews, sentiment analysis has
witnessed booming interest from researchers. Sentiment analysis is a
computational treatment of sentiment used to extract and understand the
opinions of authors. While many systems were built to predict the sentiment of
a document or a sentence, many others provide the necessary detail on various
aspects of the entity (i.e. aspect-based sentiment analysis). Most of the
available data resources were tailored to English and the other popular
European languages. Although Persian is a language with more than 110 million
speakers, to the best of our knowledge, there is a lack of public datasets on
aspect-based sentiment analysis for Persian. This paper provides a manually
annotated Persian dataset, Pars-ABSA, which is verified by 3 native Persian
speakers. The dataset consists of 5,114 positive, 3,061 negative and 1,827
neutral data samples from 5,602 unique reviews. Moreover, as a baseline, this
paper reports the performance of some state-of-the-art aspect-based sentiment
analysis methods with a focus on deep learning, on Pars-ABSA. The obtained
results are impressive compared to similar English state-of-the-art results.
| 2,019 | Computation and Language |
MacNet: Transferring Knowledge from Machine Comprehension to
Sequence-to-Sequence Models | Machine Comprehension (MC) is one of the core problems in natural language
processing, requiring both understanding of the natural language and knowledge
about the world. Rapid progress has been made since the release of several
benchmark datasets, and recently the state-of-the-art models even surpass human
performance on the well-known SQuAD evaluation. In this paper, we transfer
knowledge learned from machine comprehension to the sequence-to-sequence tasks
to deepen the understanding of the text. We propose MacNet: a novel
encoder-decoder supplementary architecture to the widely used attention-based
sequence-to-sequence models. Experiments on neural machine translation (NMT)
and abstractive text summarization show that our proposed framework can
significantly improve the performance of the baseline models, and our method
for the abstractive text summarization achieves the state-of-the-art results on
the Gigaword dataset.
| 2,019 | Computation and Language |
Sparsity Emerges Naturally in Neural Language Models | Concerns about interpretability, computational resources, and principled
inductive priors have motivated efforts to engineer sparse neural models for
NLP tasks. If sparsity is important for NLP, might well-trained neural models
naturally become roughly sparse? Using the Taxi-Euclidean norm to measure
sparsity, we find that frequent input words are associated with concentrated or
sparse activations, while frequent target words are associated with dispersed
activations but concentrated gradients. We find that gradients associated with
function words are more concentrated than the gradients of content words, even
controlling for word frequency.
| 2,019 | Computation and Language |
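Sparsity measures built from the taxicab (L1) and Euclidean (L2) norms can be illustrated with Hoyer's score, shown below as a stand-in for the measure used above; whether this matches the paper's exact definition is an assumption.

# Sketch of an L1/L2-based sparsity score (Hoyer's measure): 1 for a one-hot
# vector, 0 for a perfectly uniform vector. Stand-in for the taxicab/Euclidean
# style measure mentioned above; the paper's exact definition may differ.
import numpy as np

def hoyer_sparsity(v):
    v = np.abs(np.asarray(v, dtype=float))
    n = v.size
    l1, l2 = v.sum(), np.sqrt((v ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

print(round(hoyer_sparsity([0, 0, 5, 0]), 3))        # 1.0  (fully concentrated)
print(round(hoyer_sparsity([1, 1, 1, 1]), 3))        # 0.0  (fully dispersed)
print(round(hoyer_sparsity([3, 0.5, 0.2, 0.1]), 3))  # somewhere in between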
An Unsupervised Character-Aware Neural Approach to Word and Context
Representation Learning | In the last few years, neural networks have been intensively used to develop
meaningful distributed representations of words and contexts around them. When
these representations, also known as "embeddings", are learned from
unsupervised large corpora, they can be transferred to different tasks with
positive effects in terms of performance, especially when only limited
supervision is available. In this work, we further extend this concept, and
we present an unsupervised neural architecture that jointly learns word and
context embeddings, processing words as sequences of characters. This allows
our model to spot the regularities that are due to the word morphology, and to
avoid the need of a fixed-sized input vocabulary of words. We show that we can
learn compact encoders that, despite the relatively small number of parameters,
reach high performance in downstream tasks, compared with related
state-of-the-art approaches or with fully supervised methods.
| 2,018 | Computation and Language |
Dialogue Act Classification in Group Chats with DAG-LSTMs | Dialogue act (DA) classification has been studied for the past two decades
and has several key applications such as workflow automation and conversation
analytics. To address this problem, researchers have used various traditional
machine learning models and, more recently, deep neural network models such as
hierarchical convolutional neural networks (CNNs) and long short-term memory
(LSTM) networks. In this paper, we introduce a new model architecture,
directed-acyclic-graph LSTM (DAG-LSTM) for DA classification. A DAG-LSTM
exploits the turn-taking structure naturally present in a multi-party
conversation, and encodes this relation in its model structure. Using the STAC
corpus, we show that the proposed method performs roughly 0.8% better in
accuracy and 1.2% better in macro-F1 score when compared to existing methods.
The proposed method is generic and not limited to conversation applications.
| 2,019 | Computation and Language |
Word Sense Disambiguation using Diffusion Kernel PCA | One of the major problems in natural language processing (NLP) is the word
sense disambiguation (WSD) problem. It is the task of computationally
identifying the right sense of a polysemous word based on its context.
Resolving the WSD problem boosts the accuracy of many NLP focused algorithms
such as text classification and machine translation. In this paper, we
introduce a new supervised algorithm for WSD, that is based on Kernel PCA and
Semantic Diffusion Kernel, which is called Diffusion Kernel PCA (DKPCA). DKPCA
grasps the semantic similarities within terms, and it is based on PCA. These
properties enable us to perform feature extraction and dimension reduction
guided by semantic similarities within the algorithm. Our empirical results
on SensEval data demonstrate that DKPCA achieves higher or very close accuracy
results compared to SVM and KPCA with various well-known kernels when the
ratio of labeled data is small. Considering that labeled data are scarce whereas
large quantities of unlabeled textual data are easily accessible, these are
highly encouraging first results for developing DKPCA further.
| 2,019 | Computation and Language |
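The combination of a semantic diffusion kernel with kernel PCA can be sketched as follows. The toy term-similarity matrix and diffusion parameter are placeholders, not the authors' setup derived from SensEval data.

# Toy sketch of Diffusion Kernel PCA: exponentiate a term-similarity matrix
# (semantic diffusion), center the resulting kernel, and project onto its top
# eigenvectors (kernel PCA).
import numpy as np

S = np.array([                      # toy symmetric term similarity matrix
    [0.0, 0.8, 0.1, 0.0],
    [0.8, 0.0, 0.2, 0.1],
    [0.1, 0.2, 0.0, 0.9],
    [0.0, 0.1, 0.9, 0.0],
])
beta = 1.0                          # diffusion parameter (assumed)

# Semantic diffusion kernel: matrix exponential exp(beta * S) via eigendecomposition.
vals, vecs = np.linalg.eigh(S)
K = vecs @ np.diag(np.exp(beta * vals)) @ vecs.T

# Kernel PCA: double-center K, then eigendecompose and keep the top components.
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
evals, evecs = np.linalg.eigh(Kc)
order = np.argsort(evals)[::-1][:2]           # top-2 principal components
components = evecs[:, order] * np.sqrt(np.clip(evals[order], 0, None))

print(np.round(components, 3))                # low-dimensional term representations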
Structured Knowledge Discovery from Massive Text Corpus | Nowadays, with the booming development of the Internet, people benefit from
its convenience due to its open and sharing nature. A large volume of natural
language texts is being generated by users in various forms, such as search
queries, documents, and social media posts. As the unstructured text corpus is
usually noisy and messy, it becomes imperative to correctly identify and
accurately annotate structured information in order to obtain meaningful
insights or better understand unstructured texts. On the other hand, the
existing structured information, which embodies our knowledge such as entity or
concept relations, often suffers from incompleteness or quality-related issues.
Given a gigantic collection of texts which offers rich semantic information, it
is also important to harness the massiveness of the unannotated text corpus to
expand and refine existing structured knowledge with less annotation effort.
In this dissertation, I will introduce principles, models, and algorithms for
effective structured knowledge discovery from the massive text corpus. We are
generally interested in obtaining insights and better understanding
unstructured texts with the help of structured annotations or by
structure-aware modeling. Also, given the existing structured knowledge, we are
interested in expanding its scale and improving its quality by harnessing the
massiveness of the text corpus. In particular, four problems are studied in
this dissertation: Structured Intent Detection for Natural Language
Understanding, Structure-aware Natural Language Modeling, Generative Structured
Knowledge Expansion, and Synonym Refinement on Structured Knowledge.
| 2,019 | Computation and Language |
Text-to-SQL Generation for Question Answering on Electronic Medical
Records | Electronic medical records (EMR) contain comprehensive patient information
and are typically stored in a relational database with multiple tables.
Effective and efficient patient information retrieval from EMR data is a
challenging task for medical experts. Question-to-SQL generation methods tackle
this problem by first predicting the SQL query for a given question about a
database, and then, executing the query on the database. However, most of the
existing approaches have not been adapted to the healthcare domain due to a
lack of healthcare Question-to-SQL datasets for learning models specific to this
domain. In addition, the wide use of abbreviated terminology and possible
typos in questions introduce additional challenges for accurately generating
the corresponding SQL queries. In this paper, we tackle these challenges by
developing a deep learning based TRanslate-Edit Model for Question-to-SQL
(TREQS) generation, which adapts the widely used sequence-to-sequence model to
directly generate the SQL query for a given question, and further performs the
required edits using an attentive-copying mechanism and task-specific look-up
tables. Based on the widely used publicly available electronic medical
database, we create a new large-scale Question-SQL pair dataset, named
MIMICSQL, in order to perform the Question-to-SQL generation task in healthcare
domain. An extensive set of experiments is conducted to evaluate the
performance of our proposed model on MIMICSQL. Both quantitative and
qualitative experimental results indicate the flexibility and efficiency of our
proposed method in predicting condition values and its robustness to random
questions with abbreviations and typos.
| 2,020 | Computation and Language |
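The look-up-table editing of condition values mentioned above can be illustrated with a small sketch. The table contents, the regular expression, and the use of difflib for fuzzy matching are assumptions for illustration; the actual TREQS recovery technique may differ.

# Sketch of recovering condition values in a generated SQL query by matching
# each predicted value against a look-up table of values present in the
# corresponding database column.
import difflib
import re

lookup_table = {
    "diagnosis": ["hypertension", "type 2 diabetes", "atrial fibrillation"],
    "admission_type": ["emergency", "elective", "urgent"],
}

def recover_conditions(sql, table):
    def fix(match):
        column, value = match.group(1), match.group(2)
        candidates = table.get(column, [])
        best = difflib.get_close_matches(value, candidates, n=1, cutoff=0.0)
        return f'{column} = "{best[0]}"' if best else match.group(0)
    return re.sub(r'(\w+)\s*=\s*"([^"]+)"', fix, sql)

pred = 'SELECT count(*) FROM demographic WHERE diagnosis = "hypertnsion" AND admission_type = "emergncy"'
print(recover_conditions(pred, lookup_table))
# -> ... WHERE diagnosis = "hypertension" AND admission_type = "emergency"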
DLGNet: A Transformer-based Model for Dialogue Response Generation | Neural dialogue models, despite their successes, still suffer from lack of
relevance, diversity, and in many cases coherence in their generated responses.
These issues can be attributed to reasons including (1) short-range model
architectures that capture limited temporal dependencies, (2) limitations of
the maximum likelihood training objective, (3) the concave entropy profile of
dialogue datasets resulting in short and generic responses, and (4) the
out-of-vocabulary problem leading to generation of a large number of <UNK>
tokens. On the other hand, transformer-based models such as GPT-2 have
demonstrated an excellent ability to capture long-range structures in language
modeling tasks. In this paper, we present DLGNet, a transformer-based model for
dialogue modeling. We specifically examine the use of DLGNet for multi-turn
dialogue response generation. In our experiments, we evaluate DLGNet on the
open-domain Movie Triples dataset and the closed-domain Ubuntu Dialogue
dataset. DLGNet models, although trained with only the maximum likelihood
objective, achieve significant improvements over state-of-the-art multi-turn
dialogue models. They also produce the best performance to date on the two datasets
based on several metrics, including BLEU, ROUGE, and distinct n-gram. Our
analysis shows that the performance improvement is mostly due to the
combination of (1) the long-range transformer architecture with (2) the
injection of random informative paddings. Other contributing factors include
the joint modeling of dialogue context and response, and the 100% tokenization
coverage from the byte pair encoding (BPE).
| 2,019 | Computation and Language |
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact
Verification | Fact verification (FV) is a challenging task which requires retrieving
relevant evidence from plain text and using the evidence to verify given claims.
Many claims require simultaneously integrating and reasoning over several pieces
of evidence for verification. However, previous work employs simple models to
extract information from evidence without letting evidence communicate with
each other, e.g., merely concatenate the evidence for processing. Therefore,
these methods are unable to grasp sufficient relational and logical information
among the evidence. To alleviate this issue, we propose a graph-based evidence
aggregating and reasoning (GEAR) framework which enables information to
transfer on a fully-connected evidence graph and then utilizes different
aggregators to collect multi-evidence information. We further employ BERT, an
effective pre-trained language representation model, to improve the
performance. Experimental results on a large-scale benchmark dataset FEVER have
demonstrated that GEAR could leverage multi-evidence information for FV and
thus achieves promising results, with a test FEVER score of 67.10%. Our code
is available at https://github.com/thunlp/GEAR.
| 2,019 | Computation and Language |
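One round of message passing on a fully connected evidence graph, followed by simple aggregators, can be sketched as follows. The sentence vectors are placeholders; this is a schematic of the aggregation idea, not the released GEAR code.

# Schematic of graph-based evidence aggregation: every evidence node attends
# over all nodes (fully connected graph); the updated node states are then
# pooled and combined with the claim representation.
import numpy as np

rng = np.random.default_rng(2)
claim = rng.normal(size=16)             # claim sentence embedding (placeholder)
evidence = rng.normal(size=(4, 16))     # four evidence sentence embeddings

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# One propagation step over the fully connected evidence graph.
attn = softmax(evidence @ evidence.T)   # (4, 4) attention among evidence nodes
evidence_updated = attn @ evidence      # each node gathers information from the others

# Simple aggregators collect multi-evidence information for the claim.
mean_pool = evidence_updated.mean(axis=0)
max_pool = evidence_updated.max(axis=0)
features = np.concatenate([claim, mean_pool, max_pool])   # fed to a verdict classifier
print(features.shape)                                     # (48,)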
Self-Knowledge Distillation in Natural Language Processing | Since deep learning became a key player in natural language processing (NLP),
many deep learning models have been showing remarkable performances in a
variety of NLP tasks, and in some cases, they are even outperforming humans.
Such high performance can be explained by efficient knowledge representation of
deep learning models. While many methods have been proposed to learn more
efficient representations, knowledge distillation from pretrained deep networks
suggests that we can use more information from the soft target probabilities to
train other neural networks. In this paper, we propose a new knowledge
distillation method, self-knowledge distillation, based on the soft target
probabilities of the training model itself, where multimode information is
distilled from the word embedding space right below the softmax layer. Due to
the time complexity, our method approximates the soft target probabilities. In
experiments, we applied the proposed method to two different and fundamental
NLP tasks: language modeling and neural machine translation. The experimental
results show that our proposed method improves performance on the tasks.
| 2,019 | Computation and Language |
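The objective described above can be sketched as a mix of the usual hard-label loss and a soft-target term derived from the model's own predictions. The mixing weight, temperature, and the way soft targets are obtained here are assumptions, not the paper's exact formulation (in practice the soft targets would come from the model's earlier predictions or an embedding-space approximation rather than the current logits).

# Sketch of a self-distillation style loss: cross-entropy on the hard label
# plus a soft-target term computed from the model's own predictions.
import numpy as np

def softmax(z, t=1.0):
    e = np.exp((z - z.max()) / t)
    return e / e.sum()

def self_distillation_loss(logits, label, alpha=0.3, temperature=2.0):
    probs = softmax(logits)
    hard_loss = -np.log(probs[label] + 1e-12)       # usual cross-entropy
    soft_target = softmax(logits, t=temperature)    # model's own soft targets (illustrative)
    soft_loss = -np.sum(soft_target * np.log(probs + 1e-12))
    return (1 - alpha) * hard_loss + alpha * soft_loss

logits = np.array([2.0, 0.5, -1.0, 0.1])            # word scores from the model
print(round(self_distillation_loss(logits, label=0), 4))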
DELTA: A DEep learning based Language Technology plAtform | In this paper we present DELTA, a deep learning based language technology
platform. DELTA is an end-to-end platform designed to solve industry level
natural language and speech processing problems. It integrates most popular
neural network models for training as well as comprehensive deployment tools
for production. DELTA aims to provide easy and fast experiences for using,
deploying, and developing natural language processing and speech models for
both academia and industry use cases. We demonstrate reliable performance
with DELTA on several natural language processing and speech tasks, including
text classification, named entity recognition, natural language inference,
speech recognition, speaker verification, etc. DELTA has been used for
developing several state-of-the-art algorithms for publications and for delivering
production systems that serve millions of users.
| 2,019 | Computation and Language |