Titles | Abstracts | Years | Categories |
---|---|---|---|
Acquisition of Inflectional Morphology in Artificial Neural Networks
With Prior Knowledge | How does knowledge of one language's morphology influence learning of
inflection rules in a second one? In order to investigate this question in
artificial neural network models, we perform experiments with a
sequence-to-sequence architecture, which we train on different combinations of
eight source and three target languages. A detailed analysis of the model
outputs suggests the following conclusions: (i) if source and target language
are closely related, acquisition of the target language's inflectional
morphology constitutes an easier task for the model; (ii) knowledge of a
prefixing (resp. suffixing) language makes acquisition of a suffixing (resp.
prefixing) language's morphology more challenging; and (iii) surprisingly, a
source language which exhibits an agglutinative morphology simplifies learning
of a second language's inflectional morphology, independent of their
relatedness.
| 2,019 | Computation and Language |
Zero-shot Dependency Parsing with Pre-trained Multilingual Sentence
Representations | We investigate whether off-the-shelf deep bidirectional sentence
representations trained on a massively multilingual corpus (multilingual BERT)
enable the development of an unsupervised universal dependency parser. This
approach only leverages a mix of monolingual corpora in many languages and does
not require any translation data, making it applicable to low-resource
languages. In our experiments we outperform the best CoNLL 2018
language-specific systems in all of the shared task's six truly low-resource
languages while using a single system. However, we also find that (i) parsing
accuracy still varies dramatically when changing the training languages and
(ii) in some target languages zero-shot transfer fails under all tested
conditions, raising concerns about the 'universality' of the whole approach.
| 2,019 | Computation and Language |
From the Paft to the Fiiture: a Fully Automatic NMT and Word Embeddings
Method for OCR Post-Correction | Many historical corpora suffer from errors introduced by the OCR
(optical character recognition) methods used in the digitization process.
Correcting these errors manually is time-consuming, and most automatic
approaches rely on rules or supervised machine learning. We present a fully
automatic, unsupervised way of extracting parallel
data for training a character-based sequence-to-sequence NMT (neural machine
translation) model to conduct OCR error correction.
| 2,019 | Computation and Language |
SmokEng: Towards Fine-grained Classification of Tobacco-related Social
Media Text | Contemporary datasets on tobacco consumption focus on one of two topics,
either public health mentions and disease surveillance, or sentiment analysis
on topical tobacco products and services. However, two primary considerations
are not accounted for: the language of the affected demographic, and a
combination of the topics mentioned above in a fine-grained classification
mechanism. In this paper, we create a dataset of 3144 tweets, which are
selected based on the presence of colloquial slang related to smoking and
analyze it based on the semantics of the tweet. Each class is created and
annotated based on the content of the tweets such that further hierarchical
methods can be easily applied.
Further, we demonstrate the efficacy of standard text classification methods on
this dataset by designing experiments that perform both binary and multi-class
classification. Our experiments tackle the identification of either
a specific topic (such as tobacco product promotion), a general mention
(cigarettes and related products) or a more fine-grained classification. This
methodology paves the way for further analysis, such as understanding sentiment
or style, which makes this dataset a vital contribution to both disease
surveillance and tobacco use research.
| 2,020 | Computation and Language |
VAIS ASR: Building a conversational speech recognition system using
language model combination | Automatic Speech Recognition (ASR) systems have been evolving quickly and
reaching human parity in certain cases. These systems usually perform well on
read and clean speech; however, most available systems struggle when the
speaking style is conversational and the environment is noisy. It is not
straightforward to tackle such problems due to difficulties in data collection
for both speech and text. In this paper, we attempt to mitigate these problems
using language model combination techniques that allow us to utilize both a
large amount of written-style text and a small amount of conversational text.
Evaluation on the VLSP 2019 ASR challenges
showed that our system achieved 4.85% WER on the VLSP 2018 and 15.09% WER on
the VLSP 2019 data sets.
| 2,019 | Computation and Language |
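
The abstract above does not spell out which combination technique is used; a standard option for mixing a large written-style language model with a small conversational one is linear interpolation. The sketch below illustrates that idea only and is not the authors' system; the toy unigram models and the interpolation weight are hypothetical.

```python
# Minimal sketch of language model combination by linear interpolation:
# P(w) = lam * P_written(w) + (1 - lam) * P_conversational(w).
# The two toy unigram models stand in for a large written-style LM and a
# small conversational LM; `lam` would normally be tuned on held-out
# conversational data.

def interpolate(p_written, p_conversational, lam=0.7):
    vocab = set(p_written) | set(p_conversational)
    return {w: lam * p_written.get(w, 0.0)
               + (1.0 - lam) * p_conversational.get(w, 0.0)
            for w in vocab}

p_written = {"the": 0.5, "report": 0.3, "states": 0.2}
p_conversational = {"the": 0.4, "yeah": 0.4, "states": 0.2}

combined = interpolate(p_written, p_conversational)
print(sorted(combined.items(), key=lambda kv: -kv[1]))
```
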
VAIS Hate Speech Detection System: A Deep Learning based Approach for
System Combination | Nowadays, social network sites (SNSs) such as Facebook and Twitter are common
places where people express their opinions and sentiments and share information
with others. However, some people use SNSs to post abuse and harassment threats
in order to prevent other SNS users from expressing themselves and seeking
different opinions. To deal with this problem, SNSs have to spend considerable
resources, including human moderators, to clean up such content. In this paper,
we propose a supervised learning model based on an ensemble method to detect
hateful content on SNSs and make conversations on SNSs more effective. Our
proposed model took first place on the public dashboard with a macro F1-score
of 0.730 and third place on the private dashboard with a macro F1-score of
0.584 at the sixth international workshop on Vietnamese Language and Speech
Processing 2019.
| 2,019 | Computation and Language |
VATEX Captioning Challenge 2019: Multi-modal Information Fusion and
Multi-stage Training Strategy for Video Captioning | Multi-modal information is essential to describe what has happened in a
video. In this work, we represent videos using various appearance, motion and
audio information, guided by the video topic. By following a multi-stage
training strategy, our experiments show steady and significant improvements on
the VATEX
benchmark. This report presents an overview and comparative analysis of our
system designed for both Chinese and English tracks on VATEX Captioning
Challenge 2019.
| 2,019 | Computation and Language |
Progress Notes Classification and Keyword Extraction using
Attention-based Deep Learning Models with BERT | Various deep learning algorithms have been developed to analyze different
types of clinical data, including clinical text classification and information
extraction from 'free text'. However, automating keyword extraction from
clinical notes is still challenging. The challenges include
dealing with noisy clinical notes which contain various abbreviations, possible
typos, and unstructured sentences. The objective of this research is to
investigate the attention-based deep learning models to classify the
de-identified clinical progress notes extracted from a real-world EHR system.
The attention-based deep learning models can be used to interpret the models
and understand the critical words that drive the correct or incorrect
classification of the clinical progress notes. The attention-based models in
this research yield human-interpretable text classification models. The results
show that the fine-tuned BERT with the
attention layer can achieve a high classification accuracy of 97.6%, which is
higher than the baseline fine-tuned BERT classification model. In this
research, we also demonstrate that the attention-based models can identify
relevant keywords that are strongly related to the clinical progress note
categories.
| 2,019 | Computation and Language |
Transformers without Tears: Improving the Normalization of
Self-Attention | We evaluate three simple, normalization-centric changes to improve
Transformer training. First, we show that pre-norm residual connections
(PreNorm) and smaller initializations enable warmup-free, validation-based
training with large learning rates. Second, we propose $\ell_2$ normalization
with a single scale parameter (ScaleNorm) for faster training and better
performance. Finally, we reaffirm the effectiveness of normalizing word
embeddings to a fixed length (FixNorm). On five low-resource translation pairs
from TED Talks-based corpora, these changes always converge, giving an average
+1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on
IWSLT'15 English-Vietnamese. We observe sharper performance curves, more
consistent gradient norms, and a linear relationship between activation scaling
and decoder depth. Surprisingly, in the high-resource setting (WMT'14
English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades
performance.
| 2,020 | Computation and Language |
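
ScaleNorm, as described in the abstract above, replaces LayerNorm with an $\ell_2$ normalization rescaled by a single learned scalar g, and FixNorm rescales word embeddings to a fixed length. The following is a minimal PyTorch sketch written from that description, not the authors' released code; the epsilon, the initialization of g, and the tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ScaleNorm(nn.Module):
    """l2-normalize activations along the model dimension, rescaled by one learned scalar g."""
    def __init__(self, init_scale: float, eps: float = 1e-5):
        super().__init__()
        self.g = nn.Parameter(torch.tensor(init_scale))
        self.eps = eps

    def forward(self, x):
        return self.g * x / (x.norm(dim=-1, keepdim=True) + self.eps)

def fix_norm(embedding: torch.Tensor, length: float = 1.0, eps: float = 1e-5):
    """FixNorm: rescale every embedding vector to a fixed l2 length."""
    return length * embedding / (embedding.norm(dim=-1, keepdim=True) + eps)

x = torch.randn(2, 5, 512)        # (batch, positions, d_model)
norm = ScaleNorm(512 ** 0.5)      # initializing g to sqrt(d_model) is one common choice
print(norm(x).shape, fix_norm(torch.randn(100, 512)).norm(dim=-1)[:3])
```
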
Knowledge-guided Unsupervised Rhetorical Parsing for Text Summarization | Automatic text summarization (ATS) has recently achieved impressive
performance thanks to recent advances in deep learning and the availability of
large-scale corpora. To make the summarization results more faithful, this
paper presents an unsupervised approach that combines rhetorical structure
theory, deep neural models and domain knowledge for ATS. This
architecture mainly contains three components: domain knowledge base
construction based on representation learning, attentional encoder-decoder
model for rhetorical parsing and subroutine-based model for text summarization.
Domain knowledge can be effectively used for unsupervised rhetorical parsing,
so that rhetorical structure trees can be derived for each document. In the
unsupervised rhetorical parsing module, the idea of translation was adopted to
alleviate the problem of data scarcity. The subroutine-based summarization
model purely depends on the derived rhetorical structure trees and can generate
content-balanced results. To evaluate the summary results without a gold
standard, we propose an unsupervised evaluation metric whose hyper-parameters
are tuned by supervised learning. Experimental results show that, on a
large-scale Chinese dataset, our proposed approach achieves performance
comparable to existing methods.
| 2,019 | Computation and Language |
Improving Question Generation With to the Point Context | Question generation (QG) is the task of generating a question from a
reference sentence and a specified answer within the sentence. A major
challenge in QG is to identify answer-relevant context words to finish the
declarative-to-interrogative sentence transformation. Existing
sequence-to-sequence neural models achieve this goal by proximity-based answer
position encoding, under the intuition that words neighboring the answer are
highly likely to be answer-relevant. However, this intuition may not apply
to all cases especially for sentences with complex answer-relevant relations.
Consequently, the performance of these models drops sharply when the relative
distance between the answer fragment and other non-stop sentence words that
also appear in the ground truth question increases. To address this issue, we
propose a method to jointly model the unstructured sentence and the structured
answer-relevant relation (extracted from the sentence in advance) for question
generation. Specifically, the structured answer-relevant relation acts as the
to-the-point context and thus naturally helps keep the generated question to
the point, while the unstructured sentence provides the full information.
Extensive experiments show that the to-the-point context helps our question
generation model achieve significant improvements on several automatic
evaluation metrics. Furthermore, our model is capable of generating diverse
questions for a sentence which conveys multiple relations of its answer
fragment.
| 2,019 | Computation and Language |
STANCY: Stance Classification Based on Consistency Cues | Controversial claims are abundant in online media and discussion forums. A
better understanding of such claims requires analyzing them from different
perspectives. Stance classification is a necessary step for inferring these
perspectives in terms of supporting or opposing the claim. In this work, we
present a neural network model for stance classification leveraging BERT
representations and augmenting them with a novel consistency constraint.
Experiments on the Perspectrum dataset, consisting of claims and users'
perspectives from various debate websites, demonstrate the effectiveness of our
approach over state-of-the-art baselines.
| 2,019 | Computation and Language |
Feature-Dependent Confusion Matrices for Low-Resource NER Labeling with
Noisy Labels | In low-resource settings, the performance of supervised labeling models can
be improved with automatically annotated or distantly supervised data, which is
cheap to create but often noisy. Previous works have shown that significant
improvements can be reached by injecting information about the confusion
between clean and noisy labels in this additional training data into the
classifier training. However, for noise estimation, these approaches either do
not take the input features (in our case word embeddings) into account, or they
need to learn the noise modeling from scratch which can be difficult in a
low-resource setting. We propose to cluster the training data using the input
features and then compute different confusion matrices for each cluster. To the
best of our knowledge, our approach is the first to leverage feature-dependent
noise modeling with pre-initialized confusion matrices. We evaluate on
low-resource named entity recognition settings in several languages, showing
that our methods improve upon other confusion-matrix based methods by up to 9%.
| 2,019 | Computation and Language |
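
A minimal sketch of the idea described above: cluster the training examples by their input embeddings and estimate a separate clean-versus-noisy label confusion matrix per cluster. The use of k-means, the pseudo-count smoothing, and all sizes below are illustrative assumptions rather than the paper's exact recipe (which also pre-initializes the matrices).

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_confusion_matrices(embeddings, clean_labels, noisy_labels,
                               n_labels, n_clusters=3, seed=0):
    """One row-normalized confusion matrix P(noisy | clean) per feature cluster."""
    clusters = KMeans(n_clusters=n_clusters, random_state=seed,
                      n_init=10).fit_predict(embeddings)
    counts = np.zeros((n_clusters, n_labels, n_labels))
    for k, c, n in zip(clusters, clean_labels, noisy_labels):
        counts[k, c, n] += 1.0
    counts += 1e-8                       # keep empty rows valid distributions
    return clusters, counts / counts.sum(axis=2, keepdims=True)

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 16))         # word-embedding features of training tokens
clean = rng.integers(0, 4, size=200)     # labels on the small clean set
noisy = np.where(rng.random(200) < 0.8,  # distantly supervised labels with ~20% noise
                 clean, rng.integers(0, 4, size=200))
_, matrices = cluster_confusion_matrices(emb, clean, noisy, n_labels=4)
print(matrices.shape)                    # (3, 4, 4)
```
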
Q8BERT: Quantized 8Bit BERT | Recently, pre-trained Transformer-based language models such as BERT and GPT
have shown great improvements in many Natural Language Processing (NLP) tasks.
However, these models contain a large number of parameters. The emergence of
even larger and more accurate models such as GPT2 and Megatron suggests a trend
toward large pre-trained Transformer models. However, using these large models
in production environments is a complex task requiring a large amount of compute,
memory and power resources. In this work we show how to perform
quantization-aware training during the fine-tuning phase of BERT in order to
compress BERT by $4\times$ with minimal accuracy loss. Furthermore, the
produced quantized model can accelerate inference if it is optimized for
hardware that supports 8-bit integer operations.
| 2,021 | Computation and Language |
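
The abstract describes quantization-aware training of BERT to 8 bits. Below is a minimal sketch of the symmetric "fake quantization" commonly used for this (quantize, then immediately dequantize in the forward pass, with the rounding treated as identity in the backward pass); the clipping range and rounding scheme are illustrative and not necessarily the exact Q8BERT recipe.

```python
import numpy as np

def fake_quantize(x, num_bits=8):
    """Simulate symmetric int8 quantization: quantize and immediately dequantize.

    During quantization-aware training the forward pass uses these values,
    while the backward pass typically passes gradients straight through the
    rounding (a straight-through estimator).
    """
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for 8 bits
    scale = max(np.max(np.abs(x)) / qmax, 1e-8)    # one scale per tensor
    q = np.clip(np.round(x / scale), -qmax, qmax)  # what int8 hardware would store
    return q * scale, scale

w = np.random.randn(4, 4).astype(np.float32)
w_q, scale = fake_quantize(w)
print(np.max(np.abs(w - w_q)) <= scale / 2 + 1e-7)  # rounding error bounded by scale/2
```
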
Estimating post-editing effort: a study on human judgements, task-based
and reference-based metrics of MT quality | Devising metrics to assess translation quality has always been at the core of
machine translation (MT) research. Traditional automatic reference-based
metrics, such as BLEU, have shown correlations with human judgements of
adequacy and fluency and have been paramount for the advancement of MT system
development. Crowd-sourcing has popularised and enabled the scalability of
metrics based on human judgements, such as subjective direct assessments (DA)
of adequacy, that are believed to be more reliable than reference-based
automatic metrics. Finally, task-based measurements, such as post-editing time,
are expected to provide a more detailed evaluation of the usefulness of
translations for a specific task. Therefore, while DA averages adequacy
judgements to obtain an appraisal of (perceived) quality independently of the
task, and reference-based automatic metrics try to objectively estimate quality
also in a task-independent way, task-based metrics are measurements obtained
either during or after performing a specific task. In this paper we argue that,
although expensive, task-based measurements are the most reliable when
estimating MT quality in a specific task; in our case, this task is
post-editing. To that end, we report experiments on a dataset with
newly-collected post-editing indicators and show their usefulness when
estimating post-editing effort. Our results show that task-based metrics
comparing machine-translated and post-edited versions are the best at tracking
post-editing effort, as expected. These metrics are followed by DA, and then by
metrics comparing the machine-translated version and independent references. We
suggest that MT practitioners should be aware of these differences and
acknowledge their implications when deciding how to evaluate MT for
post-editing purposes.
| 2,019 | Computation and Language |
Updating Pre-trained Word Vectors and Text Classifiers using Monolingual
Alignment | In this paper, we focus on the problem of adapting word vector-based models
to new textual data. Given a model pre-trained on large reference data, how can
we adapt it to a smaller piece of data with a slightly different language
distribution? We frame the adaptation problem as a monolingual word vector
alignment problem, and simply average models after alignment. We align vectors
using the RCSLS criterion. Our formulation results in a simple and efficient
algorithm that allows adapting general-purpose models to changing word
distributions. In our evaluation, we consider applications to word embedding
and text classification models. We show that the proposed approach yields good
performance in all setups and outperforms a baseline that consists of
fine-tuning the model on new data.
| 2,019 | Computation and Language |
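
The paper aligns spaces with the RCSLS criterion before averaging; the sketch below substitutes the simpler orthogonal Procrustes solution to illustrate the align-then-average recipe. The toy vectors, dimensions, and the assumption of a fully shared vocabulary are all hypothetical.

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal map W minimizing ||X @ W - Y||_F (a simple stand-in for RCSLS alignment)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def adapt(reference_vecs, new_vecs):
    """Align vectors trained on new data to the reference space, then average the two models."""
    W = procrustes(new_vecs, reference_vecs)   # estimated on a shared vocabulary
    return 0.5 * (reference_vecs + new_vecs @ W)

rng = np.random.default_rng(0)
ref = rng.normal(size=(1000, 100))                  # pre-trained vectors for a shared vocabulary
Q, _ = np.linalg.qr(rng.normal(size=(100, 100)))    # a random rotation of the space
new = ref @ Q + 0.01 * rng.normal(size=(1000, 100)) # "new data" vectors: rotated, slightly drifted
adapted = adapt(ref, new)
print(float(np.abs(adapted - ref).mean()))          # small residual: the rotation is undone
```
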
Restoring ancient text using deep learning: a case study on Greek
epigraphy | Ancient history relies on disciplines such as epigraphy, the study of ancient
inscribed texts, for evidence of the recorded past. However, these texts,
"inscriptions", are often damaged over the centuries, and illegible parts of
the text must be restored by specialists, known as epigraphists. This work
presents Pythia, the first ancient text restoration model that recovers missing
characters from a damaged text input using deep neural networks. Its
architecture is carefully designed to handle long-term context information, and
deal efficiently with missing or corrupted character and word representations.
To train it, we wrote a non-trivial pipeline to convert PHI, the largest
digital corpus of ancient Greek inscriptions, to machine-actionable text, which
we call PHI-ML. On PHI-ML, Pythia's predictions achieve a 30.1% character error
rate, compared to the 57.3% of human epigraphists. Moreover, in 73.5% of cases
the ground-truth sequence was among the Top-20 hypotheses of Pythia, which
effectively demonstrates the impact of this assistive method on the field of
digital epigraphy, and sets the state-of-the-art in ancient text restoration.
| 2,019 | Computation and Language |
Training Compact Models for Low Resource Entity Tagging using
Pre-trained Language Models | Training models on low-resource named entity recognition tasks has been shown
to be a challenge, especially in industrial applications where deploying
updated models is a continuous effort and crucial for business operations. In
such cases there is often an abundance of unlabeled data, while labeled data is
scarce or unavailable. Pre-trained language models trained to extract
contextual features from text were shown to improve many natural language
processing (NLP) tasks, including scarcely labeled tasks, by leveraging
transfer learning. However, such models impose a heavy memory and computational
burden, making it a challenge to train and deploy such models for inference
use. In this work-in-progress we combined the effectiveness of transfer
learning provided by pre-trained masked language models with a semi-supervised
approach to train a fast and compact model using labeled and unlabeled
examples. Preliminary evaluations show that the compact models can achieve
competitive accuracy with a 36x compression rate when compared with a
state-of-the-art pre-trained language model, and run significantly faster in
inference, allowing deployment of such models in production environments or on
edge devices.
| 2,019 | Computation and Language |
Structured Pruning of a BERT-based Question Answering Model | The recent trend in industry-setting Natural Language Processing (NLP)
research has been to operate large pretrained language models like BERT
under strict computational limits. While most model compression work has
focused on "distilling" a general-purpose language representation using
expensive pretraining distillation, less attention has been paid to creating
smaller task-specific language representations which, arguably, are more useful
in an industry setting. In this paper, we investigate compressing BERT- and
RoBERTa-based question answering systems by structured pruning of parameters
from the underlying transformer model. We find that an inexpensive combination
of task-specific structured pruning and task-specific distillation, without the
expense of pretraining distillation, yields highly-performing models across a
range of speed/accuracy tradeoff operating points. We start from existing
full-size models trained for SQuAD 2.0 or Natural Questions and introduce gates
that allow selected parts of transformers to be individually eliminated.
Specifically, we investigate (1) structured pruning to reduce the number of
parameters in each transformer layer, (2) applicability to both BERT- and
RoBERTa-based models, (3) applicability to both SQuAD 2.0 and Natural
Questions, and (4) combining structured pruning with distillation. We achieve a
near-doubling of inference speed with less than a 0.5 F1-point loss in short
answer accuracy on Natural Questions.
| 2,021 | Computation and Language |
In-training Matrix Factorization for Parameter-frugal Neural Machine
Translation | In this paper, we propose the use of in-training matrix factorization to
reduce the model size for neural machine translation. Using in-training matrix
factorization, parameter matrices may be decomposed into the products of
smaller matrices, which can compress large machine translation architectures by
vastly reducing the number of learnable parameters. We apply in-training matrix
factorization to different layers of standard neural architectures and show
that in-training factorization is capable of reducing nearly 50% of learnable
parameters without any associated loss in BLEU score. Further, we find that
in-training matrix factorization is especially powerful on embedding layers,
providing a simple and effective method to curtail the number of parameters
with minimal impact on model performance, and, at times, an increase in
performance.
| 2,020 | Computation and Language |
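
A minimal PyTorch sketch of the idea above: replace one full weight matrix with the product of two smaller matrices and train the factors directly. The layer sizes and rank are illustrative; which layers to factorize (embeddings, attention, feed-forward) follows the paper's experiments rather than this sketch.

```python
import torch.nn as nn

def factorized_linear(d_in: int, d_out: int, rank: int) -> nn.Module:
    """Replace a d_in x d_out weight matrix with the product of two smaller factors."""
    return nn.Sequential(
        nn.Linear(d_in, rank, bias=False),  # d_in * rank parameters
        nn.Linear(rank, d_out),             # rank * d_out (+ bias) parameters
    )

full = nn.Linear(1024, 4096)
low_rank = factorized_linear(1024, 4096, rank=256)

n_full = sum(p.numel() for p in full.parameters())
n_low = sum(p.numel() for p in low_rank.parameters())
print(n_full, n_low)   # roughly 4.2M vs 1.3M learnable parameters
```
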
Mapping Supervised Bilingual Word Embeddings from English to
low-resource languages | It is very challenging to work with low-resource languages due to the
inadequate availability of data. Using a dictionary to map independently
trained word embeddings into a shared vector space has proved to be very useful
in learning bilingual embeddings in the past. Here we have tried to map
individual embeddings of words in English and their corresponding translated
words in low-resource languages like Estonian, Slovenian, Slovakian, and
Hungarian. We have used a supervised learning approach. We report accuracy
scores through various retrieval strategies which show that it is possible to
approach challenging tasks in Natural Language Processing like machine
translation for such languages, provided that we have at least some amount of
proper bilingual data. We also conclude that we can follow an unsupervised
learning path on monolingual text data as that is more suitable for
low-resource languages.
| 2,019 | Computation and Language |
Whatcha lookin' at? DeepLIFTing BERT's Attention in Question Answering | There has been great success recently in tackling challenging NLP tasks by
neural networks which have been pre-trained and fine-tuned on large amounts of
task data. In this paper, we investigate one such model, BERT for
question-answering, with the aim to analyze why it is able to achieve
significantly better results than other models. We run DeepLIFT on the model
predictions and test the outcomes to monitor shifts in the attention values for
the input. We also cluster the results to analyze possible patterns similar to
human reasoning depending on the kind of input paragraph and question the model
is trying to answer.
| 2,019 | Computation and Language |
Hierarchical Semantic Correspondence Learning for Post-Discharge Patient
Mortality Prediction | Predicting patient mortality is an important and challenging problem in the
healthcare domain, especially for intensive care unit (ICU) patients.
Electronic health notes serve as a rich source for learning patient
representations that can facilitate effective risk assessment. However, a
large portion of clinical notes is unstructured and contains domain-specific
terminology, from which we need to extract structured information.
In this paper, we introduce an embedding framework to learn
semantically-plausible distributed representations of clinical notes that
exploits the semantic correspondence between the unstructured texts and their
corresponding structured knowledge, known as semantic frame, in a hierarchical
fashion. Our approach integrates text modeling and semantic correspondence
learning into a single model that comprises 1) an unstructured embedding module
that makes use of self-similarity matrix representations in order to inject
structural regularities of different segments inherent in clinical texts to
promote local coherence, 2) a structured embedding module to embed the semantic
frames (e.g., UMLS semantic types) with deep ConvNet and 3) a hierarchical
semantic correspondence module that embeds by enhancing the interactions
between text-semantic frame embedding pairs at multiple levels (i.e., words,
sentence, note). Evaluations on multiple embedding benchmarks on post-discharge
intensive care patient mortality prediction tasks demonstrate its effectiveness
compared to approaches that do not exploit the semantic interactions between
structured and unstructured information present in clinical notes.
| 2,019 | Computation and Language |
Detecting Machine-Translated Text using Back Translation | Machine-translated text plays a crucial role in the communication of people
using different languages. However, adversaries can use such text for malicious
purposes such as plagiarism and fake reviews. Existing methods detect
machine-translated text using only the text's intrinsic content, but they are
unsuitable for distinguishing machine-translated from human-written texts with
the same meaning. We propose a method to extract features for distinguishing
machine from human text based on the similarity between the original text and
its back-translation. The evaluation on detecting translated sentences in
French shows that our method achieves 75.0% in both accuracy and F-score. It
outperforms existing methods, whose best accuracy is 62.8% and F-score is
62.7%. The proposed method detects back-translated text even more effectively,
with 83.4% accuracy, which is higher than the best previous accuracy of 66.7%.
We achieve similar results for the F-score, as well as in analogous experiments
on Japanese. Moreover, we show that our detector can recognize both
machine-translated and machine-back-translated texts without knowing which
language was used to generate them. This demonstrates the robustness of our
method across applications in both low- and high-resource languages.
| 2,019 | Computation and Language |
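
A minimal sketch of the feature described above: compare a text with its round-trip (back-)translation and use the similarity as a detection signal. The `translate` function here is an identity placeholder standing in for a real MT system, and the character-level similarity from the standard library is a simple stand-in for the paper's features.

```python
from difflib import SequenceMatcher

def translate(text: str, src: str, tgt: str) -> str:
    """Placeholder for an MT system; returns the input unchanged so the sketch runs."""
    return text  # substitute a real translation call here

def back_translation_similarity(text: str, src: str = "en", pivot: str = "fr") -> float:
    """Similarity between a text and its round-trip (back-)translation.

    The intuition from the abstract above is that machine-translated text stays
    closer to its back-translation than human-written text does, so a higher
    score is evidence that the input was machine-translated.
    """
    round_trip = translate(translate(text, src, pivot), pivot, src)
    return SequenceMatcher(None, text.lower(), round_trip.lower()).ratio()

print(back_translation_similarity("This sentence was possibly produced by a translation system."))
```
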
Text2Math: End-to-end Parsing Text into Math Expressions | We propose Text2Math, a model for semantically parsing text into math
expressions. The model can be used to solve different math related problems
including arithmetic word problems and equation parsing problems. Unlike
previous approaches, we tackle the problem from an end-to-end structured
prediction perspective, where our algorithm aims to predict the complete math
expression at once as a tree structure, with minimal manual effort involved in
the process. Empirical results on benchmark datasets demonstrate
the efficacy of our approach.
| 2,019 | Computation and Language |
Aligning Cross-Lingual Entities with Multi-Aspect Information | Multilingual knowledge graphs (KGs), such as YAGO and DBpedia, represent
entities in different languages. The task of cross-lingual entity alignment is
to match entities in a source language with their counterparts in target
languages. In this work, we investigate embedding-based approaches to encode
entities from multilingual KGs into the same vector space, where equivalent
entities are close to each other. Specifically, we apply graph convolutional
networks (GCNs) to combine multi-aspect information of entities, including
topological connections, relations, and attributes of entities, to learn entity
embeddings. To exploit the literal descriptions of entities expressed in
different languages, we propose two uses of a pretrained multilingual BERT
model to bridge cross-lingual gaps. We further propose two strategies to
integrate GCN-based and BERT-based modules to boost performance. Extensive
experiments on two benchmark datasets demonstrate that our method significantly
outperforms existing systems.
| 2,019 | Computation and Language |
FacTweet: Profiling Fake News Twitter Accounts | We present an approach to detect fake news in Twitter at the account level
using a neural recurrent model and a variety of different semantic and
stylistic features. Our method extracts a set of features from the timelines of
news Twitter accounts by reading their posts as chunks, rather than dealing
with each tweet independently. We show the experimental benefits of modeling
latent stylistic signatures of mixed fake and real news with a sequential model
over a wide range of strong baselines.
| 2,019 | Computation and Language |
Robust Semantic Parsing with Adversarial Learning for Domain
Generalization | This paper addresses the issue of generalization for Semantic Parsing in an
adversarial framework. Building models that are more robust to inter-document
variability is crucial for the integration of Semantic Parsing technologies in
real applications. The underlying question throughout this study is whether
adversarial learning can be used to train models on a higher level of
abstraction in order to increase their robustness to lexical and stylistic
variations. We propose to perform Semantic Parsing with a domain classification
adversarial task without explicit knowledge of the domain. The strategy is
first evaluated on a French corpus of encyclopedic documents, annotated with
FrameNet, in an information retrieval perspective, then on PropBank Semantic
Role Labeling task on the CoNLL-2005 benchmark. We show that adversarial
learning increases all models' generalization capabilities on both in-domain
and out-of-domain data.
| 2,019 | Computation and Language |
NumNet: Machine Reading Comprehension with Numerical Reasoning | Numerical reasoning, such as addition, subtraction, sorting and counting, is a
critical skill in human reading comprehension that has not been well
considered in existing machine reading comprehension (MRC) systems. To address
this issue, we propose a numerical MRC model named NumNet, which utilizes a
numerically-aware graph neural network to capture comparative information
and perform numerical reasoning over numbers in the question and passage. Our
system achieves an EM-score of 64.56% on the DROP dataset, outperforming all
existing machine reading comprehension models by considering the numerical
relations among numbers.
| 2,019 | Computation and Language |
Auto-Sizing the Transformer Network: Improving Speed, Efficiency, and
Performance for Low-Resource Machine Translation | Neural sequence-to-sequence models, particularly the Transformer, are the
state of the art in machine translation. Yet these neural networks are very
sensitive to architecture and hyperparameter settings. Optimizing these
settings by grid or random search is computationally expensive because it
requires many training runs. In this paper, we incorporate architecture search
into a single training run through auto-sizing, which uses regularization to
delete neurons in a network over the course of training. On very low-resource
language pairs, we show that auto-sizing can improve BLEU scores by up to 3.9
points while removing one-third of the parameters from the model.
| 2,019 | Computation and Language |
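
Auto-sizing, as used above, drives whole groups of parameters (e.g., the rows of a weight matrix, i.e. individual neurons) toward zero during training so they can be deleted afterwards. Below is a minimal sketch using a row-wise l2 group penalty added to the loss; the cited work applies group regularizers with a proximal gradient step, so treat the penalty weight, the threshold, and the plain additive formulation as simplifications.

```python
import torch
import torch.nn as nn

def group_l2_penalty(layer: nn.Linear) -> torch.Tensor:
    """Sum of l2 norms of weight rows: pushes entire neurons toward zero."""
    return layer.weight.norm(dim=1).sum()

def prunable_rows(layer: nn.Linear, threshold: float = 1e-3) -> torch.Tensor:
    """Rows whose norm has collapsed below a threshold can be removed after training."""
    return (layer.weight.norm(dim=1) < threshold).nonzero(as_tuple=True)[0]

layer = nn.Linear(512, 2048)
task_loss = torch.tensor(0.0)            # placeholder for the translation loss
total_loss = task_loss + 1e-4 * group_l2_penalty(layer)
print(float(total_loss), prunable_rows(layer).numel())  # no rows prunable before training
```
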
Tree-Structured Semantic Encoder with Knowledge Sharing for Domain
Adaptation in Natural Language Generation | Domain adaptation in natural language generation (NLG) remains challenging
because of the high complexity of input semantics across domains and limited
data of a target domain. This is particularly the case for dialogue systems,
where we want to be able to seamlessly include new domains into the
conversation. Therefore, it is crucial for generation models to share knowledge
across domains for the effective adaptation from one domain to another. In this
study, we exploit a tree-structured semantic encoder to capture the internal
structure of complex semantic representations required for multi-domain
dialogues in order to facilitate knowledge sharing across domains. In addition,
a layer-wise attention mechanism between the tree encoder and the decoder is
adopted to further improve the model's capability. The automatic evaluation
results show that our model outperforms previous methods in terms of the BLEU
score and the slot error rate, in particular when the adaptation data is
limited. In subjective evaluation, human judges tend to prefer the sentences
generated by our model, rating them more highly on informativeness and
naturalness than other systems.
| 2,019 | Computation and Language |
Improving Word Embedding Factorization for Compression Using Distilled
Nonlinear Neural Decomposition | Word-embeddings are vital components of Natural Language Processing (NLP)
models and have been extensively explored. However, they consume a lot of
memory which poses a challenge for edge deployment. Embedding matrices,
typically, contain most of the parameters for language models and about a third
for machine translation systems. In this paper, we propose Distilled Embedding,
an (input/output) embedding compression method based on low-rank matrix
decomposition and knowledge distillation. First, we initialize the weights of
our decomposed matrices by learning to reconstruct the full pre-trained
word-embedding and then fine-tune end-to-end, employing knowledge distillation
on the factorized embedding. We conduct extensive experiments with various
compression rates on machine translation and language modeling, using different
data-sets with a shared word-embedding matrix for both embedding and vocabulary
projection matrices. We show that the proposed technique is simple to
replicate, with one fixed parameter controlling the compression ratio, and
achieves higher BLEU scores on translation and lower perplexity on language
modeling than complex, difficult-to-tune state-of-the-art methods.
| 2,020 | Computation and Language |
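
The first step described above initializes the two factors by learning to reconstruct the pre-trained embedding matrix. The sketch below uses a truncated SVD instead, which gives the best rank-r reconstruction in the least-squares sense; treat it as a simplified stand-in for the learned reconstruction (the distillation stage is omitted and the sizes are toy values).

```python
import numpy as np

def low_rank_init(embedding: np.ndarray, rank: int):
    """Factor a V x d embedding matrix into (V x rank) @ (rank x d) via truncated SVD."""
    U, S, Vt = np.linalg.svd(embedding, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # per-word factors
    B = Vt[:rank]                # shared projection back to d dimensions
    return A, B

E = np.random.randn(8000, 256).astype(np.float32)   # stand-in for a pre-trained embedding
A, B = low_rank_init(E, rank=32)
reconstruction_error = np.linalg.norm(E - A @ B) / np.linalg.norm(E)
print(A.shape, B.shape, round(float(reconstruction_error), 3))
```
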
Language Identification on Massive Datasets of Short Message using an
Attention Mechanism CNN | Language Identification (LID) is a challenging task, especially when the
input texts are short and noisy such as posts and statuses on social media or
chat logs on gaming forums. The task has been tackled by either designing a
feature set for a traditional classifier (e.g. Naive Bayes) or applying a deep
neural network classifier (e.g. Bi-directional Gated Recurrent Unit,
Encoder-Decoder). These methods are usually trained and tested on a huge amount
of private data, then used and evaluated as off-the-shelf packages by other
researchers using their own datasets, and consequently the various results
published are not directly comparable. In this paper, we first create a new
massive labelled dataset based on one year of Twitter data. We use this dataset
to test several existing language identification systems, in order to obtain a
set of coherent benchmarks, and we make our dataset publicly available so that
others can add to this set of benchmarks. Finally, we propose a shallow but
efficient neural LID system, an n-gram regional convolutional neural network
enhanced with an attention mechanism. Experimental results show that
our architecture is able to predict tens of thousands of samples per second and
surpasses all state-of-the-art systems with an improvement of 5%.
| 2,019 | Computation and Language |
On the Importance of Word Boundaries in Character-level Neural Machine
Translation | Neural Machine Translation (NMT) models generally perform translation using a
fixed-size lexical vocabulary, which is an important bottleneck on their
generalization capability and overall translation quality. The standard
approach to overcome this limitation is to segment words into subword units,
typically using some external tools with arbitrary heuristics, resulting in
vocabulary units not optimized for the translation task. Recent studies have
shown that the same approach can be extended to perform NMT directly at the
level of characters, which can deliver translation accuracy on par with
subword-based models; on the other hand, this requires relatively deeper
networks. In this paper, we propose a more computationally efficient solution
for character-level NMT which implements a hierarchical decoding architecture
where translations are subsequently generated at the level of words and
characters. We evaluate different methods for open-vocabulary NMT in the
machine translation task from English into five languages with distinct
morphological typology, and show that the hierarchical decoding model can reach
higher translation accuracy than the subword-level NMT model using
significantly fewer parameters, while demonstrating better capacity in learning
longer-distance contextual and grammatical dependencies than the standard
character-level NMT model.
| 2,019 | Computation and Language |
Facebook AI's WAT19 Myanmar-English Translation Task Submission | This paper describes Facebook AI's submission to the WAT 2019 Myanmar-English
translation task. Our baseline systems are BPE-based transformer models. We
explore methods to leverage monolingual data to improve generalization,
including self-training, back-translation and their combination. We further
improve results by using noisy channel re-ranking and ensembling. We
demonstrate that these techniques can significantly improve not only a system
trained with additional monolingual data, but even the baseline system trained
exclusively on the provided small parallel dataset. Our system ranks first in
both directions according to human evaluation and BLEU, with a gain of over 8
BLEU points above the second best system.
| 2,019 | Computation and Language |
Context Matters: Recovering Human Semantic Structure from Machine
Learning Analysis of Large-Scale Text Corpora | Applying machine learning algorithms to large-scale, text-based corpora
(embeddings) presents a unique opportunity to investigate at scale how human
semantic knowledge is organized and how people use it to judge fundamental
relationships, such as similarity between concepts. However, efforts to date
have shown a substantial discrepancy between algorithm predictions and
empirical judgments. Here, we introduce a novel approach of generating
embeddings motivated by the psychological theory that semantic context plays a
critical role in human judgments. Specifically, we train state-of-the-art
machine learning algorithms using contextually-constrained text corpora and
show that this greatly improves predictions of similarity judgments and feature
ratings. By improving the correspondence between representations derived using
embeddings generated by machine learning methods and empirical measurements of
human judgments, the approach we describe helps advance the use of large-scale
text corpora to understand the structure of human semantic representations.
| 2,020 | Computation and Language |
Answering Complex Open-domain Questions Through Iterative Query
Generation | It is challenging for current one-step retrieve-and-read question answering
(QA) systems to answer questions like "Which novel by the author of 'Armada'
will be adapted as a feature film by Steven Spielberg?" because the question
seldom contains retrievable clues about the missing entity (here, the author).
Answering such a question requires multi-hop reasoning where one must gather
information about the missing entity (or facts) to proceed with further
reasoning. We present GoldEn (Gold Entity) Retriever, which iterates between
reading context and retrieving more supporting documents to answer open-domain
multi-hop questions. Instead of using opaque and computationally expensive
neural retrieval models, GoldEn Retriever generates natural language search
queries given the question and available context, and leverages off-the-shelf
information retrieval systems to query for missing entities. This allows GoldEn
Retriever to scale up efficiently for open-domain multi-hop reasoning while
maintaining interpretability. We evaluate GoldEn Retriever on the recently
proposed open-domain multi-hop QA dataset, HotpotQA, and demonstrate that it
outperforms the best previously published model despite not using pretrained
language models such as BERT.
| 2,019 | Computation and Language |
Iterative Delexicalization for Improved Spoken Language Understanding | Recurrent neural network (RNN) based joint intent classification and slot
tagging models have achieved tremendous success in recent years for building
spoken language understanding and dialog systems. However, these models suffer
from poor performance for slots which often encounter large semantic
variability in slot values after deployment (e.g. message texts, partial
movie/artist names). While greedy delexicalization of slots in the input
utterance via substring matching can partly improve performance, it often
produces incorrect input. Moreover, such techniques cannot delexicalize slots
with out-of-vocabulary slot values not seen at training. In this paper, we
propose a novel iterative delexicalization algorithm, which can accurately
delexicalize the input, even with out-of-vocabulary slot values. Based on model
confidence of the current delexicalized input, our algorithm improves
delexicalization in every iteration to converge to the best input having the
highest confidence. We show on benchmark and in-house datasets that our
algorithm can greatly improve parsing performance for RNN based models,
especially for out-of-distribution slot values.
| 2,019 | Computation and Language |
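
For reference, the greedy substring-matching baseline that the abstract contrasts against can be written in a few lines; the iterative, confidence-driven variant proposed in the paper is not reproduced here. The slot gazetteer and placeholder format below are hypothetical.

```python
def greedy_delexicalize(utterance: str, slot_values: dict) -> str:
    """Replace known slot values with slot placeholders, longest match first."""
    # Longest values first so "the dark knight rises" wins over "the dark knight".
    pairs = sorted(((v, s) for s, vals in slot_values.items() for v in vals),
                   key=lambda p: -len(p[0]))
    out = utterance.lower()
    for value, slot in pairs:
        out = out.replace(value.lower(), f"<{slot}>")
    return out

slots = {"movie": ["the dark knight", "the dark knight rises"], "city": ["new york"]}
print(greedy_delexicalize("Book the dark knight rises in New York", slots))
# -> book <movie> in <city>
```
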
Analyzing the Forgetting Problem in the Pretrain-Finetuning of Dialogue
Response Models | In this work, we study how the finetuning stage in the pretrain-finetune
framework changes the behavior of a pretrained neural language generator. We
focus on the transformer encoder-decoder model for the open-domain dialogue
response generation task. Our major finding is that after standard finetuning,
the model forgets some of the important language generation skills acquired
during large-scale pretraining. We demonstrate the forgetting phenomenon
through a set of detailed behavior analysis from the perspectives of knowledge
transfer, context sensitivity, and function space projection. As a preliminary
attempt to alleviate the forgetting problem, we propose an intuitive finetuning
strategy named "mix-review". We find that mix-review effectively regularizes
the finetuning process, and the forgetting problem is alleviated to some
extent. Finally, we discuss interesting behavior of the resulting dialogue
model and its implications.
| 2,021 | Computation and Language |
FewRel 2.0: Towards More Challenging Few-Shot Relation Classification | We present FewRel 2.0, a more challenging task to investigate two aspects of
few-shot relation classification models: (1) Can they adapt to a new domain
with only a handful of instances? (2) Can they detect none-of-the-above (NOTA)
relations? To construct FewRel 2.0, we build upon the FewRel dataset (Han et
al., 2018) by adding a new test set in a quite different domain, and a NOTA
relation choice. With the new dataset and extensive experimental analysis, we
found (1) that the state-of-the-art few-shot relation classification models
struggle on these two aspects, and (2) that the commonly-used techniques for
domain adaptation and NOTA detection still cannot handle the two challenges
well. Our research calls for more attention and further efforts to these two
real-world issues. All details and resources about the dataset and baselines
are released at https://github.com/thunlp/fewrel.
| 2,019 | Computation and Language |
Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT
2019 Efficiency Task | This paper describes the Notre Dame Natural Language Processing Group's
(NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We
investigated the impact of auto-sizing (Murray and Chiang, 2015; Murray et al.,
2019) to the Transformer network (Vaswani et al., 2017) with the goal of
substantially reducing the number of parameters in the model. Our method was
able to eliminate more than 25% of the model's parameters while suffering a
decrease of only 1.1 BLEU.
| 2,019 | Computation and Language |
Joint Learning of Word and Label Embeddings for Sequence Labelling in
Spoken Language Understanding | We propose an architecture to jointly learn word and label embeddings for
slot filling in spoken language understanding. The proposed approach encodes
labels using a combination of word embeddings and straightforward word-label
association from the training data. Compared to the state-of-the-art methods,
our approach does not require label embeddings as part of the input and
therefore lends itself nicely to a wide range of model architectures. In
addition, our architecture computes contextual distances between words and
labels to avoid adding contextual windows, thus reducing memory footprint. We
validate the approach on established spoken dialogue datasets and show that it
can achieve state-of-the-art performance with much fewer trainable parameters.
| 2,019 | Computation and Language |
Unsupervised Question Answering for Fact-Checking | Recent Deep Learning (DL) models have succeeded in achieving human-level
accuracy on various natural language tasks such as question-answering, natural
language inference (NLI), and textual entailment. These tasks require not only
contextual knowledge but also reasoning abilities to be solved
efficiently. In this paper, we propose an unsupervised question-answering based
approach for a similar task, fact-checking. We transform the FEVER dataset into
a Cloze-task by masking named entities provided in the claims. To predict the
answer token, we utilize pre-trained Bidirectional Encoder Representations from
Transformers (BERT). The classifier computes the label based on the correctly
answered questions and a threshold. Currently, the classifier is able to
classify the claims as "SUPPORTS" and "MANUAL_REVIEW". This approach achieves a
label accuracy of 80.2% on the development set and 80.25% on the test set of
the transformed dataset.
| 2,019 | Computation and Language |
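
A minimal sketch of the decision rule described above: mask each named entity in the claim, ask a masked language model to fill the blank, and label the claim by comparing the fraction of correctly recovered entities against a threshold. The fill-mask call is a placeholder (a real system would use a pre-trained BERT), and the threshold value is hypothetical.

```python
def predict_masked_token(claim_with_mask: str) -> str:
    """Placeholder for a masked-LM prediction (e.g., BERT fill-mask); hypothetical here."""
    return "[UNKNOWN]"

def classify_claim(claim: str, entities: list, threshold: float = 0.5) -> str:
    """Label a claim SUPPORTS if enough masked entities are recovered correctly."""
    correct = 0
    for entity in entities:
        cloze = claim.replace(entity, "[MASK]")
        if predict_masked_token(cloze).lower() == entity.lower():
            correct += 1
    score = correct / max(len(entities), 1)
    return "SUPPORTS" if score >= threshold else "MANUAL_REVIEW"

print(classify_claim("Barack Obama was born in Hawaii", ["Barack Obama", "Hawaii"]))
```
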
Content Enhanced BERT-based Text-to-SQL Generation | We present a simple method to leverage the table content in a BERT-based
model for the text-to-SQL problem. Based on the observation that some of the
table content matches words in the question string and some of the table
headers also match words in the question string, we encode two additional
feature vectors for the deep model. Our method also benefits model inference at
test time, as the tables are almost the same at training and test time. We test
our model on the WikiSQL dataset, outperform the BERT-based baseline by 3.7% in
logical-form accuracy and 3.7% in execution accuracy, and achieve
state-of-the-art results.
| 2,020 | Computation and Language |
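
A minimal sketch of the two match features described above: for every question token, flag whether it occurs in a table header and whether it occurs in the table's cell content. Tokenization by whitespace and the binary encoding are simplifying assumptions; the paper feeds such features into a BERT-based model.

```python
def match_features(question: str, headers: list, rows: list):
    """Per-token binary features: does the token occur in a header / in any cell?"""
    header_words = {w.lower() for h in headers for w in h.split()}
    content_words = {w.lower() for row in rows for cell in row for w in str(cell).split()}
    tokens = question.lower().split()
    header_match = [1 if t in header_words else 0 for t in tokens]
    content_match = [1 if t in content_words else 0 for t in tokens]
    return tokens, header_match, content_match

headers = ["Player", "Country", "Points"]
rows = [["Nick Faldo", "England", 270], ["Greg Norman", "Australia", 274]]
print(match_features("how many points did nick faldo score", headers, rows))
```
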
BERTRAM: Improved Word Embeddings Have Big Impact on Contextualized
Model Performance | Pretraining deep language models has led to large performance gains in NLP.
Despite this success, Schick and Sch\"utze (2020) recently showed that these
models struggle to understand rare words. For static word embeddings, this
problem has been addressed by separately learning representations for rare
words. In this work, we transfer this idea to pretrained language models: We
introduce BERTRAM, a powerful architecture based on BERT that is capable of
inferring high-quality embeddings for rare words that are suitable as input
representations for deep language models. This is achieved by enabling the
surface form and contexts of a word to interact with each other in a deep
architecture. Integrating BERTRAM into BERT leads to large performance
increases due to improved representations of rare and medium frequency words on
both a rare word probing task and three downstream tasks.
| 2,020 | Computation and Language |
Meemi: A Simple Method for Post-processing and Integrating Cross-lingual
Word Embeddings | Word embeddings have become a standard resource in the toolset of any Natural
Language Processing practitioner. While monolingual word embeddings encode
information about words in the context of a particular language, cross-lingual
embeddings define a multilingual space where word embeddings from two or more
languages are integrated together. Current state-of-the-art approaches learn
these embeddings by aligning two disjoint monolingual vector spaces through an
orthogonal transformation which preserves the structure of the monolingual
counterparts. In this work, we propose to apply an additional transformation
after this initial alignment step, which aims to bring the vector
representations of a given word and its translations closer to their average.
Since this additional transformation is non-orthogonal, it also affects the
structure of the monolingual spaces. We show that our approach both improves
the integration of the monolingual spaces as well as the quality of the
monolingual spaces themselves. Furthermore, because our transformation can be
applied to an arbitrary number of languages, we are able to effectively obtain
a truly multilingual space. The resulting (monolingual and multilingual) spaces
show consistent gains over the current state-of-the-art in standard intrinsic
tasks, namely dictionary induction and word similarity, as well as in extrinsic
tasks such as cross-lingual hypernym discovery and cross-lingual natural
language inference.
| 2,020 | Computation and Language |
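
A minimal sketch of the post-processing step described above: after the initial (orthogonal) alignment, fit an unconstrained linear map in each language that moves a word's vector toward the average of the vector and its translation. The least-squares fit, the toy dictionary, and the array sizes are illustrative assumptions.

```python
import numpy as np

def meemi_style_refinement(src_aligned, tgt, dictionary):
    """Map both spaces toward the averages of translation pairs (least-squares fit)."""
    src_idx, tgt_idx = zip(*dictionary)
    X, Y = src_aligned[list(src_idx)], tgt[list(tgt_idx)]
    avg = 0.5 * (X + Y)
    # One unconstrained (non-orthogonal) linear map per language.
    W_src, *_ = np.linalg.lstsq(X, avg, rcond=None)
    W_tgt, *_ = np.linalg.lstsq(Y, avg, rcond=None)
    return src_aligned @ W_src, tgt @ W_tgt

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 64))                    # source embeddings, already aligned
tgt = src[:400] + 0.1 * rng.normal(size=(400, 64))  # toy target space, 400 translation pairs
new_src, new_tgt = meemi_style_refinement(src, tgt, [(i, i) for i in range(400)])
print(new_src.shape, new_tgt.shape)
```
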
Lead2Gold: Towards exploiting the full potential of noisy transcriptions
for speech recognition | The transcriptions used to train an Automatic Speech Recognition (ASR) system
may contain errors. Usually, either a quality control stage discards
transcriptions with too many errors, or the noisy transcriptions are used as
is. We introduce Lead2Gold, a method to train an ASR system that exploits the
full potential of noisy transcriptions. Based on a noise model of transcription
errors, Lead2Gold searches for better transcriptions of the training data with
a beam search that takes this noise model into account. The beam search is
differentiable and does not require a forced alignment step, thus the whole
system is trained end-to-end. Lead2Gold can be viewed as a new loss function
that can be used on top of any sequence-to-sequence deep neural network. We
conduct proof-of-concept experiments on noisy transcriptions generated from
letter corruptions with different noise levels. We show that Lead2Gold obtains
a better ASR accuracy than a competitive baseline which does not account for
the (artificially-introduced) transcription noise.
| 2,019 | Computation and Language |
A Probabilistic Framework for Learning Domain Specific Hierarchical Word
Embeddings | The meaning of a word often varies depending on its usage in different
domains. The standard word embedding models struggle to represent this
variation, as they learn a single global representation for a word. We propose
a method to learn domain-specific word embeddings, from text organized into
hierarchical domains, such as reviews in an e-commerce website, where products
follow a taxonomy. Our structured probabilistic model allows vector
representations for the same word to drift away from each other for distant
domains in the taxonomy, to accommodate its domain-specific meanings. By
learning sets of domain-specific word representations jointly, our model can
leverage domain relationships, and it scales well with the number of domains.
Using large real-world review datasets, we demonstrate the effectiveness of our
model compared to state-of-the-art approaches, in learning domain-specific word
embeddings that are both intuitive to humans and benefit downstream NLP tasks.
| 2,019 | Computation and Language |
Why can't memory networks read effectively? | Memory networks have been a popular choice among neural architectures for
machine reading comprehension and question answering. While recent work
revealed that memory networks can't truly perform multi-hop reasoning, we show
in the present paper that vanilla memory networks are ineffective even in
single-hop reading comprehension. We analyze the reasons for this on two
cloze-style datasets, one from the medical domain and another including
children's fiction. We find that the output classification layer with
entity-specific weights, and the aggregation of passage information with
relatively flat attention distributions are the most important contributors to
poor results. We propose network adaptations that can serve as simple remedies.
We also find that the presence of unseen answers at test time can dramatically
affect the reported results, so we suggest controlling for this factor during
evaluation.
| 2,019 | Computation and Language |
Generating Challenge Datasets for Task-Oriented Conversational Agents
through Self-Play | End-to-end neural approaches are becoming increasingly common in
conversational scenarios due to their promising performance when provided with
a sufficient amount of data. In this paper, we present a novel methodology to
address the interpretability of neural approaches in such scenarios by creating
challenge datasets using dialogue self-play over multiple tasks/intents.
Dialogue self-play allows generating large amounts of synthetic data; by taking
advantage of the complete control over the generation process, we show how
neural approaches can be evaluated in terms of unseen dialogue patterns. We
propose several out-of-pattern test cases each of which introduces a natural
and unexpected user utterance phenomenon. As a proof of concept, we built a
single and a multiple memory network, and show that these two architectures
have diverse performances depending on the peculiar dialogue patterns.
| 2,019 | Computation and Language |
Evolution of transfer learning in natural language processing | In this paper, we present a study of the recent advancements which have
helped bring Transfer Learning to NLP through the use of semi-supervised
training. We discuss cutting-edge methods and architectures such as BERT, GPT,
ELMo, ULMFit among others. Classically, tasks in natural language processing
have been performed through rule-based and statistical methodologies. However,
owing to the vast nature of natural languages, these methods did not generalise
well and failed to learn the nuances of language. Thus, machine learning
algorithms such as Naive Bayes and decision trees, coupled with traditional
models such as Bag-of-Words and N-grams, were used to address this problem.
Eventually, with the advent of advanced recurrent neural network architectures
such as the LSTM, we were able to achieve state-of-the-art performance in
several natural language processing tasks such as text classification and
machine translation. We talk about how Transfer Learning has brought about the
well-known ImageNet moment for NLP. Several advanced architectures such as the
Transformer and its variants have allowed practitioners to leverage knowledge
gained from unrelated tasks to drastically speed up convergence and provide
better performance on the target task. This survey represents an effort to
provide a succinct yet complete understanding of recent advances in natural
language processing using deep learning, with a special focus on detailing
transfer learning and its potential advantages.
| 2,019 | Computation and Language |
Comprehend Medical: a Named Entity Recognition and Relationship
Extraction Web Service | Comprehend Medical is a stateless and Health Insurance Portability and
Accountability Act (HIPAA) eligible Named Entity Recognition (NER) and
Relationship Extraction (RE) service launched under Amazon Web Services (AWS)
trained using state-of-the-art deep learning models. Contrary to many existing
open source tools, Comprehend Medical is scalable and does not require a steep
learning curve, dependencies, pipeline configurations, or installations.
Currently, Comprehend Medical performs NER in five medical categories: Anatomy,
Medical Condition, Medications, Protected Health Information (PHI) and
Treatment, Test and Procedure (TTP). Additionally, the service provides
relationship extraction for the detected entities as well as contextual
information such as negation and temporality in the form of traits. Comprehend
Medical provides two Application Programming Interfaces (API): 1) the NERe API
which returns all the extracted named entities, their traits and the
relationships between them and 2) the PHId API which returns just the protected
health information contained in the text. Furthermore, Comprehend Medical is
accessible through AWS Console, Java and Python Software Development Kit (SDK),
making it easier for non-developers and developers to use.
| 2,019 | Computation and Language |
Bridging the Knowledge Gap: Enhancing Question Answering with World and
Domain Knowledge | In this paper we present OSCAR (Ontology-based Semantic Composition Augmented
Regularization), a method for injecting task-agnostic knowledge from an
Ontology or knowledge graph into a neural network during pretraining. We
evaluated the impact of including OSCAR when pretraining BERT with Wikipedia
articles by measuring the performance when fine-tuning on two question
answering tasks involving world knowledge and causal reasoning and one
requiring domain (healthcare) knowledge, and obtained 33.3%, 18.6%, and 4%
improved accuracy compared to pretraining BERT without OSCAR, achieving new
state-of-the-art results on two of the tasks.
| 2,019 | Computation and Language |
Linguistic evaluation of German-English Machine Translation using a Test
Suite | We present the results of the application of a grammatical test suite for
German$\rightarrow$English MT on the systems submitted at WMT19, with a
detailed analysis for 107 phenomena organized in 14 categories. The systems
still translate one out of four test items incorrectly on average. Low performance is
indicated for idioms, modals, pseudo-clefts, multi-word expressions and verb
valency. When compared to last year, there has been an improvement in function
words, non-verbal agreement and punctuation. More detailed conclusions about
particular systems and phenomena are also presented.
| 2,019 | Computation and Language |
Fine-grained evaluation of German-English Machine Translation based on a
Test Suite | We present an analysis of 16 state-of-the-art MT systems on German-English
based on a linguistically-motivated test suite. The test suite has been devised
manually by a team of language professionals in order to cover a broad variety
of linguistic phenomena that MT often fails to translate properly. It contains
5,000 test sentences covering 106 linguistic phenomena in 14 categories, with
an increased focus on verb tenses, aspects and moods. The MT outputs are
evaluated in a semi-automatic way through regular expressions that focus only
on the part of the sentence that is relevant to each phenomenon. Through our
analysis, we are able to compare systems based on their performance on these
categories. Additionally, we reveal strengths and weaknesses of particular
systems and we identify grammatical phenomena where the overall performance of
MT is relatively low.
| 2,018 | Computation and Language |
Fine-grained evaluation of Quality Estimation for Machine translation
based on a linguistically-motivated Test Suite | We present an alternative method of evaluating Quality Estimation systems,
which is based on a linguistically-motivated Test Suite. We create a test-set
consisting of 14 linguistic error categories and we gather for each of them a
set of samples with both correct and erroneous translations. Then, we measure
the performance of 5 Quality Estimation systems by checking their ability to
distinguish between the correct and the erroneous translations. The detailed
results are much more informative about the ability of each system. The fact
that different Quality Estimation systems perform differently on various
phenomena confirms the usefulness of the Test Suite.
| 2,018 | Computation and Language |
MLQA: Evaluating Cross-lingual Extractive Question Answering | Question answering (QA) models have shown rapid progress enabled by the
availability of large, high-quality benchmark datasets. Such annotated datasets
are difficult and costly to collect, and rarely exist in languages other than
English, making training QA systems in other languages challenging. An
alternative to building large monolingual training datasets is to develop
cross-lingual systems which can transfer to a target language without requiring
training data in that language. In order to develop such systems, it is crucial
to invest in high quality multilingual evaluation benchmarks to measure
progress. We present MLQA, a multi-way aligned extractive QA evaluation
benchmark intended to spur research in this area. MLQA contains QA instances in
7 languages, namely English, Arabic, German, Spanish, Hindi, Vietnamese and
Simplified Chinese. It consists of over 12K QA instances in English and 5K in
each other language, with each QA instance being parallel between 4 languages
on average. MLQA is built using a novel alignment context strategy on Wikipedia
articles, and serves as a cross-lingual extension to existing extractive QA
datasets. We evaluate current state-of-the-art cross-lingual representations on
MLQA, and also provide machine-translation-based baselines. In all cases,
transfer results are shown to be significantly behind training-language
performance.
| 2,020 | Computation and Language |
Using Whole Document Context in Neural Machine Translation | In Machine Translation, considering the document as a whole can help to
resolve ambiguities and inconsistencies. In this paper, we propose a simple yet
promising approach to add contextual information in Neural Machine Translation.
We present a method to add source context that captures the whole document with
accurate boundaries, taking every word into account. We provide this additional
information to a Transformer model and study the impact of our method on three
language pairs. The proposed approach obtains promising results in the
English-German, English-French and French-English document-level translation
tasks. We observe interesting cross-sentential behaviors where the model learns
to use document-level information to improve translation coherence.
| 2,019 | Computation and Language |
Imperial College London Submission to VATEX Video Captioning Task | This paper describes the Imperial College London team's submission to the
2019 VATEX video captioning challenge, where we first explore two
sequence-to-sequence models, namely a recurrent (GRU) model and a transformer
model, which generate captions from the I3D action features. We then
investigate the effect of dropping the encoder and the attention mechanism and
instead conditioning the GRU decoder over two different vectorial
representations: (i) a max-pooled action feature vector and (ii) the output of
a multi-label classifier trained to predict visual entities from the action
features. Our baselines achieved scores comparable to the official baseline.
Conditioning over entity predictions performed substantially better than
conditioning on the max-pooled feature vector, and only marginally worse than
the GRU-based sequence-to-sequence baseline.
| 2,019 | Computation and Language |
Right-wing German Hate Speech on Twitter: Analysis and Automatic
Detection | Discussion about the social network Twitter often concerns its role in
political discourse, involving the question of when an expression of opinion
becomes offensive, immoral, and/or illegal, and how to deal with it. Given the
growing amount of offensive communication on the internet, there is a demand
for new technology that can automatically detect hate speech, to assist content
moderation by humans. This comes with new challenges, such as defining exactly
what is free speech and what is illegal in a specific country, and knowing
exactly what the linguistic characteristics of hate speech are. To shed light
on the German situation, we analyzed over 50,000 right-wing German hate tweets
posted between August 2017 and April 2018, at the time of the 2017 German
federal elections, using both quantitative and qualitative methods. In this
paper, we discuss the results of the analysis and demonstrate how the insights
can be employed for the development of automatic detection systems.
| 2,019 | Computation and Language |
Contextual Joint Factor Acoustic Embeddings | Embedding acoustic information into fixed length representations is of
interest for a whole range of applications in speech and audio technology. Two
novel unsupervised approaches to generate acoustic embeddings by modelling of
acoustic context are proposed. The first approach is a contextual joint factor
synthesis encoder, where the encoder in an encoder/decoder framework is trained
to extract joint factors from surrounding audio frames to best generate the
target output. The second approach is a contextual joint factor analysis
encoder, where the encoder is trained to analyse joint factors from the source
signal that correlates best with the neighbouring audio. To evaluate the
effectiveness of our approaches compared to prior work, two tasks are conducted
-- phone classification and speaker recognition -- and tested on different TIMIT
data sets. Experimental results show that one of the proposed approaches
outperforms phone classification baselines, yielding a classification accuracy
of 74.1%. When using additional out-of-domain data for training, a further
3% improvement can be obtained for both the phone classification and speaker
recognition tasks.
| 2,021 | Computation and Language |
Towards Annotating and Creating Sub-Sentence Summary Highlights | Highlighting is a powerful tool to pick out important content and emphasize it.
Creating summary highlights at the sub-sentence level is particularly
desirable, because sub-sentences are more concise than whole sentences. They
are also better suited than individual words and phrases that can potentially
lead to disfluent, fragmented summaries. In this paper we seek to generate
summary highlights by annotating summary-worthy sub-sentences and teaching
classifiers to do the same. We frame the task as jointly selecting important
sentences and identifying a single most informative textual unit from each
sentence. This formulation dramatically reduces the task complexity involved in
sentence compression. Our study provides new benchmarks and baselines for
generating highlights at the sub-sentence level.
| 2,019 | Computation and Language |
BIG MOOD: Relating Transformers to Explicit Commonsense Knowledge | We introduce a simple yet effective method of integrating contextual
embeddings with commonsense graph embeddings, dubbed BERT Infused Graphs:
Matching Over Other embeDdings. First, we introduce a preprocessing method to
improve the speed of querying knowledge bases. Then, we develop a method of
creating knowledge embeddings from each knowledge base. We introduce a method
of aligning tokens between two misaligned tokenization methods. Finally, we
contribute a method of contextualizing BERT after combining with knowledge base
embeddings. We also show BERT's tendency to correct lower accuracy question
types. Our model achieves a higher accuracy than BERT, and we score fifth on
the official leaderboard of the shared task and score the highest without any
additional language model pretraining.
| 2,019 | Computation and Language |
Using a KG-Copy Network for Non-Goal Oriented Dialogues | Non-goal oriented, generative dialogue systems lack the ability to generate
answers with grounded facts. A knowledge graph can be considered an abstraction
of the real world consisting of well-grounded facts. This paper addresses the
problem of generating well grounded responses by integrating knowledge graphs
into the dialogue systems response generation process, in an end-to-end manner.
A dataset for non-goal oriented dialogues is proposed in this paper in the
domain of soccer, conversing on different clubs and national teams along with a
knowledge graph for each of these teams. A novel neural network architecture is
also proposed as a baseline on this dataset, which can integrate knowledge
graphs into the response generation process, producing well articulated,
knowledge grounded responses. Empirical evidence suggests that the proposed
model performs better than other state-of-the-art models for knowledge graph
integrated dialogue systems.
| 2,019 | Computation and Language |
Topical Keyphrase Extraction with Hierarchical Semantic Networks | Topical keyphrase extraction is used to summarize large collections of text
documents. However, traditional methods cannot properly reflect the intrinsic
semantics and relationships of keyphrases because they rely on a simple
term-frequency-based process. Consequently, these methods are not effective in
obtaining significant contextual knowledge. To resolve this, we propose a
topical keyphrase extraction method based on a hierarchical semantic network
and multiple centrality network measures that together reflect the hierarchical
semantics of keyphrases. We conduct experiments on real data to examine the
practicality of the proposed method and to compare its performance with that of
existing topical keyphrase extraction methods. The results confirm that the
proposed method outperforms state-of-the-art topical keyphrase extraction
methods in terms of the representativeness of the selected keyphrases for each
topic. The proposed method can effectively reflect intrinsic keyphrase
semantics and interrelationships.
| 2,019 | Computation and Language |
H-VECTORS: Utterance-level Speaker Embedding Using A Hierarchical
Attention Model | In this paper, a hierarchical attention network to generate utterance-level
embeddings (H-vectors) for speaker identification is proposed. Since different
parts of an utterance may have different contributions to speaker identities,
the use of hierarchical structure aims to learn speaker related information
locally and globally. In the proposed approach, frame-level encoder and
attention are applied on segments of an input utterance and generate individual
segment vectors. Then, segment level attention is applied on the segment
vectors to construct an utterance representation. To evaluate the effectiveness
of the proposed approach, NIST SRE 2008 Part1 dataset is used for training, and
two datasets, Switchboard Cellular part1 and CallHome American English Speech,
are used to evaluate the quality of extracted utterance embeddings on speaker
identification and verification tasks. In comparison with two baselines,
X-vector and X-vector+Attention, the obtained results show that H-vectors can
achieve a significantly better performance. Furthermore, the extracted
utterance-level embeddings are more discriminative than the two baselines when
mapped into a 2D space using t-SNE.
| 2,019 | Computation and Language |
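The hierarchical attention described above can be illustrated with a minimal numpy sketch: frame-level attentive pooling inside each segment, then segment-level attentive pooling over the segment vectors. The dimensions and the simple dot-product scoring function are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of hierarchical attentive pooling (frames -> segments -> utterance).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pool(rows, w):
    """Weight the rows by a learned scoring vector `w` and sum them."""
    weights = softmax(rows @ w)
    return weights @ rows

rng = np.random.default_rng(0)
d, seg_len, n_segs = 64, 20, 5
utterance = rng.normal(size=(n_segs, seg_len, d))   # frame-level features (assumed)
w_frame = rng.normal(size=d)                         # frame-level attention parameters
w_seg = rng.normal(size=d)                           # segment-level attention parameters

segment_vectors = np.stack([attentive_pool(seg, w_frame) for seg in utterance])
h_vector = attentive_pool(segment_vectors, w_seg)    # utterance-level embedding
print(h_vector.shape)  # (64,)
```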
LibriVoxDeEn: A Corpus for German-to-English Speech Translation and
German Speech Recognition | We present a corpus of sentence-aligned triples of German audio, German text,
and English translation, based on German audiobooks. The speech translation
data consist of 110 hours of audio material aligned to over 50k parallel
sentences. An even larger dataset comprising 547 hours of German speech aligned
to German text is available for speech recognition. The audio data is read
speech and thus low in disfluencies. The quality of audio and sentence
alignments has been checked by a manual evaluation, showing that speech
alignment quality is in general very high. The sentence alignment quality is
comparable to well-used parallel translation data and can be adjusted by
cutoffs on the automatic alignment score. To our knowledge, this corpus is to
date the largest resource for German speech recognition and for end-to-end
German-to-English speech translation.
| 2,020 | Computation and Language |
PLATO: Pre-trained Dialogue Generation Model with Discrete Latent
Variable | Pre-trained models have proven effective for a wide range of natural
language processing tasks. Inspired by this, we propose a novel dialogue
generation pre-training framework to support various kinds of conversations,
including chit-chat, knowledge grounded dialogues, and conversational question
answering. In this framework, we adopt flexible attention mechanisms to fully
leverage the bi-directional context and the uni-directional characteristic of
language generation. We also introduce discrete latent variables to tackle the
inherent one-to-many mapping problem in response generation. Two reciprocal
tasks of response generation and latent act recognition are designed and
carried out simultaneously within a shared network. Comprehensive experiments
on three publicly available datasets verify the effectiveness and superiority
of the proposed framework.
| 2,020 | Computation and Language |
Cross-lingual Parsing with Polyglot Training and Multi-treebank
Learning: A Faroese Case Study | Cross-lingual dependency parsing involves transferring syntactic knowledge
from one language to another. It is a crucial component for inducing dependency
parsers in low-resource scenarios where no training data for a language exists.
Using Faroese as the target language, we compare two approaches using
annotation projection: first, projecting from multiple monolingual source
models; second, projecting from a single polyglot model which is trained on the
combination of all source languages. Furthermore, we reproduce multi-source
projection (Tyers et al., 2018), in which dependency trees of multiple sources
are combined. Finally, we apply multi-treebank modelling to the projected
treebanks, in addition to or alternatively to polyglot modelling on the source
side. We find that polyglot training on the source languages produces an
overall trend of better results on the target language but the single best
result for the target language is obtained by projecting from monolingual
source parsing models and then training multi-treebank POS tagging and parsing
models on the target side.
| 2,019 | Computation and Language |
Universal Text Representation from BERT: An Empirical Study | We present a systematic investigation of layer-wise BERT activations for
general-purpose text representations to understand what linguistic information
they capture and how transferable they are across different tasks.
Sentence-level embeddings are evaluated against two state-of-the-art models on
downstream and probing tasks from SentEval, while passage-level embeddings are
evaluated on four question-answering (QA) datasets under a learning-to-rank
problem setting. Embeddings from the pre-trained BERT model perform poorly in
semantic similarity and sentence surface information probing tasks. Fine-tuning
BERT on natural language inference data greatly improves the quality of the
embeddings. Combining embeddings from different BERT layers can further boost
performance. BERT embeddings outperform the BM25 baseline significantly on factoid
QA datasets at the passage level, but fail to perform better than BM25 on
non-factoid datasets. For all QA datasets, there is a gap between
embedding-based methods and in-domain fine-tuned BERT (we report new
state-of-the-art results on two datasets), which suggests deep interactions
between question and answer pairs are critical for those hard tasks.
| 2,019 | Computation and Language |
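A short sketch of how layer-wise BERT activations can be extracted for this kind of study, assuming the HuggingFace `transformers` library is available; mean pooling over tokens is one simple pooling choice and is not necessarily the one used in the paper.

```python
# Extract per-layer sentence embeddings from a pre-trained BERT model.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("An example sentence.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: embedding layer output plus one tensor per transformer layer.
layer_embeddings = [h.mean(dim=1).squeeze(0) for h in outputs.hidden_states]
print(len(layer_embeddings), layer_embeddings[-1].shape)  # 13, torch.Size([768])
```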
Marpa, A practical general parser: the recognizer | The Marpa recognizer is described. Marpa is a practical and fully implemented
algorithm for the recognition, parsing and evaluation of context-free grammars.
The Marpa recognizer is the first to unite the improvements to Earley's
algorithm found in Joop Leo's 1991 paper to those in Aycock and Horspool's 2002
paper. Marpa tracks the full state of the parse, as it proceeds, in a form
convenient for the application. This greatly improves error detection and
enables event-driven parsing. One such technique is "Ruby Slippers" parsing, in
which the input is altered in response to the parser's expectations.
| 2,023 | Computation and Language |
Explainable Authorship Verification in Social Media via Attention-based
Similarity Learning | Authorship verification is the task of analyzing the linguistic patterns of
two or more texts to determine whether they were written by the same author or
not. The analysis is traditionally performed by experts who consider linguistic
features such as spelling mistakes, grammatical inconsistencies, and
stylistic choices. Machine learning algorithms, on the other hand, can be
trained to accomplish the same, but have traditionally relied on so-called
stylometric features. The disadvantage of such features is that their
reliability is greatly diminished for short and topically varied social media
texts. In this interdisciplinary work, we propose a substantial extension of a
recently published hierarchical Siamese neural network approach, with which it
is feasible to learn neural features and to visualize the decision-making
process. For this purpose, a new large-scale corpus of short Amazon reviews for
text comparison research is compiled and we show that the Siamese network
topologies outperform state-of-the-art approaches that were built up on
stylometric features. Our linguistic analysis of the internal attention weights
of the network shows that the proposed method is indeed able to latch on to
some traditional linguistic categories.
| 2,019 | Computation and Language |
SetExpan: Corpus-Based Set Expansion via Context Feature Selection and
Rank Ensemble | Corpus-based set expansion (i.e., finding the "complete" set of entities
belonging to the same semantic class, based on a given corpus and a tiny set of
seeds) is a critical task in knowledge discovery. It may facilitate numerous
downstream applications, such as information extraction, taxonomy induction,
question answering, and web search. To discover new entities in an expanded
set, previous approaches either make one-time entity ranking based on
distributional similarity, or resort to iterative pattern-based bootstrapping.
The core challenge for these methods is how to deal with noisy context features
derived from free-text corpora, which may lead to entity intrusion and semantic
drifting. In this study, we propose a novel framework, SetExpan, which tackles
this problem, with two techniques: (1) a context feature selection method that
selects clean context features for calculating entity-entity distributional
similarity, and (2) a ranking-based unsupervised ensemble method for expanding
entity set based on denoised context features. Experiments on three datasets
show that SetExpan is robust and outperforms previous state-of-the-art methods
in terms of mean average precision.
| 2,019 | Computation and Language |
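The core ranking idea behind corpus-based set expansion can be shown with a toy sketch: candidate entities are scored by their average similarity to the seed entities in a context-feature space. The feature matrix below is made up for illustration, and SetExpan's context feature selection and rank ensemble steps are omitted.

```python
# Toy set expansion: rank candidates by average cosine similarity to the seeds.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Rows: entities; columns: hypothetical skip-gram context features.
entities = ["paris", "berlin", "madrid", "banana", "apple"]
features = np.array([
    [5, 4, 0, 1],   # paris
    [4, 5, 1, 0],   # berlin
    [5, 3, 0, 1],   # madrid
    [0, 1, 6, 5],   # banana
    [1, 0, 5, 6],   # apple
], dtype=float)

seeds = {"paris", "berlin"}
seed_idx = [entities.index(s) for s in seeds]

scores = {
    e: np.mean([cosine(features[i], features[j]) for j in seed_idx])
    for i, e in enumerate(entities) if e not in seeds
}
print(sorted(scores.items(), key=lambda kv: -kv[1]))  # "madrid" ranks first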
HiExpan: Task-Guided Taxonomy Construction by Hierarchical Tree
Expansion | Taxonomies are of great value to many knowledge-rich applications. As the
manual taxonomy curation costs enormous human effort, automatic taxonomy
construction is in great demand. However, most existing automatic taxonomy
construction methods can only build hypernymy taxonomies wherein each edge is
limited to expressing the "is-a" relation. Such a restriction limits their
applicability to more diverse real-world tasks where parent-child pairs may carry
different relations. In this paper, we aim to construct a task-guided taxonomy
from a domain-specific corpus and allow users to input a "seed" taxonomy,
serving as the task guidance. We propose an expansion-based taxonomy
construction framework, namely HiExpan, which automatically generates key term
list from the corpus and iteratively grows the seed taxonomy. Specifically,
HiExpan views all children under each taxonomy node forming a coherent set and
builds the taxonomy by recursively expanding all these sets. Furthermore,
HiExpan incorporates a weakly-supervised relation extraction module to extract
the initial children of a newly-expanded node and adjusts the taxonomy tree by
optimizing its global structure. Our experiments on three real datasets from
different domains demonstrate the effectiveness of HiExpan for building
task-guided taxonomies.
| 2,019 | Computation and Language |
RTFM: Generalising to Novel Environment Dynamics via Reading | Obtaining policies that can generalise to new environments in reinforcement
learning is challenging. In this work, we demonstrate that language
understanding via a reading policy learner is a promising vehicle for
generalisation to new environments. We propose a grounded policy learning
problem, Read to Fight Monsters (RTFM), in which the agent must jointly reason
over a language goal, relevant dynamics described in a document, and
environment observations. We procedurally generate environment dynamics and
corresponding language descriptions of the dynamics, such that agents must read
to understand new environment dynamics instead of memorising any particular
information. In addition, we propose txt2$\pi$, a model that captures three-way
interactions between the goal, document, and observations. On RTFM, txt2$\pi$
generalises to new environments with dynamics not seen during training via
reading. Furthermore, our model outperforms baselines such as FiLM and
language-conditioned CNNs on RTFM. Through curriculum learning, txt2$\pi$
produces policies that excel on complex RTFM tasks requiring several reasoning
and coreference steps.
| 2,021 | Computation and Language |
Relational Graph Representation Learning for Open-Domain Question
Answering | We introduce a relational graph neural network with bi-directional attention
mechanism and hierarchical representation learning for open-domain question
answering task. Our model can learn contextual representation by jointly
learning and updating the query, knowledge graph, and document representations.
The experiments suggest that our model achieves state-of-the-art results on the
WebQuestionsSP benchmark.
| 2,019 | Computation and Language |
Learning to Answer Subjective, Specific Product-Related Queries using
Customer Reviews by Adversarial Domain Adaptation | Online customer reviews on large-scale e-commerce websites, represent a rich
and varied source of opinion data, often providing subjective qualitative
assessments of product usage that can help potential customers to discover
features that meet their personal needs and preferences. Thus they have the
potential to automatically answer specific queries about products, and to
address the problems of answer starvation and answer augmentation on associated
consumer Q & A forums, by providing good answer alternatives. In this work, we
explore several recently successful neural approaches to modeling sentence
pairs, that could better learn the relationship between questions and ground
truth answers, and thus help infer reviews that can best answer a question or
augment a given answer. In particular, we hypothesize that our adversarial
domain adaptation-based approach, due to its ability to additionally learn
domain-invariant features from a large number of unlabeled, unpaired
question-review samples, would perform better than our proposed baselines, at
answering specific, subjective product-related queries using reviews. We
validate this hypothesis using a small gold standard dataset of question-review
pairs evaluated by human experts, significantly surpassing our chosen
baselines. Moreover, our approach, using no labeled question-review sentence
pair data for training, gives performance on par with another method utilizing
labeled question-review samples for the same task.
| 2,019 | Computation and Language |
Unsupervised Context Rewriting for Open Domain Conversation | Context modeling has a pivotal role in open domain conversation. Existing
works either use heuristic methods or jointly learn context modeling and
response generation with an encoder-decoder framework. This paper proposes an
explicit context rewriting method, which rewrites the last utterance by
considering context history. We leverage pseudo-parallel data and elaborate a
context rewriting network, which is built upon the CopyNet with the
reinforcement learning method. The rewritten utterance is beneficial to
candidate retrieval and explainable context modeling, and it enables the use
of a single-turn framework in the multi-turn scenario. The empirical
results show that our model outperforms baselines in terms of the rewriting
quality, the multi-turn response generation, and the end-to-end retrieval-based
chatbots.
| 2,019 | Computation and Language |
ALOHA: Artificial Learning of Human Attributes for Dialogue Agents | For conversational AI and virtual assistants to communicate with humans in a
realistic way, they must exhibit human characteristics such as expression of
emotion and personality. Current attempts toward constructing human-like
dialogue agents have presented significant difficulties. We propose Human Level
Attributes (HLAs) based on tropes as the basis of a method for learning
dialogue agents that can imitate the personalities of fictional characters.
Tropes are characteristics of fictional personalities that are observed
recurrently and determined by viewers' impressions. By combining detailed HLA
data with dialogue data for specific characters, we present a dataset,
HLA-Chat, that models character profiles and gives dialogue agents the ability
to learn characters' language styles through their HLAs. We then introduce a
three-component system, ALOHA (which stands for Artificial Learning of Human
Attributes), that combines character space mapping, character community
detection, and language style retrieval to build a character (or personality)
specific language model. Our preliminary experiments demonstrate that two
variations of ALOHA, combined with our proposed dataset, can outperform
baseline models at identifying the correct dialogue responses of chosen target
characters, and are stable regardless of the character's identity, the genre of
the show, and the context of the dialogue.
| 2,021 | Computation and Language |
Towards Computing Inferences from English News Headlines | Newspapers are a popular form of written discourse, read by many people,
thanks to the novelty of the information provided by the news content. A
headline is the most widely read part of any newspaper due to its appearance in
a bigger font and sometimes in colour print. In this paper, we suggest and
implement a method for computing inferences from English news headlines,
excluding the information from the context in which the headlines appear. This
method attempts to generate the possible assumptions a reader formulates in
mind upon reading a fresh headline. The generated inferences could be useful
for assessing the impact of the news headline on readers including children.
The understandability of the current state of social affairs depends greatly on
the assimilation of the headlines. As the inferences that are independent of
the context depend mainly on the syntax of the headline, dependency trees of
headlines are used in this approach to determine the syntactic structure of the
headlines and to compute inferences from them.
| 2,019 | Computation and Language |
A Mutual Information Maximization Perspective of Language Representation
Learning | We show state-of-the-art word representation learning methods maximize an
objective function that is a lower bound on the mutual information between
different parts of a word sequence (i.e., a sentence). Our formulation provides
an alternative perspective that unifies classical word embedding models (e.g.,
Skip-gram) and modern contextual embeddings (e.g., BERT, XLNet). In addition to
enhancing our theoretical understanding of these methods, our derivation leads
to a principled framework that can be used to construct new self-supervised
tasks. We provide an example by drawing inspiration from related methods based
on mutual information maximization that have been successful in computer
vision, and introduce a simple self-supervised objective that maximizes the
mutual information between a global sentence representation and n-grams in the
sentence. Our analysis offers a holistic view of representation learning
methods to transfer knowledge and translate progress across multiple domains
(e.g., natural language processing, computer vision, audio processing).
| 2,019 | Computation and Language |
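The mutual-information view above is commonly operationalized with an InfoNCE-style objective, of which the following is a minimal numpy sketch: paired global and local representations score highly against each other and low against other pairs in the batch. The shapes and the dot-product scoring function are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal InfoNCE-style lower bound on mutual information between paired representations.
import numpy as np

def info_nce(global_reps, local_reps):
    """Row i of each matrix forms a positive pair; the other rows act as negatives."""
    logits = global_reps @ local_reps.T                   # (B, B) pairwise scores
    logits -= logits.max(axis=1, keepdims=True)           # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))                 # cross-entropy on the diagonal

rng = np.random.default_rng(0)
B, d = 8, 16
sentence_reps = rng.normal(size=(B, d))                      # e.g. global sentence vectors
ngram_reps = sentence_reps + 0.1 * rng.normal(size=(B, d))   # matching n-gram vectors
print(f"InfoNCE loss: {info_nce(sentence_reps, ngram_reps):.3f}")
```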
Model Compression with Two-stage Multi-teacher Knowledge Distillation
for Web Question Answering System | Deep pre-training and fine-tuning models (such as BERT and OpenAI GPT) have
demonstrated excellent results in question answering areas. However, due to the
sheer number of model parameters, the inference speed of these models is very
slow. How to apply these complex models to real business scenarios becomes a
challenging but practical problem. Previous model compression methods usually
suffer from information loss during the model compression procedure, leading to
inferior models compared with the original one. To tackle this challenge, we
propose a Two-stage Multi-teacher Knowledge Distillation (TMKD for short)
method for web Question Answering system. We first develop a general Q\&A
distillation task for student model pre-training, and further fine-tune this
pre-trained student model with multi-teacher knowledge distillation on
downstream tasks (like Web Q\&A task, MNLI, SNLI, RTE tasks from GLUE), which
effectively reduces the overfitting bias in individual teacher models, and
transfers more general knowledge to the student model. The experiment results
show that our method can significantly outperform the baseline methods and even
achieve comparable results with the original teacher models, along with
substantial speedup of model inference.
| 2,019 | Computation and Language |
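A minimal sketch of a multi-teacher distillation loss, in which the student is trained toward the averaged softened predictions of several teachers. The temperature, the simple averaging, and the cross-entropy/KL form are common choices assumed here for illustration; they are not necessarily TMKD's exact formulation.

```python
# Multi-teacher knowledge distillation loss (soft targets averaged over teachers).
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_teacher_kd_loss(student_logits, teacher_logits_list, T=2.0):
    teacher_probs = np.mean([softmax(t, T) for t in teacher_logits_list], axis=0)
    student_log_probs = np.log(softmax(student_logits, T) + 1e-12)
    # Cross-entropy against the averaged teacher distribution (KL up to a constant),
    # scaled by T^2 as is conventional in distillation.
    return -np.sum(teacher_probs * student_log_probs, axis=-1).mean() * T**2

rng = np.random.default_rng(0)
student = rng.normal(size=(4, 3))                        # batch of 4, 3 classes (assumed)
teachers = [rng.normal(size=(4, 3)) for _ in range(3)]   # logits from 3 teacher models
print(f"distillation loss: {multi_teacher_kd_loss(student, teachers):.3f}")
```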
Controlling Utterance Length in NMT-based Word Segmentation with
Attention | One of the basic tasks of computational language documentation (CLD) is to
identify word boundaries in an unsegmented phonemic stream. While several
unsupervised monolingual word segmentation algorithms exist in the literature,
they are challenged in real-world CLD settings by the small amount of available
data. A possible remedy is to take advantage of glosses or translation in a
foreign, well-resourced, language, which often exist for such data. In this
paper, we explore and compare ways to exploit neural machine translation models
to perform unsupervised boundary detection with bilingual information, notably
introducing a new loss function for jointly learning alignment and
segmentation. We experiment with an actual under-resourced language, Mboshi,
and show that these techniques can effectively control the output segmentation
length.
| 2,019 | Computation and Language |
Using Local Knowledge Graph Construction to Scale Seq2Seq Models to
Multi-Document Inputs | Query-based open-domain NLP tasks require information synthesis from long and
diverse web results. Current approaches extractively select portions of web
text as input to Sequence-to-Sequence models using methods such as TF-IDF
ranking. We propose constructing a local graph structured knowledge base for
each query, which compresses the web search information and reduces redundancy.
We show that by linearizing the graph into a structured input sequence, models
can encode the graph representations within a standard Sequence-to-Sequence
setting. For two generative tasks with very long text input, long-form question
answering and multi-document summarization, feeding graph representations as
input can achieve better performance than using retrieved text portions.
| 2,019 | Computation and Language |
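To illustrate the linearization step mentioned above, here is a toy sketch that serializes (subject, relation, object) triples from a local graph into a single structured input string for a standard sequence-to-sequence model. The delimiter tokens are illustrative assumptions, not the paper's exact serialization scheme.

```python
# Toy linearization of a local knowledge graph into a seq2seq input string.
def linearize_graph(triples):
    """Turn (subject, relation, object) triples into one flat input sequence."""
    return " ".join(f"<s> {s} <r> {r} <o> {o}" for s, r, o in triples)

local_graph = [
    ("marie curie", "field", "physics"),
    ("marie curie", "award", "nobel prize"),
    ("nobel prize", "awarded in", "1903"),
]
print(linearize_graph(local_graph))
# <s> marie curie <r> field <o> physics <s> marie curie <r> award <o> nobel prize ...
```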
Concept Pointer Network for Abstractive Summarization | A quality abstractive summary should not only copy salient source texts as
summaries but should also tend to generate new conceptual words to express
concrete details. Inspired by the popular pointer generator
sequence-to-sequence model, this paper presents a concept pointer network for
improving these aspects of abstractive summarization. The network leverages
knowledge-based, context-aware conceptualizations to derive an extended set of
candidate concepts. The model then points to the most appropriate choice using
both the concept set and original source text. This joint approach generates
abstractive summaries with higher-level semantic concepts. The training model
is also optimized in a way that adapts to different data, which is based on a
novel method of distantly-supervised learning guided by reference summaries and
the testing set. Overall, the proposed approach provides statistically significant
improvements over several state-of-the-art models on both the DUC-2004 and
Gigaword datasets. A human evaluation of the model's abstractive abilities also
supports the quality of the summaries produced within this framework.
| 2,019 | Computation and Language |
End-to-End Speech Recognition: A review for the French Language | Recently, end-to-end ASR based either on sequence-to-sequence networks or on
the CTC objective function gained a lot of interest from the community,
achieving competitive results over traditional systems using robust but complex
pipelines. One of the main features of end-to-end systems, in addition to the
ability to free themselves from extra linguistic resources such as dictionaries
or language models, is the capacity to model acoustic units such as characters,
subwords or directly words; opening up the capacity to directly translate
speech with different representations or levels of knowledge depending on the
target language. In this paper we propose a review of the existing end-to-end
ASR approaches for the French language. We compare results to conventional
state-of-the-art ASR systems and discuss which units are more suited to model
the French language.
| 2,019 | Computation and Language |
Many Faces of Feature Importance: Comparing Built-in and Post-hoc
Feature Importance in Text Classification | Feature importance is commonly used to explain machine predictions. While
feature importance can be derived from a machine learning model with a variety
of methods, the consistency of feature importance via different methods remains
understudied. In this work, we systematically compare feature importance from
built-in mechanisms in a model such as attention values and post-hoc methods
that approximate model behavior such as LIME. Using text classification as a
testbed, we find that 1) no matter which method we use, important features from
traditional models such as SVM and XGBoost are more similar to each other
than to those from deep learning models; 2) post-hoc methods tend to generate more
similar important features for two models than built-in methods. We further
demonstrate how such similarity varies across instances. Notably, important
features do not always resemble each other better when two models agree on the
predicted label than when they disagree.
| 2,019 | Computation and Language |
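Comparisons of this kind are often quantified with rank correlation and top-k overlap between two importance vectors. The sketch below uses synthetic importance scores (the variable names and the 0.3 noise level are assumptions); the metrics themselves, Spearman correlation and Jaccard overlap, are standard.

```python
# Compare two feature-importance rankings (e.g. built-in vs. post-hoc) for one instance.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
features = [f"f{i}" for i in range(20)]
importance_builtin = rng.random(20)                              # e.g. attention-derived scores
importance_posthoc = importance_builtin + 0.3 * rng.random(20)   # e.g. LIME scores

rho, _ = spearmanr(importance_builtin, importance_posthoc)       # rank correlation

k = 5
top_a = set(np.argsort(-importance_builtin)[:k])
top_b = set(np.argsort(-importance_posthoc)[:k])
jaccard = len(top_a & top_b) / len(top_a | top_b)                # top-k overlap

print(f"Spearman rho: {rho:.2f}, top-{k} Jaccard overlap: {jaccard:.2f}")
```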
Towards Learning Cross-Modal Perception-Trace Models | Representation learning is a key element of state-of-the-art deep learning
approaches. It enables the transformation of raw data into structured vector space
embeddings. Such embeddings are able to capture the distributional semantics of
their context, e.g. by word windows on natural language sentences, graph walks
on knowledge graphs or convolutions on images. So far, this context is manually
defined, resulting in heuristics which are solely optimized for computational
performance on certain tasks like link-prediction. However, such heuristic
models of context are fundamentally different to how humans capture
information. For instance, when reading a multi-modal webpage (i) humans do not
perceive all parts of a document equally: Some words and parts of images are
skipped, others are revisited several times which makes the perception trace
highly non-sequential; (ii) humans construct meaning from a document's content
by shifting their attention between text and image, among other things, guided
by layout and design elements. In this paper we empirically investigate the
difference between human perception and context heuristics of basic embedding
models. We conduct eye tracking experiments to capture the underlying
characteristics of human perception of media documents containing a mixture of
text and images. Based on that, we devise a prototypical computational
perception-trace model, called CMPM. We evaluate empirically how CMPM can
improve a basic skip-gram embedding approach. Our results suggest that even
with a basic human-inspired computational perception model, there is a huge
potential for improving embeddings since such a model does inherently capture
multiple modalities, as well as layout and design elements.
| 2,019 | Computation and Language |
Automatic Post-Editing for Machine Translation | Automatic Post-Editing (APE) aims to correct systematic errors in a machine
translated text. This is primarily useful when the machine translation (MT)
system is not accessible for improvement, leaving APE as a viable option to
improve translation quality as a downstream task - which is the focus of this
thesis. This field has received less attention compared to MT due to several
reasons, which include: the limited availability of data to perform a sound
research, contrasting views reported by different researchers about the
effectiveness of APE, and limited attention from the industry to use APE in
current production pipelines. In this thesis, we perform a thorough
investigation of APE as a downstream task in order to: i) understand its
potential to improve translation quality; ii) advance the core technology -
starting from classical methods to recent deep-learning based solutions; iii)
cope with limited and sparse data; iv) better leverage multiple input sources;
v) mitigate the task-specific problem of over-correction; vi) enhance neural
decoding to leverage external knowledge; and vii) establish an online learning
framework to handle data diversity in real-time. All the above contributions
are discussed across several chapters, and most of them are evaluated in the
APE shared task organized each year at the Conference on Machine Translation.
Our efforts in improving the technology resulted in the best system at the 2017
APE shared task, and our work on online learning received a distinguished paper
award at the Italian Conference on Computational Linguistics. Overall, outcomes
and findings of our work have boosted interest among researchers and attracted
industries to examine this technology to solve real-world problems.
| 2,019 | Computation and Language |
Sticking to the Facts: Confident Decoding for Faithful Data-to-Text
Generation | We address the issue of hallucination in data-to-text generation, i.e.,
reducing the generation of text that is unsupported by the source. We
conjecture that hallucination can be caused by an encoder-decoder model
generating content phrases without attending to the source; so we propose a
confidence score to ensure that the model attends to the source whenever
necessary, as well as a variational Bayes training framework that can learn the
score from data. Experiments on the WikiBio (Lebret et al., 2016) dataset show
that our approach is more faithful to the source than existing state-of-the-art
approaches, according to both PARENT score (Dhingra et al., 2019) and human
evaluation. We also report strong results on the WebNLG (Gardent et al., 2017)
dataset.
| 2,020 | Computation and Language |
An Improved Historical Embedding without Alignment | Many words have evolved in meaning as a result of cultural and social change.
Understanding such changes is crucial for modelling language and cultural
evolution. Low-dimensional embedding methods have shown promise in detecting
words' meaning change by encoding them into dense vectors. However, when
exploring semantic change of words over time, these methods require the
alignment of word embeddings across different time periods. This process is
computationally expensive, prohibitively time consuming, and suffers from
contextual variability. In this paper, we propose a new and scalable method for
encoding words from different time periods into one dense vector space. This
can greatly improve performance when it comes to identifying words that have
changed in meaning over time. We evaluated our method on a dataset from the
Google Books N-gram corpus. Our method outperformed three other popular methods in terms of
the number of words correctly identified to have changed in meaning.
Additionally, we provide an intuitive visualization of the semantic evolution
of some words extracted by our method.
| 2,019 | Computation and Language |
MonaLog: a Lightweight System for Natural Language Inference Based on
Monotonicity | We present a new logic-based inference engine for natural language inference
(NLI) called MonaLog, which is based on natural logic and the monotonicity
calculus. In contrast to existing logic-based approaches, our system is
intentionally designed to be as lightweight as possible, and operates using a
small set of well-known (surface-level) monotonicity facts about quantifiers,
lexical items and token-level polarity information. Despite its simplicity, we
find our approach to be competitive with other logic-based NLI models on the
SICK benchmark. We also use MonaLog in combination with the current
state-of-the-art model BERT in a variety of settings, including for
compositional data augmentation. We show that MonaLog is capable of generating
large amounts of high-quality training data for BERT, improving its accuracy on
SICK.
| 2,019 | Computation and Language |
Natural Question Generation with Reinforcement Learning Based
Graph-to-Sequence Model | Natural question generation (QG) aims to generate questions from a passage
and an answer. In this paper, we propose a novel reinforcement learning (RL)
based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a
Graph2Seq generator where a novel Bidirectional Gated Graph Neural Network is
proposed to embed the passage, and a hybrid evaluator with a mixed objective
combining both cross-entropy and RL losses to ensure the generation of
syntactically and semantically valid text. The proposed model outperforms
previous state-of-the-art methods by a large margin on the SQuAD dataset.
| 2,020 | Computation and Language |
Keyphrase Extraction from Scholarly Articles as Sequence Labeling using
Contextualized Embeddings | In this paper, we formulate keyphrase extraction from scholarly articles as a
sequence labeling task solved using a BiLSTM-CRF, where the words in the input
text are represented using deep contextualized embeddings. We evaluate the
proposed architecture using both contextualized and fixed word embedding models
on three different benchmark datasets (Inspec, SemEval 2010, SemEval 2017) and
compare with existing popular unsupervised and supervised techniques. Our
results quantify the benefits of (a) using contextualized embeddings (e.g.
BERT) over fixed word embeddings (e.g. GloVe); (b) using a BiLSTM-CRF
architecture with contextualized word embeddings over fine-tuning the
contextualized word embedding model directly, and (c) using genre-specific
contextualized embeddings (SciBERT). Through error analysis, we also provide
some insights into why particular models work better than others. Lastly, we
present a case study where we analyze different self-attention layers of the
two best models (BERT and SciBERT) to better understand the predictions made by
each for the task of keyphrase extraction.
| 2,019 | Computation and Language |
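The sequence-labeling framing above amounts to projecting gold keyphrases onto the token sequence as BIO tags, which is the format a BiLSTM-CRF tagger is trained on. The sketch below uses naive whitespace tokenization and the tag names B-KP/I-KP/O purely for illustration.

```python
# Convert gold keyphrases into BIO tags over a token sequence.
def to_bio(tokens, keyphrases):
    tags = ["O"] * len(tokens)
    for phrase in keyphrases:
        p = phrase.split()
        for i in range(len(tokens) - len(p) + 1):
            if [t.lower() for t in tokens[i:i + len(p)]] == [w.lower() for w in p]:
                tags[i] = "B-KP"
                for j in range(i + 1, i + len(p)):
                    tags[j] = "I-KP"
    return tags

tokens = "We evaluate contextualized embeddings for keyphrase extraction .".split()
gold = ["contextualized embeddings", "keyphrase extraction"]
print(list(zip(tokens, to_bio(tokens, gold))))
```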
Improving Sequence Modeling Ability of Recurrent Neural Networks via
Sememes | Sememes, the minimum semantic units of human languages, have been
successfully utilized in various natural language processing applications.
However, most existing studies exploit sememes in specific tasks and few
efforts are made to utilize sememes more fundamentally. In this paper, we
propose to incorporate sememes into recurrent neural networks (RNNs) to improve
their sequence modeling ability, which is beneficial to all kinds of downstream
tasks. We design three different sememe incorporation methods and employ them
in typical RNNs including LSTM, GRU and their bidirectional variants. In
evaluation, we use several benchmark datasets involving PTB and WikiText-2 for
language modeling, SNLI for natural language inference and another two datasets
for sentiment analysis and paraphrase detection. Experimental results show
evident and consistent improvement of our sememe-incorporated models compared
with vanilla RNNs, which proves the effectiveness of our sememe incorporation
methods. Moreover, we find the sememe-incorporated models have higher
robustness and outperform adversarial training in defending against adversarial attacks.
All the code and data of this work can be obtained at
https://github.com/thunlp/SememeRNN.
| 2,020 | Computation and Language |
PT-CoDE: Pre-trained Context-Dependent Encoder for Utterance-level
Emotion Recognition | Utterance-level emotion recognition (ULER) is a significant research topic
for understanding human behaviors and developing empathetic chatting machines
in the artificial intelligence area. Unlike traditional text classification
problem, this task is supported by a limited number of datasets, among which
most contain inadequate conversations or speeches. Such a data scarcity issue
limits the possibility of training larger and more powerful models for this
task. Witnessing the success of transfer learning in natural language processing
(NLP), we propose to pre-train a context-dependent encoder (CoDE) for ULER by
learning from unlabeled conversation data. Essentially, CoDE is a hierarchical
architecture that contains an utterance encoder and a conversation encoder,
making it different from those works that aim to pre-train a universal sentence
encoder. Also, we propose a new pre-training task named "conversation
completion" (CoCo), which attempts to select the correct answer from candidate
answers to fill a masked utterance in a question conversation. The CoCo task is
carried out on pure movie subtitles so that our CoDE can be pre-trained in an
unsupervised fashion. Finally, the pre-trained CoDE (PT-CoDE) is fine-tuned for
ULER and boosts the model performance significantly on five datasets.
| 2,019 | Computation and Language |
Predicting the Leading Political Ideology of YouTube Channels Using
Acoustic, Textual, and Metadata Information | We address the problem of predicting the leading political ideology, i.e.,
left-center-right bias, for YouTube channels of news media. Previous work on
the problem has focused exclusively on text and on analysis of the language
used, topics discussed, sentiment, and the like. In contrast, here we study
videos, which yields an interesting multimodal setup. Starting with gold
annotations about the leading political ideology of major world news media from
Media Bias/Fact Check, we searched on YouTube to find their corresponding
channels, and we downloaded a recent sample of videos from each channel. We
crawled more than 1,000 YouTube hours along with the corresponding subtitles
and metadata, thus producing a new multimodal dataset. We further developed a
multimodal deep-learning architecture for the task. Our analysis shows that the
use of acoustic signal helped to improve bias detection by more than 6%
absolute over using text and metadata only. We release the dataset to the
research community, hoping to help advance the field of multi-modal political
bias detection.
| 2,019 | Computation and Language |
Byte-Pair Encoding for Text-to-SQL Generation | Neural sequence-to-sequence models provide a competitive approach to the task
of mapping a question in natural language to an SQL query, also referred to as
text-to-SQL generation. The Byte-Pair Encoding algorithm (BPE) has previously
been used to improve machine translation (MT) between natural languages. In
this work, we adapt BPE for text-to-SQL generation. As the datasets for this
task are rather small compared to MT, we present a novel stopping criterion
that prevents overfitting the BPE encoding to the training set. Additionally,
we present AST BPE, which is a version of BPE that uses the Abstract Syntax
Tree (AST) of the SQL statement to guide BPE merges and therefore produce BPE
encodings that generalize better. We improved the accuracy of a strong
attentive seq2seq baseline on five out of six English text-to-SQL tasks while
reducing training time by more than 50% on four of them due to the shortened
targets. Finally, on two of these tasks we exceeded previously reported
accuracies.
| 2,019 | Computation and Language |
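For reference, the core Byte-Pair Encoding merge loop (after Sennrich et al.) can be sketched compactly: repeatedly merge the most frequent adjacent symbol pair in the vocabulary. The SQL-specific vocabulary, the paper's novel stopping criterion, and the AST-guided merges are omitted; a fixed number of merges is assumed here for brevity.

```python
# Compact sketch of learning BPE merges from a token list.
from collections import Counter

def learn_bpe(corpus_tokens, num_merges):
    # Each word is represented as a tuple of symbols (initially single characters).
    vocab = Counter(tuple(word) for word in corpus_tokens)
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)          # most frequent adjacent pair
        merges.append(best)
        new_vocab = Counter()
        for symbols, freq in vocab.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges

print(learn_bpe(["select", "selected", "selection"], num_merges=5))
```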
Diamonds in the Rough: Generating Fluent Sentences from Early-Stage
Drafts for Academic Writing Assistance | The writing process consists of several stages such as drafting, revising,
editing, and proofreading. Studies on writing assistance, such as grammatical
error correction (GEC), have mainly focused on sentence editing and
proofreading, where surface-level issues such as typographical, spelling, or
grammatical errors should be corrected. We broaden this focus to include the
earlier revising stage, where sentences require adjustment to the information
included or major rewriting and propose Sentence-level Revision (SentRev) as a
new writing assistance task. Well-performing systems in this task can help
inexperienced authors by producing fluent, complete sentences given their
rough, incomplete drafts. We build a new freely available crowdsourced
evaluation dataset consisting of incomplete sentences authored by non-native
writers paired with their final versions extracted from published academic
papers for developing and evaluating SentRev models. We also establish baseline
performance on SentRev using our newly built evaluation dataset.
| 2,019 | Computation and Language |
Semantic Graph Convolutional Network for Implicit Discourse Relation
Classification | Implicit discourse relation classification is of great importance for
discourse parsing, but remains a challenging problem due to the absence of
explicit discourse connectives communicating these relations. Modeling the
semantic interactions between the two arguments of a relation has proven useful
for detecting implicit discourse relations. However, most previous approaches
model such semantic interactions from a shallow interactive level, which is
inadequate for capturing enough semantic information. In this paper, we propose
a novel and effective Semantic Graph Convolutional Network (SGCN) to enhance
the modeling of inter-argument semantics on a deeper interaction level for
implicit discourse relation classification. We first build an interaction graph
over representations of the two arguments, and then automatically extract
in-depth semantic interactive information through graph convolution.
Experimental results on the English corpus PDTB and the Chinese corpus CDTB
both demonstrate the superiority of our model to previous state-of-the-art
systems.
| 2,019 | Computation and Language |
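The graph convolution step at the heart of an SGCN-style model can be sketched minimally as H' = ReLU(A_hat H W) with a symmetrically normalized adjacency matrix. The random graph, feature dimensions, and single-layer setup below are illustrative assumptions, not the paper's full architecture.

```python
# One graph convolution layer with symmetric normalization and self-loops.
import numpy as np

def gcn_layer(A, H, W):
    """Compute ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                      # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

rng = np.random.default_rng(0)
n_nodes, d_in, d_out = 6, 16, 8
A = (rng.random((n_nodes, n_nodes)) > 0.6).astype(float)
A = np.maximum(A, A.T)                                  # make the interaction graph undirected
H = rng.normal(size=(n_nodes, d_in))                    # node (token/argument) features
W = rng.normal(size=(d_in, d_out))                      # layer weights

print(gcn_layer(A, H, W).shape)                         # (6, 8)
```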