Titles | Abstracts | Years | Categories
---|---|---|---|
Subjective Question Answering: Deciphering the inner workings of
Transformers in the realm of subjectivity | Understanding subjectivity demands reasoning skills beyond the realm of
common knowledge. It requires a machine learning model to process sentiment and
to perform opinion mining. In this work, I've exploited a recently released
dataset for span-selection Question Answering, namely SubjQA. SubjQA is the
first QA dataset that contains questions that ask for subjective opinions
corresponding to review paragraphs from six different domains. Hence, to answer
these subjective questions, a learner must extract opinions and process
sentiment for various domains, and additionally, align the knowledge extracted
from a paragraph with the natural language utterances in the corresponding
question, which together enhance the difficulty of a QA task. The primary goal
of this thesis was to investigate the inner workings (i.e., latent
representations) of a Transformer-based architecture to contribute to a better
understanding of these not yet well understood "black-box" models.
I find that the Transformer's hidden representations with respect to the true answer span are clustered more closely in vector space than those corresponding to erroneous predictions. This observation holds across the top three Transformer layers for both objective and subjective questions and generally increases as a function of layer dimensions. Moreover, the probability of achieving high cosine similarity among the hidden representations of the true answer span tokens is significantly higher for correct than for incorrect answer span predictions. These results have decisive implications for downstream applications where it is crucial to know why a neural network made a mistake and at which point in space and time it happened (e.g., to automatically predict the correctness of an answer span prediction without requiring labeled data).
| 2,020 | Computation and Language |
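Below is a minimal Python sketch (not from the thesis) of the kind of analysis this abstract describes: measuring how tightly a Transformer's top-layer hidden states cluster over a predicted answer span via pairwise cosine similarity. The checkpoint name, the example question/context, and the idea of reporting the mean off-diagonal similarity are illustrative assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# Illustrative checkpoint; the thesis's exact model is not specified here.
name = "distilbert-base-uncased-distilled-squad"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForQuestionAnswering.from_pretrained(name).eval()

question = "How is the battery life?"
context = "The battery life is great, it easily lasts two full days."
inputs = tok(question, context, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Predicted answer span from the QA head.
start = out.start_logits.argmax(-1).item()
end = max(out.end_logits.argmax(-1).item(), start)

# Top-layer hidden states restricted to the predicted span.
span_vecs = out.hidden_states[-1][0, start:end + 1]            # (span_len, dim)
span_vecs = torch.nn.functional.normalize(span_vecs, dim=-1)

# Mean pairwise cosine similarity within the span: a tighter cluster
# (higher value) is the signal the thesis associates with correct answers.
if len(span_vecs) > 1:
    sim = span_vecs @ span_vecs.T
    mean_cos = sim[~torch.eye(len(sim), dtype=torch.bool)].mean().item()
else:
    mean_cos = 1.0  # degenerate one-token span

answer = tok.decode(inputs["input_ids"][0, start:end + 1])
print(f"answer: {answer!r}  mean cosine similarity: {mean_cos:.3f}")
```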
Wat zei je? Detecting Out-of-Distribution Translations with Variational
Transformers | We detect out-of-training-distribution sentences in Neural Machine
Translation using the Bayesian Deep Learning equivalent of Transformer models.
For this we develop a new measure of uncertainty designed specifically for long
sequences of discrete random variables -- i.e. words in the output sentence.
Our new measure of uncertainty solves a major intractability in the naive
application of existing approaches on long sentences. We use our new measure on
a Transformer model trained with dropout approximate inference. On the task of
German-English translation using WMT13 and Europarl, we show that with dropout
uncertainty our measure is able to identify when Dutch source sentences,
sentences which use the same word types as German, are given to the model
instead of German.
| 2,020 | Computation and Language |
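A rough sketch of the general idea behind the paper above: score a translation under several dropout masks (Monte Carlo dropout) and use the spread of scores as an uncertainty signal for out-of-distribution source sentences. The checkpoint, the variance-based proxy, and the example sentences are assumptions; the paper's actual measure is designed specifically for long output sequences and is not reproduced here.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

# Illustrative checkpoint; the paper trains its own German-English Transformer.
name = "Helsinki-NLP/opus-mt-de-en"
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

def dropout_score_samples(src: str, n_samples: int = 8) -> torch.Tensor:
    """Re-score one greedy translation under several dropout masks."""
    enc = tok(src, return_tensors="pt")
    with torch.no_grad():
        model.eval()
        hyp = model.generate(**enc, max_new_tokens=64)
        model.train()  # keep dropout active for Monte Carlo sampling
        samples = []
        for _ in range(n_samples):
            out = model(**enc, labels=hyp)          # teacher-forced re-scoring
            logp = torch.log_softmax(out.logits, dim=-1)
            tok_lp = logp.gather(-1, hyp.unsqueeze(-1)).squeeze(-1)
            samples.append(tok_lp.mean())           # mean token log-probability
        model.eval()
    return torch.stack(samples)

for sentence in ["Das Wetter ist heute sehr schön.",      # German (in-distribution)
                 "Wat zei je gisteren tegen je moeder?"]:  # Dutch (out-of-distribution)
    s = dropout_score_samples(sentence)
    # Simple proxy: variance across dropout samples as an uncertainty signal.
    print(f"{sentence!r}: mean={s.mean():.3f}  var={s.var():.4f}")
```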
Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually
Grounded Speech | The language acquisition literature shows that children do not build their
lexicon by segmenting the spoken input into phonemes and then building up words
from them, but rather adopt a top-down approach and start by segmenting
word-like units and then break them down into smaller units. This suggests that
the ideal way of learning a language is by starting from full semantic units.
In this paper, we investigate if this is also the case for a neural model of
Visually Grounded Speech trained on a speech-image retrieval task. We evaluated
how well such a network is able to learn a reliable speech-to-image mapping
when provided with phone, syllable, or word boundary information. We present a
simple way to introduce such information into an RNN-based model and
investigate which type of boundary is the most efficient. We also explore at
which level of the network's architecture such information should be introduced
so as to maximise its performance. Finally, we show that using multiple
boundary types at once in a hierarchical structure, by which low-level segments
are used to recompose high-level segments, is beneficial and yields better
results than using low-level or high-level segments in isolation.
| 2,020 | Computation and Language |
"Notic My Speech" -- Blending Speech Patterns With Multimedia | Speech as a natural signal is composed of three parts - visemes (visual part
of speech), phonemes (spoken part of speech), and language (the imposed
structure). However, video as a medium for the delivery of speech and a
multimedia construct has mostly ignored the cognitive aspects of speech
delivery. For example, video applications like transcoding and compression have so far ignored how speech is delivered and heard. To close the gap between speech understanding and multimedia video applications, in this paper we present initial experiments that model the perception of visual speech and show its use case in video compression. On the other hand, in the visual
speech recognition domain, existing studies have mostly modeled it as a
classification problem, while ignoring the correlations between views,
phonemes, visemes, and speech perception. This results in solutions which are
further away from how human perception works. To bridge this gap, we propose a
view-temporal attention mechanism to model both the view dependence and the
visemic importance in speech recognition and understanding. We conduct
experiments on three public visual speech recognition datasets. The
experimental results show that our proposed method outperformed the existing
work by 4.99% in terms of the viseme error rate. Moreover, we show that there
is a strong correlation between our model's understanding of multi-view speech and human perception. This characteristic benefits downstream applications
such as video compression and streaming where a significant number of less
important frames can be compressed or eliminated while being able to maximally
preserve human speech understanding with good user experience.
| 2,020 | Computation and Language |
To Pretrain or Not to Pretrain: Examining the Benefits of Pretraining on
Resource Rich Tasks | Pretraining NLP models with variants of Masked Language Model (MLM)
objectives has recently led to significant improvements on many tasks. This
paper examines the benefits of pretrained models as a function of the number of
training samples used in the downstream task. On several text classification
tasks, we show that as the number of training examples grows into the millions, the accuracy gap between finetuning a BERT-based model and training a vanilla LSTM from scratch narrows to within 1%. Our findings indicate that MLM-based models
might reach a diminishing return point as the supervised data size increases
significantly.
| 2,020 | Computation and Language |
DynE: Dynamic Ensemble Decoding for Multi-Document Summarization | Sequence-to-sequence (s2s) models are the basis for extensive work in natural
language processing. However, some applications, such as multi-document
summarization, multi-modal machine translation, and the automatic post-editing
of machine translation, require mapping a set of multiple distinct inputs into
a single output sequence. Recent work has introduced bespoke architectures for
these multi-input settings, and developed models which can handle increasingly
longer inputs; however, the performance of special model architectures is
limited by the available in-domain training data. In this work we propose a
simple decoding methodology which ensembles the output of multiple instances of
the same model on different inputs. Our proposed approach allows models trained
for vanilla s2s tasks to be directly used in multi-input settings. This works
particularly well when each of the inputs has significant overlap with the
others, as when compressing a cluster of news articles about the same event
into a single coherent summary, and we obtain state-of-the-art results on
several multi-document summarization datasets.
| 2,020 | Computation and Language |
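A toy greedy decoder illustrating the ensembling idea described above: at each decoding step, the same seq2seq model is run once per input document and the next-token distributions are averaged. The BART checkpoint, the greedy (rather than beam) search, and the toy documents are assumptions, not DynE's exact implementation.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Illustrative single-document summarizer; the ensembling below is what is new.
name = "facebook/bart-large-cnn"
tok = BartTokenizer.from_pretrained(name)
model = BartForConditionalGeneration.from_pretrained(name).eval()

docs = ["First news article about the event ...",
        "Second article covering the same event from another outlet ..."]
encs = [tok(d, return_tensors="pt", truncation=True) for d in docs]

# Greedy dynamic-ensemble decoding: at every step, average the next-token
# distributions obtained by conditioning on each input document separately.
generated = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    for _ in range(60):
        step_probs = []
        for enc in encs:
            out = model(input_ids=enc["input_ids"],
                        attention_mask=enc["attention_mask"],
                        decoder_input_ids=generated)
            step_probs.append(torch.softmax(out.logits[:, -1, :], dim=-1))
        avg = torch.stack(step_probs).mean(dim=0)      # ensemble over inputs
        next_id = avg.argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_id], dim=-1)
        if next_id.item() == tok.eos_token_id:
            break

print(tok.decode(generated[0], skip_special_tokens=True))
```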
Automatic Validation of Textual Attribute Values in E-commerce Catalog
by Learning with Limited Labeled Data | Product catalogs are valuable resources for eCommerce websites. In the
catalog, a product is associated with multiple attributes whose values are
short texts, such as product name, brand, functionality and flavor. Usually
individual retailers self-report these key values, and thus the catalog
information unavoidably contains noisy facts. Although existing deep neural
network models have shown success in conducting cross-checking between two
pieces of text, their success depends on a large set of quality labeled data, which is hard to obtain in this validation task because products span a wide variety of categories. To address these challenges, we propose a
novel meta-learning latent variable approach, called MetaBridge, which can
learn transferable knowledge from a subset of categories with limited labeled
data and capture the uncertainty of never-seen categories with unlabeled data.
More specifically, we make the following contributions. (1) We formalize the
problem of validating the textual attribute values of products from a variety
of categories as a natural language inference task in the few-shot learning
setting, and propose a meta-learning latent variable model to jointly process
the signals obtained from product profiles and textual attribute values. (2) We
propose to integrate meta-learning and latent variables in a unified model to effectively capture the uncertainty of various categories. (3) We propose a novel objective function based on the latent variable model in the few-shot
learning setting, which ensures distribution consistency between unlabeled and
labeled data and prevents overfitting by sampling from the learned
distribution. Extensive experiments on real eCommerce datasets from hundreds of
categories demonstrate the effectiveness of MetaBridge on textual attribute
validation and its outstanding performance compared with state-of-the-art
approaches.
| 2,020 | Computation and Language |
On the use of human reference data for evaluating automatic image
descriptions | Automatic image description systems are commonly trained and evaluated using
crowdsourced, human-generated image descriptions. The best-performing system is
then determined using some measure of similarity to the reference data (BLEU, Meteor, CIDEr, etc.). Thus, both the quality of the systems and the quality of the evaluation depend on the quality of the descriptions. As
Section 2 will show, the quality of current image description datasets is
insufficient. I argue that there is a need for more detailed guidelines that
take into account the needs of visually impaired users, but also the
feasibility of generating suitable descriptions. With high-quality data,
evaluation of image description systems could use reference descriptions, but
we should also look for alternatives.
| 2,020 | Computation and Language |
End-to-End Code Switching Language Models for Automatic Speech
Recognition | In this paper, we work on code-switched text, one of the most common occurrences in bilingual communities across the world. Due to the discrepancies in extracting code-switched text from an Automated Speech Recognition (ASR) module, and thereby extracting monolingual text from the code-switched text, we propose an approach for extracting monolingual text using deep bi-directional language models (LMs) such as BERT and other machine translation models, and we also explore different ways of extracting code-switched text from the ASR model. We also demonstrate the robustness of the model by comparing perplexity and other metrics such as WER against the standard bilingual text output without any external information.
| 2,020 | Computation and Language |
Scalable Cross Lingual Pivots to Model Pronoun Gender for Translation | Machine translation systems with inadequate document understanding can make
errors when translating dropped or neutral pronouns into languages with
gendered pronouns (e.g., English). Predicting the underlying gender of these
pronouns is difficult since it is not marked textually and must instead be
inferred from coreferent mentions in the context. We propose a novel
cross-lingual pivoting technique for automatically producing high-quality
gender labels, and show that this data can be used to fine-tune a BERT
classifier with 92% F1 for Spanish dropped feminine pronouns, compared with
30-51% for neural machine translation models and 54-71% for a non-fine-tuned
BERT model. We augment a neural machine translation model with labels from our
classifier to improve pronoun translation, while still having parallelizable
translation models that translate a sentence at a time.
| 2,020 | Computation and Language |
Causal Knowledge Extraction from Scholarly Papers in Social Sciences | The scale and scope of scholarly articles today are overwhelming human
researchers who seek to timely digest and synthesize knowledge. In this paper,
we seek to develop natural language processing (NLP) models to accelerate the
speed of extraction of relationships from scholarly papers in social sciences,
identify hypotheses from these papers, and extract the cause-and-effect
entities. Specifically, we develop models to 1) classify sentences in scholarly
documents in business and management as hypotheses (hypothesis classification),
2) classify these hypotheses as causal relationships or not (causality
classification), and, if they are causal, 3) extract the cause and effect
entities from these hypotheses (entity extraction). We have achieved high
performance on all three tasks using different modeling techniques. Our
approach may be generalizable to scholarly documents in a wide range of social
sciences, as well as other types of textual materials.
| 2,020 | Computation and Language |
Manipulating emotions for ground truth emotion analysis | Text data are being used as a lens through which human cognition can be
studied at a large scale. Methods like emotion analysis are now in the standard
toolkit of computational social scientists but typically rely on third-person
annotation with unknown validity. As an alternative, this paper introduces
online emotion induction techniques from experimental behavioural research as a
method for text-based emotion analysis. Text data were collected from
participants who were randomly allocated to a happy, neutral or sad condition.
The findings support the mood induction procedure. We then examined how well
lexicon approaches can retrieve the induced emotion. All approaches resulted in
statistical differences between the true emotion conditions. Overall, only up
to one-third of the variance in emotion was captured by text-based
measurements. Pretrained classifiers performed poorly on detecting true
emotions. The paper concludes with limitations and suggestions for future
research.
| 2,020 | Computation and Language |
The SPPD System for Schema Guided Dialogue State Tracking Challenge | This paper introduces our group's work on the Dialog System Technology Challenges 8 (DSTC8): the SPPD system for the Schema Guided dialogue state tracking challenge. This challenge, Track 4 in DSTC8, provides a brand new and
challenging dataset for developing scalable multi-domain dialogue state
tracking algorithms for real world dialogue systems. We propose a zero-shot
dialogue state tracking system for this task. The key components of the system are a number of BERT-based zero-shot NLU models that can effectively capture semantic relations between natural language descriptions of services' schemas and utterances from dialogue turns. We also propose strategies that help the system better exploit information from longer dialogue histories and to
overcome the slot carryover problem for multi-domain dialogues. The
experimental results show that the proposed system achieves a significant
improvement compared with the baseline system.
| 2,020 | Computation and Language |
PERL: Pivot-based Domain Adaptation for Pre-trained Deep Contextualized
Embedding Models | Pivot-based neural representation models have led to significant progress in
domain adaptation for NLP. However, previous works that follow this approach
utilize only labeled data from the source domain and unlabeled data from the
source and target domains, but neglect to incorporate massive unlabeled corpora
that are not necessarily drawn from these domains. To alleviate this, we
propose PERL: A representation learning model that extends contextualized word
embedding models such as BERT with pivot-based fine-tuning. PERL outperforms
strong baselines across 22 sentiment classification domain adaptation setups,
improves in-domain model performance, yields effective reduced-size models and
increases model stability.
| 2,020 | Computation and Language |
How to Probe Sentence Embeddings in Low-Resource Languages: On
Structural Design Choices for Probing Task Evaluation | Sentence encoders map sentences to real valued vectors for use in downstream
applications. To peek into these representations - e.g., to increase
interpretability of their results - probing tasks have been designed which
query them for linguistic knowledge. However, designing probing tasks for
lesser-resourced languages is tricky, because these often lack large-scale
annotated data or (high-quality) dependency parsers as a prerequisite of
probing task design in English. To investigate how to probe sentence embeddings
in such cases, we investigate sensitivity of probing task results to structural
design choices, conducting the first such large scale study. We show that
design choices like size of the annotated probing dataset and type of
classifier used for evaluation do (sometimes substantially) influence probing
outcomes. We then probe embeddings in a multilingual setup with design choices
that lie in a 'stable region', as we identify for English, and find that
results on English do not transfer to other languages. Fairer and more
comprehensive sentence-level probing evaluation should thus be carried out on
multiple languages in the future.
| 2,020 | Computation and Language |
CUHK at SemEval-2020 Task 4: CommonSense Explanation, Reasoning and
Prediction with Multi-task Learning | This paper describes our system submitted to task 4 of SemEval 2020:
Commonsense Validation and Explanation (ComVE), which consists of three sub-tasks. The task is to validate whether a given sentence makes sense and requires the model to explain it. Based on the BERT architecture with a multi-task setting, we propose an effective and interpretable "Explain, Reason and Predict" (ERP) system to solve the three sub-tasks about commonsense: (a) Validation, (b) Reasoning, and (c) Explanation. Inspired by
cognitive studies of common sense, our system first generates a reason or
understanding of the sentences and then chooses which statement makes
sense, which is achieved by multi-task learning. During the post-evaluation,
our system reached 92.9% accuracy in subtask A (rank 11), 89.7% accuracy in subtask B (rank 9), and a BLEU score of 12.9 in subtask C (rank 8).
| 2,020 | Computation and Language |
Results of the seventh edition of the BioASQ Challenge | The results of the seventh edition of the BioASQ challenge are presented in
this paper. The aim of the BioASQ challenge is the promotion of systems and
methodologies through the organization of a challenge on the tasks of
large-scale biomedical semantic indexing and question answering. In total, 30
teams with more than 100 systems participated in the challenge this year. As in
previous years, the best systems were able to outperform the strong baselines.
This suggests that state-of-the-art systems are continuously improving, pushing
the frontier of research.
| 2,019 | Computation and Language |
A Hybrid Natural Language Generation System Integrating Rules and Deep
Learning Algorithms | This paper proposes an enhanced natural language generation system combining
the merits of both rule-based approaches and modern deep learning algorithms,
boosting its performance to the point where the generated textual content exhibits agile human-writing styles and its content logic is highly controllable. We also propose a novel approach, called HMCU, to measure the performance of natural language processing comprehensively and precisely.
| 2,020 | Computation and Language |
FFR v1.1: Fon-French Neural Machine Translation | All over the world and especially in Africa, researchers are putting efforts
into building Neural Machine Translation (NMT) systems to help tackle the
language barriers in Africa, a continent of over 2000 different languages.
However, the low-resourceness, diacritical, and tonal complexities of African
languages are major issues being faced. The FFR project is a major step towards
creating a robust translation model from Fon, a very low-resource and tonal
language, to French, for research and public use. In this paper, we introduce
FFR Dataset, a corpus of Fon-to-French translations, describe the diacritical
encoding process, and introduce our FFR v1.1 model, trained on the dataset. The
dataset and model are made publicly available at https://github.com/bonaventuredossou/ffr-v1 to promote collaboration and reproducibility.
| 2,020 | Computation and Language |
Weakly-supervised Domain Adaption for Aspect Extraction via Multi-level
Interaction Transfer | Fine-grained aspect extraction is an essential sub-task in aspect based
opinion analysis. It aims to identify the aspect terms (a.k.a. opinion targets)
of a product or service in each sentence. However, an expensive annotation process is usually required to acquire sufficient token-level labels for each domain.
To address this limitation, some previous works propose domain adaptation
strategies to transfer knowledge from a sufficiently labeled source domain to
unlabeled target domains. But due to both the difficulty of fine-grained
prediction problems and the large domain gap between domains, the performance
remains unsatisfactory. This work conducts a pioneering study on leveraging sentence-level aspect category labels, which are usually available in commercial services like review sites, to promote token-level transfer for the extraction purpose. Specifically, the aspect category information is used to construct pivot knowledge for transfer, with the assumption that the interactions
between sentence-level aspect category and token-level aspect terms are
invariant across domains. To this end, we propose a novel multi-level
reconstruction mechanism that aligns both the fine-grained and coarse-grained
information in multiple levels of abstractions. Comprehensive experiments
demonstrate that our approach can fully utilize sentence-level aspect category
labels to improve cross-domain aspect extraction with a large performance gain.
| 2,020 | Computation and Language |
Modeling Graph Structure via Relative Position for Text Generation from
Knowledge Graphs | We present Graformer, a novel Transformer-based encoder-decoder architecture
for graph-to-text generation. With our novel graph self-attention, the encoding
of a node relies on all nodes in the input graph - not only direct neighbors -
facilitating the detection of global patterns. We represent the relation
between two nodes as the length of the shortest path between them. Graformer
learns to weight these node-node relations differently for different attention
heads, thus virtually learning differently connected views of the input graph.
We evaluate Graformer on two popular graph-to-text generation benchmarks,
AGENDA and WebNLG, where it achieves strong performance while using many fewer
parameters than other approaches.
| 2,021 | Computation and Language |
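A small sketch of the relative-position computation the abstract describes: a matrix of shortest-path lengths between all node pairs, which would then index learned per-head attention biases. The toy graph, the networkx dependency, and the distance clipping value are illustrative assumptions.

```python
import networkx as nx
import numpy as np

# A toy graph standing in for a knowledge graph's nodes (illustrative only).
g = nx.Graph()
g.add_edges_from([("model", "task"), ("task", "dataset"), ("dataset", "domain")])

nodes = list(g.nodes)
n = len(nodes)
MAX_DIST = 4  # distances are clipped; each value indexes a learned embedding

# R[i, j] = shortest-path length between node i and node j (MAX_DIST if
# unreachable), used by the encoder as a node-node relation label.
R = np.full((n, n), MAX_DIST, dtype=np.int64)
lengths = dict(nx.all_pairs_shortest_path_length(g))
for i, u in enumerate(nodes):
    for j, v in enumerate(nodes):
        if v in lengths[u]:
            R[i, j] = min(lengths[u][v], MAX_DIST)

print(nodes)
print(R)
# In the model, each attention head h would add a learned bias that depends on
# R[i, j] to its attention logits, so different heads can weight near and far
# nodes differently.
```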
Communicative need modulates competition in language change | All living languages change over time. The causes for this are many, one
being the emergence and borrowing of new linguistic elements. Competition
between the new elements and older ones with a similar semantic or grammatical
function may lead to speakers preferring one of them, and leaving the other to
go out of use. We introduce a general method for quantifying competition
between linguistic elements in diachronic corpora which does not require
language-specific resources other than a sufficiently large corpus. This
approach is readily applicable to a wide range of languages and linguistic
subsystems. Here, we apply it to lexical data in five corpora differing in
language, type, genre, and time span. We find that changes in communicative
need are consistently predictive of lexical competition dynamics.
Near-synonymous words are more likely to directly compete if they belong to a
topic of conversation whose importance to language users is constant over time,
possibly leading to the extinction of one of the competing words. By contrast,
in topics which are increasing in importance for language users,
near-synonymous words tend not to compete directly and can coexist. This
suggests that, in addition to direct competition between words, language change
can be driven by competition between topics or semantic subspaces.
| 2,020 | Computation and Language |
Cross-Cultural Similarity Features for Cross-Lingual Transfer Learning
of Pragmatically Motivated Tasks | Much work in cross-lingual transfer learning explored how to select better
transfer languages for multilingual tasks, primarily focusing on typological
and genealogical similarities between languages. We hypothesize that these
measures of linguistic proximity are not enough when working with
pragmatically-motivated tasks, such as sentiment analysis. As an alternative,
we introduce three linguistic features that capture cross-cultural similarities
that manifest in linguistic patterns and quantify distinct aspects of language
pragmatics: language context-level, figurative language, and the lexification
of emotion concepts. Our analyses show that the proposed pragmatic features do
capture cross-cultural similarities and align well with existing work in
sociolinguistics and linguistic anthropology. We further corroborate the
effectiveness of pragmatically-driven transfer in the downstream task of
choosing transfer languages for cross-lingual sentiment analysis.
| 2,021 | Computation and Language |
The Role of Verb Semantics in Hungarian Verb-Object Order | Hungarian is often referred to as a discourse-configurational language, since
the structural position of constituents is determined by their logical function
(topic or comment) rather than their grammatical function (e.g., subject or
object). We build on work by Komlósy (1989) and argue that in addition to
discourse context, the lexical semantics of the verb also plays a significant
role in determining Hungarian word order. In order to investigate the role of
lexical semantics in determining Hungarian word order, we conduct a
large-scale, data-driven analysis on the ordering of 380 transitive verbs and
their objects, as observed in hundreds of thousands of examples extracted from
the Hungarian Gigaword Corpus. We test the effect of lexical semantics on the
ordering of verbs and their objects by grouping verbs into 11 semantic classes.
In addition to the semantic class of the verb, we also include two control
features related to information structure, object definiteness and object NP
weight, chosen to allow a comparison of their effect size to that of verb
semantics. Our results suggest that all three features have a significant
effect on verb-object ordering in Hungarian and among these features, the
semantic class of the verb has the largest effect. Specifically, we find that
stative verbs, such as fed "cover", jelent "mean" and övez "surround", tend to be OV-preferring (with the exception of psych verbs, which are strongly VO-preferring), while non-stative verbs, such as bírál "judge", csökkent "reduce" and csókol "kiss", tend to be VO-preferring. These findings
support our hypothesis that lexical semantic factors influence word order in
Hungarian.
| 2,020 | Computation and Language |
Selective Question Answering under Domain Shift | To avoid giving wrong answers, question answering (QA) models need to know
when to abstain from answering. Moreover, users often ask questions that
diverge from the model's training data, making errors more likely and thus
abstention more critical. In this work, we propose the setting of selective
question answering under domain shift, in which a QA model is tested on a
mixture of in-domain and out-of-domain data, and must answer (i.e., not abstain
on) as many questions as possible while maintaining high accuracy. Abstention
policies based solely on the model's softmax probabilities fare poorly, since
models are overconfident on out-of-domain inputs. Instead, we train a
calibrator to identify inputs on which the QA model errs, and abstain when it
predicts an error is likely. Crucially, the calibrator benefits from observing
the model's behavior on out-of-domain data, even if from a different domain
than the test data. We combine this method with a SQuAD-trained QA model and
evaluate on mixtures of SQuAD and five other QA datasets. Our method answers
56% of questions while maintaining 80% accuracy; in contrast, directly using
the model's probabilities only answers 48% at 80% accuracy.
| 2,020 | Computation and Language |
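A minimal sketch of the abstention mechanism described above: a separate calibrator is trained to predict whether the QA model's answer is correct, and the system abstains when the predicted probability of correctness falls below a threshold. The synthetic features/labels and the gradient-boosting classifier are stand-ins; the paper's calibrator and feature set may differ.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for calibration data: simple features of QA predictions
# (softmax probability, answer length, context length) plus whether the
# prediction was correct. In the paper, the calibrator observes the QA model's
# behaviour on held-out in-domain and other-domain data.
rng = np.random.default_rng(0)
n = 2000
softmax_prob = rng.uniform(0, 1, n)
answer_len = rng.integers(1, 20, n)
context_len = rng.integers(50, 500, n)
X = np.column_stack([softmax_prob, answer_len, context_len])
y = (softmax_prob + 0.1 * rng.normal(size=n) > 0.5).astype(int)  # toy labels

calibrator = GradientBoostingClassifier().fit(X, y)

def answer_or_abstain(features: np.ndarray, threshold: float = 0.7) -> bool:
    """Answer only if the calibrator predicts the QA model is likely correct."""
    p_correct = calibrator.predict_proba(features.reshape(1, -1))[0, 1]
    return p_correct >= threshold

print(answer_or_abstain(np.array([0.9, 3, 120])))   # confident -> answer (True)
print(answer_or_abstain(np.array([0.3, 15, 400])))  # risky -> abstain (False)
```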
EPIE Dataset: A Corpus For Possible Idiomatic Expressions | Idiomatic expressions have always been a bottleneck for language
comprehension and natural language understanding, specifically for tasks like
Machine Translation (MT). MT systems predominantly produce literal translations
of idiomatic expressions as they do not exhibit generic and linguistically
deterministic patterns which can be exploited for comprehension of the
non-compositional meaning of the expressions. These expressions occur in
parallel corpora used for training, but due to the comparatively high
occurrences of the constituent words of idiomatic expressions in literal
context, the idiomatic meaning gets overpowered by the compositional meaning of
the expression. State-of-the-art metaphor detection systems are able to detect
non-compositional usage at word level but miss out on idiosyncratic phrasal
idiomatic expressions. This creates a dire need for a dataset with a wider
coverage and higher occurrence of commonly occurring idiomatic expressions, the
spans of which can be used for Metaphor Detection. With this in mind, we
present our English Possible Idiomatic Expressions (EPIE) corpus containing 25,206 sentences labelled with lexical instances of 717 idiomatic expressions.
These spans also cover literal usages for the given set of idiomatic
expressions. We also present the utility of our dataset by using it to train a
sequence labelling module and testing on three independent datasets with high
accuracy, precision and recall scores.
| 2,020 | Computation and Language |
Cross-lingual Retrieval for Iterative Self-Supervised Training | Recent studies have demonstrated the cross-lingual alignment ability of
multilingual pretrained language models. In this work, we found that the
cross-lingual alignment can be further improved by training seq2seq models on
sentence pairs mined using their own encoder outputs. We utilized these
findings to develop a new approach -- cross-lingual retrieval for iterative
self-supervised training (CRISS), where mining and training processes are
applied iteratively, improving cross-lingual alignment and translation ability
at the same time. Using this method, we achieved state-of-the-art unsupervised
machine translation results on 9 language directions with an average
improvement of 2.4 BLEU, and on the Tatoeba sentence retrieval task in the
XTREME benchmark on 16 languages with an average improvement of 21.5% in
absolute accuracy. Furthermore, CRISS also brings an additional 1.8 BLEU
improvement on average compared to mBART, when finetuned on supervised machine
translation downstream tasks.
| 2,020 | Computation and Language |
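A toy illustration of the mining step described above: treat mean-pooled encoder outputs as sentence embeddings and keep mutual nearest neighbours across two monolingual corpora as candidate translation pairs. The random embeddings, the plain cosine threshold, and the function names are assumptions; CRISS uses a margin-based score and iterates mining with retraining.

```python
import numpy as np

def mine_pairs(src_emb: np.ndarray, tgt_emb: np.ndarray, threshold: float = 0.5):
    """Keep mutual nearest neighbours in embedding space as candidate pairs
    (CRISS additionally uses a margin-based score rather than raw cosine)."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sim = src @ tgt.T
    best_tgt = sim.argmax(axis=1)    # nearest target for every source sentence
    best_src = sim.argmax(axis=0)    # nearest source for every target sentence
    pairs = []
    for i, j in enumerate(best_tgt):
        if best_src[j] == i and sim[i, j] >= threshold:  # mutual NN + threshold
            pairs.append((i, int(j), float(sim[i, j])))
    return pairs

# Toy vectors standing in for mean-pooled encoder outputs of two monolingual
# corpora; the real pipeline encodes with the seq2seq model itself, mines
# pairs, retrains on them, and iterates.
rng = np.random.default_rng(1)
src_emb = rng.normal(size=(5, 16))
tgt_emb = src_emb + 0.05 * rng.normal(size=(5, 16))   # near-parallel sentences
print(mine_pairs(src_emb, tgt_emb))
```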
Modeling Subjective Assessments of Guilt in Newspaper Crime Narratives | Crime reporting is a prevalent form of journalism with the power to shape
public perceptions and social policies. How does the language of these reports
act on readers? We seek to address this question with the SuspectGuilt Corpus
of annotated crime stories from English-language newspapers in the U.S. For
SuspectGuilt, annotators read short crime articles and provided text-level
ratings concerning the guilt of the main suspect as well as span-level
annotations indicating which parts of the story they felt most influenced their
ratings. SuspectGuilt thus provides a rich picture of how linguistic choices
affect subjective guilt judgments. In addition, we use SuspectGuilt to train
and assess predictive models, and show that these models benefit from genre
pretraining and joint supervision from the text-level ratings and span-level
annotations. Such models might be used as tools for understanding the societal
effects of crime reporting.
| 2,020 | Computation and Language |
Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural
Network | Noun phrases and relational phrases in Open Knowledge Bases are often not
canonical, leading to redundant and ambiguous facts. In this work, we integrate
structural information (from which tuple, which sentence) and semantic
information (semantic similarity) to do the canonicalization. We represent the
two types of information as a multi-layered graph: the structural information
forms the links across the sentence, relational phrase, and noun phrase layers;
the semantic information forms weighted intra-layer links for each layer. We
propose a graph neural network model to aggregate the representations of noun
phrases and relational phrases through the multi-layered meta-graph structure.
Experiments show that our model outperforms existing approaches on public datasets in the general domain.
| 2,020 | Computation and Language |
Building Low-Resource NER Models Using Non-Speaker Annotation | In low-resource natural language processing (NLP), the key problems are a
lack of target language training data, and a lack of native speakers to create
it. Cross-lingual methods have had notable success in addressing these
concerns, but in certain common circumstances, such as insufficient
pre-training corpora or languages far from the source language, their
performance suffers. In this work we propose a complementary approach to
building low-resource Named Entity Recognition (NER) models using
"non-speaker" (NS) annotations, provided by annotators with no prior
experience in the target language. We recruit 30 participants in a carefully
controlled annotation experiment with Indonesian, Russian, and Hindi. We show
that use of NS annotators produces results that are consistently on par or
better than cross-lingual methods built on modern contextual representations,
and have the potential to outperform with additional effort. We conclude with
observations of common annotation patterns and recommended implementation
practices, and motivate how NS annotations can be used in addition to prior
methods for improved performance. For more details, see http://cogcomp.org/page/publication_view/941
| 2,021 | Computation and Language |
Iterative Edit-Based Unsupervised Sentence Simplification | We present a novel iterative, edit-based approach to unsupervised sentence
simplification. Our model is guided by a scoring function involving fluency,
simplicity, and meaning preservation. Then, we iteratively perform word and
phrase-level edits on the complex sentence. Compared with previous approaches,
our model does not require a parallel training set, but is more controllable
and interpretable. Experiments on Newsela and WikiLarge datasets show that our
approach is nearly as effective as state-of-the-art supervised approaches.
| 2,020 | Computation and Language |
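A deliberately simple sketch of the iterative edit loop described above: propose local edits (here only word deletions), score each candidate with a fluency/simplicity/meaning-style objective, and keep improvements. The toy scoring function and hill-climbing schedule are assumptions; the paper's edits and scorer are richer.

```python
import random

def score(tokens, content_words):
    """Toy objective standing in for the paper's fluency/simplicity/meaning
    scorer: keep important content words, prefer shorter sentences."""
    meaning = sum(w in tokens for w in content_words) / len(content_words)
    simplicity = 1.0 / (1.0 + len(tokens))
    return meaning + simplicity

def simplify(sentence, content_words, n_iters=100, seed=0):
    """Greedy hill climbing over word-deletion edits; the paper's system also
    performs reordering and lexical-substitution edits."""
    rng = random.Random(seed)
    tokens = sentence.split()
    best = score(tokens, content_words)
    for _ in range(n_iters):
        if len(tokens) <= 1:
            break
        i = rng.randrange(len(tokens))
        candidate = tokens[:i] + tokens[i + 1:]     # propose deleting one word
        cand_score = score(candidate, content_words)
        if cand_score > best:                       # accept only improvements
            tokens, best = candidate, cand_score
    return " ".join(tokens)

sent = "The committee has decided , after much deliberation , to approve the new policy"
print(simplify(sent, content_words={"committee", "approve", "policy"}))
```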
Exploiting Review Neighbors for Contextualized Helpfulness Prediction | Helpfulness prediction techniques have been widely used to identify and
recommend high-quality online reviews to customers. Currently, the vast
majority of studies assume that a review's helpfulness is self-contained. In
practice, however, customers hardly process reviews independently given the
sequential nature. The perceived helpfulness of a review is likely to be
affected by its sequential neighbors (i.e., context), which has been largely
ignored. This paper proposes a new methodology to capture the missing
interaction between reviews and their neighbors. The first end-to-end neural
architecture is developed for neighbor-aware helpfulness prediction (NAP). For
each review, NAP allows for three types of neighbor selection: its preceding,
following, and surrounding neighbors. Four weighting schemes are designed to
learn context clues from the selected neighbors. A review is then
contextualized into the learned clues for neighbor-aware helpfulness
prediction. NAP is evaluated on six domains of real-world online reviews
against a series of state-of-the-art baselines. Extensive experiments confirm
the effectiveness of NAP and the influence of sequential neighbors on the current review. Further hyperparameter analysis reveals three main findings. (1) On
average, eight neighbors treated with uneven importance are engaged for context
construction. (2) The benefit of neighbor-aware prediction mainly results from
closer neighbors. (3) Equally considering up to five closest neighbors of a
review can usually produce a weaker but tolerable prediction result.
| 2,020 | Computation and Language |
Automatically Ranked Russian Paraphrase Corpus for Text Generation | The article is focused on automatic development and ranking of a large corpus
for Russian paraphrase generation, the first corpus of its kind in Russian computational linguistics. Existing manually annotated paraphrase datasets for Russian are limited to the small-sized ParaPhraser corpus and ParaPlag, which are suitable for a set of NLP tasks, such as paraphrase and
plagiarism detection, sentence similarity and relatedness estimation, etc. Due
to size restrictions, these datasets can hardly be applied in end-to-end text
generation solutions. Meanwhile, paraphrase generation requires a large amount
of training data. In our study we propose a solution to the problem: we
collect, rank and evaluate a new publicly available headline paraphrase corpus
(ParaPhraser Plus), and then perform text generation experiments with manual
evaluation on automatically ranked corpora using the Universal Transformer
architecture.
| 2,020 | Computation and Language |
A Tweet-based Dataset for Company-Level Stock Return Prediction | Public opinion influences events, especially those related to stock market movement, where a subtle hint can influence the local outcome of the market.
In this paper, we present a dataset that allows for company-level analysis of
tweet based impact on one-, two-, three-, and seven-day stock returns. Our
dataset consists of 862,231 labelled instances from Twitter in English; we also release a cleaned subset of 85,176 labelled instances to the community.
We also provide baselines using standard machine learning algorithms and a
multi-view learning based approach that makes use of different types of
features. Our dataset, scripts and models are publicly available at:
https://github.com/ImperialNLP/stockreturnpred.
| 2,020 | Computation and Language |
Improving unsupervised neural aspect extraction for online discussions
using out-of-domain classification | Deep learning architectures based on self-attention have recently achieved
and surpassed state of the art results in the task of unsupervised aspect
extraction and topic modeling. While models such as neural attention-based
aspect extraction (ABAE) have been successfully applied to user-generated
texts, they are less coherent when applied to traditional data sources such as
news articles and newsgroup documents. In this work, we introduce a simple
approach based on sentence filtering in order to improve topical aspects
learned from newsgroups-based content without modifying the basic mechanism of
ABAE. We train a probabilistic classifier to distinguish between out-of-domain
texts (outer dataset) and in-domain texts (target dataset). Then, during data
preparation we filter out sentences that have a low probability of being
in-domain and train the neural model on the remaining sentences. The positive
effect of sentence filtering on topic coherence is demonstrated in comparison
to aspect extraction models trained on unfiltered texts.
| 2,020 | Computation and Language |
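A small sketch of the filtering step described above: train a probabilistic in-domain vs. out-of-domain classifier and keep only sentences with a sufficiently high in-domain probability before training the aspect model. The TF-IDF plus logistic regression classifier and the tiny example corpora are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpora: "outer" is out-of-domain text, "target" is the
# in-domain (newsgroup-style) corpus whose aspects we want to model.
outer = ["buy cheap tickets now", "click here for a great deal"]
target = ["the graphics card driver crashes on boot",
          "which kernel version fixes this bug"]

vec = TfidfVectorizer()
X = vec.fit_transform(outer + target)
y = [0] * len(outer) + [1] * len(target)            # 1 = in-domain
clf = LogisticRegression().fit(X, y)

def keep(sentence: str, min_prob: float = 0.5) -> bool:
    """Keep a sentence for aspect-model training only if it looks in-domain."""
    return clf.predict_proba(vec.transform([sentence]))[0, 1] >= min_prob

candidates = ["my driver also crashes after the update",
              "amazing deal, click the link below"]
print([s for s in candidates if keep(s)])   # sentences passed on to ABAE
```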
An Exploratory Study of Argumentative Writing by Young Students: A
Transformer-based Approach | We present a computational exploration of argument critique writing by young
students. Middle school students were asked to criticize an argument presented
in the prompt, focusing on identifying and explaining the reasoning flaws. This
task resembles an established college-level argument critique task. Lexical and
discourse features that utilize detailed domain knowledge to identify critiques
exist for the college task but do not perform well on the young students' data. Instead, a transformer-based architecture (e.g., BERT) fine-tuned on a large
corpus of critique essays from the college task performs much better (over 20%
improvement in F1 score). Analysis of the performance of various configurations
of the system suggests that while children's writing does not exhibit the
standard discourse structure of an argumentative essay, it does share basic
local sequential structures with the more mature writers.
| 2,020 | Computation and Language |
Fine-grained Sentiment Controlled Text Generation | Controlled text generation techniques aim to regulate specific attributes
(e.g. sentiment) while preserving the attribute independent content. The
state-of-the-art approaches model the specified attribute as a structured or
discrete representation while making the content representation independent of
it to achieve a better control. However, disentangling the text representation
into separate latent spaces overlooks complex dependencies between content and
attribute, leading to generation of poorly constructed and not so meaningful
sentences. Moreover, such an approach fails to provide a finer control on the
degree of attribute change. To address these problems of controlled text
generation, in this paper, we propose DE-VAE, a hierarchical framework which
captures both information enriched entangled representation and attribute
specific disentangled representation in different hierarchies. DE-VAE achieves
better control of sentiment as an attribute while preserving the content by
learning a suitable lossless transformation network from the disentangled
sentiment space to the desired entangled representation. Through feature
supervision on a single dimension of the disentangled representation, DE-VAE
maps the variation of sentiment to a continuous space which helps in smoothly
regulating sentiment from positive to negative and vice versa. Detailed
experiments on three publicly available review datasets show the superiority of
DE-VAE over recent state-of-the-art approaches.
| 2,020 | Computation and Language |
On the Learnability of Concepts: With Applications to Comparing Word
Embedding Algorithms | Word Embeddings are used widely in multiple Natural Language Processing (NLP)
applications. They are coordinates associated with each word in a dictionary,
inferred from statistical properties of these words in a large corpus. In this
paper we introduce the notion of "concept" as a list of words that have shared
semantic content. We use this notion to analyse the learnability of certain
concepts, defined as the capability of a classifier to recognise unseen members
of a concept after training on a random subset of it. We first use this method
to measure the learnability of concepts on pretrained word embeddings. We then
develop a statistical analysis of concept learnability, based on hypothesis
testing and ROC curves, in order to compare the relative merits of various
embedding algorithms using a fixed corpus and hyperparameters. We find that
all embedding methods capture the semantic content of those word lists, but
fastText performs better than the others.
| 2,020 | Computation and Language |
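A compact sketch of the learnability measurement described above: train a classifier on a random subset of a concept's words and score, via ROC AUC, how well it recognises held-out members against non-members. The random toy embeddings and the logistic-regression probe are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def concept_learnability(emb, concept_words, other_words, train_frac=0.5, seed=0):
    """Train on a random half of a concept's words and measure how well the
    classifier recognises the held-out members against non-members."""
    rng = np.random.default_rng(seed)
    concept, other = list(concept_words), list(other_words)
    rng.shuffle(concept)
    rng.shuffle(other)
    k = int(len(concept) * train_frac)
    train_pos, test_pos = concept[:k], concept[k:]
    train_neg, test_neg = other[:k], other[k:k + len(test_pos)]

    X_train = np.stack([emb[w] for w in train_pos + train_neg])
    y_train = [1] * len(train_pos) + [0] * len(train_neg)
    X_test = np.stack([emb[w] for w in test_pos + test_neg])
    y_test = [1] * len(test_pos) + [0] * len(test_neg)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

# Toy random "embeddings"; in practice these come from word2vec, GloVe, fastText, etc.
rng = np.random.default_rng(0)
words = ["cat", "dog", "horse", "cow", "sheep", "table", "car", "idea", "rain", "song"]
emb = {w: rng.normal(size=50) for w in words}
print(concept_learnability(emb, words[:5], words[5:]))
```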
Extensively Matching for Few-shot Learning Event Detection | Current event detection models under supervised learning settings fail to transfer to new event types. Few-shot learning has not been explored in event detection even though it allows a model to perform well with high generalization on new event types. In this work, we formulate event detection as a few-shot learning problem to enable extending event detection to new event types. We propose two novel loss factors that match examples in the support set to provide more training signals to the model. Moreover, these training signals can be applied in many metric-based few-shot learning models. Our extensive experiments on the ACE-2005 dataset (under a few-shot learning setting) show that the proposed method can improve the performance of few-shot learning.
| 2,020 | Computation and Language |
Is this Dialogue Coherent? Learning from Dialogue Acts and Entities | In this work, we investigate the human perception of coherence in open-domain
dialogues. In particular, we address the problem of annotating and modeling the
coherence of next-turn candidates while considering the entire history of the
dialogue. First, we create the Switchboard Coherence (SWBD-Coh) corpus, a
dataset of human-human spoken dialogues annotated with turn coherence ratings,
where next-turn candidate utterances ratings are provided considering the full
dialogue context. Our statistical analysis of the corpus indicates how turn
coherence perception is affected by patterns of distribution of entities
previously introduced and the Dialogue Acts used. Second, we experiment with
different architectures to model entities, Dialogue Acts and their combination
and evaluate their performance in predicting human coherence ratings on
SWBD-Coh. We find that models combining both DA and entity information yield
the best performances both for response selection and turn coherence rating.
| 2,020 | Computation and Language |
Political Advertising Dataset: the use case of the Polish 2020
Presidential Elections | Political campaigns are full of political ads posted by candidates on social
media. Political advertisements constitute a basic form of campaigning,
subjected to various social requirements. We present the first publicly open
dataset for detecting specific text chunks and categories of political
advertising in the Polish language. It contains 1,705 human-annotated tweets
tagged with nine categories, which constitute campaigning under Polish
electoral law. We achieved a 0.65 inter-annotator agreement (Cohen's kappa
score). An additional annotator resolved the mismatches between the first two annotators, improving the consistency and complexity of the annotation process. We used the newly created dataset to train a well-established neural tagger (achieving a 70% F1 score). We also present a possible direction
of use cases for such datasets and models with an initial analysis of the
Polish 2020 Presidential Elections on Twitter.
| 2,020 | Computation and Language |
SEAL: Segment-wise Extractive-Abstractive Long-form Text Summarization | Most prior work in the sequence-to-sequence paradigm focused on datasets with
input sequence lengths in the hundreds of tokens due to the computational
constraints of common RNN and Transformer architectures. In this paper, we
study long-form abstractive text summarization, a sequence-to-sequence setting
with input sequence lengths up to 100,000 tokens and output sequence lengths up
to 768 tokens. We propose SEAL, a Transformer-based model, featuring a new
encoder-decoder attention that dynamically extracts/selects input snippets to
sparsely attend to for each output segment. Using only the original documents
and summaries, we derive proxy labels that provide weak supervision for
extractive layers simultaneously with regular supervision from abstractive
summaries. The SEAL model achieves state-of-the-art results on existing
long-form summarization tasks, and outperforms strong baseline models on a new
dataset/task we introduce, Search2Wiki, with much longer input text. Since
content selection is explicit in the SEAL model, a desirable side effect is
that the selection can be inspected for enhanced interpretability.
| 2,020 | Computation and Language |
STEAM: Self-Supervised Taxonomy Expansion with Mini-Paths | Taxonomies are important knowledge ontologies that underpin numerous
applications on a daily basis, but many taxonomies used in practice suffer from
the low coverage issue. We study the taxonomy expansion problem, which aims to
expand existing taxonomies with new concept terms. We propose a self-supervised
taxonomy expansion model named STEAM, which leverages natural supervision in
the existing taxonomy for expansion. To generate natural self-supervision
signals, STEAM samples mini-paths from the existing taxonomy, and formulates a
node attachment prediction task between anchor mini-paths and query terms. To
solve the node attachment task, it learns feature representations for
query-anchor pairs from multiple views and performs multi-view co-training for
prediction. Extensive experiments show that STEAM outperforms state-of-the-art
methods for taxonomy expansion by 11.6% in accuracy and 7.0% in mean reciprocal rank on three public benchmarks. The implementation of STEAM can be found at https://github.com/yueyu1030/STEAM.
| 2,020 | Computation and Language |
Multi-branch Attentive Transformer | While the multi-branch architecture is one of the key ingredients to the
success of computer vision tasks, it has not been well investigated in natural
language processing, especially sequence learning tasks. In this work, we
propose a simple yet effective variant of Transformer called multi-branch
attentive Transformer (briefly, MAT), where the attention layer is the average
of multiple branches and each branch is an independent multi-head attention
layer. We leverage two training techniques to regularize the training:
drop-branch, which randomly drops individual branches during training, and
proximal initialization, which uses a pre-trained Transformer model to
initialize multiple branches. Experiments on machine translation, code
generation and natural language understanding demonstrate that such a simple
variant of Transformer brings significant improvements. Our code is available at https://github.com/HA-Transformer.
| 2,020 | Computation and Language |
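A minimal PyTorch sketch of the layer described above: several independent multi-head attention branches whose outputs are averaged, with branches randomly dropped during training. Dimensions, branch count, and drop probability are arbitrary, and the paper's rescaling of surviving branches and proximal initialization are omitted.

```python
import torch
import torch.nn as nn

class MultiBranchAttention(nn.Module):
    """Average of several independent multi-head attention branches, with
    drop-branch regularisation during training."""

    def __init__(self, d_model=64, n_heads=4, n_branches=3, p_drop_branch=0.3):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            for _ in range(n_branches))
        self.p_drop_branch = p_drop_branch

    def forward(self, x):
        outputs = []
        for branch in self.branches:
            if self.training and torch.rand(()) < self.p_drop_branch:
                continue                               # randomly drop this branch
            out, _ = branch(x, x, x)                   # self-attention
            outputs.append(out)
        if not outputs:                                # keep at least one branch
            out, _ = self.branches[0](x, x, x)
            outputs.append(out)
        return torch.stack(outputs).mean(dim=0)        # average surviving branches

x = torch.randn(2, 10, 64)                             # (batch, seq_len, d_model)
print(MultiBranchAttention()(x).shape)                 # torch.Size([2, 10, 64])
```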
Octet: Online Catalog Taxonomy Enrichment with Self-Supervision | Taxonomies have found wide applications in various domains, especially online
for item categorization, browsing, and search. Despite the prevalent use of
online catalog taxonomies, most of them in practice are maintained by humans,
which is labor-intensive and difficult to scale. While taxonomy construction
from scratch is considerably studied in the literature, how to effectively
enrich existing incomplete taxonomies remains an open yet important research
question. Taxonomy enrichment not only requires the robustness to deal with
emerging terms but also the consistency between existing taxonomy structure and
new term attachment. In this paper, we present a self-supervised end-to-end
framework, Octet, for Online Catalog Taxonomy EnrichmenT. Octet leverages
heterogeneous information unique to online catalog taxonomies such as user
queries, items, and their relations to the taxonomy nodes while requiring no
other supervision than the existing taxonomies. We propose to distantly train a
sequence labeling model for term extraction and employ graph neural networks
(GNNs) to capture the taxonomy structure as well as the query-item-taxonomy
interactions for term attachment. Extensive experiments in different online
domains demonstrate the superiority of Octet over state-of-the-art methods via
both automatic and human evaluations. Notably, Octet enriches an online catalog taxonomy in production to twice its original size in the open-world evaluation.
| 2,020 | Computation and Language |
Automatic Speech Recognition Benchmark for Air-Traffic Communications | Advances in Automatic Speech Recognition (ASR) over the last decade opened
new areas of speech-based automation such as in Air-Traffic Control (ATC)
environment. Currently, voice communication and data link communications are the only ways of contact between pilots and Air-Traffic Controllers (ATCo); the former is the most widely used, while the latter is a non-spoken method mandatory for oceanic messages and limited to some domestic issues. ASR systems in ATC environments inherit increasing complexity due to accents from
non-English speakers, cockpit noise, speaker-dependent biases, and small
in-domain ATC databases for training. Here, we introduce the CleanSky EC-H2020
ATCO2, a project that aims to develop an ASR-based platform to collect,
organize and automatically pre-process ATCo speech-data from air space. This
paper conveys an exploratory benchmark of several state-of-the-art ASR models
trained on more than 170 hours of ATCo speech-data. We demonstrate that the
cross-accent flaws due to speakers' accents are minimized due to the amount of
data, making the system feasible for ATC environments. The developed ASR system
achieves an averaged word error rate (WER) of 7.75% across four databases. An
additional 35% relative improvement in WER is achieved on one test set when
training a TDNNF system with byte-pair encoding.
| 2,020 | Computation and Language |
Extraction and Evaluation of Formulaic Expressions Used in Scholarly
Papers | Formulaic expressions, such as 'in this paper we propose', are helpful for
authors of scholarly papers because they convey communicative functions; in the example above, the function is 'showing the aim of this paper'. Thus, resources of formulaic
expressions, such as a dictionary, that could be looked up easily would be
useful. However, forms of formulaic expressions can often vary to a great
extent. For example, 'in this paper we propose', 'in this study we propose' and
'in this paper we propose a new method to' are all regarded as formulaic
expressions. Such a diversity of spans and forms causes problems in both
extraction and evaluation of formulaic expressions. In this paper, we propose a
new approach that is robust to variation of spans and forms of formulaic
expressions. Our approach regards a sentence as consisting of a formulaic part
and non-formulaic part. Then, instead of trying to extract formulaic
expressions from a whole corpus, by extracting them from each sentence,
different forms can be dealt with at once. Based on this formulation, to avoid
the diversity problem, we propose evaluating extraction methods by how much
they convey specific communicative functions rather than by comparing extracted
expressions to an existing lexicon. We also propose a new extraction method
that utilises named entities and dependency structures to remove the
non-formulaic part from a sentence. Experimental results show that the proposed
extraction method achieved the best performance compared to other existing
methods.
| 2,020 | Computation and Language |
Deep Encoder, Shallow Decoder: Reevaluating Non-autoregressive Machine
Translation | Much recent effort has been invested in non-autoregressive neural machine
translation, which appears to be an efficient alternative to state-of-the-art
autoregressive machine translation on modern GPUs. In contrast to the latter,
where generation is sequential, the former allows generation to be parallelized
across target token positions. Some of the latest non-autoregressive models
have achieved impressive translation quality-speed tradeoffs compared to
autoregressive baselines. In this work, we reexamine this tradeoff and argue
that autoregressive baselines can be substantially sped up without loss in
accuracy. Specifically, we study autoregressive models with encoders and
decoders of varied depths. Our extensive experiments show that given a
sufficiently deep encoder, a single-layer autoregressive decoder can
substantially outperform strong non-autoregressive models with comparable
inference speed. We show that the speed disadvantage for autoregressive
baselines compared to non-autoregressive methods has been overestimated in
three aspects: suboptimal layer allocation, insufficient speed measurement, and
lack of knowledge distillation. Our results establish a new protocol for future
research toward fast, accurate machine translation. Our code is available at
https://github.com/jungokasai/deep-shallow.
| 2,021 | Computation and Language |
Are Pretrained Language Models Symbolic Reasoners Over Knowledge? | How can pretrained language models (PLMs) learn factual knowledge from the
training set? We investigate the two most important mechanisms: reasoning and
memorization. Prior work has attempted to quantify the number of facts PLMs
learn, but we present, using synthetic data, the first study that investigates
the causal relation between facts present in training and facts learned by the
PLM. For reasoning, we show that PLMs seem to learn to apply some symbolic
reasoning rules correctly but struggle with others, including two-hop
reasoning. Further analysis suggests that even the application of learned
reasoning rules is flawed. For memorization, we identify schema conformity
(facts systematically supported by other facts) and frequency as key factors
for its success.
| 2,020 | Computation and Language |
Explainable and Discourse Topic-aware Neural Language Understanding | Marrying topic models and language models exposes language understanding to a
broader source of document-level context beyond sentences via topics. While
introducing topical semantics in language models, existing approaches
incorporate latent document topic proportions and ignore topical discourse in
sentences of the document. This work extends the line of research by
additionally introducing an explainable topic representation in language
understanding, obtained from a set of key terms corresponding to each latent
topic of the proportion. Moreover, we retain sentence-topic associations along
with document-topic association by modeling topical discourse for every
sentence in the document. We present a novel neural composite language model
that exploits both the latent and explainable topics along with topical
discourse at sentence-level in a joint learning framework of topic and language
models. Experiments over a range of tasks such as language modeling, word sense
disambiguation, document classification, retrieval and text generation
demonstrate the ability of the proposed model to improve language understanding.
| 2,023 | Computation and Language |
AMALGUM -- A Free, Balanced, Multilayer English Web Corpus | We present a freely available, genre-balanced English web corpus totaling 4M
tokens and featuring a large number of high-quality automatic annotation
layers, including dependency trees, non-named entity annotations, coreference
resolution, and discourse trees in Rhetorical Structure Theory. By tapping open
online data sources the corpus is meant to offer a more sizable alternative to
smaller manually created annotated data sets, while avoiding pitfalls such as
imbalanced or unknown composition, licensing problems, and low-quality natural
language processing. We harness knowledge from multiple annotation layers in
order to achieve a "better than NLP" benchmark and evaluate the accuracy of the
resulting resource.
| 2,020 | Computation and Language |
Neural Topic Modeling with Continual Lifelong Learning | Lifelong learning has recently attracted attention in building machine
learning systems that continually accumulate and transfer knowledge to help
future learning. Unsupervised topic modeling has been popularly used to
discover topics from document collections. However, the application of topic
modeling is challenging due to data sparsity, e.g., in a small collection of
(short) documents, and it thus generates incoherent topics and sub-optimal
document representations. To address the problem, we propose a lifelong learning
framework for neural topic modeling that can continuously process streams of
document collections, accumulate topics and guide future topic modeling tasks
by knowledge transfer from several sources to better deal with the sparse data.
In the lifelong process, we particularly investigate jointly: (1) sharing
generative homologies (latent topics) over lifetime to transfer prior
knowledge, and (2) minimizing catastrophic forgetting to retain the past
learning via novel selective data augmentation, co-training and topic
regularization approaches. Given a stream of document collections, we apply the
proposed Lifelong Neural Topic Modeling (LNTM) framework in modeling three
sparse document collections as future tasks and demonstrate improved
performance quantified by perplexity, topic coherence and information retrieval
task.
| 2,023 | Computation and Language |
Sentiment Frames for Attitude Extraction in Russian | Texts can convey several types of inter-related information concerning
opinions and attitudes. Such information includes the author's attitude towards
mentioned entities, attitudes of the entities towards each other, positive and
negative effects on the entities in the described situations. In this paper, we
describe the lexicon RuSentiFrames for Russian, where predicate words and
expressions are collected and linked to so-called sentiment frames conveying
several types of presupposed information on attitudes and effects. We applied
the created frames in the task of extracting attitudes from a large news
collection.
| 2,020 | Computation and Language |
A Survey of Syntactic-Semantic Parsing Based on Constituent and
Dependency Structures | Syntactic and semantic parsing has been investigated for decades and is
a primary topic in the natural language processing community. This article
aims to give a brief survey of this topic. Parsing covers many
tasks, which are difficult to cover fully. Here we focus on two of the
most popular formalizations of parsing: constituent parsing and dependency
parsing. Constituent parsing mainly targets syntactic analysis, and
dependency parsing can handle both syntactic and semantic analysis. This
article briefly reviews the representative models of constituent parsing and
dependency parsing, and also dependency graph parsing with rich semantics.
We also review closely related topics such as cross-domain,
cross-lingual and joint parsing models, parser applications, and corpus
development for parsing.
| 2,020 | Computation and Language |
Dataset for Automatic Summarization of Russian News | Automatic text summarization has been studied in a variety of domains and
languages. However, this does not hold for the Russian language. To overcome
this issue, we present Gazeta, the first dataset for summarization of Russian
news. We describe the properties of this dataset and benchmark several
extractive and abstractive models. We demonstrate that the dataset poses a valid
task for text summarization methods in Russian. Additionally, we show that the
pretrained mBART model is useful for Russian text summarization.
| 2,020 | Computation and Language |
Mechanisms for Handling Nested Dependencies in Neural-Network Language
Models and Humans | Recursive processing in sentence comprehension is considered a hallmark of
human linguistic abilities. However, its underlying neural mechanisms remain
largely unknown. We studied whether a modern artificial neural network trained
with "deep learning" methods mimics a central aspect of human sentence
processing, namely the storing of grammatical number and gender information in
working memory and its use in long-distance agreement (e.g., capturing the
correct number agreement between subject and verb when they are separated by
other phrases). Although the network, a recurrent architecture with Long
Short-Term Memory units, was solely trained to predict the next word in a large
corpus, analysis showed the emergence of a very sparse set of specialized units
that successfully handled local and long-distance syntactic agreement for
grammatical number. However, the simulations also showed that this mechanism
does not support full recursion and fails with some long-range embedded
dependencies. We tested the model's predictions in a behavioral experiment
where humans detected violations in number agreement in sentences with
systematic variations in the singular/plural status of multiple nouns, with or
without embedding. Human and model error patterns were remarkably similar,
showing that the model echoes various effects observed in human data. However,
a key difference was that, with embedded long-range dependencies, humans
remained above chance level, while the model's systematic errors brought it
below chance. Overall, our study shows that exploring the ways in which modern
artificial neural networks process sentences leads to precise and testable
hypotheses about human linguistic performance.
| 2,021 | Computation and Language |
New Vietnamese Corpus for Machine Reading Comprehension of Health News
Articles | Large-scale and high-quality corpora are necessary for evaluating machine
reading comprehension models on a low-resource language like Vietnamese.
Besides, machine reading comprehension (MRC) for the health domain offers great
potential for practical applications; however, there is still very little MRC
research in this domain. This paper presents ViNewsQA as a new corpus for the
Vietnamese language to evaluate healthcare reading comprehension models. The
corpus comprises 22,057 human-generated question-answer pairs. Crowd-workers
create the questions and their answers based on a collection of over 4,416
online Vietnamese healthcare news articles, where the answers comprise spans
extracted from the corresponding articles. In particular, we develop a process
for creating a corpus for Vietnamese machine reading comprehension.
Comprehensive evaluations demonstrate that our corpus requires abilities beyond
simple reasoning such as word matching, demanding difficult reasoning based
on single- or multiple-sentence information. We conduct experiments using
different types of machine reading comprehension methods to establish the first
baseline performances for comparison with future models. We also
measure human performance on the corpus and compare it with several powerful
neural network-based and transfer learning-based models. Our experiments show
that the best machine model is ALBERT, which achieves an exact match score of
65.26% and an F1-score of 84.89% on our corpus. The significant differences
between humans and the best-performing model (14.53% EM and 10.90%
F1-score) on the test set of our corpus indicate that improvements in ViNewsQA
could be explored in future studies. Our corpus is publicly available on our
website for research purposes to encourage the research community to make
these improvements.
| 2,021 | Computation and Language |
SqueezeBERT: What can computer vision teach NLP about efficient neural
networks? | Humans read and write hundreds of billions of messages every day. Further,
due to the availability of large datasets, large computing systems, and better
neural network models, natural language processing (NLP) technology has made
significant strides in understanding, proofreading, and organizing these
messages. Thus, there is a significant opportunity to deploy NLP in myriad
applications to help web users, social networks, and businesses. In particular,
we consider smartphones and other mobile devices as crucial platforms for
deploying NLP models at scale. However, today's highly-accurate NLP neural
network models such as BERT and RoBERTa are extremely computationally
expensive, with BERT-base taking 1.7 seconds to classify a text snippet on a
Pixel 3 smartphone. In this work, we observe that methods such as grouped
convolutions have yielded significant speedups for computer vision networks,
but many of these techniques have not been adopted by NLP neural network
designers. We demonstrate how to replace several operations in self-attention
layers with grouped convolutions, and we use this technique in a novel network
architecture called SqueezeBERT, which runs 4.3x faster than BERT-base on the
Pixel 3 while achieving competitive accuracy on the GLUE test set. The
SqueezeBERT code will be released.
| 2,020 | Computation and Language |
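The SqueezeBERT entry above rests on replacing dense position-wise operations with grouped convolutions. The snippet below is only a generic illustration of how a grouped 1D convolution reduces parameters and FLOPs relative to a dense (groups=1) projection in PyTorch; it is not the SqueezeBERT implementation, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

d_model, groups, seq_len = 768, 4, 128

# Dense position-wise projection (a fully connected layer applied per token).
dense_proj = nn.Conv1d(d_model, d_model, kernel_size=1, groups=1)
# Grouped projection: channels are mixed only within their group,
# cutting weights and compute roughly by the number of groups.
grouped_proj = nn.Conv1d(d_model, d_model, kernel_size=1, groups=groups)

x = torch.randn(2, d_model, seq_len)  # (batch, channels, positions)
print(dense_proj(x).shape, grouped_proj(x).shape)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense_proj), count(grouped_proj))  # grouped has ~1/groups the weights
```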
wav2vec 2.0: A Framework for Self-Supervised Learning of Speech
Representations | We show for the first time that learning powerful representations from speech
audio alone followed by fine-tuning on transcribed speech can outperform the
best semi-supervised methods while being conceptually simpler. wav2vec 2.0
masks the speech input in the latent space and solves a contrastive task
defined over a quantization of the latent representations which are jointly
learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER
on the clean/other test sets. When lowering the amount of labeled data to one
hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour
subset while using 100 times less labeled data. Using just ten minutes of
labeled data and pre-training on 53k hours of unlabeled data still achieves
4.8/8.2 WER. This demonstrates the feasibility of speech recognition with
limited amounts of labeled data.
| 2,020 | Computation and Language |
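wav2vec 2.0, described above, masks latent speech representations and solves a contrastive task over quantized targets. The code below is a heavily simplified, self-contained sketch of an InfoNCE-style contrastive objective over masked time steps using cosine similarity; it omits the quantizer, the masking strategy, and all architectural details of the actual model, and the tensors are toy stand-ins.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(context, targets, distractor_ids, masked_ids, temperature=0.1):
    """For each masked step, the true target must be ranked above K distractors
    sampled from other masked steps of the same utterance (InfoNCE-style)."""
    losses = []
    for t in masked_ids:
        candidates = torch.stack([targets[t]] + [targets[k] for k in distractor_ids[t]])
        sims = F.cosine_similarity(context[t].unsqueeze(0), candidates) / temperature
        # The true target sits at index 0 of the candidate list.
        losses.append(F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long)))
    return torch.stack(losses).mean()

# Toy tensors standing in for Transformer context vectors and quantized latents.
T, D = 10, 16
context, targets = torch.randn(T, D), torch.randn(T, D)
masked = [2, 5, 7]
distractors = {t: [k for k in masked if k != t] for t in masked}
print(contrastive_loss(context, targets, distractors, masked))
```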
Improving Query Safety at Pinterest | Query recommendation in search engines is a double-edged sword, with
undeniable benefits but potential for harm. Identifying unsafe queries is
necessary to protect users from inappropriate query suggestions. However,
identifying these is non-trivial because of the linguistic diversity resulting
from large vocabularies, social-group-specific slang and typos, and because the
inappropriateness of a term depends on the context. Here we formulate the
problem as query-set expansion, where we are given a small and potentially
biased seed set and the aim is to identify a diverse set of semantically
related queries. We present PinSets, a system for query-set expansion, which
applies a simple yet powerful mechanism to search user sessions, expanding a
tiny seed set into thousands of related queries at nearly perfect precision,
deep into the tail, along with explanations that are easy to interpret. PinSets
owes its high quality expansion to using a hybrid of textual and behavioral
techniques (i.e., treating queries both as compositional and as black boxes).
Experiments show that, for the domain of drugs-related queries, PinSets expands
20 seed queries into 15,670 positive training examples at over 99\% precision.
The generated expansions have diverse vocabulary and correctly handle words
with ambiguous safety. PinSets decreased unsafe query suggestions at Pinterest
by 90\%.
| 2,020 | Computation and Language |
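PinSets, per the entry above, expands a small seed set using search-session behaviour. As a rough, hypothetical illustration of the general idea (not Pinterest's system), the sketch below scores candidate queries by how often they co-occur in the same user sessions as any seed query.

```python
from collections import Counter

def expand_seed_queries(sessions, seeds, min_support=2):
    """Score candidate queries by co-session frequency with any seed query."""
    seeds = set(seeds)
    counts = Counter()
    for session in sessions:             # each session is a list of query strings
        qs = set(session)
        if qs & seeds:                   # session contains at least one seed
            for q in qs - seeds:
                counts[q] += 1
    return [(q, c) for q, c in counts.most_common() if c >= min_support]

# Toy sessions (illustrative only).
sessions = [
    ["healthy smoothie", "detox tea", "green juice"],
    ["detox tea", "green juice", "yoga poses"],
    ["green juice", "kale recipes"],
]
print(expand_seed_queries(sessions, seeds=["detox tea"], min_support=1))
```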
Sarcasm Detection in Tweets with BERT and GloVe Embeddings | Sarcasm is a form of communication in which the person states the opposite of
what they actually mean. It is ambiguous in nature. In this paper, we propose using
machine learning techniques with BERT and GloVe embeddings to detect sarcasm in
tweets. The dataset is preprocessed before extracting the embeddings. The
proposed model also uses the context to which the user is reacting, along
with the actual response.
| 2,020 | Computation and Language |
Memory Transformer | Transformer-based models have achieved state-of-the-art results in many
natural language processing tasks. The self-attention architecture allows
transformer to combine information from all elements of a sequence into
context-aware representations. However, information about the context is stored
mostly in the same element-wise representations. This might make the
processing of properties related to the sequence as a whole more difficult.
Adding trainable memory to selectively store local as well as global
representations of a sequence is a promising direction for improving the
Transformer model. Memory-augmented neural networks (MANNs) extend traditional
neural architectures with general-purpose memory for representations. MANNs
have demonstrated the capability to learn simple algorithms like Copy or
Reverse and can be successfully trained via backpropagation on diverse tasks,
from question answering to language modeling, outperforming RNNs and LSTMs of
comparable complexity. In this work, we propose and study a few extensions of the
Transformer baseline: (1) adding memory tokens to store non-local
representations, (2) creating a memory bottleneck for global information, and (3)
controlling memory updates with a dedicated layer. We evaluate these
memory-augmented Transformers and demonstrate that the presence of memory positively
correlates with model performance for machine translation and language
modelling tasks. Augmenting a pre-trained masked language model with memory
tokens shows mixed results on tasks from the GLUE benchmark. Visualization of
attention patterns over the memory suggests that it improves the model's ability
to process a global context.
| 2,021 | Computation and Language |
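The simplest variant described above adds trainable memory tokens in front of the input so self-attention can read from and write to them. Below is a minimal PyTorch sketch of that idea using a stock TransformerEncoder; it illustrates the mechanism only and is not the authors' implementation (all sizes are arbitrary).

```python
import torch
import torch.nn as nn

class MemoryTokenEncoder(nn.Module):
    """Prepend a few trainable memory vectors to the token embeddings."""
    def __init__(self, d_model=64, n_memory=4, n_heads=4, n_layers=2):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(n_memory, d_model) * 0.02)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_embeddings):               # (batch, seq, d_model)
        batch = token_embeddings.size(0)
        mem = self.memory.unsqueeze(0).expand(batch, -1, -1)
        x = torch.cat([mem, token_embeddings], dim=1)   # memory tokens go first
        out = self.encoder(x)
        n_mem = self.memory.size(0)
        return out[:, n_mem:], out[:, :n_mem]           # token states, memory states

model = MemoryTokenEncoder()
tokens, memory_state = model(torch.randn(2, 10, 64))
print(tokens.shape, memory_state.shape)
```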
Named Entity Extraction with Finite State Transducers | We describe a named entity tagging system that requires minimal linguistic
knowledge and can be applied to more target languages without substantial
changes. The system is based on the ideas of the Brill's tagger which makes it
really simple. Using supervised machine learning, we construct a series of
automatons (or transducers) in order to tag a given text. The final model is
composed entirely of automatons and it requires a lineal time for tagging. It
was tested with the Spanish data set provided in the CoNLL-$2002$ attaining an
overall $F_{\beta = 1}$ measure of $60\%.$ Also, we present an algorithm for
the construction of the final transducer used to encode all the learned
contextual rules.
| 2,020 | Computation and Language |
Seq2Seq and Joint Learning Based Unix Command Line Prediction System | Despite being an open-source operating system pioneered in the early 90s,
UNIX-based platforms have not garnered an overwhelming reception
from amateur end users. One reason for the limited popularity of UNIX-based
systems is their steep learning curve, due to
extensive use of a command line interface instead of the usual interactive
graphical user interface. In past years, most attempts to address this
concern have centered on using the user's command history
to predict the next command. These
approaches are predominantly based on probabilistic inference models. The
techniques employed in the past, however, have not addressed the problem as
effectively as anticipated. Instead of deploying the usual mechanisms of
recommendation systems, we employ a simple yet novel Seq2seq
model that leverages continuous representations of a self-curated exhaustive
Knowledge Base (KB) to enhance the embeddings employed in the model. This work
describes an assistive, adaptive and dynamic way of enhancing UNIX command line
prediction systems. Experiments show that our model achieves
accuracy surpassing a mixture of other techniques and the adaptive command line
interface mechanisms reported in the past.
| 2,020 | Computation and Language |
SIGMORPHON 2020 Shared Task 0: Typologically Diverse Morphological
Inflection | A broad goal in natural language processing (NLP) is to develop a system that
has the capacity to process any natural language. Most systems, however, are
developed using data from just one language such as English. The SIGMORPHON
2020 shared task on morphological reinflection aims to investigate systems'
ability to generalize across typologically distinct languages, many of which
are low resource. Systems were developed using data from 45 languages and just
5 language families, fine-tuned with data from an additional 45 languages and
10 language families (13 in total), and evaluated on all 90 languages. A total
of 22 systems (19 neural) from 10 teams were submitted to the task. All four
winning systems were neural (two monolingual transformers and two massively
multilingual RNN-based models with gated attention). Most teams demonstrate
utility of data hallucination and augmentation, ensembles, and multilingual
training for low-resource languages. Non-neural learners and manually designed
grammars showed competitive and even superior performance on some languages
(such as Ingrian, Tajik, Tagalog, Zarma, Lingala), especially with very limited
data. Some language families (Afro-Asiatic, Niger-Congo, Turkic) were
relatively easy for most systems and achieved over 90% mean accuracy while
others were more challenging.
| 2,020 | Computation and Language |
Learning aligned embeddings for semi-supervised word translation using
Maximum Mean Discrepancy | Word translation is an integral part of language translation. In machine
translation, each language is considered a domain with its own word embedding.
The alignment between word embeddings allows linking semantically equivalent
words in multilingual contexts. Moreover, it offers a way to infer
cross-lingual meaning for words without a direct translation. Current methods
for word embedding alignment are either supervised, i.e. they require known
word pairs, or learn a cross-domain transformation on fixed embeddings in an
unsupervised way. Here we propose an end-to-end approach for word embedding
alignment that does not require known word pairs. Our method, termed Word
Alignment through MMD (WAM), learns embeddings that are aligned during sentence
translation training using a localized Maximum Mean Discrepancy (MMD)
constraint between the embeddings. We show that our method not only
out-performs unsupervised methods, but also supervised methods that train on
known word translations.
| 2,020 | Computation and Language |
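The WAM method above constrains sentence-translation training with a Maximum Mean Discrepancy term between the two embedding spaces. For reference, here is a standard, generic Gaussian-kernel MMD estimator in PyTorch; the bandwidth and the way the constraint would be localized to mini-batches are assumptions, not details from the paper.

```python
import torch

def gaussian_mmd(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate between samples x (n, d) and y (m, d) with an RBF kernel."""
    def kernel(a, b):
        dists = torch.cdist(a, b) ** 2
        return torch.exp(-dists / (2 * bandwidth ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Toy source- and target-language embedding batches.
src = torch.randn(128, 300)
tgt = torch.randn(128, 300) + 0.5            # shifted distribution
print(gaussian_mmd(src, tgt).item())          # larger than for matched distributions
print(gaussian_mmd(src, torch.randn(128, 300)).item())
```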
AraDIC: Arabic Document Classification using Image-Based Character
Embeddings and Class-Balanced Loss | Classical and some deep learning techniques for Arabic text classification
often depend on complex morphological analysis, word segmentation, and
hand-crafted feature engineering. These could be eliminated by using
character-level features. We propose a novel end-to-end Arabic document
classification framework, Arabic document image-based classifier (AraDIC),
inspired by the work on image-based character embeddings. AraDIC consists of an
image-based character encoder and a classifier. They are trained in an
end-to-end fashion using the class balanced loss to deal with the long-tailed
data distribution problem. To evaluate the effectiveness of AraDIC, we created
and published two datasets, the Arabic Wikipedia title (AWT) dataset and the
Arabic poetry (AraP) dataset. To the best of our knowledge, this is the first
image-based character embedding framework addressing the problem of Arabic text
classification. We also present the first deep learning-based text classifier
widely evaluated on modern standard Arabic, colloquial Arabic and classical
Arabic. AraDIC shows performance improvement over classical and deep learning
baselines by 12.29% and 23.05% for the micro and macro F-score, respectively.
| 2,020 | Computation and Language |
Studying Attention Models in Sentiment Attitude Extraction Task | In the sentiment attitude extraction task, the aim is to identify
<<attitudes>> -- sentiment relations between entities mentioned in text. In
this paper, we provide a study on attention-based context encoders in the
sentiment attitude extraction task. For this task, we adapt attentive context
encoders of two types: (i) feature-based; (ii) self-based. Our experiments with
a corpus of Russian analytical texts RuSentRel illustrate that the models
trained with attentive encoders outperform ones that were trained without them
and achieve a 1.5-5.9% increase in F1. We also provide an analysis of attention
weight distributions depending on the term type.
| 2,020 | Computation and Language |
Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood
Ensemble | Although neural networks have achieved prominent performance on many natural
language processing (NLP) tasks, they are vulnerable to adversarial examples.
In this paper, we propose Dirichlet Neighborhood Ensemble (DNE), a randomized
smoothing method for training a robust model to defend against substitution-based
attacks. During training, DNE forms virtual sentences by sampling embedding
vectors for each word in an input sentence from a convex hull spanned by the
word and its synonyms, and it augments them with the training data. In such a
way, the model is robust to adversarial attacks while maintaining the
performance on the original clean data. DNE is agnostic to the network
architectures and scales to large models for NLP applications. We demonstrate
through extensive experimentation that our method consistently outperforms
recently proposed defense methods by a significant margin across different
network architectures and multiple data sets.
| 2,020 | Computation and Language |
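DNE, summarized above, builds virtual training sentences by sampling each word's embedding from the convex hull spanned by the word and its synonyms. A minimal sketch of that sampling step is shown below, with Dirichlet-distributed mixture weights; the synonym lists and embeddings are toy placeholders, not the authors' resources.

```python
import numpy as np

def sample_virtual_embedding(word, embeddings, synonyms, alpha=1.0, rng=None):
    """Convex combination of the word's and its synonyms' embeddings,
    with weights drawn from a Dirichlet distribution."""
    rng = rng or np.random.default_rng()
    candidates = [word] + synonyms.get(word, [])
    vectors = np.stack([embeddings[w] for w in candidates])      # (k, d)
    weights = rng.dirichlet(alpha * np.ones(len(candidates)))    # sums to 1, all >= 0
    return weights @ vectors                                     # stays inside the convex hull

embeddings = {w: np.random.randn(50) for w in ["good", "great", "fine", "movie"]}
synonyms = {"good": ["great", "fine"]}
print(sample_virtual_embedding("good", embeddings, synonyms).shape)
```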
MDR Cluster-Debias: A Nonlinear Word Embedding Debiasing Pipeline | Existing methods for debiasing word embeddings often do so only
superficially, in that words that are stereotypically associated with, e.g., a
particular gender in the original embedding space can still be clustered
together in the debiased space. However, there has yet to be a study that
explores why this residual clustering exists, and how it might be addressed.
The present work fills this gap. We identify two potential reasons for which
residual bias exists and develop a new pipeline, MDR Cluster-Debias, to
mitigate this bias. We explore the strengths and weaknesses of our method,
finding that it significantly outperforms other existing debiasing approaches
on a variety of upstream bias tests but achieves limited improvement on
decreasing gender bias in a downstream task. This indicates that word
embeddings encode gender bias in still other ways, not necessarily captured by
upstream tests.
| 2,020 | Computation and Language |
The Importance of Category Labels in Grammar Induction with
Child-directed Utterances | Recent progress has shown that grammar induction is
possible without explicit assumptions of language-specific knowledge. However,
evaluation of induced grammars has usually ignored phrasal labels, an essential
part of a grammar. Experiments in this work using a labeled evaluation metric,
RH, show that linguistically motivated predictions about grammar sparsity and
use of categories can only be revealed through labeled evaluation. Furthermore,
depth-bounding as an implementation of human memory constraints in grammar
inducers is still effective with labeled evaluation on multilingual transcribed
child-directed utterances.
| 2,020 | Computation and Language |
Enriching Large-Scale Eventuality Knowledge Graph with Entailment
Relations | Computational and cognitive studies suggest that the abstraction of
eventualities (activities, states, and events) is crucial for humans to
understand daily eventualities. In this paper, we propose a scalable approach
to model the entailment relations between eventualities ("eat an apple"
entails "eat fruit"). As a result, we construct a large-scale eventuality
entailment graph (EEG), which has 10 million eventuality nodes and 103 million
entailment edges. Detailed experiments and analysis demonstrate the
effectiveness of the proposed approach and quality of the resulting knowledge
graph. Our datasets and code are available at
https://github.com/HKUST-KnowComp/ASER-EEG.
| 2,020 | Computation and Language |
The NYU-CUBoulder Systems for SIGMORPHON 2020 Task 0 and Task 2 | We describe the NYU-CUBoulder systems for the SIGMORPHON 2020 Task 0 on
typologically diverse morphological inflection and Task 2 on unsupervised
morphological paradigm completion. The former consists of generating
morphological inflections from a lemma and a set of morphosyntactic features
describing the target form. The latter requires generating entire paradigms for
a set of given lemmas from raw text alone. We model morphological inflection as
a sequence-to-sequence problem, where the input is the sequence of the lemma's
characters with morphological tags, and the output is the sequence of the
inflected form's characters. First, we apply a transformer model to the task.
Second, as inflected forms share most characters with the lemma, we further
propose a pointer-generator transformer model to allow easy copying of input
characters. Our best performing system for Task 0 is placed 6th out of 23
systems. We further use our inflection systems as subcomponents of approaches
for Task 2. Our best performing system for Task 2 is the 2nd best out of 7
submissions.
| 2,020 | Computation and Language |
AdvAug: Robust Adversarial Augmentation for Neural Machine Translation | In this paper, we propose a new adversarial augmentation method for Neural
Machine Translation (NMT). The main idea is to minimize the vicinal risk over
virtual sentences sampled from two vicinity distributions, of which the crucial
one is a novel vicinity distribution for adversarial sentences that describes a
smooth interpolated embedding space centered around observed training sentence
pairs. We then discuss our approach, AdvAug, to train NMT models using the
embeddings of virtual sentences in sequence-to-sequence learning. Experiments
on Chinese-English, English-French, and English-German translation benchmarks
show that AdvAug achieves significant improvements over the Transformer (up to
4.9 BLEU points), and substantially outperforms other data augmentation
techniques (e.g. back-translation) without using extra corpora.
| 2,020 | Computation and Language |
Labeling Explicit Discourse Relations using Pre-trained Language Models | Labeling explicit discourse relations is one of the most challenging
sub-tasks of shallow discourse parsing, where the goal is to identify the
discourse connectives and the boundaries of their arguments. The
state-of-the-art models achieve slightly above 45% of F-score by using
hand-crafted features. The current paper investigates the efficacy of the
pre-trained language models in this task. We find that the pre-trained language
models, when finetuned, are powerful enough to replace the linguistic features.
We evaluate our model on PDTB 2.0 and report the state-of-the-art results in
the extraction of the full relation. This is the first time a model
outperforms knowledge-intensive models without employing any linguistic
features.
| 2,020 | Computation and Language |
A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and
Benchmark Datasets | Machine Reading Comprehension (MRC) is a challenging Natural Language
Processing(NLP) research field with wide real-world applications. The great
progress of this field in recent years is mainly due to the emergence of
large-scale datasets and deep learning. At present, many MRC models have
already surpassed human performance on various benchmark datasets despite the
obvious large gap between existing MRC models and genuine human-level reading
comprehension. This shows the need for improving existing datasets, evaluation
metrics, and models to move current MRC models toward "real" understanding. To
address the current lack of a comprehensive survey of existing MRC tasks,
evaluation metrics, and datasets, herein, (1) we analyze 57 MRC tasks and
datasets and propose a more precise classification method of MRC tasks with 4
different attributes; (2) we summarize 9 evaluation metrics of MRC tasks, and 7
attributes and 10 characteristics of MRC datasets; (3) we discuss key open
issues in MRC research and highlight future research directions. In addition,
we have collected, organized, and published our data on the companion
website(https://mrc-datasets.github.io/) where MRC researchers could directly
access each MRC dataset, papers, baseline projects, and the leaderboard.
| 2,020 | Computation and Language |
Students Need More Attention: BERT-based Attention Model for Small Data
with Application to Automatic Patient Message Triage | Small and imbalanced datasets commonly seen in healthcare represent a
challenge when training classifiers based on deep learning models. So
motivated, we propose a novel framework based on BioBERT (Bidirectional Encoder
Representations from Transformers for Biomedical Text Mining). Specifically, (i)
we introduce Label Embeddings for Self-Attention in each layer of BERT, which
we call LESA-BERT, and (ii) by distilling LESA-BERT to smaller variants, we aim
to reduce overfitting and model size when working on small datasets. As an
application, our framework is utilized to build a model for patient portal
message triage that classifies the urgency of a message into three categories:
non-urgent, medium and urgent. Experiments demonstrate that our approach can
outperform several strong baseline classifiers by a significant margin of 4.3%
in terms of macro F1 score. The code for this project is publicly available at
\url{https://github.com/shijing001/text_classifiers}.
| 2,020 | Computation and Language |
Efficient text generation of user-defined topic using generative
adversarial networks | This study focused on efficient text generation using generative adversarial
networks (GAN). Assuming that the goal is to generate a paragraph of a
user-defined topic and sentimental tendency, conventionally the whole network
has to be re-trained to obtain new results each time a user changes the
topic. This would be time-consuming and impractical. Therefore, we propose a
User-Defined GAN (UD-GAN) with two-level discriminators to solve this problem.
The first discriminator aims to guide the generator to learn paragraph-level
information and sentence syntactic structure, which is constructed by
multiple-LSTMs. The second one copes with higher-level information, such as the
user-defined sentiment and topic for text generation. The cosine similarity
based on TF-IDF and length penalty are adopted to determine the relevance of
the topic. Then, the second discriminator is re-trained with the generator if
the topic or sentiment for text generation is modified. The system evaluations
are conducted to compare the performance of the proposed method with other
GAN-based ones. The objective results show that the proposed method is
capable of generating texts in less time than the others and that the generated
text is related to the user-defined topic and sentiment. We will further investigate
the possibility of incorporating more detailed paragraph information such as
semantics into text generation to enhance the result.
| 2,020 | Computation and Language |
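The UD-GAN entry above determines topic relevance with a TF-IDF cosine similarity plus a length penalty. The snippet below is one plausible, hypothetical realization of such a relevance score using scikit-learn; the exact penalty form and target length are assumptions, not details from the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def topic_relevance(generated: str, topic_description: str, target_len: int = 10) -> float:
    """Cosine similarity of TF-IDF vectors, discounted when the text falls short of a target length."""
    vec = TfidfVectorizer().fit([generated, topic_description])
    sim = cosine_similarity(vec.transform([generated]), vec.transform([topic_description]))[0, 0]
    length_penalty = min(1.0, len(generated.split()) / target_len)  # assumed penalty form
    return sim * length_penalty

print(topic_relevance("the team won the championship after a dramatic final",
                      "sports news about a championship final"))
```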
Clinical Predictive Keyboard using Statistical and Neural Language
Modeling | A language model can be used to predict the next word during authoring, to
correct spelling or to accelerate writing (e.g., in SMS or emails). Language
models, however, have only been applied on a very small scale to assist
physicians during authoring (e.g., of discharge summaries or radiology reports).
Beyond assisting the physician, computer-based systems that
expedite the patient's discharge also help decrease hospital infections.
We employed statistical and neural language modeling to predict the next word
of a clinical text and assess all the models in terms of accuracy and keystroke
discount in two datasets with radiology reports. We show that a neural language
model can achieve as high as 51.3% accuracy in radiology reports (one out of
two words predicted correctly). We also show that even when the models are
employed only for frequent words, the physician can save valuable time.
| 2,020 | Computation and Language |
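The clinical keyboard above is evaluated by next-word accuracy and keystroke discount. As background, here is a tiny bigram language model with a keystroke-savings calculation; it is a generic illustration, far simpler than the statistical and neural models evaluated in the paper, and the report snippets are invented.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count next-word frequencies for each preceding word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, prev_word):
    options = counts.get(prev_word.lower())
    return options.most_common(1)[0][0] if options else None

corpus = ["no acute cardiopulmonary process", "no acute fracture is seen"]
model = train_bigram(corpus)
word = predict_next(model, "acute")           # a frequent continuation of "acute"
# Keystroke discount: characters saved if the suggestion is accepted instead of typed.
saved = len(word) if word else 0
print(word, f"{saved} keystrokes saved")
```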
Exploiting Non-Taxonomic Relations for Measuring Semantic Similarity and
Relatedness in WordNet | Various applications in the areas of computational linguistics and artificial
intelligence employ semantic similarity to solve challenging tasks, such as
word sense disambiguation, text classification, information retrieval, machine
translation, and document clustering. Previous work on semantic similarity
followed a mono-relational approach using mostly the taxonomic relation "ISA".
This paper explores the benefits of using all types of non-taxonomic relations
in large linked data, such as WordNet knowledge graph, to enhance existing
semantic similarity and relatedness measures. We propose a holistic
poly-relational approach based on a new relation-based information content and
non-taxonomic-based weighted paths to devise a comprehensive semantic
similarity and relatedness measure. To demonstrate the benefits of exploiting
non-taxonomic relations in a knowledge graph, we used three strategies to
deploy non-taxonomic relations at different granularity levels. We conducted
experiments on four well-known gold standard datasets, and the results
demonstrated the robustness and scalability of the proposed semantic similarity
and relatedness measure, which significantly improves existing similarity
measures.
| 2,021 | Computation and Language |
ReCO: A Large Scale Chinese Reading Comprehension Dataset on Opinion | This paper presents ReCO, a human-curated Chinese Reading Comprehension
dataset on Opinion. The questions in ReCO are opinion-based queries issued to
a commercial search engine. The passages are provided by the crowdworkers who
extract the support snippet from the retrieved documents. Finally, an
abstractive yes/no/uncertain answer is given by the crowdworkers. The release
of ReCO consists of 300k questions, which to our knowledge is the largest in
Chinese reading comprehension. A prominent characteristic of ReCO is that in
addition to the original context paragraph, we also provide the support
evidence that can be directly used to answer the question. Quality analysis
demonstrates the challenge of ReCO, which requires various types of reasoning
skills, such as causal inference, logical reasoning, etc. Current QA models
that perform very well on many question answering problems, such as BERT, only
achieve 77% accuracy on this dataset, a large margin behind human performance of
nearly 92%, indicating that ReCO presents a good challenge for machine reading
comprehension. The code and datasets are freely available at
https://github.com/benywon/ReCO.
| 2,020 | Computation and Language |
Shared Task on Evaluating Accuracy in Natural Language Generation | We propose a shared task on methodologies and algorithms for evaluating the
accuracy of generated texts. Participants will measure the accuracy of
basketball game summaries produced by NLG systems from basketball box score
data.
| 2,020 | Computation and Language |
MedLatinEpi and MedLatinLit: Two Datasets for the Computational
Authorship Analysis of Medieval Latin Texts | We present and make available MedLatinEpi and MedLatinLit, two datasets of
medieval Latin texts to be used in research on computational authorship
analysis. MedLatinEpi and MedLatinLit consist of 294 and 30 curated texts,
respectively, labelled by author; MedLatinEpi texts are of an epistolary nature,
while MedLatinLit texts consist of literary comments and treatises about
various subjects. As such, these two datasets lend themselves to supporting
research in authorship analysis tasks, such as authorship attribution,
authorship verification, or same-author verification. Along with the datasets
we provide experimental results, obtained on these datasets, for the authorship
verification task, i.e., the task of predicting whether a text of unknown
authorship was written by a candidate author or not. We also make available the
source code of the authorship verification system we have used, thus allowing
our experiments to be reproduced, and to be used as baselines, by other
researchers. We also describe the application of the above authorship
verification system, using these datasets as training data, for investigating
the authorship of two medieval epistles whose authorship has been disputed by
scholars.
| 2,021 | Computation and Language |
Dirichlet-Smoothed Word Embeddings for Low-Resource Settings | Nowadays, classical count-based word embeddings using positive pointwise
mutual information (PPMI) weighted co-occurrence matrices have been widely
superseded by machine-learning-based methods like word2vec and GloVe. But these
methods are usually applied using very large amounts of text data. In many
cases, however, there is not much text data available, for example for specific
domains or low-resource languages. This paper revisits PPMI by adding Dirichlet
smoothing to correct its bias towards rare words. We evaluate on standard word
similarity data sets and compare to word2vec and the recent state of the art
for low-resource settings: Positive and Unlabeled (PU) Learning for word
embeddings. The proposed method outperforms PU-Learning for low-resource
settings and obtains competitive results for Maltese and Luxembourgish.
| 2,020 | Computation and Language |
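The entry above revisits PPMI with Dirichlet smoothing of the co-occurrence statistics to reduce the bias toward rare words. A compact NumPy sketch of a smoothed PPMI matrix is given below; additive (add-alpha) smoothing of every cell is one common way to realize the Dirichlet prior, and the parameter value is illustrative, so the paper's exact formulation may differ.

```python
import numpy as np

def dirichlet_smoothed_ppmi(cooc: np.ndarray, alpha: float = 0.1) -> np.ndarray:
    """PPMI over a word-context co-occurrence matrix with add-alpha smoothing."""
    counts = cooc + alpha                       # smooth every co-occurrence cell
    total = counts.sum()
    p_wc = counts / total
    p_w = p_wc.sum(axis=1, keepdims=True)
    p_c = p_wc.sum(axis=0, keepdims=True)
    pmi = np.log(p_wc / (p_w * p_c))
    return np.maximum(pmi, 0.0)                 # keep only the positive part

cooc = np.array([[10, 0, 2],
                 [0,  5, 1],
                 [2,  1, 0]], dtype=float)
print(dirichlet_smoothed_ppmi(cooc).round(2))
```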
A Step Towards Interpretable Authorship Verification | A central problem that has been researched for many years in the field of
digital text forensics is the question whether two documents were written by
the same author. Authorship verification (AV) is a research branch in this
field that deals with this question. Over the years, research activities in the
context of AV have steadily increased, which has led to a variety of approaches
trying to solve this problem. Many of these approaches, however, make use of
features that are related to or influenced by the topic of the documents.
Therefore, it may accidentally happen that their verification results are based
not on the writing style (the actual focus of AV), but on the topic of the
documents. To address this problem, we propose an alternative AV approach that
considers only topic-agnostic features in its classification decision. In
addition, we present a post-hoc interpretation method that makes it possible to understand
which particular features have contributed to the prediction of the proposed AV
method. To evaluate the performance of our AV method, we compared it with ten
competing baselines (including the current state of the art) on four
challenging data sets. The results show that our approach outperforms all
baselines in two cases (with a maximum accuracy of 84%), while in the other two
cases it performs as well as the strongest baseline.
| 2,020 | Computation and Language |
Using Company Specific Headlines and Convolutional Neural Networks to
Predict Stock Fluctuations | This work presents a Convolutional Neural Network (CNN) for the prediction of
next-day stock fluctuations using company-specific news headlines. Experiments
to evaluate model performance using various configurations of word-embeddings
and convolutional filter widths are reported. The total number of convolutional
filters used is far fewer than is common, reducing the dimensionality of the
task without loss of accuracy. Furthermore, multiple hidden layers with
decreasing dimensionality are employed. A classification accuracy of 61.7\% is
achieved using pre-learned embeddings that are fine-tuned during training to
represent the specific context of this task. Multiple filter widths are also
implemented to detect different length phrases that are key for classification.
Trading simulations are conducted using the presented classification results.
Initial investments are more than tripled over an 838-day testing period using
the optimal classification configuration and a simple trading strategy. Two
novel methods are presented to reduce the risk of the trading simulations.
Adjustment of the sigmoid class threshold and re-labelling headlines using
multiple classes form the basis of these methods. A combination of these
approaches is found to more than double the Average Trade Profit (ATP) achieved
during baseline simulations.
| 2,020 | Computation and Language |
Open-Domain Conversational Agents: Current Progress, Open Problems, and
Future Directions | We present our view of what is necessary to build an engaging open-domain
conversational agent: covering the qualities of such an agent, the pieces of
the puzzle that have been built so far, and the gaping holes we have not filled
yet. We present a biased view, focusing on work done by our own group, while
citing related work in each area. In particular, we discuss in detail the
properties of continual learning, providing engaging content, and being
well-behaved -- and how to measure success in providing them. We end with a
discussion of our experience and learnings, and our recommendations to the
community.
| 2,020 | Computation and Language |
Exploring Software Naturalness through Neural Language Models | The Software Naturalness hypothesis argues that programming languages can be
understood through the same techniques used in natural language processing. We
explore this hypothesis through the use of a pre-trained transformer-based
language model to perform code analysis tasks. Present approaches to code
analysis depend heavily on features derived from the Abstract Syntax Tree (AST)
while our transformer-based language models work on raw source code. This work
is the first to investigate whether such language models can discover AST
features automatically. To achieve this, we introduce a sequence labeling task
that directly probes the language model's understanding of the AST. Our results show
that transformer-based language models achieve high accuracy in the AST tagging
task. Furthermore, we evaluate our model on a software vulnerability
identification task. Importantly, we show that our approach obtains
vulnerability identification results comparable to graph based approaches that
rely heavily on compilers for feature extraction.
| 2,020 | Computation and Language |
Unsupervised Evaluation of Interactive Dialog with DialoGPT | It is important to define meaningful and interpretable automatic evaluation
metrics for open-domain dialog research. Standard language generation metrics
have been shown to be ineffective for dialog. This paper introduces the FED
metric (fine-grained evaluation of dialog), an automatic evaluation metric
which uses DialoGPT, without any fine-tuning or supervision. It also introduces
the FED dataset which is constructed by annotating a set of human-system and
human-human conversations with eighteen fine-grained dialog qualities. The FED
metric (1) does not rely on a ground-truth response, (2) does not require
training data and (3) measures fine-grained dialog qualities at both the turn
and whole dialog levels. FED attains moderate to strong correlation with human
judgement at both levels.
| 2,020 | Computation and Language |
Keyframe Segmentation and Positional Encoding for Video-guided Machine
Translation Challenge 2020 | Video-guided machine translation is a multimodal neural machine
translation task that aims to generate high-quality text translations by
tangibly engaging both video and text. In this work, we present our
video-guided machine translation system for the Video-guided Machine
Translation Challenge 2020. This system employs keyframe-based video feature
extraction along with positional encoding of the video features. In the evaluation
phase, our system scored 36.60 corpus-level BLEU-4 and achieved the 1st place
on the Video-guided Machine Translation Challenge 2020.
| 2,020 | Computation and Language |
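The system above combines keyframe features with positional encoding. For reference, the standard sinusoidal positional encoding from the Transformer literature is sketched below as it might be added to a sequence of keyframe feature vectors; this is the generic formulation, not the challenge submission itself, and the feature sizes are made up.

```python
import numpy as np

def sinusoidal_positional_encoding(num_positions: int, d_model: int) -> np.ndarray:
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d))."""
    positions = np.arange(num_positions)[:, None]
    dims = np.arange(d_model)[None, :]
    angles = positions / np.power(10000.0, (2 * (dims // 2)) / d_model)
    pe = np.zeros((num_positions, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe

keyframe_features = np.random.randn(8, 512)          # 8 keyframes, 512-d features
encoded = keyframe_features + sinusoidal_positional_encoding(8, 512)
print(encoded.shape)
```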
Inductive Unsupervised Domain Adaptation for Few-Shot Classification via
Clustering | Few-shot classification tends to struggle when it needs to adapt to diverse
domains. Due to the non-overlapping label space between domains, the
performance of conventional domain adaptation is limited. Previous work tackles
the problem in a transductive manner, by assuming access to the full set of
test data, which is too restrictive for many real-world applications. In this
paper, we set out to tackle this issue by introducing an inductive framework,
DaFeC, to improve Domain adaptation performance for Few-shot classification via
Clustering. We first build a representation extractor to derive features for
unlabeled data from the target domain (no test data is necessary) and then
group them with a cluster miner. The generated pseudo-labeled data and the
labeled source-domain data are used as supervision to update the parameters of
the few-shot classifier. In order to derive high-quality pseudo labels, we
propose a Clustering Promotion Mechanism, to learn better features for the
target domain via Similarity Entropy Minimization and Adversarial Distribution
Alignment, which are combined with a Cosine Annealing Strategy. Experiments are
performed on the FewRel 2.0 dataset. Our approach outperforms previous work
with absolute gains (in classification accuracy) of 4.95%, 9.55%, 3.99% and
11.62%, respectively, under four few-shot settings.
| 2,020 | Computation and Language |
NLPContributions: An Annotation Scheme for Machine Reading of Scholarly
Contributions in Natural Language Processing Literature | We describe an annotation initiative to capture the scholarly contributions
in natural language processing (NLP) articles, particularly, for the articles
that discuss machine learning (ML) approaches for various information
extraction tasks. We develop the annotation task based on a pilot annotation
exercise on 50 NLP-ML scholarly articles presenting contributions to five
information extraction tasks 1. machine translation, 2. named entity
recognition, 3. question answering, 4. relation classification, and 5. text
classification. In this article, we describe the outcomes of this pilot
annotation phase. Through the exercise we have obtained an annotation
methodology; and found ten core information units that reflect the contribution
of the NLP-ML scholarly investigations. The resulting annotation scheme we
developed based on these information units is called NLPContributions.
The overarching goal of our endeavor is four-fold: 1) to find a systematic
set of patterns of subject-predicate-object statements for the semantic
structuring of scholarly contributions that are more or less generically
applicable for NLP-ML research articles; 2) to apply the discovered patterns in
the creation of a larger annotated dataset for training machine readers of
research contributions; 3) to ingest the dataset into the Open Research
Knowledge Graph (ORKG) infrastructure as a showcase for creating user-friendly
state-of-the-art overviews; 4) to integrate the machine readers into the ORKG
to assist users in the manual curation of their respective article
contributions. We envision that the NLPContributions methodology engenders a
wider discussion on the topic toward its further refinement and development.
Our pilot annotated dataset of 50 NLP-ML scholarly articles according to the
NLPContributions scheme is openly available to the research community at
https://doi.org/10.25835/0019761.
| 2,020 | Computation and Language |
Can you tell? SSNet -- a Sagittal Stratum-inspired Neural Network
Framework for Sentiment Analysis | When people try to understand nuanced language they typically process
multiple input sensory modalities to complete this cognitive task. It turns out
the human brain even has a specialized neuron formation, called the sagittal
stratum, to help us understand sarcasm. We use this biological formation as the
inspiration for designing a neural network architecture that combines
predictions of different models on the same text to construct robust, accurate
and computationally efficient classifiers for sentiment analysis and study
several different realizations. Among them, we propose a systematic new
approach to combining multiple predictions based on a dedicated neural network
and develop mathematical analysis of it along with state-of-the-art
experimental results. We also propose a heuristic-hybrid technique for
combining models and back it up with experimental results on a representative
benchmark dataset and comparisons to other methods to show the advantages of
the new approaches.
| 2,022 | Computation and Language |
Domain Adaptation for Semantic Parsing | Recently, semantic parsing has attracted much attention in the community.
Although many neural modeling efforts have greatly improved the performance, it
still suffers from the data scarcity issue. In this paper, we propose a novel
semantic parser for domain adaptation, where we have much fewer annotated data
in the target domain compared to the source domain. Our semantic parser
benefits from a two-stage coarse-to-fine framework, thus can provide different
and accurate treatments for the two stages, i.e., focusing on domain invariant
and domain specific information, respectively. In the coarse stage, our novel
domain discrimination component and domain relevance attention encourage the
model to learn transferable domain general structures. In the fine stage, the
model is guided to concentrate on domain related details. Experiments on a
benchmark dataset show that our method consistently outperforms several popular
domain adaptation strategies. Additionally, we show that our model can well
exploit limited target data to capture the difference between the source and
target domain, even when the target domain has far fewer training instances.
| 2,020 | Computation and Language |
Combining Neural Language Models for WordSense Induction | Word sense induction (WSI) is the problem of grouping occurrences of an
ambiguous word according to the expressed sense of this word. Recently a new
approach to this task was proposed, which generates possible substitutes for
the ambiguous word in a particular context using neural language models, and
then clusters sparse bag-of-words vectors built from these substitutes. In this
work, we apply this approach to the Russian language and improve it in two
ways. First, we propose methods of combining left and right contexts, resulting
in better generated substitutes. Second, instead of a fixed number of clusters
for all ambiguous words, we propose a technique for selecting an individual number
of clusters for each word. Our approach establishes a new state-of-the-art level,
improving the current best WSI results for the Russian language on two RUSSE
2018 datasets by a large margin.
| 2,019 | Computation and Language |
Automating Text Naturalness Evaluation of NLG Systems | Automatic methods and metrics that assess various quality criteria of
automatically generated texts are important for developing NLG systems because
they produce repeatable results and allow for a fast development cycle. We
present here an attempt to automate the evaluation of text naturalness which is
a very important characteristic of natural language generation methods. Instead
of relying on human participants for scoring or labeling the text samples, we
propose to automate the process by using a human likeliness metric we define
and a discrimination procedure based on large pretrained language models with
their probability distributions. We analyze the text probability fractions and
observe how they are influenced by the size of the generative and
discriminative models involved in the process. Based on our results, bigger
generators and larger pretrained discriminators are more appropriate for a
better evaluation of text naturalness. A comprehensive validation procedure
with human participants is required as follow up to check how well this
automatic evaluation scheme correlates with human judgments.
| 2,020 | Computation and Language |
Supervised Understanding of Word Embeddings | Pre-trained word embeddings are widely used for transfer learning in natural
language processing. The embeddings are continuous and distributed
representations of the words that preserve their similarities in compact
Euclidean spaces. However, the dimensions of these spaces do not provide any
clear interpretation. In this study, we have obtained supervised projections in
the form of the linear keyword-level classifiers on word embeddings. We have
shown that the method creates interpretable projections of original embedding
dimensions. Activations of the trained classifier nodes correspond to a subset
of the words in the vocabulary. Thus, they behave similarly to the dictionary
features while having the merit of continuous value output. Additionally, such
dictionaries can be grown iteratively with multiple rounds by adding expert
labels on top-scoring words to an initial collection of the keywords. Also, the
same classifiers can be applied to aligned word embeddings in other languages
to obtain corresponding dictionaries. In our experiments, we have shown that
initializing higher-order networks with these classifier weights gives more
accurate models for downstream NLP tasks. We further demonstrate the usefulness
of supervised dimensions in revealing the polysemous nature of a keyword of
interest by projecting its embedding using learned classifiers in different
sub-spaces.
| 2,020 | Computation and Language |
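The study above trains linear keyword-level classifiers directly on word embeddings to obtain interpretable supervised projections. Below is a minimal, hypothetical sketch of that setup with scikit-learn logistic regression; the keyword list and the random embeddings are placeholders for real pretrained vectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy pretrained embeddings (in practice these would be word2vec/GloVe vectors).
rng = np.random.default_rng(0)
vocab = ["happy", "joyful", "sad", "table", "chair", "angry"]
embeddings = {w: rng.normal(size=50) for w in vocab}

# Supervision: 1 if the word belongs to the "emotion" keyword dictionary, else 0.
seed_keywords = {"happy", "joyful", "sad", "angry"}
X = np.stack([embeddings[w] for w in vocab])
y = np.array([int(w in seed_keywords) for w in vocab])

clf = LogisticRegression(max_iter=1000).fit(X, y)     # a linear keyword-level classifier
# The classifier's continuous scores act like a soft, expandable dictionary feature.
scores = clf.decision_function(X)
for w, s in sorted(zip(vocab, scores), key=lambda t: -t[1]):
    print(f"{w:8s} {s:+.2f}")
```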
Classifying Referential and Non-referential It Using Gaze | When processing a text, humans and machines must disambiguate between
different uses of the pronoun it, including non-referential, nominal anaphoric
or clause anaphoric ones. In this paper, we use eye-tracking data to learn how
humans perform this disambiguation. We use this knowledge to improve the
automatic classification of it. We show that by using gaze data and a
POS-tagger we are able to significantly outperform a common baseline and
classify between three categories of it with an accuracy comparable to that of
linguistic-based approaches. In addition, the discriminatory power of specific
gaze features informs the way humans process the pronoun, which, to the best of
our knowledge, has not been explored using data from a natural reading task.
| 2,020 | Computation and Language |
One Model to Pronounce Them All: Multilingual Grapheme-to-Phoneme
Conversion With a Transformer Ensemble | The task of grapheme-to-phoneme (G2P) conversion is important for both speech
recognition and synthesis. Similar to other speech and language processing
tasks, in a scenario where only small-sized training data are available,
learning G2P models is challenging. We describe a simple approach of exploiting
model ensembles, based on multilingual Transformers and self-training, to
develop a highly effective G2P solution for 15 languages. Our models are
developed as part of our participation in the SIGMORPHON 2020 Shared Task 1
focused on G2P. Our best models achieve 14.99 word error rate (WER) and 3.30
phoneme error rate (PER), a sizeable improvement over the shared task
competitive baselines.
| 2,020 | Computation and Language |
A High-Quality Multilingual Dataset for Structured Documentation
Translation | This paper presents a high-quality multilingual dataset for the documentation
domain to advance research on localization of structured text. Unlike
widely-used datasets for translation of plain text, we collect XML-structured
parallel text segments from the online documentation for an enterprise software
platform. These Web pages have been professionally translated from English into
16 languages and maintained by domain experts, and around 100,000 text segments
are available for each language pair. We build and evaluate translation models
for seven target languages from English, with several different copy mechanisms
and an XML-constrained beam search. We also experiment with a non-English pair
to show that our dataset has the potential to explicitly enable $17 \times 16$
translation settings. Our experiments show that learning to translate with the
XML tags improves translation accuracy, and the beam search accurately
generates XML structures. We also discuss trade-offs of using the copy
mechanisms by focusing on translation of numerical words and named entities. We
further provide a detailed human analysis of gaps between the model output and
human translations for real-world applications, including suitability for
post-editing.
| 2,020 | Computation and Language |