Titles | Abstracts | Years | Categories |
---|---|---|---|
Dialogue Relation Extraction with Document-level Heterogeneous Graph
Attention Networks | Dialogue relation extraction (DRE) aims to detect the relation between two
entities mentioned in a multi-party dialogue. It plays an important role in
constructing knowledge graphs from conversational data increasingly abundant on
the internet and facilitating intelligent dialogue system development. The
prior methods of DRE do not meaningfully leverage speaker information; they simply
prepend the utterances with the respective speaker names. Thus, they fail to
model the crucial inter-speaker relations that may give additional context to
relevant argument entities through pronouns and triggers. In contrast, we present
a graph attention network-based method for DRE in which we construct a graph
that meaningfully connects speaker, entity, entity-type, and utterance nodes.
This graph is fed to a graph attention network for context
propagation among relevant nodes, which effectively captures the dialogue
context. We empirically show that this graph-based approach quite effectively
captures the relations between different entity pairs in a dialogue as it
outperforms the state-of-the-art approaches by a significant margin on the
benchmark dataset DialogRE. Our code is released at:
https://github.com/declare-lab/dialog-HGAT
| 2021 | Computation and Language |
RadLex Normalization in Radiology Reports | Radiology reports have been widely used for extraction of various clinically
significant information about patients' imaging studies. However, limited
research has focused on standardizing the entities to a common
radiology-specific vocabulary. Further, no study to date has attempted to
leverage RadLex for standardization. In this paper, we aim to normalize a
diverse set of radiological entities to RadLex terms. We manually construct a
normalization corpus by annotating entities from three types of reports. This
contains 1706 entity mentions. We propose two deep learning-based NLP methods
based on a pre-trained language model (BERT) for automatic normalization.
First, we employ BM25 to retrieve candidate concepts for the BERT-based models
(re-ranker and span detector) to predict the normalized concept. The results
are promising, with the best accuracy (78.44%) obtained by the span detector.
Additionally, we discuss the challenges involved in corpus construction and
propose new RadLex terms.
| 2020 | Computation and Language |
Rank over Class: The Untapped Potential of Ranking in Natural Language
Processing | Text classification has long been a staple within Natural Language Processing
(NLP) with applications spanning across diverse areas such as sentiment
analysis, recommender systems and spam detection. With such a powerful
solution, it is often tempting to use it as the go-to tool for all NLP problems
since when you are holding a hammer, everything looks like a nail. However, we
argue here that many tasks which are currently addressed using classification
are in fact being shoehorned into a classification mould and that if we instead
address them as a ranking problem, we not only improve the model, but we
achieve better performance. We propose a novel end-to-end ranking approach
consisting of a Transformer network responsible for producing representations
for a pair of text sequences, which are in turn passed into a context
aggregating network outputting ranking scores used to determine an ordering of
the sequences based on some notion of relevance. We perform numerous
experiments on publicly-available datasets and investigate the applications of
ranking in problems often solved using classification. In an experiment on a
heavily-skewed sentiment analysis dataset, converting ranking results to
classification labels yields an approximately 22% improvement over
state-of-the-art text classification, demonstrating the efficacy of text
ranking over text classification in certain scenarios.
| 2021 | Computation and Language |
FILTER: An Enhanced Fusion Method for Cross-lingual Language
Understanding | Large-scale cross-lingual language models (LM), such as mBERT, Unicoder and
XLM, have achieved great success in cross-lingual representation learning.
However, when applied to zero-shot cross-lingual transfer tasks, most existing
methods use only single-language input for LM finetuning, without leveraging
the intrinsic cross-lingual alignment between different languages that proves
essential for multilingual tasks. In this paper, we propose FILTER, an enhanced
fusion method that takes cross-lingual data as input for XLM finetuning.
Specifically, FILTER first encodes text input in the source language and its
translation in the target language independently in the shallow layers, then
performs cross-language fusion to extract multilingual knowledge in the
intermediate layers, and finally performs further language-specific encoding.
During inference, the model makes predictions based on the text input in the
target language and its translation in the source language. For simple tasks
such as classification, translated text in the target language shares the same
label as the source language. However, this shared label becomes less accurate
or even unavailable for more complex tasks such as question answering, NER and
POS tagging. To tackle this issue, we further propose an additional
KL-divergence self-teaching loss for model training, based on auto-generated
soft pseudo-labels for translated text in the target language. Extensive
experiments demonstrate that FILTER achieves new state of the art on two
challenging multilingual multi-task benchmarks, XTREME and XGLUE.
| 2020 | Computation and Language |
Accelerating Real-Time Question Answering via Question Generation | Although deep neural networks have achieved tremendous success for question
answering (QA), they still suffer from heavy computational and energy
costs in real product deployment. Further, existing QA systems are bottlenecked
by the encoding time of real-time questions with neural networks, thus
suffering from detectable latency in deployment for large-volume traffic. To
reduce the computational cost and accelerate real-time question answering
(RTQA) for practical usage, we propose to remove all the neural networks from
online QA systems, and present Ocean-Q (an Ocean of Questions), which
introduces a new question generation (QG) model to generate a large pool of QA
pairs offline, then in real time matches an input question with the candidate
QA pool to predict the answer without question encoding. Ocean-Q can be readily
deployed in existing distributed database systems or search engines for
large-scale query usage, and is much greener with no additional cost for
maintaining large neural networks. Experiments on SQuAD(-open) and HotpotQA
benchmarks demonstrate that Ocean-Q is able to accelerate the fastest
state-of-the-art RTQA system by 4X, with only a 3+% accuracy drop.
| 2021 | Computation and Language |
Sparsifying Transformer Models with Trainable Representation Pooling | We propose a novel method to sparsify attention in the Transformer model by
learning to select the most-informative token representations during the
training process, thus focusing on the task-specific parts of an input. A
reduction of quadratic time and memory complexity to sublinear was achieved due
to a robust trainable top-$k$ operator. Our experiments on a challenging long
document summarization task show that even our simple baseline performs
comparably to the current SOTA, and with trainable pooling, we can retain its
top quality, while being $1.8\times$ faster during training, $4.5\times$ faster
during inference, and up to $13\times$ more computationally efficient in the
decoder.
| 2022 | Computation and Language |
Denoising Large-Scale Image Captioning from Alt-text Data using Content
Selection Models | Training large-scale image captioning (IC) models demands access to a rich
and diverse set of training examples, gathered from the wild, often from noisy
alt-text data. However, recent modeling approaches to IC often fall short in
terms of performance in this case, because they assume a clean annotated
dataset (as opposed to the noisier alt-text--based annotations), and employ an
end-to-end generation approach, which often lacks both controllability and
interpretability. We address these problems by breaking down the task into two
simpler, more controllable tasks -- skeleton prediction and skeleton-based
caption generation. Specifically, we show that selecting content words as
skeletons helps in generating improved and denoised captions when leveraging
rich yet noisy alt-text--based uncurated datasets. We also show that the
predicted English skeletons can be further cross-lingually leveraged to
generate non-English captions, and present experimental results covering
caption generation in French, Italian, German, Spanish and Hindi. We also show
that skeleton-based prediction allows for better control of certain caption
properties, such as length, content, and gender expression, providing a handle
to perform human-in-the-loop semi-automatic corrections.
| 2022 | Computation and Language |
UPB at SemEval-2020 Task 11: Propaganda Detection with Domain-Specific
Trained BERT | Manipulative and misleading news has become a commodity for some online news
outlets, and such news has a significant impact on the global mindset
of people. Propaganda is a frequently employed manipulation method whose
goal is to influence readers by spreading ideas meant to distort or manipulate
their opinions. This paper describes our participation in the SemEval-2020,
Task 11: Detection of Propaganda Techniques in News Articles competition. Our
approach considers specializing a pre-trained BERT model on propagandistic and
hyperpartisan news articles, enabling it to create more adequate
representations for the two subtasks, namely propaganda Span Identification
(SI) and propaganda Technique Classification (TC). Our proposed system achieved
an F1-score of 46.060% in subtask SI, ranking 5th on the leaderboard out of 36
teams, and a micro-averaged F1 score of 54.302% for subtask TC, ranking 19th
out of 32 teams.
| 2020 | Computation and Language |
IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural
Language Understanding | Although Indonesian is known to be the fourth most frequently used language
on the internet, research progress on this language in natural
language processing (NLP) is slow-moving due to a lack of available resources.
In response, we introduce the first-ever vast resource for training,
evaluating, and benchmarking Indonesian natural language understanding
(IndoNLU) tasks. IndoNLU includes twelve tasks, ranging from single-sentence
classification to sentence-pair sequence labeling with different levels of
complexity. The datasets for the tasks lie in different domains and styles to
ensure task diversity. We also provide a set of Indonesian pre-trained models
(IndoBERT) trained from a large and clean Indonesian dataset Indo4B collected
from publicly available sources such as social media texts, blogs, news, and
websites. We release baseline models for all twelve tasks, as well as the
framework for benchmark evaluation, thus enabling everyone to benchmark
their systems' performance.
| 2020 | Computation and Language |
Semantic Relations and Deep Learning | The second edition of "Semantic Relations Between Nominals" by Vivi Nastase,
Stan Szpakowicz, Preslav Nakov and Diarmuid \'O S\'eaghdha was published
in April 2021 by Morgan & Claypool
(www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1627).
A new Chapter 5 of the book, by Vivi Nastase and Stan Szpakowicz, discusses
relation classification/extraction in the deep-learning paradigm which arose
after the first edition appeared. This is Chapter 5, made public by the kind
permission of Morgan & Claypool.
| 2021 | Computation and Language |
A Comparison of LSTM and BERT for Small Corpus | Recent advancements in the NLP field have shown that transfer learning helps with
achieving state-of-the-art results for new tasks by tuning pre-trained models
instead of starting from scratch. Transformers have made a significant
improvement in creating new state-of-the-art results for many NLP tasks
including but not limited to text classification, text generation, and sequence
labeling. Most of these success stories were based on large datasets. In this
paper we focus on a real-life scenario that scientists in academia and industry
face frequently: given a small dataset, can we use a large pre-trained model
like BERT and get better results than simple models? To answer this question,
we use a small dataset for intent classification collected for building
chatbots and compare the performance of a simple bidirectional LSTM model with
a pre-trained BERT model. Our experimental results show that bidirectional LSTM
models can achieve significantly higher results than a BERT model for a small
dataset, and that these simple models train in much less time than fine-tuning the
pre-trained counterparts. We conclude that the performance of a model is
dependent on the task and the data, and therefore before making a model choice,
these factors should be taken into consideration instead of directly choosing
the most popular model.
| 2020 | Computation and Language |
WOLI at SemEval-2020 Task 12: Arabic Offensive Language Identification
on Different Twitter Datasets | Communicating through social platforms has become one of the principal means
of personal communication and interaction. Unfortunately, healthy
communication is often hindered by offensive language that can have damaging
effects on users. A key to fighting offensive language on social media is the
existence of an automatic offensive language detection system. This paper
presents the results and the main findings of SemEval-2020, Task 12 OffensEval
Sub-task A Zampieri et al. (2020), on Identifying and categorising Offensive
Language in Social Media. The task was based on the Arabic OffensEval dataset
Mubarak et al. (2020). In this paper, we describe the system submitted by
WideBot AI Lab for the shared task which ranked 10th out of 52 participants
with Macro-F1 86.9% on the golden dataset under CodaLab username
"yasserotiefy". We experimented with various models and the best model is a
linear SVM in which we use a combination of both character and word n-grams. We
also introduced a neural network approach that enhanced the predictive ability
of our system that includes CNN, highway network, Bi-LSTM, and attention
layers.
| 2020 | Computation and Language |
Robust Neural Machine Translation: Modeling Orthographic and
Interpunctual Variation | Neural machine translation systems typically are trained on curated corpora
and break when faced with non-standard orthography or punctuation. Resilience
to spelling mistakes and typos, however, is crucial as machine translation
systems are used to translate texts of informal origins, such as chat
conversations, social media posts and web pages. We propose a simple generative
noise model to generate adversarial examples of ten different types. We use
these to augment machine translation systems' training data and show that, when
tested on noisy data, systems trained using adversarial examples perform almost
as well as when translating clean data, while baseline systems' performance
drops by 2-3 BLEU points. To measure the robustness and noise invariance of
machine translation systems' outputs, we use the average translation edit rate
between the translation of the original sentence and its noised variants. Using
this measure, we show that systems trained on adversarial examples on average
yield 50% consistency improvements when compared to baselines trained on clean
data.
| 2020 | Computation and Language |
UPB at SemEval-2020 Task 6: Pretrained Language Models for Definition
Extraction | This work presents our contribution in the context of the 6th task of
SemEval-2020: Extracting Definitions from Free Text in Textbooks (DeftEval).
This competition consists of three subtasks with different levels of
granularity: (1) classification of sentences as definitional or
non-definitional, (2) labeling of definitional sentences, and (3) relation
classification. We use various pretrained language models (i.e., BERT, XLNet,
RoBERTa, SciBERT, and ALBERT) to solve each of the three subtasks of the
competition. Specifically, for each language model variant, we experiment by
both freezing its weights and fine-tuning them. We also explore a multi-task
architecture that was trained to jointly predict the outputs for the second and
the third subtasks. Our best performing model evaluated on the DeftEval dataset
obtains the 32nd place for the first subtask and the 37th place for the second
subtask. The code is available for further research at:
https://github.com/avramandrei/DeftEval.
| 2020 | Computation and Language |
Solving Arithmetic Word Problems by Scoring Equations with Recursive
Neural Networks | Solving arithmetic word problems is a cornerstone task in assessing language
understanding and reasoning capabilities in NLP systems. Recent works use
automatic extraction and ranking of candidate solution equations providing the
answer to arithmetic word problems. In this work, we explore novel approaches
to score such candidate solution equations using tree-structured recursive
neural network (Tree-RNN) configurations. The advantage of this Tree-RNN
approach over more established sequential representations is that it can
naturally capture the structure of the equations. Our proposed method consists
of transforming the mathematical expression of the equation into an expression
tree. Further, we encode this tree into a Tree-RNN by using different Tree-LSTM
architectures. Experimental results show that our proposed method (i) improves
overall performance by more than 3 percentage points in accuracy compared to the
previous state-of-the-art, and by over 15 percentage points on a subset of problems
that require more complex reasoning, and (ii) outperforms sequential LSTMs by 4
percentage points in accuracy on such more complex problems.
| 2021 | Computation and Language |
Coreference Resolution System for Indonesian Text with Mention Pair
Method and Singleton Exclusion using Convolutional Neural Network | Neural networks have shown promising performance in coreference resolution
systems that use the mention pair method. With a deep neural network, such
systems can learn hidden and deep relations between two mentions. However, there is
no work on coreference resolution for Indonesian text that uses this learning technique.
The state-of-the-art system for Indonesian text only states that the use of lexical
and syntactic features can improve the existing coreference resolution system.
In this paper, we propose a new coreference resolution system for Indonesian
text with mention pair method that uses deep neural network to learn the
relations of the two mentions. In addition to lexical and syntactic features,
in order to learn the representation of the mentions' words and context, we use
word embeddings and feed them to a Convolutional Neural Network (CNN).
Furthermore, we perform singleton exclusion using a singleton classifier component to
prevent singleton mentions from entering any entity clusters at the end. Achieving
CoNLL average F1 scores of 67.37% without singleton exclusion, 63.27% with the trained
singleton classifier, and 75.95% with the gold singleton classifier, our
proposed system outperforms the state-of-the-art system.
| 2019 | Computation and Language |
Investigating Bi-LSTM and CRF with POS Tag Embedding for Indonesian
Named Entity Tagger | Research on Indonesian named entity (NE) taggers has been conducted for
years. However, most of it did not use deep learning and instead employed
traditional machine learning algorithms such as association rules, support
vector machines, random forests, na\"ive Bayes, etc. In those studies, word
lists such as gazetteers or clue words were provided to enhance accuracy. Here,
we attempt to employ deep learning in our Indonesian NE tagger. We use long
short-term memory (LSTM) as the topology since it is the state of the art for NE
tagging. By using LSTM, we do not need a word list in order to enhance the
accuracy. Basically, there are two main things that we investigate. The first
is the output layer of the network: Softmax vs conditional random field (CRF).
The second is the use of a part-of-speech (POS) tag embedding input layer.
Using 8400 sentences as the training data and 97 sentences as the evaluation
data, we find that using POS tag embedding as additional input improves the
performance of our Indonesian NE tagger. As for the comparison between Softmax
and CRF, we find that both architectures have a weakness in classifying an NE
tag.
| 2018 | Computation and Language |
Relation Detection for Indonesian Language using Deep Neural Network --
Support Vector Machine | Relation Detection is a task to determine whether two entities are related or
not. In this paper, we employ neural networks to perform relation detection between
two named entities for the Indonesian language. We used features such as word
embeddings, position embeddings, POS-tag embeddings, and character embeddings.
We divide the model into two parts: a front-part classifier
(convolutional layer or LSTM layer) and a back-part classifier (dense layer or
SVM). We performed a grid search over the neural network and SVM hyperparameters. We
used 6,000 Indonesian sentences for the training process and 1,125 for testing. The
best result is an F1-score of 0.8083, using a convolutional layer as the front part
and an SVM as the back part.
| 2020 | Computation and Language |
Improving Indonesian Text Classification Using Multilingual Language
Model | Compared to English, the amount of labeled data for Indonesian text
classification tasks is very small. Recently developed multilingual language
models have shown their ability to create multilingual representations
effectively. This paper investigates the effect of combining English and
Indonesian data on building Indonesian text classification models (e.g., sentiment
analysis and hate speech detection) using multilingual language models. Using the
feature-based approach, we observe performance across various data sizes and
amounts of added English data. The experiments showed that the addition of English
data, especially if the amount of Indonesian data is small, improves
performance. Using the fine-tuning approach, we further showed its
effectiveness in utilizing the English language to build Indonesian text
classification models.
| 2020 | Computation and Language |
Improving Bi-LSTM Performance for Indonesian Sentiment Analysis Using
Paragraph Vector | Bidirectional Long Short-Term Memory Network (Bi-LSTM) has shown promising
performance in the sentiment classification task. It processes inputs as a sequence
of information. Due to this behavior, sentiment predictions by Bi-LSTM are
influenced by word order, and the first or last phrases of a text tend to
have stronger features than other phrases. Meanwhile, in the problem scope of
Indonesian sentiment analysis, phrases that express the sentiment of a document
might not appear in the first or last part of the document, which can lead to
incorrect sentiment classification. To this end, we propose using an
existing document representation method called the paragraph vector as an
additional input feature for Bi-LSTM. This vector provides document-level context
at each sequence-processing step. The paragraph vector is simply
concatenated to each word vector of the document. This representation also
helps to differentiate ambiguous Indonesian words. Bi-LSTM and paragraph vector
were previously used as separate methods. Combining the two methods has shown a
significant performance improvement of the Indonesian sentiment analysis model.
Several case studies on testing data showed that the proposed method can handle
the sentiment phrases position problem encountered by Bi-LSTM.
| 2019 | Computation and Language |
Syntax Role for Neural Semantic Role Labeling | Semantic role labeling (SRL) is dedicated to recognizing the semantic
predicate-argument structure of a sentence. Previous studies based on
traditional models have shown that syntactic information can make remarkable
contributions to SRL performance; however, the necessity of syntactic
information was challenged by a few recent neural SRL studies that demonstrate
impressive performance without syntactic backbones and suggest that syntax
information becomes much less important for neural semantic role labeling,
especially when paired with recent deep neural network and large-scale
pre-trained language models. Despite this notion, the neural SRL field still
lacks a systematic and full investigation of the relevance of syntactic
information in SRL, for both dependency-based and span-based SRL, and in both
monolingual and multilingual settings. This paper intends to quantify the importance of syntactic
information for neural SRL in the deep learning framework. We introduce three
typical SRL frameworks (baselines), sequence-based, tree-based, and
graph-based, which are accompanied by two categories of methods for exploiting syntactic
information: syntax pruning-based and syntax feature-based. Experiments are
conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all languages
available, and results show that neural SRL models can still benefit from
syntactic information under certain conditions. Furthermore, we show the
quantitative significance of syntax to neural SRL models together with a
thorough empirical survey using existing models.
| 2020 | Computation and Language |
Intent Detection with WikiHow | Modern task-oriented dialog systems need to reliably understand users'
intents. Intent detection is most challenging when moving to new domains or new
languages, since there is little annotated data. To address this challenge, we
present a suite of pretrained intent detection models. Our models are able to
predict a broad range of intended goals from many actions because they are
trained on wikiHow, a comprehensive instructional website. Our models achieve
state-of-the-art results on the Snips dataset, the Schema-Guided Dialogue
dataset, and all 3 languages of the Facebook multilingual dialog datasets. Our
models also demonstrate strong zero- and few-shot performance, reaching over
75% accuracy using only 100 training examples in all datasets.
| 2020 | Computation and Language |
CIA_NITT at WNUT-2020 Task 2: Classification of COVID-19 Tweets Using
Pre-trained Language Models | This paper presents our models for the WNUT-2020 shared task 2, which
involves the identification of COVID-19-related informative tweets. We treat this
as a binary text classification problem and experiment with pre-trained language
models. Our first model, based on CT-BERT, achieves an F1-score of 88.7%,
and our second model, an ensemble of CT-BERT, RoBERTa and SVM, achieves an
F1-score of 88.52%.
| 2020 | Computation and Language |
Improving Machine Reading Comprehension with Contextualized Commonsense
Knowledge | In this paper, we aim to extract commonsense knowledge to improve machine
reading comprehension. We propose to represent relations implicitly by
situating structured knowledge in a context instead of relying on a pre-defined
set of relations, and we call it contextualized knowledge. Each piece of
contextualized knowledge consists of a pair of interrelated verbal and
nonverbal messages extracted from a script and the scene in which they occur as
context to implicitly represent the relation between the verbal and nonverbal
messages, which are originally conveyed by different modalities within the
script. We propose a two-stage fine-tuning strategy to use the large-scale
weakly-labeled data based on a single type of contextualized knowledge and
employ a teacher-student paradigm to inject multiple types of contextualized
knowledge into a student machine reader. Experimental results demonstrate that
our method outperforms a state-of-the-art baseline by a 4.3% improvement in
accuracy on the machine reading comprehension dataset C^3, wherein most of the
questions require unstated prior knowledge.
| 2020 | Computation and Language |
Fine-tuning Pre-trained Contextual Embeddings for Citation Content
Analysis in Scholarly Publication | Citation function and citation sentiment are two essential aspects of
citation content analysis (CCA), which are useful for influence analysis and the
recommendation of scientific publications. However, existing studies mostly use
traditional machine learning methods; although deep learning techniques have
also been explored, the performance improvement seems insignificant
due to insufficient training data, which brings difficulties to applications.
In this paper, we propose to fine-tune pre-trained contextual embeddings
ULMFiT, BERT, and XLNet for the task. Experiments on three public datasets show
that our strategy outperforms all the baselines in terms of the F1 score. For
citation function identification, the XLNet model achieves 87.2%, 86.90%, and
81.6% on DFKI, UMICH, and TKDE2019 datasets respectively, while it achieves
91.72% and 91.56% on DFKI and UMICH in terms of citation sentiment
identification. Our method can be used to enhance the influence analysis of
scholars and scholarly publications.
| 2020 | Computation and Language |
Combining Word and Character Vector Representation on Neural Machine
Translation | This paper describes combinations of word vector representation and character
vector representation in English-Indonesian neural machine translation (NMT).
Six configurations of NMT models were built with different input vector
representations: word-based, combination of word and character representation
using bidirectional LSTM (bi-LSTM), combination of word and character
representation using CNN, combination of word and character representation by
combining bi-LSTM and CNN by three different vector operations: addition,
pointwise multiplication, and averaging. The experimental results showed that NMT
models with concatenation of word and character representations obtained BLEU
scores higher than the baseline model, ranging from 9.14 points to 11.65 points, for
all models that combine both word and character representations, except the
model that combines word and character representations using both bi-LSTM and
CNN with the addition operation. The highest BLEU score achieved was 42.48, compared
to 30.83 for the baseline model.
| 2020 | Computation and Language |
Pow-Wow: A Dataset and Study on Collaborative Communication in Pommerman | In multi-agent learning, agents must coordinate with each other in order to
succeed. For humans, this coordination is typically accomplished through the
use of language. In this work we perform a controlled study of human language
use in a competitive team-based game, and search for useful lessons for
structuring communication protocol between autonomous agents. We construct
Pow-Wow, a new dataset for studying situated goal-directed human communication.
Using the Pommerman game environment, we enlisted teams of humans to play
against teams of AI agents, recording their observations, actions, and
communications. We analyze the types of communications which result in
effective game strategies, annotate them accordingly, and present corpus-level
statistical analysis of how trends in communications affect game outcomes.
Based on this analysis, we design a communication policy for learning agents,
and show that agents which utilize communication achieve higher win-rates
against baseline systems than those which do not.
| 2020 | Computation and Language |
BoostingBERT:Integrating Multi-Class Boosting into BERT for NLP Tasks | As a pre-trained Transformer model, BERT (Bidirectional Encoder
Representations from Transformers) has achieved ground-breaking performance on
multiple NLP tasks. On the other hand, Boosting is a popular ensemble learning
technique which combines many base classifiers and has been demonstrated to
yield better generalization performance in many machine learning tasks. Some
works have indicated that ensembles of BERT can further improve application
performance. However, current ensemble approaches focus on bagging or stacking,
and there has not been much effort on exploring boosting. In this work, we
propose a novel BoostingBERT model to integrate multi-class boosting into
BERT. Our proposed model uses the pre-trained Transformer as the base
classifier to choose harder training sets to fine-tune and gains the benefits
of both the pre-training language knowledge and boosting ensemble in NLP tasks.
We evaluate the proposed model on the GLUE dataset and 3 popular Chinese NLU
benchmarks. Experimental results demonstrate that our proposed model
significantly outperforms BERT on all datasets and proves its effectiveness in
many NLP tasks. Replacing the BERT base with RoBERTa as the base classifier,
BoostingBERT achieves new state-of-the-art results on several NLP tasks. We
also use knowledge distillation within the "teacher-student" framework to
reduce the computational overhead and model storage of BoostingBERT while
keeping its performance for practical application.
| 2020 | Computation and Language |
Span-based Semantic Parsing for Compositional Generalization | Despite the success of sequence-to-sequence (seq2seq) models in semantic
parsing, recent work has shown that they fail in compositional generalization,
i.e., the ability to generalize to new structures built of components observed
during training. In this work, we posit that a span-based parser should lead to
better compositional generalization. We propose SpanBasedSP, a parser that
predicts a span tree over an input utterance, explicitly encoding how partial
programs compose over spans in the input. SpanBasedSP extends Pasupat et al.
(2019) to be comparable to seq2seq models by (i) training from programs,
without access to gold trees, treating trees as latent variables, (ii) parsing
a class of non-projective trees through an extension to standard CKY. On
GeoQuery, SCAN and CLOSURE datasets, SpanBasedSP performs similarly to strong
seq2seq baselines on random splits, but dramatically improves performance
compared to baselines on splits that require compositional generalization: from
$61.0 \rightarrow 88.9$ average accuracy.
| 2021 | Computation and Language |
Cluster-Former: Clustering-based Sparse Transformer for Long-Range
Dependency Encoding | Transformer has become ubiquitous in the deep learning field. One of the key
ingredients behind its success is the self-attention mechanism, which
allows fully-connected contextual encoding over input tokens. However, despite
its effectiveness in modeling short sequences, self-attention suffers when
handling inputs with extremely long-range dependencies, as its complexity grows
quadratically with respect to the sequence length. Therefore, long sequences
are often encoded by Transformer in chunks using a sliding window. In this
paper, we propose Cluster-Former, a novel clustering-based sparse Transformer
to perform attention across chunked sequences. The proposed framework is
pivoted on two unique types of Transformer layer: Sliding-Window Layer and
Cluster-Former Layer, which encode local sequence information and global
context jointly and iteratively. This new design allows information integration
beyond local windows, which is especially beneficial for question answering
(QA) tasks that rely on long-range dependencies. Experiments show that
Cluster-Former achieves state-of-the-art performance on several major QA
benchmarks.
| 2021 | Computation and Language |
Identity-Based Patterns in Deep Convolutional Networks: Generative
Adversarial Phonology and Reduplication | This paper models unsupervised learning of an identity-based pattern (or
copying) in speech called reduplication from raw continuous data with deep
convolutional neural networks. We use the ciwGAN architecture Begu\v{s} (2021a;
arXiv:2006.02951) in which learning of meaningful representations in speech
emerges from a requirement that the CNNs generate informative data. We propose
a technique to wug-test CNNs trained on speech and, based on four generative
tests, argue that the network learns to represent an identity-based pattern in
its latent space. By manipulating only two categorical variables in the latent
space, we can actively turn an unreduplicated form into a reduplicated form
with no other substantial changes to the output in the majority of cases. We
also argue that the network extends the identity-based pattern to unobserved
data. Exploration of how meaningful representations of identity-based patterns
emerge in CNNs and how the latent space variables outside of the training range
correlate with identity-based patterns in the output has general implications
for neural network interpretability.
| 2021 | Computation and Language |
Composing Answer from Multi-spans for Reading Comprehension | This paper presents a novel method to generate answers for non-extraction
machine reading comprehension (MRC) tasks whose answers cannot be simply
extracted as one span from the given passages. Using a pointer network-style
extractive decoder for this type of MRC may result in unsatisfactory
performance when the ground-truth answers are given by human annotators or
highly re-paraphrased from parts of the passages. On the other hand, using a
generative decoder cannot guarantee that the resulting answers have well-formed
syntax and semantics when encountering long sentences. Therefore, to alleviate
the obvious drawbacks of both sides, we propose a method that composes answers from
extracted multi-spans that are learned by our model as highly confident
$n$-gram candidates in the given passage. That is, the returned answers are
composed of discontinuous multi-spans rather than just one consecutive span in the
given passages. The proposed method is simple but effective: empirical
experiments on MS MARCO show that the proposed method has a better performance
on accurately generating long answers, and substantially outperforms two
competitive typical one-span and Seq2Seq baseline decoders.
| 2021 | Computation and Language |
On Robustness and Bias Analysis of BERT-based Relation Extraction | Fine-tuning pre-trained models has achieved impressive performance on
standard natural language processing benchmarks. However, the resultant model
generalizability remains poorly understood. We do not know, for example, whether
excellent benchmark performance leads to models that generalize well. In
this study, we analyze a fine-tuned BERT model from different perspectives
using relation extraction. We also characterize the differences in
generalization techniques according to our proposed improvements. From
empirical experimentation, we find that BERT suffers a bottleneck in terms of
robustness by way of randomizations, adversarial and counterfactual tests, and
biases (i.e., selection and semantic). These findings highlight opportunities
for future improvements. Our open-sourced testbed DiagnoseRE is available in
\url{https://github.com/zjunlp/DiagnoseRE}.
| 2023 | Computation and Language |
Contrastive Triple Extraction with Generative Transformer | Triple extraction is an essential task in information extraction for natural
language processing and knowledge graph construction. In this paper, we revisit
the end-to-end triple extraction task for sequence generation. Since generative
triple extraction may struggle to capture long-term dependencies and generate
unfaithful triples, we introduce a novel model, contrastive triple extraction
with a generative transformer. Specifically, we introduce a single shared
transformer module for encoder-decoder-based generation. To generate faithful
results, we propose a novel triplet contrastive training objective. Moreover, we
introduce two mechanisms to further improve model performance (i.e., batch-wise
dynamic attention-masking and triple-wise calibration). Experimental results on
three datasets (i.e., NYT, WebNLG, and MIE) show that our approach achieves
better performance than that of baselines.
| 2023 | Computation and Language |
A Comparison of Two Fluctuation Analyses for Natural Language Clustering
Phenomena: Taylor and Ebeling & Neiman Methods | This article considers the fluctuation analysis methods of Taylor and Ebeling
& Neiman. While both have been applied to various phenomena in the statistical
mechanics domain, their similarities and differences have not been clarified.
After considering their analytical aspects, this article presents a large-scale
application of these methods to text. It is found that both methods can
distinguish real text from independently and identically distributed (i.i.d.)
sequences. Furthermore, it is found that the Taylor exponents acquired from
words can roughly distinguish text categories; this is also the case for
Ebeling and Neiman exponents, but to a lesser extent. Additionally, both
methods show some possibility of capturing script kinds.
| 2021 | Computation and Language |
Learning an Effective Context-Response Matching Model with
Self-Supervised Tasks for Retrieval-based Dialogues | Building an intelligent dialogue system with the ability to select a proper
response according to a multi-turn context is a great challenge.
Existing studies focus on building a context-response matching model with
various neural architectures or PLMs and typically learning with a single
response prediction task. These approaches overlook many potential training
signals contained in dialogue data, which might be beneficial for context
understanding and produce better features for response prediction. Besides, the
response retrieved from existing dialogue systems supervised in the
conventional way still faces some critical challenges, including incoherence
and inconsistency. To address these issues, in this paper, we propose learning
a context-response matching model with auxiliary self-supervised tasks designed
for the dialogue data based on pre-trained language models. Specifically, we
introduce four self-supervised tasks including next session prediction,
utterance restoration, incoherence detection and consistency discrimination,
and jointly train the PLM-based response selection model with these auxiliary
tasks in a multi-task manner. By this means, the auxiliary tasks can guide the
learning of the matching model to achieve a better local optimum and select a
more proper response. Experiment results on two benchmarks indicate that the
proposed auxiliary self-supervised tasks bring significant improvement for
multi-turn response selection in retrieval-based dialogues, and our model
achieves new state-of-the-art results on both datasets.
| 2020 | Computation and Language |
QED: A Framework and Dataset for Explanations in Question Answering | A question answering system that in addition to providing an answer provides
an explanation of the reasoning that leads to that answer has potential
advantages in terms of debuggability, extensibility and trust. To this end, we
propose QED, a linguistically informed, extensible framework for explanations
in question answering. A QED explanation specifies the relationship between a
question and answer according to formal semantic notions such as referential
equality, sentencehood, and entailment. We describe and publicly release an
expert-annotated dataset of QED explanations built upon a subset of the Google
Natural Questions dataset, and report baseline models on two tasks -- post-hoc
explanation generation given an answer, and joint question answering and
explanation generation. In the joint setting, a promising result suggests that
training on a relatively small amount of QED data can improve question
answering. In addition to describing the formal, language-theoretic motivations
for the QED approach, we describe a large user study showing that the presence
of QED explanations significantly improves the ability of untrained raters to
spot errors made by a strong neural QA baseline.
| 2020 | Computation and Language |
Improving Language Generation with Sentence Coherence Objective | Conditional story generation and contextual text continuation have become
increasingly popular topics in the NLP community. Existing models are often prone
to output paragraphs of texts that gradually diverge from the given prompt.
Although the generated text may have a reasonable perplexity and diversity, it
could easily be identified by humans as gibberish. The goal of our project is to
improve the coherence and consistency across sentences in a language-generation
model. We aim to solve this issue by first training a sentence pair coherence
classifier with a pretrained GPT-2 model, and then co-training the GPT-2 language
model with this new coherence objective using a method analogous to the
REINFORCE algorithm. This fine-tuned language model is able to generate lengthy
paragraphs conditioned on a given topic without diverging too much. The
simplicity of this model allows it to be applicable to a variety of underlying
language model architectures since it only modifies the final layer of the
pre-trained model.
| 2020 | Computation and Language |
GeDi: Generative Discriminator Guided Sequence Generation | While large-scale language models (LMs) are able to imitate the distribution
of natural language well enough to generate realistic text, it is difficult to
control which regions of the distribution they generate. This is especially
problematic because datasets used for training large LMs usually contain
significant toxicity, hate, bias, and negativity. We propose GeDi as an
efficient method for using smaller LMs as generative discriminators to guide
generation from large LMs to make them safer and more controllable. GeDi guides
generation at each step by computing classification probabilities for all
possible next tokens via Bayes rule by normalizing over two class-conditional
distributions; one conditioned on the desired attribute, or control code, and
another conditioned on the undesired attribute, or anti control code. We find
that GeDi gives stronger controllability than the state of the art method while
also achieving generation speeds more than 30 times faster. Additionally,
training GeDi on only four topics allows us to controllably generate new topics
zero-shot from just a keyword, unlocking a new capability that previous
controllable generation methods do not have. Lastly, we show that GeDi can make
GPT-2 (1.5B parameters) significantly less toxic without sacrificing linguistic
quality, making it by far the most practical existing method for detoxifying
large language models while maintaining a fast generation speed.
| 2020 | Computation and Language |
Searching for a Search Method: Benchmarking Search Algorithms for
Generating NLP Adversarial Examples | We study the behavior of several black-box search algorithms used for
generating adversarial examples for natural language processing (NLP) tasks. We
perform a fine-grained analysis of three elements relevant to search: search
algorithm, search space, and search budget. When new search algorithms are
proposed in past work, the attack search space is often modified alongside the
search algorithm. Without ablation studies benchmarking the search algorithm
change with the search space held constant, one cannot tell if an increase in
attack success rate is a result of an improved search algorithm or a less
restrictive search space. Additionally, many previous studies fail to properly
consider the search algorithms' run-time cost, which is essential for
downstream tasks like adversarial training. Our experiments provide a
reproducible benchmark of search algorithms across a variety of search spaces
and query budgets to guide future research in adversarial NLP. Based on our
experiments, we recommend greedy attacks with word importance ranking when
under a time constraint or attacking long inputs, and either beam search or
particle swarm optimization otherwise. Code implementation shared via
https://github.com/QData/TextAttack-Search-Benchmark
| 2020 | Computation and Language |
Not-NUTs at W-NUT 2020 Task 2: A BERT-based System in Identifying
Informative COVID-19 English Tweets | As of 2020 when the COVID-19 pandemic is full-blown on a global scale,
people's need to have access to legitimate information regarding COVID-19 is
more urgent than ever, especially via online media where the abundance of
irrelevant information overshadows the more informative content. In response,
we proposed a model that, given an English tweet, automatically
identifies whether that tweet bears informative content regarding COVID-19 or
not. By ensembling different BERTweet model configurations, we have achieved
competitive results that are only shy of those by top performing teams by
roughly 1% in terms of F1 score on the informative class. In the
post-competition period, we have also experimented with various other
approaches that potentially boost generalization to a new dataset.
| 2020 | Computation and Language |
EdinburghNLP at WNUT-2020 Task 2: Leveraging Transformers with
Generalized Augmentation for Identifying Informativeness in COVID-19 Tweets | Twitter and, in general, social media has become an indispensable
communication channel in times of emergency. The ubiquity of smartphone
devices enables people to report an emergency they observe in real time. As a
result, more agencies are interested in programmatically monitoring Twitter
(disaster relief organizations and news agencies). Therefore, recognizing the
informativeness of a Tweet can help filter noise from the large volumes of
Tweets. In this paper, we present our submission for WNUT-2020 Task 2:
Identification of informative COVID-19 English Tweets. Our most successful
model is an ensemble of transformers, including RoBERTa, XLNet, and BERTweet
trained in a Semi-Supervised Learning (SSL) setting. The proposed system
achieves an F1 score of 0.9011 on the test set (ranking 7th on the leaderboard)
and shows significant gains in performance compared to a baseline system using
FastText embeddings.
| 2021 | Computation and Language |
Analysis and representation of Igbo text document for a text-based
system | The advancement of Information Technology (IT) has assisted in incorporating
the three major Nigerian languages into text-based applications such as text
mining, information retrieval and natural language processing. The focus of
this paper is the Igbo language, which uses compounding as a common type of
word formation and also has a large vocabulary of compound words. The issues
of collocation, word ordering and compounding play a major role in the Igbo language.
The ambiguity in dealing with these compound words has made the representation
of Igbo text documents very difficult, because it cannot be addressed
using the most common and standard approach, the Bag-Of-Words (BOW) model of
text representation, which ignores word order and relations. This is a cause
for concern and calls for the development of an improved model to capture this
situation. This paper presents an analysis of Igbo text documents,
considering the language's compounding nature, and describes their representation with a
word-based N-gram model to properly prepare them for any text-based application.
The results show that bigram and trigram text representation models
provide more semantic information and also address the issues of compounding,
word ordering and collocation, which are the major language peculiarities of
Igbo. They are likely to give better performance when used in any Igbo
text-based system.
| 2017 | Computation and Language |
Multi-Hop Fact Checking of Political Claims | Recent work has proposed multi-hop models and datasets for studying complex
natural language reasoning. One notable task requiring multi-hop reasoning is
fact checking, where a set of connected evidence pieces leads to the final
verdict of a claim. However, existing datasets either do not provide
annotations for gold evidence pages, or the only dataset which does (FEVER)
mostly consists of claims which can be fact-checked with simple reasoning and
is constructed artificially. Here, we study more complex claim verification of
naturally occurring claims with multiple hops over interconnected evidence
chunks. We: 1) construct a small annotated dataset, PolitiHop, of evidence
sentences for claim verification; 2) compare it to existing multi-hop datasets;
and 3) study how to transfer knowledge from more extensive in- and
out-of-domain resources to PolitiHop. We find that the task is complex and
achieve the best performance with an architecture that specifically models
reasoning over evidence pieces in combination with in-domain transfer learning.
| 2021 | Computation and Language |
Time-Aware Evidence Ranking for Fact-Checking | Truth can vary over time. Fact-checking decisions on claim veracity should
therefore take into account temporal information of both the claim and
supporting or refuting evidence. In this work, we investigate the hypothesis
that the timestamp of a Web page is crucial to how it should be ranked for a
given claim. We delineate four temporal ranking methods that constrain evidence
ranking differently and simulate hypothesis-specific evidence rankings given
the evidence timestamps as gold standard. Evidence ranking in three
fact-checking models is ultimately optimized using a learning-to-rank loss
function. Our study reveals that time-aware evidence ranking not only surpasses
relevance assumptions based purely on semantic similarity or position in a
search results list, but also improves veracity predictions of time-sensitive
claims in particular.
| 2021 | Computation and Language |
Development of a Dataset and a Deep Learning Baseline Named Entity
Recognizer for Three Low Resource Languages: Bhojpuri, Maithili and Magahi | In Natural Language Processing (NLP) pipelines, Named Entity Recognition
(NER) is one of the preliminary problems, which marks proper nouns and other
named entities such as Location, Person, Organization, Disease etc. Such
entities, without a NER module, adversely affect the performance of a machine
translation system. NER helps in overcoming this problem by recognising and
handling such entities separately, although it can be useful in Information
Extraction systems also. Bhojpuri, Maithili and Magahi are low resource
languages, usually known as Purvanchal languages. This paper focuses on the
development of a NER benchmark dataset for the Machine Translation systems
developed to translate from these languages to Hindi by annotating parts of
their available corpora. Bhojpuri, Maithili and Magahi corpora of sizes 228373,
157468 and 56190 tokens, respectively, were annotated using 22 entity labels.
The annotation considers coarse-grained annotation labels followed by the
tagset used in one of the Hindi NER datasets. We also report a Deep Learning
based baseline that uses an LSTM-CNNs-CRF model. The lower baseline F1-scores
from the NER tool obtained by using Conditional Random Fields models are 96.73
for Bhojpuri, 93.33 for Maithili and 95.04 for Magahi. The Deep Learning-based
technique (LSTM-CNNs-CRF) achieved 96.25 for Bhojpuri, 93.33 for Maithili and
95.44 for Magahi.
| 2020 | Computation and Language |
EasyASR: A Distributed Machine Learning Platform for End-to-end
Automatic Speech Recognition | We present EasyASR, a distributed machine learning platform for training and
serving large-scale Automatic Speech Recognition (ASR) models, as well as
collecting and processing audio data at scale. Our platform is built upon the
Machine Learning Platform for AI of Alibaba Cloud. Its main functionality is to
support efficient learning and inference for end-to-end ASR models on
distributed GPU clusters. It allows users to learn ASR models with either
pre-defined or user-customized network architectures via a simple user interface.
On EasyASR, we have produced state-of-the-art results over several public
datasets for Mandarin speech recognition.
| 2020 | Computation and Language |
Filling the Gap of Utterance-aware and Speaker-aware Representation for
Multi-turn Dialogue | A multi-turn dialogue is composed of multiple utterances from two or more
different speaker roles. Thus utterance- and speaker-aware clues are supposed
to be well captured in models. However, in existing retrieval-based
multi-turn dialogue modeling, the pre-trained language models (PrLMs) used as
the encoder represent the dialogues coarsely by taking the pairwise dialogue
history and candidate response as a whole, so the hierarchical information on
utterance interrelations and speaker roles coupled in such representations
is not well addressed. In this work, we propose a novel model to fill such a
gap by modeling the effective utterance-aware and speaker-aware representations
entailed in a dialogue history. In detail, we decouple the contextualized word
representations with masking mechanisms in a Transformer-based PrLM, making each
word focus only on the words in the current utterance, in other utterances, and in
the utterances of the two speaker roles (i.e., the sender and the receiver),
respectively. Experimental results show that our method boosts the strong
ELECTRA baseline substantially on four public benchmark datasets, and sets
new state-of-the-art performance over previous methods. A series of
ablation studies are conducted to demonstrate the effectiveness of our method.
| 2,020 | Computation and Language |
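A minimal sketch of how utterance-aware and speaker-aware attention masks could be constructed for the decoupling idea described in the entry above; the exact masking scheme of the cited model may differ, and the helper name and speaker-id convention are assumptions.

```python
# Sketch of building attention masks that restrict each token to (a) its own
# utterance, (b) other utterances, or (c) utterances from one speaker role.
# The cited model's exact masking scheme may differ; this is illustrative only.
import torch


def build_masks(utt_ids: torch.Tensor, speaker_ids: torch.Tensor):
    """utt_ids, speaker_ids: (seq,) tensors giving, for each token, the index of
    its utterance and the id of its speaker (0 = sender, 1 = receiver)."""
    same_utt = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)   # (seq, seq)
    current_utterance_mask = same_utt                         # attend within utterance
    other_utterance_mask = ~same_utt                          # attend across utterances
    sender_mask = (speaker_ids == 0).unsqueeze(0).expand(len(utt_ids), -1)
    receiver_mask = (speaker_ids == 1).unsqueeze(0).expand(len(utt_ids), -1)
    return current_utterance_mask, other_utterance_mask, sender_mask, receiver_mask


# Toy dialogue: 6 tokens, 3 utterances, alternating speakers.
utt = torch.tensor([0, 0, 1, 1, 2, 2])
spk = torch.tensor([0, 0, 1, 1, 0, 0])
masks = build_masks(utt, spk)  # boolean masks, usable as additive -inf masks in attention
```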
At your Command! An Empirical Study on How Laypersons Teach Robots New
Functions | Even though intelligent systems such as Siri or Google Assistant are
enjoyable (and useful) dialog partners, users can only access predefined
functionality. Enabling end-users to extend the functionality of intelligent
systems will be the next big thing. To promote research in this area we carried
out an empirical study on how laypersons teach robots new functions by means of
natural language instructions. The result is a labeled corpus consisting of
3168 submissions given by 870 subjects. The analysis of the dataset revealed
that many participants used certain wordings to express their wish to teach new
functionality; two corresponding trigrams are among the most frequent. In
contrast, more than one third (36.93%) did not verbalize the teaching intent at
all. We labeled the semantic constituents in the utterances: declaration
(including the name of the function) and intermediate steps. The full corpus is
publicly available: http://dx.doi.org/10.21227/zecn-6c61
| 2,020 | Computation and Language |
Using Known Words to Learn More Words: A Distributional Analysis of
Child Vocabulary Development | Why do children learn some words before others? Understanding individual
variability across children, as well as variability across words, may be
informative about the learning processes that underlie language learning. We
investigated item-based variability in vocabulary development using lexical
properties of distributional statistics derived from a large corpus of
child-directed speech. Unlike previous analyses, we predicted word trajectories
cross-sectionally, shedding light on trends in vocabulary development that may
not have been evident at a single time point. We also show that whether one
looks at a single age group or across ages as a whole, the best distributional
predictor of whether a child knows a word is the number of other known words
with which that word tends to co-occur. Keywords: age of acquisition;
vocabulary development; lexical diversity; child-directed speech;
| 2,021 | Computation and Language |
MatScIE: An automated tool for the generation of databases of methods
and parameters used in the computational materials science literature | The number of published articles in the field of materials science is growing
rapidly every year. This comparatively unstructured data source, which contains
a large amount of information, has a restriction on its re-usability, as the
information needed to carry out further calculations using the data in it must
be extracted manually. It is very important to obtain valid and contextually
correct information from online (or offline) data, as it can be useful not
only for generating inputs for further calculations, but also for incorporating
it into a querying framework. Retaining this context as a priority, we have
developed an automated tool, MatScIE (Material Science Information Extractor)
that can extract relevant information from material science literature and make
a structured database that is much easier to use for material simulations.
Specifically, we extract the material details, methods, code, parameters, and
structure from the various research articles. Finally, we created a web
application where users can upload published articles and view/download the
information obtained from this tool and can create their own databases for
their personal uses.
| 2,021 | Computation and Language |
Real-Time Execution of Large-scale Language Models on Mobile | Pre-trained large-scale language models have increasingly demonstrated high
accuracy on many natural language processing (NLP) tasks. However, the limited
weight storage and computational speed on hardware platforms have impeded the
popularity of pre-trained models, especially in the era of edge computing. In
this paper, we seek to find the best model structure of BERT for a given
computation size to match specific devices. We propose the first compiler-aware
neural architecture optimization framework. Our framework can guarantee the
identified model to meet both resource and real-time specifications of mobile
devices, thus achieving real-time execution of large transformer-based models
like BERT variants. We evaluate our model on several NLP tasks, achieving
competitive results on well-known benchmarks with lower latency on mobile
devices. Specifically, our model is 5.2x faster on CPU and 4.1x faster on GPU
with 0.5-2% accuracy loss compared with BERT-base. Our overall framework
achieves up to 7.8x speedup compared with TensorFlow-Lite with only minor
accuracy loss.
| 2,020 | Computation and Language |
Unsupervised Abstractive Dialogue Summarization for Tete-a-Tetes | High-quality dialogue-summary paired data is expensive to produce and
domain-sensitive, making abstractive dialogue summarization a challenging task.
In this work, we propose the first unsupervised abstractive dialogue
summarization model for tete-a-tetes (SuTaT). Unlike standard text
summarization, a dialogue summarization method should consider the
multi-speaker scenario where the speakers have different roles, goals, and
language styles. In a tete-a-tete, such as a customer-agent conversation, SuTaT
aims to summarize for each speaker by modeling the customer utterances and the
agent utterances separately while retaining their correlations. SuTaT consists
of a conditional generative module and two unsupervised summarization modules.
The conditional generative module contains two encoders and two decoders in a
variational autoencoder framework where the dependencies between two latent
spaces are captured. With the same encoders and decoders, two unsupervised
summarization modules equipped with sentence-level self-attention mechanisms
generate summaries without using any annotations. Experimental results show
that SuTaT is superior on unsupervised dialogue summarization for both
automatic and human evaluations, and is capable of dialogue classification and
single-turn conversation generation.
| 2,020 | Computation and Language |
Current Limitations of Language Models: What You Need is Retrieval | We classify and re-examine some of the current approaches to improve the
performance-computes trade-off of language models, including (1) non-causal
models (such as masked language models), (2) extension of batch length with
efficient attention, (3) recurrence, (4) conditional computation and (5)
retrieval. We identify some limitations that (1)-(4) suffer from. For example, (1)
currently struggles with open-ended text generation with the output loosely
constrained by the input as well as performing general textual tasks like
GPT-2/3 due to its need for a specific fine-tuning dataset. (2) and (3) do not
improve the prediction of the first $\sim 10^3$ tokens. Scaling up a model size
(e.g. efficiently with (4)) still results in poor performance scaling for some
tasks. We argue (5) would resolve many of these limitations, and it can (a)
reduce the amount of supervision and (b) efficiently extend the context over
the entire training dataset and the entire past of the current sample. We
speculate how to modify MARGE to perform unsupervised causal modeling that
achieves (b) with the retriever jointly trained.
| 2,020 | Computation and Language |
Global-aware Beam Search for Neural Abstractive Summarization | This study develops a calibrated beam-based algorithm with awareness of the
global attention distribution for neural abstractive summarization, aiming to
improve the local optimality problem of the original beam search in a rigorous
way. Specifically, a novel global protocol is proposed based on the attention
distribution to stipulate how a global optimal hypothesis should attend to the
source. A global scoring mechanism is then developed to regulate beam search to
generate summaries in a near-global optimal fashion. This novel design enjoys a
distinctive property, i.e., the global attention distribution could be
predicted before inference, enabling step-wise improvements on the beam search
through the global scoring mechanism. Extensive experiments on nine datasets
show that the global (attention)-aware inference significantly improves
state-of-the-art summarization models even using empirical hyper-parameters.
The algorithm is also proven robust, as it continues to generate meaningful texts
even with corrupted attention distributions. The code and a comprehensive set of
examples are available.
| 2,021 | Computation and Language |
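The sketch below shows a generic beam search whose step score is regularized by a penalty comparing accumulated attention against a predicted global attention distribution, in the spirit of the global scoring idea in the entry above; the `step` callback, the penalty form, and the weight `lam` are illustrative assumptions rather than the paper's protocol.

```python
# Simplified beam search sketch with an added global scoring term. The `step`
# function and the global-attention penalty are hypothetical stand-ins.
import torch


def beam_search(step, bos_id, eos_id, global_attn, beam_size=4, max_len=30, lam=0.5):
    """step(prefix) -> (log_probs over vocab, attention over source) for that prefix.
    global_attn: predicted target attention distribution over source tokens."""
    beams = [([bos_id], 0.0, torch.zeros_like(global_attn))]  # (tokens, score, accumulated attention)
    for _ in range(max_len):
        candidates = []
        for tokens, score, acc_attn in beams:
            if tokens[-1] == eos_id:            # finished hypotheses are carried over
                candidates.append((tokens, score, acc_attn))
                continue
            log_probs, attn = step(tokens)
            new_acc = acc_attn + attn
            # Global penalty: distance between normalized accumulated attention
            # and the predicted global attention distribution.
            penalty = torch.abs(new_acc / new_acc.sum() - global_attn).sum().item()
            top_lp, top_ids = log_probs.topk(beam_size)
            for lp, tok in zip(top_lp.tolist(), top_ids.tolist()):
                candidates.append((tokens + [tok], score + lp - lam * penalty, new_acc))
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
        if all(t[-1] == eos_id for t, _, _ in beams):
            break
    return beams[0][0]
```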
High-order Refining for End-to-end Chinese Semantic Role Labeling | Current end-to-end semantic role labeling is mostly accomplished via
graph-based neural models. However, these are all first-order models, where
each decision for detecting any predicate-argument pair is made in isolation
with local features. In this paper, we present a high-order refining mechanism
to perform interaction between all predicate-argument pairs. Based on the
baseline graph model, our high-order refining module learns higher-order
features between all candidate pairs via attention calculation, which are later
used to update the original token representations. After several iterations of
refinement, the underlying token representations can be enriched with globally
interacted features. Our high-order model achieves state-of-the-art results on
Chinese SRL data, including CoNLL09 and Universal Proposition Bank, while
relieving long-range dependency issues.
| 2,020 | Computation and Language |
Dialogue Response Ranking Training with Large-Scale Human Feedback Data | Existing open-domain dialog models are generally trained to minimize the
perplexity of target human responses. However, some human replies are more
engaging than others, spawning more followup interactions. Current
conversational models are increasingly capable of producing turns that are
context-relevant, but in order to produce compelling agents, these models need
to be able to predict and optimize for turns that are genuinely engaging. We
leverage social media feedback data (number of replies and upvotes) to build a
large-scale training dataset for feedback prediction. To alleviate possible
distortion between the feedback and engagingness, we convert the ranking
problem to a comparison of response pairs which involve few confounding
factors. We trained DialogRPT, a set of GPT-2 based models, on 133M pairs of
human feedback data, and the resulting ranker outperformed several baselines.
In particular, our ranker outperforms the conventional dialog perplexity
baseline by a large margin on predicting Reddit feedback. We finally combine
the feedback prediction models and a human-like scoring model to rank the
machine-generated dialog responses. Crowd-sourced human evaluation shows that
our ranking method correlates better with real human preferences than baseline
models.
| 2,020 | Computation and Language |
Noisy Self-Knowledge Distillation for Text Summarization | In this paper we apply self-knowledge distillation to text summarization
which we argue can alleviate problems with maximum-likelihood training on
single reference and noisy datasets. Instead of relying on one-hot annotation
labels, our student summarization model is trained with guidance from a teacher
which generates smoothed labels to help regularize training. Furthermore, to
better model uncertainty during training, we introduce multiple noise signals
for both teacher and student models. We demonstrate experimentally on three
benchmarks that our framework boosts the performance of both pretrained and
non-pretrained summarizers achieving state-of-the-art results.
| 2,021 | Computation and Language |
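A minimal sketch of a teacher-smoothed distillation loss of the kind described in the entry above, written in PyTorch; the temperature, the mixing weight, and any noise added to the teacher or student inputs are illustrative choices rather than the paper's exact noise signals.

```python
# Sketch of a knowledge-distillation loss with soft labels from a teacher,
# mixed with standard cross-entropy against the single reference.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, gold_ids, T=2.0, alpha=0.5):
    """student_logits, teacher_logits: (batch, seq, vocab); gold_ids: (batch, seq)."""
    # Soft targets from the teacher (detached so gradients only reach the student).
    soft_targets = F.softmax(teacher_logits.detach() / T, dim=-1)
    log_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * (T * T)
    # Hard-label term against the one-hot reference labels.
    ce = F.cross_entropy(student_logits.reshape(-1, student_logits.size(-1)), gold_ids.reshape(-1))
    return alpha * kd + (1 - alpha) * ce


# Toy usage with random logits; noise could be injected by perturbing either
# model's inputs or logits before computing the loss.
student = torch.randn(2, 5, 100, requires_grad=True)
teacher = torch.randn(2, 5, 100)
gold = torch.randint(0, 100, (2, 5))
distillation_loss(student, teacher, gold).backward()
```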
MLMLM: Link Prediction with Mean Likelihood Masked Language Model | Knowledge Bases (KBs) are easy to query, verifiable, and interpretable. They
however scale with man-hours and high-quality data. Masked Language Models
(MLMs), such as BERT, scale with computing power as well as unstructured raw
text data. The knowledge contained within those models is however not directly
interpretable. We propose to perform link prediction with MLMs to address both
the KBs' scalability issues and the MLMs' interpretability issues. To do that, we
introduce MLMLM, Mean Likelihood Masked Language Model, an approach comparing
the mean likelihood of generating the different entities to perform link
prediction in a tractable manner. We obtain State of the Art (SotA) results on
the WN18RR dataset and the best non-entity-embedding based results on the
FB15k-237 dataset. We also obtain convincing results on link prediction on
previously unseen entities, making MLMLM a suitable approach to introducing new
entities to a KB.
| 2,020 | Computation and Language |
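To illustrate the mean-likelihood idea in the entry above, the sketch below scores candidate tail entities by the mean log-probability a masked LM assigns to their tokens at masked positions; the prompt format, the bert-base-uncased backbone, and the candidate handling are simplifying assumptions, not MLMLM's actual setup.

```python
# Sketch of scoring candidate entities by the mean log-likelihood their tokens
# receive at masked positions of a masked language model.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()


def mean_entity_score(head: str, relation: str, candidate: str) -> float:
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    masks = " ".join([tokenizer.mask_token] * len(cand_ids))
    enc = tokenizer(f"{head} {relation} {masks}.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]                     # (seq, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    mask_positions = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    # Mean log-likelihood of the candidate's tokens at the masked slots.
    scores = [log_probs[pos, tok].item() for pos, tok in zip(mask_positions, cand_ids)]
    return sum(scores) / len(scores)


# Toy ranking of two candidate tail entities for a (head, relation) query.
print(mean_entity_score("Paris", "is the capital of", "France"))
print(mean_entity_score("Paris", "is the capital of", "Germany"))
```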
Multi-Referenced Training for Dialogue Response Generation | In open-domain dialogue response generation, a dialogue context can be
continued with diverse responses, and the dialogue models should capture such
one-to-many relations. In this work, we first analyze the training objective of
dialogue models from the view of Kullback-Leibler divergence (KLD) and show
that the gap between the real world probability distribution and the
single-referenced data's probability distribution prevents the model from
learning the one-to-many relations efficiently. Then we explore approaches to
multi-referenced training in two aspects. Data-wise, we generate diverse pseudo
references from a powerful pretrained model to build multi-referenced data that
provides a better approximation of the real-world distribution. Model-wise, we
propose to equip variational models with an expressive prior, named linear
Gaussian model (LGM). Experimental results of automated evaluation and human
evaluation show that the methods yield significant improvements over baselines.
We will release our code and data in
https://github.com/ZHAOTING/dialog-processing.
| 2,020 | Computation and Language |
It's Not Just Size That Matters: Small Language Models Are Also Few-Shot
Learners | When scaled to hundreds of billions of parameters, pretrained language models
such as GPT-3 (Brown et al., 2020) achieve remarkable few-shot performance.
However, enormous amounts of compute are required for training and applying
such big models, resulting in a large carbon footprint and making it difficult
for researchers and practitioners to use them. We show that performance similar
to GPT-3 can be obtained with language models that are much "greener" in that
their parameter count is several orders of magnitude smaller. This is achieved
by converting textual inputs into cloze questions that contain a task
description, combined with gradient-based optimization; exploiting unlabeled
data gives further improvements. We identify key factors required for
successful natural language understanding with small language models.
| 2,021 | Computation and Language |
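A small sketch of the cloze-style reformulation described in the entry above: a classification input is rewritten into a pattern containing a mask token, and label words are scored with a masked LM; the pattern, the verbalizer, and the roberta-base backbone are illustrative assumptions.

```python
# Sketch of converting a sentiment example into a cloze question and scoring
# label words with a masked LM (pattern-based few-shot classification).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

VERBALIZER = {"positive": "great", "negative": "terrible"}  # label -> single word


def classify(review: str) -> str:
    # Pattern: append a cloze statement containing the mask token.
    enc = tokenizer(f"{review} It was {tokenizer.mask_token}.", return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    scores = {}
    for label, word in VERBALIZER.items():
        word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
        scores[label] = logits[pos, word_id].item()
    return max(scores, key=scores.get)


print(classify("The movie was a complete waste of time."))
```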
Improving Joint Layer RNN based Keyphrase Extraction by Using
Syntactical Features | Keyphrase extraction, the task of identifying important words or phrases in a
text, is a crucial process for identifying main topics when analyzing texts from a
social media platform. In our study, we focus on text written in the Indonesian
language, taken from Twitter. Different from the original joint layer recurrent
neural network (JRNN), which outputs a single sequence of keywords and uses only
word embeddings, here we propose to modify the input layer of JRNN to extract
more than one sequence of keywords using additional syntactical
Since JRNN in general requires a large amount of data as the training examples
and creating those examples is expensive, we used a data augmentation method to
increase the number of training examples. Our experiments showed that our
method outperformed the baseline methods, achieving 0.9597 accuracy
and 0.7691 F1.
| 2,019 | Computation and Language |
Cascaded Semantic and Positional Self-Attention Network for Document
Classification | Transformers have shown great success in learning representations for
language modelling. However, an open challenge still remains on how to
systematically aggregate semantic information (word embedding) with positional
(or temporal) information (word orders). In this work, we propose a new
architecture to aggregate the two sources of information using cascaded
semantic and positional self-attention network (CSPAN) in the context of
document classification. The CSPAN uses a semantic self-attention layer
cascaded with Bi-LSTM to process the semantic and positional information in a
sequential manner, and then adaptively combines them through a residual
connection. Compared with commonly used positional encoding schemes, CSPAN can
exploit the interaction between semantics and word positions in a more
interpretable and adaptive manner, and the classification performance can be
notably improved while simultaneously preserving a compact model size and high
convergence rate. We evaluate the CSPAN model on several benchmark data sets
for document classification with careful ablation studies, and demonstrate the
encouraging results compared with state of the art.
| 2,020 | Computation and Language |
Multimodal Joint Attribute Prediction and Value Extraction for
E-commerce Product | Product attribute values are essential in many e-commerce scenarios, such as
customer service robots, product recommendations, and product retrieval. However,
in the real world, the attribute values of a product are usually incomplete and
vary over time, which greatly hinders practical applications. In this
paper, we propose a multimodal method to jointly predict product attributes and
extract values from textual product descriptions with the help of the product
images. We argue that product attributes and values are highly correlated,
e.g., it will be easier to extract the values on condition that the product
attributes are given. Thus, we jointly model the attribute prediction and value
extraction tasks from multiple aspects towards the interactions between
attributes and values. Moreover, product images have distinct effects on our
tasks for different product attributes and values. Thus, we selectively draw
useful visual information from product images to enhance our model. We annotate
a multimodal product attribute value dataset that contains 87,194 instances,
and the experimental results on this dataset demonstrate that explicitly
modeling the relationship between attributes and values facilitates our method
to establish the correspondence between them, and selectively utilizing visual
product information is necessary for the task. Our code and dataset will be
released to the public.
| 2,020 | Computation and Language |
Iterative Refinement in the Continuous Space for Non-Autoregressive
Neural Machine Translation | We propose an efficient inference procedure for non-autoregressive machine
translation that iteratively refines translation purely in the continuous
space. Given a continuous latent variable model for machine translation (Shu et
al., 2020), we train an inference network to approximate the gradient of the
marginal log probability of the target sentence, using only the latent variable
as input. This allows us to use gradient-based optimization to find the target
sentence at inference time that approximately maximizes its marginal
probability. As each refinement step only involves computation in the latent
space of low dimensionality (we use 8 in our experiments), we avoid
computational overhead incurred by existing non-autoregressive inference
procedures that often refine in token space. We compare our approach to a
recently proposed EM-like inference procedure (Shu et al., 2020) that optimizes
in a hybrid space, consisting of both discrete and continuous variables. We
evaluate our approach on WMT'14 En-De, WMT'16 Ro-En and IWSLT'16 De-En, and
observe two advantages over the EM-like inference: (1) it is computationally
efficient, i.e. each refinement step is twice as fast, and (2) it is more
effective, resulting in higher marginal probabilities and BLEU scores with the
same number of refinement steps. On WMT'14 En-De, for instance, our approach is
able to decode 6.2 times faster than the autoregressive model with minimal
degradation to translation quality (0.9 BLEU).
| 2,020 | Computation and Language |
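The sketch below illustrates the idea of refining a low-dimensional latent with updates predicted by an inference network before decoding, as described in the entry above; `infer_net`, the number of steps, and the step size are hypothetical placeholders, and the decoder that maps the refined latent to target tokens is not shown.

```python
# Sketch of latent-space iterative refinement: an inference network predicts a
# gradient-like update direction for the latent, applied for a few steps.
import torch
import torch.nn as nn

latent_dim = 8  # a low-dimensional latent space (8 is the value quoted in the abstract)

infer_net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, latent_dim))


def refine(z: torch.Tensor, steps: int = 4, step_size: float = 0.5) -> torch.Tensor:
    """Apply a few delta updates predicted by the inference network."""
    for _ in range(steps):
        delta = infer_net(z)        # approximates d log p(y|x) / d z
        z = z + step_size * delta   # gradient-ascent-like move in latent space
    return z


z0 = torch.randn(1, latent_dim)     # initial latent draw for one target sentence
z_refined = refine(z0)
# A decoder (not shown) would then map z_refined to target tokens in one shot.
```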
Critical Thinking for Language Models | This paper takes a first step towards a critical thinking curriculum for
neural auto-regressive language models. We introduce a synthetic corpus of
deductively valid arguments, and generate artificial argumentative texts to
train and evaluate GPT-2. Significant transfer learning effects can be
observed: Training a model on three simple core schemes allows it to accurately
complete conclusions of different, and more complex types of arguments, too.
The language models generalize the core argument schemes in a correct way.
Moreover, we obtain consistent and promising results for NLU benchmarks. In
particular, pre-training on the argument schemes raises zero-shot accuracy on
the GLUE diagnostics by up to 15 percentage points. The findings suggest that
intermediary pre-training on texts that exemplify basic reasoning abilities
(such as typically covered in critical thinking textbooks) might help language
models to acquire a broad range of reasoning skills. The synthetic
argumentative texts presented in this paper are a promising starting point for
building such a "critical thinking curriculum for language models."
| 2,020 | Computation and Language |
Event Presence Prediction Helps Trigger Detection Across Languages | The task of event detection and classification is central to most information
retrieval applications. We show that a Transformer based architecture can
effectively model event extraction as a sequence labeling task. We propose a
combination of sentence level and token level training objectives that
significantly boosts the performance of a BERT based event extraction model.
Our approach achieves a new state-of-the-art performance on ACE 2005 data for
English and Chinese. We also test our model on ERE Spanish, achieving an
average gain of 2 absolute F1 points over the prior best-performing model.
| 2,020 | Computation and Language |
Lessons Learned from Applying off-the-shelf BERT: There is no Silver
Bullet | One of the challenges in the NLP field is training large classification
models, a task that is both difficult and tedious. It is even harder when GPU
hardware is unavailable. The increased availability of pre-trained and
off-the-shelf word embeddings, models, and modules aims at easing the process of
training large models and achieving a competitive performance. We explore the
use of off-the-shelf BERT models and share the results of our experiments and
compare their results to those of LSTM networks and more simple baselines. We
show that the complexity and computational cost of BERT is not a guarantee for
enhanced predictive performance in the classification tasks at hand.
| 2,020 | Computation and Language |
A Systematic Characterization of Sampling Algorithms for Open-ended
Language Generation | This work studies the widely adopted ancestral sampling algorithms for
auto-regressive language models, which are not widely studied in the literature.
We use the quality-diversity (Q-D) trade-off to investigate three popular
sampling algorithms (top-k, nucleus and tempered sampling). We focus on the
task of open-ended language generation. We first show that the existing
sampling algorithms have similar performance. After carefully inspecting the
transformations defined by different sampling algorithms, we identify three key
properties that are shared among them: entropy reduction, order preservation,
and slope preservation. To validate the importance of the identified
properties, we design two sets of new sampling algorithms: one set in which
each algorithm satisfies all three properties, and one set in which each
algorithm violates at least one of the properties. We compare their performance
with existing sampling algorithms, and find that violating the identified
properties could lead to drastic performance degradation, as measured by the
Q-D trade-off. On the other hand, we find that the set of sampling algorithms
that satisfies these properties performs on par with the existing sampling
algorithms. Our data and code are available at
https://github.com/moinnadeem/characterizing-sampling-algorithms
| 2,020 | Computation and Language |
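For reference, below are compact implementations of the three sampling transformations the entry above analyzes (tempered, top-k, and nucleus sampling), applied to a single vector of next-token logits; parameter values are illustrative.

```python
# Reference implementations of tempered, top-k, and nucleus (top-p) sampling
# transformations over next-token logits.
import torch


def tempered(logits: torch.Tensor, temperature: float = 0.8) -> torch.Tensor:
    return torch.softmax(logits / temperature, dim=-1)


def top_k(logits: torch.Tensor, k: int = 50) -> torch.Tensor:
    topk_vals, topk_idx = logits.topk(k)
    probs = torch.zeros_like(logits)
    probs[topk_idx] = torch.softmax(topk_vals, dim=-1)   # renormalize over the k best
    return probs


def nucleus(logits: torch.Tensor, p: float = 0.9) -> torch.Tensor:
    sorted_logits, sorted_idx = logits.sort(descending=True)
    sorted_probs = torch.softmax(sorted_logits, dim=-1)
    cumulative = sorted_probs.cumsum(dim=-1)
    keep = cumulative - sorted_probs < p                 # smallest set with mass >= p
    probs = torch.zeros_like(logits)
    probs[sorted_idx[keep]] = sorted_probs[keep] / sorted_probs[keep].sum()
    return probs


logits = torch.randn(100)                                # toy vocabulary of 100 tokens
next_token = torch.multinomial(nucleus(logits), num_samples=1)
```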
Autoregressive Knowledge Distillation through Imitation Learning | The performance of autoregressive models on natural language generation tasks
has dramatically improved due to the adoption of deep, self-attentive
architectures. However, these gains have come at the cost of hindering
inference speed, making state-of-the-art models cumbersome to deploy in
real-world, time-sensitive settings. We develop a compression technique for
autoregressive models that is driven by an imitation learning perspective on
knowledge distillation. The algorithm is designed to address the exposure bias
problem. On prototypical language generation tasks such as translation and
summarization, our method consistently outperforms other distillation
algorithms, such as sequence-level knowledge distillation. Student models
trained with our method attain 1.4 to 4.8 BLEU/ROUGE points higher than those
trained from scratch, while increasing inference speed by up to 14 times in
comparison to the teacher model.
| 2,020 | Computation and Language |
Simultaneous Machine Translation with Visual Context | Simultaneous machine translation (SiMT) aims to translate a continuous input
text stream into another language with the lowest latency and highest quality
possible. The translation thus has to start with an incomplete source text,
which is read progressively, creating the need for anticipation. In this paper,
we seek to understand whether the addition of visual information can compensate
for the missing source context. To this end, we analyse the impact of different
multimodal approaches and visual features on state-of-the-art SiMT frameworks.
Our results show that visual context is helpful and that visually-grounded
models based on explicit object region information are much better than
commonly used global features, reaching up to 3 BLEU points improvement under
low latency scenarios. Our qualitative analysis illustrates cases where only
the multimodal systems are able to translate correctly from English into
gender-marked languages, as well as deal with differences in word order, such
as adjective-noun placement between English and French.
| 2,020 | Computation and Language |
Cascaded Models for Better Fine-Grained Named Entity Recognition | Named Entity Recognition (NER) is an essential precursor task for many
natural language applications, such as relation extraction or event extraction.
Much of the NER research has been done on datasets with few classes of entity
types (e.g. PER, LOC, ORG, MISC), but many real world applications (disaster
relief, complex event extraction, law enforcement) can benefit from a larger
NER typeset. More recently, datasets were created that have hundreds to
thousands of types of entities, sparking new lines of research (Sekine,
2008; Ling and Weld, 2012; Gillick et al., 2014; Choi et al., 2018). In this
paper we present a cascaded approach to labeling fine-grained NER, applying it to
a newly released fine-grained NER dataset that was used in the TAC KBP 2019
evaluation (Ji et al., 2019), inspired by the fact that training data is
available for some of the coarse labels. Using a combination of transformer
networks, we show that performance can be improved by about 20 F1 absolute, as
compared with the straightforward model built on the full fine-grained types,
and show that, surprisingly, using coarse-labeled data in three languages leads
to an improvement in the English data.
| 2,020 | Computation and Language |
An information theoretic view on selecting linguistic probes | There is increasing interest in assessing the linguistic knowledge encoded in
neural representations. A popular approach is to attach a diagnostic classifier
-- or "probe" -- to perform supervised classification from internal
representations. However, how to select a good probe is in debate. Hewitt and
Liang (2019) showed that a high performance on diagnostic classification itself
is insufficient, because it can be attributed to either "the representation
being rich in knowledge", or "the probe learning the task", which Pimentel et
al. (2020) challenged. We show this dichotomy is valid
information-theoretically. In addition, we find that the methods to construct
and select good probes proposed by the two papers, *control task* (Hewitt and
Liang, 2019) and *control function* (Pimentel et al., 2020), are equivalent --
the errors of their approaches are identical (modulo irrelevant terms).
Empirically, these two selection criteria lead to results that highly agree
with each other.
| 2,020 | Computation and Language |
Fast semantic parsing with well-typedness guarantees | AM dependency parsing is a linguistically principled method for neural
semantic parsing with high accuracy across multiple graphbanks. It relies on a
type system that models semantic valency but makes existing parsers slow. We
describe an A* parser and a transition-based parser for AM dependency parsing
which guarantee well-typedness and improve parsing speed by up to 3 orders of
magnitude, while maintaining or improving accuracy.
| 2,020 | Computation and Language |
Domain Knowledge Empowered Structured Neural Net for End-to-End Event
Temporal Relation Extraction | Extracting event temporal relations is a critical task for information
extraction and plays an important role in natural language understanding. Prior
systems leverage deep learning and pre-trained language models to improve the
performance of the task. However, these systems often suffer from two
shortcomings: 1) when performing maximum a posteriori (MAP) inference based on
neural models, previous systems only used structured knowledge that is assumed
to be absolutely correct, i.e., hard constraints; and 2) biased predictions on
dominant temporal relations when training with a limited amount of data. To
address these issues, we propose a framework that enhances deep neural network
with distributional constraints constructed by probabilistic domain knowledge.
We solve the constrained inference problem via Lagrangian Relaxation and apply
it on end-to-end event temporal relation extraction tasks. Experimental results
show our framework is able to improve the baseline neural network models with
strong statistical significance on two widely used datasets in news and
clinical domains.
| 2,020 | Computation and Language |
Multi-span Style Extraction for Generative Reading Comprehension | Generative machine reading comprehension (MRC) requires a model to generate
well-formed answers. For this type of MRC, the answer generation method is crucial
to model performance. However, generative models, which are supposed to be
the right fit for the task, generally perform poorly. At the same time,
single-span extraction models have been proven effective for extractive MRC,
where the answer is constrained to a single span in the passage. Nevertheless,
they generally suffer from generating incomplete answers or introducing
redundant words when applied to the generative MRC. Thus, we extend the
single-span extraction method to multi-span, proposing a new framework which
enables generative MRC to be smoothly solved as multi-span extraction. Thorough
experiments demonstrate that this novel approach can alleviate the dilemma
between generative models and single-span models and produce answers with
better-formed syntax and semantics.
| 2,020 | Computation and Language |
Pardon the Interruption: An Analysis of Gender and Turn-Taking in U.S.
Supreme Court Oral Arguments | This study presents a corpus of turn changes between speakers in U.S. Supreme
Court oral arguments. Each turn change is labeled on a spectrum of
"cooperative" to "competitive" by a human annotator with legal experience in
the United States. We analyze the relationship between speech features, the
nature of exchanges, and the gender and legal role of the speakers. Finally, we
demonstrate that the models can be used to predict the label of an exchange
with moderate success. The automatic classification of the nature of exchanges
indicates that future studies of turn-taking in oral arguments can rely on
larger, unlabeled corpora.
| 2,020 | Computation and Language |
Grounded Adaptation for Zero-shot Executable Semantic Parsing | We propose Grounded Adaptation for Zero-shot Executable Semantic Parsing
(GAZP) to adapt an existing semantic parser to new environments (e.g. new
database schemas). GAZP combines a forward semantic parser with a backward
utterance generator to synthesize data (e.g. utterances and SQL queries) in the
new environment, then selects cycle-consistent examples to adapt the parser.
Unlike data-augmentation, which typically synthesizes unverified examples in
the training environment, GAZP synthesizes examples in the new environment
whose input-output consistency is verified. On the Spider, Sparc, and CoSQL
zero-shot semantic parsing tasks, GAZP improves logical form and execution
accuracy of the baseline parser. Our analyses show that GAZP outperforms
data-augmentation in the training environment, performance increases with the
amount of GAZP-synthesized data, and cycle-consistency is central to successful
adaptation.
| 2,021 | Computation and Language |
Arabic Opinion Mining Using a Hybrid Recommender System Approach | Recommender systems nowadays are playing an important role in the delivery of
services and information to users. Sentiment analysis (also known as opinion
mining) is the process of determining the attitude of textual opinions, whether
they are positive, negative or neutral. Data sparsity represents a big
issue for recommender systems because of insufficient user ratings or the
absence of data about users or items. This research proposes a hybrid approach
combining sentiment analysis and recommender systems to tackle the data
sparsity problem by predicting product ratings from user reviews
using text mining and NLP techniques. This research focuses especially on
Arabic reviews, where the model is evaluated using Opinion Corpus for Arabic
(OCA) dataset. Our system was efficient, showing a good accuracy of
nearly 85 percent in predicting ratings from reviews.
| 2,022 | Computation and Language |
Asking Complex Questions with Multi-hop Answer-focused Reasoning | Asking questions from natural language text has attracted increasing
attention recently, and several schemes have been proposed with promising
results by choosing the right question words and copying relevant words from the
input to the question. However, most state-of-the-art methods focus on asking
simple questions involving single-hop relations. In this paper, we propose a
new task called multihop question generation that asks complex and semantically
relevant questions by additionally discovering and modeling the multiple
entities and their semantic relations given a collection of documents and the
corresponding answer. To solve the problem, we propose multi-hop
answer-focused reasoning on the grounded answer-centric entity graph to include
different granularity levels of semantic information including the word-level
and document-level semantics of the entities and their semantic relations.
Through extensive experiments on the HOTPOTQA dataset, we demonstrate the
superiority and effectiveness of our proposed model that serves as a baseline
to motivate future work.
| 2,020 | Computation and Language |
Tag and Correct: Question aware Open Information Extraction with
Two-stage Decoding | Question Aware Open Information Extraction (Question aware Open IE) takes
question and passage as inputs, outputting an answer tuple which contains a
subject, a predicate, and one or more arguments. Each field of answer is a
natural language word sequence and is extracted from the passage. The
semi-structured answer has two advantages over a span answer: it is more readable
and more falsifiable. There are two approaches to solving this
problem. One is an extractive method which extracts candidate answers from the
passage with the Open IE model, and ranks them by matching with questions. It
fully uses the passage information at the extraction step, but the extraction
is independent of the question. The other one is a generative method which
uses a sequence to sequence model to generate answers directly. It combines the
question and passage as input at the same time, but it generates the answer
from scratch, which does not use the fact that most of the answer words come
from the passage. To guide the generation with the passage, we present a two-stage
decoding model which contains a tagging decoder and a correction decoder. At
the first stage, the tagging decoder will tag keywords from the passage. At the
second stage, the correction decoder will generate answers based on tagged
keywords. Our model could be trained end-to-end although it has two stages.
Compared to previous generative models, we generate better answers by
generating them in a coarse-to-fine manner. We evaluate our model on WebAssertions (Yan et al.,
2018) which is a Question aware Open IE dataset. Our model achieves a BLEU
score of 59.32, which is better than previous generative methods.
| 2,020 | Computation and Language |
Retrofitting Structure-aware Transformer Language Model for End Tasks | We consider retrofitting a structure-aware Transformer-based language model for
facilitating end tasks by proposing to exploit syntactic distance to encode
both the phrasal constituency and dependency connection into the language
model. A middle-layer structural learning strategy is leveraged for structure
integration, accomplished with main semantic task training under a multi-task
learning scheme. Experimental results show that the retrofitted structure-aware
Transformer language model achieves improved perplexity, meanwhile inducing
accurate syntactic phrases. By performing structure-aware fine-tuning, our
model achieves significant improvements for both semantic- and
syntactic-dependent tasks.
| 2,020 | Computation and Language |
Mimic and Conquer: Heterogeneous Tree Structure Distillation for
Syntactic NLP | Syntax has been shown useful for various NLP tasks, while existing work
mostly encodes a singleton syntactic tree using one hierarchical neural network.
In this paper, we investigate a simple and effective method, Knowledge
Distillation, to integrate heterogeneous structure knowledge into a unified
sequential LSTM encoder. Experimental results on four typical syntax-dependent
tasks show that our method outperforms tree encoders by effectively integrating
rich heterogeneous structure syntax, meanwhile reducing error propagation, and
also outperforms ensemble methods, in terms of both the efficiency and
accuracy.
| 2,020 | Computation and Language |
Answering Any-hop Open-domain Questions with Iterative Document
Reranking | Existing approaches for open-domain question answering (QA) are typically
designed for questions that require either single-hop or multi-hop reasoning,
which make strong assumptions of the complexity of questions to be answered.
Also, multi-step document retrieval often incurs a higher number of relevant but
non-supporting documents, which dampens the downstream noise-sensitive reader
module for answer extraction. To address these challenges, we propose a unified
QA framework to answer any-hop open-domain questions, which iteratively
retrieves, reranks and filters documents, and adaptively determines when to
stop the retrieval process. To improve the retrieval accuracy, we propose a
graph-based reranking model that performs multi-document interaction as the core
of our iterative reranking framework. Our method consistently achieves
performance comparable to or better than the state-of-the-art on both
single-hop and multi-hop open-domain QA datasets, including Natural Questions
Open, SQuAD Open, and HotpotQA.
| 2,021 | Computation and Language |
Solomon at SemEval-2020 Task 11: Ensemble Architecture for Fine-Tuned
Propaganda Detection in News Articles | This paper describes our system (Solomon) details and results of
participation in the SemEval 2020 Task 11 "Detection of Propaganda Techniques
in News Articles"\cite{DaSanMartinoSemeval20task11}. We participated in Task
"Technique Classification" (TC) which is a multi-class classification task. To
address the TC task, we used RoBERTa based transformer architecture for
fine-tuning on the propaganda dataset. The predictions of RoBERTa were further
fine-tuned by class-dependent-minority-class classifiers. A special classifier,
which employs a dynamically adapted Longest Common Subsequence algorithm, is used
to adapt to the intricacies of the repetition class. Compared to the other
participating systems, our submission is ranked 4th on the leaderboard.
| 2,020 | Computation and Language |
Unsupervised Summarization by Jointly Extracting Sentences and Keywords | We present RepRank, an unsupervised graph-based ranking model for extractive
multi-document summarization in which the similarity between words, sentences,
and word-to-sentence can be estimated by the distances between their vector
representations in a unified vector space. In order to obtain desirable
representations, we propose a self-attention based learning method that
represents a sentence as the weighted sum of its word embeddings, with the
weights concentrated on those words that hopefully better reflect the content
of a document. We show that salient sentences and keywords can be extracted in
a joint and mutual reinforcement process using our learned representations, and
prove that this process always converges to a unique solution leading to
improvement in performance. A variant of absorbing random walk and the
corresponding sampling-based algorithm are also described to avoid redundancy
and increase diversity in the summaries. Experiment results with multiple
benchmark datasets show that RepRank achieved the best or comparable
performance in ROUGE.
| 2,023 | Computation and Language |
Graph-to-Sequence Neural Machine Translation | Neural machine translation (NMT) usually works in a seq2seq learning way by
viewing either source or target sentence as a linear sequence of words, which
can be regarded as a special case of graph, taking words in the sequence as
nodes and relationships between words as edges. In light of the fact that current NMT
models more or less capture graph information within the sequence in a latent
way, we present a graph-to-sequence model that facilitates explicit graph
information capture. In detail, we propose a graph-based, SAN-based NMT model
called Graph-Transformer that captures information of subgraphs of different
orders in every layer. Subgraphs are put into different groups according to
their orders, and each group of subgraphs reflects a different level of
dependency between words. To fuse subgraph representations, we
empirically explore three methods which weight different groups of subgraphs of
different orders. Results of experiments on WMT14 English-German and IWSLT14
German-English show that our method can effectively boost the Transformer with
an improvement of 1.1 BLEU points on WMT14 English-German dataset and 1.0 BLEU
points on IWSLT14 German-English dataset.
| 2,020 | Computation and Language |
Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for
Post-Hoc Interpretability | Recent years have witnessed an increasing number of interpretation methods
being developed for improving the transparency of NLP models. Meanwhile,
researchers also try to answer the question of whether the obtained
interpretation is faithful in explaining the mechanisms behind model predictions.
Specifically, Jain and Wallace (2019) propose that "attention is not
explanation" by comparing attention interpretation with gradient alternatives.
However, this raises a new question: can we safely pick one interpretation
method as the ground truth? If not, on what basis can we compare different
interpretation methods? In this work, we propose that it is crucial to have a
concrete definition of interpretation before we could evaluate faithfulness of
an interpretation. The definition will affect both the algorithm to obtain
interpretation and, more importantly, the metric used in evaluation. Through
both theoretical and experimental analysis, we find that although
interpretation methods perform differently under a certain evaluation metric,
such a difference may not result from interpretation quality or faithfulness,
but rather the inherent bias of the evaluation metric.
| 2,020 | Computation and Language |
Contextualized Perturbation for Textual Adversarial Attack | Adversarial examples expose the vulnerabilities of natural language
processing (NLP) models, and can be used to evaluate and improve their
robustness. Existing techniques of generating such examples are typically
driven by local heuristic rules that are agnostic to the context, often
resulting in unnatural and ungrammatical outputs. This paper presents CLARE, a
ContextuaLized AdversaRial Example generation model that produces fluent and
grammatical outputs through a mask-then-infill procedure. CLARE builds on a
pre-trained masked language model and modifies the inputs in a context-aware
manner. We propose three contextualized perturbations, Replace, Insert and
Merge, allowing for generating outputs of varied lengths. With a richer range
of available strategies, CLARE is able to attack a victim model more
efficiently with fewer edits. Extensive experiments and human evaluation
demonstrate that CLARE outperforms the baselines in terms of attack success
rate, textual similarity, fluency and grammaticality.
| 2,021 | Computation and Language |
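A simplified sketch of the mask-then-infill idea behind the Replace perturbation described in the entry above: one position is masked and a masked LM proposes contextual substitutes; the similarity constraints and the attack loop against a victim model that CLARE uses are omitted, and the backbone choice is an assumption.

```python
# Sketch of a context-aware "Replace" perturbation: mask one token and infill it
# with a masked LM's top alternatives (excluding the original word).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()


def replace_candidates(words, position, top_k=5):
    original = words[position]
    masked = words[:position] + [tokenizer.mask_token] + words[position + 1:]
    enc = tokenizer(" ".join(masked), return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]
    pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    # Top predictions at the masked slot, skipping the original word itself.
    candidate_ids = logits[pos].topk(top_k + 1).indices.tolist()
    candidates = [tokenizer.decode([i]).strip() for i in candidate_ids]
    return [c for c in candidates if c.lower() != original.lower()][:top_k]


print(replace_candidates("the movie was absolutely wonderful".split(), position=3))
```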
Minimize Exposure Bias of Seq2Seq Models in Joint Entity and Relation
Extraction | Joint entity and relation extraction aims to extract relation triplets from
plain text directly. Prior work leverages Sequence-to-Sequence (Seq2Seq) models
for triplet sequence generation. However, Seq2Seq enforces an unnecessary order
on the unordered triplets and involves a large decoding length associated with
error accumulation. These introduce exposure bias, which may cause the models
to overfit to frequent label combinations, thus deteriorating
generalization. We propose a novel Sequence-to-Unordered-Multi-Tree
(Seq2UMTree) model to minimize the effects of exposure bias by limiting the
decoding length to three within a triplet and removing the order among
triplets. We evaluate our model on two datasets, DuIE and NYT, and
systematically study how exposure bias alters the performance of Seq2Seq
models. Experiments show that the state-of-the-art Seq2Seq model overfits to
both datasets while Seq2UMTree shows significantly better generalization. Our
code is available at https://github.com/WindChimeRan/OpenJERE .
| 2,020 | Computation and Language |
Group-wise Contrastive Learning for Neural Dialogue Generation | Neural dialogue response generation has gained much popularity in recent
years. Maximum Likelihood Estimation (MLE) objective is widely adopted in
existing dialogue model learning. However, models trained with MLE objective
function are plagued by the low-diversity issue when it comes to the
open-domain conversational setting. Inspired by the observation that humans not
only learn from the positive signals but also benefit from correcting behaviors
of undesirable actions, in this work, we introduce contrastive learning into
dialogue generation, where the model explicitly perceives the difference
between the well-chosen positive and negative utterances. Specifically, we
employ a pretrained baseline model as a reference. During contrastive learning,
the target dialogue model is trained to give higher conditional probabilities
for the positive samples, and lower conditional probabilities for those
negative samples, compared to the reference model. To manage the multi-mapping
relations that prevail in human conversation, we augment contrastive dialogue
learning with group-wise dual sampling. Extensive experimental results show
that the proposed group-wise contrastive learning framework is suited for
training a wide range of neural dialogue generation models with very favorable
performance over the baseline training approaches.
| 2,020 | Computation and Language |
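A minimal sketch of the contrastive idea in the entry above: the target dialogue model is pushed to beat a frozen reference model on positive responses and fall below it on negatives; the margin-based form and the reduction of group-wise dual sampling to a single positive/negative pair are simplifications for illustration.

```python
# Sketch of a contrastive objective defined relative to a frozen reference model.
import torch
import torch.nn.functional as F


def contrastive_loss(target_logp_pos, ref_logp_pos, target_logp_neg, ref_logp_neg, margin=1.0):
    """Each argument is the (scalar) conditional log-probability of a response
    under the target model or the frozen reference model."""
    pos_gap = target_logp_pos - ref_logp_pos    # should be large: improve on the reference
    neg_gap = target_logp_neg - ref_logp_neg    # should be small: fall below the reference
    return F.relu(margin - pos_gap).mean() + F.relu(margin + neg_gap).mean()


# Toy values: the target model already prefers the positive response enough,
# so both hinge terms are inactive and the loss is zero.
loss = contrastive_loss(torch.tensor(-2.0), torch.tensor(-3.0),
                        torch.tensor(-6.0), torch.tensor(-4.0))
```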
UNION: An Unreferenced Metric for Evaluating Open-ended Story Generation | Despite the success of existing referenced metrics (e.g., BLEU and
MoverScore), they correlate poorly with human judgments for open-ended text
generation including story or dialog generation because of the notorious
one-to-many issue: there are many plausible outputs for the same input, which
may differ substantially in literal or semantics from the limited number of
given references. To alleviate this issue, we propose UNION, a learnable
unreferenced metric for evaluating open-ended story generation, which measures
the quality of a generated story without any reference. Built on top of BERT,
UNION is trained to distinguish human-written stories from negative samples and
recover the perturbation in negative stories. We propose an approach of
constructing negative samples by mimicking the errors commonly observed in
existing NLG models, including repeated plots, conflicting logic, and
long-range incoherence. Experiments on two story datasets demonstrate that
UNION is a reliable measure for evaluating the quality of generated stories,
which correlates better with human judgments and is more generalizable than
existing state-of-the-art metrics.
| 2,020 | Computation and Language |
Reusing a Pretrained Language Model on Languages with Limited Corpora
for Unsupervised NMT | Using a language model (LM) pretrained on two languages with large
monolingual data in order to initialize an unsupervised neural machine
translation (UNMT) system yields state-of-the-art results. When limited data is
available for one language, however, this method leads to poor translations. We
present an effective approach that reuses an LM that is pretrained only on the
high-resource language. The monolingual LM is fine-tuned on both languages and
is then used to initialize a UNMT model. To reuse the pretrained LM, we have to
modify its predefined vocabulary, to account for the new language. We therefore
propose a novel vocabulary extension method. Our approach, RE-LM, outperforms a
competitive cross-lingual pretraining model (XLM) in English-Macedonian (En-Mk)
and English-Albanian (En-Sq), yielding more than +8.3 BLEU points for all four
translation directions.
| 2,020 | Computation and Language |
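A short sketch of reusing a high-resource pretrained LM for a new language by extending its vocabulary and resizing the embedding matrix before fine-tuning, using standard Hugging Face calls; the added tokens and the roberta-base backbone are placeholders, and RE-LM's actual extension method may differ.

```python
# Sketch of vocabulary extension before fine-tuning a monolingual LM on a new language.
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")   # high-resource-language LM
model = AutoModelForMaskedLM.from_pretrained("roberta-base")

new_language_tokens = ["здраво", "благодарам"]              # e.g. frequent Macedonian tokens
num_added = tokenizer.add_tokens(new_language_tokens)
model.resize_token_embeddings(len(tokenizer))               # new rows are randomly initialized

print(f"Added {num_added} tokens; embedding matrix now has {len(tokenizer)} rows.")
# Fine-tuning on monolingual data of both languages would follow, after which the
# LM would initialize the encoder/decoder of an unsupervised NMT model.
```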
Neural Dialogue State Tracking with Temporally Expressive Networks | Dialogue state tracking (DST) is an important part of a spoken dialogue
system. Existing DST models either ignore temporal feature dependencies across
dialogue turns or fail to explicitly model temporal state dependencies in a
dialogue. In this work, we propose Temporally Expressive Networks (TEN) to
jointly model the two types of temporal dependencies in DST. The TEN model
utilizes the power of recurrent networks and probabilistic graphical models.
Evaluating on standard datasets, TEN is demonstrated to be effective in
improving the accuracy of turn-level-state prediction and the state
aggregation.
| 2,020 | Computation and Language |
Parallel Interactive Networks for Multi-Domain Dialogue State Generation | The dependencies between system and user utterances in the same turn and
across different turns are not fully considered in existing multi-domain
dialogue state tracking (MDST) models. In this study, we argue that the
incorporation of these dependencies is crucial for the design of MDST and
propose Parallel Interactive Networks (PIN) to model these dependencies.
Specifically, we integrate an interactive encoder to jointly model the in-turn
dependencies and cross-turn dependencies. The slot-level context is introduced
to extract more expressive features for different slots. And a distributed copy
mechanism is utilized to selectively copy words from historical system
utterances or historical user utterances. Empirical studies demonstrated the
superiority of the proposed PIN model.
| 2,020 | Computation and Language |
Reasoning about Goals, Steps, and Temporal Ordering with WikiHow | We propose a suite of reasoning tasks on two types of relations between
procedural events: goal-step relations ("learn poses" is a step in the larger
goal of "doing yoga") and step-step temporal relations ("buy a yoga mat"
typically precedes "learn poses"). We introduce a dataset targeting these two
relations based on wikiHow, a website of instructional how-to articles. Our
human-validated test set serves as a reliable benchmark for commonsense
inference, with a gap of about 10% to 20% between the performance of
state-of-the-art transformer models and human performance. Our
automatically-generated training set allows models to effectively transfer to
out-of-domain tasks requiring knowledge of procedural events, with greatly
improved performances on SWAG, Snips, and the Story Cloze Test in zero- and
few-shot settings.
| 2,020 | Computation and Language |
Knowledge Graphs for Multilingual Language Translation and Generation | The Natural Language Processing (NLP) community has recently seen outstanding
progress, catalysed by the release of different Neural Network (NN)
architectures. Neural-based approaches have proven effective by significantly
increasing the output quality of a large number of automated solutions for NLP
tasks (Belinkov and Glass, 2019). Despite these notable advancements, dealing
with entities still poses a difficult challenge as they are rarely seen in
training data. Entities can be classified into two groups, i.e., proper nouns
and common nouns. Proper nouns are also known as Named Entities (NE) and
correspond to the name of people, organizations, or locations, e.g., John, WHO,
or Canada. Common nouns describe classes of objects, e.g., spoon or cancer.
Both types of entities can be found in a Knowledge Graph (KG). Recent work has
successfully exploited the contribution of KGs in NLP tasks, such as Natural
Language Inference (NLI) (KM et al., 2018) and Question Answering (QA) (Sorokin
and Gurevych, 2018). Only a few works had exploited the benefits of KGs in
Neural Machine Translation (NMT) when the work presented herein began.
Additionally, few works had studied the contribution of KGs to Natural Language
Generation (NLG) tasks. Moreover, the multilinguality also remained an open
research area in these respective tasks (Young et al., 2018). In this thesis,
we focus on the use of KGs for machine translation and the generation of texts
to deal with the problems caused by entities and consequently enhance the
quality of automatically generated texts.
| 2,020 | Computation and Language |
Leveraging Semantic Parsing for Relation Linking over Knowledge Bases | Knowledgebase question answering systems are heavily dependent on relation
extraction and linking modules. However, the task of extracting and linking
relations from text to knowledgebases faces two primary challenges; the
ambiguity of natural language and lack of training data. To overcome these
challenges, we present SLING, a relation linking framework which leverages
semantic parsing using Abstract Meaning Representation (AMR) and distant
supervision. SLING integrates multiple relation linking approaches that capture
complementary signals such as linguistic cues, rich semantic representation,
and information from the knowledgebase. The experiments on relation linking
using three KBQA datasets (QALD-7, QALD-9, and LC-QuAD 1.0) demonstrate that the
proposed approach achieves state-of-the-art performance on all benchmarks.
| 2,020 | Computation and Language |
NABU $\mathrm{-}$ Multilingual Graph-based Neural RDF Verbalizer | The RDF-to-text task has recently gained substantial attention due to
continuous growth of Linked Data. In contrast to traditional pipeline models,
recent studies have focused on neural models, which are now able to convert a
set of RDF triples into text in an end-to-end style with promising results.
However, English is the only language widely targeted. We address this research
gap by presenting NABU, a multilingual graph-based neural model that verbalizes
RDF data to German, Russian, and English. NABU is based on an encoder-decoder
architecture, uses an encoder inspired by Graph Attention Networks and a
Transformer as decoder. Our approach relies on the fact that knowledge graphs
are language-agnostic and they hence can be used to generate multilingual text.
We evaluate NABU in monolingual and multilingual settings on standard
benchmarking WebNLG datasets. Our results show that NABU outperforms
state-of-the-art approaches on English with 66.21 BLEU, and achieves consistent
results across all languages on the multilingual scenario with 56.04 BLEU.
| 2,020 | Computation and Language |
Automated Source Code Generation and Auto-completion Using Deep
Learning: Comparing and Discussing Current Language-Model-Related Approaches | In recent years, the use of deep learning in language models gained much
attention. Some research projects claim that they can generate text that can be
interpreted as human-writing, enabling new possibilities in many application
areas. Among the different areas related to language processing, one of the
most notable in applying this type of modeling is programming languages. For
years, the Machine Learning community has been researching this software
engineering area, pursuing goals like applying different approaches to
auto-complete, generate, fix, or evaluate code programmed by humans.
Considering the increasing popularity of the Deep-Learning-enabled language
models approach, we detected a lack of empirical papers that compare different
deep learning architectures to create and use language models based on
programming code. This paper compares different neural network architectures
like AWD-LSTMs, AWD-QRNNs, and Transformer while using transfer learning and
different tokenizations to see how they behave in building language models
using a Python dataset for code generation and filling mask tasks. Considering
the results, we discuss each approach's different strengths and weaknesses and
what gaps we find to evaluate the language models or apply them in a real
programming context.
| 2,021 | Computation and Language |