Titles | Abstracts | Years | Categories |
---|---|---|---|
Masking as an Efficient Alternative to Finetuning for Pretrained
Language Models | We present an efficient method of utilizing pretrained language models, where
we learn selective binary masks for pretrained weights in lieu of modifying
them through finetuning. Extensive evaluations of masking BERT and RoBERTa on a
series of NLP tasks show that our masking scheme yields performance comparable
to finetuning, yet has a much smaller memory footprint when several tasks need
to be inferred simultaneously. Through intrinsic evaluations, we show that
representations computed by masked language models encode information necessary
for solving downstream tasks. Analyzing the loss landscape, we show that
masking and finetuning produce models that reside in minima that can be
connected by a line segment with nearly constant test accuracy. This confirms
that masking can be utilized as an efficient alternative to finetuning.
| 2020 | Computation and Language |
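A minimal sketch of the masking scheme described in the abstract above, assuming PyTorch: each pretrained weight gets a learnable real-valued score that is thresholded into a binary mask with a straight-through estimator, while the pretrained weights themselves stay frozen. The layer, initialization, and threshold below are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """Linear layer whose frozen pretrained weights are gated by a learned binary mask."""

    def __init__(self, pretrained: nn.Linear, threshold: float = 0.0):
        super().__init__()
        # Pretrained weights stay frozen; only the mask scores are trained per task.
        self.weight = nn.Parameter(pretrained.weight.detach().clone(), requires_grad=False)
        self.bias = nn.Parameter(pretrained.bias.detach().clone(), requires_grad=False)
        self.scores = nn.Parameter(torch.empty_like(self.weight).uniform_(-0.01, 0.01))
        self.threshold = threshold

    def forward(self, x):
        hard = (self.scores > self.threshold).float()
        # Straight-through estimator: binary mask in the forward pass,
        # identity gradient to the real-valued scores in the backward pass.
        mask = hard + self.scores - self.scores.detach()
        return F.linear(x, self.weight * mask, self.bias)
```

Per task, only the binary mask has to be stored and swapped in at inference, which is where the memory saving over keeping a separately finetuned copy of the model per task comes from.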
Towards Multimodal Response Generation with Exemplar Augmentation and
Curriculum Optimization | Recently, variational auto-encoder (VAE) based approaches have made
impressive progress on improving the diversity of generated responses. However,
these methods usually suffer the cost of decreased relevance accompanied by
diversity improvements. In this paper, we propose a novel multimodal response
generation framework with exemplar augmentation and curriculum optimization to
enhance relevance and diversity of generated responses. First, unlike existing
VAE-based models that usually approximate a simple Gaussian posterior
distribution, we present a Gaussian mixture posterior distribution (i.e.,
multimodal) to further boost response diversity, which helps capture complex
semantics of responses. Then, to ensure that relevance does not decrease while
diversity increases, we fully incorporate similar examples (exemplars) retrieved
from the training data into posterior distribution modeling to augment response
relevance. Furthermore, to facilitate the convergence of Gaussian mixture prior
and posterior distributions, we devise a curriculum optimization strategy to
progressively train the model under multiple training criteria from easy to
hard. Experimental results on widely used SwitchBoard and DailyDialog datasets
demonstrate that our model achieves significant improvements compared to strong
baselines in terms of diversity and relevance.
| 2020 | Computation and Language |
Single-/Multi-Source Cross-Lingual NER via Teacher-Student Learning on
Unlabeled Data in Target Language | To better tackle the named entity recognition (NER) problem on languages with
little/no labeled data, cross-lingual NER must effectively leverage knowledge
learned from source languages with rich labeled data. Previous works on
cross-lingual NER are mostly based on label projection with pairwise texts or
direct model transfer. However, such methods either are not applicable if the
labeled data in the source languages is unavailable, or do not leverage
information contained in unlabeled data in the target language. In this paper,
we propose a teacher-student learning method to address such limitations, where
NER models in the source languages are used as teachers to train a student
model on unlabeled data in the target language. The proposed method works for
both single-source and multi-source cross-lingual NER. For the latter, we
further propose a similarity measuring method to better weight the supervision
from different teacher models. Extensive experiments for 3 target languages on
benchmark datasets well demonstrate that our method outperforms existing
state-of-the-art methods for both single-source and multi-source cross-lingual
NER.
| 2020 | Computation and Language |
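The teacher-student signal described above can be sketched, assuming PyTorch, as soft-label distillation on unlabeled target-language text, with per-teacher weights for the multi-source case; the plain softmax over similarity scores below stands in for the paper's similarity measuring method.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, teacher_sims):
    """KL divergence between the student and a weighted mixture of teacher tag distributions.

    student_logits:      (batch, seq_len, num_tags) predictions on unlabeled target text
    teacher_logits_list: one tensor shaped like student_logits per source-language teacher
    teacher_sims:        (num_teachers,) similarity of each source language to the target
    """
    weights = F.softmax(torch.as_tensor(teacher_sims, dtype=torch.float), dim=0)
    # Weighted average of the teachers' tag distributions acts as a soft pseudo-label.
    teacher_probs = sum(w * F.softmax(t, dim=-1)
                        for w, t in zip(weights, teacher_logits_list))
    log_student = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_student, teacher_probs, reduction="batchmean")
```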
Semi-Supervised Neural System for Tagging, Parsing and Lematization | This paper describes the ICS PAS system which took part in the CoNLL 2018 shared
task on Multilingual Parsing from Raw Text to Universal Dependencies. The
system consists of jointly trained tagger, lemmatizer, and dependency parser
which are based on features extracted by a biLSTM network. The system uses both
fully connected and dilated convolutional neural architectures. The novelty of
our approach is the use of an additional loss function, which reduces the
number of cycles in the predicted dependency graphs, and the use of
self-training to increase the system performance. The proposed system, i.e. ICS
PAS (Warszawa), ranked 3rd/4th in the official evaluation, obtaining the
following overall results: 73.02 (LAS), 60.25 (MLAS) and 64.44 (BLEX).
| 2020 | Computation and Language |
Experiments with LVT and FRE for Transformer model | In this paper, we experiment with Large Vocabulary Trick and Feature-rich
encoding applied to the Transformer model for text summarization. We could not
achieve better results than the analogous RNN-based sequence-to-sequence
model, so we tried more models to find out what improves the results and what
deteriorates them.
| 2020 | Computation and Language |
PTPARL-D: Annotated Corpus of 44 years of Portuguese Parliament debates | In a representative democracy, some decide in the name of the rest, and these
elected officials are commonly gathered in public assemblies, such as
parliaments, where they discuss policies, legislate, and vote on fundamental
initiatives. A core aspect of such democratic processes are the plenary
debates, where important public discussions take place. Many parliaments around
the world are increasingly keeping the transcripts of such debates, and other
parliamentary data, in digital formats accessible to the public, increasing
transparency and accountability. Furthermore, some parliaments are bringing old
paper transcripts to semi-structured digital formats. However, these records
are often only provided as raw text or even as images, with little to no
annotation, and inconsistent formats, making them difficult to analyze and
study, reducing both transparency and public reach. Here, we present PTPARL-D,
an annotated corpus of debates in the Portuguese Parliament, from 1976 to 2019,
covering the entire period of Portuguese democracy.
| 2021 | Computation and Language |
Assessing Discourse Relations in Language Generation from GPT-2 | Recent advances in NLP have been attributed to the emergence of large-scale
pre-trained language models. GPT-2, in particular, is suited for generation
tasks given its left-to-right language modeling objective, yet the linguistic
quality of its generated text has largely remained unexplored. Our work takes a
step in understanding GPT-2's outputs in terms of discourse coherence. We
perform a comprehensive study on the validity of explicit discourse relations
in GPT-2's outputs under both organic generation and fine-tuned scenarios.
Results show GPT-2 does not always generate text containing valid discourse
relations; nevertheless, its text is more aligned with human expectation in the
fine-tuned scenario. We propose a decoupled strategy to mitigate these problems
and highlight the importance of explicitly modeling discourse information.
| 2020 | Computation and Language |
Neural Machine Translation with Monte-Carlo Tree Search | Recent algorithms in machine translation have included a value network to
assist the policy network when deciding which word to output at each step of
the translation. The addition of a value network helps the algorithm perform
better on evaluation metrics like the BLEU score. After training the policy and
value networks in a supervised setting, the policy and value networks can be
jointly improved through common actor-critic methods. The main idea of our
project is to instead leverage Monte-Carlo Tree Search (MCTS) to search for
good output words with guidance from a combined policy and value network
architecture in a similar fashion as AlphaZero. This network serves both as a
local and a global look-ahead reference that uses the result of the search to
improve itself. Experiments using the IWSLT14 German to English translation
dataset show that our method outperforms the actor-critic methods used in
recent machine translation papers.
| 2020 | Computation and Language |
On the Importance of Word and Sentence Representation Learning in
Implicit Discourse Relation Classification | Implicit discourse relation classification is one of the most difficult parts
in shallow discourse parsing as the relation prediction without explicit
connectives requires the language understanding at both the text span level and
the sentence level. Previous studies mainly focus on the interactions between
two arguments. We argue that a powerful contextualized representation module, a
bilateral multi-perspective matching module, and a global information fusion
module are all important to implicit discourse analysis. We propose a novel
model to combine these modules together. Extensive experiments show that our
proposed model outperforms BERT and other state-of-the-art systems on the PDTB
dataset by around 8% and on the CoNLL 2016 datasets by around 16%. We also analyze the
effectiveness of different modules in the implicit discourse relation
classification task and demonstrate how different levels of representation
learning can affect the results.
| 2020 | Computation and Language |
Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less
Forgetting | Deep pretrained language models have achieved great success with the paradigm of
pretraining first and then fine-tuning. But such a sequential transfer learning
paradigm often confronts the catastrophic forgetting problem and leads to
sub-optimal performance. To fine-tune with less forgetting, we propose a recall
and learn mechanism, which adopts the idea of multi-task learning and jointly
learns pretraining tasks and downstream tasks. Specifically, we propose a
Pretraining Simulation mechanism to recall the knowledge from pretraining tasks
without data, and an Objective Shifting mechanism to focus the learning on
downstream tasks gradually. Experiments show that our method achieves
state-of-the-art performance on the GLUE benchmark. Our method also enables
BERT-base to achieve better performance than directly fine-tuning
BERT-large. Further, we provide the open-source RecAdam optimizer, which
integrates the proposed mechanisms into the Adam optimizer, to facilitate the NLP
community.
| 2020 | Computation and Language |
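A rough sketch of the two mechanisms above: Pretraining Simulation approximated as a quadratic penalty pulling parameters back toward their pretrained values, and Objective Shifting as a sigmoid schedule that gradually moves weight from that penalty to the downstream loss. The constant names and the schedule shape are illustrative; the released RecAdam optimizer folds this blending into the Adam update itself.

```python
import math

def recall_and_learn_loss(task_loss, model, pretrained_params, step,
                          t0=1000, k=0.01, gamma=0.01):
    """Blend the downstream loss with a 'recall' penalty toward the pretrained weights.

    task_loss:         scalar loss on the downstream task
    model:             the model being fine-tuned (a torch.nn.Module)
    pretrained_params: dict mapping parameter name -> frozen pretrained tensor
    step:              current training step
    """
    # Objective Shifting: lam moves from ~0 to ~1 over the course of training.
    lam = 1.0 / (1.0 + math.exp(-k * (step - t0)))
    # Pretraining Simulation: quadratic distance to the pretrained point, no pretraining data needed.
    recall = sum(((p - pretrained_params[n]) ** 2).sum()
                 for n, p in model.named_parameters() if n in pretrained_params)
    return lam * task_loss + (1.0 - lam) * gamma * recall
```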
Lexically Constrained Neural Machine Translation with Levenshtein
Transformer | This paper proposes a simple and effective algorithm for incorporating
lexical constraints in neural machine translation. Previous work either
required re-training existing models with the lexical constraints or
incorporating them during beam search decoding with significantly higher
computational overheads. Leveraging the flexibility and speed of a recently
proposed Levenshtein Transformer model (Gu et al., 2019), our method injects
terminology constraints at inference time without any impact on decoding speed.
Our method does not require any modification to the training procedure and can
be easily applied at runtime with custom dictionaries. Experiments on
English-German WMT datasets show that our approach improves an unconstrained
baseline and previous approaches.
| 2020 | Computation and Language |
Semantic Graphs for Generating Deep Questions | This paper proposes the problem of Deep Question Generation (DQG), which aims
to generate complex questions that require reasoning over multiple pieces of
information of the input passage. In order to capture the global structure of
the document and facilitate reasoning, we propose a novel framework which first
constructs a semantic-level graph for the input document and then encodes the
semantic graph by introducing an attention-based GGNN (Att-GGNN). Afterwards,
we fuse the document-level and graph-level representations to perform joint
training of content selection and question decoding. On the HotpotQA
deep-question centric dataset, our model greatly improves performance over
questions requiring reasoning over multiple facts, leading to state-of-the-art
performance. The code is publicly available at
https://github.com/WING-NUS/SG-Deep-Question-Generation.
| 2020 | Computation and Language |
BLEU Neighbors: A Reference-less Approach to Automatic Evaluation | Evaluation is a bottleneck in the development of natural language generation
(NLG) models. Automatic metrics such as BLEU rely on references, but for tasks
such as open-ended generation, there are no references to draw upon. Although
language diversity can be estimated using statistical measures such as
perplexity, measuring language quality requires human evaluation. However,
because human evaluation at scale is slow and expensive, it is used sparingly;
it cannot be used to rapidly iterate on NLG models, in the way BLEU is used for
machine translation. To this end, we propose BLEU Neighbors, a nearest
neighbors model for estimating language quality by using the BLEU score as a
kernel function. On existing datasets for chitchat dialogue and open-ended
sentence generation, we find that -- on average -- the quality estimation from
a BLEU Neighbors model has a lower mean squared error and higher Spearman
correlation with the ground truth than individual human annotators. Despite its
simplicity, BLEU Neighbors even outperforms state-of-the-art models on
automatically grading essays, including models that have access to a
gold-standard reference essay.
| 2020 | Computation and Language |
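A minimal sketch of the BLEU Neighbors estimator, assuming NLTK and a small pool of sentences paired with human quality scores: a new sentence is scored by averaging the labels of pool sentences whose BLEU similarity to it exceeds a cutoff. The cutoff, smoothing choice, and fallback are illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def bleu_neighbors_score(candidate, labeled_pool, cutoff=0.1):
    """Estimate the quality of `candidate` from its BLEU-nearest neighbors.

    candidate:    sentence (string) to score
    labeled_pool: list of (sentence, human_quality_score) pairs
    cutoff:       minimum BLEU similarity for a pool sentence to count as a neighbor
    """
    smooth = SmoothingFunction().method1
    hyp = candidate.split()
    neighbors = [score for sentence, score in labeled_pool
                 if sentence_bleu([sentence.split()], hyp,
                                  smoothing_function=smooth) >= cutoff]
    if not neighbors:
        # No neighbor above the cutoff: fall back to the pool average.
        return sum(score for _, score in labeled_pool) / len(labeled_pool)
    return sum(neighbors) / len(neighbors)
```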
Screenplay Summarization Using Latent Narrative Structure | Most general-purpose extractive summarization models are trained on news
articles, which are short and present all important information upfront. As a
result, such models are position-biased and often perform a smart selection
of sentences from the beginning of the document. When summarizing long
narratives, which have complex structure and present information piecemeal,
simple position heuristics are not sufficient. In this paper, we propose to
explicitly incorporate the underlying structure of narratives into general
unsupervised and supervised extractive summarization models. We formalize
narrative structure in terms of key narrative events (turning points) and treat
it as latent in order to summarize screenplays (i.e., extract an optimal
sequence of scenes). Experimental results on the CSI corpus of TV screenplays,
which we augment with scene-level summarization labels, show that latent
turning points correlate with important aspects of a CSI episode and improve
summarization performance over general extractive algorithms leading to more
complete and diverse summaries.
| 2020 | Computation and Language |
Augmenting Transformers with KNN-Based Composite Memory for Dialogue | Various machine learning tasks can benefit from access to external
information of different modalities, such as text and images. Recent work has
focused on learning architectures with large memories capable of storing this
knowledge. We propose augmenting generative Transformer neural networks with
KNN-based Information Fetching (KIF) modules. Each KIF module learns a read
operation to access fixed external knowledge. We apply these modules to
generative dialog modeling, a challenging task where information must be
flexibly retrieved and incorporated to maintain the topic and flow of
conversation. We demonstrate the effectiveness of our approach by identifying
relevant knowledge required for knowledgeable but engaging dialog from
Wikipedia, images, and human-written dialog utterances, and show that
leveraging this retrieved information improves model performance, measured by
automatic and human evaluation.
| 2020 | Computation and Language |
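One way to read the KIF module above is as a learned query over a fixed, pre-encoded external memory, with the top-K entries retrieved by inner product and summarized for the generator. The sketch below (assuming PyTorch) covers only that read operation; the gating and fusion back into the Transformer decoder are omitted.

```python
import torch
import torch.nn as nn

class KNNFetch(nn.Module):
    """Fetch the K nearest entries of a fixed external memory for each dialogue state."""

    def __init__(self, hidden_dim, memory_keys, memory_values, k=5):
        super().__init__()
        self.query_proj = nn.Linear(hidden_dim, memory_keys.size(-1))  # learned read operation
        # External knowledge is pre-encoded once and kept fixed (not trained).
        self.register_buffer("keys", memory_keys)      # (num_entries, key_dim)
        self.register_buffer("values", memory_values)  # (num_entries, value_dim)
        self.k = k

    def forward(self, dialogue_state):                  # (batch, hidden_dim)
        query = self.query_proj(dialogue_state)
        scores = query @ self.keys.t()                  # (batch, num_entries)
        top_scores, top_idx = scores.topk(self.k, dim=-1)
        weights = torch.softmax(top_scores, dim=-1)     # (batch, k)
        fetched = self.values[top_idx]                  # (batch, k, value_dim)
        # Weighted sum of the retrieved knowledge, to be fused into the decoder.
        return (weights.unsqueeze(-1) * fetched).sum(dim=1)
```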
The Gutenberg Dialogue Dataset | Large datasets are essential for neural modeling of many NLP tasks. Current
publicly available open-domain dialogue datasets offer a trade-off between
quality (e.g., DailyDialog) and size (e.g., Opensubtitles). We narrow this gap
by building a high-quality dataset of 14.8M utterances in English, and smaller
datasets in German, Dutch, Spanish, Portuguese, Italian, and Hungarian. We
extract and process dialogues from public-domain books made available by
Project Gutenberg. We describe our dialogue extraction pipeline, analyze the
effects of the various heuristics used, and present an error analysis of
extracted dialogues. Finally, we conduct experiments showing that better
response quality can be achieved in zero-shot and finetuning settings by
training on our data than on the larger but much noisier Opensubtitles dataset.
Our open-source pipeline (https://github.com/ricsinaruto/gutenberg-dialog) can
be extended to further languages with little additional effort. Researchers can
also build their versions of existing datasets by adjusting various trade-off
parameters. We also built a web demo for interacting with our models:
https://ricsinaruto.github.io/chatbot.html.
| 2021 | Computation and Language |
ColBERT: Using BERT Sentence Embedding in Parallel Neural Networks for
Computational Humor | Automation of humor detection and rating has interesting use cases in modern
technologies, such as humanoid robots, chatbots, and virtual assistants. In
this paper, we propose a novel approach for detecting and rating humor in short
texts based on a popular linguistic theory of humor. The proposed technical
method begins by separating the sentences of the given text and utilizing the
BERT model to generate embeddings for each one. The embeddings are fed to
separate lines of hidden layers in a neural network (one line for each
sentence) to extract latent features. Finally, the parallel lines are
concatenated to determine the congruity and other relationships between the
sentences and predict the target value. We accompany the paper with a novel
dataset for humor detection consisting of 200,000 formal short texts. In
addition to evaluating our work on the novel dataset, we participated in a live
machine learning competition focused on rating humor in Spanish tweets. The
proposed model obtained F1 scores of 0.982 and 0.869 in the humor detection
experiments which outperform general and state-of-the-art models. The
evaluation performed on two contrasting settings confirms the strength and
robustness of the model and suggests two important factors in achieving high
accuracy in the current task: 1) usage of sentence embeddings and 2) utilizing
the linguistic structure of humor in designing the proposed model.
| 2022 | Computation and Language |
LightPAFF: A Two-Stage Distillation Framework for Pre-training and
Fine-tuning | While pre-training and fine-tuning, e.g., BERT~\citep{devlin2018bert},
GPT-2~\citep{radford2019language}, have achieved great success in language
understanding and generation tasks, the pre-trained models are usually too big
for online deployment in terms of both memory cost and inference speed, which
hinders them from practical online usage. In this paper, we propose LightPAFF,
a Lightweight Pre-training And Fine-tuning Framework that leverages two-stage
knowledge distillation to transfer knowledge from a big teacher model to a
lightweight student model in both pre-training and fine-tuning stages. In this
way the lightweight model can achieve similar accuracy as the big teacher
model, but with much fewer parameters and thus faster online inference speed.
LightPAFF can support different pre-training methods (such as BERT, GPT-2 and
MASS~\citep{song2019mass}) and be applied to many downstream tasks. Experiments
on three language understanding tasks, three language modeling tasks and three
sequence to sequence generation tasks demonstrate that while achieving similar
accuracy with the big BERT, GPT-2 and MASS models, LightPAFF reduces the model
size by nearly 5x and improves online inference speed by 5x-7x.
| 2020 | Computation and Language |
Intuitive Contrasting Map for Antonym Embeddings | This paper shows that modern word embeddings contain information that
distinguishes synonyms and antonyms despite small cosine similarities between
corresponding vectors. This information is encoded in the geometry of the
embeddings and could be extracted with a straightforward and intuitive
manifold learning procedure or a contrasting map. Such a map is trained on a
small labeled subset of the data and can produce new embeddings that explicitly
highlight specific semantic attributes of the word. The new embeddings produced
by the map are shown to improve the performance on downstream tasks.
| 2021 | Computation and Language |
DeSePtion: Dual Sequence Prediction and Adversarial Examples for
Improved Fact-Checking | The increased focus on misinformation has spurred development of data and
systems for detecting the veracity of a claim as well as retrieving
authoritative evidence. The Fact Extraction and VERification (FEVER) dataset
provides such a resource for evaluating end-to-end fact-checking, requiring
retrieval of evidence from Wikipedia to validate a veracity prediction. We show
that current systems for FEVER are vulnerable to three categories of realistic
challenges for fact-checking -- multiple propositions, temporal reasoning, and
ambiguity and lexical variation -- and introduce a resource with these types of
claims. Then we present a system designed to be resilient to these "attacks"
using multiple pointer networks for document selection and jointly modeling a
sequence of evidence sentences and veracity relation predictions. We find that
in handling these attacks we obtain state-of-the-art results on FEVER, largely
due to improved evidence retrieval.
| 2020 | Computation and Language |
Intelligent Translation Memory Matching and Retrieval with Sentence
Encoders | Matching and retrieving previously translated segments from a Translation
Memory is the key functionality in Translation Memories systems. However, this
matching and retrieving process is still limited to algorithms based on edit
distance which we have identified as a major drawback in Translation Memories
systems. In this paper we introduce sentence encoders to improve the matching
and retrieving process in Translation Memories systems - an effective and
efficient solution to replace edit distance based algorithms.
| 2020 | Computation and Language |
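The replacement for edit-distance matching described above amounts to encoding the TM source segments once with a sentence encoder and retrieving by cosine similarity; a sketch assuming the sentence-transformers library (the model name is an illustrative choice, not necessarily the encoder used in the paper).

```python
import numpy as np
from sentence_transformers import SentenceTransformer

class NeuralTM:
    """Translation memory retrieval by sentence-embedding similarity instead of edit distance."""

    def __init__(self, tm_pairs, model_name="all-MiniLM-L6-v2"):
        self.encoder = SentenceTransformer(model_name)
        self.sources = [src for src, _ in tm_pairs]
        self.targets = [tgt for _, tgt in tm_pairs]
        # Encode the whole memory once, up front.
        self.source_vecs = self.encoder.encode(self.sources, normalize_embeddings=True)

    def match(self, query, top_k=3):
        q = self.encoder.encode([query], normalize_embeddings=True)[0]
        sims = self.source_vecs @ q          # cosine similarity (embeddings are normalized)
        best = np.argsort(-sims)[:top_k]
        return [(self.sources[i], self.targets[i], float(sims[i])) for i in best]
```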
SCDE: Sentence Cloze Dataset with High Quality Distractors From
Examinations | We introduce SCDE, a dataset to evaluate the performance of computational
models through sentence prediction. SCDE is a human-created sentence cloze
dataset, collected from public school English examinations. Our task requires a
model to fill up multiple blanks in a passage from a shared candidate set with
distractors designed by English teachers. Experimental results demonstrate that
this task requires the use of non-local, discourse-level context beyond the
immediate sentence neighborhood. The blanks require joint solving and
significantly impair each other's context. Furthermore, through ablations, we
show that the distractors are of high quality and make the task more
challenging. Our experiments show that there is a significant performance gap
between advanced models (72%) and humans (87%), encouraging future models to
bridge this gap.
| 2020 | Computation and Language |
Natural language processing for achieving sustainable development: the
case of neural labelling to enhance community profiling | In recent years, there has been an increasing interest in the application of
Artificial Intelligence - and especially Machine Learning - to the field of
Sustainable Development (SD). However, until now, NLP has not been applied in
this context. In this research paper, we show the high potential of NLP
applications to enhance the sustainability of projects. In particular, we focus
on the case of community profiling in developing countries, where, in contrast
to the developed world, a notable data gap exists. In this context, NLP could
help to address the cost and time barrier of structuring qualitative data that
prohibits its widespread use and associated benefits. We propose the new task
of Automatic UPV classification, which is an extreme multi-class multi-label
classification problem. We release Stories2Insights, an expert-annotated
dataset, provide a detailed corpus analysis, and implement a number of strong
neural baselines to address the task. Experimental results show that the
problem is challenging, and leave plenty of room for future research at the
intersection of NLP and SD.
| 2020 | Computation and Language |
DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference | Large-scale pre-trained language models such as BERT have brought significant
improvements to NLP applications. However, they are also notorious for being
slow in inference, which makes them difficult to deploy in real-time
applications. We propose a simple but effective method, DeeBERT, to accelerate
BERT inference. Our approach allows samples to exit earlier without passing
through the entire model. Experiments show that DeeBERT is able to save up to
~40% inference time with minimal degradation in model quality. Further analyses
show different behaviors in the BERT transformer layers and also reveal their
redundancy. Our work provides new ideas to efficiently apply deep
transformer-based models to downstream tasks. Code is available at
https://github.com/castorini/DeeBERT.
| 2020 | Computation and Language |
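The early-exit mechanism can be sketched as a small classifier (an "off-ramp") after every Transformer layer, with inference stopping as soon as the prediction entropy at an off-ramp drops below a threshold. The layer interface, use of the first token, and threshold are illustrative rather than DeeBERT's exact implementation.

```python
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    """Encoder with a classifier after each layer; inference exits once it is confident."""

    def __init__(self, layers, hidden_dim, num_classes, entropy_threshold=0.2):
        super().__init__()
        self.layers = nn.ModuleList(layers)  # e.g. batch-first nn.TransformerEncoderLayer instances
        self.off_ramps = nn.ModuleList([nn.Linear(hidden_dim, num_classes) for _ in layers])
        self.entropy_threshold = entropy_threshold

    @torch.no_grad()
    def forward(self, x):                     # x: (batch=1, seq_len, hidden_dim)
        probs = None
        for layer, ramp in zip(self.layers, self.off_ramps):
            x = layer(x)
            probs = torch.softmax(ramp(x[:, 0]), dim=-1)   # classify from the first token
            entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)
            if entropy.item() < self.entropy_threshold:
                return probs                  # confident enough: skip the remaining layers
        return probs                          # otherwise use the final layer's prediction
```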
Context-aware Helpfulness Prediction for Online Product Reviews | Modeling and prediction of review helpfulness has become more prominent due
to the proliferation of e-commerce websites and online shops. Since the
functionality of a product cannot be tested before buying, people often rely on
different kinds of user reviews to decide whether or not to buy a product.
However, quality reviews might be buried deep in the heap of a large amount of
reviews. Therefore, recommending reviews to customers based on the review
quality is of the essence. Since there is no direct indication of review
quality, most approaches use the information that "X out of Y" users found the
review helpful for obtaining the review quality. However, this approach
undermines helpfulness prediction because not all reviews have statistically
abundant votes. In this paper, we propose a neural deep learning model that
predicts the helpfulness score of a review. This model is based on
convolutional neural network (CNN) and a context-aware encoding mechanism which
can directly capture relationships between words irrespective of their distance
in a long sequence. We validated our model on a human-annotated dataset and the
results show that our model significantly outperforms existing models for
helpfulness prediction.
| 2020 | Computation and Language |
Octa: Omissions and Conflicts in Target-Aspect Sentiment Analysis | Sentiments in opinionated text are often determined by both aspects and
target words (or targets). We observe that targets and aspects interrelate in
subtle ways, often yielding conflicting sentiments. Thus, a naive aggregation
of sentiments from aspects and targets treated separately, as in existing
sentiment analysis models, impairs performance.
We propose Octa, an approach that jointly considers aspects and targets when
inferring sentiments. To capture and quantify relationships between targets and
context words, Octa uses a selective self-attention mechanism that handles
implicit or missing targets. Specifically, Octa involves two layers of
attention mechanisms for, respectively, selective attention between targets and
context words and attention over words based on aspects. On benchmark datasets,
Octa outperforms leading models by a large margin, yielding (absolute) gains in
accuracy of 1.6% to 4.3%.
| 2020 | Computation and Language |
PuzzLing Machines: A Challenge on Learning From Small Data | Deep neural models have repeatedly proved excellent at memorizing surface
patterns from large datasets for various ML and NLP benchmarks. They struggle
to achieve human-like thinking, however, because they lack the skill of
iterative reasoning upon knowledge. To expose this problem in a new light, we
introduce a challenge on learning from small data, PuzzLing Machines, which
consists of Rosetta Stone puzzles from Linguistic Olympiads for high school
students. These puzzles are carefully designed to contain only the minimal
amount of parallel text necessary to deduce the form of unseen expressions.
Solving them does not require external information (e.g., knowledge bases,
visual signals) or linguistic expertise, but meta-linguistic awareness and
deductive skills. Our challenge contains around 100 puzzles covering a wide
range of linguistic phenomena from 81 languages. We show that both simple
statistical algorithms and state-of-the-art deep neural models perform
inadequately on this challenge, as expected. We hope that this benchmark,
available at https://ukplab.github.io/PuzzLing-Machines/, inspires further
efforts towards a new paradigm in NLP---one that is grounded in human-like
reasoning and understanding.
| 2020 | Computation and Language |
Simultaneous Translation Policies: From Fixed to Adaptive | Adaptive policies are better than fixed policies for simultaneous
translation, since they can flexibly balance the tradeoff between translation
quality and latency based on the current context information. But previous
methods on obtaining adaptive policies either rely on complicated training
process, or underperform simple fixed policies. We design an algorithm to
achieve adaptive policies via a simple heuristic composition of a set of fixed
policies. Experiments on Chinese -> English and German -> English show that our
adaptive policies can outperform fixed ones by up to 4 BLEU points for the same
latency, and more surprisingly, it even surpasses the BLEU score of
full-sentence translation in the greedy mode (and very close to beam mode), but
with much lower latency.
| 2020 | Computation and Language |
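A sketch of composing fixed wait-k policies into an adaptive READ/WRITE schedule in the spirit of the abstract above: the decoder writes a target word when it is confident (or has already fallen k_max source words behind) and reads another source word otherwise. The confidence threshold and the k range are illustrative stand-ins for the paper's heuristic composition.

```python
def adaptive_policy(translate_step, source_words, k_min=1, k_max=5,
                    threshold=0.6, max_len=200):
    """Turn a set of fixed wait-k policies (k_min..k_max) into an adaptive schedule.

    translate_step(src_prefix, tgt_prefix) -> (word, prob): one greedy decoding step.
    source_words: the incoming source sentence, consumed word by word.
    """
    src, tgt = [], []
    read_ptr = 0
    while len(tgt) < max_len:
        lag = len(src) - len(tgt)
        can_read = read_ptr < len(source_words)
        if can_read and lag < k_min:
            src.append(source_words[read_ptr])   # below the most aggressive lag: must READ
            read_ptr += 1
            continue
        word, prob = translate_step(src, tgt)
        if word == "</s>":
            break
        if prob >= threshold or lag >= k_max or not can_read:
            tgt.append(word)                     # confident (or no source left): WRITE
        else:
            src.append(source_words[read_ptr])   # hesitant: READ one more source word
            read_ptr += 1
    return tgt
```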
Word Interdependence Exposes How LSTMs Compose Representations | Recent work in NLP shows that LSTM language models capture compositional
structure in language data. For a closer look at how these representations are
composed hierarchically, we present a novel measure of interdependence between
word meanings in an LSTM, based on their interactions at the internal gates. To
explore how compositional representations arise over training, we conduct
simple experiments on synthetic data, which illustrate our measure by showing
how high interdependence can hurt generalization. These synthetic experiments
also illustrate a specific hypothesis about how hierarchical structures are
discovered over the course of training: that parent constituents rely on
effective representations of their children, rather than on learning long-range
relations independently. We further support this measure with experiments on
English language data, where interdependence is higher for more closely
syntactically linked word pairs.
| 2020 | Computation and Language |
A Summary of the First Workshop on Language Technology for Language
Documentation and Revitalization | Despite recent advances in natural language processing and other language
technology, the application of such technology to language documentation and
conservation has been limited. In August 2019, a workshop was held at Carnegie
Mellon University in Pittsburgh to attempt to bring together language community
members, documentary linguists, and technologists to discuss how to bridge this
gap and create prototypes of novel and practical language revitalization
technologies. This paper reports the results of this workshop, including issues
discussed, and various conceived and implemented technologies for nine
languages: Arapaho, Cayuga, Inuktitut, Irish Gaelic, Kidaw'ida, Kwak'wala,
Ojibwe, San Juan Quiahije Chatino, and Seneca.
| 2020 | Computation and Language |
KoParadigm: A Korean Conjugation Paradigm Generator | Korean is a morphologically rich language. Korean verbs change their forms in
a fickle manner depending on tense, mood, speech level, meaning, etc.
Therefore, it is challenging to construct comprehensive conjugation paradigms
of Korean verbs. In this paper we introduce a Korean (verb) conjugation
paradigm generator, dubbed KoParadigm. To the best of our knowledge, it is the
first Korean conjugation module that covers all contemporary Korean verbs and
endings. KoParadigm is not only linguistically well established, but also
computationally simple and efficient. We share it via PyPi.
| 2020 | Computation and Language |
UXLA: A Robust Unsupervised Data Augmentation Framework for
Zero-Resource Cross-Lingual NLP | Transfer learning has yielded state-of-the-art (SoTA) results in many
supervised NLP tasks. However, annotated data for every target task in every
target language is rare, especially for low-resource languages. We propose
UXLA, a novel unsupervised data augmentation framework for zero-resource
transfer learning scenarios. In particular, UXLA aims to solve cross-lingual
adaptation problems from a source language task distribution to an unknown
target language task distribution, assuming no training label in the target
language. At its core, UXLA performs simultaneous self-training with data
augmentation and unsupervised sample selection. To show its effectiveness, we
conduct extensive experiments on three diverse zero-resource cross-lingual
transfer tasks. UXLA achieves SoTA results in all the tasks, outperforming the
baselines by a good margin. With an in-depth framework dissection, we
demonstrate the cumulative contributions of different components to its
success.
| 2021 | Computation and Language |
$R^3$: Reverse, Retrieve, and Rank for Sarcasm Generation with
Commonsense Knowledge | We propose an unsupervised approach for sarcasm generation based on a
non-sarcastic input sentence. Our method employs a retrieve-and-edit framework
to instantiate two major characteristics of sarcasm: reversal of valence and
semantic incongruity with the context which could include shared commonsense or
world knowledge between the speaker and the listener. While prior works on
sarcasm generation predominantly focus on context incongruity, we show that
combining valence reversal and semantic incongruity based on the commonsense
knowledge generates sarcasm of higher quality. Human evaluation shows that our
system generates sarcasm better than human annotators 34% of the time, and
better than a reinforced hybrid baseline 90% of the time.
| 2020 | Computation and Language |
Conversational Word Embedding for Retrieval-Based Dialog System | Human conversations contain many types of information, e.g., knowledge,
common sense, and language habits. In this paper, we propose a conversational
word embedding method named PR-Embedding, which utilizes the conversation pairs
$ \left\langle{post, reply} \right\rangle$ to learn word embedding. Different
from previous works, PR-Embedding uses the vectors from two different semantic
spaces to represent the words in post and reply. To catch the information among
the pair, we first introduce the word alignment model from statistical machine
translation to generate the cross-sentence window, then train the embedding on
word-level and sentence-level. We evaluate the method on single-turn and
multi-turn response selection tasks for retrieval-based dialog systems. The
experiment results show that PR-Embedding can improve the quality of the
selected response. PR-Embedding source code is available at
https://github.com/wtma/PR-Embedding
| 2020 | Computation and Language |
Learning Interpretable and Discrete Representations with Adversarial
Training for Unsupervised Text Classification | Learning continuous representations from unlabeled textual data has been
increasingly studied for benefiting semi-supervised learning. Although it is
relatively easier to interpret discrete representations, due to the difficulty
of training, learning discrete representations for unlabeled textual data has
not been widely explored. This work proposes TIGAN that learns to encode texts
into two disentangled representations, including a discrete code and a
continuous noise, where the discrete code represents interpretable topics, and
the noise controls the variance within the topics. The discrete code learned by
TIGAN can be used for unsupervised text classification. Compared to other
unsupervised baselines, the proposed TIGAN achieves superior performance on six
different corpora. Also, the performance is on par with a recently proposed
weakly-supervised text classification method. The extracted topical words for
representing latent topics show that TIGAN learns coherent and highly
interpretable topics.
| 2020 | Computation and Language |
Assessing the Bilingual Knowledge Learned by Neural Machine Translation
Models | Machine translation (MT) systems translate text between different languages
by automatically learning in-depth knowledge of bilingual lexicons, grammar and
semantics from the training examples. Although neural machine translation (NMT)
has led the field of MT, we have a poor understanding of how and why it works.
In this paper, we bridge the gap by assessing the bilingual knowledge learned
by NMT models with a phrase table -- an interpretable table of bilingual
lexicons. We extract the phrase table from the training examples that an NMT
model correctly predicts. Extensive experiments on widely-used datasets show
that the phrase table is reasonable and consistent across language pairs and
random seeds. Equipped with the interpretable phrase table, we find that NMT
models learn patterns from simple to complex and distill essential bilingual
knowledge from the training examples. We also revisit some advances that
potentially affect the learning of bilingual knowledge (e.g.,
back-translation), and report some interesting findings. We believe this work
opens a new angle to interpret NMT with statistic models, and provides
empirical supports for recent advances in improving NMT models.
| 2020 | Computation and Language |
Learning to Learn Morphological Inflection for Resource-Poor Languages | We propose to cast the task of morphological inflection - mapping a lemma to
an indicated inflected form - for resource-poor languages as a meta-learning
problem. Treating each language as a separate task, we use data from
high-resource source languages to learn a set of model parameters that can
serve as a strong initialization point for fine-tuning on a resource-poor
target language. Experiments with two model architectures on 29 target
languages from 3 families show that our suggested approach outperforms all
baselines. In particular, it obtains a 31.7% higher absolute accuracy than a
previously proposed cross-lingual transfer model and outperforms the previous
state of the art by 1.7% absolute accuracy on average over languages.
| 2020 | Computation and Language |
Weakly Supervised POS Taggers Perform Poorly on Truly Low-Resource
Languages | Part-of-speech (POS) taggers for low-resource languages which are exclusively
based on various forms of weak supervision - e.g., cross-lingual transfer,
type-level supervision, or a combination thereof - have been reported to
perform almost as well as supervised ones. However, weakly supervised POS
taggers are commonly only evaluated on languages that are very different from
truly low-resource languages, and the taggers use sources of information, like
high-coverage and almost error-free dictionaries, which are likely not
available for resource-poor languages. We train and evaluate state-of-the-art
weakly supervised POS taggers for a typologically diverse set of 15 truly
low-resource languages. On these languages, given a realistic amount of
resources, even our best model gets less than half of the words right. Our
results highlight the need for new and different approaches to POS tagging for
truly low-resource languages.
| 2020 | Computation and Language |
Self-Attention with Cross-Lingual Position Representation | Position encoding (PE), an essential part of self-attention networks (SANs),
is used to preserve the word order information for natural language processing
tasks, generating fixed position indices for input sequences. However, in
cross-lingual scenarios, e.g. machine translation, the PEs of source and target
sentences are modeled independently. Due to word order divergences in different
languages, modeling the cross-lingual positional relationships might help SANs
tackle this problem. In this paper, we augment SANs with \emph{cross-lingual
position representations} to model the bilingually aware latent structure for
the input sentence. Specifically, we utilize bracketing transduction grammar
(BTG)-based reordering information to encourage SANs to learn bilingual
diagonal alignments. Experimental results on WMT'14 English$\Rightarrow$German,
WAT'17 Japanese$\Rightarrow$English, and WMT'17 Chinese$\Leftrightarrow$English
translation tasks demonstrate that our approach significantly and consistently
improves translation quality over strong baselines. Extensive analyses confirm
that the performance gains come from the cross-lingual information.
| 2020 | Computation and Language |
Let's be Humorous: Knowledge Enhanced Humor Generation | The generation of humor is an under-explored and challenging problem.
Previous works mainly utilize templates or replace phrases to generate humor.
However, few works focus on freer forms and the background knowledge of humor.
The linguistic theory of humor defines the structure of a humor sentence as
set-up and punchline. In this paper, we explore how to generate a punchline
given the set-up with the relevant knowledge. We propose a framework that can
fuse the knowledge to end-to-end models. To our knowledge, this is the first
attempt to generate punchlines with knowledge enhanced model. Furthermore, we
create the first humor-knowledge dataset. The experimental results demonstrate
that our method can make use of knowledge to generate fluent, funny punchlines,
which outperforms several baselines.
| 2020 | Computation and Language |
Semantics-Aware Inferential Network for Natural Language Understanding | For natural language understanding tasks such as machine reading
comprehension and natural language inference, both semantic awareness and
inference ability are favorable features of the underlying model for better understanding
performance. Thus we propose a Semantics-Aware Inferential Network (SAIN) to
meet such a motivation. Taking explicit contextualized semantics as a
complementary input, the inferential module of SAIN enables a series of
reasoning steps over semantic clues through an attention mechanism. By
stringing these steps, the inferential network effectively learns to perform
iterative reasoning which incorporates both explicit semantics and
contextualized representations. With well pre-trained language models as the
front-end encoder, our model achieves significant improvement on 11 tasks
including machine reading comprehension and natural language inference.
| 2020 | Computation and Language |
Scheduled DropHead: A Regularization Method for Transformer Models | In this paper, we introduce DropHead, a structured dropout method
specifically designed for regularizing the multi-head attention mechanism,
which is a key component of transformer, a state-of-the-art model for various
NLP tasks. In contrast to the conventional dropout mechanisms which randomly
drop units or connections, the proposed DropHead is a structured dropout
method. It drops entire attention heads during training, which prevents the
multi-head attention model from being dominated by a small portion of attention
heads while also reducing the risk of overfitting the training data, thus making
use of the multi-head attention mechanism more efficiently. Motivated by recent
studies about the learning dynamic of the multi-head attention mechanism, we
propose a specific dropout rate schedule to adaptively adjust the dropout rate
of DropHead and achieve better regularization effect. Experimental results on
both machine translation and text classification benchmark datasets demonstrate
the effectiveness of the proposed approach.
| 2020 | Computation and Language |
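The structured dropout described above can be sketched as masking out whole attention heads during training and rescaling the survivors; the rate p would come from the scheduled curve the paper proposes (not reproduced here), and the tensor layout is illustrative.

```python
import torch

def drop_head(attn_output, p, training=True):
    """DropHead: zero out entire attention heads with probability p during training.

    attn_output: (batch, num_heads, seq_len, head_dim) per-head outputs, taken
                 before they are concatenated and projected.
    p:           current head-dropout rate, e.g. from a scheduled curve.
    """
    if not training or p == 0.0:
        return attn_output
    batch, num_heads = attn_output.shape[:2]
    keep = (torch.rand(batch, num_heads, 1, 1, device=attn_output.device) >= p).float()
    # Rescale so the expected magnitude of the combined heads is unchanged.
    return attn_output * keep / (1.0 - p)
```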
Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for
Offensive Language Detection | Nowadays, offensive content in social media has become a serious problem, and
automatically detecting offensive language is an essential task. In this paper,
we build an offensive language detection system, which combines multi-task
learning with BERT-based models. Using a pre-trained language model such as
BERT, we can effectively learn the representations for noisy text in social
media. Besides, to boost the performance of offensive language detection, we
leverage the supervision signals from other related tasks. In the
OffensEval-2020 competition, our model achieves 91.51% F1 score in English
Sub-task A, which is comparable to the first place (92.23% F1). An empirical
analysis is provided to explain the effectiveness of our approaches.
| 2020 | Computation and Language |
An Effective Transition-based Model for Discontinuous NER | Unlike widely used Named Entity Recognition (NER) data sets in generic
domains, biomedical NER data sets often contain mentions consisting of
discontinuous spans. Conventional sequence tagging techniques encode Markov
assumptions that are efficient but preclude recovery of these mentions. We
propose a simple, effective transition-based model with generic neural encoding
for discontinuous NER. Through extensive experiments on three biomedical data
sets, we show that our model can effectively recognize discontinuous mentions
without sacrificing the accuracy on continuous mentions.
| 2020 | Computation and Language |
DTCA: Decision Tree-based Co-Attention Networks for Explainable Claim
Verification | Recently, many methods discover effective evidence from reliable sources by
appropriate neural networks for explainable claim verification, which has been
widely recognized. However, in these methods, the discovery process of evidence
is nontransparent and unexplained. Simultaneously, the discovered evidence only
roughly aims at the interpretability of the whole sequence of claims but is
insufficient to focus on the false parts of claims. In this paper, we propose a
Decision Tree-based Co-Attention model (DTCA) to discover evidence for
explainable claim verification. Specifically, we first construct Decision
Tree-based Evidence model (DTE) to select comments with high credibility as
evidence in a transparent and interpretable way. Then we design Co-attention
Self-attention networks (CaSa) to make the selected evidence interact with
claims, which is for 1) training DTE to determine the optimal decision
thresholds and obtain more powerful evidence; and 2) utilizing the evidence to
find the false parts in the claim. Experiments on two public datasets,
RumourEval and PHEME, demonstrate that DTCA not only provides explanations for
the results of claim verification but also achieves the state-of-the-art
performance, boosting the F1-score by 3.11% and 2.41%, respectively.
| 2020 | Computation and Language |
Introducing a framework to assess newly created questions with Natural
Language Processing | Statistical models such as those derived from Item Response Theory (IRT)
enable the assessment of students on a specific subject, which can be useful
for several purposes (e.g., learning path customization, drop-out prediction).
However, the questions have to be assessed as well and, although it is possible
to estimate with IRT the characteristics of questions that have already been
answered by several students, this technique cannot be used on newly generated
questions. In this paper, we propose a framework to train and evaluate models
for estimating the difficulty and discrimination of newly created Multiple
Choice Questions by extracting meaningful features from the text of the
question and of the possible choices. We implement one model using this
framework and test it on a real-world dataset provided by CloudAcademy, showing
that it outperforms previously proposed models, reducing by 6.7% the RMSE for
difficulty estimation and by 10.8% the RMSE for discrimination estimation. We
also present the results of an ablation study performed to support our features
choice and to show the effects of different characteristics of the questions'
text on difficulty and discrimination.
| 2020 | Computation and Language |
Faster Depth-Adaptive Transformers | Depth-adaptive neural networks can dynamically adjust depths according to the
hardness of input words, and thus improve efficiency. The main challenge is how
to measure such hardness and decide the required depths (i.e., layers) to
conduct. Previous works generally build a halting unit to decide whether the
computation should continue or stop at each layer. As there is no specific
supervision of depth selection, the halting unit may be under-optimized and
inaccurate, which results in suboptimal and unstable performance when modeling
sentences. In this paper, we get rid of the halting unit and estimate the
required depths in advance, which yields a faster depth-adaptive model.
Specifically, two approaches are proposed to explicitly measure the hardness of
input words and estimate corresponding adaptive depth, namely 1) mutual
information (MI) based estimation and 2) reconstruction loss based estimation.
We conduct experiments on the text classification task with 24 datasets in
various sizes and domains. Results confirm that our approaches can speed up the
vanilla Transformer (up to 7x) while preserving high accuracy. Moreover,
efficiency and robustness are significantly improved when compared with other
depth-adaptive approaches.
| 2020 | Computation and Language |
Embarrassingly Simple Unsupervised Aspect Extraction | We present a simple but effective method for aspect identification in
sentiment analysis. Our unsupervised method only requires word embeddings and a
POS tagger, and is therefore straightforward to apply to new domains and
languages. We introduce Contrastive Attention (CAt), a novel single-head
attention mechanism based on an RBF kernel, which gives a considerable boost in
performance and makes the model interpretable. Previous work relied on
syntactic features and complex neural models. We show that given the simplicity
of current benchmark datasets for aspect extraction, such complex models are
not needed. The code to reproduce the experiments reported in this paper is
available at https://github.com/clips/cat
| 2020 | Computation and Language |
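A sketch of the Contrastive Attention (CAt) idea from the abstract above, using plain NumPy: the words of a sentence are weighted by their RBF-kernel similarity to a set of candidate aspect terms, and the attended summary vector is matched against aspect label embeddings. The gamma value and the cosine assignment step are illustrative; the released code has the exact formulation.

```python
import numpy as np

def contrastive_attention(word_vecs, candidate_vecs, gamma=0.03):
    """Single-head attention over sentence words using an RBF kernel.

    word_vecs:      (num_words, dim) embeddings of the words in a sentence
    candidate_vecs: (num_candidates, dim) embeddings of candidate aspect terms
    """
    dists = np.linalg.norm(word_vecs[:, None, :] - candidate_vecs[None, :, :], axis=-1)
    rbf = np.exp(-gamma * dists ** 2)          # (num_words, num_candidates)
    scores = rbf.sum(axis=1)
    weights = scores / scores.sum()            # attention weights over the sentence words
    summary = weights @ word_vecs              # attended sentence summary vector
    return weights, summary

def assign_aspect(summary, aspect_label_vecs, labels):
    """Assign the aspect whose label embedding is most similar to the summary vector."""
    sims = aspect_label_vecs @ summary / (
        np.linalg.norm(aspect_label_vecs, axis=1) * np.linalg.norm(summary) + 1e-9)
    return labels[int(np.argmax(sims))]
```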
MAVEN: A Massive General Domain Event Detection Dataset | Event detection (ED), which means identifying event trigger words and
classifying event types, is the first and most fundamental step for extracting
event knowledge from plain text. Most existing datasets exhibit the following
issues that limit further development of ED: (1) Data scarcity. Existing
small-scale datasets are not sufficient for training and stably benchmarking
increasingly sophisticated modern neural methods. (2) Low coverage. Limited
event types of existing datasets cannot well cover general-domain events, which
restricts the applications of ED models. To alleviate these problems, we
present a MAssive eVENt detection dataset (MAVEN), which contains 4,480
Wikipedia documents, 118,732 event mention instances, and 168 event types.
MAVEN alleviates the data scarcity problem and covers much more general event
types. We reproduce the recent state-of-the-art ED models and conduct a
thorough evaluation on MAVEN. The experimental results show that existing ED
methods cannot achieve promising results on MAVEN as on the small datasets,
which suggests that ED in the real world remains a challenging task and
requires further research efforts. We also discuss further directions for
general domain ED with empirical analyses. The source code and dataset can be
obtained from https://github.com/THU-KEG/MAVEN-dataset.
| 2020 | Computation and Language |
The Curse of Performance Instability in Analysis Datasets: Consequences,
Source, and Suggestions | We find that the performance of state-of-the-art models on Natural Language
Inference (NLI) and Reading Comprehension (RC) analysis/stress sets can be
highly unstable. This raises three questions: (1) How will the instability
affect the reliability of the conclusions drawn based on these analysis sets?
(2) Where does this instability come from? (3) How should we handle this
instability and what are some potential solutions? For the first question, we
conduct a thorough empirical study over analysis sets and find that in addition
to the unstable final performance, the instability exists all along the
training curve. We also observe lower-than-expected correlations between the
analysis validation set and standard validation set, questioning the
effectiveness of the current model-selection routine. Next, to answer the
second question, we give both theoretical explanations and empirical evidence
regarding the source of the instability, demonstrating that the instability
mainly comes from high inter-example correlations within analysis sets.
Finally, for the third question, we discuss an initial attempt to mitigate the
instability and suggest guidelines for future work such as reporting the
decomposed variance for more interpretable results and fair comparison across
models. Our code is publicly available at:
https://github.com/owenzx/InstabilityAnalysis
| 2020 | Computation and Language |
Event Extraction by Answering (Almost) Natural Questions | The problem of event extraction requires detecting the event trigger and
extracting its corresponding arguments. Existing work in event argument
extraction typically relies heavily on entity recognition as a
preprocessing/concurrent step, causing the well-known problem of error
propagation. To avoid this issue, we introduce a new paradigm for event
extraction by formulating it as a question answering (QA) task that extracts
the event arguments in an end-to-end manner. Empirical results demonstrate that
our framework outperforms prior methods substantially; in addition, it is
capable of extracting event arguments for roles not seen at training time
(zero-shot learning setting).
| 2021 | Computation and Language |
KACC: A Multi-task Benchmark for Knowledge Abstraction, Concretization
and Completion | A comprehensive knowledge graph (KG) contains an instance-level entity graph
and an ontology-level concept graph. The two-view KG provides a testbed for
models to "simulate" human's abilities on knowledge abstraction,
concretization, and completion (KACC), which are crucial for human to recognize
the world and manage learned knowledge. Existing studies mainly focus on
partial aspects of KACC. In order to promote thorough analyses for KACC
abilities of models, we propose a unified KG benchmark by improving existing
benchmarks in terms of dataset scale, task coverage, and difficulty.
Specifically, we collect new datasets that contain larger concept graphs,
abundant cross-view links as well as dense entity graphs. Based on the
datasets, we propose novel tasks such as multi-hop knowledge abstraction (MKA),
multi-hop knowledge concretization (MKC) and then design a comprehensive
benchmark. For MKA and MKC tasks, we further annotate multi-hop hierarchical
triples as harder samples. The experimental results of existing methods
demonstrate the challenges of our benchmark. The resource is available at
https://github.com/thunlp/KACC.
| 2021 | Computation and Language |
Recipes for building an open-domain chatbot | Building open-domain chatbots is a challenging area for machine learning
research. While prior work has shown that scaling neural models in the number
of parameters and the size of the data they are trained on gives improved
results, we show that other ingredients are important for a high-performing
chatbot. Good conversation requires a number of skills that an expert
conversationalist blends in a seamless way: providing engaging talking points
and listening to their partners, and displaying knowledge, empathy and
personality appropriately, while maintaining a consistent persona. We show that
large scale models can learn these skills when given appropriate training data
and choice of generation strategy. We build variants of these recipes with 90M,
2.7B and 9.4B parameter models, and make our models and code publicly
available. Human evaluations show our best models are superior to existing
approaches in multi-turn dialogue in terms of engagingness and humanness
measurements. We then discuss the limitations of this work by analyzing failure
cases of our models.
| 2,020 | Computation and Language |
Capturing Global Informativeness in Open Domain Keyphrase Extraction | Open-domain KeyPhrase Extraction (KPE) aims to extract keyphrases from
documents without domain or quality restrictions, e.g., web pages with varying
domains and qualities. Recently, neural methods have shown promising results in
many KPE tasks due to their powerful capacity for modeling contextual semantics
of the given documents. However, we empirically show that most neural KPE
methods prefer to extract keyphrases with good phraseness, such as short and
entity-style n-grams, instead of globally informative keyphrases from
open-domain documents. This paper presents JointKPE, an open-domain KPE
architecture built on pre-trained language models, which can capture both local
phraseness and global informativeness when extracting keyphrases. JointKPE
learns to rank keyphrases by estimating their informativeness in the entire
document and is jointly trained on the keyphrase chunking task to guarantee the
phraseness of keyphrase candidates. Experiments on two large KPE datasets with
diverse domains, OpenKP and KP20k, demonstrate the effectiveness of JointKPE on
different pre-trained variants in open-domain scenarios. Further analyses
reveal the significant advantages of JointKPE in predicting long and non-entity
keyphrases, which are challenging for previous neural KPE methods. Our code is
publicly available at https://github.com/thunlp/BERT-KPE.
| 2,021 | Computation and Language |
Extending Multilingual BERT to Low-Resource Languages | Multilingual BERT (M-BERT) has been a huge success in both supervised and
zero-shot cross-lingual transfer learning. However, this success has focused
only on the top 104 languages in Wikipedia that it was trained on. In this
paper, we propose a simple but effective approach to extend M-BERT (E-BERT) so
that it can benefit any new language, and show that our approach benefits
languages that are already in M-BERT as well. We perform an extensive set of
experiments with Named Entity Recognition (NER) on 27 languages, only 16 of
which are in M-BERT, and show an average increase of about 6% F1 on languages
that are already in M-BERT and a 23% F1 increase on new languages.
| 2,020 | Computation and Language |
Unnatural Language Processing: Bridging the Gap Between Synthetic and
Natural Language Data | Large, human-annotated datasets are central to the development of natural
language processing models. Collecting these datasets can be the most
challenging part of the development process. We address this problem by
introducing a general purpose technique for ``simulation-to-real'' transfer in
language understanding problems with a delimited set of target behaviors,
making it possible to develop models that can interpret natural utterances
without natural training data. We begin with a synthetic data generation
procedure, and train a model that can accurately interpret utterances produced
by the data generator. To generalize to natural utterances, we automatically
find projections of natural language utterances onto the support of the
synthetic language, using learned sentence embeddings to define a distance
metric. With only synthetic training data, our approach matches or outperforms
state-of-the-art models trained on natural language data in several domains.
These results suggest that simulation-to-real transfer is a practical framework
for developing NLP applications, and that improved models for transfer might
provide wide-ranging improvements in downstream tasks.
| 2,020 | Computation and Language |
LogicalFactChecker: Leveraging Logical Operations for Fact Checking with
Graph Module Network | Verifying the correctness of a textual statement requires not only semantic
reasoning about the meaning of words, but also symbolic reasoning about logical
operations like count, superlative, aggregation, etc. In this work, we propose
LogicalFactChecker, a neural network approach capable of leveraging logical
operations for fact checking. It achieves the state-of-the-art performance on
TABFACT, a large-scale, benchmark dataset built for verifying a textual
statement with semi-structured tables. This is achieved by a graph module
network built upon the Transformer-based architecture. With a textual statement
and a table as the input, LogicalFactChecker automatically derives a program
(a.k.a. logical form) of the statement in a semantic parsing manner. A
heterogeneous graph is then constructed to capture not only the structures of
the table and the program, but also the connections between inputs with
different modalities. Such a graph reveals the related contexts of each word in
the statement, the table and the program. The graph is used to obtain
graph-enhanced contextual representations of words in a Transformer-based
architecture. After that, a program-driven module network is further introduced
to exploit the hierarchical structure of the program, where semantic
compositionality is dynamically modeled along the program structure with a set
of function-specific modules. Ablation experiments suggest that both the
heterogeneous graph and the module network are important to obtain strong
results.
| 2,020 | Computation and Language |
Active Learning for Coreference Resolution using Discrete Annotation | We improve upon pairwise annotation for active learning in coreference
resolution, by asking annotators to identify mention antecedents if a presented
mention pair is deemed not coreferent. This simple modification, when combined
with a novel mention clustering algorithm for selecting which examples to
label, is much more efficient in terms of the performance obtained per
annotation budget. In experiments with existing benchmark coreference datasets,
we show that the signal from this additional question leads to significant
performance gains per human-annotation hour. Future work can use our annotation
protocol to effectively develop coreference models for new domains. Our code is
publicly available at
https://github.com/belindal/discrete-active-learning-coref .
| 2,020 | Computation and Language |
Entity Type Prediction in Knowledge Graphs using Embeddings | Open Knowledge Graphs (such as DBpedia, Wikidata, YAGO) have been recognized
as the backbone of diverse applications in the field of data mining and
information retrieval. Hence, the completeness and correctness of the Knowledge
Graphs (KGs) are vital. Most of these KGs are created either via automated
information extraction from Wikipedia snapshots, via information contributed by
users, or using heuristics. However, it has been
observed that the type information of these KGs is often noisy, incomplete, and
incorrect. To deal with this problem, a multi-label classification approach is
proposed in this work for entity typing using KG embeddings. We compare our
approach with the current state-of-the-art type prediction method and report on
experiments with the KGs.
| 2,020 | Computation and Language |
Autoencoding Word Representations through Time for Semantic Change
Detection | Semantic change detection concerns the task of identifying words whose
meaning has changed over time. The current state-of-the-art detects the level
of semantic change in a word by comparing its vector representation in two
distinct time periods, without considering its evolution through time. In this
work, we propose three variants of sequential models for detecting semantically
shifted words, effectively accounting for the changes in the word
representations over time, in a temporally sensitive manner. Through extensive
experimentation under various settings with both synthetic and real data we
showcase the importance of sequential modelling of word vectors through time
for detecting the words whose semantics have changed the most. Finally, we take
a step towards comparing different approaches in a quantitative manner,
demonstrating that the temporal modelling of word representations yields a
clear-cut advantage in performance.
| 2,020 | Computation and Language |
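A minimal sketch of the pairwise baseline the abstract above contrasts with: scoring semantic change as the cosine distance between a word's vectors from two (already aligned) time-period embedding spaces. The vectors below are random stand-ins; real use assumes per-period embeddings aligned with, e.g., orthogonal Procrustes.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two word vectors."""
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy stand-ins: word vectors from two (already aligned) time periods.
rng = np.random.default_rng(0)
emb_1990 = {"gay": rng.normal(size=50), "cell": rng.normal(size=50), "bread": rng.normal(size=50)}
emb_2020 = {w: v + rng.normal(scale=0.3, size=50) for w, v in emb_1990.items()}
emb_2020["cell"] += rng.normal(scale=2.0, size=50)  # simulate a strong meaning shift

# Rank words by how much their representation moved between the two periods.
change = {w: cosine_distance(emb_1990[w], emb_2020[w]) for w in emb_1990}
for word, score in sorted(change.items(), key=lambda kv: -kv[1]):
    print(f"{word:>6s}  change={score:.3f}")
```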
Showing Your Work Doesn't Always Work | In natural language processing, a recently popular line of work explores how
to best report the experimental results of neural networks. One exemplar
publication, titled "Show Your Work: Improved Reporting of Experimental
Results," advocates for reporting the expected validation effectiveness of the
best-tuned model, with respect to the computational budget. In the present
work, we critically examine this paper. As far as statistical generalizability
is concerned, we find unspoken pitfalls and caveats with this approach. We
analytically show that their estimator is biased and uses error-prone
assumptions. We find that the estimator favors negative errors and yields poor
bootstrapped confidence intervals. We derive an unbiased alternative and
bolster our claims with empirical evidence from statistical simulation. Our
codebase is at http://github.com/castorini/meanmax.
| 2,020 | Computation and Language |
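A hedged illustration of the quantity discussed above: the expected best validation score as a function of the hyperparameter-search budget, estimated here by naive Monte Carlo resampling of observed trial scores. This plug-in style of estimate is the kind whose bias the abstract critiques; the trial scores below are invented.

```python
import numpy as np

def expected_max_curve(scores, max_budget, n_resamples=10_000, seed=0):
    """Monte Carlo estimate of E[max of n i.i.d. draws] from empirical trial scores,
    for each budget n = 1..max_budget (a simple plug-in estimate; biased for small samples)."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    curve = []
    for n in range(1, max_budget + 1):
        draws = rng.choice(scores, size=(n_resamples, n), replace=True)
        curve.append(draws.max(axis=1).mean())
    return curve

# Hypothetical validation accuracies from 20 random hyperparameter trials.
trial_scores = [0.71, 0.74, 0.69, 0.80, 0.77, 0.73, 0.68, 0.79, 0.75, 0.72,
                0.76, 0.70, 0.81, 0.66, 0.78, 0.74, 0.73, 0.77, 0.69, 0.75]
for budget, value in enumerate(expected_max_curve(trial_scores, max_budget=10), start=1):
    print(f"budget={budget:2d}  expected best val acc ~ {value:.3f}")
```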
Informational Space of Meaning for Scientific Texts | In Natural Language Processing, automatically extracting the meaning of texts
constitutes an important problem. Our focus is the computational analysis of
meaning of short scientific texts (abstracts or brief reports). In this paper,
a vector space model is developed for quantifying the meaning of words and
texts. We introduce the Meaning Space, in which the meaning of a word is
represented by a vector of Relative Information Gain (RIG) about the subject
categories that the text belongs to, which can be obtained from observing the
word in the text. This new approach is applied to construct the Meaning Space
based on Leicester Scientific Corpus (LSC) and Leicester Scientific
Dictionary-Core (LScDC). The LSC is a scientific corpus of 1,673,350 abstracts
and the LScDC is a scientific dictionary whose words are extracted from the
LSC. Each text in the LSC belongs to at least one of 252 subject categories of
Web of Science (WoS). These categories are used in construction of vectors of
information gains. The Meaning Space is described and statistically analysed
for the LSC with the LScDC. The usefulness of the proposed representation model
is evaluated through top-ranked words in each category. The most informative n
words are ordered. We demonstrated that RIG-based word ranking is much more
useful than ranking based on raw word frequency in determining the
science-specific meaning and importance of a word. The proposed model based on
RIG is shown to be able to highlight topic-specific words in categories.
The most informative words are presented for 252 categories. The new scientific
dictionary and the 103,998 x 252 Word-Category RIG Matrix are available online.
Analysis of the Meaning Space provides us with a tool to further explore
quantifying the meaning of a text using more complex and context-dependent
meaning models that use co-occurrence of words and their combinations.
| 2,020 | Computation and Language |
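A small sketch of one common way to compute a relative information gain for a word about a subject category, namely IG(category; word presence) normalized by the category entropy; the paper's exact definition over the LSC/LScDC may differ, and the toy corpus is invented.

```python
import numpy as np

def entropy(p):
    """Entropy (in bits) of a Bernoulli distribution with success probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def relative_information_gain(word_in_text, text_in_category):
    """IG(category ; word presence) / H(category), both given as boolean arrays over texts.
    One common normalization of information gain; the paper's exact definition may differ."""
    w = np.asarray(word_in_text, dtype=bool)
    c = np.asarray(text_in_category, dtype=bool)
    h_c = entropy(c.mean())
    if h_c == 0.0:
        return 0.0
    # Conditional entropy H(category | word present / absent).
    h_cond = 0.0
    for value in (True, False):
        mask = (w == value)
        if mask.any():
            h_cond += mask.mean() * entropy(c[mask].mean())
    return (h_c - h_cond) / h_c

# Toy corpus: 8 texts, a word that co-occurs strongly with one subject category.
word_present = [1, 1, 1, 0, 0, 0, 1, 0]
in_category  = [1, 1, 1, 0, 0, 0, 0, 0]
print(f"RIG ~ {relative_information_gain(word_present, in_category):.3f}")
```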
Graph-to-Tree Neural Networks for Learning Structured Input-Output
Translation with Applications to Semantic Parsing and Math Word Problem | The celebrated Seq2Seq technique and its numerous variants achieve excellent
performance on many tasks such as neural machine translation, semantic parsing,
and math word problem solving. However, these models either only consider input
objects as sequences while ignoring the important structural information for
encoding, or they simply treat output objects as sequence outputs instead of
structural objects for decoding. In this paper, we present a novel
Graph-to-Tree Neural Network, namely Graph2Tree, consisting of a graph encoder
and a hierarchical tree decoder, that encodes an augmented graph-structured
input and decodes a tree-structured output. In particular, we investigated our
model on two problems, neural semantic parsing and math word problem solving.
Our extensive experiments demonstrate that our Graph2Tree model outperforms or
matches the performance of other state-of-the-art models on these tasks.
| 2,020 | Computation and Language |
Conspiracy in the Time of Corona: Automatic detection of Covid-19
Conspiracy Theories in Social Media and the News | Rumors and conspiracy theories thrive in environments of low confidence and
low trust. Consequently, it is not surprising that ones related to the Covid-19
pandemic are proliferating given the lack of any authoritative scientific
consensus on the virus, its spread and containment, or on the long term social
and economic ramifications of the pandemic. Among the stories currently
circulating are ones suggesting that the 5G network activates the virus, that
the pandemic is a hoax perpetrated by a global cabal, that the virus is a
bio-weapon released deliberately by the Chinese, or that Bill Gates is using it
as cover to launch a global surveillance regime. While some may be quick to
dismiss these stories as having little impact on real-world behavior, recent
events including the destruction of property, racially fueled attacks against
Asian Americans, and demonstrations espousing resistance to public health
orders countermand such conclusions. Inspired by narrative theory, we crawl
social media sites and news reports and, through the application of automated
machine-learning methods, discover the underlying narrative frameworks
supporting the generation of these stories. We show how the various narrative
frameworks fueling rumors and conspiracy theories rely on the alignment of
otherwise disparate domains of knowledge, and consider how they attach to the
broader reporting on the pandemic. These alignments and attachments, which can
be monitored in near real-time, may be useful for identifying areas in the news
that are particularly vulnerable to reinterpretation by conspiracy theorists.
Understanding the dynamics of storytelling on social media and the narrative
frameworks that provide the generative basis for these stories may also be
helpful for devising methods to disrupt their spread.
| 2,020 | Computation and Language |
A Practical Framework for Relation Extraction with Noisy Labels Based on
Doubly Transitional Loss | Either human annotation or rule based automatic labeling is an effective
method to augment data for relation extraction. However, the inevitable
wrong-labeling problem, for example in distant supervision, may deteriorate the
performance of many existing methods. To address this issue, we introduce a
practical end-to-end deep learning framework, including a standard feature
extractor and a novel noisy classifier with our proposed doubly transitional
mechanism. One transition is basically parameterized by a non-linear
transformation between hidden layers that implicitly represents the conversion
between the true and noisy labels, and it can be readily optimized together
with other model parameters. The other is an explicit probability transition
matrix that captures the direct conversion between labels but needs to be
derived from an EM algorithm. We conduct experiments on the NYT dataset and
SemEval 2018 Task 7. The empirical results show comparable or better
performance over state-of-the-art methods.
| 2,020 | Computation and Language |
TextGAIL: Generative Adversarial Imitation Learning for Text Generation | Generative Adversarial Networks (GANs) for text generation have recently
received many criticisms, as they perform worse than their MLE counterparts. We
suspect previous text GANs' inferior performance is due to the lack of a
reliable guiding signal in their discriminators. To address this problem, we
propose a generative adversarial imitation learning framework for text
generation that uses large pre-trained language models to provide more reliable
reward guidance. Our approach uses a contrastive discriminator and proximal
policy optimization (PPO) to stabilize and improve text generation performance.
For evaluation, we conduct experiments on a diverse set of unconditional and
conditional text generation tasks. Experimental results show that TextGAIL
achieves better performance in terms of both quality and diversity than the MLE
baseline. We also validate our intuition that TextGAIL's discriminator
demonstrates the capability of providing reasonable rewards with an additional
task.
| 2,021 | Computation and Language |
Multilingual Chart-based Constituency Parse Extraction from Pre-trained
Language Models | As it has been unveiled that pre-trained language models (PLMs) are to some
extent capable of recognizing syntactic concepts in natural language, much
effort has been made to develop a method for extracting complete (binary)
parses from PLMs without training separate parsers. We improve upon this
paradigm by proposing a novel chart-based method and an effective top-K
ensemble technique. Moreover, we demonstrate that we can broaden the scope of
application of the approach into multilingual settings. Specifically, we show
that by applying our method on multilingual PLMs, it becomes possible to induce
non-trivial parses for sentences from nine languages in an integrated and
language-agnostic manner, attaining performance superior or comparable to that
of unsupervised PCFGs. We also verify that our approach is robust to
cross-lingual transfer. Finally, we provide analyses on the inner workings of
our method. For instance, we discover universal attention heads which are
consistently sensitive to syntactic information irrespective of the input
language.
| 2,021 | Computation and Language |
DomBERT: Domain-oriented Language Model for Aspect-based Sentiment
Analysis | This paper focuses on learning domain-oriented language models driven by end
tasks, which aims to combine the worlds of both general-purpose language models
(such as ELMo and BERT) and domain-specific language understanding. We propose
DomBERT, an extension of BERT to learn from both in-domain corpus and relevant
domain corpora. This helps in learning domain language models with
low resources. Experiments are conducted on an assortment of tasks in
aspect-based sentiment analysis, demonstrating promising results.
| 2,020 | Computation and Language |
A Survey of Document Grounded Dialogue Systems (DGDS) | Dialogue system (DS) attracts great attention from industry and academia
because of its wide application prospects. Researchers usually divide the DS
according to the function. However, many conversations require the DS to switch
between different functions. For example, movie discussion can change from
chit-chat to QA, the conversational recommendation can transform from chit-chat
to recommendation, etc. Therefore, classification according to functions may
not be enough to help us appreciate the current development trend. We classify
the DS based on background knowledge. Specifically, we study the latest DS based
on unstructured document(s). We define the Document Grounded Dialogue System
(DGDS) as the DS in which the dialogues are centered on the given document(s). The
DGDS can be used in scenarios such as discussing merchandise against a product
manual, commenting on news reports, etc. We believe that extracting
unstructured document(s) information is the future trend of the DS because a
great amount of human knowledge lies in these document(s). The research of the
DGDS not only possesses a broad application prospect but also facilitates AI to
better understand human knowledge and natural language. We analyze the
classification, architecture, datasets, models, and future development trends
of the DGDS, hoping to help researchers in this field.
| 2,020 | Computation and Language |
Neural Machine Translation for Low-Resourced Indian Languages | A large number of significant assets are available online in English, which
is frequently translated into native languages to ease the information sharing
among local people who are not much familiar with English. However, manual
translation is a very tedious, costly, and time-consuming process. To this end,
machine translation is an effective approach to convert text to a different
language without any human involvement. Neural machine translation (NMT) is one
of the most proficient translation techniques amongst all existing machine
translation systems. In this paper, we apply NMT to two of the most
morphologically rich Indian languages, through the English-Tamil and English-Malayalam pairs.
We propose a novel NMT model using multi-head self-attention along with
pre-trained Byte-Pair-Encoded (BPE) and MultiBPE embeddings to develop an
efficient translation system that overcomes the OOV (Out Of Vocabulary) problem
for low resourced morphological rich Indian languages which do not have much
translation available online. We also collected corpus from different sources,
addressed the issues with these publicly available data and refined them for
further uses. We used the BLEU score for evaluating our system performance.
Experimental results and a survey confirmed that our proposed translator (24.34
and 9.78 BLEU) outperforms Google Translate (9.40 and 5.94 BLEU) on the
respective language pairs.
| 2,020 | Computation and Language |
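A minimal sketch of the byte-pair-encoding mechanism underlying the OOV handling mentioned above (the classic merge-learning loop on a toy vocabulary); the paper itself uses pre-trained BPE and MultiBPE embeddings rather than code like this.

```python
import re
from collections import defaultdict

def pair_counts(vocab):
    """Count adjacent symbol pairs over a {space-joined word: frequency} vocabulary."""
    counts = defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            counts[(a, b)] += freq
    return counts

def apply_merge(pair, vocab):
    """Merge every occurrence of `pair` into a single symbol."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Toy training vocabulary (characters separated by spaces, </w> marks word end).
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}
merges = []
for _ in range(10):
    counts = pair_counts(vocab)
    if not counts:
        break
    best = max(counts, key=counts.get)
    vocab = apply_merge(best, vocab)
    merges.append(best)

print("learned merges:", merges)
print("segmented vocab:", list(vocab))
```

Unseen words can then be segmented into these learned subword pieces, which is how BPE mitigates the OOV problem for morphologically rich languages.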
Evolution of Semantic Similarity -- A Survey | Estimating the semantic similarity between text data is one of the
challenging and open research problems in the field of Natural Language
Processing (NLP). The versatility of natural language makes it difficult to
define rule-based methods for determining semantic similarity measures. In
order to address this issue, various semantic similarity methods have been
proposed over the years. This survey article traces the evolution of such
methods, categorizing them based on their underlying principles as
knowledge-based, corpus-based, deep neural network-based methods, and hybrid
methods. Discussing the strengths and weaknesses of each method, this survey
provides a comprehensive view of existing systems in place, for new researchers
to experiment and develop innovative ideas to address the issue of semantic
similarity.
| 2,021 | Computation and Language |
Fine-tuning Multi-hop Question Answering with Hierarchical Graph Network | In this paper, we present a two-stage model for multi-hop question answering.
The first stage is a hierarchical graph network, which is used to reason over
multi-hop questions and is capable of capturing different levels of granularity
using the natural structure (i.e., paragraphs, questions, sentences, and entities)
of documents. The reasoning process is converted into a node classification task (i.e.,
paragraph nodes and sentence nodes). The second stage is a language model
fine-tuning task. In short, stage one uses a graph neural network to select and
concatenate supporting sentences into one paragraph, and stage two finds the answer
span under the language model fine-tuning paradigm.
| 2,022 | Computation and Language |
A Baseline for the Commands For Autonomous Vehicles Challenge | The Commands For Autonomous Vehicles (C4AV) challenge requires participants
to solve an object referral task in a real-world setting. More specifically, we
consider a scenario where a passenger can pass free-form natural language
commands to a self-driving car. This problem is particularly challenging, as
the language is much less constrained compared to existing benchmarks, and
object references are often implicit. The challenge is based on the recent
Talk2Car dataset. This document provides a technical overview of a
model that we released to help participants get started in the competition. The
code can be found at https://github.com/talk2car/Talk2Car.
| 2,020 | Computation and Language |
Leveraging Personal Navigation Assistant Systems Using Automated Social
Media Traffic Reporting | Modern urbanization is demanding smarter technologies to improve a variety of
applications in intelligent transportation systems to relieve the increasing
amount of vehicular traffic congestion and incidents. Existing incident
detection techniques are limited to the use of sensors in the transportation
network and rely on human inputs. Despite its data abundance, social media
is not well exploited in this context. In this paper, we develop an automated
traffic alert system based on Natural Language Processing (NLP) that filters
this flood of information and extracts the important traffic-related items. To
this end, we employ a fine-tuned Bidirectional Encoder Representations from
Transformers (BERT) language embedding model to filter the traffic-related
information from social media. Then, we apply a question-answering model to
extract the necessary information characterizing the reported event, such as its exact
location, occurrence time, and nature. We demonstrate that the adopted
NLP approaches outperform existing approaches and, after effectively
training them, we focus on a real-world situation and show how the developed
approach can, in real time, extract traffic-related information and
automatically convert it into alerts for navigation assistance applications
such as navigation apps.
| 2,020 | Computation and Language |
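A hedged sketch of the two-stage flow described above using off-the-shelf Hugging Face pipelines; the classifier checkpoint name, its label set, and the example post are placeholders (the paper fine-tunes BERT on labeled traffic tweets rather than using a generic model).

```python
from transformers import pipeline

# Stage 1: filter traffic-related posts. The paper fine-tunes BERT on labeled tweets;
# the checkpoint below is a hypothetical placeholder standing in for such a model.
traffic_filter = pipeline("text-classification", model="my-org/bert-traffic-filter")  # placeholder name

# Stage 2: extract event details with an extractive question-answering model.
extractor = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

post = "Major accident on I-95 northbound near exit 12 around 8am, two lanes blocked."
if traffic_filter(post)[0]["label"] == "TRAFFIC":  # label set is assumed
    for field, question in [("location", "Where did the incident happen?"),
                            ("time", "When did the incident happen?"),
                            ("event", "What happened?")]:
        answer = extractor(question=question, context=post)
        print(f"{field:>8s}: {answer['answer']}  (score={answer['score']:.2f})")
```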
Every Document Owns Its Structure: Inductive Text Classification via
Graph Neural Networks | Text classification is fundamental in natural language processing (NLP), and
Graph Neural Networks (GNN) are recently applied in this task. However, the
existing graph-based works can neither capture the contextual word
relationships within each document nor fulfil the inductive learning of new
words. In this work, to overcome such problems, we propose TextING for
inductive text classification via GNN. We first build individual graphs for
each document and then use GNN to learn the fine-grained word representations
based on their local structures, which can also effectively produce embeddings
for unseen words in the new document. Finally, the word nodes are aggregated as
the document embedding. Extensive experiments on four benchmark datasets show
that our method outperforms state-of-the-art text classification methods.
| 2,020 | Computation and Language |
DeepSubQE: Quality estimation for subtitle translations | Quality estimation (QE) for tasks involving language data is hard owing to
numerous aspects of natural language like variations in paraphrasing, style,
grammar, etc. There can be multiple answers with varying levels of
acceptability depending on the application at hand. In this work, we look at
estimating quality of translations for video subtitles. We show how existing QE
methods are inadequate and propose our method DeepSubQE as a system to estimate
quality of translation given subtitles data for a pair of languages. We rely on
various data augmentation strategies for automated labelling and synthesis for
training. We create a hybrid network which learns semantic and syntactic
features of bilingual data and compare it with only-LSTM and only-CNN networks.
Our proposed network outperforms them by a significant margin.
| 2,020 | Computation and Language |
Answer Generation through Unified Memories over Multiple Passages | Machine reading comprehension methods that generate answers by referring to
multiple passages for a question have gained much attention in AI and NLP
communities. The current methods, however, do not investigate the relationships
among multiple passages in the answer generation process, even though topics
correlated among the passages may be answer candidates. Our method, called
neural answer Generation through Unified Memories over Multiple Passages
(GUM-MP), solves this problem as follows. First, it determines which tokens in
the passages are matched to the question. In particular, it investigates
matches between tokens in positive passages, which are assigned to the
question, and those in negative passages, which are not related to the
question. Next, it determines which tokens in the passage are matched to other
passages assigned to the same question and at the same time it investigates the
topics in which they are matched. Finally, it encodes the token sequences with
the above two matching results into unified memories in the passage encoders
and learns the answer sequence by using an encoder-decoder with a
multiple-pointer-generator mechanism. As a result, GUM-MP can generate answers
by pointing to important tokens present across passages. Evaluations indicate
that GUM-MP generates much more accurate results than the current models do.
| 2,020 | Computation and Language |
A Review of Winograd Schema Challenge Datasets and Approaches | The Winograd Schema Challenge is both a commonsense reasoning and natural
language understanding challenge, introduced as an alternative to the Turing
test. A Winograd schema is a pair of sentences differing in one or two words
with a highly ambiguous pronoun, resolved differently in the two sentences,
that appears to require commonsense knowledge to be resolved correctly. The
examples were designed to be easily solvable by humans but difficult for
machines, in principle requiring a deep understanding of the content of the
text and the situation it describes. This paper reviews existing Winograd
Schema Challenge benchmark datasets and approaches that have been published
since its introduction.
| 2,020 | Computation and Language |
Towards an evolutionary-based approach for natural language processing | Tasks related to Natural Language Processing (NLP) have recently been the
focus of a large research endeavor by the machine learning community. The
increased interest in this area is mainly due to the success of deep learning
methods. Genetic Programming (GP), however, was not under the spotlight with
respect to NLP tasks. Here, we propose a first proof-of-concept that combines
GP with the well-established NLP tool word2vec for the next-word prediction
task. The main idea is that, once words have been moved into a vector space,
traditional GP operators can successfully work on vectors, thus producing
meaningful words as the output. To assess the suitability of this approach, we
perform an experimental evaluation on a set of existing newspaper headlines.
Individuals resulting from this (pre-)training phase can be employed as the
initial population in other NLP tasks, like sentence generation, which will be
the focus of future investigations, possibly employing adversarial
co-evolutionary approaches.
| 2,020 | Computation and Language |
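A toy sketch of the core trick the abstract above describes: applying GP-style operators directly in embedding space and decoding the resulting vector back to the nearest vocabulary word. The embedding table is random here; in practice it would be word2vec vectors, and the crossover operator shown (vector averaging) is just one plausible choice.

```python
import numpy as np

# Toy word-embedding table; in practice these would be pretrained word2vec vectors.
rng = np.random.default_rng(1)
vocab = ["cat", "dog", "car", "truck", "banana"]
embeddings = {w: rng.normal(size=25) for w in vocab}

def nearest_word(vector):
    """Decode an arbitrary vector back to the closest vocabulary word (cosine similarity)."""
    def cos(u, v):
        return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(vocab, key=lambda w: cos(embeddings[w], vector))

def crossover(parent_a, parent_b):
    """A simple GP-style crossover in vector space: the midpoint of two parent word vectors."""
    return (embeddings[parent_a] + embeddings[parent_b]) / 2.0

child_vector = crossover("cat", "dog")
print("offspring word:", nearest_word(child_vector))
```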
Data Annealing for Informal Language Understanding Tasks | There is a huge performance gap between formal and informal language
understanding tasks. The recent pre-trained models that improved the
performance of formal language understanding tasks did not achieve a comparable
result on informal language. We propose a data annealing transfer learning
procedure to bridge the performance gap on informal natural language
understanding tasks. It successfully utilizes a pre-trained model such as BERT
in informal language. In our data annealing procedure, the training set
contains mainly formal text data at first; then, the proportion of the informal
text data is gradually increased during the training process. Our data
annealing procedure is model-independent and can be applied to various tasks.
We validate its effectiveness in exhaustive experiments. When BERT is
implemented with our learning procedure, it outperforms all the
state-of-the-art models on the three common informal language tasks.
| 2,020 | Computation and Language |
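A minimal sketch of the data annealing idea described above: batches start out mostly formal text and the informal proportion grows during training. The linear schedule and its endpoints are assumptions; the paper's exact schedule may differ.

```python
import random

def annealed_batches(formal, informal, n_steps, batch_size=8,
                     start_informal=0.1, end_informal=0.9, seed=0):
    """Yield training batches whose informal-text proportion grows linearly over training
    (a simplified version of a data annealing schedule; exact schedule is an assumption)."""
    rng = random.Random(seed)
    for step in range(n_steps):
        p_informal = start_informal + (end_informal - start_informal) * step / max(1, n_steps - 1)
        batch = [rng.choice(informal) if rng.random() < p_informal else rng.choice(formal)
                 for _ in range(batch_size)]
        yield step, p_informal, batch

formal_texts = ["The committee approved the proposal.", "Results are reported in Table 2."]
informal_texts = ["lol that movie was sooo good", "idk, maybe tmrw??"]
for step, p, batch in annealed_batches(formal_texts, informal_texts, n_steps=5):
    print(f"step {step}: p_informal={p:.2f}  batch={batch}")
```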
A Tailored Pre-Training Model for Task-Oriented Dialog Generation | The recent success of large pre-trained language models such as BERT and
GPT-2 has suggested the effectiveness of incorporating language priors in
downstream dialog generation tasks. However, the performance of pre-trained
models on the dialog task is not as optimal as expected. In this paper, we
propose a Pre-trained Role Alternating Language model (PRAL), designed
specifically for task-oriented conversational systems. We adopted (Wu et al.,
2019) that models two speakers separately. We also design several techniques,
such as start position randomization, knowledge distillation, and history
discount to improve pre-training performance. We introduce a task-oriented
dialog pretraining dataset by cleaning 13 existing data sets. We test PRAL on
three different downstream tasks. The results show that PRAL performs better or
on par with state-of-the-art methods.
| 2,020 | Computation and Language |
How Chaotic Are Recurrent Neural Networks? | Recurrent neural networks (RNNs) are non-linear dynamic systems. Previous
work suggests that RNNs may suffer from the phenomenon of chaos, where the
system is sensitive to initial states and unpredictable in the long run. In
this paper, however, we perform a systematic empirical analysis, showing that a
vanilla or long short-term memory (LSTM) RNN does not exhibit chaotic behavior
along the training process in real applications such as text generation. Our
findings suggest that future work in this direction should address the other
side of non-linear dynamics for RNN.
| 2,020 | Computation and Language |
Neural translation and automated recognition of ICD10 medical entities
from natural language | The recognition of medical entities from natural language is a ubiquitous
problem in the medical field, with applications ranging from medical act coding
to the analysis of electronic health data for public health. It is however a
complex task usually requiring human expert intervention, thus making it
expensive and time-consuming. The recent advances in artificial intelligence,
specifically the rise of deep learning methods, have enabled computers to make
efficient decisions on a number of complex problems, with the notable example
of neural sequence models and their powerful applications in natural language
processing. They however require a considerable amount of data to learn from,
which is typically their main limiting factor. However, the CépiDc stores an
exhaustive database of death certificates at the French national scale,
amounting to several millions of natural language examples provided with their
associated human coded medical entities available to the machine learning
practitioner. This article investigates the applications of deep neural
sequence models to the medical entity recognition from natural language
problem.
| 2,020 | Computation and Language |
Using LSTM to Translate French to Senegalese Local Languages: Wolof as a
Case Study | In this paper, we propose a neural machine translation system for Wolof, a
low-resource Niger-Congo language. First we gathered a parallel corpus of 70000
aligned French-Wolof sentences. Then we developed a baseline LSTM-based
encoder-decoder architecture which was further extended to bidirectional LSTMs
with attention mechanisms. Our models are trained on a limited amount of
parallel French-Wolof data of approximately 35000 parallel sentences.
Experimental results on French-Wolof translation tasks show that our approach
produces promising translations in extremely low-resource conditions. The best
model was able to achieve a good BLEU score of 47%.
| 2,020 | Computation and Language |
Neural Networks for Projecting Named Entities from English to Ewondo | Named entity recognition is an important task in natural language processing.
It is very well studied for high-resource languages, but still underexplored for
low-resource languages. The main reason is that existing techniques
require a lot of annotated data to reach good performance. Recently, a new
distributional representation of words has been proposed to project named
entities from a rich language to a low-resource one. This representation has
been coupled to a neural network in order to project named entities from
English to Ewondo, a Bantu language spoken in Cameroon. Although the proposed
method reached appreciable results, the size of the used neural network was too
large compared to the size of the dataset. Furthermore the impact of the model
parameters has not been studied. In this paper, we show experimentally that the
same results can be obtained using a smaller neural network. We also emphasize
the parameters that are highly correlated to the network performance. This work
is a step forward to build a reliable and robust network architecture for named
entity projection in low resource languages.
| 2,020 | Computation and Language |
Low resource language dataset creation, curation and classification:
Setswana and Sepedi -- Extended Abstract | The recent advances in Natural Language Processing have only been a boon for
well-represented languages, sidelining research on lesser-known global languages.
This is in part due to the availability of curated data and research resources.
One of the current challenges concerning low-resourced languages is the lack of clear
guidelines on the collection, curation and preparation of datasets for
different use-cases. In this work, we take on the task of creating two datasets
that are focused on news headlines (i.e short text) for Setswana and Sepedi and
the creation of a news topic classification task from these datasets. In this
study, we document our work, propose baselines for classification, and
investigate a data augmentation approach better suited to low-resourced
languages in order to improve the performance of the classifiers.
| 2,020 | Computation and Language |
Template-based Question Answering using Recursive Neural Networks | We propose a neural network-based approach to automatically learn and
classify natural language questions into their corresponding templates using
recursive neural networks. An obvious advantage of using neural networks is the
elimination of the need for laborious feature engineering that can be
cumbersome and error-prone. The input question is encoded into a vector
representation. The model is trained and evaluated on the LC-QuAD dataset
(Large-scale Complex Question Answering Dataset). The LC-QuAD queries are
annotated based on 38 unique templates that the model attempts to classify. The
resulting model is evaluated against both the LC-QuAD dataset and the 7th
Question Answering Over Linked Data (QALD-7) dataset. The recursive neural
network achieves template classification accuracy of 0.828 on the LC-QuAD
dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most
likely templates are considered, the model achieves an accuracy of 0.945 on the
LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the
overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a
macro F-score of 0.417 on the QALD-7 dataset.
| 2,020 | Computation and Language |
Decomposing Word Embedding with the Capsule Network | Word sense disambiguation tries to learn the appropriate sense of an
ambiguous word in a given context. Existing pre-trained language methods
and methods based on multiple embeddings per word do not sufficiently exploit
the power of unsupervised word embeddings.
In this paper, we discuss a capsule network-based approach, taking advantage
of the capsule network's potential for recognizing highly overlapping features and dealing
with segmentation. We propose a Capsule network-based method to Decompose the
unsupervised word Embedding of an ambiguous word into context specific Sense
embedding, called CapsDecE2S. In this approach, the unsupervised ambiguous
embedding is fed into a capsule network to produce its multiple morpheme-like
vectors, which are defined as the basic semantic language units of meaning.
With attention operations, CapsDecE2S integrates the word context to
reconstruct the multiple morpheme-like vectors into the context-specific sense
embedding. To train CapsDecE2S, we propose a sense matching training method. In
this method, we convert the sense learning into a binary classification that
explicitly learns the relation between senses by the label of matching and
non-matching. The CapsDecE2S was experimentally evaluated on two sense learning
tasks, i.e., word in context and word sense disambiguation. Results on two
public corpora Word-in-Context and English all-words Word Sense Disambiguation
show that the CapsDecE2S model achieves the new state of the art for the word
in context and word sense disambiguation tasks.
| 2,020 | Computation and Language |
DARE: Data Augmented Relation Extraction with GPT-2 | Real-world Relation Extraction (RE) tasks are challenging to deal with,
either due to limited training data or class imbalance issues. In this work, we
present Data Augmented Relation Extraction (DARE), a simple method to augment
training data by properly fine-tuning GPT-2 to generate examples for specific
relation types. The generated training data is then used in combination with
the gold dataset to train a BERT-based RE classifier. In a series of
experiments we show the advantages of our method, which leads to improvements
of up to 11 F1 points over a strong baseline. Also, DARE achieves a new
state of the art on three widely used biomedical RE datasets, surpassing the
previous best results by 4.7 F1 points on average.
| 2,020 | Computation and Language |
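A hedged sketch of the augmentation step only: prompting GPT-2 (here the base checkpoint, not a per-relation fine-tuned one as in DARE) for candidate sentences of a target relation. The prompt and outputs are purely illustrative and would still need filtering before being mixed with gold data.

```python
from transformers import pipeline

# DARE fine-tunes GPT-2 on sentences of a single relation type; here we only sketch the
# generation step with the base model, so the outputs are illustrative rather than usable.
generator = pipeline("text-generation", model="gpt2")

# Prompt with gold examples of the target relation and let the model continue.
prompt = ("Sentences expressing the relation founded_by:\n"
          "Microsoft was founded by Bill Gates.\n"
          "Apple was founded by Steve Jobs.\n")
samples = generator(prompt, max_new_tokens=30, num_return_sequences=3,
                    do_sample=True, top_p=0.95)

synthetic = [s["generated_text"][len(prompt):].strip().split("\n")[0] for s in samples]
print("candidate synthetic training sentences:")
for sentence in synthetic:
    print(" -", sentence)
# These candidates would then be mixed with the gold data to train a BERT-based RE classifier.
```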
Character-level Japanese Text Generation with Attention Mechanism for
Chest Radiography Diagnosis | Chest radiography is a general method for diagnosing a patient's condition
and identifying important information; therefore, radiography is used
extensively in routine medical practice in various situations, such as
emergency medical care and medical checkup. However, a high level of expertise
is required to interpret chest radiographs. Thus, medical specialists spend
considerable time in diagnosing such huge numbers of radiographs. In order to
solve these problems, methods for generating findings have been proposed.
However, the study of generating chest radiograph findings has primarily
focused on the English language, and to the best of our knowledge, no studies
have examined Japanese data on this subject. There are two challenges involved
in generating findings in the Japanese language. The first challenge is that
word splitting is difficult because word boundaries in Japanese are not
clear. The second challenge is that there are numerous orthographic variants.
To deal with these two challenges, we proposed an end-to-end model that
generates Japanese findings at the character-level from chest radiographs. In
addition, we introduced the attention mechanism to improve not only the
accuracy, but also the interpretability of the results. We evaluated the
proposed method using a public dataset with Japanese findings. The
effectiveness of the proposed method was confirmed using the Bilingual
Evaluation Understudy score. Moreover, the generated findings showed
that the proposed method was able to handle the orthographic variants.
Furthermore, we confirmed via visual inspection that the attention mechanism
captures the features and positional information of radiographs.
| 2,020 | Computation and Language |
Word Equations: Inherently Interpretable Sparse Word Embeddings through
Sparse Coding | Word embeddings are a powerful natural language processing technique, but
they are extremely difficult to interpret. To enable interpretable NLP models,
we create vectors where each dimension is inherently interpretable. By
inherently interpretable, we mean a system where each dimension is associated
with some human understandable hint that can describe the meaning of that
dimension. In order to create more interpretable word embeddings, we transform
pretrained dense word embeddings into sparse embeddings. These new embeddings
are inherently interpretable: each of their dimensions is created from and
represents a natural language word or specific grammatical concept. We
construct these embeddings through sparse coding, where each vector in the
basis set is itself a word embedding. Therefore, each dimension of our sparse
vectors corresponds to a natural language word. We also show that models
trained using these sparse embeddings can achieve good performance and are more
interpretable in practice, including through human evaluations.
| 2,021 | Computation and Language |
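A small sketch of the sparse-coding step suggested by the abstract above: decomposing a dense word vector over a basis whose atoms are themselves word embeddings, so each nonzero coefficient names a human-readable dimension. The vectors below are random stand-ins for pretrained embeddings, and Lasso is only one of several possible sparse solvers.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy stand-ins for pretrained dense embeddings; rows index the basis vocabulary.
rng = np.random.default_rng(0)
basis_words = ["animal", "pet", "vehicle", "fruit", "small"]
basis = rng.normal(size=(len(basis_words), 100))  # each basis atom is itself a word vector

# Dense vector for a target word, synthesized here as a mix of "animal", "pet" and "small".
target = 0.6 * basis[0] + 0.5 * basis[1] + 0.2 * basis[4] + rng.normal(scale=0.01, size=100)

# Sparse coding: solve target ~ basis.T @ codes with an L1 penalty, so that each nonzero
# entry of `codes` corresponds to a human-readable basis word.
lasso = Lasso(alpha=0.01, positive=True, max_iter=10_000)
lasso.fit(basis.T, target)

for word, weight in zip(basis_words, lasso.coef_):
    if weight > 1e-3:
        print(f"{word:>8s}: {weight:.2f}")
```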
A Natural Language Processing Pipeline of Chinese Free-text Radiology
Reports for Liver Cancer Diagnosis | Despite the rapid development of natural language processing (NLP)
implementation in electronic medical records (EMRs), Chinese EMRs processing
remains challenging due to the limited corpus and specific grammatical
characteristics, especially for radiology reports. In this study, we designed
an NLP pipeline for the direct extraction of clinically relevant features from
Chinese radiology reports, which is the first key step in computer-aided
radiologic diagnosis. The pipeline was composed of named entity recognition,
synonyms normalization, and relationship extraction to finally derive the
radiological features composed of one or more terms. In named entity
recognition, we incorporated a lexicon into a bidirectional
long short-term memory-conditional random field (BiLSTM-CRF) deep learning model, and the model
finally achieved an F1 score of 93.00%. With the extracted radiological
features, least absolute shrinkage and selection operator and machine learning
methods (support vector machine, random forest, decision tree, and logistic
regression) were used to build the classifiers for liver cancer prediction. Among
these, random forest had the highest predictive performance in
liver cancer diagnosis (F1 score 86.97%, precision 87.71%, and recall 86.25%).
This work was a comprehensive NLP study focusing on Chinese radiology reports
and the application of NLP in cancer risk prediction. The proposed NLP pipeline
for the radiological feature extraction could be easily implemented in other
kinds of Chinese clinical texts and other disease predictive tasks.
| 2,020 | Computation and Language |
Cross-lingual Zero- and Few-shot Hate Speech Detection Utilising Frozen
Transformer Language Models and AXEL | Detecting hate speech, especially in low-resource languages, is a non-trivial
challenge. To tackle this, we developed a tailored architecture based on
frozen, pre-trained Transformers to examine cross-lingual zero-shot and
few-shot learning, in addition to uni-lingual learning, on the HatEval
challenge data set. With our novel attention-based classification block AXEL,
we demonstrate highly competitive results on the English and Spanish subsets.
We also re-sample the English subset, enabling additional, meaningful
comparisons in the future.
| 2,020 | Computation and Language |
Sentiment Analysis of Yelp Reviews: A Comparison of Techniques and
Models | We use over 350,000 Yelp reviews on 5,000 restaurants to perform an ablation
study on text preprocessing techniques. We also compare the effectiveness of
several machine learning and deep learning models on predicting user sentiment
(negative, neutral, or positive). For machine learning models, we find that
using binary bag-of-word representation, adding bi-grams, imposing minimum
frequency constraints and normalizing texts have positive effects on model
performance. For deep learning models, we find that using pre-trained word
embeddings and capping maximum length often boost model performance. Finally,
using macro F1 score as our comparison metric, we find simpler models such as
Logistic Regression and Support Vector Machine to be more effective at
predicting sentiments than more complex models such as Gradient Boosting, LSTM
and BERT.
| 2,020 | Computation and Language |
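A minimal, self-contained sketch (on a tiny toy corpus, not the Yelp data) of the machine-learning setup the abstract above reports as effective: a binary bag-of-words with bi-grams and a minimum-frequency cutoff feeding a logistic regression, scored with macro F1. Hyperparameters are illustrative only.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Tiny illustrative corpus; the study above uses >350k Yelp reviews with 3 sentiment classes.
texts = ["great food and friendly staff", "terrible service, never again",
         "it was okay, nothing special", "amazing pasta, will come back",
         "cold food and rude waiter", "average place, decent prices"] * 20
labels = ["pos", "neg", "neu", "pos", "neg", "neu"] * 20

# Preprocessing choices the abstract reports as helpful: binary bag-of-words,
# added bi-grams, and a minimum document-frequency cutoff.
vectorizer = CountVectorizer(binary=True, ngram_range=(1, 2), min_df=2, lowercase=True)
X = vectorizer.fit_transform(texts)

X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("macro F1:", round(f1_score(y_test, clf.predict(X_test), average="macro"), 3))
```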
TXtract: Taxonomy-Aware Knowledge Extraction for Thousands of Product
Categories | Extracting structured knowledge from product profiles is crucial for various
applications in e-Commerce. State-of-the-art approaches for knowledge
extraction were each designed for a single category of product, and thus do not
apply to real-life e-Commerce scenarios, which often contain thousands of
diverse categories. This paper proposes TXtract, a taxonomy-aware knowledge
extraction model that applies to thousands of product categories organized in a
hierarchical taxonomy. Through category conditional self-attention and
multi-task learning, our approach is both scalable, as it trains a single model
for thousands of categories, and effective, as it extracts category-specific
attribute values. Experiments on products from a taxonomy with 4,000 categories
show that TXtract outperforms state-of-the-art approaches by up to 10% in F1
and 15% in coverage across all categories.
| 2,020 | Computation and Language |
The Explanation Game: Towards Prediction Explainability through Sparse
Communication | Explainability is a topic of growing importance in NLP. In this work, we
provide a unified perspective of explainability as a communication problem
between an explainer and a layperson about a classifier's decision. We use this
framework to compare several prior approaches for extracting explanations,
including gradient methods, representation erasure, and attention mechanisms,
in terms of their communication success. In addition, we reinterpret these
methods in the light of classical feature selection, and we use this as
inspiration to propose new embedded methods for explainability, through the use
of selective, sparse attention. Experiments in text classification, natural
language entailment, and machine translation, using different configurations of
explainers and laypeople (including both machines and humans), reveal an
advantage of attention-based explainers over gradient and erasure methods.
Furthermore, human evaluation experiments show promising results with post-hoc
explainers trained to optimize communication success and faithfulness.
| 2,020 | Computation and Language |
Analyzing Political Parody in Social Media | Parody is a figurative device used to imitate an entity for comedic or
critical purposes and represents a widespread phenomenon in social media
through many popular parody accounts. In this paper, we present the first
computational study of parody. We introduce a new publicly available data set
of tweets from real politicians and their corresponding parody accounts. We run
a battery of supervised machine learning models for automatically detecting
parody tweets with an emphasis on robustness by testing on tweets from accounts
unseen in training, across different genders and across countries. Our results
show that political parody tweets can be predicted with an accuracy of up to 90%.
Finally, we identify the markers of parody through a linguistic analysis.
Beyond research in linguistics and political communication, accurately and
automatically detecting parody is important to improving fact checking for
journalists and analytics such as sentiment analysis through filtering out
parodical utterances.
| 2,020 | Computation and Language |
Synonymy = Translational Equivalence | Synonymy and translational equivalence are the relations of sameness of
meaning within and across languages. As the principal relations in wordnets and
multi-wordnets, they are vital to computational lexical semantics, yet the
field suffers from the absence of a common formal framework to define their
properties and relationship. This paper proposes a unifying treatment of these
two relations, which is validated by experiments on existing resources. In our
view, synonymy and translational equivalence are simply different types of
semantic identity. The theory establishes a solid foundation for critically
re-evaluating prior work in cross-lingual semantics, and facilitating the
creation, verification, and amelioration of lexical resources.
| 2,020 | Computation and Language |
LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon
Induction Through Non-Linear Mapping in Latent Space | Most of the successful and predominant methods for bilingual lexicon
induction (BLI) are mapping-based, where a linear mapping function is learned
with the assumption that the word embedding spaces of different languages
exhibit similar geometric structures (i.e., approximately isomorphic). However,
several recent studies have criticized this simplified assumption showing that
it does not hold in general even for closely related languages. In this work,
we propose a novel semi-supervised method to learn cross-lingual word
embeddings for BLI. Our model is independent of the isomorphic assumption and
uses nonlinear mapping in the latent space of two independently trained
auto-encoders. Through extensive experiments on fifteen (15) different language
pairs (in both directions) comprising resource-rich and low-resource languages
from two different datasets, we demonstrate that our method outperforms
existing models by a good margin. Ablation studies show the importance of
different model components and the necessity of non-linear mapping.
| 2,020 | Computation and Language |
Empower Entity Set Expansion via Language Model Probing | Entity set expansion, aiming at expanding a small seed entity set with new
entities belonging to the same semantic class, is a critical task that benefits
many downstream NLP and IR applications, such as question answering, query
understanding, and taxonomy construction. Existing set expansion methods
bootstrap the seed entity set by adaptively selecting context features and
extracting new entities. A key challenge for entity set expansion is to avoid
selecting ambiguous context features which will shift the class semantics and
lead to accumulative errors in later iterations. In this study, we propose a
novel iterative set expansion framework that leverages automatically generated
class names to address the semantic drift issue. In each iteration, we select
one positive and several negative class names by probing a pre-trained language
model, and further score each candidate entity based on selected class names.
Experiments on two datasets show that our framework generates high-quality
class names and outperforms previous state-of-the-art methods significantly.
| 2,020 | Computation and Language |
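A simplified illustration (not the paper's full iterative framework) of probing a masked language model with a Hearst-style pattern to propose class names for a seed entity set; the pattern and model checkpoint are just common defaults.

```python
from transformers import pipeline

# Probe a masked LM with a Hearst-style pattern to generate candidate class names
# for the seed set; a simplified sketch of LM probing, not the full iterative framework.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

seeds = ["seattle", "chicago", "boston"]
prompt = f"{seeds[0]}, {seeds[1]}, {seeds[2]} and other [MASK]."
for candidate in unmasker(prompt, top_k=5):
    print(f"{candidate['token_str']:>12s}  score={candidate['score']:.3f}")

# A selected class name (e.g. "cities") could then be used to score new candidate entities,
# for example by checking how well "Denver and other cities." fits under the LM, and the
# highest-scoring entities would be added to the seed set for the next iteration.
```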