Titles (string, 6–220 chars) | Abstracts (string, 37–3.26k chars) | Years (int64, 1.99k–2.02k) | Categories (stringclasses, 1 value) |
---|---|---|---|
How much complexity does an RNN architecture need to learn
syntax-sensitive dependencies? | Long short-term memory (LSTM) networks and their variants are capable of
encapsulating long-range dependencies, which is evident from their performance
on a variety of linguistic tasks. On the other hand, simple recurrent networks
(SRNs), which appear more biologically grounded in terms of synaptic
connections, have generally been less successful at capturing long-range
dependencies as well as the loci of grammatical errors in an unsupervised
setting. In this paper, we seek to develop models that bridge the gap between
biological plausibility and linguistic competence. We propose a new
architecture, the Decay RNN, which incorporates the decaying nature of neuronal
activations and models the excitatory and inhibitory connections in a
population of neurons. Besides its biological inspiration, our model also shows
competitive performance relative to LSTMs on subject-verb agreement, sentence
grammaticality, and language modeling tasks. These results provide some
pointers towards probing the nature of the inductive biases required for RNN
architectures to model linguistic phenomena successfully.
| 2020 | Computation and Language |
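The Decay RNN abstract above describes a recurrent cell whose activations decay over time. Below is a minimal leaky-integration sketch of that idea; the update rule, the decay parameter `alpha`, and the treatment of excitatory/inhibitory weights are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def decay_rnn_step(x_t, h_prev, W_in, W_rec, alpha=0.9):
    """One illustrative 'decay' update: the hidden state leaks toward the new
    pre-activation instead of being fully overwritten (assumed form)."""
    pre = np.tanh(W_in @ x_t + W_rec @ h_prev)
    return alpha * h_prev + (1.0 - alpha) * pre  # exponential decay of past activity

# toy dimensions: 5-d inputs, 8-d hidden state
rng = np.random.default_rng(0)
W_in, W_rec = rng.normal(size=(8, 5)), rng.normal(size=(8, 8))
h = np.zeros(8)
for x in rng.normal(size=(3, 5)):   # a 3-step input sequence
    h = decay_rnn_step(x, h, W_in, W_rec)
```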
Building a Hebrew Semantic Role Labeling Lexical Resource from Parallel
Movie Subtitles | We present a semantic role labeling resource for Hebrew built
semi-automatically through annotation projection from English. This corpus is
derived from the multilingual OpenSubtitles dataset and includes short informal
sentences, for which reliable linguistic annotations have been computed. We
provide a fully annotated version of the data including morphological analysis,
dependency syntax and semantic role labeling in both FrameNet and PropBank
styles. Sentences are aligned between English and Hebrew; both sides include
full annotations and an explicit mapping from the English arguments to the
Hebrew ones. We train a neural SRL model on this Hebrew resource, exploiting the
pre-trained multilingual BERT transformer model, and provide the first
available baseline model for Hebrew SRL as a reference point. The code we
provide is generic and can be adapted to other languages to bootstrap SRL
resources.
| 2020 | Computation and Language |
Speech to Text Adaptation: Towards an Efficient Cross-Modal Distillation | Speech is one of the most effective means of communication and is full of
information that helps convey the utterer's thoughts. However, mainly due to
the cumbersome processing of acoustic features, phoneme or word posterior
probabilities have frequently been discarded in natural language
understanding. Thus, some recent spoken language understanding (SLU) modules
have adopted end-to-end structures that preserve this uncertainty information,
which further reduces the propagation of speech recognition errors and
guarantees computational efficiency. We claim that, in this process, speech
comprehension can benefit from inference with massive pre-trained language
models (LMs). Based on recent cross-modal distillation methodologies, we
transfer knowledge from a Transformer-based text LM to an SLU module that may
face a data shortage. We demonstrate the validity of our proposal on Fluent
Speech Command, an English SLU benchmark, and thereby experimentally verify
our hypothesis that knowledge can be shared from the top layer of the LM to a
fully speech-based module, in which the abstracted speech is expected to meet
the semantic representation.
| 2020 | Computation and Language |
LiSSS: A toy corpus of Spanish Literary Sentences for Emotions detection | In this work we present a new small dataset in the Computational Creativity (CC)
field: the Spanish Literary Sentences for emotion detection corpus (LiSSS). We
address this corpus of literary sentences in order to evaluate and design
emotion classification and detection algorithms. We constituted the corpus by
manually classifying the sentences into a set of emotions: Love, Fear,
Happiness, Anger and Sadness/Pain. We also present some baseline classification
algorithms applied to our corpus. The LiSSS corpus will be available to the
community as a free resource to evaluate or create CC-like algorithms.
| 2020 | Computation and Language |
Support-BERT: Predicting Quality of Question-Answer Pairs in MSDN using
Deep Bidirectional Transformer | The quality of questions and answers on community support websites (e.g.,
the Microsoft Developers Network, Stack Overflow, GitHub) is difficult to
define, and a model that predicts question and answer quality is even more
challenging to implement. Previous works have addressed question quality
models and answer quality models separately, using meta-features such as the
number of up-votes, the trustworthiness of the person posting the question or
answer, the title of the post, and context-naive natural language processing
features. However, the literature lacks an integrated question-answer quality
model for community question answering websites. In this brief paper, we
tackle Q&A quality modeling for community support websites using a recently
developed deep learning model based on bidirectional transformers. We
investigate the applicability of transfer learning to Q&A quality modeling
using Bidirectional Encoder Representations from Transformers (BERT),
originally trained on separate tasks over Wikipedia. We find that further
pre-training of the BERT model, along with fine-tuning on Q&As extracted from
the Microsoft Developer Network (MSDN), can boost the performance of automated
quality prediction to more than 80%. Furthermore, we describe the
implementation work carried out to deploy the fine-tuned model in a real-time
scenario using AzureML in the Azure knowledge base system.
| 2020 | Computation and Language |
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data | Recent years have witnessed the burgeoning of pretrained language models
(LMs) for text-based natural language (NL) understanding tasks. Such models are
typically trained on free-form NL text, hence may not be suitable for tasks
like semantic parsing over structured data, which require reasoning over both
free-form NL questions and structured tabular data (e.g., database tables). In
this paper we present TaBERT, a pretrained LM that jointly learns
representations for NL sentences and (semi-)structured tables. TaBERT is
trained on a large corpus of 26 million tables and their English contexts. In
experiments, neural semantic parsers using TaBERT as feature representation
layers achieve new best results on the challenging weakly-supervised semantic
parsing benchmark WikiTableQuestions, while performing competitively on the
text-to-SQL dataset Spider. Implementation of the model will be available at
http://fburl.com/TaBERT .
| 2020 | Computation and Language |
Context-Based Quotation Recommendation | While composing a new document, anything from a news article to an email or
essay, authors often utilize direct quotes from a variety of sources. Although
an author may know what point they would like to make, selecting an appropriate
quote for the specific context may be time-consuming and difficult. We
therefore propose a novel context-aware quote recommendation system which
utilizes the content an author has already written to generate a ranked list of
quotable paragraphs and spans of tokens from a given source document.
We approach quote recommendation as a variant of open-domain question
answering and adapt the state-of-the-art BERT-based methods from open-QA to our
task. We conduct experiments on a collection of speech transcripts and
associated news articles, evaluating models' paragraph ranking and span
prediction performances. Our experiments confirm the strong performance of
BERT-based methods on this task, which outperform bag-of-words and neural
ranking baselines by more than 30% relative across all ranking metrics.
Qualitative analyses show the difficulty of the paragraph and span
recommendation tasks and confirm the quotability of the best BERT model's
predictions, even if they are not the true selected quotes from the original
news articles.
| 2020 | Computation and Language |
Cross-Lingual Word Embeddings for Turkic Languages | There has been an increasing interest in learning cross-lingual word
embeddings to transfer knowledge obtained from a resource-rich language, such
as English, to lower-resource languages for which annotated data is scarce,
such as Turkish, Russian, and many others. In this paper, we present the first
viability study of established techniques to align monolingual embedding spaces
for Turkish, Uzbek, Azeri, Kazakh and Kyrgyz, members of the Turkic family
which is heavily affected by the low-resource constraint. Those techniques are
known to require little explicit supervision, mainly in the form of bilingual
dictionaries, hence being easily adaptable to different domains, including
low-resource ones. We obtain new bilingual dictionaries and new word embeddings
for these languages and show the steps for obtaining cross-lingual word
embeddings using state-of-the-art techniques. Then, we evaluate the results
using the bilingual dictionary induction task. Our experiments confirm that the
obtained bilingual dictionaries outperform previously-available ones, and that
word embeddings from a low-resource language can benefit from resource-rich
closely-related languages when they are aligned together. Furthermore,
evaluation on an extrinsic task (sentiment analysis in Uzbek) shows that
monolingual word embeddings can benefit, albeit slightly, from cross-lingual
alignments.
| 2020 | Computation and Language |
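The alignment techniques referenced above typically learn a linear map between monolingual embedding spaces from a bilingual seed dictionary. Below is a minimal orthogonal-Procrustes sketch of that idea; the dictionary size, dimensionality, and language pair are placeholders, and the paper's exact method may differ.

```python
import numpy as np

def procrustes_align(X_src, Y_tgt):
    """Closed-form orthogonal map W minimizing ||X_src @ W - Y_tgt||_F, where
    rows of X_src/Y_tgt are embeddings of dictionary translation pairs."""
    U, _, Vt = np.linalg.svd(X_src.T @ Y_tgt)
    return U @ Vt

# hypothetical 300-d embeddings for 5,000 seed pairs (e.g. Uzbek -> Turkish)
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(5000, 300)), rng.normal(size=(5000, 300))
W = procrustes_align(X, Y)
X_mapped = X @ W   # source vectors now live in the target space (compare by cosine)
```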
MixingBoard: a Knowledgeable Stylized Integrated Text Generation
Platform | We present MixingBoard, a platform for quickly building demos with a focus on
knowledge grounded stylized text generation. We unify existing text generation
algorithms in a shared codebase and further adapt earlier algorithms for
constrained generation. To borrow advantages from different models, we
implement strategies for cross-model integration, from the token probability
level to the latent space level. An interface to external knowledge is provided
via a module that retrieves on-the-fly relevant knowledge from passages on the
web or any document collection. A user interface for local development, remote
webpage access, and a RESTful API are provided to make it simple for users to
build their own demos.
| 2020 | Computation and Language |
Syntax-guided Controlled Generation of Paraphrases | Given a sentence (e.g., "I like mangoes") and a constraint (e.g., sentiment
flip), the goal of controlled text generation is to produce a sentence that
adapts the input sentence to meet the requirements of the constraint (e.g., "I
hate mangoes"). Going beyond such simple constraints, recent works have started
exploring the incorporation of complex syntactic-guidance as constraints in the
task of controlled paraphrase generation. In these methods, syntactic-guidance
is sourced from a separate exemplar sentence. However, these prior works have
only utilized limited syntactic information available in the parse tree of the
exemplar sentence. We address this limitation in this paper and propose the Syntax
Guided Controlled Paraphraser (SGCP), an end-to-end framework for syntactic
paraphrase generation. We find that SGCP can generate syntax conforming
sentences while not compromising on relevance. We perform extensive automated
and human evaluations over multiple real-world English language datasets to
demonstrate the efficacy of SGCP over state-of-the-art baselines. To drive
future research, we have made SGCP's source code available.
| 2020 | Computation and Language |
Text Classification with Few Examples using Controlled Generalization | Training data for text classification is often limited in practice,
especially for applications with many output classes or involving many related
classification problems. This means classifiers must generalize from limited
evidence, but the manner and extent of generalization is task dependent.
Current practice primarily relies on pre-trained word embeddings to map words
unseen in training to similar seen ones. Unfortunately, this squishes many
components of meaning into a highly restricted capacity. Our alternative begins
with sparse pre-trained representations derived from unlabeled parsed corpora;
based on the available training data, we select features that offer the
relevant generalizations. This produces task-specific semantic vectors; here,
we show that a feed-forward network over these vectors is especially effective
in low-data scenarios, compared to existing state-of-the-art methods. By
further pairing this network with a convolutional neural network, we keep this
edge in low-data scenarios and remain competitive when using full training
sets.
| 2019 | Computation and Language |
Towards Question Format Independent Numerical Reasoning: A Set of
Prerequisite Tasks | Numerical reasoning is often important to accurately understand the world.
Recently, several format-specific datasets have been proposed, such as
numerical reasoning in the settings of Natural Language Inference (NLI),
Reading Comprehension (RC), and Question Answering (QA). Several
format-specific models and architectures in response to those datasets have
also been proposed. However, there is a strong need for a benchmark that can
evaluate models' ability to perform question-format-independent numerical
reasoning, because (i) the numerical reasoning capabilities we want to teach
are not controlled by question formats, and (ii) for numerical reasoning
technology to have the best possible application, it must be able to process
language and reason in a way that is not exclusive to a single format, task,
dataset or domain. In pursuit of this goal, we introduce NUMBERGAME, a
multifaceted benchmark to evaluate model performance across numerical reasoning
tasks in eight diverse formats. We include four existing question types in our
compilation. Two of the new types we add concern questions that require
external numerical knowledge, commonsense knowledge and domain knowledge. To
build a more practical numerical reasoning system, NUMBERGAME demands four
capabilities beyond numerical reasoning: (i) detecting the question format
directly from data, (ii) finding an intermediate common format to which every
format can be converted, (iii) incorporating commonsense knowledge, and (iv)
handling data imbalance across formats. We build several baselines, including
a new model based on knowledge hunting using a cheatsheet. All baselines
perform poorly compared to the human baselines, indicating the hardness of our
benchmark. Our work takes forward the recent progress in generic system
development, demonstrating the scope of these under-explored tasks.
| 2020 | Computation and Language |
Yseop at SemEval-2020 Task 5: Cascaded BERT Language Model for
Counterfactual Statement Analysis | In this paper, we explore strategies to detect and evaluate counterfactual
sentences. We describe our system for SemEval-2020 Task 5: Modeling Causal
Reasoning in Language: Detecting Counterfactuals. We use a BERT base model for
the classification task and build a hybrid BERT Multi-Layer Perceptron system
to handle the sequence identification task. Our experiments show that while
introducing syntactic and semantic features does little to improve the system
on the classification task, using these types of features as cascaded linear
inputs to fine-tune the sequence-delimiting ability of the model ensures that
it outperforms other similar-purpose complex systems, such as BiLSTM-CRF, on
the second task. Our system achieves an F1 score of 85.00% in Task 1 and 83.90% in Task 2.
| 2020 | Computation and Language |
Efficient Wait-k Models for Simultaneous Machine Translation | Simultaneous machine translation consists in starting output generation
before the entire input sequence is available. Wait-k decoders offer a simple
but efficient approach for this problem. They first read k source tokens, after
which they alternate between producing a target token and reading another
source token. We investigate the behavior of wait-k decoding in low resource
settings for spoken corpora using IWSLT datasets. We improve training of these
models using unidirectional encoders, and training across multiple values of k.
Experiments with Transformer and 2D-convolutional architectures show that our
wait-k models generalize well across a wide range of latency levels. We also
show that the 2D-convolution architecture is competitive with Transformers for
simultaneous translation of spoken language.
| 2020 | Computation and Language |
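The wait-k policy described above follows a fixed read/write schedule; the sketch below spells it out. The `predict_next` callable stands in for an incremental decoder call and is purely hypothetical.

```python
def wait_k_translate(source_tokens, k, predict_next, eos="</s>"):
    """Illustrative wait-k schedule: read k source tokens, then alternate
    between writing one target token and reading one more source token."""
    read, output = list(source_tokens[:k]), []
    pos = len(read)
    while True:
        word = predict_next(read, output)      # assumed incremental decoder
        if word == eos:
            break
        output.append(word)
        if pos < len(source_tokens):           # read one more token if available
            read.append(source_tokens[pos])
            pos += 1
    return output
```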
The presence of occupational structure in online texts based on word
embedding NLP models | Research on social stratification is closely linked to analysing the prestige
associated with different occupations. This research focuses on the positions
of occupations in the semantic space represented by large amounts of textual
data. The results are compared to standard results in social stratification to
see whether the classical results are reproduced and if additional insights can
be gained into the social positions of occupations. The paper gives an
affirmative answer to both questions. The results show fundamental similarity
of the occupational structure obtained from text analysis to the structure
described by prestige and social distance scales. While our research reinforces
many theories and empirical findings of the traditional body of literature on
social stratification and, in particular, occupational hierarchy, it also points to
the importance of a factor not discussed in the main line of stratification
literature so far: the power and organizational aspect.
| 2023 | Computation and Language |
Improving Named Entity Recognition in Tor Darknet with Local Distance
Neighbor Feature | Named entity recognition in noisy user-generated texts is a difficult task
usually enhanced by incorporating an external resource of information, such as
gazetteers. However, gazetteers are task-specific, and they are expensive to
build and maintain. This paper adopts and improves the approach of Aguilar et
al. by presenting a novel feature, called Local Distance Neighbor, which
substitutes gazetteers. We tested the new approach on the W-NUT-2017 dataset,
obtaining state-of-the-art results for the Group, Person and Product categories
of Named Entities. Next, we added 851 manually labeled samples to the
W-NUT-2017 dataset to account for named entities in the Tor Darknet related to
weapons and drug selling. Finally, our proposal achieved entity and surface
F1 scores of 52.96% and 50.57% on this extended dataset, demonstrating its
usefulness for Law Enforcement Agencies to detect named entities in the Tor
hidden services.
| 2020 | Computation and Language |
Classification of Spam Emails through Hierarchical Clustering and
Supervised Learning | Spammers take advantage of email popularity to send unsolicited emails
indiscriminately. Although researchers and organizations continuously develop
anti-spam filters based on binary classification, spammers bypass them through
new strategies, like word obfuscation or image-based spam. For the first time
in the literature, we propose to classify spam emails into categories to
improve the handling of already-detected spam emails, instead of just using a
binary model. First, we applied a hierarchical clustering algorithm to create
SPEMC-11K (SPam EMail Classification), the first multi-class dataset, which
contains three types of spam emails: Health and Technology, Personal Scams,
and Sexual Content. Then, we used SPEMC-11K to evaluate the combination of
TF-IDF and BOW encodings with Naïve Bayes, Decision Trees and SVM classifiers.
Finally, for the task of multi-class spam classification we recommend (i)
TF-IDF combined with SVM for the best micro-F1 score, 95.39%, and (ii) TF-IDF
along with Naïve Bayes for the fastest spam classification, analyzing an email
in 2.13 ms.
| 2020 | Computation and Language |
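As a rough illustration of the recommended TF-IDF + SVM configuration, the scikit-learn pipeline below trains a multi-class spam classifier; the tiny in-line dataset, labels, and hyperparameters are placeholders, not SPEMC-11K or the authors' settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy stand-in for the multi-class spam categories
emails = ["cheap pills and miracle cures",
          "you won a prize, send your bank details",
          "explicit content inside",
          "new health gadget discount"]
labels = ["health_tech", "personal_scam", "sexual_content", "health_tech"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(emails, labels)
print(clf.predict(["miracle cure discount"]))   # e.g. ['health_tech']
```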
Corpus of Chinese Dynastic Histories: Gender Analysis over Two Millennia | Chinese dynastic histories form a large continuous linguistic space of
approximately 2000 years, from the 3rd century BCE to the 18th century CE. The
histories are documented in Classical (Literary) Chinese in a corpus of over 20
million characters, suitable for the computational analysis of historical
lexicon and semantic change. However, there is no freely available open-source
corpus of these histories, making Classical Chinese low-resource. This project
introduces a new open-source corpus of twenty-four dynastic histories covered
by a Creative Commons license. An original list of Classical Chinese
gender-specific terms was developed as a case study for analyzing the
historical linguistic use of male and female terms. The study demonstrates
considerable stability in the usage of these terms, with dominance of male
terms. Exploration of word meanings uses keyword analysis of focus corpora
created for gender-specific terms. This method yields meaningful semantic
representations that can be used for future studies of diachronic semantics.
| 2020 | Computation and Language |
Interaction Matching for Long-Tail Multi-Label Classification | We present an elegant and effective approach for addressing limitations in
existing multi-label classification models by incorporating interaction
matching, a concept shown to be useful for ad-hoc search result ranking. By
performing soft n-gram interaction matching, we match labels with natural
language descriptions (which are commonly available in most multi-labeling tasks).
Our approach can be used to enhance existing multi-label classification
approaches, which are biased toward frequently-occurring labels. We evaluate
our approach on two challenging tasks: automatic medical coding of clinical
notes and automatic labeling of entities from software tutorial text. Our
results show that our method can yield up to an 11% relative improvement in
macro performance, with most of the gains stemming from labels that appear
infrequently in the training set (i.e., the long tail of labels).
| 2020 | Computation and Language |
Inflecting when there's no majority: Limitations of encoder-decoder
neural networks as cognitive models for German plurals | Can artificial neural networks learn to represent inflectional morphology and
generalize to new words as human speakers do? Kirov and Cotterell (2018) argue
that the answer is yes: modern Encoder-Decoder (ED) architectures learn
human-like behavior when inflecting English verbs, such as extending the
regular past tense form -(e)d to novel words. However, their work does not
address the criticism raised by Marcus et al. (1995): that neural models may
learn to extend not the regular, but the most frequent class -- and thus fail
on tasks like German number inflection, where infrequent suffixes like -s can
still be productively generalized.
To investigate this question, we first collect a new dataset from German
speakers (production and ratings of plural forms for novel nouns) that is
designed to avoid sources of information unavailable to the ED model. The
speaker data show high variability, and two suffixes evince 'regular' behavior,
appearing more often with phonologically atypical inputs. Encoder-decoder
models do generalize the most frequently produced plural class, but do not show
human-like variability or 'regular' extension of these other plural markers. We
conclude that modern neural models may still struggle with minority-class
generalization.
| 2020 | Computation and Language |
Grammatical gender associations outweigh topical gender bias in
crosslinguistic word embeddings | Recent research has demonstrated that vector space models of semantics can
reflect undesirable biases in human culture. Our investigation of
crosslinguistic word embeddings reveals that topical gender bias interacts
with, and is surpassed in magnitude by, the effect of grammatical gender
associations, and both may be attenuated by corpus lemmatization. This finding
has implications for downstream applications such as machine translation.
| 2020 | Computation and Language |
Span-ConveRT: Few-shot Span Extraction for Dialog with Pretrained
Conversational Representations | We introduce Span-ConveRT, a light-weight model for dialog slot-filling which
frames the task as a turn-based span extraction task. This formulation allows
for a simple integration of conversational knowledge coded in large pretrained
conversational models such as ConveRT (Henderson et al., 2019). We show that
leveraging such knowledge in Span-ConveRT is especially useful for few-shot
learning scenarios: we report consistent gains over 1) a span extractor that
trains representations from scratch in the target domain, and 2) a BERT-based
span extractor. In order to inspire more work on span extraction for the
slot-filling task, we also release RESTAURANTS-8K, a new challenging data set
of 8,198 utterances, compiled from actual conversations in the restaurant
booking domain.
| 2020 | Computation and Language |
Reconstructing Maps from Text | Previous research has demonstrated that Distributional Semantic Models (DSMs)
are capable of reconstructing maps from news corpora (Louwerse & Zwaan, 2009)
and novels (Louwerse & Benesh, 2012). The capacity for reproducing maps is
surprising since DSMs notoriously lack perceptual grounding (De Vega et al.,
2012). In this paper we investigate the statistical sources required in
language to infer maps, and resulting constraints placed on mechanisms of
semantic representation. Study 1 brings word co-occurrence under experimental
control to demonstrate that direct co-occurrence in language is necessary for
traditional DSMs to successfully reproduce maps. Study 2 presents an
instance-based DSM that is capable of reconstructing maps independent of the
frequency of co-occurrence of city names.
| 2020 | Computation and Language |
Arabic Offensive Language Detection Using Machine Learning and Ensemble
Machine Learning Approaches | This study investigates the effect of applying a single-learner machine
learning approach and an ensemble machine learning approach to offensive
language detection in Arabic. Classifying Arabic social media text is a very
challenging task due to the ambiguity and informality of the written format of
the text. The Arabic language has multiple dialects with diverse vocabularies
and structures, which increases the complexity of obtaining high
classification performance. Our study shows a significant advantage of the
ensemble machine learning approach over the single-learner approach. Among the
trained ensemble classifiers, bagging performs best in offensive language
detection, with an F1 score of 88%, which exceeds the score obtained by the
best single-learner classifier by 6%. Our findings highlight the opportunities
of investing more effort in ensemble machine learning solutions for offensive
language detection models.
| 2020 | Computation and Language |
Question-Driven Summarization of Answers to Consumer Health Questions | Automatic summarization of natural language is a widely studied area in
computer science, one that is broadly applicable to anyone who routinely needs
to understand large quantities of information. For example, in the medical
domain, recent developments in deep learning approaches to automatic
summarization have the potential to make health information more easily
accessible to patients and consumers. However, to evaluate the quality of
automatically generated summaries of health information, gold-standard,
human-generated summaries are required. Using answers provided by the National
Library of Medicine's consumer health question answering system, we present the
MEDIQA Answer Summarization dataset, the first summarization collection
containing question-driven summaries of answers to consumer health questions.
This dataset can be used to evaluate single or multi-document summaries
generated by algorithms using extractive or abstractive approaches. In order to
benchmark the dataset, we include results of baseline and state-of-the-art deep
learning summarization models, demonstrating that this dataset can be used to
effectively evaluate question-driven machine-generated summaries and promote
further machine learning research in medical question answering.
| 2020 | Computation and Language |
P-SIF: Document Embeddings Using Partition Averaging | Simple weighted averaging of word vectors often yields effective
representations for sentences which outperform sophisticated seq2seq neural
models in many tasks. While it is desirable to use the same method to represent
documents as well, unfortunately, the effectiveness is lost when representing
long documents involving multiple sentences. One of the key reasons is that a
longer document is likely to contain words from many different topics; hence,
creating a single vector while ignoring all the topical structure is unlikely
to yield an effective document representation. This problem is less acute in
single sentences and other short text fragments where the presence of a single
topic is most likely. To alleviate this problem, we present P-SIF, a
partitioned word averaging model to represent long documents. P-SIF retains the
simplicity of simple weighted word averaging while taking a document's topical
structure into account. In particular, P-SIF learns topic-specific vectors from
a document and finally concatenates them all to represent the overall document.
We provide theoretical justifications for the correctness of P-SIF. Through a
comprehensive set of experiments, we demonstrate P-SIF's effectiveness compared
to simple weighted averaging and many other baselines.
| 2020 | Computation and Language |
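A highly simplified sketch of the partition averaging idea described above: weighted word vectors are accumulated within topic partitions and the per-topic vectors are concatenated. The topic assignments and weights here are placeholders; P-SIF itself derives them in a more principled way, so this is only an illustration of the concatenation step.

```python
import numpy as np

def partitioned_doc_vector(tokens, vecs, topic_of, weight_of, n_topics):
    """Accumulate weighted word vectors per topic and concatenate the parts
    (a simplified stand-in for partition averaging)."""
    dim = len(next(iter(vecs.values())))
    parts = np.zeros((n_topics, dim))
    for t in tokens:
        if t in vecs:
            parts[topic_of[t]] += weight_of.get(t, 1.0) * vecs[t]
    return parts.reshape(-1)        # (n_topics * dim,) document embedding

# toy vocabulary with 2 topics and 3-d word vectors
vecs = {"bank": np.array([1.0, 0.0, 0.0]), "river": np.array([0.0, 1.0, 0.0]),
        "loan": np.array([0.9, 0.1, 0.0])}
topic_of = {"bank": 0, "loan": 0, "river": 1}
doc = partitioned_doc_vector(["bank", "loan", "river"], vecs, topic_of, {}, n_topics=2)
```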
Are All Languages Created Equal in Multilingual BERT? | Multilingual BERT (mBERT) trained on 104 languages has shown surprisingly
good cross-lingual performance on several NLP tasks, even without explicit
cross-lingual signals. However, these evaluations have focused on cross-lingual
transfer with high-resource languages, covering only a third of the languages
covered by mBERT. We explore how mBERT performs on a much wider set of
languages, focusing on the quality of representation for low-resource
languages, measured by within-language performance. We consider three tasks:
Named Entity Recognition (99 languages), Part-of-speech Tagging, and Dependency
Parsing (54 languages each). mBERT performs better than or comparably to
baselines on high-resource languages but much worse on low-resource languages.
Furthermore, monolingual BERT models for these languages do even worse. Paired
with similar languages, the performance gap between monolingual BERT and mBERT
can be narrowed. We find that better models for low-resource languages require
more efficient pretraining techniques or more data.
| 2020 | Computation and Language |
(Re)construing Meaning in NLP | Human speakers have an extensive toolkit of ways to express themselves. In
this paper, we engage with an idea largely absent from discussions of meaning
in natural language understanding--namely, that the way something is expressed
reflects different ways of conceptualizing or construing the information being
conveyed. We first define this phenomenon more precisely, drawing on
considerable prior work in theoretical cognitive semantics and
psycholinguistics. We then survey some dimensions of construed meaning and show
how insights from construal could inform theoretical and practical work in NLP.
| 2020 | Computation and Language |
Contextual Embeddings: When Are They Worth It? | We study the settings for which deep contextual embeddings (e.g., BERT) give
large improvements in performance relative to classic pretrained embeddings
(e.g., GloVe), and an even simpler baseline---random word embeddings---focusing
on the impact of the training set size and the linguistic properties of the
task. Surprisingly, we find that both of these simpler baselines can match
contextual embeddings on industry-scale data, and often perform within 5 to 10%
accuracy (absolute) on benchmark tasks. Furthermore, we identify properties of
data for which contextual embeddings give particularly large gains: language
containing complex structure, ambiguous word usage, and words unseen in
training.
| 2020 | Computation and Language |
GPT-too: A language-model-first approach for AMR-to-text generation | Abstract Meaning Representations (AMRs) are broad-coverage sentence-level semantic
graphs. Existing approaches to generating text from AMR have focused on
training sequence-to-sequence or graph-to-sequence models on AMR annotated data
only. In this paper, we propose an alternative approach that combines a strong
pre-trained language model with cycle consistency-based re-scoring. Despite the
simplicity of the approach, our experimental results show these models
outperform all previous techniques on the English LDC2017T10 dataset, including
the recent use of transformer architectures. In addition to the standard
evaluation metrics, we provide human evaluation experiments that further
substantiate the strength of our approach.
| 2020 | Computation and Language |
Neural Generation of Dialogue Response Timings | The timings of spoken response offsets in human dialogue have been shown to
vary based on contextual elements of the dialogue. We propose neural models
that simulate the distributions of these response offsets, taking into account
the response turn as well as the preceding turn. The models are designed to be
integrated into the pipeline of an incremental spoken dialogue system (SDS). We
evaluate our models using offline experiments as well as human listening tests.
We show that human listeners consider certain response timings to be more
natural based on the dialogue context. The introduction of these models into
SDS pipelines could increase the perceived naturalness of interactions.
| 2020 | Computation and Language |
NEJM-enzh: A Parallel Corpus for English-Chinese Translation in the
Biomedical Domain | Machine translation requires large amounts of parallel text. While such
datasets are abundant in domains such as newswire, they are less accessible in
the biomedical domain. Chinese and English are two of the most widely spoken
languages, yet to our knowledge a parallel corpus in the biomedical domain does
not exist for this language pair. In this study, we develop an effective
pipeline to acquire and process an English-Chinese parallel corpus, consisting
of about 100,000 sentence pairs and 3,000,000 tokens on each side, from the New
England Journal of Medicine (NEJM). We show that training on out-of-domain data
and fine-tuning with as few as 4,000 NEJM sentence pairs improve translation
quality by 25.3 (13.4) BLEU for en$\to$zh (zh$\to$en) directions. Translation
quality continues to improve at a slower pace on larger in-domain datasets,
with an increase of 33.0 (24.3) BLEU for en$\to$zh (zh$\to$en) directions on
the full dataset.
| 2020 | Computation and Language |
Cross-lingual Approaches for Task-specific Dialogue Act Recognition | In this paper we exploit cross-lingual models to enable dialogue act
recognition for specific tasks with a small number of annotations. We design a
transfer learning approach for dialogue act recognition and validate it on two
different target languages and domains. We compute dialogue turn embeddings
with both a CNN and multi-head self-attention model and show that the best
results are obtained by combining all sources of transferred information. We
further demonstrate that the proposed methods significantly outperform related
cross-lingual DA recognition approaches.
| 2021 | Computation and Language |
Iterative Pseudo-Labeling for Speech Recognition | Pseudo-labeling has recently shown promise in end-to-end automatic speech
recognition (ASR). We study Iterative Pseudo-Labeling (IPL), a semi-supervised
algorithm which efficiently performs multiple iterations of pseudo-labeling on
unlabeled data as the acoustic model evolves. In particular, IPL fine-tunes an
existing model at each iteration using both labeled data and a subset of
unlabeled data. We study the main components of IPL: decoding with a language
model and data augmentation. We then demonstrate the effectiveness of IPL by
achieving state-of-the-art word-error rates on the Librispeech test sets in both
standard and low-resource settings. We also study the effect of language models
trained on different corpora to show IPL can effectively utilize additional
text. Finally, we release a new large in-domain text corpus which does not
overlap with the Librispeech training transcriptions to foster research in
low-resource, semi-supervised ASR.
| 2020 | Computation and Language |
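The iterative pseudo-labeling loop described above alternates between labeling unlabeled audio with the current model and retraining on the union; a schematic version is shown below. The function names (`train`, `transcribe`) are hypothetical placeholders for the ASR training and (optionally LM-fused) decoding steps.

```python
def iterative_pseudo_labeling(labeled, unlabeled, train, transcribe, rounds=3):
    """Schematic IPL loop: pseudo-labels are regenerated each round as the
    acoustic model improves, then used for further fine-tuning."""
    model = train(labeled)                           # initial supervised model
    for _ in range(rounds):
        pseudo = [(audio, transcribe(model, audio))  # decode unlabeled audio
                  for audio in unlabeled]
        model = train(labeled + pseudo)              # fine-tune on the union
    return model
```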
Improving Accent Conversion with Reference Encoder and End-To-End
Text-To-Speech | Accent conversion (AC) transforms a non-native speaker's accent into a native
accent while maintaining the speaker's voice timbre. In this paper, we propose
approaches to improving accent conversion applicability, as well as quality.
First of all, we assume no reference speech is available at the conversion
stage, and hence we employ an end-to-end text-to-speech system that is trained
on native speech to generate native reference speech. To improve the quality
and accent of the converted speech, we introduce reference encoders that enable
us to utilize multi-source information. This is motivated by the observation
that acoustic features extracted from the native reference and linguistic
information are complementary to conventional phonetic posteriorgrams (PPGs),
so they can be concatenated as features to improve a baseline system based
only on PPGs. Moreover, we optimize the model architecture using GMM-based
attention instead of windowed attention to improve synthesis performance.
Experimental results indicate that, when the proposed techniques are applied,
the integrated system significantly raises the scores for acoustic quality
(30% relative increase in mean opinion score) and native accent (68% relative
preference) while retaining the voice identity of the non-native speaker.
| 2020 | Computation and Language |
Matching Questions and Answers in Dialogues from Online Forums | Matching question-answer relations between two turns in conversations is not
only the first step in analyzing dialogue structures, but also valuable for
training dialogue systems. This paper presents a QA matching model considering
both distance information and dialogue history by two simultaneous attention
mechanisms called mutual attention. Given scores computed by the trained model
between each non-question turn with its candidate questions, a greedy matching
strategy is used for final predictions. Because existing dialogue datasets such
as the Ubuntu dataset are not suitable for the QA matching task, we further
create a dataset with 1,000 labeled dialogues and demonstrate that our proposed
model outperforms the state-of-the-art and other strong baselines, particularly
for matching long-distance QA pairs.
| 2020 | Computation and Language |
Staying True to Your Word: (How) Can Attention Become Explanation? | The attention mechanism has quickly become ubiquitous in NLP. In addition to
improving performance of models, attention has been widely used as a glimpse
into the inner workings of NLP models. The latter aspect has in recent years
become a common topic of discussion, most notably in the work of Jain and
Wallace (2019) and Wiegreffe and Pinter (2019). With the shortcomings of using
attention weights as a tool of transparency revealed, the attention mechanism
has been stuck in a limbo without concrete proof of when and whether it can be
used as an explanation. In this paper, we provide an explanation as to why
attention has seen rightful critique when used with recurrent networks in
sequence classification tasks. We propose a remedy to these issues in the form
of a word-level objective, and our findings give credibility to attention as a
means of providing faithful interpretations of recurrent models.
| 2020 | Computation and Language |
Human Instruction-Following with Deep Reinforcement Learning via
Transfer-Learning from Text | Recent work has described neural-network-based agents that are trained with
reinforcement learning (RL) to execute language-like commands in simulated
worlds, as a step towards an intelligent agent or robot that can be instructed
by human users. However, the optimisation of multi-goal motor policies via deep
RL from scratch requires many episodes of experience. Consequently,
instruction-following with deep RL typically involves language generated from
templates (by an environment simulator), which does not reflect the varied or
ambiguous expressions of real users. Here, we propose a conceptually simple
method for training instruction-following agents with deep RL that are robust
to natural human instructions. By applying our method with a state-of-the-art
pre-trained text-based language model (BERT), on tasks requiring agents to
identify and position everyday objects relative to other objects in a
naturalistic 3D simulated room, we demonstrate substantially-above-chance
zero-shot transfer from synthetic template commands to natural instructions
given by humans. Our approach is a general recipe for training any deep
RL-based system to interface with human users, and bridges the gap between two
research directions of notable recent success: agent-centric motor behavior and
text-based representation learning.
| 2020 | Computation and Language |
On the Choice of Auxiliary Languages for Improved Sequence Tagging | Recent work showed that embeddings from related languages can improve the
performance of sequence tagging, even for monolingual models. In this analysis
paper, we investigate whether the best auxiliary language can be predicted
based on language distances and show that the most related language is not
always the best auxiliary language. Further, we show that attention-based
meta-embeddings can effectively combine pre-trained embeddings from different
languages for sequence tagging and set new state-of-the-art results for
part-of-speech tagging in five languages.
| 2020 | Computation and Language |
Adversarial Alignment of Multilingual Models for Extracting Temporal
Expressions from Text | Although temporal tagging is still dominated by rule-based systems, there
have been recent attempts at neural temporal taggers. However, all of them
focus on monolingual settings. In this paper, we explore multilingual methods
for the extraction of temporal expressions from text and investigate
adversarial training for aligning embedding spaces to one common space. With
this, we create a single multilingual model that can also be transferred to
unseen languages and set the new state of the art in those cross-lingual
transfer experiments.
| 2020 | Computation and Language |
Closing the Gap: Joint De-Identification and Concept Extraction in the
Clinical Domain | Exploiting natural language processing in the clinical domain requires
de-identification, i.e., anonymization of personal information in texts.
However, current research considers de-identification and downstream tasks,
such as concept extraction, only in isolation and does not study the effects of
de-identification on other tasks. In this paper, we close this gap by reporting
concept extraction performance on automatically anonymized data and
investigating joint models for de-identification and concept extraction. In
particular, we propose a stacked model with restricted access to
privacy-sensitive information and a multitask model. We set the new state of
the art on benchmark datasets in English (96.1% F1 for de-identification and
88.9% F1 for concept extraction) and Spanish (91.4% F1 for concept extraction).
| 2020 | Computation and Language |
Embeddings as representation for symbolic music | A representation technique that encodes music in a way that preserves musical
meaning would improve the results of any model trained for computer music
tasks, such as generating melodies and harmonies of better quality. The field
of natural language processing has done a great deal of work on capturing the
semantic meaning of words and sentences, and word embeddings have proven well
suited to such a task. In this paper, we experiment with embeddings to
represent musical notes from three different variations of a dataset and
analyze whether the model can capture useful musical patterns. To do this, the
resulting embeddings are visualized in projections using the t-SNE technique.
| 2020 | Computation and Language |
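A minimal sketch of a note-embedding-plus-t-SNE workflow like the one described above, using gensim's Word2Vec and scikit-learn's TSNE on a toy corpus of note tokens; the tokenization scheme and hyperparameters are illustrative assumptions, not the paper's setup.

```python
from gensim.models import Word2Vec
from sklearn.manifold import TSNE

# hypothetical corpus: each "sentence" is a sequence of note tokens from one piece
pieces = [["C4", "E4", "G4", "C5"], ["A3", "C4", "E4", "A4"],
          ["D4", "F4", "A4", "D5"], ["G3", "B3", "D4", "G4"]]

model = Word2Vec(pieces, vector_size=16, window=4, min_count=1, epochs=100)
notes = list(model.wv.index_to_key)
coords = TSNE(n_components=2, perplexity=3, random_state=0).fit_transform(
    model.wv[notes])                # 2-D points to plot and inspect note clusters
```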
Functorial Language Games for Question Answering | We present some categorical investigations into Wittgenstein's
language-games, with applications to game-theoretic pragmatics and
question-answering in natural language processing.
| 2021 | Computation and Language |
Human Sentence Processing: Recurrence or Attention? | Recurrent neural networks (RNNs) have long been an architecture of interest
for computational models of human sentence processing. The recently introduced
Transformer architecture outperforms RNNs on many natural language processing
tasks but little is known about its ability to model human language processing.
We compare Transformer- and RNN-based language models' ability to account for
measures of human reading effort. Our analysis shows Transformers to outperform
RNNs in explaining self-paced reading times and neural activity during reading
English sentences, challenging the widely held idea that human sentence
processing involves recurrent and immediate processing, and providing evidence
for cue-based retrieval.
| 2021 | Computation and Language |
A Recipe for Creating Multimodal Aligned Datasets for Sequential Tasks | Many high-level procedural tasks can be decomposed into sequences of
instructions that vary in their order and choice of tools. In the cooking
domain, the web offers many partially-overlapping text and video recipes (i.e.
procedures) that describe how to make the same dish (i.e. high-level task).
Aligning instructions for the same dish across different sources can yield
descriptive visual explanations that are far richer semantically than
conventional textual instructions, providing commonsense insight into how
real-world procedures are structured. Learning to align these different
instruction sets is challenging because: a) different recipes vary in their
order of instructions and use of ingredients; and b) video instructions can be
noisy and tend to contain far more information than text instructions. To
address these challenges, we first use an unsupervised alignment algorithm that
learns pairwise alignments between instructions of different recipes for the
same dish. We then use a graph algorithm to derive a joint alignment between
multiple text and multiple video recipes for the same dish. We release the
Microsoft Research Multimodal Aligned Recipe Corpus containing 150K pairwise
alignments between recipes across 4,262 dishes with rich commonsense
information.
| 2020 | Computation and Language |
Positive emotions help rank negative reviews in e-commerce | Negative reviews, the poor ratings given in post-purchase evaluation, play an
indispensable role in e-commerce, especially in shaping future sales and firm
equities. However, extant studies seldom examine their potential value for
sellers and producers in enhancing their capabilities of providing better
services and products. Studies that did exploit the helpfulness of reviews
from the perspective of e-commerce keepers developed ranking approaches aimed
at customers instead. To fill this gap, by combining description texts and
emotion polarities, the ranking method in this study aims to provide online
sellers and producers with the most helpful negative reviews under a given
product attribute. Using a more reasonable evaluation procedure, experts with
related backgrounds were hired to vote on the ranking approaches. Our ranking
method turns out to be more reliable for ranking negative reviews for sellers
and producers, performing better than baselines such as BM25 by 8%. In this
paper, we also enrich previous understandings of the role of emotions in
valuing reviews. Specifically, we find, surprisingly, that positive emotions
are more helpful than negative emotions in ranking negative reviews. This
unexpected strength of positive emotions suggests that less polarized reviews
of negative experiences in fact offer more rational feedback and are thus more
helpful to sellers and producers. The presented ranking method could provide
e-commerce practitioners with an efficient and effective way to leverage
negative reviews from online consumers.
| 2020 | Computation and Language |
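The BM25 baseline mentioned above ranks reviews by lexical relevance to a product-attribute query; a minimal sketch with the rank_bm25 package is shown below. The reviews and query are invented for illustration, and the study's own method additionally weighs emotion polarity.

```python
from rank_bm25 import BM25Okapi

reviews = ["battery dies within two hours of normal use",
           "screen scratched easily but support was friendly",
           "terrible battery life and the charger overheats"]
tokenized = [r.split() for r in reviews]

bm25 = BM25Okapi(tokenized)
scores = bm25.get_scores("battery life".split())    # higher = more relevant
ranked = sorted(zip(scores, reviews), reverse=True)  # reviews ranked for the query
```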
GM-CTSC at SemEval-2020 Task 1: Gaussian Mixtures Cross Temporal
Similarity Clustering | This paper describes the system proposed for the SemEval-2020 Task 1:
Unsupervised Lexical Semantic Change Detection. We focused our approach on the
detection problem. Given the semantics of words captured by temporal word
embeddings in different time periods, we investigate the use of unsupervised
methods to detect when the target word has gained or lost senses. To this
end, we defined a new algorithm based on Gaussian Mixture Models to cluster the
target similarities computed over the two periods. We compared the proposed
approach with a number of similarity-based thresholds. We found that, although
the performance of the detection methods varies across the word embedding
algorithms, the combination of Gaussian Mixture with Temporal Referencing
resulted in our best system.
| 2020 | Computation and Language |
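A toy sketch of the clustering step described above: similarity values computed for a target word across the two periods are fit with a two-component Gaussian mixture, and a gap between the component means is read as a possible sense change. The numbers and the decision threshold are illustrative only, not the system's actual parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# hypothetical cosine similarities computed for one target word across two periods
sims = np.array([0.91, 0.88, 0.35, 0.40, 0.87, 0.31, 0.90, 0.38]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(sims)
low, high = sorted(gm.means_.ravel())
changed = (high - low) > 0.3   # illustrative threshold for flagging a sense change
```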
Leveraging Graph to Improve Abstractive Multi-Document Summarization | Graphs that capture relations between textual units have great benefits for
detecting salient information from multiple documents and generating overall
coherent summaries. In this paper, we develop a neural abstractive
multi-document summarization (MDS) model which can leverage well-known graph
representations of documents such as similarity graph and discourse graph, to
more effectively process multiple input documents and produce abstractive
summaries. Our model utilizes graphs to encode documents in order to capture
cross-document relations, which is crucial to summarizing long documents. Our
model can also take advantage of graphs to guide the summary generation
process, which is beneficial for generating coherent and concise summaries.
Furthermore, pre-trained language models can be easily combined with our model,
which further improves the summarization performance significantly. Empirical
results on the WikiSum and MultiNews datasets show that the proposed
architecture brings substantial improvements over several strong baselines.
| 2020 | Computation and Language |
Enhancing Word Embeddings with Knowledge Extracted from Lexical
Resources | In this work, we present an effective method for semantic specialization of
word vector representations. To this end, we use traditional word embeddings
and apply specialization methods to better capture semantic relations between
words. In our approach, we leverage external knowledge from rich lexical
resources such as BabelNet. We also show that our proposed post-specialization
method based on an adversarial neural network with the Wasserstein distance
allows us to gain improvements over state-of-the-art methods on two tasks: word
similarity and dialog state tracking.
| 2020 | Computation and Language |
A Large-Scale Multi-Document Summarization Dataset from the Wikipedia
Current Events Portal | Multi-document summarization (MDS) aims to compress the content in large
document collections into short summaries and has important applications in
story clustering for newsfeeds, presentation of search results, and timeline
generation. However, there is a lack of datasets that realistically address
such use cases at a scale large enough for training supervised models for this
task. This work presents a new dataset for MDS that is large both in the total
number of document clusters and in the size of individual clusters. We build
this dataset by leveraging the Wikipedia Current Events Portal (WCEP), which
provides concise and neutral human-written summaries of news events, with links
to external source articles. We also automatically extend these source articles
by looking for related articles in the Common Crawl archive. We provide a
quantitative analysis of the dataset and empirical results for several
state-of-the-art MDS techniques.
| 2020 | Computation and Language |
Examining the State-of-the-Art in News Timeline Summarization | Previous work on automatic news timeline summarization (TLS) leaves an
unclear picture about how this task can generally be approached and how well it
is currently solved. This is mostly due to the focus on individual subtasks,
such as date selection and date summarization, and to the previous lack of
appropriate evaluation metrics for the full TLS task. In this paper, we compare
different TLS strategies using appropriate evaluation frameworks, and propose a
simple and effective combination of methods that improves over the
state-of-the-art on all tested benchmarks. For a more robust evaluation, we
also present a new TLS dataset, which is larger and spans longer time periods
than previous datasets.
| 2020 | Computation and Language |
BERTweet: A pre-trained language model for English Tweets | We present BERTweet, the first public large-scale pre-trained language model
for English Tweets. Our BERTweet, having the same architecture as BERT-base
(Devlin et al., 2019), is trained using the RoBERTa pre-training procedure (Liu
et al., 2019). Experiments show that BERTweet outperforms strong baselines
RoBERTa-base and XLM-R-base (Conneau et al., 2020), producing better
performance results than the previous state-of-the-art models on three Tweet
NLP tasks: Part-of-speech tagging, Named-entity recognition and text
classification. We release BERTweet under the MIT License to facilitate future
research and applications on Tweet data. Our BERTweet is available at
https://github.com/VinAIResearch/BERTweet
| 2020 | Computation and Language |
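A minimal usage sketch with the HuggingFace transformers library; the model identifier `vinai/bertweet-base` and the example tweet are assumptions based on common usage, so check the linked repository for the exact model name and tokenizer options.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# model id assumed; see https://github.com/VinAIResearch/BERTweet for specifics
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModel.from_pretrained("vinai/bertweet-base")

inputs = tokenizer("SC has first two presumptive cases of coronavirus",
                   return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state   # (1, seq_len, hidden_size)
```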
Applying the Transformer to Character-level Transduction | The transformer has been shown to outperform recurrent neural network-based
sequence-to-sequence models in various word-level NLP tasks. Yet for
character-level transduction tasks, e.g. morphological inflection generation
and historical text normalization, there are few works that outperform
recurrent models using the transformer. In an empirical study, we uncover that,
in contrast to recurrent sequence-to-sequence models, the batch size plays a
crucial role in the performance of the transformer on character-level tasks,
and we show that with a large enough batch size, the transformer does indeed
outperform recurrent models. We also introduce a simple technique to handle
feature-guided character-level transduction that further improves performance.
With these insights, we achieve state-of-the-art performance on morphological
inflection and historical text normalization. We also show that the transformer
outperforms a strong baseline on two other character-level transduction tasks:
grapheme-to-phoneme conversion and transliteration.
| 2021 | Computation and Language |
BlaBla: Linguistic Feature Extraction for Clinical Analysis in Multiple
Languages | We introduce BlaBla, an open-source Python library for extracting linguistic
features with proven clinical relevance to neurological and psychiatric
diseases across many languages. BlaBla is a unifying framework for accelerating
and simplifying clinical linguistic research. The library is built on
state-of-the-art NLP frameworks and supports multithreaded/GPU-enabled feature
extraction via both native Python calls and a command line interface. We
describe BlaBla's architecture and clinical validation of its features across
12 diseases. We further demonstrate the application of BlaBla to the task of
visualizing and classifying language disorders in three languages on real
clinical data from the AphasiaBank dataset. We make the codebase freely
available to researchers with the hope of providing a consistent,
well-validated foundation for the next generation of clinical linguistic
research.
| 2,020 | Computation and Language |
Sentence level estimation of psycholinguistic norms using joint
multidimensional annotations | Psycholinguistic normatives represent various affective and mental constructs
using numeric scores and are used in a variety of applications in natural
language processing. They are commonly used at the sentence level, the scores
of which are estimated by extrapolating word level scores using simple
aggregation strategies, which may not always be optimal. In this work, we
present a novel approach to estimate the psycholinguistic norms at sentence
level. We apply a multidimensional annotation fusion model on annotations at
the word level to estimate a parameter which captures relationships between
different norms. We then use this parameter at sentence level to estimate the
norms. We evaluate our approach by predicting sentence level scores for various
normative dimensions and compare with standard word aggregation schemes.
| 2,020 | Computation and Language |
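As a point of reference for the aggregation baselines the preceding abstract compares against, here is a minimal sketch of word-to-sentence norm aggregation; the lexicon entries, scale, and default value are hypothetical placeholders rather than values from the paper.

```python
# Sketch of the simple word-aggregation baseline: a sentence-level norm score
# obtained by averaging word-level scores from a lexicon. The lexicon values
# below are hypothetical placeholders, not real norm ratings.
from statistics import mean

word_norms = {            # hypothetical word-level arousal scores in [1, 9]
    "calm": 2.1, "storm": 6.8, "after": 3.0, "the": 3.0,
}

def sentence_norm(sentence: str, norms: dict, default: float = 5.0) -> float:
    """Estimate a sentence-level norm as the mean of word-level scores."""
    tokens = sentence.lower().split()
    scores = [norms.get(tok, default) for tok in tokens]
    return mean(scores) if scores else default

print(sentence_norm("the calm after the storm", word_norms))
```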
Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural
Machine Translation | Recent studies have revealed a number of pathologies of neural machine
translation (NMT) systems. Hypotheses explaining these mostly suggest there is
something fundamentally wrong with NMT as a model or its training algorithm,
maximum likelihood estimation (MLE). Most of this evidence was gathered using
maximum a posteriori (MAP) decoding, a decision rule aimed at identifying the
highest-scoring translation, i.e. the mode. We argue that the evidence
corroborates the inadequacy of MAP decoding more than it casts doubt on the model
and its training algorithm. In this work, we show that translation
distributions do reproduce various statistics of the data well, but that beam
search strays from such statistics. We show that some of the known pathologies
and biases of NMT are due to MAP decoding and not to NMT's statistical
assumptions nor MLE. In particular, we show that the most likely translations
under the model accumulate so little probability mass that the mode can be
considered essentially arbitrary. We therefore advocate for the use of decision
rules that take into account the translation distribution holistically. We show
that an approximation to minimum Bayes risk decoding gives competitive results
confirming that NMT models do capture important aspects of translation well in
expectation.
| 2,020 | Computation and Language |
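To make the decision-rule argument in the preceding abstract concrete, here is a minimal sketch of sampling-based minimum Bayes risk decoding; the token-overlap F1 utility and the toy samples are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of sampling-based minimum Bayes risk (MBR) decoding: pick the
# candidate that maximizes expected utility against samples drawn from the
# model. Token-overlap F1 is a stand-in utility; the paper's exact utility and
# sampling procedure may differ.
from collections import Counter

def token_f1(hyp: str, ref: str) -> float:
    """Symmetric token-overlap F1 between two translations."""
    h, r = Counter(hyp.split()), Counter(ref.split())
    overlap = sum((h & r).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(h.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

def mbr_decode(samples: list[str]) -> str:
    """Return the sample with the highest average utility w.r.t. all samples."""
    return max(samples,
               key=lambda hyp: sum(token_f1(hyp, ref) for ref in samples))

samples = ["the cat sat on the mat", "a cat sat on the mat", "the dog barked"]
print(mbr_decode(samples))
```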
ScriptWriter: Narrative-Guided Script Generation | It is appealing to have a system that generates a story or scripts
automatically from a story-line, even though this is still out of our reach. In
dialogue systems, it would also be useful to drive dialogues by a dialogue
plan. In this paper, we address a key problem involved in these applications --
guiding a dialogue by a narrative. The proposed model ScriptWriter selects the
best response among the candidates that fit the context as well as the given
narrative. It keeps track of what in the narrative has been said and what is to
be said. A narrative plays a different role than the context (i.e., previous
utterances), which is generally used in current dialogue systems. Due to the
unavailability of data for this new application, we construct a new large-scale
data collection GraphMovie from a movie website where end-users can upload
their narratives freely when watching a movie. Experimental results on the
dataset show that our proposed approach based on narratives significantly
outperforms the baselines that simply use the narrative as a kind of context.
| 2,020 | Computation and Language |
Pretraining with Contrastive Sentence Objectives Improves Discourse
Performance of Language Models | Recent models for unsupervised representation learning of text have employed
a number of techniques to improve contextual word representations but have put
little focus on discourse-level representations. We propose CONPONO, an
inter-sentence objective for pretraining language models that models discourse
coherence and the distance between sentences. Given an anchor sentence, our
model is trained to predict the text k sentences away using a sampled-softmax
objective where the candidates consist of neighboring sentences and sentences
randomly sampled from the corpus. On the discourse representation benchmark
DiscoEval, our model improves over the previous state-of-the-art by up to 13%
and on average 4% absolute across 7 tasks. Our model is the same size as
BERT-Base, but outperforms the much larger BERT-Large model and other more
recent approaches that incorporate discourse. We also show that CONPONO yields
gains of 2%-6% absolute even for tasks that do not explicitly evaluate
discourse: textual entailment (RTE), common sense reasoning (COPA) and reading
comprehension (ReCoRD).
| 2,020 | Computation and Language |
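The inter-sentence objective described in the preceding abstract can be sketched as follows: the anchor encoding is scored against a small candidate set (the true sentence k away, neighbors, and random negatives) and trained with cross-entropy, i.e. a sampled softmax. The encoder, dimensions, and candidate construction below are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch of a CONPONO-style inter-sentence objective in PyTorch.
import torch
import torch.nn.functional as F

def inter_sentence_loss(anchor: torch.Tensor,
                        candidates: torch.Tensor,
                        target_index: torch.Tensor) -> torch.Tensor:
    """
    anchor:       (batch, hidden) encoding of the anchor sentence
    candidates:   (batch, num_candidates, hidden) encodings of candidates
                  (true k-away sentence + neighbors + random negatives)
    target_index: (batch,) index of the true candidate
    """
    logits = torch.einsum("bh,bch->bc", anchor, candidates)  # dot-product scores
    return F.cross_entropy(logits, target_index)

anchor = torch.randn(4, 768)
candidates = torch.randn(4, 8, 768)
target = torch.randint(0, 8, (4,))
print(inter_sentence_loss(anchor, candidates, target))
```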
Stance Prediction and Claim Verification: An Arabic Perspective | This work explores the application of textual entailment in news claim
verification and stance prediction using a new corpus in Arabic. The publicly
available corpus comes in two perspectives: a version consisting of 4,547 true
and false claims and a version consisting of 3,786 pairs (claim, evidence). We
describe the methodology for creating the corpus and the annotation process.
Using the introduced corpus, we also develop two machine learning baselines for
two proposed tasks: claim verification and stance prediction. Our best model
utilizes pretraining (BERT) and achieves 76.7 F1 on the stance prediction task
and 64.3 F1 on the claim verification task. Our preliminary experiments shed
some light on the limits of automatic claim verification that relies on claims
text only. Results hint that while the linguistic features and world knowledge
learned during pretraining are useful for stance prediction, such learned
representations from pretraining are insufficient for verifying claims without
access to context or evidence.
| 2,020 | Computation and Language |
Automated Question Answer medical model based on Deep Learning
Technology | Artificial intelligence can now provide more solutions for different
problems, especially in the medical field. One of those problems is the lack of
answers to any given medical/health-related question. The Internet is full of
forums that allow people to ask specific questions and get great answers
for them. Nevertheless, browsing these questions in order to locate one similar
to your own, and then finding a satisfactory answer, is a difficult and
time-consuming task. This research introduces a solution to this problem by
automating the process of generating qualified answers to these questions and
creating a kind of digital doctor. Furthermore, this research trains an
end-to-end model using an RNN encoder-decoder framework to generate
sensible and useful answers to a small set of medical/health issues. The
proposed model was trained and evaluated using data from various online
services, such as WebMD, HealthTap, eHealthForums, and iCliniq.
| 2,020 | Computation and Language |
Text-to-Text Pre-Training for Data-to-Text Tasks | We study the pre-train + fine-tune strategy for data-to-text tasks. Our
experiments indicate that text-to-text pre-training in the form of T5 enables
simple, end-to-end transformer based models to outperform pipelined neural
architectures tailored for data-to-text generation, as well as alternative
language model based pre-training techniques such as BERT and GPT-2.
Importantly, T5 pre-training leads to better generalization, as evidenced by
large improvements on out-of-domain test sets. We hope our work serves as a
useful baseline for future research, as transfer learning becomes ever more
prevalent for data-to-text tasks.
| 2,021 | Computation and Language |
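A minimal sketch of the text-to-text setup described in the preceding abstract: a structured record is linearized into a flat string and fed to a pre-trained T5 model from the Hugging Face transformers library. The field names and prompt format are illustrative assumptions; meaningful outputs require the fine-tuning on data-to-text pairs that the abstract describes.

```python
# Sketch: linearize a data record and generate text with a pre-trained T5.
# The record, prompt format, and checkpoint size are illustrative assumptions;
# without the fine-tuning step described in the abstract, outputs will not be
# faithful descriptions of the input data.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

record = {"name": "Blue Spice", "eatType": "coffee shop", "area": "city centre"}
source = "translate data to text: " + " | ".join(f"{k}: {v}" for k, v in record.items())

input_ids = tokenizer(source, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```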
MTSS: Learn from Multiple Domain Teachers and Become a Multi-domain
Dialogue Expert | How to build a high-quality multi-domain dialogue system is a challenging
task, due to the complicated and entangled dialogue state space across
domains, which seriously limits the quality of the dialogue policy and further
affects the generated response. In this paper, we propose a novel method to
acquire a satisfying policy and subtly circumvent the knotty dialogue state
representation problem in the multi-domain setting. Inspired by real school
teaching scenarios, our method is composed of multiple domain-specific teachers
and a universal student. Each individual teacher only focuses on one specific
domain and learns its corresponding domain knowledge and dialogue policy based
on a precisely extracted single domain dialogue state representation. Then,
these domain-specific teachers impart their domain knowledge and policies to a
universal student model and collectively make this student model a multi-domain
dialogue expert. Experiment results show that our method reaches competitive
results with state-of-the-art systems in both multi-domain and single-domain settings.
| 2,020 | Computation and Language |
Symptom extraction from the narratives of personal experiences with
COVID-19 on Reddit | Social media discussion of COVID-19 provides a rich source of information
about how the virus affects people's lives that is qualitatively different from
traditional public health datasets. In particular, when individuals self-report
their experiences over the course of the virus on social media, it can allow
for identification of the emotions each stage of symptoms engenders in the
patient. Posts to the Reddit forum r/COVID19Positive contain first-hand
accounts from COVID-19 positive patients, giving insight into personal
struggles with the virus. These posts often feature a temporal structure
indicating the number of days after developing symptoms the text refers to.
Using topic modelling and sentiment analysis, we quantify the change in
discussion of COVID-19 throughout individuals' experiences for the first 14
days since symptom onset. Discourse on early symptoms such as fever, cough, and
sore throat was concentrated towards the beginning of the posts, while language
indicating breathing issues peaked around ten days. Some conversation around
critical cases was also identified and appeared at a roughly constant rate. We
identified two clear clusters of positive and negative emotions associated with
the evolution of these symptoms and mapped their relationships. Our results
provide a perspective on the patient experience of COVID-19 that complements
other medical data streams and can potentially reveal when mental health issues
might appear.
| 2,020 | Computation and Language |
Fluent Response Generation for Conversational Question Answering | Question answering (QA) is an important aspect of open-domain conversational
agents, garnering specific research focus in the conversational QA (ConvQA)
subtask. One notable limitation of recent ConvQA efforts is the response being
answer span extraction from the target corpus, thus ignoring the natural
language generation (NLG) aspect of high-quality conversational agents. In this
work, we propose a method for situating QA responses within a SEQ2SEQ NLG
approach to generate fluent grammatical answer responses while maintaining
correctness. From a technical perspective, we use data augmentation to generate
training data for an end-to-end system. Specifically, we develop Syntactic
Transformations (STs) to produce question-specific candidate answer responses
and rank them using a BERT-based classifier (Devlin et al., 2019). Human
evaluation on SQuAD 2.0 data (Rajpurkar et al., 2018) demonstrates that the
proposed model outperforms baseline CoQA and QuAC models in generating
conversational responses. We further show our model's scalability by conducting
tests on the CoQA dataset. The code and data are available at
https://github.com/abaheti95/QADialogSystem.
| 2,020 | Computation and Language |
LaCulturaNonSiFerma -- Report on the use and spread of hashtags by Italian
cultural institutions during the lockdown period | This report presents an analysis of #hashtags used by Italian Cultural
Heritage institutions to promote and communicate cultural content during the
COVID-19 lock-down period in Italy. Several activities to support and engage
users have been proposed using social media. Most of these activities present
one or more #hashtags which help to aggregate content and create a community on
specific topics. Results show that on one side Italian institutions have been
very proactive in adapting to the pandemic scenario and on the other side
users reacted very positively, increasing their participation in the proposed
activities.
| 2,020 | Computation and Language |
MultiMWE: Building a Multi-lingual Multi-Word Expression (MWE) Parallel
Corpora | Multi-word expressions (MWEs) are a hot topic in research in natural language
processing (NLP), including topics such as MWE detection, MWE decomposition,
and research investigating the exploitation of MWEs in other NLP fields such as
Machine Translation. However, the availability of bilingual or multi-lingual
MWE corpora is very limited. The only bilingual MWE corpus that we are aware
of is from the PARSEME (PARSing and Multi-word Expressions) EU Project. This is
a small collection of only 871 pairs of English-German MWEs. In this paper, we
present multi-lingual and bilingual MWE corpora that we have extracted from
root parallel corpora. Our collections are 3,159,226 and 143,042 bilingual MWE
pairs for German-English and Chinese-English respectively after filtering. We
examine the quality of these extracted bilingual MWEs in MT experiments. Our
initial experiments applying MWEs in MT show improved translation performances
on MWE terms in qualitative analysis and better general evaluation scores in
quantitative analysis, on both German-English and Chinese-English language
pairs. We follow a standard experimental pipeline to create our MultiMWE
corpora which are available online. Researchers can use this free corpus for
their own models or use it in a knowledge base as model features.
| 2,020 | Computation and Language |
Unsupervised Quality Estimation for Neural Machine Translation | Quality Estimation (QE) is an important component in making Machine
Translation (MT) useful in real-world applications, as it aims to inform
the user on the quality of the MT output at test time. Existing approaches
require large amounts of expert annotated data, computation and time for
training. As an alternative, we devise an unsupervised approach to QE where no
training or access to additional resources besides the MT system itself is
required. Different from most of the current work that treats the MT system as
a black box, we explore useful information that can be extracted from the MT
system as a by-product of translation. By employing methods for uncertainty
quantification, we achieve very good correlation with human judgments of
quality, rivalling state-of-the-art supervised QE models. To evaluate our
approach we collect the first dataset that enables work on both black-box and
glass-box approaches to QE.
| 2,020 | Computation and Language |
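A minimal sketch of the glass-box idea in the preceding abstract: score an MT output using quantities the system already produces, such as per-token log-probabilities collected over several dropout-enabled decoding passes. The specific indicators below are simple illustrations, not the paper's exact estimators.

```python
# Sketch of glass-box, unsupervised QE signals computed from per-token
# log-probabilities of the same hypothesis under several stochastic
# (dropout-enabled) forward passes. Indicator choices are illustrative.
import math
from statistics import mean, pvariance

def glass_box_qe(token_logprob_runs: list[list[float]]) -> dict:
    """token_logprob_runs: one inner list of token log-probs per forward pass."""
    sentence_logprobs = [mean(run) for run in token_logprob_runs]
    return {
        "avg_token_logprob": mean(sentence_logprobs),      # higher = more confident
        "dropout_variance": pvariance(sentence_logprobs),  # higher = more uncertain
        "perplexity": math.exp(-mean(sentence_logprobs)),
    }

runs = [[-0.1, -0.4, -0.2], [-0.2, -0.5, -0.3], [-0.15, -0.45, -0.25]]
print(glass_box_qe(runs))
```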
Towards Finite-State Morphology of Kurdish | Morphological analysis is the study of the formation and structure of words.
It plays a crucial role in various tasks in Natural Language Processing (NLP)
and Computational Linguistics (CL) such as machine translation and text and
speech generation. Kurdish is a less-resourced multi-dialect Indo-European
language with highly inflectional morphology. In this paper, as the first
attempt of its kind, the morphology of the Kurdish language (Sorani dialect) is
described from a computational point of view. We extract morphological rules
which are transformed into finite-state transducers for generating and
analyzing words. The result of this research assists in conducting studies on
language generation for Kurdish and enhances the Information Retrieval (IR)
capacity for the language, while elevating Kurdish NLP and CL to a more
advanced computational level.
| 2,020 | Computation and Language |
RuBQ: A Russian Dataset for Question Answering over Wikidata | The paper presents RuBQ, the first Russian knowledge base question answering
(KBQA) dataset. The high-quality dataset consists of 1,500 Russian questions of
varying complexity, their English machine translations, SPARQL queries to
Wikidata, reference answers, as well as a Wikidata sample of triples containing
entities with Russian labels. The dataset creation started with a large
collection of question-answer pairs from online quizzes. The data underwent
automatic filtering, crowd-assisted entity linking, automatic generation of
SPARQL queries, and their subsequent in-house verification.
| 2,021 | Computation and Language |
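For context on the kind of question/SPARQL pairs RuBQ packages, here is a minimal sketch of answering a factoid question against the public Wikidata SPARQL endpoint; the query shown is illustrative and not taken from the dataset.

```python
# Sketch: answer a factoid question with a SPARQL query against Wikidata,
# the kind of query/answer pair RuBQ contains. The query is illustrative.
import requests

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

query = """
SELECT ?capitalLabel WHERE {
  wd:Q159 wdt:P36 ?capital .                       # Russia (Q159) -> capital (P36)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "ru,en". }
}
"""

response = requests.get(SPARQL_ENDPOINT,
                        params={"query": query, "format": "json"},
                        headers={"User-Agent": "rubq-example/0.1"})
for row in response.json()["results"]["bindings"]:
    print(row["capitalLabel"]["value"])
```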
Worse WER, but Better BLEU? Leveraging Word Embedding as Intermediate in
Multitask End-to-End Speech Translation | Speech translation (ST) aims to learn transformations from speech in the
source language to the text in the target language. Previous works show that
multitask learning improves the ST performance, in which the recognition
decoder generates the text of the source language, and the translation decoder
obtains the final translations based on the output of the recognition decoder.
Because whether the output of the recognition decoder has the correct semantics
is more critical than its accuracy, we propose to improve the multitask ST
model by utilizing word embedding as the intermediate.
| 2,020 | Computation and Language |
Beyond User Self-Reported Likert Scale Ratings: A Comparison Model for
Automatic Dialog Evaluation | Open Domain dialog system evaluation is one of the most important challenges
in dialog research. Existing automatic evaluation metrics, such as BLEU, are
mostly reference-based. They calculate the difference between the generated
response and a limited number of available references. Likert-score based
self-reported user rating is widely adopted by social conversational systems,
such as Amazon Alexa Prize chatbots. However, self-reported user rating suffers
from bias and variance among different users. To alleviate this problem, we
formulate dialog evaluation as a comparison task. We also propose an automatic
evaluation model CMADE (Comparison Model for Automatic Dialog Evaluation) that
automatically cleans self-reported user ratings as it trains on them.
Specifically, we first use a self-supervised method to learn better dialog
feature representation, and then use KNN and Shapley to remove confusing
samples. Our experiments show that CMADE achieves 89.2% accuracy in the dialog
comparison task.
| 2,020 | Computation and Language |
The Frankfurt Latin Lexicon: From Morphological Expansion and Word
Embeddings to SemioGraphs | In this article we present the Frankfurt Latin Lexicon (FLL), a lexical
resource for Medieval Latin that is used both for the lemmatization of Latin
texts and for the post-editing of lemmatizations. We describe recent advances
in the development of lemmatizers and test them against the Capitularies corpus
(comprising Frankish royal edicts, mid-6th to mid-9th century), a corpus
created as a reference for processing Medieval Latin. We also consider the
post-correction of lemmatizations using a limited crowdsourcing process aimed
at continuous review and updating of the FLL. Starting from the texts resulting
from this lemmatization process, we describe the extension of the FLL by means
of word embeddings, whose interactive traversing by means of SemioGraphs
completes the digitally enhanced hermeneutic circle. In this way, the article
argues for a more comprehensive understanding of lemmatization, encompassing
classical machine learning as well as intellectual post-corrections and, in
particular, human computation in the form of interpretation processes based on
graph representations of the underlying lexical resources.
| 2,020 | Computation and Language |
Evaluating Neural Morphological Taggers for Sanskrit | Neural sequence labelling approaches have achieved state of the art results
in morphological tagging. We evaluate the efficacy of four standard sequence
labelling models on Sanskrit, a morphologically rich, fusional Indian language.
As its label space can theoretically contain more than 40,000 labels, systems
that explicitly model the internal structure of a label are more suited for the
task, because of their ability to generalise to labels not seen during
training. We find that although some neural models perform better than others,
one of the common causes of error for all of these models is misprediction
due to syncretism.
| 2,020 | Computation and Language |
Extracting Daily Dosage from Medication Instructions in EHRs: An
Automated Approach and Lessons Learned | Medication timelines have been shown to be effective in helping physicians
visualize complex patient medication information. A key feature in many such
designs is a longitudinal representation of a medication's daily dosage and its
changes over time. However, daily dosage as a discrete value is generally not
provided and needs to be derived from free text instructions (Sig). Existing
works in daily dosage extraction are narrow in scope, targeting dosage
extraction for a single drug from clinical notes. Here, we present an automated
approach to calculate daily dosage for all medications, combining a deep
learning-based named entity extractor with lexicon dictionaries and regular
expressions, achieving 0.98 precision and 0.95 recall on an expert-generated
dataset of 1,000 Sigs. We also analyze our expert-generated dataset, discuss
the challenges in understanding the complex information contained in Sigs, and
provide insights to guide future work in the general-purpose daily dosage
calculation task.
| 2,021 | Computation and Language |
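A minimal sketch of the rule-based side of daily-dosage calculation from a free-text Sig, in the spirit of the lexicon and regular-expression components mentioned in the preceding abstract; the frequency lexicon and patterns below are simplified assumptions, and the paper's neural entity extractor is omitted.

```python
# Sketch of rule-based daily-dosage calculation from a free-text Sig.
# Lexicon and regex are simplified assumptions, not the paper's resources.
import re

FREQ_PER_DAY = {"once": 1, "daily": 1, "twice": 2, "bid": 2, "tid": 3, "qid": 4,
                "three times": 3, "four times": 4}

def daily_dosage(sig: str) -> float | None:
    """Return units per day, e.g. 'take 2 tablets twice daily' -> 4.0."""
    sig = sig.lower()
    qty_match = re.search(r"(\d+(?:\.\d+)?)\s*(tablet|capsule|pill|mg|ml)", sig)
    if not qty_match:
        return None
    quantity = float(qty_match.group(1))
    frequency = 1
    for phrase, times in FREQ_PER_DAY.items():
        if phrase in sig:
            frequency = max(frequency, times)
    return quantity * frequency

print(daily_dosage("Take 2 tablets twice daily with food"))  # 4.0
```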
Investigating Label Bias in Beam Search for Open-ended Text Generation | Beam search is an effective and widely used decoding algorithm in many
sequence-to-sequence (seq2seq) text generation tasks. However, in open-ended
text generation, beam search is often found to produce repetitive and generic
texts, so sampling-based decoding algorithms like top-k sampling and nucleus
sampling are preferred instead. Standard seq2seq models suffer from label bias due
to their locally normalized probability formulation. This paper provides
empirical evidence that label bias is a major reason for such degenerate
behaviors of beam search. By combining locally normalized maximum likelihood
estimation and globally normalized sequence-level training, label bias can be
reduced with almost no sacrifice in perplexity. To quantitatively measure label
bias, we test the model's ability to discriminate the groundtruth text and a
set of context-agnostic distractors. We conduct experiments on large-scale
response generation datasets. Results show that beam search can produce more
diverse and meaningful texts with our approach, in terms of both automatic and
human evaluation metrics. Our analysis also suggests several future research
directions towards the grand challenge of open-ended text generation.
| 2,020 | Computation and Language |
Intent Mining from past conversations for conversational agent | Conversational systems are of primary interest in the AI community. Chatbots
are increasingly being deployed to provide round-the-clock support and to
increase customer engagement. Many of the commercial bot building frameworks
follow a standard approach that requires one to build and train an intent model
to recognize a user input. Intent models are trained in a supervised setting
with a collection of textual utterance and intent label pairs. Gathering a
substantial and wide coverage of training data for different intents is a
bottleneck in the bot building process. Moreover, labeling hundreds to
thousands of conversations with intents is a time-consuming and
laborious job. In this paper, we present an intent discovery framework that
involves 4 primary steps: Extraction of textual utterances from a conversation
using a pre-trained domain agnostic Dialog Act Classifier (Data Extraction),
automatic clustering of similar user utterances (Clustering), manual annotation
of clusters with an intent label (Labeling) and propagation of intent labels to
the utterances from the previous step, which are not mapped to any cluster
(Label Propagation); to generate intent training data from raw conversations.
We have introduced a novel density-based clustering algorithm ITER-DBSCAN for
unbalanced data clustering. Subject matter experts (annotators with domain
expertise) manually look into the clustered user utterances and provide an
intent label for discovery. We conducted user studies to validate the
effectiveness of the trained intent model generated in terms of coverage of
intents, accuracy and time saving concerning manual annotation. Although the
system is developed for building an intent model for the conversational system,
this framework can also be used for a short text clustering or as a labeling
framework.
| 2,020 | Computation and Language |
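A minimal sketch of the clustering step of the pipeline described in the preceding abstract, with scikit-learn's standard DBSCAN over TF-IDF vectors standing in for the paper's ITER-DBSCAN variant; the utterances and hyperparameters are toy assumptions.

```python
# Sketch of clustering user utterances for intent discovery. Standard DBSCAN
# over TF-IDF vectors stands in for ITER-DBSCAN; values are illustrative.
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

utterances = [
    "I want to reset my password",
    "how do I change my password",
    "cancel my subscription please",
    "I'd like to cancel the plan",
    "what's the weather today",
]

vectors = TfidfVectorizer().fit_transform(utterances).toarray()
labels = DBSCAN(eps=0.9, min_samples=2, metric="cosine").fit_predict(vectors)

for utterance, label in zip(utterances, labels):
    print(label, utterance)   # label -1 marks noise; clusters go to annotators
```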
Robust Layout-aware IE for Visually Rich Documents with Pre-trained
Language Models | Many business documents processed in modern NLP and IR pipelines are visually
rich: in addition to text, their semantics can also be captured by visual
traits such as layout, format, and fonts. We study the problem of information
extraction from visually rich documents (VRDs) and present a model that
combines the power of large pre-trained language models and graph neural
networks to efficiently encode both textual and visual information in business
documents. We further introduce new fine-tuning objectives to improve in-domain
unsupervised fine-tuning to better utilize large amounts of unlabeled in-domain
data. We experiment on real world invoice and resume data sets and show that
the proposed method outperforms strong text-based RoBERTa baselines by 6.3%
absolute F1 on invoices and 4.7% absolute F1 on resumes. When evaluated in a
few-shot setting, our method requires up to 30x less annotation data than the
baseline to achieve the same level of performance at ~90% F1.
| 2,020 | Computation and Language |
Improving Segmentation for Technical Support Problems | Technical support problems are often long and complex. They typically contain
user descriptions of the problem, the setup, and steps for attempted
resolution. Often they also contain various non-natural language text elements
like outputs of commands, snippets of code, error messages or stack traces.
These elements contain potentially crucial information for problem resolution.
However, they cannot be correctly parsed by tools designed for natural
language. In this paper, we address the problem of segmentation for technical
support questions. We formulate the problem as a sequence labelling task, and
study the performance of state of the art approaches. We compare this against
an intuitive contextual sentence-level classification baseline, and a state of
the art supervised text-segmentation approach. We also introduce a novel
component of combining contextual embeddings from multiple language models
pre-trained on different data sources, which achieves a marked improvement over
using embeddings from a single pre-trained language model. Finally, we also
demonstrate the usefulness of such segmentation with improvements on the
downstream task of answer retrieval.
| 2,020 | Computation and Language |
Interacting with Explanations through Critiquing | Using personalized explanations to support recommendations has been shown to
increase trust and perceived quality. However, to actually obtain better
recommendations, there needs to be a means for users to modify the
recommendation criteria by interacting with the explanation. We present a novel
technique using aspect markers that learns to generate personalized
explanations of recommendations from review texts, and we show that human users
significantly prefer these explanations over those produced by state-of-the-art
techniques. Our work's most important innovation is that it allows users to
react to a recommendation by critiquing the textual explanation: removing
(symmetrically adding) certain aspects they dislike or that are no longer
relevant (symmetrically that are of interest). The system updates its user
model and the resulting recommendations according to the critique. This is
based on a novel unsupervised critiquing method for single- and multi-step
critiquing with textual explanations. Experiments on two real-world datasets
show that our system is the first to achieve good performance in adapting to
the preferences expressed in multi-step critiquing.
| 2,022 | Computation and Language |
Bootstrapping Named Entity Recognition in E-Commerce with Positive
Unlabeled Learning | Named Entity Recognition (NER) in domains like e-commerce is an understudied
problem due to the lack of annotated datasets. Recognizing novel entity types
in this domain, such as products, components, and attributes, is challenging
because of their linguistic complexity and the low coverage of existing
knowledge resources. To address this problem, we present a bootstrapped
positive-unlabeled learning algorithm that integrates domain-specific
linguistic features to quickly and efficiently expand the seed dictionary. The
model achieves an average F1 score of 72.02% on a novel dataset of product
descriptions, an improvement of 3.63% over a baseline BiLSTM classifier, and in
particular exhibits better recall (4.96% on average).
| 2,020 | Computation and Language |
Living Machines: A study of atypical animacy | This paper proposes a new approach to animacy detection, the task of
determining whether an entity is represented as animate in a text. In
particular, this work is focused on atypical animacy and examines the scenario
in which typically inanimate objects, specifically machines, are given animate
attributes. To address it, we have created the first dataset for atypical
animacy detection, based on nineteenth-century sentences in English, with
machines represented as either animate or inanimate. Our method builds on
recent innovations in language modeling, specifically BERT contextualized word
embeddings, to better capture fine-grained contextual properties of words. We
present a fully unsupervised pipeline, which can be easily adapted to different
contexts, and report its performance on an established animacy dataset and our
newly introduced resource. We show that our method provides a substantially
more accurate characterization of atypical animacy, especially when applied to
highly complex forms of language use.
| 2,020 | Computation and Language |
Prototypical Q Networks for Automatic Conversational Diagnosis and
Few-Shot New Disease Adaption | Spoken dialog systems have seen applications in many domains, including
medical for automatic conversational diagnosis. State-of-the-art dialog
managers are usually driven by deep reinforcement learning models, such as deep
Q networks (DQNs), which learn by interacting with a simulator to explore the
entire action space since real conversations are limited. However, the
DQN-based automatic diagnosis models do not achieve satisfactory performance
when adapted to new, unseen diseases with only a few training samples. In this
work, we propose the Prototypical Q Networks (ProtoQN) as the dialog manager
for the automatic diagnosis systems. The model calculates prototype embeddings
with real conversations between doctors and patients, learning from them and
simulator-augmented dialogs more efficiently. We create both supervised and
few-shot learning tasks with the Muzhi corpus. Experiments showed that the
ProtoQN significantly outperformed the baseline DQN model in both supervised
and few-shot learning scenarios, and achieved state-of-the-art few-shot
learning performance.
| 2,020 | Computation and Language |
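A minimal sketch of the prototype idea behind ProtoQN as described in the preceding abstract: per-class prototypes are mean embeddings of real support dialogues, and a new dialogue state is scored by negative distance to each prototype. The encoder, dimensions, and distance choice are assumptions for illustration.

```python
# Sketch of prototype construction and scoring in the style of prototypical
# networks; the dialogue/state encoder is assumed and omitted.
import torch

def build_prototypes(support_embeddings: torch.Tensor,
                     support_labels: torch.Tensor,
                     num_classes: int) -> torch.Tensor:
    """Mean-pool support embeddings per class -> (num_classes, hidden)."""
    hidden = support_embeddings.size(-1)
    prototypes = torch.zeros(num_classes, hidden)
    for c in range(num_classes):
        prototypes[c] = support_embeddings[support_labels == c].mean(dim=0)
    return prototypes

def prototype_scores(state: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Score each class by negative Euclidean distance to its prototype."""
    return -torch.cdist(state.unsqueeze(0), prototypes).squeeze(0)

support = torch.randn(20, 64)            # embeddings of real support dialogues
labels = torch.randint(0, 4, (20,))      # their class labels
protos = build_prototypes(support, labels, num_classes=4)
print(prototype_scores(torch.randn(64), protos))
```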
RUSSE'2020: Findings of the First Taxonomy Enrichment Task for the
Russian language | This paper describes the results of the first shared task on taxonomy
enrichment for the Russian language. The participants were asked to extend an
existing taxonomy with previously unseen words: for each new word their systems
should provide a ranked list of possible (candidate) hypernyms. In comparison
to the previous tasks for other languages, our competition has a more realistic
task setting: new words were provided without definitions. Instead, we provided
a textual corpus where these new terms occurred. For this evaluation campaign,
we developed a new evaluation dataset based on unpublished RuWordNet data. The
shared task features two tracks: "nouns" and "verbs". 16 teams participated in
the task, achieving strong results, with more than half of them outperforming
the provided baseline.
| 2,020 | Computation and Language |
End-to-end Named Entity Recognition from English Speech | Named entity recognition (NER) from text has been a widely studied problem
and usually extracts semantic information from text. Until now, NER from speech
is mostly studied in a two-step pipeline process that includes first applying
an automatic speech recognition (ASR) system on an audio sample and then
passing the predicted transcript to a NER tagger. In such cases, the error does
not propagate from one step to another as both the tasks are not optimized in
an end-to-end (E2E) fashion. Recent studies confirm that integrated approaches
(e.g., E2E ASR) outperform sequential ones (e.g., phoneme based ASR). In this
paper, we introduce a first publicly available NER annotated dataset for
English speech and present an E2E approach, which jointly optimizes the ASR and
NER tagger components. Experimental results show that the proposed E2E approach
outperforms the classical two-step approach. We also discuss how NER from
speech can be used to handle out of vocabulary (OOV) words in an ASR system.
| 2,020 | Computation and Language |
Low-Latency Sequence-to-Sequence Speech Recognition and Translation by
Partial Hypothesis Selection | Encoder-decoder models provide a generic architecture for
sequence-to-sequence tasks such as speech recognition and translation. While
offline systems are often evaluated on quality metrics like word error rates
(WER) and BLEU, latency is also a crucial factor in many practical use-cases.
We propose three latency reduction techniques for chunk-based incremental
inference and evaluate their efficiency in terms of accuracy-latency trade-off.
On the 300-hour How2 dataset, we reduce latency by 83% to 0.8 second by
sacrificing 1% WER (6% rel.) compared to offline transcription. Although our
experiments use the Transformer, the hypothesis selection strategies are
applicable to other encoder-decoder models. To avoid expensive re-computation,
we use a unidirectionally-attending encoder. After an adaptation procedure to
partial sequences, the unidirectional model performs on-par with the original
model. We further show that our approach is also applicable to low-latency
speech translation. On How2 English-Portuguese speech translation, we reduce
latency to 0.7 second (-84% rel.) while incurring a loss of 2.4 BLEU points (5%
rel.) compared to the offline system.
| 2,020 | Computation and Language |
Simplify-then-Translate: Automatic Preprocessing for Black-Box Machine
Translation | Black-box machine translation systems have proven incredibly useful for a
variety of applications yet by design are hard to adapt, tune to a specific
domain, or build on top of. In this work, we introduce a method to improve such
systems via automatic pre-processing (APP) using sentence simplification. We
first propose a method to automatically generate a large in-domain paraphrase
corpus through back-translation with a black-box MT system, which is used to
train a paraphrase model that "simplifies" the original sentence to be more
conducive for translation. The model is used to preprocess source sentences of
multiple low-resource language pairs. We show that this preprocessing leads to
better translation performance as compared to non-preprocessed source
sentences. We further perform side-by-side human evaluation to verify that
translations of the simplified sentences are better than the original ones.
Finally, we provide some guidance on recommended language pairs for generating
the simplification model corpora by investigating the relationship between ease
of translation of a language pair (as measured by BLEU) and quality of the
resulting simplification model from back-translations of this language pair (as
measured by SARI), and tie this into the downstream task of low-resource
translation.
| 2,020 | Computation and Language |
A Generative Approach to Titling and Clustering Wikipedia Sections | We evaluate the performance of transformer encoders with various decoders for
information organization through a new task: generation of section headings for
Wikipedia articles. Our analysis shows that decoders containing attention
mechanisms over the encoder output achieve high-scoring results by generating
extractive text. In contrast, a decoder without attention better facilitates
semantic encoding and can be used to generate section embeddings. We
additionally introduce a new loss function, which further encourages the
decoder to generate high-quality embeddings.
| 2,020 | Computation and Language |
Character-level Transformer-based Neural Machine Translation | Neural machine translation (NMT) is nowadays commonly applied at the subword
level, using byte-pair encoding. A promising alternative approach focuses on
character-level translation, which simplifies processing pipelines in NMT
considerably. This approach, however, must consider relatively longer
sequences, rendering the training process prohibitively expensive. In this
paper, we discuss a novel, Transformer-based approach, that we compare, both in
speed and in quality to the Transformer at subword and character levels, as
well as previously developed character-level models. We evaluate our models on
4 language pairs from WMT'15: DE-EN, CS-EN, FI-EN and RU-EN. The proposed novel
architecture can be trained on a single GPU and is 34% faster than the
character-level Transformer; still, the obtained results are at least on par
with it. In addition, our proposed model outperforms the subword-level model in
FI-EN and shows close results in CS-EN. To stimulate further research in this
area and close the gap with subword-level NMT, we make all our code and models
publicly available.
| 2,020 | Computation and Language |
Comparative Study of Machine Learning Models and BERT on SQuAD | This study aims to provide a comparative analysis of performance of certain
models popular in machine learning and the BERT model on the Stanford Question
Answering Dataset (SQuAD). The analysis shows that the BERT model, which was
once state-of-the-art on SQuAD, gives higher accuracy in comparison to other
models. However, BERT requires a greater execution time even when only 100
samples are used. This shows that achieving higher accuracy requires investing
more time in training. In the case of the classical machine
learning models, execution time on the full data is lower, but accuracy is
compromised.
| 2,020 | Computation and Language |
The Discussion Tracker Corpus of Collaborative Argumentation | Although Natural Language Processing (NLP) research on argument mining has
advanced considerably in recent years, most studies draw on corpora of
asynchronous and written texts, often produced by individuals. Few published
corpora of synchronous, multi-party argumentation are available. The Discussion
Tracker corpus, collected in American high school English classes, is an
annotated dataset of transcripts of spoken, multi-party argumentation. The
corpus consists of 29 multi-party discussions of English literature transcribed
from 985 minutes of audio. The transcripts were annotated for three dimensions
of collaborative argumentation: argument moves (claims, evidence, and
explanations), specificity (low, medium, high) and collaboration (e.g.,
extensions of and disagreements about others' ideas). In addition to providing
descriptive statistics on the corpus, we provide performance benchmarks and
associated code for predicting each dimension separately, illustrate the use of
the multiple annotations in the corpus to improve performance via multi-task
learning, and finally discuss other ways the corpus might be used to further
NLP research.
| 2,020 | Computation and Language |
SentPWNet: A Unified Sentence Pair Weighting Network for Task-specific
Sentence Embedding | Pair-based metric learning has been widely adopted to learn sentence
embedding in many NLP tasks such as semantic text similarity due to its
efficiency in computation. Most existing works employed a sequence encoder
model and utilized limited sentence pairs with a pair-based loss to learn
discriminating sentence representation. However, it is known that the sentence
representation can be biased when the sampled sentence pairs deviate from the
true distribution of all sentence pairs. In this paper, our theoretical
analysis shows that existing works severely suffered from a good pair sampling
and instance weighting strategy. Instead of one time pair selection and
learning on equal weighted pairs, we propose a unified locality weighting and
learning framework to learn task-specific sentence embedding. Our model,
SentPWNet, exploits the neighboring spatial distribution of each sentence as
locality weight to indicate the informative level of sentence pair. Such weight
is updated along with pair-loss optimization in each round, ensuring the model
keeps learning the most informative sentence pairs. Extensive experiments on
four publicly available datasets and a self-collected place search benchmark with
1.4 million places clearly demonstrate that our model consistently outperforms
existing sentence embedding methods with comparable efficiency.
| 2,020 | Computation and Language |
Towards Open Domain Event Trigger Identification using Adversarial
Domain Adaptation | We tackle the task of building supervised event trigger identification models
which can generalize better across domains. Our work leverages the adversarial
domain adaptation (ADA) framework to introduce domain-invariance. ADA uses
adversarial training to construct representations that are predictive for
trigger identification, but not predictive of the example's domain. It requires
no labeled data from the target domain, making it completely unsupervised.
Experiments with two domains (English literature and news) show that ADA leads
to an average F1 score improvement of 3.9 on out-of-domain data. Our best
performing model (BERT-A) reaches 44-49 F1 across both domains, using no
labeled target data. Preliminary experiments reveal that finetuning on 1%
labeled data, followed by self-training leads to substantial improvement,
reaching 51.5 and 67.2 F1 on literature and news respectively.
| 2,020 | Computation and Language |
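A minimal sketch of the adversarial training mechanism commonly used in the ADA framework referenced in the preceding abstract: a gradient reversal layer keeps the forward pass unchanged but flips gradients from a domain classifier, pushing the encoder toward domain-invariant features. Layer sizes are illustrative; the paper's exact architecture may differ.

```python
# Sketch of a gradient reversal layer for adversarial domain adaptation.
import torch
from torch import nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None   # flipped gradient for the encoder

encoder = nn.Linear(768, 256)        # stand-in for a BERT-based encoder
trigger_head = nn.Linear(256, 2)     # trigger vs. non-trigger
domain_head = nn.Linear(256, 2)      # e.g. literature vs. news

features = encoder(torch.randn(8, 768))
trigger_logits = trigger_head(features)                            # task branch
domain_logits = domain_head(GradientReversal.apply(features, 1.0)) # adversarial branch
```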
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks | Large pre-trained language models have been shown to store factual knowledge
in their parameters, and achieve state-of-the-art results when fine-tuned on
downstream NLP tasks. However, their ability to access and precisely manipulate
knowledge is still limited, and hence on knowledge-intensive tasks, their
performance lags behind task-specific architectures. Additionally, providing
provenance for their decisions and updating their world knowledge remain open
research problems. Pre-trained models with a differentiable access mechanism to
explicit non-parametric memory can overcome this issue, but have so far been
only investigated for extractive downstream tasks. We explore a general-purpose
fine-tuning recipe for retrieval-augmented generation (RAG) -- models which
combine pre-trained parametric and non-parametric memory for language
generation. We introduce RAG models where the parametric memory is a
pre-trained seq2seq model and the non-parametric memory is a dense vector index
of Wikipedia, accessed with a pre-trained neural retriever. We compare two RAG
formulations, one which conditions on the same retrieved passages across the
whole generated sequence, while the other can use different passages per token. We
fine-tune and evaluate our models on a wide range of knowledge-intensive NLP
tasks and set the state-of-the-art on three open domain QA tasks, outperforming
parametric seq2seq models and task-specific retrieve-and-extract architectures.
For language generation tasks, we find that RAG models generate more specific,
diverse and factual language than a state-of-the-art parametric-only seq2seq
baseline.
| 2,021 | Computation and Language |
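A minimal sketch of querying a RAG model through the Hugging Face transformers library, assuming the facebook/rag-sequence-nq checkpoint documented alongside the paper is available; the dummy retrieval index keeps the example lightweight at the cost of retrieval quality.

```python
# Sketch: run a RAG model with a small dummy retrieval index (the full
# Wikipedia index is large). Checkpoint name follows the transformers
# documentation; verify availability before relying on it.
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq",
                                         index_name="exact",
                                         use_dummy_dataset=True)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq",
                                                 retriever=retriever)

inputs = tokenizer("who wrote the origin of species", return_tensors="pt")
generated = model.generate(input_ids=inputs["input_ids"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```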
Transformer-based Context-aware Sarcasm Detection in Conversation
Threads from Social Media | We present a transformer-based sarcasm detection model that accounts for the
context from the entire conversation thread for more robust predictions. Our
model uses deep transformer layers to perform multi-head attentions among the
target utterance and the relevant context in the thread. The context-aware
models are evaluated on two datasets from social media, Twitter and Reddit, and
show 3.1% and 7.0% improvements over their baselines. Our best models give the
F1-scores of 79.0% and 75.0% for the Twitter and Reddit datasets respectively,
becoming one of the highest performing systems among 36 participants in this
shared task.
| 2,020 | Computation and Language |
From Witch's Shot to Music Making Bones -- Resources for Medical Laymen
to Technical Language and Vice Versa | Many people share information in social media or forums, such as the food
they eat, the sports activities they do, or the events they have visited. This also applies
to information about a person's health status. Information we share online
reveals, directly or indirectly, information about our lifestyle and health
situation and thus provides a valuable data resource. If we can take advantage
of that data, applications can be created that enable, e.g., the detection of
possible risk factors of diseases or adverse drug reactions to medications.
However, as most people are not medical experts, the language used may be
descriptive rather than the precise terminology that medical professionals use. To detect
and use this relevant information, layman language has to be translated and/or
linked to the corresponding medical concept. This work presents baseline data
sources in order to address this challenge for German. We introduce a new data
set which annotates medical laymen and technical expressions in a patient
forum, along with a set of medical synonyms and definitions, and present first
baseline results on the data.
| 2,020 | Computation and Language |
Jointly Encoding Word Confusion Network and Dialogue Context with BERT
for Spoken Language Understanding | Spoken Language Understanding (SLU) converts hypotheses from automatic speech
recognizers (ASR) into structured semantic representations. ASR recognition
errors can severely degrade the performance of the subsequent SLU module. To
address this issue, word confusion networks (WCNs) have been used to encode the
input for SLU, which contain richer information than 1-best or n-best
hypotheses list. To further eliminate ambiguity, the last system act of
dialogue context is also utilized as additional input. In this paper, a novel
BERT based SLU model (WCN-BERT SLU) is proposed to encode WCNs and the dialogue
context jointly. It can integrate both structural information and ASR posterior
probabilities of WCNs in the BERT architecture. Experiments on DSTC2, a
benchmark of SLU, show that the proposed method is effective and can outperform
previous state-of-the-art models significantly.
| 2,020 | Computation and Language |
A Question Type Driven and Copy Loss Enhanced Framework for
Answer-Agnostic Neural Question Generation | The answer-agnostic question generation is a significant and challenging
task, which aims to automatically generate questions for a given sentence but
without an answer. In this paper, we propose two new strategies to deal with
this task: question type prediction and copy loss mechanism. The question type
module is to predict the types of questions that should be asked, which allows
our model to generate multiple types of questions for the same source sentence.
The new copy loss enhances the original copy mechanism to make sure that every
important word in the source sentence has been copied when generating
questions. Our integrated model outperforms the state-of-the-art approach in
answer-agnostic question generation, achieving a BLEU-4 score of 13.9 on SQuAD.
Human evaluation further validates the high quality of our generated questions.
We will make our code publicly available for further research.
| 2,020 | Computation and Language |
Transformer VQ-VAE for Unsupervised Unit Discovery and Speech Synthesis:
ZeroSpeech 2020 Challenge | In this paper, we report our submitted system for the ZeroSpeech 2020
challenge on Track 2019. The main theme in this challenge is to build a speech
synthesizer without any textual information or phonetic labels. In order to
tackle those challenges, we build a system that must address two major
components such as 1) given speech audio, extract subword units in an
unsupervised way and 2) re-synthesize the audio from novel speakers. The system
also needs to balance the codebook performance between the ABX error rate and
the bitrate compression rate. Our main contribution here is we proposed
Transformer-based VQ-VAE for unsupervised unit discovery and Transformer-based
inverter for the speech synthesis given the extracted codebook. Additionally,
we also explored several regularization methods to improve performance even
further.
| 2,020 | Computation and Language |
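A minimal sketch of the vector-quantization step at the core of the system in the preceding abstract: each encoder frame is snapped to its nearest codebook entry, and the resulting indices serve as the discovered discrete units. Codebook size and dimensions are illustrative assumptions.

```python
# Sketch of VQ-VAE-style vector quantization for unit discovery.
import torch

def quantize(frames: torch.Tensor, codebook: torch.Tensor):
    """
    frames:   (time, dim) continuous encoder outputs for an utterance
    codebook: (num_codes, dim) learned embedding table
    Returns the discrete unit indices and their quantized vectors.
    """
    distances = torch.cdist(frames, codebook)     # (time, num_codes)
    indices = distances.argmin(dim=-1)            # discovered discrete units
    quantized = codebook[indices]                 # input to the decoder/inverter
    return indices, quantized

frames = torch.randn(100, 64)
codebook = torch.randn(256, 64)
units, quantized = quantize(frames, codebook)
print(units[:10])
```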
MASK: A flexible framework to facilitate de-identification of clinical
texts | Medical health records and clinical summaries contain a vast amount of
important information in textual form that can help advance research on
treatments, drugs and public health. However, the majority of this information
is not shared because it contains private information about patients, their
families, or the medical staff treating them. Regulations such as HIPAA in the US,
PHIPA in Canada and the GDPR regulate the protection, processing and distribution
of this information. If this information is de-identified and personal
details are replaced or redacted, it can be distributed to the research
community. In this paper, we present MASK, a software package that is designed
to perform the de-identification task. The software is able to perform named
entity recognition using some of the state-of-the-art techniques and then mask
or redact recognized entities. The user is able to select a named entity
recognition algorithm (currently implemented are two versions of CRF-based
techniques and BiLSTM-based neural network with pre-trained GLoVe and ELMo
embedding) and masking algorithm (e.g. shift dates, replace names/locations,
totally redact entity).
| 2,020 | Computation and Language |
Integrated Node Encoder for Labelled Textual Networks | Numerous works have explored content-enhanced network
embedding models, with little focus on the labelled information of nodes.
Although TriDNR leverages node labels by treating them as node attributes, it
fails to enrich unlabelled node vectors with the labelled information, which
leads to the weaker classification result on the test set in comparison to
existing unsupervised textual network embedding models. In this study, we
design an integrated node encoder (INE) for textual networks which is jointly
trained on the structure-based and label-based objectives. As a result, the
node encoder preserves the integrated knowledge of not only the network text
and structure, but also the labelled information. Furthermore, INE allows the
creation of label-enhanced vectors for unlabelled nodes by entering their node
contents. Our node embedding achieves state-of-the-art performances in the
classification task on two public citation networks, namely Cora and DBLP,
pushing benchmarks up by 10.0% and 12.1%, respectively, with the 70%
training ratio. Additionally, a feasible solution that generalizes our model
from textual networks to a broader range of networks is proposed.
| 2,022 | Computation and Language |