Titles | Abstracts | Years | Categories
---|---|---|---|
A Sentiment Analysis Dataset for Code-Mixed Malayalam-English | There is an increasing demand for sentiment analysis of text from social
media, which is mostly code-mixed. Systems trained on monolingual data fail for
code-mixed data due to the complexity of mixing at different levels of the
text. However, very few resources are available for code-mixed data to create
models specific for this data. Although much research in multilingual and
cross-lingual sentiment analysis has used semi-supervised or unsupervised
methods, supervised methods still perform better. Only a few datasets for
popular languages such as English-Spanish, English-Hindi, and English-Chinese
are available. There are no resources available for Malayalam-English
code-mixed data. This paper presents a new gold standard corpus for sentiment
analysis of code-mixed text in Malayalam-English annotated by voluntary
annotators, and it achieves a Krippendorff's alpha above 0.8. We use this new
corpus to provide a benchmark for sentiment
analysis in Malayalam-English code-mixed texts.
| 2020 | Computation and Language |
Dynamic Masking for Improved Stability in Spoken Language Translation | For spoken language translation (SLT) in live scenarios such as conferences,
lectures and meetings, it is desirable to show the translation to the user as
quickly as possible, avoiding an annoying lag between speaker and translated
captions. In other words, we would like low-latency, online SLT. If we assume a
pipeline of automatic speech recognition (ASR) and machine translation (MT)
then a viable approach to online SLT is to pair an online ASR system with a
retranslation strategy, where the MT system re-translates every update received
from ASR. However, this can result in annoying "flicker" as the MT system
updates its translation. A possible solution is to add a fixed delay, or "mask",
to the output of the MT system, but a fixed global mask introduces
undesirable latency to the output. We show how this mask can be set
dynamically, improving the latency-flicker trade-off without sacrificing
translation quality.
| 2021 | Computation and Language |
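A minimal sketch of the masking idea described above: a retranslating MT system can reduce flicker by displaying only the prefix of its current hypothesis that agrees with the previous one. This illustrates the general mechanism only, not the paper's dynamic mask predictor, and the German hypotheses are invented.

```python
def stable_prefix_len(prev_tokens, curr_tokens):
    """Length of the longest common prefix of two token lists."""
    n = 0
    for a, b in zip(prev_tokens, curr_tokens):
        if a != b:
            break
        n += 1
    return n

def masked_display(prev_hyp, curr_hyp, extra_mask=0):
    """Show the current hypothesis only up to where it agrees with the previous
    one, optionally hiding `extra_mask` additional tokens (a fixed mask).
    In practice the full hypothesis would be flushed once the segment ends."""
    prev, curr = prev_hyp.split(), curr_hyp.split()
    cut = max(0, stable_prefix_len(prev, curr) - extra_mask)
    return " ".join(curr[:cut])

# A toy stream of updates from a retranslating MT system.
updates = ["das ist", "das ist ein", "das ist ein Test", "das ist ein kleiner Test"]
prev = ""
for hyp in updates:
    print(repr(masked_display(prev, hyp)))  # the displayed text never retracts words
    prev = hyp
```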
Data Augmentation with Unsupervised Machine Translation Improves the
Structural Similarity of Cross-lingual Word Embeddings | Unsupervised cross-lingual word embedding (CLWE) methods learn a linear
transformation matrix that maps two monolingual embedding spaces that are
separately trained on monolingual corpora. These methods rely on the
assumption that the two embedding spaces are structurally similar, which does
not necessarily hold true in general. In this paper, we argue that using a
pseudo-parallel corpus generated by an unsupervised machine translation model
facilitates the structural similarity of the two embedding spaces and improves
the quality of CLWEs in the unsupervised mapping method. We show that our
approach outperforms other alternative approaches given the same amount of
data, and, through detailed analysis, we show that data augmentation with the
pseudo data from unsupervised machine translation is especially effective for
mapping-based CLWEs because (1) the pseudo data makes the source and target
corpora (partially) parallel; (2) the pseudo data contains information on the
original language that helps to learn similar embedding spaces between the
source and target languages.
| 2021 | Computation and Language |
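For context, the "linear transformation matrix" in mapping-based CLWE methods is usually obtained by solving an orthogonal Procrustes problem over a seed dictionary. A small numpy sketch of that standard mapping step with synthetic embeddings; the paper's pseudo-parallel data augmentation is not reproduced here.

```python
import numpy as np

def procrustes_map(X, Y):
    """Orthogonal W minimizing ||X @ W - Y||_F for row-aligned matrices X, Y
    (the mapping step shared by most mapping-based CLWE methods)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 300))                       # "source" embeddings
true_W = np.linalg.qr(rng.normal(size=(300, 300)))[0]    # a hidden orthogonal map
tgt = src @ true_W + 0.01 * rng.normal(size=src.shape)   # noisy "target" embeddings

W = procrustes_map(src, tgt)
print(np.linalg.norm(src @ W - tgt) / np.linalg.norm(tgt))  # small relative residual
```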
Linguistic Features for Readability Assessment | Readability assessment aims to automatically classify text by the level
appropriate for learning readers. Traditional approaches to this task utilize a
variety of linguistically motivated features paired with simple machine
learning models. More recent methods have improved performance by discarding
these features and utilizing deep learning models. However, it is unknown
whether augmenting deep learning models with linguistically motivated features
would improve performance further. This paper combines these two approaches
with the goal of improving overall model performance and addressing this
question. Evaluating on two large readability corpora, we find that, given
sufficient training data, augmenting deep learning models with linguistically
motivated features does not improve state-of-the-art performance. Our results
provide preliminary evidence for the hypothesis that the state-of-the-art deep
learning models represent linguistic features of the text related to
readability. Future research on the nature of representations formed in these
models can shed light on the learned features and their relations to
linguistically motivated ones hypothesized in traditional approaches.
| 2020 | Computation and Language |
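The "linguistically motivated features" contrasted with deep models above are typically simple surface statistics. A small illustrative sketch of a few common ones, including the classic Flesch Reading Ease formula (206.835 - 1.015*ASL - 84.6*ASW); this is not the paper's feature set, and the syllable counter is a crude heuristic.

```python
import re

def count_syllables(word):
    """Crude syllable estimate: count contiguous vowel groups."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)        # average sentence length (ASL)
    asw = syllables / len(words)             # average syllables per word (ASW)
    return {
        "avg_sentence_len": asl,
        "avg_syllables_per_word": asw,
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
        "flesch_reading_ease": 206.835 - 1.015 * asl - 84.6 * asw,
    }

print(readability_features("The cat sat on the mat. It was very happy there."))
```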
Learning to refer informatively by amortizing pragmatic reasoning | A hallmark of human language is the ability to effectively and efficiently
convey contextually relevant information. One theory for how humans reason
about language is presented in the Rational Speech Acts (RSA) framework, which
captures pragmatic phenomena via a process of recursive social reasoning
(Goodman & Frank, 2016). However, RSA represents ideal reasoning in an
unconstrained setting. We explore the idea that speakers might learn to
amortize the cost of RSA computation over time by directly optimizing for
successful communication with an internal listener model. In simulations with
grounded neural speakers and listeners across two communication game datasets
representing synthetic and human-generated data, we find that our amortized
model is able to quickly generate language that is effective and concise across
a range of contexts, without the need for explicit pragmatic reasoning.
| 2020 | Computation and Language |
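The Rational Speech Acts recursion that the paper proposes to amortize can be written in a few lines of numpy. A toy sketch with a hand-built truth-conditional lexicon; the grounded neural speakers/listeners and the amortization itself are not shown.

```python
import numpy as np

# Truth-conditional lexicon: rows are utterances, columns are referents.
lexicon = np.array([[1., 1., 0.],   # "glasses": true of referents 0 and 1
                    [0., 1., 1.],   # "hat":     true of referents 1 and 2
                    [1., 0., 0.]])  # "scarf":   true of referent 0 only
prior = np.ones(3) / 3              # uniform prior over referents
alpha, cost = 5.0, np.zeros(3)      # speaker rationality, utterance costs

def row_normalize(m):
    return m / m.sum(axis=1, keepdims=True)

L0 = row_normalize(lexicon * prior)                                # P_L0(referent | utterance)
S1 = row_normalize(np.exp(alpha * (np.log(L0.T + 1e-12) - cost)))  # P_S1(utterance | referent)
L1 = row_normalize(S1.T * prior)                                   # P_L1(referent | utterance)

# The pragmatic speaker prefers the unambiguous "scarf" for referent 0,
# even though "glasses" is also literally true of it.
print(S1.round(3))
```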
SANA: Sentiment Analysis on Newspapers comments in Algeria | It is now common to track public opinion through people's reactions to
current events. A typical source of such reactions is the comments on articles
published on newspaper websites. Sentiment analysis, or opinion mining, is an
emerging field whose purpose is to uncover the attitudes hidden in opinionated
texts. In this work we are interested in comments on Algerian newspaper
websites. To this end, two corpora were used: SANA and OCA. The SANA corpus was
created by collecting comments from three Algerian newspapers and annotating
them with two native speakers of Algerian Arabic, while OCA is a freely
available corpus for sentiment analysis. For classification we adopt support
vector machines, naive Bayes, and k-nearest neighbors. The obtained results are
promising and show the varying effects of stemming in this domain; k-nearest
neighbors also yields a notable improvement over the other classifiers, unlike
similar works where SVM is dominant. This study highlights the importance of
resources and methods dedicated to newspaper-comment sentiment analysis, which
we plan to pursue in future work.
| 2020 | Computation and Language |
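The classifier comparison described above (support vector machines, naive Bayes, k-nearest neighbors over comment text) can be outlined with scikit-learn. The snippet uses tiny English placeholder comments rather than the SANA/OCA corpora, so the scores are meaningless; it only shows the pipeline shape.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for annotated newspaper comments (1 = positive, 0 = negative).
comments = ["great decision by the government", "terrible article, totally biased",
            "very informative report", "waste of time, poorly written"] * 10
labels = [1, 0, 1, 0] * 10

for name, clf in [("SVM", LinearSVC()), ("NaiveBayes", MultinomialNB()),
                  ("kNN", KNeighborsClassifier(n_neighbors=3))]:
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, comments, labels, cv=3)   # accuracy per fold
    print(name, scores.mean().round(3))
```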
Recognizing Chinese Judicial Named Entity using BiLSTM-CRF | Named entity recognition (NER) plays an essential role in natural language
processing systems. Judicial NER is a fundamental component of judicial
information retrieval, entity relation extraction, and knowledge map building.
However, Chinese judicial NER remains more challenging due to the
characteristics of the Chinese language and the high accuracy requirements of
the judicial field. Thus, in this paper, we propose a deep learning-based method
named BiLSTM-CRF, which consists of a bi-directional long short-term memory
(BiLSTM) network and conditional random fields (CRF). To further improve
accuracy, we propose to use adaptive moment estimation (Adam) to optimize the
model. To validate our method, we perform experiments on judgment documents
covering commutation, parole, and temporary service outside prison, which are
acquired from China Judgments Online. Experimental results achieve an accuracy
of 0.876, a recall of 0.856, and an F1 score of 0.855, which suggests the
superiority of the proposed BiLSTM-CRF with the Adam optimizer.
| 2020 | Computation and Language |
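A compact sketch of a BiLSTM-CRF tagger of the kind described above, trained with Adam, in PyTorch. It assumes the third-party `pytorch-crf` package for the CRF layer and uses random toy tensors instead of the China Judgments Online documents; the authors' actual implementation may differ.

```python
import torch
import torch.nn as nn
from torchcrf import CRF   # pip install pytorch-crf (assumed dependency)

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(hidden_dim, num_tags)   # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, tokens, tags=None, mask=None):
        emissions = self.proj(self.lstm(self.emb(tokens))[0])
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)   # NLL for training
        return self.crf.decode(emissions, mask=mask)       # best tag sequences

model = BiLSTMCRF(vocab_size=5000, num_tags=7)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)        # Adam, as in the paper
x = torch.randint(1, 5000, (2, 12))                        # two toy "sentences"
y = torch.randint(0, 7, (2, 12))
mask = torch.ones_like(x, dtype=torch.bool)

loss = model(x, y, mask)
loss.backward()
opt.step()
print(model(x)[0][:5])   # predicted tag ids for the first five tokens
```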
Detecting Group Beliefs Related to 2018's Brazilian Elections in Tweets:
A Combined Study on Modeling Topics and Sentiment Analysis | 2018's Brazilian presidential elections highlighted the influence of
alternative media and social networks, such as Twitter. In this work, we
perform an analysis covering politically motivated discourses related to the
second round of the Brazilian elections. In order to verify whether similar
discourses reinforce group engagement with personal beliefs, we collected a set
of tweets related to political hashtags at that moment. To this end, we combined
a topic modeling approach with opinion mining techniques to analyze the
politically motivated discourses. Using SentiLex-PT, a Portuguese sentiment
lexicon, we extracted from the dataset the top 5 most frequent groups of words
related to opinions. Applying a bag-of-words model, cosine similarity was
computed between each opinion and the observed groups. This study allowed us to
observe a heavy use of passionate discourse in the digital political arena as a
form of appreciation of, and engagement with, groups that convey similar
beliefs.
| 2020 | Computation and Language |
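The similarity step described above (bag-of-words vectors plus cosine similarity between each opinion and the frequent word groups) is straightforward with scikit-learn. The texts below are English placeholders, not the Portuguese tweets from the study.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical opinion texts and frequent opinion-word groups.
opinions = ["the candidate will save the country", "this election is a total fraud"]
groups = ["save country great future hope", "fraud lies corruption total disaster"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(opinions + groups)            # shared bag-of-words space
sims = cosine_similarity(X[: len(opinions)], X[len(opinions):])
print(sims.round(2))   # one row per opinion, one column per word group
```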
BiERU: Bidirectional Emotional Recurrent Unit for Conversational
Sentiment Analysis | Sentiment analysis in conversations has gained increasing attention in recent
years due to the growing number of applications it can serve, e.g., sentiment
analysis, recommender systems, and human-robot interaction. The main difference
between conversational sentiment analysis and single sentence sentiment
analysis is the existence of context information which may influence the
sentiment of an utterance in a dialogue. How to effectively encode contextual
information in dialogues, however, remains a challenge. Existing approaches
employ complicated deep learning structures to distinguish different parties in
a conversation and then model the context information. In this paper, we
propose a fast, compact and parameter-efficient party-ignorant framework named
bidirectional emotional recurrent unit for conversational sentiment analysis.
In our system, a generalized neural tensor block followed by a two-channel
classifier is designed to perform context compositionality and sentiment
classification, respectively. Extensive experiments on three standard datasets
demonstrate that our model outperforms the state of the art in most cases.
| 2021 | Computation and Language |
Learning to Recognise Words using Visually Grounded Speech | We investigated word recognition in a Visually Grounded Speech model. The
model has been trained on pairs of images and spoken captions to create
visually grounded embeddings which can be used for speech to image retrieval
and vice versa. We investigate whether such a model can be used to recognise
words by embedding isolated words and using them to retrieve images of their
visual referents. We investigate the time-course of word recognition using a
gating paradigm and perform a statistical analysis to see whether well known
word competition effects in human speech processing influence word recognition.
Our experiments show that the model is able to recognise words, and the gating
paradigm reveals that words can be recognised from partial input as well and
that recognition is negatively influenced by word competition from the word
initial cohort.
| 2020 | Computation and Language |
Benchmarking BioRelEx for Entity Tagging and Relation Extraction | Extracting relationships and interactions between different biological
entities is still an extremely challenging problem, yet it has not received as
much attention as extraction in other, more generic domains. In addition to the
lack of annotated data, limited benchmarking is still a major reason for slow
progress. In order to fill this gap, we compare multiple existing entity and
relation extraction models over a recently introduced public dataset, BioRelEx
of sentences annotated with biological entities and relations. Our
straightforward benchmarking shows that span-based multi-task architectures
like DYGIE show 4.9% and 6% absolute improvements in entity tagging and
relation extraction, respectively, over the previous state of the art, and that
incorporating domain-specific information like embeddings pre-trained over
related domains boosts performance.
| 2020 | Computation and Language |
Improve Document Embedding for Text Categorization Through Deep Siamese
Neural Network | Due to the increasing amount of data on the internet, finding a
highly-informative, low-dimensional representation for text is one of the main
challenges for efficient natural language processing tasks including text
classification. This representation should capture the semantic information of
the text while retaining its relevance level for document classification. This
approach maps documents with similar topics to nearby regions of the vector
space. To obtain representations for long texts, we propose the use of deep
Siamese neural networks. To embed documents' topical relevance in the
distributed representation, we use a Siamese neural network to jointly learn
document representations. Our Siamese network consists of two multi-layer
perceptron sub-networks. We examine our representation for the text
categorization task on the BBC News dataset. The results show that the
proposed representations outperform the conventional and state-of-the-art
representations in the text classification task on this dataset.
| 2020 | Computation and Language |
Neural Entity Linking: A Survey of Models Based on Deep Learning | This survey presents a comprehensive description of recent neural entity
linking (EL) systems developed since 2015 as a result of the "deep learning
revolution" in natural language processing. Its goal is to systemize design
features of neural entity linking systems and compare their performance to the
remarkable classic methods on common benchmarks. This work distills a generic
architecture of a neural EL system and discusses its components, such as
candidate generation, mention-context encoding, and entity ranking, summarizing
prominent methods for each of them. The vast variety of modifications of this
general architecture is grouped by several common themes: joint entity mention
detection and disambiguation, models for global linking, domain-independent
techniques including zero-shot and distant supervision methods, and
cross-lingual approaches. Since many neural models take advantage of entity and
mention/context embeddings to represent their meaning, this work also overviews
prominent entity embedding techniques. Finally, the survey touches on
applications of entity linking, focusing on the recently emerged use-case of
enhancing deep pre-trained masked language models based on the Transformer
architecture.
| 2022 | Computation and Language |
"Judge me by my size (noun), do you?'' YodaLib: A Demographic-Aware
Humor Generation Framework | The subjective nature of humor makes computerized humor generation a
challenging task. We propose an automatic humor generation framework for
filling the blanks in Mad Libs stories, while accounting for the demographic
backgrounds of the desired audience. We collect a dataset consisting of such
stories, which are filled in and judged by carefully selected workers on Amazon
Mechanical Turk. We build upon the BERT platform to predict location-biased
word fillings in incomplete sentences, and we fine-tune BERT to classify
location-specific humor in a sentence. We leverage these components to produce
YodaLib, a fully-automated Mad Libs style humor generation framework, which
selects and ranks appropriate candidate words and sentences in order to
generate a coherent and funny story tailored to certain demographics. Our
experimental results indicate that YodaLib outperforms a previous
semi-automated approach proposed for this task, while also surpassing human
annotators in both qualitative and quantitative analyses.
| 2020 | Computation and Language |
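The word-filling component described above builds on BERT's masked-language-model head. A generic sketch with the Hugging Face `transformers` fill-mask pipeline; the demographic conditioning and humor ranking that make up YodaLib are not reproduced.

```python
from transformers import pipeline

# Generic masked-word filling with BERT; YodaLib additionally ranks candidates
# for humor and conditions on audience demographics, which is omitted here.
fill = pipeline("fill-mask", model="bert-base-uncased")
story = "Yesterday I ate a [MASK] sandwich and everyone laughed."
for cand in fill(story)[:3]:
    print(cand["token_str"], round(cand["score"], 3))
```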
Efficient Deployment of Conversational Natural Language Interfaces over
Databases | Many users communicate with chatbots and AI assistants in order to help them
with various tasks. A key component of the assistant is the ability to
understand and answer a user's natural language questions for
question-answering (QA). Because data is usually stored in a structured
manner, an essential step involves turning a natural language question into its
corresponding query language. However, in order to train most natural
language-to-query-language state-of-the-art models, a large amount of training
data is needed first. In most domains, this data is not available and
collecting such datasets for various domains can be tedious and time-consuming.
In this work, we propose a novel method for accelerating the training dataset
collection for developing the natural language-to-query-language machine
learning models. Our system allows one to generate conversational multi-turn
data, where multiple turns define a dialogue session, enabling one to better
utilize chatbot interfaces. We train two current state-of-the-art NL-to-QL
models on both SQL- and SPARQL-based datasets in order to showcase the
adaptability and efficacy of our created data.
| 2020 | Computation and Language |
BPGC at SemEval-2020 Task 11: Propaganda Detection in News Articles with
Multi-Granularity Knowledge Sharing and Linguistic Features based Ensemble
Learning | Propaganda spreads the ideology and beliefs of like-minded people,
brainwashing their audiences, and sometimes leading to violence. SemEval 2020
Task-11 aims to design automated systems for news propaganda detection. Task-11
consists of two sub-tasks, namely, Span Identification - given any news
article, the system tags those specific fragments which contain at least one
propaganda technique; and Technique Classification - correctly classify a given
propagandist statement amongst 14 propaganda techniques. For sub-task 1, we use
contextual embeddings extracted from pre-trained transformer models to
represent the text data at various granularities and propose a
multi-granularity knowledge sharing approach. For sub-task 2, we use an
ensemble of BERT and logistic regression classifiers with linguistic features.
Our results reveal that the linguistic features are strong indicators for
covering minority classes in a highly imbalanced dataset.
| 2020 | Computation and Language |
LRG at SemEval-2020 Task 7: Assessing the Ability of BERT and Derivative
Models to Perform Short-Edits based Humor Grading | In this paper, we assess the ability of BERT and its derivative models
(RoBERTa, DistilBERT, and ALBERT) for short-edits based humor grading. We test
these models for humor grading and classification tasks on the Humicroedit and
the FunLines dataset. We perform extensive experiments with these models to
test their language modeling and generalization abilities via zero-shot
inference and cross-dataset inference based approaches. Further, we also
inspect the role of self-attention layers in humor-grading by performing a
qualitative analysis over the self-attention weights from the final layer of
the trained BERT model. Our experiments show that all the pre-trained BERT
derivative models show significant generalization capabilities for
humor-grading related tasks.
| 2020 | Computation and Language |
CNRL at SemEval-2020 Task 5: Modelling Causal Reasoning in Language with
Multi-Head Self-Attention Weights based Counterfactual Detection | In this paper, we describe an approach for modelling causal reasoning in
natural language by detecting counterfactuals in text using multi-head
self-attention weights. We use pre-trained transformer models to extract
contextual embeddings and self-attention weights from the text. We show the use
of convolutional layers to extract task-specific features from these
self-attention weights. Further, we describe a fine-tuning approach with a
common base model for knowledge sharing between the two closely related
sub-tasks for counterfactual detection. We analyze and compare the performance
of various transformer models in our experiments. Finally, we perform a
qualitative analysis with the multi-head self-attention weights to interpret
our models' dynamics.
| 2020 | Computation and Language |
Neural Unsupervised Domain Adaptation in NLP---A Survey | Deep neural networks excel at learning from labeled data and achieve
state-of-the-art results on a wide array of Natural Language Processing tasks.
In contrast, learning from unlabeled data, especially under domain shift,
remains a challenge. Motivated by the latest advances, in this survey we review
neural unsupervised domain adaptation techniques which do not require labeled
target domain data. This is a more challenging yet a more widely applicable
setup. We outline methods, from early traditional non-neural methods to
pre-trained model transfer. We also revisit the notion of domain, and we
uncover a bias in the types of Natural Language Processing tasks that have
received the most attention. Lastly, we outline future directions, particularly
the broader need for out-of-distribution generalization in future NLP.
| 2020 | Computation and Language |
A Unified Feature Representation for Lexical Connotations | Ideological attitudes and stance are often expressed through subtle meanings
of words and phrases. Understanding these connotations is critical to
recognizing the cultural and emotional perspectives of the speaker. In this
paper, we use distant labeling to create a new lexical resource representing
connotation aspects for nouns and adjectives. Our analysis shows that it aligns
well with human judgments. Additionally, we present a method for creating
lexical representations that captures connotations within the embedding space
and show that using the embeddings provides a statistically significant
improvement on the task of stance detection when data is limited.
| 2021 | Computation and Language |
Conversational Machine Comprehension: a Literature Review | Conversational Machine Comprehension (CMC), a research track in
conversational AI, expects the machine to understand an open-domain natural
language text and thereafter engage in a multi-turn conversation to answer
questions related to the text. While most of the research in Machine Reading
Comprehension (MRC) revolves around single-turn question answering (QA),
multi-turn CMC has recently gained prominence, thanks to the advancement in
natural language understanding via neural language models such as BERT and the
introduction of large-scale conversational datasets such as CoQA and QuAC. The
rise in interest has, however, led to a flurry of concurrent publications, each
with a different yet structurally similar modeling approach and an inconsistent
view of the surrounding literature. With the volume of model submissions to
conversational datasets increasing every year, there exists a need to
consolidate the scattered knowledge in this domain to streamline future
research. This literature review attempts to provide a holistic overview of
CMC with an emphasis on the common trends across recently published models,
specifically in their approach to tackling conversational history. The review
synthesizes a generic framework for CMC models while highlighting the
differences in recent approaches and intends to serve as a compendium of CMC
for future researchers.
| 2021 | Computation and Language |
Stance in Replies and Quotes (SRQ): A New Dataset For Learning Stance in
Twitter Conversations | Automated ways to extract stance (denying vs. supporting opinions) from
conversations on social media are essential to advance opinion mining research.
Recently, there has been renewed excitement in the field as we see new models
attempting to improve the state-of-the-art. However, for training and
evaluating the models, the datasets used are often small. Additionally, these
small datasets have uneven class distributions, i.e., only a tiny fraction of
the examples in the dataset have favoring or denying stances, and most other
examples have no clear stance. Moreover, the existing datasets do not
distinguish between the different types of conversations on social media (e.g.,
replying vs. quoting on Twitter). Because of this, models trained on one event
do not generalize to other events.
In the presented work, we create a new dataset by labeling stance in
responses to posts on Twitter (both replies and quotes) on controversial
issues. To the best of our knowledge, this is currently the largest
human-labeled stance dataset for Twitter conversations with over 5200 stance
labels. More importantly, we designed a tweet collection methodology that
favors the selection of denial-type responses. This class is expected to be
more useful in the identification of rumors and determining antagonistic
relationships between users. Moreover, we include many baseline models for
learning the stance in conversations and compare the performance of various
models. We show that combining data from replies and quotes decreases the
accuracy of models, indicating that the two modalities behave differently when
it comes to stance learning.
| 2020 | Computation and Language |
Online Versus Offline NMT Quality: An In-depth Analysis on
English-German and German-English | In this work, we conduct an evaluation study comparing offline and online
neural machine translation architectures. Two sequence-to-sequence models are
considered: the convolutional Pervasive Attention model (Elbayad et al. 2018)
and the attention-based Transformer (Vaswani et al. 2017). We investigate, for both
architectures, the impact of online decoding constraints on the translation
quality through a carefully designed human evaluation on English-German and
German-English language pairs, the latter being particularly sensitive to
latency constraints. The evaluation results allow us to identify the strengths
and shortcomings of each model when we shift to the online setup.
| 2020 | Computation and Language |
Efficient EUD Parsing | We present the system submission from the FASTPARSE team for the EUD Shared
Task at IWPT 2020. We engaged with the task by focusing on efficiency. For this
we considered training costs and inference efficiency. Our models are a
combination of distilled neural dependency parsers and a rule-based system that
projects UD trees into EUD graphs. We obtained an average ELAS of 74.04 for our
official submission, ranking 4th overall.
| 2020 | Computation and Language |
Rhetoric, Logic, and Dialectic: Advancing Theory-based Argument Quality
Assessment in Natural Language Processing | Though preceding work in computational argument quality (AQ) mostly focuses
on assessing overall AQ, researchers agree that writers would benefit from
feedback targeting individual dimensions of argumentation theory. However, a
large-scale theory-based corpus and corresponding computational models are
missing. We fill this gap by conducting an extensive analysis covering three
diverse domains of online argumentative writing and presenting GAQCorpus: the
first large-scale English multi-domain (community Q&A forums, debate forums,
review forums) corpus annotated with theory-based AQ scores. We then propose
the first computational approaches to theory-based assessment, which can serve
as strong baselines for future work. We demonstrate the feasibility of
large-scale AQ annotation, show that exploiting relations between dimensions
yields performance improvements, and explore the synergies between theory-based
prediction and practical AQ assessment.
| 2020 | Computation and Language |
Distilling Neural Networks for Greener and Faster Dependency Parsing | The carbon footprint of natural language processing research has been
increasing in recent years due to its reliance on large and inefficient neural
network implementations. Distillation is a network compression technique which
attempts to impart knowledge from a large model to a smaller one. We use
teacher-student distillation to improve the efficiency of the Biaffine
dependency parser which obtains state-of-the-art performance with respect to
accuracy and parsing speed (Dozat and Manning, 2017). When distilling to 20%
of the original model's trainable parameters, we only observe an average
decrease of about 1 point for both UAS and LAS across a number of diverse
Universal Dependency treebanks while being 2.30x (1.19x) faster than the
baseline model on CPU (GPU) at inference time. We also observe a small increase
in performance when compressing to 80% for some treebanks. Finally, through
distillation we attain a parser which is not only faster but also more accurate
than the fastest modern parser on the Penn Treebank.
| 2020 | Computation and Language |
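For reference, teacher-student distillation as used above generally optimizes a blend of a soft-target term (matching the teacher's temperature-scaled distribution) and the usual hard-target loss. A generic PyTorch sketch of that objective; the parser-specific details of the paper are not reproduced, and the tensor shapes are arbitrary.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=2.0, alpha=0.5):
    """Soft-target KL (teacher -> student, scaled by T^2) blended with the
    ordinary hard-target cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 20, requires_grad=True)   # e.g. per-token head scores
teacher_logits = torch.randn(8, 20)                       # frozen teacher predictions
targets = torch.randint(0, 20, (8,))                      # gold labels
print(distillation_loss(student_logits, teacher_logits, targets))
```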
Sarcasm Detection using Context Separators in Online Discourse | Sarcasm is an intricate form of speech, where meaning is conveyed implicitly.
Being a convoluted form of expression, detecting sarcasm is an arduous problem.
The difficulty of recognizing sarcasm causes many pitfalls, including
misunderstandings in everyday communication, which motivates an increasing
focus on automated sarcasm detection. In the second edition of the Figurative
Language Processing (FigLang 2020) workshop, the shared task of sarcasm
detection released two datasets, containing responses along with their context
sampled from Twitter and Reddit.
In this work, we use RoBERTa_large to detect sarcasm in both the datasets. We
further assert the importance of context in improving the performance of
contextual word embedding based models by using three different types of inputs
- Response-only, Context-Response, and Context-Response (Separated). We show
that our proposed architecture performs competitively for both the datasets. We
also show that the addition of a separation token between context and target
response results in an improvement of 5.13% in the F1-score in the Reddit
dataset.
| 2020 | Computation and Language |
Attention Word Embedding | Word embedding models learn semantically rich vector representations of words
and are widely used to initialize natural language processing (NLP) models. The
popular continuous bag-of-words (CBOW) model of word2vec learns a vector
embedding by masking a given word in a sentence and then using the other words
as a context to predict it. A limitation of CBOW is that it equally weights the
context words when making a prediction, which is inefficient, since some words
have higher predictive value than others. We tackle this inefficiency by
introducing the Attention Word Embedding (AWE) model, which integrates the
attention mechanism into the CBOW model. We also propose AWE-S, which
incorporates subword information. We demonstrate that AWE and AWE-S outperform
the state-of-the-art word embedding models both on a variety of word similarity
datasets and when used for initialization of NLP models.
| 2020 | Computation and Language |
Amnesic Probing: Behavioral Explanation with Amnesic Counterfactuals | A growing body of work makes use of probing to investigate the working of
neural models, often considered black boxes. Recently, an ongoing debate
emerged surrounding the limitations of the probing paradigm. In this work, we
point out the inability to infer behavioral conclusions from probing results
and offer an alternative method that focuses on how the information is being
used, rather than on what information is encoded. Our method, Amnesic Probing,
follows the intuition that the utility of a property for a given task can be
assessed by measuring the influence of a causal intervention that removes it
from the representation. Equipped with this new analysis tool, we can ask
questions that were not possible before, e.g. is part-of-speech information
important for word prediction? We perform a series of analyses on BERT to
answer these types of questions. Our findings demonstrate that conventional
probing performance is not correlated with task importance, and we call for
increased scrutiny of claims that draw behavioral or causal conclusions from
probing results.
| 2021 | Computation and Language |
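Amnesic Probing removes a property from representations with iterative nullspace projections and then measures how downstream behavior changes. A single-iteration numpy/scikit-learn sketch on synthetic vectors; the real method iterates the projection and operates on BERT representations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def nullspace_projection(W):
    """Projector onto the nullspace of the probe's weight rows."""
    row_basis = np.linalg.svd(W, full_matrices=False)[2]   # orthonormal row space
    return np.eye(W.shape[1]) - row_basis.T @ row_basis

rng = np.random.default_rng(0)
H = rng.normal(size=(2000, 64))                                 # stand-in representations
prop = (H[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)  # a linearly encoded property

probe = LogisticRegression(max_iter=1000).fit(H, prop)
P = nullspace_projection(probe.coef_)       # remove the direction the probe found
H_amnesic = H @ P                           # "amnesic" representations

reprobe = LogisticRegression(max_iter=1000).fit(H_amnesic, prop)
# Accuracy drops sharply after the projection; iterating removes the residue.
print(probe.score(H, prop), reprobe.score(H_amnesic, prop))
```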
Toxicity Detection: Does Context Really Matter? | Moderation is crucial to promoting healthy on-line discussions. Although
several `toxicity' detection datasets and models have been published, most of
them ignore the context of the posts, implicitly assuming that comments may be
judged independently. We investigate this assumption by focusing on two
questions: (a) does context affect the human judgement, and (b) does
conditioning on context improve performance of toxicity detection systems? We
experiment with Wikipedia conversations, limiting the notion of context to the
previous post in the thread and the discussion title. We find that context can
either amplify or mitigate the perceived toxicity of posts. Moreover, a small but
significant subset of manually labeled posts (5% in one of our experiments) ends
up having the opposite toxicity labels if the annotators are not provided with
context. Surprisingly, we also find no evidence that context actually improves
the performance of toxicity classifiers, having tried a range of classifiers
and mechanisms to make them context aware. This points to the need for larger
datasets of comments annotated in context. We make our code and data publicly
available.
| 2020 | Computation and Language |
A Neural Network Model of Lexical Competition during Infant Spoken Word
Recognition | Visual world studies show that upon hearing a word in a target-absent visual
context containing related and unrelated items, toddlers and adults briefly
direct their gaze towards phonologically related items, before shifting towards
semantically and visually related ones. We present a neural network model that
processes dynamic unfolding phonological representations and maps them to
static internal semantic and visual representations. The model, trained on
representations derived from real corpora, simulates this early phonological
over semantic/visual preference. Our results support the hypothesis that
incremental unfolding of a spoken word is in itself sufficient to account for
the transient preference for phonological competitors over both unrelated and
semantically and visually related ones. Phonological representations mapped
dynamically in a bottom-up fashion to semantic-visual representations capture
the early phonological preference effects reported in a visual world task. The
semantic-visual preference observed later in such a trial does not require
top-down feedback from a semantic or visual system.
| 2020 | Computation and Language |
DocBank: A Benchmark Dataset for Document Layout Analysis | Document layout analysis usually relies on computer vision models to
understand documents while ignoring textual information that is vital to
capture. Meanwhile, high quality labeled datasets with both visual and textual
information are still insufficient. In this paper, we present DocBank, a
benchmark dataset that contains 500K document pages with fine-grained
token-level annotations for document layout analysis. DocBank is constructed in
a simple yet effective way, with weak supervision from the LaTeX documents
available on arXiv.com. With DocBank, models from different modalities can be
compared fairly, and multi-modal approaches can be further investigated to
boost the performance of document layout analysis. We build
several strong baselines and manually split train/dev/test sets for evaluation.
Experiment results show that models trained on DocBank accurately recognize the
layout information for a variety of documents. The DocBank dataset is publicly
available at https://github.com/doc-analysis/DocBank.
| 2020 | Computation and Language |
Aligning Faithful Interpretations with their Social Attribution | We find that the requirement of model interpretations to be faithful is vague
and incomplete. With interpretation by textual highlights as a case-study, we
present several failure cases. Borrowing concepts from social science, we
identify that the problem is a misalignment between the causal chain of
decisions (causal attribution) and the attribution of human behavior to the
interpretation (social attribution). We re-formulate faithfulness as an
accurate attribution of causality to the model, and introduce the concept of
aligned faithfulness: faithful causal chains that are aligned with their
expected social behavior. The two steps of causal attribution and social
attribution together complete the process of explaining behavior. With this
formalization, we characterize various failures of misaligned faithful
highlight interpretations, and propose an alternative causal chain to remedy
the issues. Finally, we implement highlight explanations of the proposed causal
format using contrastive explanations.
| 2021 | Computation and Language |
Is 42 the Answer to Everything in Subtitling-oriented Speech
Translation? | Subtitling is becoming increasingly important for disseminating information,
given the enormous amounts of audiovisual content becoming available daily.
Although Neural Machine Translation (NMT) can speed up the process of
translating audiovisual content, large manual effort is still required for
transcribing the source language, and for spotting and segmenting the text into
proper subtitles. Creating proper subtitles in terms of timing and segmentation
highly depends on information present in the audio (utterance duration, natural
pauses). In this work, we explore two methods for applying Speech Translation
(ST) to subtitling: a) a direct end-to-end and b) a classical cascade approach.
We discuss the benefit of having access to the source language speech for
improving the conformity of the generated subtitles to the spatial and temporal
subtitling constraints and show that length is not the answer to everything in
the case of subtitling-oriented ST.
| 2020 | Computation and Language |
Emergence of Separable Manifolds in Deep Language Representations | Deep neural networks (DNNs) have shown much empirical success in solving
perceptual tasks across various cognitive modalities. While they are only
loosely inspired by the biological brain, recent studies report considerable
similarities between representations extracted from task-optimized DNNs and
neural populations in the brain. DNNs have subsequently become a popular model
class to infer computational principles underlying complex cognitive functions,
and in turn, they have also emerged as a natural testbed for applying methods
originally developed to probe information in neural populations. In this work,
we utilize mean-field theoretic manifold analysis, a recent technique from
computational neuroscience that connects geometry of feature representations
with linear separability of classes, to analyze language representations from
large-scale contextual embedding models. We explore representations from
different model families (BERT, RoBERTa, GPT, etc.) and find evidence for
emergence of linguistic manifolds across layer depth (e.g., manifolds for
part-of-speech tags), especially in ambiguous data (i.e., words with multiple
part-of-speech tags, or part-of-speech classes including many words). In
addition, we find that the emergence of linear separability in these manifolds
is driven by a combined reduction of manifolds' radius, dimensionality and
inter-manifold correlations.
| 2020 | Computation and Language |
Cascaded Text Generation with Markov Transformers | The two dominant approaches to neural text generation are fully
autoregressive models, using serial beam search decoding, and
non-autoregressive models, using parallel decoding with no output dependencies.
This work proposes an autoregressive model with sub-linear parallel time
generation. Noting that conditional random fields with bounded context can be
decoded in parallel, we propose an efficient cascaded decoding approach for
generating high-quality output. To parameterize this cascade, we introduce a
Markov transformer, a variant of the popular fully autoregressive model that
allows us to simultaneously decode with specific autoregressive context
cutoffs. This approach requires only a small modification from standard
autoregressive training, while showing competitive accuracy/speed tradeoff
compared to existing methods on five machine translation datasets.
| 2020 | Computation and Language |
NSTM: Real-Time Query-Driven News Overview Composition at Bloomberg | Millions of news articles from hundreds of thousands of sources around the
globe appear in news aggregators every day. Consuming such a volume of news
presents an almost insurmountable challenge. For example, a reader searching on
Bloomberg's system for news about the U.K. would find 10,000 articles on a
typical day. Apple Inc., the world's most journalistically covered company,
garners around 1,800 news articles a day.
We realized that a new kind of summarization engine was needed, one that
would condense large volumes of news into short, easy-to-absorb points. The
system would filter out noise and duplicates to identify and summarize key news
about companies, countries or markets.
When given a user query, Bloomberg's solution, Key News Themes (or NSTM),
leverages state-of-the-art semantic clustering techniques and novel
summarization methods to produce comprehensive, yet concise, digests to
dramatically simplify the news consumption process.
NSTM is available to hundreds of thousands of readers around the world and
serves thousands of requests daily with sub-second latency. At ACL 2020, we
will present a demo of NSTM.
| 2020 | Computation and Language |
Lexical Normalization for Code-switched Data and its Effect on
POS-tagging | Lexical normalization, the translation of non-canonical data to standard
language, has been shown to improve the performance of many natural language
processing tasks on social media. Yet, using multiple languages in one
utterance, also called code-switching (CS), is frequently overlooked by these
normalization systems, despite its common use in social media. In this paper,
we propose three normalization models specifically designed to handle
code-switched data which we evaluate for two language pairs: Indonesian-English
(Id-En) and Turkish-German (Tr-De). For the latter, we introduce novel
normalization layers and their corresponding language ID and POS tags for the
dataset, and evaluate the downstream effect of normalization on POS tagging.
Results show that our CS-tailored normalization models outperform Id-En state
of the art and Tr-De monolingual models, and lead to a 5.4% relative performance
increase for POS tagging as compared to unnormalized input.
| 2021 | Computation and Language |
An Effective Contextual Language Modeling Framework for Speech
Summarization with Augmented Features | Tremendous amounts of multimedia associated with speech information are
driving an urgent need to develop efficient and effective automatic
summarization methods. To this end, we have seen rapid progress in applying
supervised deep neural network-based methods to extractive speech
summarization. More recently, the Bidirectional Encoder Representations from
Transformers (BERT) model was proposed and has achieved record-breaking success
on many natural language processing (NLP) tasks such as question answering and
language understanding. In view of this, we in this paper contextualize and
enhance the state-of-the-art BERT-based model for speech summarization, while
its contributions are at least three-fold. First, we explore the incorporation
of confidence scores into sentence representations to see if such an attempt
could help alleviate the negative effects caused by imperfect automatic speech
recognition (ASR). Secondly, we also augment the sentence embeddings obtained
from BERT with extra structural and linguistic features, such as sentence
position and inverse document frequency (IDF) statistics. Finally, we validate
the effectiveness of our proposed method on a benchmark dataset, in comparison
to several classic and celebrated speech summarization methods.
| 2020 | Computation and Language |
Hybrid Improved Document-level Embedding (HIDE) | In recent times, word embeddings are taking a significant role in sentiment
analysis. As the generation of word embeddings needs huge corpora, many
applications use pretrained embeddings. In spite of this success, word
embeddings suffer from certain drawbacks: they do not capture the sentiment
information of a word, contextual information in terms of part-of-speech tags,
or domain-specific information. In this work we propose HIDE, a Hybrid Improved
Document-level Embedding, which incorporates domain information, part-of-speech
information and sentiment information into existing word embeddings such as
GloVe and Word2Vec, and combines the improved word embeddings into
document-level embeddings. Further, Latent Semantic Analysis (LSA) is used to
represent documents as vectors. HIDE is generated by combining LSA with the
document-level embeddings computed from the improved word embeddings. We test
HIDE on six different datasets and show considerable improvement in accuracy
over existing pretrained word vectors such as GloVe and Word2Vec. We further
compare our work with two existing document-level sentiment analysis
approaches; HIDE performs better than these existing systems.
| 2020 | Computation and Language |
Automatic Dialogic Instruction Detection for K-12 Online One-on-one
Classes | Online one-on-one classes are designed for a highly interactive and immersive
learning experience, and they demand a large number of qualified online
instructors. In this work, we develop six dialogic instructions to help teachers
achieve the benefits of the one-on-one learning paradigm. Moreover, we utilize
neural language models, i.e., long short-term memory (LSTM) networks, to detect
the above six instructions automatically. Experiments demonstrate that the LSTM approach
achieves AUC scores from 0.840 to 0.979 among all six types of instructions on
our real-world educational dataset.
| 2020 | Computation and Language |
CS-NLP team at SemEval-2020 Task 4: Evaluation of State-of-the-art NLP
Deep Learning Architectures on Commonsense Reasoning Task | In this paper, we investigate a commonsense inference task that unifies
natural language understanding and commonsense reasoning. We describe our
attempt at SemEval-2020 Task 4 competition: Commonsense Validation and
Explanation (ComVE) challenge. We discuss several state-of-the-art deep
learning architectures for this challenge. Our system uses prepared labeled
textual datasets that were manually curated for three different natural
language inference subtasks. The goal of the first subtask is to test whether a
model can distinguish between natural language statements that make sense and
those that do not make sense. We compare the performance of several language
models and fine-tuned classifiers. Then, we propose a method inspired by
question/answering tasks to treat a classification problem as a multiple choice
question task to boost the performance of our experimental results (96.06%),
which is significantly better than the baseline. For the second subtask, which
is to select the reason why a statement does not make sense, we rank within the
first six teams (93.7%) among 27 participants with very competitive results.
Our result for the last subtask, generating a reason against the nonsensical
statement, shows much potential for future research: we applied the powerful
generative language model GPT-2 and achieved a BLEU score of 6.1732, placing
among the first four teams.
| 2020 | Computation and Language |
A Thousand Words are Worth More Than One Recording: NLP Based Speaker
Change Point Detection | Speaker Diarization (SD) consists of splitting or segmenting an input audio
burst according to speaker identities. In this paper, we focus on a crucial
part of the SD problem, the audio segmentation process, and suggest a solution
to the Change Point Detection (CPD) problem. We empirically demonstrate a
negative correlation between an increase in the number of speakers and the
Recall and F1-score measurements. This negative correlation emerges from a
massive experimental evaluation process, which also accounts for the
superiority of our approach over recently developed voice-based solutions. In
order to overcome the number-of-speakers issue, we suggest a robust solution
based on a novel Natural Language Processing (NLP) technique and a metadata
feature extraction process, rather than vocal features alone. To the best of
our knowledge, we are the first to propose an intelligent NLP-based solution
that (I) tackles the CPD problem with a dataset in Hebrew, and (II) solves the
CPD variant of the SD problem. We empirically show, based on two distinct
datasets, that our method is able to accurately identify the CPDs in an audio
burst, with 82.12% and 89.02% success in the Recall and F1-score measurements,
respectively.
| 2020 | Computation and Language |
Word-Emoji Embeddings from large scale Messaging Data reflect real-world
Semantic Associations of Expressive Icons | We train word-emoji embeddings on large scale messaging data obtained from
the Jodel online social network. Our data set contains more than 40 million
sentences, of which 11 million sentences are annotated with a subset of the
Unicode 13.0 standard Emoji list. We explore semantic emoji associations
contained in this embedding by analyzing associations between emojis, between
emojis and text, and between text and emojis. Our investigations demonstrate
anecdotally that word-emoji embeddings trained on large scale messaging data
can reflect real-world semantic associations. To enable further research we
release the Jodel Emoji Embedding Dataset (JEED1488) containing 1488 emojis and
their embeddings along 300 dimensions.
| 2020 | Computation and Language |
Automatic Discovery of Novel Intents & Domains from Text Utterances | One of the primary tasks in Natural Language Understanding (NLU) is to
recognize the intents as well as domains of users' spoken and written language
utterances. Most existing research formulates this as a supervised
classification problem with a closed-world assumption, i.e. the domains or
intents to be identified are pre-defined or known beforehand. Real-world
applications however increasingly encounter dynamic, rapidly evolving
environments with newly emerging intents and domains, about which no
information is known during model training. We propose a novel framework,
ADVIN, to automatically discover novel domains and intents from large volumes
of unlabeled data. We first employ an open classification model to identify all
utterances potentially consisting of a novel intent. Next, we build a knowledge
transfer component with a pairwise margin loss function. It learns
discriminative deep features to group together utterances and discover multiple
latent intent categories within them in an unsupervised manner. We finally
hierarchically link mutually related intents into domains, forming an
intent-domain taxonomy. ADVIN significantly outperforms baselines on three
benchmark datasets, and real user utterances from a commercial voice-powered
agent.
| 2020 | Computation and Language |
Learning Constraints for Structured Prediction Using Rectifier Networks | Various natural language processing tasks are structured prediction problems
where outputs are constructed with multiple interdependent decisions. Past work
has shown that domain knowledge, framed as constraints over the output space,
can help improve predictive accuracy. However, designing good constraints often
relies on domain expertise. In this paper, we study the problem of learning
such constraints. We frame the problem as that of training a two-layer
rectifier network to identify valid structures or substructures, and show a
construction for converting a trained network into a system of linear
constraints over the inference variables. Our experiments on several NLP tasks
show that the learned constraints can improve the prediction accuracy,
especially when the number of training examples is small.
| 2020 | Computation and Language |
The 'Letter' Distribution in the Chinese Language | Corpus-based statistical analysis plays a significant role in linguistic
research, and ample evidence has shown that different languages exhibit some
common laws. Studies have found that letters in some alphabetic writing
languages have strikingly similar statistical usage frequency distributions.
Does this hold for Chinese, which employs ideogram writing? We obtained letter
frequency data of some alphabetic writing languages and found the common law of
the letter distributions. In addition, we collected Chinese literature corpora
for different historical periods from the Tang Dynasty to the present, and we
dismantled the Chinese written language into three kinds of basic particles:
characters, strokes and constructive parts. The results of the statistical
analysis showed that, in different historical periods, the intensity of the use
of basic particles in Chinese writing varied, but the form of the distribution
was consistent. In particular, the distributions of the Chinese constructive
parts are certainly consistent with those of alphabetic writing languages. This
study provides new evidence of the consistency of human languages.
| 2020 | Computation and Language |
Do All Good Actors Look The Same? Exploring News Veracity Detection
Across The U.S. and The U.K | A major concern with text-based news veracity detection methods is that they
may not generalize across countries and cultures. In this short paper, we
explicitly test news veracity models across news data from the United States
and the United Kingdom, demonstrating there is reason for concern of
generalizabilty. Through a series of testing scenarios, we show that text-based
classifiers perform poorly when trained on one country's news data and tested
on another. Furthermore, these same models have trouble classifying unseen,
unreliable news sources. In conclusion, we discuss implications of these
results and avenues for future work.
| 2020 | Computation and Language |
BERT-based Ensembles for Modeling Disclosure and Support in
Conversational Social Media Text | There is a growing interest in understanding how humans initiate and hold
conversations. The affective understanding of conversations focuses on the
problem of how speakers use emotions to react to a situation and to each other.
In the CL-Aff Shared Task, the organizers released Get it #OffMyChest dataset,
which contains Reddit comments from casual and confessional conversations,
labeled for their disclosure and supportiveness characteristics. In this paper,
we introduce a predictive ensemble model exploiting fine-tuned
contextualized word embeddings, RoBERTa and ALBERT. We show that our model
outperforms the base models in all considered metrics, achieving an improvement
of 3% in the F1 score. We further conduct statistical analysis and outline
deeper insights into the given dataset while providing a new characterization
of impact for the dataset.
| 2020 | Computation and Language |
An Effectiveness Metric for Ordinal Classification: Formal Properties
and Experimental Results | In Ordinal Classification tasks, items have to be assigned to classes that
have a relative ordering, such as positive, neutral, negative in sentiment
analysis. Remarkably, the most popular evaluation metrics for ordinal
classification tasks either ignore relevant information (for instance,
precision/recall on each of the classes ignores their relative ordering) or
assume additional information (for instance, Mean Average Error assumes
absolute distances between classes). In this paper we propose a new metric for
Ordinal Classification, the Closeness Evaluation Measure, which is rooted in
Measurement Theory and Information Theory. Our theoretical analysis and
experimental results over both synthetic data and data from NLP shared tasks
indicate that the proposed metric captures quality aspects from different
traditional tasks simultaneously. In addition, it generalizes some popular
classification (nominal scale) and error minimization (interval scale) metrics,
depending on the measurement scale in which it is instantiated.
| 2020 | Computation and Language |
Leveraging Affective Bidirectional Transformers for Offensive Language
Detection | Social media are pervasive in our lives, making it necessary to ensure safe
online experiences by detecting and removing offensive and hate speech. In this
work, we report our submission to the Offensive Language and hate-speech
Detection shared task organized with the 4th Workshop on Open-Source Arabic
Corpora and Processing Tools Arabic (OSACT4). We focus on developing purely
deep learning systems, without a need for feature engineering. For that
purpose, we develop an effective method for automatic data augmentation and
show the utility of training both offensive and hate speech models off (i.e.,
by fine-tuning) previously trained affective models (i.e., sentiment and
emotion). Our best models are significantly better than a vanilla BERT model,
with 89.60% acc (82.31% macro F1) for hate speech and 95.20% acc (70.51% macro
F1) on official TEST data.
| 2,020 | Computation and Language |
Context-based Transformer Models for Answer Sentence Selection | An important task for the design of Question Answering systems is the
selection of the sentence containing (or constituting) the answer from
documents relevant to the asked question. Most previous work has only used the
target sentence to compute its score with the question as the models were not
powerful enough to also effectively encode additional contextual information.
In this paper, we analyze the role of the contextual information in the
sentence selection task, proposing a Transformer based architecture that
leverages two types of contexts, local and global. The former describes the
paragraph containing the sentence, aiming at solving implicit references,
whereas the latter describes the entire document containing the candidate
sentence, providing content-based information. The results on three different
benchmarks show that the combination of local and global contexts in a
Transformer model significantly improves the accuracy in Answer Sentence
Selection.
| 2,020 | Computation and Language |
A Survey of Neural Networks and Formal Languages | This report is a survey of the relationships between various state-of-the-art
neural network architectures and formal languages as structured, for example,
by the Chomsky Language Hierarchy. Of particular interest are the abilities of
a neural architecture to represent, recognize and generate words from a
specific language by learning from samples of the language.
| 2,020 | Computation and Language |
A Pairwise Probe for Understanding BERT Fine-Tuning on Machine Reading
Comprehension | Pre-trained models have brought significant improvements to many NLP tasks
and have been extensively analyzed. But little is known about the effect of
fine-tuning on specific tasks. Intuitively, people may agree that a pre-trained
model already learns semantic representations of words (e.g. synonyms are
closer to each other) and fine-tuning further improves its capabilities which
require more complicated reasoning (e.g. coreference resolution, entity
boundary detection, etc). However, how to verify these arguments analytically
and quantitatively is a challenging task, and few works focus on this
topic. In this paper, inspired by the observation that most probing tasks
involve identifying matched pairs of phrases (e.g. coreference requires
matching an entity and a pronoun), we propose a pairwise probe to understand
BERT fine-tuning on the machine reading comprehension (MRC) task. Specifically,
we identify five phenomena in MRC. According to pairwise probing tasks, we
compare the performance of each layer's hidden representation of pre-trained
and fine-tuned BERT. The proposed pairwise probe alleviates the problem of
distraction from inaccurate model training and makes a robust and quantitative
comparison. Our experimental analysis leads to highly confident conclusions:
(1) Fine-tuning has little effect on the fundamental and low-level information
and general semantic tasks. (2) For specific abilities required for downstream
tasks, fine-tuned BERT is better than pre-trained BERT and such gaps are
obvious after the fifth layer.
| 2,020 | Computation and Language |
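A minimal sketch of the layer-wise comparison the probe relies on, using the Hugging Face transformers API: hidden states are extracted per layer from a pre-trained and a (nominally) fine-tuned BERT for the same input and compared. The second checkpoint name and the cosine-similarity comparison are illustrative assumptions; the paper's pairwise probe instead trains classifiers on matched phrase pairs.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
pretrained = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
# Stand-in for an MRC-fine-tuned checkpoint; replace with your own fine-tuned model path.
finetuned = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("The senator said she would vote against the bill.", return_tensors="pt")
with torch.no_grad():
    h_pre = pretrained(**inputs).hidden_states  # tuple: embedding layer + one tensor per layer
    h_fin = finetuned(**inputs).hidden_states

for layer, (a, b) in enumerate(zip(h_pre, h_fin)):
    # Mean-pooled sentence representation per layer; cosine similarity as a crude probe.
    sim = torch.cosine_similarity(a.mean(dim=1), b.mean(dim=1)).item()
    print(f"layer {layer}: cosine similarity = {sim:.3f}")
```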
Embeddings of Label Components for Sequence Labeling: A Case Study of
Fine-grained Named Entity Recognition | In general, the labels used in sequence labeling consist of different types
of elements. For example, IOB-format entity labels, such as B-Person and
I-Person, can be decomposed into span (B and I) and type information (Person).
However, while most sequence labeling models do not consider such label
components, the shared components across labels, such as Person, can be
beneficial for label prediction. In this work, we propose to integrate label
component information as embeddings into models. Through experiments on English
and Japanese fine-grained named entity recognition, we demonstrate that the
proposed method improves performance, especially for instances with
low-frequency labels.
| 2,020 | Computation and Language |
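A hedged sketch of the label-decomposition idea: an IOB label such as B-Person is split into its span part (B) and type part (Person), each mapped to its own embedding and concatenated, so B-Person and I-Person share the Person component. The vocabularies, dimensions, and concatenation choice are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

spans = {"B": 0, "I": 1, "O": 2}
types = {"Person": 0, "Location": 1, "NONE": 2}  # "O" has no type; map it to NONE

class LabelComponentEmbedding(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.span_emb = nn.Embedding(len(spans), dim)
        self.type_emb = nn.Embedding(len(types), dim)

    def forward(self, label: str) -> torch.Tensor:
        span, _, typ = label.partition("-")   # "B-Person" -> ("B", "Person"); "O" -> ("O", "")
        typ = typ or "NONE"
        s = self.span_emb(torch.tensor(spans[span]))
        t = self.type_emb(torch.tensor(types[typ]))
        return torch.cat([s, t], dim=-1)      # the "Person" component is shared across B-/I- labels

emb = LabelComponentEmbedding()
print(emb("B-Person").shape, emb("I-Person").shape, emb("O").shape)  # torch.Size([64]) each
```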
Enhanced Universal Dependency Parsing with Second-Order Inference and
Mixture of Training Data | This paper presents the system used in our submission to the \textit{IWPT
2020 Shared Task}. Our system is a graph-based parser with second-order
inference. For the low-resource Tamil corpus, we specifically mixed the training
data of Tamil with that of other languages and significantly improved the
performance on Tamil. Due to our misunderstanding of the submission
requirements, we submitted graphs that are not connected, which left our system
ranked only \textbf{6th} out of 10 teams. However, after fixing this problem, our
system scores 0.6 ELAS higher than the team ranked \textbf{1st} in the official results.
| 2,021 | Computation and Language |
Analyzing the Quality and Stability of a Streaming End-to-End On-Device
Speech Recognizer | The demand for fast and accurate incremental speech recognition increases as
the applications of automatic speech recognition (ASR) proliferate. Incremental
speech recognizers output chunks of partially recognized words while the user
is still talking. Partial results can be revised before the ASR finalizes its
hypothesis, causing instability issues. We analyze the quality and stability of
on-device streaming end-to-end (E2E) ASR models. We first introduce a novel set
of metrics that quantify the instability at word and segment levels. We study
the impact of several model training techniques that improve E2E model
qualities but degrade model stability. We categorize the causes of instability
and explore various solutions to mitigate them in a streaming E2E ASR system.
Index Terms: ASR, stability, end-to-end, text normalization, on-device, RNN-T
| 2,020 | Computation and Language |
BERT Based Multilingual Machine Comprehension in English and Hindi | Multilingual Machine Comprehension (MMC) is a Question-Answering (QA)
sub-task that involves quoting the answer for a question from a given snippet,
where the question and the snippet can be in different languages. Recently
released multilingual variant of BERT (m-BERT), pre-trained with 104 languages,
has performed well in both zero-shot and fine-tuned settings for multilingual
tasks; however, it has not been used for English-Hindi MMC yet. We, therefore,
present in this article our experiments with m-BERT for MMC in zero-shot,
monolingual (e.g., Hindi Question-Hindi Snippet) and cross-lingual (e.g.,
English Question-Hindi Snippet) fine-tuned setups. These model variants are
evaluated on all possible multilingual settings and results are compared
against the current state-of-the-art sequential QA system for these languages.
Experiments show that m-BERT, with fine-tuning, improves performance on all
evaluation settings across both the datasets used by the prior model, therefore
establishing m-BERT based MMC as the new state-of-the-art for English and
Hindi. We also publish our results on an extended version of the recently
released XQuAD dataset, which we propose to use as the evaluation benchmark for
future research.
| 2,020 | Computation and Language |
An Empirical Methodology for Detecting and Prioritizing Needs during
Crisis Events | In times of crisis, identifying the essential needs is a crucial step to
providing appropriate resources and services to affected entities. Social media
platforms such as Twitter contain a vast amount of information about the general
public's needs. However, the sparsity of the information as well as the amount
of noisy content present a challenge to practitioners to effectively identify
shared information on these platforms. In this study, we propose two novel
methods for two distinct but related needs detection tasks: the identification
of 1) a list of resources needed ranked by priority, and 2) sentences that
specify who-needs-what resources. We evaluated our methods on a set of tweets
about the COVID-19 crisis. For task 1 (detecting top needs), we compared our
results against two given lists of resources and achieved 64% precision. For
task 2 (detecting who-needs-what), we compared our results on a set of 1,000
annotated tweets and achieved a 68% F1-score.
| 2,020 | Computation and Language |
Situated and Interactive Multimodal Conversations | Next generation virtual assistants are envisioned to handle multimodal inputs
(e.g., vision, memories of previous interactions, in addition to the user's
utterances), and perform multimodal actions (e.g., displaying a route in
addition to generating the system's utterance). We introduce Situated
Interactive MultiModal Conversations (SIMMC) as a new direction aimed at
training agents that take multimodal actions grounded in a co-evolving
multimodal input context in addition to the dialog history. We provide two
SIMMC datasets totalling ~13K human-human dialogs (~169K utterances) using a
multimodal Wizard-of-Oz (WoZ) setup, on two shopping domains: (a) furniture
(grounded in a shared virtual environment) and, (b) fashion (grounded in an
evolving set of images). We also provide logs of the items appearing in each
scene, and contextual NLU and coreference annotations, using a novel and
unified framework of SIMMC conversational acts for both user and assistant
utterances. Finally, we present several tasks within SIMMC as objective
evaluation protocols, such as Structural API Prediction and Response
Generation. We benchmark a collection of existing models on these SIMMC tasks
as strong baselines, and demonstrate rich multimodal conversational
interactions. Our data, annotations, code, and models are publicly available.
| 2,020 | Computation and Language |
WikiBERT models: deep transfer learning for many languages | Deep neural language models such as BERT have enabled substantial recent
advances in many natural language processing tasks. Due to the effort and
computational cost involved in their pre-training, language-specific models are
typically introduced only for a small number of high-resource languages such as
English. While multilingual models covering large numbers of languages are
available, recent work suggests monolingual training can produce better models,
and our understanding of the tradeoffs between mono- and multilingual training
is incomplete. In this paper, we introduce a simple, fully automated pipeline
for creating language-specific BERT models from Wikipedia data and introduce 42
new such models, most for languages up to now lacking dedicated deep neural
language models. We assess the merits of these models using the
state-of-the-art UDify parser on Universal Dependencies data, contrasting
performance with results using the multilingual BERT model. We find that UDify
using WikiBERT models outperforms the parser using mBERT on average, with the
language-specific models showing substantially improved performance for some
languages, yet limited improvement or a decrease in performance for others. We
also present preliminary results as first steps toward an understanding of the
conditions under which language-specific models are most beneficial. All of the
methods and models introduced in this work are available under open licenses
from https://github.com/turkunlp/wikibert.
| 2,020 | Computation and Language |
A Contextual Hierarchical Attention Network with Adaptive Objective for
Dialogue State Tracking | Recent studies in dialogue state tracking (DST) leverage historical
information to determine states which are generally represented as slot-value
pairs. However, most of them are limited in efficiently exploiting relevant
context due to the lack of a powerful mechanism for modeling interactions
between the slot and the dialogue history. Besides, existing methods usually
ignore the slot imbalance problem and treat all slots indiscriminately, which
limits the learning of hard slots and eventually hurts overall performance. In
this paper, we propose to enhance DST by employing a contextual
hierarchical attention network to not only discern relevant information at both
the word level and the turn level but also learn contextual representations. We
further propose an adaptive objective to alleviate the slot imbalance problem by
dynamically adjusting the weights of different slots during training. Experimental
results show that our approach reaches 52.68% and 58.55% joint accuracy on
MultiWOZ 2.0 and MultiWOZ 2.1 datasets respectively and achieves new
state-of-the-art performance with considerable improvements (+1.24% and
+5.98%).
| 2,020 | Computation and Language |
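A minimal sketch of a slot-weighted objective in the spirit of the adaptive objective described above: slots with lower recent accuracy receive larger loss weights. The specific weighting rule (inverse accuracy, normalised across slots) is an illustrative assumption rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def adaptive_dst_loss(logits_per_slot, targets_per_slot, slot_accuracy):
    """logits_per_slot: dict slot -> (batch, num_values); targets_per_slot: dict slot -> (batch,);
    slot_accuracy: dict slot -> float in [0, 1], measured on a recent dev snapshot."""
    # Weight each slot inversely to its accuracy, normalised to sum to the number of slots.
    raw = {s: 1.0 - slot_accuracy[s] + 1e-3 for s in logits_per_slot}
    norm = len(raw) / sum(raw.values())
    total = 0.0
    for slot, logits in logits_per_slot.items():
        total = total + norm * raw[slot] * F.cross_entropy(logits, targets_per_slot[slot])
    return total

# Toy usage with two slots and random predictions.
logits = {"hotel-area": torch.randn(4, 5), "hotel-stars": torch.randn(4, 6)}
targets = {"hotel-area": torch.randint(0, 5, (4,)), "hotel-stars": torch.randint(0, 6, (4,))}
print(adaptive_dst_loss(logits, targets, {"hotel-area": 0.9, "hotel-stars": 0.5}))
```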
Exploring Cross-sentence Contexts for Named Entity Recognition with BERT | Named entity recognition (NER) is frequently addressed as a sequence
classification task where each input consists of one sentence of text. It is
nevertheless clear that useful information for the task can often be found
outside of the scope of a single-sentence context. Recently proposed
self-attention models such as BERT can both efficiently capture long-distance
relationships in input as well as represent inputs consisting of several
sentences, creating new opportunities for approaches that incorporate
cross-sentence information in natural language processing tasks. In this paper,
we present a systematic study exploring the use of cross-sentence information
for NER using BERT models in five languages. We find that adding context in the
form of additional sentences to BERT input systematically increases NER
performance on all of the tested languages and models. Including multiple
sentences in each input also allows us to study the predictions of the same
sentences in different contexts. We propose a straightforward method,
Contextual Majority Voting (CMV), to combine different predictions for
sentences and demonstrate this to further increase NER performance with BERT.
Our approach does not require any changes to the underlying BERT architecture,
rather relying on restructuring examples for training and prediction.
Evaluation on established datasets, including the CoNLL'02 and CoNLL'03 NER
benchmarks, demonstrates that our proposed approach can improve on the
state-of-the-art NER results on English, Dutch, and Finnish, achieves the best
reported BERT-based results on German, and is on par with performance reported
with other BERT-based approaches in Spanish. We release all methods implemented
in this work under open licenses.
| 2,020 | Computation and Language |
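A small sketch of Contextual Majority Voting as described above: the same sentence is tagged inside several different multi-sentence inputs, and the final tag of each token is the most frequent prediction across those contexts. Tie-breaking and any vote weighting the paper may use are not modelled here.

```python
from collections import Counter

def contextual_majority_vote(predictions_per_context):
    """predictions_per_context: list of tag sequences for the same sentence,
    one sequence per context the sentence appeared in (all of equal length)."""
    n_tokens = len(predictions_per_context[0])
    voted = []
    for i in range(n_tokens):
        tags = [ctx[i] for ctx in predictions_per_context]
        voted.append(Counter(tags).most_common(1)[0][0])  # most frequent tag wins
    return voted

# The same 4-token sentence predicted in three different sentence windows.
preds = [["B-PER", "I-PER", "O", "O"],
         ["B-PER", "I-PER", "O", "B-LOC"],
         ["B-PER", "O",     "O", "B-LOC"]]
print(contextual_majority_vote(preds))  # ['B-PER', 'I-PER', 'O', 'B-LOC']
```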
A Unified Dual-view Model for Review Summarization and Sentiment
Classification with Inconsistency Loss | Acquiring accurate summarization and sentiment from user reviews is an
essential component of modern e-commerce platforms. Review summarization aims
at generating a concise summary that describes the key opinions and sentiment
of a review, while sentiment classification aims to predict a sentiment label
indicating the sentiment attitude of a review. To effectively leverage the
shared sentiment information in both review summarization and sentiment
classification tasks, we propose a novel dual-view model that jointly improves
the performance of these two tasks. In our model, an encoder first learns a
context representation for the review, then a summary decoder generates a
review summary word by word. After that, a source-view sentiment classifier
uses the encoded context representation to predict a sentiment label for the
review, while a summary-view sentiment classifier uses the decoder hidden
states to predict a sentiment label for the generated summary. During training,
we introduce an inconsistency loss to penalize the disagreement between these
two classifiers. It helps the decoder to generate a summary to have a
consistent sentiment tendency with the review and also helps the two sentiment
classifiers learn from each other. Experiment results on four real-world
datasets from different domains demonstrate the effectiveness of our model.
| 2,020 | Computation and Language |
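A hedged PyTorch sketch of an inconsistency loss between the two sentiment views: the source-view and summary-view class distributions are pulled together with a symmetric KL term. Whether the paper uses this exact divergence is an assumption; only the idea of penalising disagreement between the two classifiers is taken from the abstract.

```python
import torch
import torch.nn.functional as F

def inconsistency_loss(source_logits, summary_logits):
    p = F.log_softmax(source_logits, dim=-1)
    q = F.log_softmax(summary_logits, dim=-1)
    # Symmetric KL divergence between the two predicted sentiment distributions.
    kl_pq = F.kl_div(q, p, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(p, q, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

source_logits = torch.randn(8, 3)   # batch of 8 reviews, 3 sentiment classes
summary_logits = torch.randn(8, 3)
print(inconsistency_loss(source_logits, summary_logits))
```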
Training Multilingual Machine Translation by Alternately Freezing
Language-Specific Encoders-Decoders | We propose a modular architecture of language-specific encoder-decoders that
constitutes a multilingual machine translation system that can be incrementally
extended to new languages without the need for retraining the existing system
when adding new languages. Differently from previous works, we simultaneously
train $N$ languages in all translation directions by alternately freezing
encoder or decoder modules, which indirectly forces the system to train in a
common intermediate representation for all languages. Experimental results from
multilingual machine translation show that we can successfully train this
modular architecture improving on the initial languages while falling slightly
behind when adding new languages or doing zero-shot translation. Additional
comparison of the quality of sentence representation in the task of natural
language inference shows that the alternately freezing training is also
beneficial in this direction.
| 2,020 | Computation and Language |
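A minimal sketch of the alternate-freezing schedule: in one phase only the decoders receive gradient updates (encoders frozen), in the next only the encoders do. The module contents, phase length, and language set are illustrative placeholders, not the authors' configuration.

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool):
    for p in module.parameters():
        p.requires_grad = trainable

def apply_freezing_phase(encoders: dict, decoders: dict, step: int, phase_len: int = 1000):
    freeze_encoders = (step // phase_len) % 2 == 0
    for enc in encoders.values():
        set_trainable(enc, not freeze_encoders)
    for dec in decoders.values():
        set_trainable(dec, freeze_encoders)

# Toy language-specific modules standing in for real encoder/decoder stacks.
encoders = {lang: nn.Linear(512, 512) for lang in ("en", "de", "fr")}
decoders = {lang: nn.Linear(512, 512) for lang in ("en", "de", "fr")}
apply_freezing_phase(encoders, decoders, step=0)      # phase 0: encoders frozen, decoders update
apply_freezing_phase(encoders, decoders, step=1000)   # phase 1: decoders frozen, encoders update
print(any(p.requires_grad for p in encoders["en"].parameters()))  # True after phase 1
```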
DiscSense: Automated Semantic Analysis of Discourse Markers | Discourse markers ({\it by contrast}, {\it happily}, etc.) are words or
phrases that are used to signal semantic and/or pragmatic relationships between
clauses or sentences. Recent work has fruitfully explored the prediction of
discourse markers between sentence pairs in order to learn accurate sentence
representations, that are useful in various classification tasks. In this work,
we take another perspective: using a model trained to predict discourse markers
between sentence pairs, we predict plausible markers between sentence pairs
with a known semantic relation (provided by existing classification datasets).
These predictions allow us to study the link between discourse markers and the
semantic relations annotated in classification datasets. Handcrafted mappings
have been proposed between markers and discourse relations on a limited set of
markers and a limited set of categories, but there exist hundreds of discourse
markers expressing a wide variety of relations, and there is no consensus on
the taxonomy of relations between competing discourse theories (which are
largely built in a top-down fashion). By using an automatic prediction method
over existing semantically annotated datasets, we provide a bottom-up
characterization of discourse markers in English. The resulting dataset, named
DiscSense, is publicly available.
| 2,020 | Computation and Language |
Web Document Categorization Using Naive Bayes Classifier and Latent
Semantic Analysis | The rapid growth of web documents caused by heavy use of the World Wide Web
necessitates efficient techniques to classify documents on the web, since high
volumes of highly diverse data are produced every second. Automatically
classifying these growing amounts of web documents is one of the biggest
challenges facing us today. Probabilistic classification algorithms such as
Naive Bayes have become commonly used for web document classification, mainly
because of their relatively high classification accuracy in many application
areas, despite their limited support for the high-dimensional and sparse data
that is characteristic of textual representations. In addition, traditional
feature selection methods pay little attention to the semantic relations
between words when dealing with big data and large-scale web documents. To
address these problems, we propose a method for web document classification
that uses LSA to increase the similarity of documents within the same class and
improve classification precision. Using this approach, we design a faster and
more accurate classifier for web documents. Experimental results show that the
proposed preprocessing improves both the accuracy and the speed of Naive Bayes,
as indicated by the precision and recall metrics.
| 2,020 | Computation and Language |
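A hedged scikit-learn sketch of the described combination: TF-IDF features are projected with truncated SVD (the standard LSA construction) before a Naive Bayes classifier. GaussianNB is used here because LSA components can be negative; the paper's exact Naive Bayes variant and the chosen dimensionality are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.naive_bayes import GaussianNB

docs = ["stock markets fell sharply today",
        "the team won the championship final",
        "central bank raises interest rates",
        "star striker scores twice in derby"]
labels = ["business", "sports", "business", "sports"]

pipeline = make_pipeline(
    TfidfVectorizer(),
    TruncatedSVD(n_components=2, random_state=0),  # LSA: project into a dense latent topic space
    GaussianNB(),                                  # Gaussian variant, since LSA components can be negative
)
pipeline.fit(docs, labels)
print(pipeline.predict(["the striker scores in the final"]))
```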
Event Arguments Extraction via Dilate Gated Convolutional Neural Network
with Enhanced Local Features | Event Extraction plays an important role in information extraction for
understanding the world. Event extraction can be split into two subtasks: event
trigger extraction and event argument extraction. However, the F-score of event
argument extraction is much lower than that of event trigger extraction; in the
most recent work, event trigger extraction achieves 80.7%, while event argument
extraction achieves only 58%. In pipelined structures, the difficulty of event
argument extraction lies in its lack of classification features and its much
higher computational cost. In this work, we propose a novel event extraction
approach based on a multi-layer Dilate Gated Convolutional Neural Network
(EE-DGCNN), which has fewer parameters. In addition, enhanced local information
is incorporated into the word features to assign event argument roles for the
triggers predicted by the first subtask. Numerical experiments demonstrate a
significant performance improvement over state-of-the-art event extraction
approaches on real-world datasets. Further analysis of the extraction procedure
is presented, and experiments are conducted to analyze the factors related to
the performance improvement.
| 2,020 | Computation and Language |
On the Predictive Power of Neural Language Models for Human Real-Time
Comprehension Behavior | Human reading behavior is tuned to the statistics of natural language: the
time it takes human subjects to read a word can be predicted from estimates of
the word's probability in context. However, it remains an open question what
computational architecture best characterizes the expectations deployed in real
time by humans that determine the behavioral signatures of reading. Here we
test over two dozen models, independently manipulating computational
architecture and training dataset size, on how well their next-word
expectations predict human reading time behavior on naturalistic text corpora.
We find that across model architectures and training dataset sizes the
relationship between word log-probability and reading time is (near-)linear. We
next evaluate how features of these models determine their psychometric
predictive power, or ability to predict human reading behavior. In general, the
better a model's next-word expectations, the better its psychometric predictive
power. However, we find nontrivial differences across model architectures. For
any given perplexity, deep Transformer models and n-gram models generally show
superior psychometric predictive power over LSTM or structurally supervised
neural models, especially for eye movement data. Finally, we compare models'
psychometric predictive power to the depth of their syntactic knowledge, as
measured by a battery of syntactic generalization tests developed using methods
from controlled psycholinguistic experiments. Once perplexity is controlled
for, we find no significant relationship between syntactic knowledge and
predictive power. These results suggest that different approaches may be
required to best model human real-time language comprehension behavior in
naturalistic reading versus behavior for controlled linguistic materials
designed for targeted probing of syntactic knowledge.
| 2,020 | Computation and Language |
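An illustrative sketch of the linking hypothesis tested above: per-token surprisal from a neural language model is regressed against reading times. GPT-2 stands in for the Transformer models studied, and the reading times below are fabricated placeholders for real eye-tracking or self-paced-reading data.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from scipy import stats

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

ids = tokenizer("The horse raced past the barn fell", return_tensors="pt").input_ids
with torch.no_grad():
    logits = model(ids).logits
# Surprisal (in nats) of each token given its left context; the first token is skipped.
log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
surprisal = -log_probs[torch.arange(ids.shape[1] - 1), ids[0, 1:]]

# Fabricated placeholder reading times standing in for real behavioral measurements.
reading_times_ms = 250 + 35 * surprisal + 20 * torch.randn_like(surprisal)
fit = stats.linregress(surprisal.numpy(), reading_times_ms.numpy())
print(f"RT ~ {fit.slope:.1f} * surprisal + {fit.intercept:.1f}  (r = {fit.rvalue:.2f})")
```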
Nurse is Closer to Woman than Surgeon? Mitigating Gender-Biased
Proximities in Word Embeddings | Word embeddings are the standard model for semantic and syntactic
representations of words. Unfortunately, these models have been shown to
exhibit undesirable word associations resulting from gender, racial, and
religious biases. Existing post-processing methods for debiasing word
embeddings are unable to mitigate gender bias hidden in the spatial arrangement
of word vectors. In this paper, we propose RAN-Debias, a novel gender debiasing
methodology which not only eliminates the bias present in a word vector but
also alters the spatial distribution of its neighbouring vectors, achieving a
bias-free setting while maintaining minimal semantic offset. We also propose a
new bias evaluation metric - Gender-based Illicit Proximity Estimate (GIPE),
which measures the extent of undue proximity in word vectors resulting from the
presence of gender-based predilections. Experiments based on a suite of
evaluation metrics show that RAN-Debias significantly outperforms the
state-of-the-art in reducing proximity bias (GIPE) by at least 42.02%. It also
reduces direct bias, adding minimal semantic disturbance, and achieves the best
performance in a downstream application task (coreference resolution).
| 2,020 | Computation and Language |
The Typology of Polysemy: A Multilingual Distributional Framework | Lexical semantic typology has identified important cross-linguistic
generalizations about the variation and commonalities in polysemy
patterns---how languages package up meanings into words. Recent computational
research has enabled investigation of lexical semantics at a much larger scale,
but little work has explored lexical typology across semantic domains, nor the
factors that influence cross-linguistic similarities. We present a novel
computational framework that quantifies semantic affinity, the cross-linguistic
similarity of lexical semantics for a concept. Our approach defines a common
multilingual semantic space that enables a direct comparison of the lexical
expression of concepts across languages. We validate our framework against
empirical findings on lexical semantic typology at both the concept and domain
levels. Our results reveal an intricate interaction between semantic domains
and extra-linguistic factors, beyond language phylogeny, that co-shape the
typology of polysemy across languages.
| 2,020 | Computation and Language |
Automatic Text Summarization of COVID-19 Medical Research Articles using
BERT and GPT-2 | With the COVID-19 pandemic, there is a growing urgency for the medical community
to keep up with the accelerating growth in the new coronavirus-related
literature. As a result, the COVID-19 Open Research Dataset Challenge has
released a corpus of scholarly articles and is calling for machine learning
approaches to help bridge the gap between researchers and the rapidly
growing publications. Here, we take advantage of the recent advances in
pre-trained NLP models, BERT and OpenAI GPT-2, to solve this challenge by
performing text summarization on this dataset. We evaluate the results using
ROUGE scores and visual inspection. Our model provides abstractive and
comprehensive information based on keywords extracted from the original
articles. Our work can help the medical community by providing succinct
summaries of articles for which abstracts are not already available.
| 2,020 | Computation and Language |
Norm-Based Curriculum Learning for Neural Machine Translation | A neural machine translation (NMT) system is expensive to train, especially
in high-resource settings. As NMT architectures become deeper and wider,
this issue only gets worse. In this paper, we aim to improve the
efficiency of training an NMT by introducing a novel norm-based curriculum
learning method. We use the norm (aka length or module) of a word embedding as
a measure of 1) the difficulty of the sentence, 2) the competence of the model,
and 3) the weight of the sentence. The norm-based sentence difficulty takes
advantage of both linguistically motivated and model-based sentence
difficulties. It is easy to determine and contains learning-dependent features.
The norm-based model competence makes NMT learn the curriculum in a fully
automated way, while the norm-based sentence weight further enhances the
learning of the vector representation of the NMT. Experimental results for the
WMT'14 English-German and WMT'17 Chinese-English translation tasks demonstrate
that the proposed method outperforms strong baselines in terms of BLEU score
(+1.17/+1.56) and training speedup (2.22x/3.33x).
| 2,020 | Computation and Language |
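A hedged sketch of the norm-as-difficulty idea: each sentence is scored by the norms of its word embeddings and the training data is ordered from easy to hard. The aggregation (mean norm) and the toy embeddings are assumptions; the paper additionally derives model competence and sentence weights from norms.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "a": 4, "photosynthesis": 5, "quantum": 6}
embeddings = rng.normal(size=(len(vocab), 8))  # stand-in for trained source-side embeddings

def sentence_difficulty(sentence: str) -> float:
    # Mean word-embedding norm as a proxy for how hard the sentence is to learn.
    ids = [vocab[w] for w in sentence.split() if w in vocab]
    return float(np.linalg.norm(embeddings[ids], axis=1).mean()) if ids else 0.0

corpus = ["the cat sat on a", "quantum photosynthesis", "the cat"]
curriculum = sorted(corpus, key=sentence_difficulty)  # present easy (small norm) sentences first
for s in curriculum:
    print(f"{sentence_difficulty(s):.3f}  {s}")
```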
Exploiting Class Labels to Boost Performance on Embedding-based Text
Classification | Text classification is one of the most frequent tasks for processing textual
data, facilitating among others research from large-scale datasets. Embeddings
of different kinds have recently become the de facto standard as features used
for text classification. These embeddings have the capacity to capture meanings
of words inferred from occurrences in large external collections. While they
are built out of external collections, they are unaware of the distributional
characteristics of words in the classification dataset at hand, including most
importantly the distribution of words across classes in training data. To make
the most of these embeddings as features and to boost the performance of
classifiers using them, we introduce a weighting scheme, Term
Frequency-Category Ratio (TF-CR), which can weight high-frequency,
category-exclusive words higher when computing word embeddings. Our experiments
on eight datasets show the effectiveness of TF-CR, leading to improved
performance scores over the well-known weighting schemes TF-IDF and KLD as well
as over the absence of a weighting scheme in most cases.
| 2,020 | Computation and Language |
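An illustrative sketch of a TF-CR-style weight: for a word w and category c, the weight combines how frequent w is within c (term frequency) with how exclusive w is to c (category ratio), following the description in the abstract. The exact normalisation used in the paper is an assumption.

```python
from collections import Counter

def tf_cr_weights(docs, labels, category):
    in_cat = Counter(w for d, y in zip(docs, labels) if y == category for w in d.split())
    overall = Counter(w for d in docs for w in d.split())
    total_in_cat = sum(in_cat.values())
    # term frequency within the category * fraction of the word's occurrences in that category
    return {w: (in_cat[w] / total_in_cat) * (in_cat[w] / overall[w]) for w in in_cat}

docs = ["great phone great battery", "terrible battery", "great movie", "boring movie"]
labels = ["electronics", "electronics", "film", "film"]
weights = tf_cr_weights(docs, labels, "electronics")
print(sorted(weights.items(), key=lambda kv: -kv[1]))
# "battery" (frequent and category-exclusive) outweighs "great" (frequent but shared).
```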
Towards Large-Scale Data Mining for Data-Driven Analysis of Sign
Languages | Access to sign language data is far from adequate. We show that it is
possible to collect the data from social networking services such as TikTok,
Instagram, and YouTube by applying data filtering to enforce quality standards
and by discovering patterns in the filtered data, making it easier to analyse
and model. Using our data collection pipeline, we collect and examine the
interpretation of songs in both the American Sign Language (ASL) and the
Brazilian Sign Language (Libras). We explore their differences and similarities
by looking at the co-dependence of the orientation and location phonological
parameters.
| 2,020 | Computation and Language |
Transfer Learning for British Sign Language Modelling | Automatic speech recognition and spoken dialogue systems have made great
advances through the use of deep machine learning methods. This is partly due
to greater computing power but also through the large amount of data available
in common languages, such as English. Conversely, research in minority
languages, including sign languages, is hampered by the severe lack of data.
This has led to work on transfer learning methods, whereby a model developed
for one language is reused as the starting point for a model on a second
language, which is less resourced. In this paper, we examine two transfer
learning techniques of fine-tuning and layer substitution for language
modelling of British Sign Language. Our results show improvement in perplexity
when using transfer learning with standard stacked LSTM models, trained
initially on a large corpus of standard English from the Penn Treebank.
| 2,018 | Computation and Language |
Cross-model Back-translated Distillation for Unsupervised Machine
Translation | Recent unsupervised machine translation (UMT) systems usually employ three
main principles: initialization, language modeling and iterative
back-translation, though they may apply them differently. Crucially, iterative
back-translation and denoising auto-encoding for language modeling provide data
diversity to train the UMT systems. However, the gains from these
diversification processes have seemed to plateau. We introduce a novel
component to the standard UMT framework called Cross-model Back-translated
Distillation (CBD), which aims to induce another level of data diversification that
existing principles lack. CBD is applicable to all previous UMT approaches. In
our experiments, CBD achieves the state of the art in the WMT'14
English-French, WMT'16 English-German and English-Romanian bilingual
unsupervised translation tasks, with 38.2, 30.1, and 36.3 BLEU respectively. It
also yields 1.5-3.3 BLEU improvements in IWSLT English-French and
English-German tasks. Through extensive experimental analyses, we show that CBD
is effective because it embraces data diversity while other similar variants do
not.
| 2,021 | Computation and Language |
CompGuessWhat?!: A Multi-task Evaluation Framework for Grounded Language
Learning | Approaches to Grounded Language Learning typically focus on a single
task-based final performance measure that may not depend on desirable
properties of the learned hidden representations, such as their ability to
predict salient attributes or to generalise to unseen situations. To remedy
this, we present GROLLA, an evaluation framework for Grounded Language Learning
with Attributes with three sub-tasks: 1) Goal-oriented evaluation; 2) Object
attribute prediction evaluation; and 3) Zero-shot evaluation. We also propose a
new dataset CompGuessWhat?! as an instance of this framework for evaluating the
quality of learned neural representations, in particular concerning attribute
grounding. To this end, we extend the original GuessWhat?! dataset by including
a semantic layer on top of the perceptual one. Specifically, we enrich the
VisualGenome scene graphs associated with the GuessWhat?! images with abstract
and situated attributes. By using diagnostic classifiers, we show that current
models learn representations that are not expressive enough to encode object
attributes (average F1 of 44.27). In addition, they do not learn strategies nor
representations that are robust enough to perform well when novel scenes or
objects are involved in gameplay (zero-shot best accuracy 50.06%).
| 2,020 | Computation and Language |
Improved acoustic word embeddings for zero-resource languages using
multilingual transfer | Acoustic word embeddings are fixed-dimensional representations of
variable-length speech segments. Such embeddings can form the basis for speech
search, indexing and discovery systems when conventional speech recognition is
not possible. In zero-resource settings where unlabelled speech is the only
available resource, we need a method that gives robust embeddings on an
arbitrary language. Here we explore multilingual transfer: we train a single
supervised embedding model on labelled data from multiple well-resourced
languages and then apply it to unseen zero-resource languages. We consider
three multilingual recurrent neural network (RNN) models: a classifier trained
on the joint vocabularies of all training languages; a Siamese RNN trained to
discriminate between same and different words from multiple languages; and a
correspondence autoencoder (CAE) RNN trained to reconstruct word pairs. In a
word discrimination task on six target languages, all of these models
outperform state-of-the-art unsupervised models trained on the zero-resource
languages themselves, giving relative improvements of more than 30% in average
precision. When using only a few training languages, the multilingual CAE
performs better, but with more training languages the other multilingual models
perform similarly. Using more training languages is generally beneficial, but
improvements are marginal on some languages. We present probing experiments
which show that the CAE encodes more phonetic, word duration, language identity
and speaker information than the other multilingual models.
| 2,021 | Computation and Language |
Emergent Multi-Agent Communication in the Deep Learning Era | The ability to cooperate through language is a defining feature of humans. As
the perceptual, motory and planning capabilities of deep artificial networks
increase, researchers are studying whether they also can develop a shared
language to interact. From a scientific perspective, understanding the
conditions under which language evolves in communities of deep agents and its
emergent features can shed light on human language evolution. From an applied
perspective, endowing deep networks with the ability to solve problems
interactively by communicating with each other and with us should make them
more flexible and useful in everyday life.
This article surveys representative recent language emergence studies from
both of these two angles.
| 2,020 | Computation and Language |
Self-Training for End-to-End Speech Translation | One of the main challenges for end-to-end speech translation is data
scarcity. We leverage pseudo-labels generated from unlabeled audio by a cascade
and an end-to-end speech translation model. This provides 8.3 and 5.7 BLEU
gains over a strong semi-supervised baseline on the MuST-C English-French and
English-German datasets, reaching state-of-the-art performance. The effect of
the quality of the pseudo-labels is investigated. Our approach is shown to be
more effective than simply pre-training the encoder on the speech recognition
task. Finally, we demonstrate the effectiveness of self-training by directly
generating pseudo-labels with an end-to-end model instead of a cascade model.
| 2,020 | Computation and Language |
Extracting a Knowledge Base of COVID-19 Events from Social Media | In this paper, we present a manually annotated corpus of 10,000 tweets
containing public reports of five COVID-19 events, including positive and
negative tests, deaths, denied access to testing, claimed cures and
preventions. We designed slot-filling questions for each event type and
annotated a total of 31 fine-grained slots, such as the location of events,
recent travel, and close contacts. We show that our corpus can support
fine-tuning BERT-based classifiers to automatically extract publicly reported
events and help track the spread of a new disease. We also demonstrate that, by
aggregating events extracted from millions of tweets, we achieve surprisingly
high precision when answering complex queries, such as "Which organizations
have employees that tested positive in Philadelphia?" We will release our
corpus (with user-information removed), automatic extraction models, and the
corresponding knowledge base to the research community.
| 2,022 | Computation and Language |
Meta Dialogue Policy Learning | Dialog policy determines the next-step actions for agents and hence is
central to a dialogue system. However, when migrated to novel domains with
little data, a policy model can fail to adapt due to insufficient interactions
with the new environment. We propose Deep Transferable Q-Network (DTQN) to
utilize shareable low-level signals between domains, such as dialogue acts and
slots. We decompose the state and action representation space into feature
subspaces corresponding to these low-level components to facilitate
cross-domain knowledge transfer. Furthermore, we embed DTQN in a meta-learning
framework and introduce Meta-DTQN with a dual-replay mechanism to enable
effective off-policy training and adaptation. In experiments, our model
outperforms baseline models in terms of both success rate and dialogue
efficiency on the multi-domain dialogue dataset MultiWOZ 2.0.
| 2,020 | Computation and Language |
M3P: Learning Universal Representations via Multitask Multilingual
Multimodal Pre-training | We present M3P, a Multitask Multilingual Multimodal Pre-trained model that
combines multilingual pre-training and multimodal pre-training into a unified
framework via multitask pre-training. Our goal is to learn universal
representations that can map objects occurring in different modalities or texts
expressed in different languages into a common semantic space. In addition, to
explicitly encourage fine-grained alignment between images and non-English
languages, we also propose Multimodal Code-switched Training (MCT) to combine
monolingual pre-training and multimodal pre-training via a code-switch
strategy. Experiments are performed on the multilingual image retrieval task
across two benchmark datasets, including MSCOCO and Multi30K. M3P can achieve
comparable results for English and new state-of-the-art results for non-English
languages.
| 2,021 | Computation and Language |
Experiments on Paraphrase Identification Using Quora Question Pairs
Dataset | We modeled the Quora question pairs dataset to identify similar questions.
The dataset that we use is provided by Quora. The task is binary
classification. We tried several methods and algorithms, taking a different
approach from previous works. For feature extraction, we used Bag-of-Words
features, including Count Vectorizer and unigram Term Frequency-Inverse
Document Frequency, for XGBoost and CatBoost. Furthermore, we also experimented
with a WordPiece tokenizer, which improves model performance significantly. We
achieved up to 97 percent accuracy. Code and Dataset.
| 2,020 | Computation and Language |
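A minimal sketch of the classical pipeline described above: unigram TF-IDF features over the concatenated question pair fed to XGBoost. Feature construction details and hyperparameters are assumptions, and the CatBoost and WordPiece variants are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from xgboost import XGBClassifier

pairs = [("How do I learn Python?", "What is the best way to learn Python?"),
         ("How do I learn Python?", "How do I cook rice?"),
         ("What causes rain?", "Why does it rain?"),
         ("What causes rain?", "What causes earthquakes?")]
labels = [1, 0, 1, 0]  # 1 = duplicate question pair

# Each pair is represented as the two questions joined into one string;
# real systems often add pair-level features such as word overlap or length difference.
texts = [q1 + " " + q2 for q1, q2 in pairs]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)),
                    XGBClassifier(n_estimators=50, max_depth=3))
clf.fit(texts, labels)
print(clf.predict(["How can I learn Python? What is a good way to learn Python?"]))
```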
Seq2Seq AI Chatbot with Attention Mechanism | Intelligent Conversational Agent development using Artificial Intelligence or
Machine Learning techniques is an interesting problem in the field of Natural
Language Processing. With the rise of deep learning, earlier conversational
models were quickly replaced by end-to-end trainable neural networks.
| 2,020 | Computation and Language |
Enhanced back-translation for low resource neural machine translation
using self-training | Improving neural machine translation (NMT) models using the back-translations
of the monolingual target data (synthetic parallel data) is currently the
state-of-the-art approach for training improved translation systems. The
quality of the backward system - which is trained on the available parallel
data and used for the back-translation - has been shown in many studies to
affect the performance of the final NMT model. In low resource conditions, the
available parallel data is usually not enough to train a backward model that
can produce the qualitative synthetic data needed to train a standard
translation model. This work proposes a self-training strategy where the output
of the backward model is used to improve the model itself through the forward
translation technique. The technique was shown to improve baseline low resource
IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation
models by 11.06 and 1.5 BLEU points, respectively. The synthetic data generated
by the improved English-German backward model was used to train a forward model
that outperformed another forward model trained using standard back-translation by
2.7 BLEU.
| 2,021 | Computation and Language |
CiwGAN and fiwGAN: Encoding information in acoustic data to model
lexical learning with Generative Adversarial Networks | How can deep neural networks encode information that corresponds to words in
human speech into raw acoustic data? This paper proposes two neural network
architectures for modeling unsupervised lexical learning from raw acoustic
inputs, ciwGAN (Categorical InfoWaveGAN) and fiwGAN (Featural InfoWaveGAN),
that combine a Deep Convolutional GAN architecture for audio data (WaveGAN;
arXiv:1705.07904) with an information theoretic extension of GAN -- InfoGAN
(arXiv:1606.03657), and propose a new latent space structure that can model
featural learning simultaneously with a higher level classification and allows
for a very low-dimension vector representation of lexical items. Lexical
learning is modeled as emergent from an architecture that forces a deep neural
network to output data such that unique information is retrievable from its
acoustic outputs. The networks trained on lexical items from TIMIT learn to
encode unique information corresponding to lexical items in the form of
categorical variables in their latent space. By manipulating these variables,
the network outputs specific lexical items. The network occasionally outputs
innovative lexical items that violate training data, but are linguistically
interpretable and highly informative for cognitive modeling and neural network
interpretability. Innovative outputs suggest that phonetic and phonological
representations learned by the network can be productively recombined and
directly paralleled to productivity in human speech: a fiwGAN network trained
on `suit' and `dark' outputs innovative `start', even though it never saw
`start' or even a [st] sequence in the training data. We also argue that
setting latent featural codes to values well beyond training range results in
almost categorical generation of prototypical lexical items and reveals
underlying values of each latent code.
| 2,021 | Computation and Language |
Personalizing Grammatical Error Correction: Adaptation to Proficiency
Level and L1 | Grammar error correction (GEC) systems have become ubiquitous in a variety of
software applications, and have started to approach human-level performance for
some datasets. However, very little is known about how to efficiently
personalize these systems to the user's characteristics, such as their
proficiency level and first language, or to emerging domains of text. We
present the first results on adapting a general-purpose neural GEC system to
both the proficiency level and the first language of a writer, using only a few
thousand annotated sentences. Our study is the broadest of its kind, covering
five proficiency levels and twelve different languages, and comparing three
different adaptation scenarios: adapting to the proficiency level only, to the
first language only, or to both aspects simultaneously. We show that tailoring
to both scenarios achieves the largest performance improvement (3.6 F0.5)
relative to a strong baseline.
| 2,019 | Computation and Language |
End-to-End Speech-Translation with Knowledge Distillation: FBK@IWSLT2020 | This paper describes FBK's participation in the IWSLT 2020 offline speech
translation (ST) task. The task evaluates systems' ability to translate English
TED talks audio into German texts. The test talks are provided in two versions:
one contains the data already segmented with automatic tools and the other is
the raw data without any segmentation. Participants can decide whether to work
on custom segmentation or not. We used the provided segmentation. Our system is
an end-to-end model based on an adaptation of the Transformer for speech data.
Its training process is the main focus of this paper and it is based on: i)
transfer learning (ASR pretraining and knowledge distillation), ii) data
augmentation (SpecAugment, time stretch and synthetic data), iii) combining
synthetic and real data marked as different domains, and iv) multi-task
learning using the CTC loss. Finally, after the training with word-level
knowledge distillation is complete, our ST models are fine-tuned using label
smoothed cross entropy. Our best model scored 29 BLEU on the MuST-C En-De test
set, which is an excellent result compared to recent papers, and 23.7 BLEU on
the same data segmented with VAD, showing the need for researching solutions
addressing this specific data condition.
| 2,020 | Computation and Language |
Linguists Who Use Probabilistic Models Love Them: Quantification in
Functional Distributional Semantics | Functional Distributional Semantics provides a computationally tractable
framework for learning truth-conditional semantics from a corpus. Previous work
in this framework has provided a probabilistic version of first-order logic,
recasting quantification as Bayesian inference. In this paper, I show how the
previous formulation gives trivial truth values when a precise quantifier is
used with vague predicates. I propose an improved account, avoiding this
problem by treating a vague predicate as a distribution over precise
predicates. I connect this account to recent work in the Rational Speech Acts
framework on modelling generic quantification, and I extend this to modelling
donkey sentences. Finally, I explain how the generic quantifier can be both
pragmatically complex and yet computationally simpler than precise quantifiers.
| 2,020 | Computation and Language |
Syntactic Search by Example | We present a system that allows a user to search a large linguistically
annotated corpus using syntactic patterns over dependency graphs. In contrast
to previous attempts to this effect, we introduce a light-weight query language
that does not require the user to know the details of the underlying syntactic
representations, and instead to query the corpus by providing an example
sentence coupled with simple markup. Search is performed at an interactive
speed due to an efficient linguistic graph-indexing and retrieval engine. This
allows for rapid exploration, development and refinement of syntax-based
queries. We demonstrate the system using queries over two corpora: the English
wikipedia, and a collection of English pubmed abstracts. A demo of the
wikipedia system is available at: https://allenai.github.io/spike
| 2,020 | Computation and Language |
Response to LiveBot: Generating Live Video Comments Based on Visual and
Textual Contexts | Live video commenting systems are an emerging feature of online video sites.
Recently, the Chinese video sharing platform Bilibili has popularised a novel
captioning system where user comments are displayed as streams of moving
subtitles overlaid on the video playback screen and broadcast to all viewers in
real-time. LiveBot was recently introduced as a novel Automatic Live Video
Commenting (ALVC) application. This enables the automatic generation of live
video comments from both the existing video stream and existing viewers
comments. In seeking to reproduce the baseline results reported in the original
Livebot paper, we found differences between the reproduced results using the
project codebase and the numbers reported in the paper. Further examination of
this situation suggests that this may be caused by a number of small issues in
the project code, including a non-obvious overlap between the training and test
sets. In this paper, we study these discrepancies in detail and propose an
alternative baseline implementation as a reference for other researchers in
this field.
| 2,020 | Computation and Language |
The SOFC-Exp Corpus and Neural Approaches to Information Extraction in
the Materials Science Domain | This paper presents a new challenging information extraction task in the
domain of materials science. We develop an annotation scheme for marking
information on experiments related to solid oxide fuel cells in scientific
publications, such as involved materials and measurement conditions. With this
paper, we publish our annotation guidelines, as well as our SOFC-Exp corpus
consisting of 45 open-access scholarly articles annotated by domain experts. A
corpus and an inter-annotator agreement study demonstrate the complexity of the
suggested named entity recognition and slot filling tasks as well as high
annotation quality. We also present strong neural-network based models for a
variety of tasks that can be addressed on the basis of our new data set. On all
tasks, using BERT embeddings leads to large performance gains, but with
increasing task complexity, adding a recurrent neural network on top seems
beneficial. Our models will serve as competitive baselines in future work, and
analysis of their performance highlights difficult cases when modeling the data
and suggests promising research directions.
| 2,020 | Computation and Language |
NewB: 200,000+ Sentences for Political Bias Detection | We present the Newspaper Bias Dataset (NewB), a text corpus of more than
200,000 sentences from eleven news sources regarding Donald Trump. While
previous datasets have labeled sentences as either liberal or conservative,
NewB covers the political views of eleven popular media sources, capturing more
nuanced political viewpoints than a traditional binary classification system
does. We train two state-of-the-art deep learning models to predict the news
source of a given sentence from eleven newspapers and find that a recurrent
neural network achieved top-1, top-3, and top-5 accuracies of 33.3%, 61.4%, and
77.6%, respectively, significantly outperforming a baseline logistic regression
model's accuracies of 18.3%, 42.6%, and 60.8%. Using the news source label of
sentences, we analyze the top n-grams with our model to gain meaningful insight
into the portrayal of Trump by media sources. We hope that the public release of
our dataset will encourage further research in using natural language
processing to analyze more complex political biases.
Our dataset is posted at https://github.com/JerryWeiAI/NewB .
| 2,023 | Computation and Language |
SOLO: A Corpus of Tweets for Examining the State of Being Alone | The state of being alone can have a substantial impact on our lives, though
experiences with time alone diverge significantly among individuals.
Psychologists distinguish between the concept of solitude, a positive state of
voluntary aloneness, and the concept of loneliness, a negative state of
dissatisfaction with the quality of one's social interactions. Here, for the
first time, we conduct a large-scale computational analysis to explore how the
terms associated with the state of being alone are used in online language. We
present SOLO (State of Being Alone), a corpus of over 4 million tweets
collected with query terms 'solitude', 'lonely', and 'loneliness'. We use SOLO
to analyze the language and emotions associated with the state of being alone.
We show that the term 'solitude' tends to co-occur with more positive,
high-dominance words (e.g., enjoy, bliss) while the terms 'lonely' and
'loneliness' frequently co-occur with negative, low-dominance words (e.g.,
scared, depressed), which confirms the conceptual distinctions made in
psychology. We also show that women are more likely to report on negative
feelings of being lonely as compared to men, and there are more teenagers among
the tweeters that use the word 'lonely' than among the tweeters that use the
word 'solitude'.
| 2,020 | Computation and Language |
Human or Machine: Automating Human Likeliness Evaluation of NLG Texts | Automatic evaluation of various text quality criteria produced by data-driven
intelligent methods is very common and useful because it is cheap, fast, and
usually yields repeatable results. In this paper, we present an attempt to
automate the human likeliness evaluation of the output text samples coming from
natural language generation methods used to solve several tasks. We propose to
use a human likeliness score that shows the percentage of the output samples
from a method that look as if they were written by a human. Instead of having
human participants label or rate those samples, we completely automate the
process by using a discrimination procedure based on large pretrained language
models and their probability distributions. As a follow-up, we plan to perform an
empirical analysis of human-written and machine-generated texts to find the
optimal setup of this evaluation approach. A validation procedure involving
human participants will also check how the automatic evaluation correlates with
human judgments.
| 2,020 | Computation and Language |
Cross-lingual Transfer Learning for COVID-19 Outbreak Alignment | The spread of COVID-19 has become a significant and troubling aspect of
society in 2020. With millions of cases reported across countries, new
outbreaks have occurred and followed patterns of previously affected areas.
Many disease detection models do not incorporate the wealth of social media
data that can be utilized for modeling and predicting its spread. In this case,
it is useful to ask, can we utilize this knowledge in one country to model the
outbreak in another? To answer this, we propose the task of cross-lingual
transfer learning for epidemiological alignment. Utilizing both macro and micro
text features, we train on Italy's early COVID-19 outbreak through Twitter and
transfer to several other countries. Our experiments show strong results with
up to 0.85 Spearman correlation in cross-country predictions.
| 2,020 | Computation and Language |
Evaluating Text Coherence at Sentence and Paragraph Levels | In this paper, to evaluate text coherence, we propose the paragraph ordering
task in addition to conducting sentence ordering. We collected four distinct
corpora from different domains on which we investigate the adaptation of
existing sentence ordering methods to a paragraph ordering task. We also
compare the learnability and robustness of existing models by artificially
creating mini datasets and noisy datasets respectively and verifying the
efficiency of established models under these circumstances. Furthermore, we
carry out human evaluation on the rearranged passages from two competitive
models and confirm that WLCS-l is a better metric, showing significantly
higher correlations with human ratings than tau, the most prevalent metric used
before. Results from these evaluations show that except for certain extreme
conditions, the recurrent graph neural network-based model is an optimal choice
for coherence modeling.
| 2,020 | Computation and Language |
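For reference, the tau metric mentioned above is typically Kendall's tau between the gold and predicted orderings; a short sketch is below. WLCS-l, the metric the authors favour, is not reproduced here because its exact definition is given in the paper.

```python
from scipy.stats import kendalltau

gold_order = [0, 1, 2, 3, 4]       # original sentence positions
predicted_order = [0, 2, 1, 3, 4]  # model output with one adjacent swap
tau, p_value = kendalltau(gold_order, predicted_order)
print(f"Kendall tau = {tau:.2f}")  # 0.80: high agreement despite the swap
```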
"To Target or Not to Target": Identification and Analysis of Abusive
Text Using Ensemble of Classifiers | With rising concern around abusive and hateful behavior on social media
platforms, we present an ensemble learning method to identify and analyze the
linguistic properties of such content. Our stacked ensemble comprises three
machine learning models that capture different aspects of language and provide
diverse and coherent insights about inappropriate language. The proposed
approach provides comparable results to the existing state-of-the-art on the
Twitter Abusive Behavior dataset (Founta et al. 2018) without using any user or
network-related information; solely relying on textual properties. We believe
that the presented insights and discussion of shortcomings of current
approaches will highlight potential directions for future research.
| 2,020 | Computation and Language |