Titles | Abstracts | Years | Categories |
---|---|---|---|
Who did They Respond to? Conversation Structure Modeling using Masked
Hierarchical Transformer | Conversation structure is useful for both understanding the nature of
conversation dynamics and for providing features for many downstream
applications such as summarization of conversations. In this work, we define
the problem of conversation structure modeling as identifying the parent
utterance(s) to which each utterance in the conversation responds. Previous
work usually took a pair of utterances to decide whether one utterance is the
parent of the other. We believe the entire ancestral history is an important
information source for making accurate predictions. Therefore, we design
a novel masking mechanism to guide the ancestor flow, and leverage the
transformer model to aggregate all ancestors to predict parent utterances. Our
experiments are performed on the Reddit dataset (Zhang, Culbertson, and
Paritosh 2017) and the Ubuntu IRC dataset (Kummerfeld et al. 2019). In
addition, we also report experiments on a new larger corpus from the Reddit
platform and release this dataset. We show that the proposed model, that takes
into account the ancestral history of the conversation, significantly
outperforms several strong baselines, including the BERT model, on all datasets.
| 2,019 | Computation and Language |
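A hypothetical sketch (not the authors' code) of the ancestor idea described above: given each utterance's parent index, one can build a boolean "ancestor mask" that restricts attention to the utterance's own reply chain, which is the kind of mask a masked hierarchical transformer layer would consume. Names and shapes are illustrative assumptions.

```python
# Build an ancestor-based attention mask from parent pointers.
# parents[i] is the index of utterance i's parent (-1 for the thread root).
import numpy as np

def ancestor_mask(parents):
    n = len(parents)
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        j = i
        while j != -1:          # walk up the reply chain to the root
            mask[i, j] = True   # utterance i may attend to ancestor j (and itself)
            j = parents[j]
    return mask

# Example thread: 0 is the root, 1 and 2 reply to 0, 3 replies to 2.
print(ancestor_mask([-1, 0, 0, 2]).astype(int))
# [[1 0 0 0]
#  [1 1 0 0]
#  [1 0 1 0]
#  [1 0 1 1]]
```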
JParaCrawl: A Large Scale Web-Based English-Japanese Parallel Corpus | Recent machine translation algorithms mainly rely on parallel corpora.
However, since the availability of parallel corpora remains limited, only some
resource-rich language pairs can benefit from them. We constructed a parallel
corpus for English-Japanese, for which the amount of publicly available
parallel corpora is still limited. We constructed the parallel corpus by
broadly crawling the web and automatically aligning parallel sentences. Our
collected corpus, called JParaCrawl, amassed over 8.7 million sentence pairs.
We show how it includes a broader range of domains and how a neural machine
translation model trained with it works as a good pre-trained model for
fine-tuning on specific domains. The pre-training and fine-tuning approach
achieved performance comparable to, or better than, training a model from
scratch, while reducing the training time. Additionally, we trained the model
on an in-domain dataset combined with JParaCrawl and show that this combination
achieves the best performance. JParaCrawl and the pre-trained models are freely available online
for research purposes.
| 2,020 | Computation and Language |
Non-autoregressive Transformer by Position Learning | Non-autoregressive models are promising on various text generation tasks.
Previous work hardly considers explicitly modeling the positions of generated
words. However, position modeling is an essential problem in non-autoregressive
text generation. In this study, we propose PNAT, which incorporates positions
as a latent variable into the text generative process. Experimental results
show that PNAT achieves top results on machine translation and paraphrase
generation tasks, outperforming several strong baselines.
| 2,019 | Computation and Language |
Conversational implicatures in English dialogue: Annotated dataset | Human dialogue often contains utterances whose meanings are entirely different
from the sentences used, yet are clearly understood by the interlocutors. But in
human-computer interactions, the machine fails to understand the implicated
meaning unless it is trained with a dataset containing the implicated meaning
of an utterance along with the utterance and the context in which it is
uttered. In linguistic terms, conversational implicatures are the meanings of
the speaker's utterance that are not part of what is explicitly said. In this
paper, we introduce a dataset of dialogue snippets with three constituents,
which are the context, the utterance, and the implicated meanings. These
implicated meanings are the conversational implicatures. The utterances are
collected by transcribing from listening comprehension sections of English
tests like TOEFL (Test of English as a Foreign Language) as well as scraping
dialogues from movie scripts available on IMSDb (Internet Movie Script
Database). The utterances are manually annotated with implicatures.
| 2,019 | Computation and Language |
hauWE: Hausa Words Embedding for Natural Language Processing | Word embeddings (distributed word vector representations) have become an
essential component of many natural language processing (NLP) tasks such as
machine translation, sentiment analysis, word analogy, named entity recognition
and word similarity. Despite this, the only work that provides word vectors for
the Hausa language is that of Bojanowski et al. [1], trained using fastText and
consisting of only a few word vectors. This work presents word embedding
models using Word2Vec's Continuous Bag of Words (CBoW) and Skip Gram (SG)
models. The models, hauWE (Hausa Words Embedding), are bigger and better than
the only previous model, making them more useful in NLP tasks. To compare the
models, they were used to predict the 10 most similar words to 30 randomly
selected Hausa words. hauWE CBoW's 88.7% and hauWE SG's 79.3% prediction
accuracy greatly outperformed Bojanowski et al. [1]'s 22.3%.
| 2,020 | Computation and Language |
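A minimal sketch of training CBoW and Skip-gram embeddings with gensim's Word2Vec, in the spirit of the hauWE models above. The corpus path and hyperparameters are illustrative assumptions, not the authors' settings.

```python
from gensim.models import Word2Vec

# Each line of the (hypothetical) corpus file is a tokenized Hausa sentence.
sentences = [line.split() for line in open("hausa_corpus.txt", encoding="utf-8")]

cbow = Word2Vec(sentences, vector_size=300, window=5, min_count=5, sg=0, workers=4)
skipgram = Word2Vec(sentences, vector_size=300, window=5, min_count=5, sg=1, workers=4)

# Query the 10 most similar words to a given word, as in the paper's evaluation.
print(cbow.wv.most_similar("ruwa", topn=10))
```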
Learning to Reuse Translations: Guiding Neural Machine Translation with
Examples | In this paper, we study the problem of enabling neural machine translation
(NMT) to reuse previous translations from similar examples in target
prediction. Distinguishing reusable translations from noisy segments and
learning to reuse them in NMT are non-trivial. To solve these challenges, we
propose an Example-Guided NMT (EGNMT) framework with two models: (1) a
noise-masked encoder model that masks out noisy words according to word
alignments and encodes the noise-masked sentences with an additional example
encoder and (2) an auxiliary decoder model that predicts reusable words via an
auxiliary decoder sharing parameters with the primary decoder. We define and
implement the two models with the state-of-the-art Transformer. Experiments
show that the noise-masked encoder model allows NMT to learn useful information
from examples with low fuzzy match scores (FMS) while the auxiliary decoder
model is good for high-FMS examples. More experiments on Chinese-English,
English-German and English-Spanish translation demonstrate that the combination
of the two EGNMT models can achieve improvements of up to +9 BLEU points over
the baseline system and +7 BLEU points over a two-encoder Transformer.
| 2,019 | Computation and Language |
End-to-End Trainable Non-Collaborative Dialog System | End-to-end task-oriented dialog models have achieved promising performance on
collaborative tasks where users willingly coordinate with the system to
complete a given task. While in non-collaborative settings, for example,
negotiation and persuasion, users and systems do not share a common goal. As a
result, compared to collaborate tasks, people use social content to build
rapport and trust in these non-collaborative settings in order to advance their
goals. To handle social content, we introduce a hierarchical intent annotation
scheme, which can be generalized to different non-collaborative dialog tasks.
Building upon TransferTransfo (Wolf et al. 2019), we propose an end-to-end
neural network model to generate diverse coherent responses. Our model utilizes
intent and semantic slots as the intermediate sentence representation to guide
the generation process. In addition, we design a filter to select appropriate
responses based on whether these intermediate representations fit the designed
task and conversation constraints. Our non-collaborative dialog model guides
users to complete the task while simultaneously keeping them engaged. We test our
approach on our newly proposed ANTISCAM dataset and an existing
PERSUASIONFORGOOD dataset. Both automatic and human evaluations suggest that
our model outperforms multiple baselines in these two non-collaborative tasks.
| 2,019 | Computation and Language |
Chinese Spelling Error Detection Using a Fusion Lattice LSTM | Spelling error detection serves as a crucial preprocessing in many natural
language processing applications. Due to the characteristics of the Chinese
language, Chinese spelling error detection is more challenging than error
detection in English. Existing methods are mainly under a pipeline framework,
which artificially divides error detection process into two steps. Thus, these
methods bring error propagation and cannot always work well due to the
complexity of the language environment. Besides, existing methods only adopt
character or word information, and ignore the positive effect of fusing
character, word, and pinyin information together. We propose an LF-LSTM-CRF model,
which is an extension of the LSTM-CRF with word lattices and
character-pinyin-fusion inputs. Our model takes advantage of the end-to-end
framework to detect errors as a whole process, and dynamically integrates
character, word and pinyin information. Experiments on the SIGHAN data show
that our LF-LSTM-CRF consistently outperforms existing methods with similar
external resources, and confirm the feasibility of adopting the end-to-end
framework and the benefit of integrating character, word, and pinyin
information.
| 2,019 | Computation and Language |
Corpus Wide Argument Mining -- a Working Solution | One of the main tasks in argument mining is the retrieval of argumentative
content pertaining to a given topic. Most previous work addressed this task by
retrieving a relatively small number of relevant documents as the initial
source for such content. This line of research yielded moderate success, which
is of limited use in a real-world system. Furthermore, for such a system to
yield a comprehensive set of relevant arguments, over a wide range of topics,
it requires leveraging a large and diverse corpus in an appropriate manner.
Here we present a first end-to-end high-precision, corpus-wide argument mining
system. This is made possible by combining sentence-level queries over an
appropriate indexing of a very large corpus of newspaper articles, with an
iterative annotation scheme. This scheme addresses the inherent label bias in
the data and pinpoints the regions of the sample space whose manual labeling is
required to obtain high precision among top-ranked candidates.
| 2,020 | Computation and Language |
Unsupervised Domain Adaptation of Language Models for Reading
Comprehension | This study tackles unsupervised domain adaptation of reading comprehension
(UDARC). Reading comprehension (RC) is a task to learn the capability for
question answering with textual sources. State-of-the-art models on RC still do
not have general linguistic intelligence; i.e., their accuracy worsens for
out-domain datasets that are not used in training. We hypothesize that this
discrepancy is caused by a lack of the language modeling (LM) capability for
the out-domain. The UDARC task allows models to use supervised RC training data
in the source domain and only unlabeled passages in the target domain. To solve
the UDARC problem, we provide two domain adaptation models. The first one
learns the out-domain LM and in-domain RC task sequentially. The second one is
the proposed model that uses a multi-task learning approach of LM and RC. The
models can retain both the RC capability acquired from the supervised data in
the source domain and the LM capability from the unlabeled data in the target
domain. We evaluated the models on UDARC with five datasets in different
domains. The models outperformed the model without domain adaptation. In
particular, the proposed model yielded an improvement of 4.3/4.2 points in
EM/F1 in an unseen biomedical domain.
| 2,020 | Computation and Language |
Filling Conversation Ellipsis for Better Social Dialog Understanding | The phenomenon of ellipsis is prevalent in social conversations. Ellipsis
increases the difficulty of a series of downstream language understanding
tasks, such as dialog act prediction and semantic role labeling. We propose to
resolve ellipsis through automatic sentence completion to improve language
understanding. However, automatic ellipsis completion can result in output
which does not accurately reflect user intent. To address this issue, we
propose a method which considers both the original utterance that has ellipsis
and the automatically completed utterance in dialog act and semantic role
labeling tasks. Specifically, we first complete user utterances to resolve
ellipsis using an end-to-end pointer network model. We then train a prediction
model using both utterances containing ellipsis and our automatically completed
utterances. Finally, we combine the prediction results from these two
utterances using a selection model that is guided by expert knowledge. Our
approach improves dialog act prediction and semantic role labeling by 1.3% and
2.5% in F1 score respectively in social conversations. We also present an
open-domain human-machine conversation dataset with manually completed user
utterances and annotated semantic role labeling after manual completion.
| 2,019 | Computation and Language |
Financial Event Extraction Using Wikipedia-Based Weak Supervision | Extraction of financial and economic events from text has previously been
done mostly using rule-based methods, with more recent works employing machine
learning techniques. This work is in line with this latter approach, leveraging
relevant Wikipedia sections to extract weak labels for sentences describing
economic events. Whereas previous weakly supervised approaches required a
knowledge-base of such events, or corresponding financial figures, our approach
requires no such additional data, and can be employed to extract economic
events related to companies which are not even mentioned in the training data.
| 2,022 | Computation and Language |
A Causal Inference Method for Reducing Gender Bias in Word Embedding
Relations | Word embedding has become essential for natural language processing as it
boosts the empirical performance of various tasks. However, recent research
has discovered that gender bias is incorporated in neural word embeddings, and
downstream tasks that rely on these biased word vectors also produce
gender-biased results. While some word-embedding gender-debiasing methods have
been developed, these methods mainly focus on reducing gender bias associated
with the gender direction and fail to reduce the gender bias present in word
embedding relations. In this paper, we design a simple causal approach for
mitigating gender bias in word vector relations by utilizing the statistical
dependency between gender-definition word embeddings and gender-biased word
embeddings. Our method attains state-of-the-art results on gender-debiasing
tasks, lexical- and sentence-level evaluation tasks, and downstream coreference
resolution tasks.
| 2,019 | Computation and Language |
Outbound Translation User Interface Ptakopet: A Pilot Study | It is not uncommon for Internet users to have to produce a text in a foreign
language they have very little knowledge of, leaving them unable to verify the
translation quality. We call this task "outbound translation" and explore it by
introducing an open-source modular system Ptakop\v{e}t. Its main purpose is to
inspect human interaction with MT systems enhanced with additional subsystems,
such as backward translation and quality estimation. We follow up with an
experiment with (Czech) human annotators tasked with producing questions in a
language they do not speak (German), with the help of Ptakop\v{e}t. We focus on
three real-world use cases (communication with IT support, describing
administrative issues and asking encyclopedic questions) from which we gain
insight into different strategies users take when faced with outbound
translation tasks. Round-trip translation is known to be unreliable for
evaluating MT systems, but our experimental evaluation shows that it works
very well for users, at least on MT systems of mid-range quality.
| 2,020 | Computation and Language |
Towards robust word embeddings for noisy texts | Research on word embeddings has mainly focused on improving their performance
on standard corpora, disregarding the difficulties posed by noisy texts in the
form of tweets and other types of non-standard writing from social media. In
this work, we propose a simple extension to the skipgram model in which we
introduce the concept of bridge-words, which are artificial words added to the
model to strengthen the similarity between standard words and their noisy
variants. Our new embeddings outperform baseline models on noisy texts on a
wide range of evaluation tasks, both intrinsic and extrinsic, while retaining a
good performance on standard texts. To the best of our knowledge, this is the
first explicit approach to dealing with this type of noisy text at the word
embedding level that goes beyond the support for out-of-vocabulary words.
| 2,020 | Computation and Language |
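An illustrative sketch (an assumption about the general mechanism, not the authors' implementation) of the bridge-word idea: for each known (standard word, noisy variant) pair, a shared artificial bridge token is inserted next to every occurrence, so skipgram training pulls the standard form and its noisy variants toward the same neighbourhood. The toy lexicon and settings are hypothetical.

```python
from gensim.models import Word2Vec

variants = {"tonight": ["2nite", "tonite"], "you": ["u"]}  # toy normalization lexicon
bridge_of = {}
for std, noisy_list in variants.items():
    for w in [std] + noisy_list:
        bridge_of[w] = f"BRIDGE_{std}"

def add_bridges(tokens):
    out = []
    for tok in tokens:
        out.append(tok)
        if tok in bridge_of:
            out.append(bridge_of[tok])  # artificial bridge word shares the context
    return out

corpus = [["see", "u", "2nite"], ["see", "you", "tonight"]]
model = Word2Vec([add_bridges(s) for s in corpus],
                 vector_size=50, window=3, min_count=1, sg=1)
```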
SWift -- A SignWriting improved fast transcriber | We present SWift (SignWriting improved fast transcriber), an advanced editor
for computer-aided writing and transcribing using SignWriting (SW). SW is
devised to allow deaf people and linguists alike to exploit an easy-to-grasp
written form of (any) sign language. Similarly, SWift has been developed for
everyone who masters SW, and is not exclusively deaf-oriented. Using SWift, it
is possible to compose and save any sign, using elementary components called
glyphs. A guided procedure facilitates the composition process. SWift is aimed
at helping to break down the "electronic" barriers that keep the deaf community
away from Information and Communication Technology (ICT). The editor has been
developed modularly and can be integrated wherever the use of SW, as an
alternative to written vocal language, may be advisable.
| 2,012 | Computation and Language |
Discovering topics with neural topic models built from PLSA assumptions | In this paper we present a model for unsupervised topic discovery in text
corpora. The proposed model uses document, word, and topic lookup-table
embeddings as neural network parameters to build probabilities of words
given topics and probabilities of topics given documents. These probabilities
are used to recover, by marginalization, the probabilities of words given documents.
For very large corpora, where the number of documents can be in the order of
billions, using a neural auto-encoder based document embedding is more scalable
than using a lookup-table embedding as classically done. We thus extended the
lookup-based document embedding model to a continuous auto-encoder based model.
Our models are trained using probabilistic latent semantic analysis (PLSA)
assumptions. We evaluated our models on six datasets with a rich variety of
contents. Conducted experiments demonstrate that the proposed neural topic
models are very effective in capturing relevant topics. Furthermore,
considering perplexity metric, conducted evaluation benchmarks show that our
topic models outperform the latent Dirichlet allocation (LDA) model, which is
classically used to address topic discovery tasks.
| 2,019 | Computation and Language |
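A minimal sketch of the PLSA-style factorization described above: word, topic, and document lookup-table embeddings parameterize p(w|t) and p(t|d), and p(w|d) is recovered by marginalizing over topics. Dimensions and names are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

V, K, D, E = 1000, 20, 500, 64          # vocab, topics, documents, embedding size
word_emb = torch.nn.Embedding(V, E)
topic_emb = torch.nn.Embedding(K, E)
doc_emb = torch.nn.Embedding(D, E)

def p_word_given_doc(doc_ids):
    p_w_t = F.softmax(topic_emb.weight @ word_emb.weight.t(), dim=-1)   # (K, V): p(w|t)
    p_t_d = F.softmax(doc_emb(doc_ids) @ topic_emb.weight.t(), dim=-1)  # (B, K): p(t|d)
    return p_t_d @ p_w_t                                                # (B, V): p(w|d)

probs = p_word_given_doc(torch.tensor([0, 1]))
print(probs.shape, probs.sum(dim=-1))  # each row sums to 1
```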
Korean-to-Chinese Machine Translation using Chinese Character as Pivot
Clue | Korean-Chinese is a low resource language pair, but Korean and Chinese have a
lot in common in terms of vocabulary. Sino-Korean words, which can be converted
into corresponding Chinese characters, account for more than fifty percent of the
entire Korean vocabulary. Motivated by this, we propose a simple linguistically
motivated solution to improve the performance of the Korean-to-Chinese neural
machine translation model by using their common vocabulary. We adopt Chinese
characters as a translation pivot by converting Sino-Korean words in Korean
sentences to Chinese characters and then train the machine translation model
with the converted Korean sentences as source sentences. The experimental
results on Korean-to-Chinese translation demonstrate that the models with the
proposed method improve translation quality up to 1.5 BLEU points in comparison
to the baseline models.
| 2,019 | Computation and Language |
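An illustrative preprocessing sketch of the pivot step described above: Sino-Korean words in the source sentence are replaced by their corresponding Chinese characters before the sentence is fed to the Korean-to-Chinese NMT model, so that source and target share more vocabulary. The toy dictionary and tokenization are hypothetical, not the authors' resources.

```python
sino_korean_to_hanja = {"학교": "學校", "도서관": "圖書館"}  # toy mapping

def pivot_convert(korean_tokens):
    # Replace Sino-Korean tokens with Chinese characters; leave other tokens unchanged.
    return [sino_korean_to_hanja.get(tok, tok) for tok in korean_tokens]

print(pivot_convert(["나는", "학교", "도서관", "에", "갔다"]))
# ['나는', '學校', '圖書館', '에', '갔다']
```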
Emotional Neural Language Generation Grounded in Situational Contexts | Emotional language generation is one of the keys to human-like artificial
intelligence. Humans use different types of emotions depending on the situation
of the conversation. Emotions also play an important role in mediating the
engagement level with conversational partners. However, current conversational
agents do not effectively account for emotional content in the language
generation process. To address this problem, we develop a language modeling
approach that generates affective content when the dialogue is situated in a
given context. We use the recently released Empathetic-Dialogues corpus to
build our models. Through detailed experiments, we find that our approach
outperforms the state-of-the-art method on the perplexity metric by about 5
points and achieves a higher BLEU metric score.
| 2,019 | Computation and Language |
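A generic, hedged sketch of conditioning a pretrained language model on an emotion label and situational context by prepending them to the input. This is an assumption about the general approach, not necessarily the paper's exact model, tokens, or training setup.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

emotion = "joyful"
situation = "I finally got the job I interviewed for last week."
# Hypothetical control-prefix format; the markers are plain text, not special tokens.
prompt = f"<emotion> {emotion} <situation> {situation} <response>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9,
                         pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```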
Examining the Role of Clickbait Headlines to Engage Readers with
Reliable Health-related Information | Clickbait headlines are frequently used to attract readers to read articles.
Although this headline type has turned out to be a technique to engage readers
with misleading items, it is still unknown whether the technique can be used to
attract readers to reliable pieces. This study takes the opportunity to test
its efficacy to engage readers with reliable health articles. A set of online
surveys would be conducted to test readers' engagement with, and perception
of, clickbait headlines paired with reliable articles. After that, we would design
an automated system to generate clickbait headlines to maximize user
engagement.
| 2,019 | Computation and Language |
Learning to Learn Words from Visual Scenes | Language acquisition is the process of learning words from the surrounding
scene. We introduce a meta-learning framework that learns how to learn word
representations from unconstrained scenes. We leverage the natural
compositional structure of language to create training episodes that cause a
meta-learner to learn strong policies for language acquisition. Experiments on
two datasets show that our approach is able to more rapidly acquire novel words
as well as more robustly generalize to unseen compositions, significantly
outperforming established baselines. A key advantage of our approach is that it
is data efficient, allowing representations to be learned from scratch without
language pre-training. Visualizations and analysis suggest visual information
helps our approach learn a rich cross-modal representation from minimal
examples. Project webpage is available at https://expert.cs.columbia.edu/
| 2,020 | Computation and Language |
Few-Shot Knowledge Graph Completion | Knowledge graphs (KGs) serve as useful resources for various natural language
processing applications. Previous KG completion approaches require a large
number of training instances (i.e., head-tail entity pairs) for every relation.
In reality, for most relations, only very few entity pairs are
available. Existing work on one-shot learning limits generalizability
to few-shot scenarios and does not fully use the supervisory information;
moreover, few-shot KG completion has not been well studied yet. In this work, we
propose a novel few-shot relation learning model (FSRL) that aims at
discovering facts of new relations with few-shot references. FSRL can
effectively capture knowledge from heterogeneous graph structure, aggregate
representations of few-shot references, and match similar entity pairs against the
reference set for every relation. Extensive experiments on two public datasets
demonstrate that FSRL outperforms the state-of-the-art.
| 2,019 | Computation and Language |
Tracing State-Level Obesity Prevalence from Sentence Embeddings of
Tweets: A Feasibility Study | Twitter data has been shown broadly applicable for public health
surveillance. Previous public health studies based on Twitter data have largely
relied on keyword-matching or topic models for clustering relevant tweets.
However, both methods suffer from the short-length of texts and unpredictable
noise that naturally occurs in user-generated contexts. In response, we
introduce a deep learning approach that uses hashtags as a form of supervision
and learns tweet embeddings for extracting informative textual features. In
this case study, we address the specific task of estimating state-level obesity
from dietary-related textual features. Our approach yields estimates that
strongly correlate with government data and outperform the
keyword-matching baseline. The results also demonstrate the potential of
discovering risk factors using the textual features. This method is
general-purpose and can be applied to a wide range of Twitter-based public
health studies.
| 2,019 | Computation and Language |
CAWA: An Attention-Network for Credit Attribution | Credit attribution is the task of associating individual parts in a document
with their most appropriate class labels. It is an important task with
applications to information retrieval and text summarization. When labeled
training data is available, traditional approaches for sequence tagging can be
used for credit attribution. However, generating such labeled datasets is
expensive and time-consuming. In this paper, we present "Credit Attribution
With Attention (CAWA)", a neural-network-based approach, that instead of using
sentence-level labeled data, uses the set of class labels that are associated
with an entire document as a source of distant-supervision. CAWA combines an
attention mechanism with a multilabel classifier into an end-to-end learning
framework to perform credit attribution. CAWA labels the individual sentences
from the input document using the resultant attention-weights. CAWA improves
upon the state-of-the-art credit attribution approach by not constraining a
sentence to belong to just one class, but modeling each sentence as a
distribution over all classes, leading to better modeling of
semantically-similar classes. Experiments on the credit attribution task on a
variety of datasets show that the sentence class labels generated by CAWA
outperform the competing approaches. Additionally, on the multilabel text
classification task, CAWA performs better than the competing credit attribution
approaches.
| 2,019 | Computation and Language |
ATCSpeech: a multilingual pilot-controller speech corpus from real Air
Traffic Control environment | Automatic Speech Recognition (ASR) has developed greatly in recent years,
expediting many applications in other fields. For ASR research, a speech
corpus is always an essential foundation, especially for vertical industries
such as Air Traffic Control (ATC). There are some speech corpora for common
applications, public or paid. However, for ATC, it is difficult to collect
raw speech from real systems due to safety issues. More importantly, for a
supervised learning task like ASR, annotating the transcriptions is even more
laborious work, which hugely restricts the prospects of ASR applications. In this
paper, a multilingual speech corpus (ATCSpeech) from real ATC systems,
including accented Mandarin Chinese and English, is built and released to
encourage non-commercial ASR research in the ATC domain. The corpus is described
in detail in terms of data amount, speaker gender and role, speech
quality, and other attributes. In addition, the performance of our baseline
ASR models is also reported. A community edition of our speech database can be
applied for and used under a special contract. To the best of our knowledge, this is the
first work that aims at building a real, multilingual ASR corpus for
air-traffic-related research.
| 2,021 | Computation and Language |
SemEval-2015 Task 3: Answer Selection in Community Question Answering | Community Question Answering (cQA) provides new interesting research
directions to the traditional Question Answering (QA) field, e.g., the
exploitation of the interaction between users and the structure of related
posts. In this context, we organized SemEval-2015 Task 3 on "Answer Selection
in cQA", which included two subtasks: (a) classifying answers as "good", "bad",
or "potentially relevant" with respect to the question, and (b) answering a
YES/NO question with "yes", "no", or "unsure", based on the list of all
answers. We set subtask A for Arabic and English on two relatively different
cQA domains, i.e., the Qatar Living website for English, and a Quran-related
website for Arabic. We used crowdsourcing on Amazon Mechanical Turk to label a
large English training dataset, which we released to the research community.
Thirteen teams participated in the challenge with a total of 61 submissions: 24
primary and 37 contrastive. The best systems achieved an official score
(macro-averaged F1) of 57.19 and 63.7 for the English subtasks A and B, and
78.55 for the Arabic subtask A.
| 2,015 | Computation and Language |
Natural Language Generation Using Reinforcement Learning with External
Rewards | We propose an approach towards natural language generation using a
bidirectional encoder-decoder which incorporates external rewards through
reinforcement learning (RL). We use an attention mechanism and maximum mutual
information as the initial objective function for RL. Using a two-part
training scheme, we train an external reward analyzer to predict the external
rewards and then use the predicted rewards to maximize the expected rewards
(both internal and external). We evaluate the system on two standard dialogue
corpora - Cornell Movie Dialog Corpus and Yelp Restaurant Review Corpus. We
report standard evaluation metrics including BLEU, ROUGE-L, and perplexity as
well as human evaluation to validate our approach.
| 2,019 | Computation and Language |
A Large-scale Dataset for Argument Quality Ranking: Construction and
Analysis | Identifying the quality of free-text arguments has become an important task
in the rapidly expanding field of computational argumentation. In this work, we
explore the challenging task of argument quality ranking. To this end, we
created a corpus of 30,497 arguments carefully annotated for point-wise
quality, released as part of this work. To the best of our knowledge, this is
the largest dataset annotated for point-wise argument quality, larger by a
factor of five than previously released datasets. Moreover, we address the core
issue of inducing a labeled score from crowd annotations by performing a
comprehensive evaluation of different approaches to this problem. In addition,
we analyze the quality dimensions that characterize this dataset. Finally, we
present a neural method for argument quality ranking, which outperforms several
baselines on our own dataset, as well as previous methods published for another
dataset.
| 2,019 | Computation and Language |
Single Headed Attention RNN: Stop Thinking With Your Head | The leading approaches in language modeling are all obsessed with TV shows of
my youth - namely Transformers and Sesame Street. Transformers this,
Transformers that, and over here a bonfire worth of GPU-TPU-neuromorphic wafer
scale silicon. We opt for the lazy path of old and proven techniques with a
fancy crypto inspired acronym: the Single Headed Attention RNN (SHA-RNN). The
author's lone goal is to show that the entire field might have evolved a
different direction if we had instead been obsessed with a slightly different
acronym and slightly different result. We take a previously strong language
model based only on boring LSTMs and get it to within a stone's throw of a
stone's throw of state-of-the-art byte level language model results on enwik8.
This work has undergone no intensive hyperparameter optimization and lived
entirely on a commodity desktop machine that made the author's small studio
apartment far too warm in the midst of a San Franciscan summer. The final
results are achievable in plus or minus 24 hours on a single GPU as the author
is impatient. The attention mechanism is also readily extended to large
contexts with minimal computation. Take that Sesame Street.
| 2,019 | Computation and Language |
Relevance-Promoting Language Model for Short-Text Conversation | Despite the effectiveness of sequence-to-sequence framework on the task of
Short-Text Conversation (STC), the issue of under-exploitation of training data
(i.e., the supervision signals from the query text are \textit{ignored}) still
remains unresolved. Also, the adopted \textit{maximization}-based decoding
strategies, which are inclined to generate generic responses or responses with
repetition, are unsuited to the STC task. In this paper, we propose to
formulate the STC task as a language modeling problem and tailor-make a
training strategy to adapt a language model for response generation. To enhance
generation performance, we design a relevance-promoting transformer language
model, which performs additional supervised source attention after the
self-attention to increase the importance of informative query tokens in
calculating the token-level representation. The model further refines the query
representation with relevance clues inferred from its multiple references
during training. In testing, we adopt a
\textit{randomization-over-maximization} strategy to reduce the generation of
generic responses. Experimental results on a large Chinese STC dataset
demonstrate the superiority of the proposed model on relevance metrics and
diversity metrics.\footnote{Code available at
https://ai.tencent.com/ailab/nlp/dialogue/.}
| 2,019 | Computation and Language |
Integrating Relation Constraints with Neural Relation Extractors | Recent years have seen rapid progress in identifying predefined relationships
between entity pairs using neural networks (NNs). However, such models often make
predictions for each entity pair individually and thus often fail to resolve
inconsistencies among different predictions, which can be characterized by
discrete relation constraints. These constraints are often defined over
combinations of entity-relation-entity triples, since explicitly well-defined
type and cardinality requirements for the relations are often lacking. In
this paper, we propose a unified framework to integrate relation constraints
with NNs by introducing a new loss term, ConstraintLoss. Particularly, we
develop two efficient methods to capture how well the local predictions from
multiple instance pairs satisfy the relation constraints. Experiments on both
English and Chinese datasets show that our approach can help NNs learn from
discrete relation constraints to reduce inconsistency among local predictions,
and outperform popular neural relation extraction (NRE) models, even when enhanced with
extra post-processing. Our source code and datasets will be released at
https://github.com/PKUYeYuan/Constraint-Loss-AAAI-2020.
| 2,019 | Computation and Language |
Feature-Rich Part-of-speech Tagging for Morphologically Complex
Languages: Application to Bulgarian | We present experiments with part-of-speech tagging for Bulgarian, a Slavic
language with rich inflectional and derivational morphology. Unlike most
previous work, which has used a small number of grammatical categories, we work
with 680 morpho-syntactic tags. We combine a large morphological lexicon with
prior linguistic knowledge and guided learning from a POS-annotated corpus,
achieving accuracy of 97.98%, which is a significant improvement over the
state-of-the-art for Bulgarian.
| 2,012 | Computation and Language |
Neural Machine Translation with Explicit Phrase Alignment | While neural machine translation (NMT) has achieved state-of-the-art
translation performance, it is unable to capture the alignment between the
input and output during the translation process. The lack of alignment in NMT
models leads to three problems: it is hard to (1) interpret the translation
process, (2) impose lexical constraints, and (3) impose structural constraints.
To alleviate these problems, we propose to introduce explicit phrase alignment
into the translation process of arbitrary NMT models. The key idea is to build
a search space similar to that of phrase-based statistical machine translation
for NMT where phrase alignment is readily available. We design a new decoding
algorithm that can easily impose lexical and structural constraints.
Experiments show that our approach makes the translation process of NMT more
interpretable without sacrificing translation quality. In addition, our
approach achieves significant improvements in lexically and structurally
constrained translation tasks.
| 2,019 | Computation and Language |
A Time Series Analysis of Emotional Loading in Central Bank Statements | We examine the affective content of central bank press statements using
emotion analysis. Our focus is on two major international players, the European
Central Bank (ECB) and the US Federal Reserve Bank (Fed), covering a time span
from 1998 through 2019. We reveal characteristic patterns in the emotional
dimensions of valence, arousal, and dominance and find---despite the commonly
established attitude that emotional wording in central bank communication
should be avoided---a correlation between the state of the economy and
particularly the dominance dimension in the press releases under scrutiny and,
overall, an impact of the president in office.
| 2,019 | Computation and Language |
A Vietnamese Text-Based Conversational Agent | This paper introduces a Vietnamese text-based conversational agent
architecture on specific knowledge domain which is integrated in a question
answering system. When the question answering system fails to provide answers
to users' input, our conversational agent can step in to interact with users and
provide answers. Experimental results are promising: our
Vietnamese text-based conversational agent achieves positive feedback in a
study conducted in the university academic regulation domain.
| 2,019 | Computation and Language |
PIQA: Reasoning about Physical Commonsense in Natural Language | To apply eyeshadow without a brush, should I use a cotton swab or a
toothpick? Questions requiring this kind of physical commonsense pose a
challenge to today's natural language understanding systems. While recent
pretrained models (such as BERT) have made progress on question answering over
more abstract domains - such as news articles and encyclopedia entries, where
text is plentiful - in more physical domains, text is inherently limited due to
reporting bias. Can AI systems learn to reliably answer physical common-sense
questions without experiencing the physical world? In this paper, we introduce
the task of physical commonsense reasoning and a corresponding benchmark
dataset Physical Interaction: Question Answering or PIQA. Though humans find
the dataset easy (95% accuracy), large pretrained models struggle (77%). We
provide analysis about the dimensions of knowledge that existing models lack,
which offers significant opportunities for future research.
| 2,019 | Computation and Language |
Hybrid Text Feature Modeling for Disease Group Prediction using
Unstructured Physician Notes | Existing Clinical Decision Support Systems (CDSSs) largely depend on the
availability of structured patient data and Electronic Health Records (EHRs) to
aid caregivers. However, in the case of hospitals in developing countries,
structured patient data formats are not widely adopted, where medical
professionals still rely on clinical notes in the form of unstructured text.
Such unstructured clinical notes recorded by medical personnel can also be a
potential source of rich patient-specific information which can be leveraged to
build CDSSs, even for hospitals in developing countries. If such unstructured
clinical text can be used, the manual and time-consuming process of EHR
generation will no longer be required, resulting in huge savings in person-hours and cost.
In this paper, we propose a generic ICD9 disease group prediction CDSS built on
unstructured physician notes modeled using hybrid word embeddings. These word
embeddings are used to train a deep neural network for effectively predicting
ICD9 disease groups. Experimental evaluation showed that the proposed approach
outperformed the state-of-the-art disease group prediction model built on
structured EHRs by 15% in terms of AUROC and 40% in terms of AUPRC, thus
proving our hypothesis and eliminating dependency on availability of structured
patient data.
| 2,019 | Computation and Language |
Semi-supervised Bootstrapping of Dialogue State Trackers for Task
Oriented Modelling | Dialogue systems benefit greatly from optimizing on detailed annotations,
such as transcribed utterances, internal dialogue state representations and
dialogue act labels. However, collecting these annotations is expensive and
time-consuming, holding back development in the area of dialogue modelling. In
this paper, we investigate semi-supervised learning methods that are able to
reduce the amount of required intermediate labelling. We find that by
leveraging un-annotated data instead, the amount of turn-level annotations of
dialogue state can be significantly reduced when building a neural dialogue
system. Our analysis on the MultiWOZ corpus, covering a range of domains and
topics, finds that annotations can be reduced by up to 30\% while maintaining
equivalent system performance. We also describe and evaluate the first
end-to-end dialogue model created for the MultiWOZ corpus.
| 2,019 | Computation and Language |
Doc2Vec on the PubMed corpus: study of a new approach to generate
related articles | PubMed is the biggest and most used bibliographic database worldwide, hosting
more than 26M biomedical publications. One of its useful features is the
"similar articles" section, allowing the end-user to find scientific articles
linked to the consulted document in terms of context. The aim of this study is
to analyze whether it is possible to replace the statistical model PubMed Related
Articles (pmra) with a document embedding method. The Doc2Vec algorithm was used to
train models that vectorize documents. Six of its parameters were
optimised by following a grid-search strategy to train more than 1,900 models.
The parameter combination leading to the best accuracy was used to train models on
abstracts from the PubMed database. Four evaluation tasks were defined to
determine what does or does not influence the proximity between documents for
both Doc2Vec and pmra. The two Doc2Vec architectures have different
abilities to link documents about a common context. The terminological
indexing, word, and stem contents of linked documents are highly similar
between pmra and the Doc2Vec PV-DBOW architecture. These algorithms are also more
likely to bring closer documents of similar size. In contrast, the manual
evaluation shows much better results for the pmra algorithm. While the pmra
algorithm links documents by explicitly using terminological indexing in its
formula, Doc2Vec does not need prior indexing. It can infer relations between
documents sharing similar indexing, without any knowledge about them,
particularly with the PV-DBOW architecture. In contrast, the human
evaluation, which showed no clear agreement between evaluators, calls for future
studies to better understand this difference between the PV-DBOW and pmra
algorithms.
| 2,019 | Computation and Language |
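A minimal sketch of training a Doc2Vec PV-DBOW model on abstracts and retrieving the most similar documents, in the spirit of the study above. The toy corpus and hyperparameters are illustrative assumptions, not the optimised grid-search values.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

abstracts = ["myocardial infarction risk factors in elderly patients",
             "deep learning methods for classifying radiology reports"]   # toy corpus
docs = [TaggedDocument(words=text.split(), tags=[i]) for i, text in enumerate(abstracts)]

# dm=0 selects the PV-DBOW architecture discussed in the abstract above.
model = Doc2Vec(docs, dm=0, vector_size=200, window=5, min_count=1, epochs=20)

# Find the abstracts most similar to document 0, analogous to "related articles".
print(model.dv.most_similar(0, topn=5))
```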
Self-Attention Enhanced Selective Gate with Entity-Aware Embedding for
Distantly Supervised Relation Extraction | Distantly supervised relation extraction intrinsically suffers from noisy
labels due to the strong assumption of distant supervision. Most prior works
adopt a selective attention mechanism over sentences in a bag to denoise from
wrongly labeled data, which, however, can be ineffective when there is only one
sentence in a bag. In this paper, we propose a brand-new light-weight neural
framework to address the distantly supervised relation extraction problem and
alleviate the defects in previous selective attention framework. Specifically,
in the proposed framework, 1) we use an entity-aware word embedding method to
integrate both relative position information and head/tail entity embeddings,
aiming to highlight the essence of entities for this task; 2) we develop a
self-attention mechanism to capture the rich contextual dependencies as a
complement for local dependencies captured by piecewise CNN; and 3) instead of
using selective attention, we design a pooling-equipped gate, which is based on
rich contextual representations, as an aggregator to generate bag-level
representation for final relation classification. Compared to selective
attention, one major advantage of the proposed gating mechanism is that it
performs stably and promisingly even if only one sentence appears in a bag and
thus keeps consistency across all training examples. The experiments on the NYT
dataset demonstrate that our approach achieves a new state-of-the-art
performance in terms of both AUC and top-n precision metrics.
| 2,019 | Computation and Language |
Evaluating Commonsense in Pre-trained Language Models | Contextualized representations trained over large raw text data have given
remarkable improvements for NLP tasks including question answering and reading
comprehension. There have been works showing that syntactic, semantic and word
sense knowledge are contained in such representations, which explains why they
benefit such tasks. However, relatively little work has been done investigating
commonsense knowledge contained in contextualized representations, which is
crucial for human question answering and reading comprehension. We study the
commonsense ability of GPT, BERT, XLNet, and RoBERTa by testing them on seven
challenging benchmarks, finding that language modeling and its variants are
effective objectives for promoting models' commonsense ability while
bi-directional context and larger training set are bonuses. We additionally
find that current models do poorly on tasks that require more inference
steps. Finally, we test the robustness of models by making dual test cases,
which are correlated so that the correct prediction of one sample should lead
to correct prediction of the other. Interestingly, the models show confusion on
these test cases, which suggests that they learn commonsense at the surface
rather than the deep level. We publicly release a test set, named CATs, for
future research.
| 2,021 | Computation and Language |
Simultaneous Neural Machine Translation using Connectionist Temporal
Classification | Simultaneous machine translation is a variant of machine translation that
starts the translation process before the end of an input. This task faces a
trade-off between translation accuracy and latency. We have to determine when
to start translating from the inputs observed so far in order to achieve good
practical performance. In this work, we propose a neural machine translation method to
determine this timing in an adaptive manner. The proposed method introduces a
special token '<wait>', which is generated when the translation model chooses
to read the next input token instead of generating an output token. It also
introduces an objective function to handle the ambiguity in wait timings that
can be optimized using an algorithm called Connectionist Temporal
Classification (CTC). The use of CTC enables the optimization to consider all
possible output sequences including '<wait>' that are equivalent to the
reference translations and to choose the best one adaptively. We apply the
proposed method to simultaneous translation from English to Japanese and
investigate its performance and remaining problems.
| 2,019 | Computation and Language |
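A minimal, hedged sketch of the CTC idea from the abstract above: treating '<wait>' like the CTC blank symbol lets the loss marginalize over all placements of '<wait>' around the reference tokens. This is an approximation for illustration, not the authors' exact objective (standard CTC also collapses repeated labels, which the paper may handle differently).

```python
import torch
import torch.nn as nn

vocab_size, wait_id = 100, 0           # index 0 reserved for '<wait>' (used as blank)
T, N, S = 12, 2, 5                     # decoder steps, batch size, reference length

log_probs = torch.randn(T, N, vocab_size).log_softmax(dim=-1)   # model output scores
targets = torch.randint(1, vocab_size, (N, S))                  # references, no '<wait>'
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# CTC sums probabilities over every alignment, i.e. every valid '<wait>' placement.
ctc = nn.CTCLoss(blank=wait_id, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```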
AIPNet: Generative Adversarial Pre-training of Accent-invariant Networks
for End-to-end Speech Recognition | As one of the major sources in speech variability, accents have posed a grand
challenge to the robustness of speech recognition systems. In this paper, our
goal is to build a unified end-to-end speech recognition system that
generalizes well across accents. For this purpose, we propose a novel
pre-training framework AIPNet based on generative adversarial nets (GAN) for
accent-invariant representation learning: Accent Invariant Pre-training
Networks. We pre-train AIPNet to disentangle accent-invariant and
accent-specific characteristics from acoustic features through adversarial
training on accented data for which transcriptions are not necessarily
available. We further fine-tune AIPNet by connecting the accent-invariant
module with an attention-based encoder-decoder model for multi-accent speech
recognition. In the experiments, our approach is compared against four
baselines including both accent-dependent and accent-independent models.
Experimental results on 9 English accents show that the proposed approach
outperforms all the baselines by a 2.3% to 4.5% relative reduction in average
WER when transcriptions are available in all accents and by a 1.6% to 6.1%
relative reduction when transcriptions are only available in the US accent.
| 2,019 | Computation and Language |
Taking a Stance on Fake News: Towards Automatic Disinformation
Assessment via Deep Bidirectional Transformer Language Models for Stance
Detection | The exponential rise of social media and digital news in the past decade has
had the unfortunate consequence of escalating what the United Nations has
called a global topic of concern: the growing prevalence of disinformation.
Given the complexity and time-consuming nature of combating disinformation
through human assessment, one is motivated to explore harnessing AI solutions
to automatically assess news articles for the presence of disinformation. A
valuable first step towards automatic identification of disinformation is
stance detection, where given a claim and a news article, the aim is to predict
if the article agrees, disagrees, takes no position, or is unrelated to the
claim. Existing approaches in the literature have largely relied on hand-engineered
features or shallow learned representations (e.g., word embeddings) to encode
the claim-article pairs, which can limit the level of representational
expressiveness needed to tackle the high complexity of disinformation
identification. In this work, we explore the notion of harnessing large-scale
deep bidirectional transformer language models for encoding claim-article pairs
in an effort to construct state-of-the-art stance detection geared for
identifying disinformation. Taking advantage of bidirectional cross-attention
between claim-article pairs via pair encoding with self-attention, we construct
a large-scale language model for stance detection by performing transfer
learning on a RoBERTa deep bidirectional transformer language model, and were
able to achieve state-of-the-art performance (weighted accuracy of 90.01%) on
the Fake News Challenge Stage 1 (FNC-I) benchmark. These promising results
serve as motivation for harnessing such large-scale language models as powerful
building blocks for creating effective AI solutions to combat disinformation.
| 2,019 | Computation and Language |
JEC-QA: A Legal-Domain Question Answering Dataset | We present JEC-QA, the largest question answering dataset in the legal
domain, collected from the National Judicial Examination of China. The
examination is a comprehensive evaluation of professional skills for legal
practitioners. College students are required to pass the examination to be
certified as a lawyer or a judge. The dataset is challenging for existing
question answering methods, because both retrieving relevant materials and
answering questions require the ability of logical reasoning. Due to the high
demand for multiple reasoning abilities to answer legal questions, the
state-of-the-art models can only achieve about 28% accuracy on JEC-QA, while
skilled humans and unskilled humans can reach 81% and 64% accuracy
respectively, which indicates a huge gap between humans and machines on this
task. We will release JEC-QA and our baselines to help improve the reasoning
ability of machine comprehension models. You can access the dataset from
http://jecqa.thunlp.org/.
| 2,019 | Computation and Language |
Zero-shot Chinese Discourse Dependency Parsing via Cross-lingual Mapping | Due to the absence of labeled data, discourse parsing still remains
challenging in some languages. In this paper, we present a simple and efficient
method to conduct zero-shot Chinese text-level dependency parsing by leveraging
English discourse labeled data and parsing techniques. We first construct the
Chinese-English mapping at the level of sentences and elementary discourse
units (EDUs), and then exploit the parsing results of the corresponding English
translations to obtain the discourse trees for the Chinese text. This method
can automatically conduct Chinese discourse parsing, with no need for
large-scale Chinese labeled data.
| 2,019 | Computation and Language |
word2word: A Collection of Bilingual Lexicons for 3,564 Language Pairs | We present word2word, a publicly available dataset and an open-source Python
package for cross-lingual word translations extracted from sentence-level
parallel corpora. Our dataset provides top-k word translations in 3,564
(directed) language pairs across 62 languages in OpenSubtitles2018 (Lison et
al., 2018). To obtain this dataset, we use a count-based bilingual lexicon
extraction model based on the observation that not only source and target words
but also source words themselves can be highly correlated. We illustrate that
the resulting bilingual lexicons have high coverage and attain competitive
translation quality for several language pairs. We wrap our dataset and model
in an easy-to-use Python library, which supports downloading and retrieving
top-k word translations in any of the supported language pairs as well as
computing top-k word translations for custom parallel corpora.
| 2,019 | Computation and Language |
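An illustrative sketch (an assumption for exposition, not the word2word package's actual model) of count-based bilingual lexicon extraction: co-occurrence counts over a sentence-aligned parallel corpus give, for each source word, its top-k candidate translations. The toy corpus is hypothetical.

```python
from collections import Counter, defaultdict

parallel = [("i love you", "te quiero"),
            ("i love coffee", "me encanta el cafe"),
            ("you and i", "tu y yo")]             # toy sentence-aligned corpus

cooc = defaultdict(Counter)
for src, tgt in parallel:
    for s in src.split():
        for t in tgt.split():
            cooc[s][t] += 1                        # count source-target co-occurrences

def top_k(word, k=3):
    # Rank target words by raw co-occurrence with the source word.
    return cooc[word].most_common(k)

print(top_k("love"))   # most frequent target-side co-occurrences for "love"
```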
Sideways Transliteration: How to Transliterate Multicultural Person
Names? | In a global setting, texts contain transliterated names from many cultural
origins. Correct transliteration depends not only on the target and source
languages but also on the original language of the name. We introduce a novel
methodology for transliteration of names originating in different languages
using only monolingual resources. Our method is based on a step of noisy
transliteration followed by ranking of the results based on origin-specific letter
models. The transliteration table used for noisy generation is learned in an
unsupervised manner for each possible origin language. We present a solution
for gathering the monolingual training data used by our method by mining social
media sites such as Facebook and Wikipedia. We present results in the context
of transliterating from English to Hebrew and provide an online web service for
transliteration from English to Hebrew.
| 2,019 | Computation and Language |
Jejueo Datasets for Machine Translation and Speech Synthesis | Jejueo was classified as critically endangered by UNESCO in 2010. Although
diverse efforts to revitalize it have been made, there have been few
computational approaches. Motivated by this, we construct two new Jejueo
datasets: Jejueo Interview Transcripts (JIT) and Jejueo Single Speaker Speech
(JSS). The JIT dataset is a parallel corpus containing 170k+ Jejueo-Korean
sentences, and the JSS dataset consists of 10k high-quality audio files
recorded by a native Jejueo speaker and a transcript file. Subsequently, we
build neural systems of machine translation and speech synthesis using them.
All resources are publicly available via our GitHub repository. We hope that
these datasets will attract the interest of both the language and machine learning
communities.
| 2,019 | Computation and Language |
Large-Scale Noun Compound Interpretation Using Bootstrapping and the Web
as a Corpus | Responding to the need for semantic lexical resources in natural language
processing applications, we examine methods to acquire noun compounds (NCs),
e.g., "orange juice", together with suitable fine-grained semantic
interpretations, e.g., "squeezed from", which are directly usable as
paraphrases. We employ bootstrapping and web statistics, and utilize the
relationship between NCs and paraphrasing patterns to jointly extract NCs and
such patterns in multiple alternating iterations. In evaluation, we found that
having one compound noun fixed yields both a higher number of semantically
interpreted NCs and improved accuracy due to stronger semantic restrictions.
| 2,011 | Computation and Language |
Findings of the 2016 WMT Shared Task on Cross-lingual Pronoun Prediction | We describe the design, the evaluation setup, and the results of the 2016 WMT
shared task on cross-lingual pronoun prediction. This is a classification task
in which participants are asked to provide predictions on what pronoun class
label should replace a placeholder value in the target-language text, provided
in lemmatised and PoS-tagged form. We provided four subtasks, for the
English-French and English-German language pairs, in both directions. Eleven
teams participated in the shared task; nine for the English-French subtask,
five for French-English, nine for English-German, and six for German-English.
Most of the submissions outperformed two strong language-model based baseline
systems, with systems using deep recurrent neural networks outperforming those
using other architectures for most language pairs.
| 2,016 | Computation and Language |
NorNE: Annotating Named Entities for Norwegian | This paper presents NorNE, a manually annotated corpus of named entities
which extends the annotation of the existing Norwegian Dependency Treebank.
Comprising both of the official standards of written Norwegian (Bokm{\aa}l and
Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of
entity types including persons, organizations, locations, geo-political
entities, products, and events, in addition to a class corresponding to
nominals derived from names. We here present details on the annotation effort,
guidelines, inter-annotator agreement and an experimental analysis of the
corpus using a neural sequence labeling architecture.
| 2,020 | Computation and Language |
SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive
Summarization | This paper introduces the SAMSum Corpus, a new dataset with abstractive
dialogue summaries. We investigate the challenges it poses for automated
summarization by testing several models and comparing their results with those
obtained on a corpus of news articles. We show that model-generated summaries
of dialogues achieve higher ROUGE scores than the model-generated summaries of
news -- in contrast with human evaluators' judgement. This suggests that a
challenging task of abstractive dialogue summarization requires dedicated
models and non-standard quality measures. To our knowledge, our study is the
first attempt to introduce a high-quality chat-dialogues corpus, manually
annotated with abstractive summarizations, which can be used by the research
community for further studies.
| 2,019 | Computation and Language |
Do Attention Heads in BERT Track Syntactic Dependencies? | We investigate the extent to which individual attention heads in pretrained
transformer language models, such as BERT and RoBERTa, implicitly capture
syntactic dependency relations. We employ two methods---taking the maximum
attention weight and computing the maximum spanning tree---to extract implicit
dependency relations from the attention weights of each layer/head, and compare
them to the ground-truth Universal Dependency (UD) trees. We show that, for
some UD relation types, there exist heads that can recover the dependency type
significantly better than baselines on parsed English text, suggesting that
some self-attention heads act as a proxy for syntactic structure. We also
analyze BERT fine-tuned on two datasets---the syntax-oriented CoLA and the
semantics-oriented MNLI---to investigate whether fine-tuning affects the
patterns of their self-attention, but we do not observe substantial differences
in the overall dependency relations extracted using our methods. Our results
suggest that these models have some specialist attention heads that track
individual dependency types, but no generalist head that performs holistic
parsing significantly better than a trivial baseline, and that analyzing
attention weights directly may not reveal much of the syntactic knowledge that
BERT-style models are known to learn.
| 2,019 | Computation and Language |
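For the maximum-spanning-tree extraction method mentioned in the abstract above, a minimal sketch of the mechanics is shown below; the attention matrix is random rather than taken from an actual BERT head, and the tree is undirected, so it illustrates the procedure rather than the paper's analysis.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def attention_to_tree(attn):
    """Extract an undirected spanning tree from a token-by-token attention
    matrix by running a maximum spanning tree (negate weights, take the MST)."""
    sym = (attn + attn.T) / 2.0          # symmetrize the head's attention
    mst = minimum_spanning_tree(-sym)    # minimizing negated weights maximizes attention
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))

rng = np.random.default_rng(0)
toy_attn = rng.random((5, 5))            # stand-in for one layer/head over 5 tokens
print(attention_to_tree(toy_attn))       # edges of the extracted tree
```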
A Vietnamese Question Answering System | Question answering systems aim to produce exact answers to users' questions
instead of a list of related documents as used by current search engines. In
this paper, we propose an ontology-based Vietnamese question answering system
that allows users to express their questions in natural language. To the best
of our knowledge, this is the first attempt to enable users to query an
ontological knowledge base using Vietnamese natural language. Experiments of
our system on an organizational ontology show promising results.
| 2,019 | Computation and Language |
DeFINE: DEep Factorized INput Token Embeddings for Neural Sequence
Modeling | For sequence models with large vocabularies, a majority of network parameters
lie in the input and output layers. In this work, we describe a new method,
DeFINE, for learning deep token representations efficiently. Our architecture
uses a hierarchical structure with novel skip-connections which allows for the
use of low dimensional input and output layers, reducing total parameters and
training time while delivering similar or better performance versus existing
methods. DeFINE can be incorporated easily in new or existing sequence models.
Compared to state-of-the-art methods including adaptive input representations,
this technique results in a 6% to 20% drop in perplexity. On WikiText-103,
DeFINE reduces the total parameters of Transformer-XL by half with minimal
impact on performance. On the Penn Treebank, DeFINE improves AWD-LSTM by 4
points with a 17% reduction in parameters, achieving comparable performance to
state-of-the-art methods with fewer parameters. For machine translation, DeFINE
improves the efficiency of the Transformer model by about 1.4 times while
delivering similar performance.
| 2,020 | Computation and Language |
SimpleBooks: Long-term dependency book dataset with simplified English
vocabulary for word-level language modeling | With language modeling becoming the popular base task for unsupervised
representation learning in Natural Language Processing, it is important to come
up with new architectures and techniques for faster and better training of
language models. However, due to a peculiarity of languages -- the larger the
dataset, the higher the average number of times a word appears in that dataset
-- datasets of different sizes have very different properties. Architectures
performing well on small datasets might not perform well on larger ones. For
example, LSTM models perform well on WikiText-2 but poorly on WikiText-103,
while Transformer models perform well on WikiText-103 but not on WikiText-2.
For setups like architecture search, this is a challenge: it is prohibitively costly to run a search on the full dataset, while experiments on smaller ones are not indicative. In this paper, we introduce
SimpleBooks, a small dataset with the average word frequency as high as that of
much larger ones. Created from 1,573 Gutenberg books with the highest ratio of
word-level book length to vocabulary size, SimpleBooks contains 92M word-level
tokens, on par with WikiText-103 (103M tokens), but has a vocabulary of 98K, a third of WikiText-103's. SimpleBooks can be downloaded from
https://dldata-public.s3.us-east-2.amazonaws.com/simplebooks.zip.
| 2,019 | Computation and Language |
Metre as a stylometric feature in Latin hexameter poetry | This paper demonstrates that metre is a privileged indicator of authorial
style in classical Latin hexameter poetry. Using only metrical features,
pairwise classification experiments are performed between 5 first-century
authors (10 comparisons) using four different machine-learning models. The
results showed a two-label classification accuracy of at least 95% with samples
as small as ten lines and no greater than eighty lines (up to around 500
words). These sample sizes are an order of magnitude smaller than those
typically recommended for BOW ('bag of words') or n-gram approaches, and the
reported accuracy is outstanding. Additionally, this paper explores the
potential for novelty (forgery) detection, or 'one-class classification'. An
analysis of the disputed Aldine Additamentum (Sil. Ital. Puni. 8:144-225)
concludes (p=0.0013) that the metrical style differs significantly from that of
the rest of the poem.
| 2,019 | Computation and Language |
Minimum Bayes Risk Training of RNN-Transducer for End-to-End Speech
Recognition | In this work, we propose minimum Bayes risk (MBR) training of RNN-Transducer
(RNN-T) for end-to-end speech recognition. Specifically, initialized with an RNN-T-trained model, MBR training is conducted by minimizing the expected edit distance between the reference label sequence and the on-the-fly generated N-best hypotheses. We also introduce a heuristic to incorporate an external neural
network language model (NNLM) in RNN-T beam search decoding and explore MBR
training with the external NNLM. Experimental results demonstrate that an MBR-trained model substantially outperforms an RNN-T-trained model and that further improvements can be achieved if trained with an external NNLM. Our best MBR
trained system achieves absolute character error rate (CER) reductions of 1.2%
and 0.5% on read and spontaneous Mandarin speech respectively over a strong
convolution and transformer based RNN-T baseline trained on ~21,000 hours of
speech.
| 2,019 | Computation and Language |
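A small sketch of the expected-edit-distance objective described in the abstract above, assuming an N-best list with hypothetical log-probabilities in place of real RNN-T beam-search output.

```python
import math

def edit_distance(a, b):
    """Standard Levenshtein distance between two label sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

def expected_risk(reference, nbest, log_probs):
    """MBR risk: sum_i P(y_i|x) * ED(y_i, ref), with P renormalized over the N-best list."""
    m = max(log_probs)
    weights = [math.exp(lp - m) for lp in log_probs]
    z = sum(weights)
    return sum(w / z * edit_distance(h, reference) for w, h in zip(weights, nbest))

# Hypothetical reference, N-best hypotheses, and model scores (illustration only).
ref = list("hello world")
hyps = [list("hello world"), list("hallo world"), list("hello word")]
scores = [-1.2, -2.5, -3.0]
print(expected_risk(ref, hyps, scores))
```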
How Can We Know What Language Models Know? | Recent work has presented intriguing results examining the knowledge
contained in language models (LM) by having the LM fill in the blanks of
prompts such as "Obama is a _ by profession". These prompts are usually
manually created, and quite possibly sub-optimal; another prompt such as "Obama
worked as a _" may result in more accurately predicting the correct profession.
Because of this, given an inappropriate prompt, we might fail to retrieve facts
that the LM does know, and thus any given prompt only provides a lower bound
estimate of the knowledge contained in an LM. In this paper, we attempt to more
accurately estimate the knowledge contained in LMs by automatically discovering
better prompts to use in this querying process. Specifically, we propose
mining-based and paraphrasing-based methods to automatically generate
high-quality and diverse prompts, as well as ensemble methods to combine
answers from different prompts. Extensive experiments on the LAMA benchmark for
extracting relational knowledge from LMs demonstrate that our methods can
improve accuracy from 31.1% to 39.6%, providing a tighter lower bound on what
LMs know. We have released the code and the resulting LM Prompt And Query
Archive (LPAQA) at https://github.com/jzbjyb/LPAQA.
| 2,020 | Computation and Language |
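A rough sketch of the querying-and-ensembling idea described in the abstract above, using the Hugging Face transformers fill-mask pipeline (recent versions); the two prompt variants are hand-written stand-ins for the mined and paraphrased prompts in LPAQA.

```python
from collections import defaultdict
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
mask = fill.tokenizer.mask_token

# Hand-written prompt variants for the "profession" relation (illustrative only).
prompts = [
    f"Obama is a {mask} by profession.",
    f"Obama worked as a {mask}.",
]

scores = defaultdict(float)
for p in prompts:
    for cand in fill(p, top_k=10):            # each candidate has 'token_str' and 'score'
        scores[cand["token_str"].strip()] += cand["score"] / len(prompts)

# Ensemble answer: average the per-prompt scores and take the best candidates.
print(sorted(scores.items(), key=lambda kv: -kv[1])[:3])
```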
Language-Independent Sentiment Analysis Using Subjectivity and
Positional Information | We describe a novel language-independent approach to the task of determining
the polarity, positive or negative, of the author's opinion on a specific topic
in natural language text. In particular, weights are assigned to attributes,
individual words or word bi-grams, based on their position and on their
likelihood of being subjective. The subjectivity of each attribute is estimated
in a two-step process, where first the probability of being subjective is
calculated for each sentence containing the attribute, and then these
probabilities are used to alter the attribute's weights for polarity
classification. The evaluation results on a standard dataset of movie reviews show 89.85% classification accuracy, which rivals the best previously published results for this dataset among systems that use no additional linguistic information or external resources.
| 2,009 | Computation and Language |
DiscoTK: Using Discourse Structure for Machine Translation Evaluation | We present novel automatic metrics for machine translation evaluation that
use discourse structure and convolution kernels to compare the discourse tree
of an automatic translation with that of the human reference. We experiment
with five transformations and augmentations of a base discourse tree
representation based on the rhetorical structure theory, and we combine the
kernel scores for each of them into a single score. Finally, we add other
metrics from the ASIYA MT evaluation toolkit, and we tune the weights of the
combination on actual human judgments. Experiments on the WMT12 and WMT13
metrics shared task datasets show correlation with human judgments that
outperforms what the best systems that participated in these years achieved,
both at the segment and at the system level.
| 2,014 | Computation and Language |
Improving Neural Relation Extraction with Positive and Unlabeled
Learning | We present a novel approach to improve the performance of distant supervision
relation extraction with Positive and Unlabeled (PU) Learning. This approach
first applies reinforcement learning to decide whether a sentence is positive
to a given relation, and then positive and unlabeled bags are constructed. In
contrast to most previous studies, which mainly use selected positive instances
only, we make full use of unlabeled instances and propose two new
representations for positive and unlabeled bags. These two representations are
then combined in an appropriate way to make bag-level prediction. Experimental
results on a widely used real-world dataset demonstrate that this new approach
indeed achieves significant and consistent improvements as compared to several
competitive baselines.
| 2,019 | Computation and Language |
Emotion helps Sentiment: A Multi-task Model for Sentiment and Emotion
Analysis | In this paper, we propose a two-layered multi-task attention based neural
network that performs sentiment analysis through emotion analysis. The proposed
approach is based on Bidirectional Long Short-Term Memory and uses
Distributional Thesaurus as a source of external knowledge to improve the
sentiment and emotion prediction. The proposed system has two levels of
attention to hierarchically build a meaningful representation. We evaluate our
system on the benchmark dataset of SemEval 2016 Task 6 and also compare it with
the state-of-the-art systems on Stance Sentiment Emotion Corpus. Experimental
results show that the proposed system improves the performance of sentiment
analysis by 3.2 F-score points on SemEval 2016 Task 6 dataset. Our network also
boosts the performance of emotion analysis by 5 F-score points on Stance
Sentiment Emotion Corpus.
| 2,019 | Computation and Language |
Word Embedding based New Corpus for Low-resourced Language: Sindhi | Representing words and phrases as dense vectors of real numbers that encode semantic and syntactic properties is a vital constituent of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned from large unlabeled corpora. Sindhi, a morphologically rich language spoken by a large population in Pakistan and India, lacks corpora, which play an essential role as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using a web scraper (web-scrappy). Due to the unavailability of open-source preprocessing tools for Sindhi, preprocessing such a large corpus is a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed to evaluate the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our generated Sindhi word embeddings using SG, CBoW, and GloVe compared to the SdfastText word representations.
| 2,021 | Computation and Language |
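A loose sketch of the SG/CBoW training step mentioned in the abstract above, using the gensim 4.x Word2Vec API; the tiny tokenized corpus is a placeholder for the actual cleaned Sindhi crawl, and the hyperparameters are illustrative.

```python
from gensim.models import Word2Vec

# Placeholder for the cleaned, tokenized Sindhi corpus (list of token lists).
sentences = [
    ["سنڌي", "ٻولي", "جو", "ڪارپس"],
    ["لفظن", "جي", "ويڪٽر", "نمائندگي"],
]

# Skip-Gram (sg=1) and CBoW (sg=0) variants, as in the abstract.
skipgram = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=1, epochs=10)
cbow = Word2Vec(sentences, vector_size=300, window=5, min_count=1, sg=0, epochs=10)

# Cosine-similarity style intrinsic check on the skip-gram vectors.
print(skipgram.wv.most_similar("سنڌي", topn=3))
```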
A Fine-Grained Sentiment Dataset for Norwegian | We introduce NoReC_fine, a dataset for fine-grained sentiment analysis in
Norwegian, annotated with respect to polar expressions, targets and holders of
opinion. The underlying texts are taken from a corpus of professionally
authored reviews from multiple news-sources and across a wide variety of
domains, including literature, games, music, products, movies and more. We here
present a detailed description of this annotation effort. We provide an
overview of the developed annotation guidelines, illustrated with examples, and
present an analysis of inter-annotator agreement. We also report the first
experimental results on the dataset, intended as a preliminary benchmark for
further experiments.
| 2,020 | Computation and Language |
Inducing Relational Knowledge from BERT | One of the most remarkable properties of word embeddings is the fact that
they capture certain types of semantic and syntactic relationships. Recently,
pre-trained language models such as BERT have achieved groundbreaking results
across a wide range of Natural Language Processing tasks. However, it is
unclear to what extent such models capture relational knowledge beyond what is
already captured by standard word embeddings. To explore this question, we
propose a methodology for distilling relational knowledge from a pre-trained
language model. Starting from a few seed instances of a given relation, we
first use a large text corpus to find sentences that are likely to express this
relation. We then use a subset of these extracted sentences as templates.
Finally, we fine-tune a language model to predict whether a given word pair is
likely to be an instance of some relation, when given an instantiated template
for that relation as input.
| 2,019 | Computation and Language |
Multimodal Machine Translation through Visuals and Speech | Multimodal machine translation involves drawing information from more than
one modality, based on the assumption that the additional modalities will
contain useful alternative views of the input data. The most prominent tasks in
this area are spoken language translation, image-guided translation, and
video-guided translation, which exploit the audio and visual modalities. These tasks are distinguished from their monolingual counterparts
of speech recognition, image captioning, and video captioning by the
requirement of models to generate outputs in a different language. This survey
reviews the major data resources for these tasks, the evaluation campaigns
concentrated around them, the state of the art in end-to-end and pipeline
approaches, and also the challenges in performance evaluation. The paper
concludes with a discussion of directions for future research in these areas:
the need for more expansive and challenging datasets, for targeted evaluations
of model performance, and for multimodality in both the input and output space.
| 2,019 | Computation and Language |
Sentiment Analysis On Indian Indigenous Languages: A Review On
Multilingual Opinion Mining | An increase in the use of smartphones has led to greater use of the internet and social media platforms. The most commonly used social media platforms are Twitter, Facebook, WhatsApp, and Instagram. People share their personal experiences, reviews, and feedback on the web. The information available on the web is unstructured and enormous. Hence, there is a huge scope for research on understanding the sentiment of the data available on the web. Sentiment Analysis (SA) can be carried out on the reviews, feedback, and discussions available on the web. There has been extensive research on SA in the English language, but data on the web also contains many other languages, which should be analyzed as well. This paper aims to analyze, review, and discuss the approaches, algorithms, and challenges faced by researchers while carrying out SA on Indian indigenous languages.
| 2,019 | Computation and Language |
GitHub Typo Corpus: A Large-Scale Multilingual Dataset of Misspellings
and Grammatical Errors | The lack of large-scale datasets has been a major hindrance to the
development of NLP tasks such as spelling correction and grammatical error
correction (GEC). As a complementary new resource for these tasks, we present
the GitHub Typo Corpus, a large-scale, multilingual dataset of misspellings and
grammatical errors along with their corrections harvested from GitHub, a large
and popular platform for hosting and sharing git repositories. The dataset,
which we have made publicly available, contains more than 350k edits and 65M
characters in more than 15 languages, making it the largest dataset of
misspellings to date. We also describe our process for filtering true typo
edits based on learned classifiers on a small annotated subset, and demonstrate
that typo edits can be identified with F1 ~ 0.9 using a very simple classifier
with only three features. The detailed analyses of the dataset show that
existing spelling correctors merely achieve an F-measure of approx. 0.5,
suggesting that the dataset serves as a new, rich source of spelling errors
that complements existing datasets.
| 2,019 | Computation and Language |
Neural Chinese Word Segmentation as Sequence to Sequence Translation | Recently, Chinese word segmentation (CWS) methods using neural networks have
made impressive progress. Most of them regard CWS as a sequence labeling problem and construct models based on local features rather than considering global information of the input sequence. In this paper, we cast CWS as a
sequence translation problem and propose a novel sequence-to-sequence CWS model
with an attention-based encoder-decoder framework. The model captures the
global information from the input and directly outputs the segmented sequence.
It can also tackle other NLP tasks with CWS jointly in an end-to-end mode.
Experiments on Weibo, PKU and MSRA benchmark datasets show that our approach
has achieved competitive performances compared with state-of-the-art methods.
Meanwhile, we successfully applied our proposed model to jointly learning CWS
and Chinese spelling correction, which demonstrates its applicability to multi-task fusion.
| 2,019 | Computation and Language |
Merging Weak and Active Supervision for Semantic Parsing | A semantic parser maps natural language commands (NLs) from the users to
executable meaning representations (MRs), which are later executed in certain
environment to obtain user-desired results. The fully-supervised training of
such parser requires NL/MR pairs, annotated by domain experts, which makes them
expensive to collect. However, weakly-supervised semantic parsers are learnt
only from pairs of NL and expected execution results, leaving the MRs latent.
While weak supervision is cheaper to acquire, learning from this input poses
difficulties. It demands that parsers search a large space with a very weak
learning signal and it is hard to avoid spurious MRs that achieve the correct
answer in the wrong way. These factors lead to a performance gap between
parsers trained in weakly- and fully-supervised setting. To bridge this gap, we
examine the intersection between weak supervision and active learning, which
allows the learner to actively select examples and query for manual annotations
as extra supervision to improve the model trained under weak supervision. We
study different active learning heuristics for selecting examples to query, and
various forms of extra supervision for such queries. We evaluate the
effectiveness of our method on two different datasets. Experiments on WikiSQL show that by annotating only 1.8% of examples, we improve over a
state-of-the-art weakly-supervised baseline by 6.4%, achieving an accuracy of
79.0%, which is only 1.3% away from the model trained with full supervision.
Experiments on WikiTableQuestions with human annotators show that our method
can improve the performance with only 100 active queries, especially for
weakly-supervised parsers learnt from a cold start.
| 2,019 | Computation and Language |
Sentiment Analysis of German Twitter | This thesis explores the ways by how people express their opinions on German
Twitter, examines current approaches to automatic mining of these feelings, and
proposes novel methods, which outperform state-of-the-art techniques. For this
purpose, I introduce a new corpus of German tweets that have been manually
annotated with sentiments, their targets and holders, as well as polar terms
and their contextual modifiers. Using these data, I explore four major areas of
sentiment research: (i) generation of sentiment lexicons, (ii) fine-grained
opinion mining, (iii) message-level polarity classification, and (iv)
discourse-aware sentiment analysis. In the first task, I compare three popular
groups of lexicon generation methods: dictionary-, corpus-, and
word-embedding-based ones, finding that dictionary-based systems generally
yield better lexicons than the last two groups. Apart from this, I propose a
linear projection algorithm, whose results surpass many existing automatic
lexicons. Afterwards, in the second task, I examine two common approaches to
automatic prediction of sentiments, sources, and targets: conditional random
fields and recurrent neural networks, obtaining higher scores with the former
model and improving these results even further by redefining the structure of
CRF graphs. When dealing with message-level polarity classification, I
juxtapose three major sentiment paradigms: lexicon-, machine-learning-, and
deep-learning-based systems, and try to unite the first and last of these
groups by introducing a bidirectional neural network with lexicon-based
attention. Finally, in order to make the new classifier aware of discourse
structure, I let it separately analyze the elementary discourse units of each
microblog and infer the overall polarity of a message from the scores of its
EDUs with the help of two new approaches: latent-marginalized CRFs and
Recursive Dirichlet Process.
| 2,019 | Computation and Language |
A Multi-cascaded Deep Model for Bilingual SMS Classification | Most studies on text classification are focused on the English language.
However, short texts such as SMS are influenced by regional languages. This
makes the automatic text classification task challenging due to the
multilingual, informal, and noisy nature of language in the text. In this work,
we propose a novel multi-cascaded deep learning model called McM for bilingual
SMS classification. McM exploits $n$-gram level information as well as
long-term dependencies of text for learning. Our approach aims to learn a model
without any code-switching indication, lexical normalization, language
translation, or language transliteration. The model relies entirely upon the
text as no external knowledge base is utilized for learning. For this purpose,
a 12-class bilingual text dataset is developed from SMS feedback from citizens on public services, containing a mix of Roman Urdu and English. Our model
achieves high accuracy for classification on this dataset and outperforms the
previous model for multilingual text classification, highlighting language
independence of McM.
| 2,019 | Computation and Language |
Kurdish (Sorani) Speech to Text: Presenting an Experimental Dataset | We present an experimental dataset, Basic Dataset for Sorani Kurdish
Automatic Speech Recognition (BD-4SK-ASR), which we used in the first attempt at developing automatic speech recognition for Sorani Kurdish. The objective of the project was to develop a system that could automatically recognize simple sentences based on the vocabulary used in grades one to three of the primary schools in the Kurdistan Region of Iraq. We used CMUSphinx as
our experimental environment. We developed a dataset to train the system. The
dataset is publicly available for non-commercial use under the CC BY-NC-SA 4.0
license.
| 2,019 | Computation and Language |
An Iterative Polishing Framework based on Quality Aware Masked Language
Model for Chinese Poetry Generation | Owing to its unique literal and aesthetical characteristics, automatic
generation of Chinese poetry is still challenging in Artificial Intelligence,
which can hardly be straightforwardly realized by end-to-end methods. In this
paper, we propose a novel iterative polishing framework for highly qualified
Chinese poetry generation. In the first stage, an encoder-decoder structure is
utilized to generate a poem draft. Afterwards, our proposed Quality-Aware Masked Language Model (QA-MLM) is employed to polish the draft towards higher quality in terms of linguistics and literalness. Based on a multi-task learning scheme, QA-MLM is able to determine whether polishing is needed based on the poem draft. Furthermore, QA-MLM is able to localize improper characters in the poem draft and substitute them with newly predicted ones accordingly. Benefiting from the masked language model structure, QA-MLM incorporates global context information into the polishing process, which yields more appropriate polishing results than unidirectional sequential decoding. Moreover, the iterative polishing process is terminated automatically when QA-MLM regards the processed poem as qualified. Both human and automatic evaluations have been conducted, and the results demonstrate that our approach is effective in improving the performance of the encoder-decoder structure.
| 2,019 | Computation and Language |
Deconstructing and reconstructing word embedding algorithms | Uncontextualized word embeddings are reliable feature representations of
words used to obtain high quality results for various NLP applications. Given
the historical success of word embeddings in NLP, we propose a retrospective on
some of the most well-known word embedding algorithms. In this work, we
deconstruct Word2vec, GloVe, and others, into a common form, unveiling some of
the necessary and sufficient conditions required for making performant word
embeddings. We find that each algorithm: (1) fits vector-covector dot products
to approximate pointwise mutual information (PMI); and, (2) modulates the loss
gradient to balance weak and strong signals. We demonstrate that these two
algorithmic features are sufficient conditions to construct a novel word
embedding algorithm, Hilbert-MLE. We find that its embeddings obtain equivalent
or better performance compared with other algorithms across 17 intrinsic and
extrinsic datasets.
| 2,019 | Computation and Language |
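A compact numeric illustration of point (1) in the abstract above: computing the PMI matrix that, per the paper, the vector-covector dot products approximate. The co-occurrence counts are toy values.

```python
import numpy as np

# Toy word-word co-occurrence counts (rows: words, cols: context words).
counts = np.array([[10.0, 2.0, 0.5],
                   [2.0, 8.0, 1.0],
                   [0.5, 1.0, 6.0]])

total = counts.sum()
p_w = counts.sum(axis=1, keepdims=True) / total    # P(w)
p_c = counts.sum(axis=0, keepdims=True) / total    # P(c)
p_wc = counts / total                               # P(w, c)

pmi = np.log(p_wc / (p_w * p_c))                    # PMI(w, c)
print(np.round(pmi, 3))
# In this framing, an embedding algorithm seeks vectors and covectors with
# <w_i, c_j> approximately equal to PMI(i, j), plus gradient modulation for weak signals.
```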
Tag Recommendation by Word-Level Tag Sequence Modeling | In this paper, we transform tag recommendation into a word-based text
generation problem and introduce a sequence-to-sequence model. The model
inherits the advantages of LSTM-based encoder for sequential modeling and
attention-based decoder with local positional encodings for learning relations
globally. Experimental results on Zhihu datasets illustrate that the proposed model outperforms other state-of-the-art text-classification-based methods.
| 2,019 | Computation and Language |
A Hybrid Approach Towards Two Stage Bengali Question Classification
Utilizing Smart Data Balancing Technique | Question classification (QC) is the primary step of the Question Answering
(QA) system. Question Classification (QC) system classifies the questions in
particular classes so that Question Answering (QA) System can provide correct
answers for the questions. Our system categorizes the factoid type questions
asked in natural language after extracting features of the questions. We
present a two stage QC system for Bengali. It utilizes one dimensional
convolutional neural network for classifying questions into coarse classes in
the first stage. Word2vec representation of existing words of the question
corpus have been constructed and used for assisting 1D CNN. A smart data
balancing technique has been employed for giving data hungry convolutional
neural network the advantage of a greater number of effective samples to learn
from. For each coarse class, a separate Stochastic Gradient Descent (SGD) based
classifier has been used in order to differentiate among the finer classes
within that coarse class. TF-IDF representation of each word has been used as
feature for the SGD classifiers implemented as part of second stage
classification. Experiments show the effectiveness of our proposed method for
Bengali question classification.
| 2,020 | Computation and Language |
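A sketch of a second-stage classifier of the kind described in the abstract above: TF-IDF features feeding an SGD classifier for the fine classes within one coarse class. The class names and example questions are invented placeholders (written in English purely for illustration).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Placeholder fine-class training data for one hypothetical coarse class ("numeric").
questions = [
    "How many districts are there in Bangladesh?",
    "What year did the language movement take place?",
    "How far is Dhaka from Chittagong?",
    "What is the population of Sylhet?",
]
fine_labels = ["count", "date", "distance", "count"]

# One such pipeline would be trained per coarse class.
fine_clf = make_pipeline(TfidfVectorizer(), SGDClassifier(loss="hinge", random_state=0))
fine_clf.fit(questions, fine_labels)
print(fine_clf.predict(["How many rivers flow through Khulna?"]))
```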
Integrating Graph Contextualized Knowledge into Pre-trained Language
Models | Complex node interactions are common in knowledge graphs, and these
interactions also contain rich knowledge information. However, traditional
methods usually treat a triple as a training unit during the knowledge
representation learning (KRL) procedure, neglecting contextualized information
of the nodes in knowledge graphs (KGs). We generalize the modeling object to a
very general form, which theoretically supports any subgraph extracted from the
knowledge graph, and these subgraphs are fed into a novel transformer-based
model to learn the knowledge embeddings. To broaden usage scenarios of
knowledge, pre-trained language models are utilized to build a model that
incorporates the learned knowledge representations. Experimental results
demonstrate that our model achieves the state-of-the-art performance on several
medical NLP tasks, and the improvement over TransE indicates that our KRL method
captures the graph contextualized information effectively.
| 2,021 | Computation and Language |
Automatic Creation of Text Corpora for Low-Resource Languages from the
Internet: The Case of Swiss German | This paper presents SwissCrawl, the largest Swiss German text corpus to date.
Composed of more than half a million sentences, it was generated using a
customized web scraping tool that could be applied to other low-resource
languages as well. The approach demonstrates how freely available web pages can
be used to construct comprehensive text corpora, which are of fundamental
importance for natural language processing. In an experimental evaluation, we
show that using the new corpus leads to significant improvements for the task
of language modeling. To capture new content, our approach will run
continuously to keep increasing the corpus over time.
| 2,020 | Computation and Language |
Modeling Fluency and Faithfulness for Diverse Neural Machine Translation | Neural machine translation models usually adopt the teacher forcing strategy
for training which requires the predicted sequence matches ground truth word by
word and forces the probability of each prediction to approach a 0-1
distribution. However, this strategy assigns the entire probability mass to the ground-truth word and ignores other words in the target vocabulary, even when the ground-truth word cannot dominate the distribution. To address the
problem of teacher forcing, we propose a method to introduce an evaluation
module to guide the distribution of the prediction. The evaluation module
accesses each prediction from the perspectives of fluency and faithfulness to
encourage the model to generate the word which has a fluent connection with its
past and future translation and meanwhile tends to form a translation
equivalent in meaning to the source. The experiments on multiple translation
tasks show that our method can achieve significant improvements over strong
baselines.
| 2,019 | Computation and Language |
Neural language modeling of free word order argument structure | Neural language models trained with a predictive or masked objective have
proven successful at capturing short and long distance syntactic dependencies.
Here, we focus on verb argument structure in German, which has the interesting
property that verb arguments may appear in a relatively free order in
subordinate clauses. Therefore, checking that the verb argument structure is
correct cannot be done in a strictly sequential fashion, but rather requires keeping track of the arguments' cases irrespective of their order. We introduce a
new probing methodology based on minimal variation sets and show that both
Transformers and LSTM achieve a score substantially better than chance on this
test. As humans, they also show graded judgments preferring canonical word
orders and plausible case assignments. However, we also found unexpected
discrepancies in the strength of these effects, the LSTMs having difficulties
rejecting ungrammatical sentences containing frequent argument structure types
(double nominatives), and the Transformers tending to overgeneralize, accepting
some infrequent word orders or implausible sentences that humans barely accept.
| 2,021 | Computation and Language |
Topic-aware chatbot using Recurrent Neural Networks and Nonnegative
Matrix Factorization | We propose a novel model for a topic-aware chatbot by combining the
traditional Recurrent Neural Network (RNN) encoder-decoder model with a topic
attention layer based on Nonnegative Matrix Factorization (NMF). After learning
topic vectors from an auxiliary text corpus via NMF, the decoder is trained so
that it is more likely to sample response words from the most correlated topic
vectors. One of the main advantages in our architecture is that the user can
easily switch the NMF-learned topic vectors so that the chatbot obtains desired
topic-awareness. We demonstrate our model by training on a single
conversational data set which is then augmented with topic matrices learned
from different auxiliary data sets. We show that our topic-aware chatbot not
only outperforms the non-topic counterpart, but also that each topic-aware
model qualitatively and contextually gives the most relevant answer depending
on the topic of the question.
| 2,019 | Computation and Language |
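The NMF topic-learning step on an auxiliary corpus, sketched with scikit-learn below; the corpus and number of topics are placeholders, and in the paper the resulting topic vectors would feed the topic attention layer.

```python
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder auxiliary corpus; in the paper this is a separate text collection.
aux_corpus = [
    "the team won the match in the final minute",
    "the election results were announced by the committee",
    "the new phone has a faster processor and better camera",
    "the striker scored two goals in the second half",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(aux_corpus)

nmf = NMF(n_components=2, init="nndsvda", random_state=0)
doc_topics = nmf.fit_transform(X)       # document-topic weights
topic_vectors = nmf.components_         # topic-word matrix, usable by a topic attention layer

vocab = vectorizer.get_feature_names_out()
for t, row in enumerate(topic_vectors):
    top = [vocab[i] for i in row.argsort()[::-1][:4]]
    print(f"topic {t}: {top}")
```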
Semi-supervised Visual Feature Integration for Pre-trained Language
Models | Integrating visual features has proved useful for natural language
understanding tasks. Nevertheless, in most existing multimodal language models,
the alignment of visual and textual data is expensive. In this paper, we
propose a novel semi-supervised visual integration framework for pre-trained
language models. In the framework, the visual features are obtained through a
visualization and fusion mechanism. The uniqueness includes: 1) the integration
is conducted via a semi-supervised approach, which does not require aligned images for every sentence; 2) the visual features are integrated as an external
component and can be directly used by pre-trained language models. To verify
the efficacy of the proposed framework, we conduct the experiments on both
natural language inference and reading comprehension tasks. The results
demonstrate that our mechanism brings improvement to two strong baseline
models. Considering that our framework only requires an image database and no further alignment, it provides an efficient and feasible way for multimodal language learning.
| 2,020 | Computation and Language |
Machines Getting with the Program: Understanding Intent Arguments of
Non-Canonical Directives | Modern dialog managers face the challenge of having to fulfill human-level
conversational skills as part of common user expectations, including but not
limited to discourse with no clear objective. Along with these requirements,
agents are expected to extrapolate intent from the user's dialogue even when
subjected to non-canonical forms of speech. This depends on the agent's
comprehension of paraphrased forms of such utterances. Especially in
low-resource languages, the lack of data is a bottleneck that prevents
advancements of the comprehension performance for these types of agents. In
this regard, here we demonstrate the necessity of extracting the intent
argument of non-canonical directives in a natural language format, which may
yield more accurate parsing, and suggest guidelines for building a parallel
corpus for this purpose. Following the guidelines, we construct a Korean corpus
of 50K instances of question/command-intent pairs, including the labels for
classification of the utterance type. We also propose a method for mitigating
class imbalance, demonstrating the potential applications of the corpus
generation method and its multilingual extensibility.
| 2,020 | Computation and Language |
HSCJN: A Holistic Semantic Constraint Joint Network for Diverse Response
Generation | The sequence-to-sequence (Seq2Seq) model generates target words iteratively
given the previously observed words during decoding process, which results in
the loss of the holistic semantics in the target response and the complete
semantic relationship between responses and dialogue histories. In this paper,
we propose a generic diversity-promoting joint network, called Holistic
Semantic Constraint Joint Network (HSCJN), enhancing the global sentence
information, and then regularizing the objective function by penalizing low-entropy outputs. Our network introduces more target information to improve
diversity, and captures direct semantic information to better constrain the
relevance simultaneously. Moreover, the proposed method can be easily applied
to any Seq2Seq structure. Extensive experiments on several dialogue corpora
show that our method effectively improves both semantic consistency and
diversity of generated responses, and achieves better performance than other
competitive methods.
| 2,020 | Computation and Language |
Deep Human Answer Understanding for Natural Reverse QA | This study focuses on a reverse question answering (QA) procedure, in which
machines proactively raise questions and humans supply the answers. This
procedure exists in many real human-machine interaction applications. However,
a crucial problem in human-machine interaction is answer understanding. The
existing solutions have relied on mandatory option term selection to avoid
automatic answer understanding. However, these solutions have led to unnatural
human-computer interaction and negatively affected user experience. To this
end, the current study proposes a novel deep answer understanding network,
called AntNet, for reverse QA. The network consists of three new modules,
namely, skeleton attention for questions, relevance-aware representation of
answers, and multi-hop based fusion. As answer understanding for reverse QA has
not been explored, a new data corpus is compiled in this study. Experimental
results indicate that our proposed network is significantly better than
existing methods and those modified from classical natural language processing
deep models. The effectiveness of the three new modules is also verified.
| 2,020 | Computation and Language |
Speeding up Word Mover's Distance and its variants via properties of
distances between embeddings | The Word Mover's Distance (WMD) proposed by Kusner et al. is a distance
between documents that takes advantage of semantic relations among words that
are captured by their embeddings. This distance proved to be quite effective,
obtaining state-of-the-art error rates for classification tasks, but is also
impracticable for large collections/documents due to its computational
complexity. For circumventing this problem, variants of WMD have been proposed.
Among them, Relaxed Word Mover's Distance (RWMD) is one of the most successful
due to its simplicity, effectiveness, and also because of its fast
implementations.
Relying on assumptions that are supported by empirical properties of the
distances between embeddings, we propose an approach to speed up both WMD and
RWMD. Experiments over 10 datasets suggest that our approach leads to a
significant speed-up in document classification tasks while maintaining the
same error rates.
| 2,020 | Computation and Language |
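A small numpy sketch of the RWMD relaxation referred to in the abstract above: each word sends all of its mass to the closest word in the other document, yielding a cheap lower bound on WMD. The embeddings and nBOW weights are random stand-ins.

```python
import numpy as np

def rwmd(weights_a, emb_a, weights_b, emb_b):
    """Relaxed Word Mover's Distance: max of the two one-sided relaxations,
    where each word sends its full mass to its nearest counterpart."""
    # Pairwise Euclidean distances between the two documents' word embeddings.
    d = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)
    a_to_b = np.dot(weights_a, d.min(axis=1))   # each word of A -> closest word of B
    b_to_a = np.dot(weights_b, d.min(axis=0))   # each word of B -> closest word of A
    return max(a_to_b, b_to_a)

rng = np.random.default_rng(0)
emb_a, emb_b = rng.normal(size=(4, 50)), rng.normal(size=(6, 50))   # stand-in embeddings
w_a = np.full(4, 1 / 4)                                             # nBOW weights, doc A
w_b = np.full(6, 1 / 6)                                             # nBOW weights, doc B
print(rwmd(w_a, emb_a, w_b, emb_b))
```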
Multi-Scale Self-Attention for Text Classification | In this paper, we introduce the prior knowledge, multi-scale structure, into
self-attention modules. We propose a Multi-Scale Transformer which uses
multi-scale multi-head self-attention to capture features from different
scales. Based on the linguistic perspective and the analysis of pre-trained
Transformer (BERT) on a huge corpus, we further design a strategy to control
the scale distribution for each layer. Results of three different kinds of
tasks (21 datasets) show our Multi-Scale Transformer outperforms the standard
Transformer consistently and significantly on small and moderate size datasets.
| 2,019 | Computation and Language |
Large-scale text processing pipeline with Apache Spark | In this paper, we evaluate Apache Spark for a data-intensive machine learning
problem. Our use case focuses on policy diffusion detection across the state
legislatures in the United States over time. Previous work on policy diffusion
has been unable to make an all-pairs comparison between bills due to
computational intensity. As a substitute, scholars have studied single topic
areas.
We provide an implementation of this analysis workflow as a distributed text
processing pipeline with Spark dataframes and the Scala application programming interface. We discuss the challenges and strategies of unstructured data
processing, data formats for storage and efficient access, and graph processing
at scale.
| 2,016 | Computation and Language |
Merging External Bilingual Pairs into Neural Machine Translation | As neural machine translation (NMT) is not easily amenable to explicit
correction of errors, incorporating pre-specified translations into NMT is
widely regarded as a non-trivial challenge. In this paper, we propose and
explore three methods to endow NMT with pre-specified bilingual pairs. Instead of, for instance, modifying the beam search algorithm during decoding or making complex modifications to the attention mechanism --- mainstream approaches to tackling this challenge --- we experiment with the training data being
appropriately pre-processed to add information about pre-specified
translations. Extra embeddings are also used to distinguish pre-specified
tokens from the other tokens. Extensive experimentation and analysis indicate
that over 99% of the pre-specified phrases are successfully translated (given an 85% baseline) and that there is also a substantive improvement in translation
quality with the methods explored here.
| 2,019 | Computation and Language |
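One plausible realization (not necessarily the authors') of the pre-processing idea described in the abstract above: the pre-specified target phrase is inlined next to its source phrase, and a parallel factor sequence marks ordinary, constrained, and injected tokens so that extra embeddings can distinguish them. The <sep> tag and factor values are invented for illustration.

```python
def inline_constraints(src_tokens, constraints):
    """Append each pre-specified target phrase after its source phrase and emit a
    per-token factor: 0 = ordinary source, 1 = constrained source, 2 = injected target."""
    out_tokens, factors = [], []
    i = 0
    while i < len(src_tokens):
        matched = None
        for src_phrase, tgt_phrase in constraints.items():
            phrase = src_phrase.split()
            if src_tokens[i:i + len(phrase)] == phrase:
                matched = (phrase, tgt_phrase.split())
                break
        if matched:
            src_toks, tgt_toks = matched
            out_tokens += src_toks + ["<sep>"] + tgt_toks
            factors += [1] * len(src_toks) + [2] * (len(tgt_toks) + 1)
            i += len(src_toks)
        else:
            out_tokens.append(src_tokens[i])
            factors.append(0)
            i += 1
    return out_tokens, factors

toks, facs = inline_constraints(
    "the attention mechanism is widely used".split(),
    {"attention mechanism": "mécanisme d'attention"},   # hypothetical constraint pair
)
print(toks)
print(facs)
```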
BLiMP: The Benchmark of Linguistic Minimal Pairs for English | We introduce The Benchmark of Linguistic Minimal Pairs (shortened to BLiMP),
a challenge set for evaluating what language models (LMs) know about major
grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal pairs isolating specific contrasts in syntax,
morphology, or semantics. The data is automatically generated according to
expert-crafted grammars, and aggregate human agreement with the labels is
96.4%. We use it to evaluate n-gram, LSTM, and Transformer (GPT-2 and
Transformer-XL) LMs. We find that state-of-the-art models identify
morphological contrasts reliably, but they struggle with semantic restrictions
on the distribution of quantifiers and negative polarity items and subtle
syntactic phenomena such as extraction islands.
| 2,023 | Computation and Language |
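Minimal pairs of this kind are typically scored by comparing total sentence log-probabilities under an LM; the sketch below does this with GPT-2 via transformers. The example pair is illustrative and not taken from BLiMP.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sentence_logprob(sentence):
    """Total log-probability of a sentence under GPT-2."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss       # mean negative log-likelihood per predicted token
    return -loss.item() * (ids.shape[1] - 1)     # convert back to a summed log-probability

good = "The cats that the dog chased were hungry."
bad = "The cats that the dog chased was hungry."
print(sentence_logprob(good) > sentence_logprob(bad))   # True if the model prefers the grammatical one
```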
GANCoder: An Automatic Natural Language-to-Programming Language
Translation Approach based on GAN | We propose GANCoder, an automatic programming approach based on Generative Adversarial Networks (GAN), which can generate programming language code that is functionally and logically consistent with the given natural language utterances. The adversarial training between the generator and the discriminator helps the generator learn the distribution of the dataset and improves code generation quality. Our experimental results show that GANCoder achieves accuracy comparable to state-of-the-art methods and is more stable across programming languages.
| 2,019 | Computation and Language |
Fiction Sentence Expansion and Enhancement via Focused Objective and
Novelty Curve Sampling | We describe the task of sentence expansion and enhancement, in which a
sentence provided by a human is expanded in some creative way. The expansion
should be understandable, believably grammatical, and optimally
meaning-preserving. Sentence expansion and enhancement may serve as an
authoring tool, or integrate in dynamic media, conversational agents, or
variegated advertising.
We implement a neural sentence expander trained on sentence compressions
generated from a corpus of modern fiction. We modify an MLE objective to
support the task by focusing on new words, and decode at test time with
controlled curve-like novelty sampling. We run our sentence expander on
sentences provided by human subjects and have humans evaluate these expansions.
We show that, although the generation methods are inferior to professional
human writers, they are comparable to, and as well liked as, our subjects'
original input sentences, and preferred over baselines.
| 2,020 | Computation and Language |
SemEval-2017 Task 3: Community Question Answering | We describe SemEval-2017 Task 3 on Community Question Answering. This year,
we reran the four subtasks from SemEval-2016: (A) Question-Comment Similarity, (B) Question-Question Similarity, (C) Question-External Comment
Similarity, and (D) Rerank the correct answers for a new question in Arabic,
providing all the data from 2015 and 2016 for training, and fresh data for
testing. Additionally, we added a new subtask E in order to enable
experimentation with Multi-domain Question Duplicate Detection in a
larger-scale scenario, using StackExchange subforums. A total of 23 teams
participated in the task, and submitted a total of 85 runs (36 primary and 49
contrastive) for subtasks A-D. Unfortunately, no teams participated in subtask
E. A variety of approaches and features were used by the participating systems
to address the different subtasks. The best systems achieved an official score
(MAP) of 88.43, 47.22, 15.46, and 61.16 in subtasks A, B, C, and D,
respectively. These scores are better than the baselines, especially for
subtasks A-C.
| 2,017 | Computation and Language |
SemEval-2017 Task 4: Sentiment Analysis in Twitter | This paper describes the fifth year of the Sentiment Analysis in Twitter
task. SemEval-2017 Task 4 continues with a rerun of the subtasks of
SemEval-2016 Task 4, which include identifying the overall sentiment of the
tweet, sentiment towards a topic with classification on a two-point and on a
five-point ordinal scale, and quantification of the distribution of sentiment
towards a topic across a number of tweets: again on a two-point and on a
five-point ordinal scale. Compared to 2016, we made two changes: (i) we
introduced a new language, Arabic, for all subtasks, and (ii) we made available
information from the profiles of the Twitter users who posted the target
tweets. The task continues to be very popular, with a total of 48 teams
participating this year.
| 2,019 | Computation and Language |
EDA: Enriching Emotional Dialogue Acts using an Ensemble of Neural
Annotators | The recognition of emotion and dialogue acts enriches conversational analysis
and helps to build natural dialogue systems. Emotion interpretation helps us
understand feelings and dialogue acts reflect the intentions and performative
functions in the utterances. However, most of the textual and multi-modal
conversational emotion corpora contain only emotion labels but not dialogue
acts. To address this problem, we propose to use a pool of various recurrent
neural models trained on a dialogue act corpus, with and without context. These
neural models annotate the emotion corpora with dialogue act labels, and an
ensemble annotator extracts the final dialogue act label. We annotated two
accessible multi-modal emotion corpora: IEMOCAP and MELD. We analyzed the
co-occurrence of emotion and dialogue act labels and discovered specific
relations. For example, Accept/Agree dialogue acts often occur with the Joy
emotion, Apology with Sadness, and Thanking with Joy. We make the Emotional
Dialogue Acts (EDA) corpus publicly available to the research community for
further study and analysis.
| 2,020 | Computation and Language |
Low Rank Factorization for Compact Multi-Head Self-Attention | Effective representation learning from text has been an active area of
research in the fields of NLP and text mining. Attention mechanisms have been
at the forefront in order to learn contextual sentence representations. Current
state-of-the-art approaches for many NLP tasks use large pre-trained language
models such as BERT, XLNet and so on for learning representations. These models
are based on the Transformer architecture, which involves repeated blocks of
computation consisting of multi-head self-attention and feedforward networks.
One of the major bottlenecks largely contributing to the computational
complexity of the Transformer models is the self-attention layer, that is both
computationally expensive and parameter intensive. In this work, we introduce a
novel multi-head self-attention mechanism operating on GRUs that is shown to be
computationally cheaper and more parameter efficient than self-attention
mechanism proposed in Transformers for text classification tasks. The
efficiency of our approach mainly stems from two optimizations: 1) we use
low-rank matrix factorization of the affinity matrix to efficiently get
multiple attention distributions instead of having separate parameters for each
head 2) attention scores are obtained by querying a global context vector
instead of densely querying all the words in the sentence. We evaluate the
performance of the proposed model on tasks such as sentiment analysis from
movie reviews, predicting business ratings from reviews and classifying news
articles into topics. We find that the proposed approach matches or outperforms
a series of strong baselines and is more parameter efficient than comparable
multi-head approaches. We also perform qualitative analyses to verify that the
proposed approach is interpretable and captures context-dependent word
importance.
| 2,020 | Computation and Language |
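A rough PyTorch sketch of the second optimization named in the abstract above (attention scores obtained by querying a single global context vector rather than all token pairs), with the heads sharing one low-rank projection; this is only one interpretation of the abstract, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextAttention(nn.Module):
    """Multi-head attention over a sequence where scores come from querying a
    single global context vector, so the cost is O(n) per head instead of O(n^2)."""

    def __init__(self, hidden_dim, num_heads, rank):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, rank, bias=False)        # shared low-rank projection
        self.head_weights = nn.Linear(rank, num_heads, bias=False) # one score per head

    def forward(self, hidden):                                     # hidden: (batch, seq, dim)
        context = hidden.mean(dim=1, keepdim=True)                 # global context vector
        scores = self.head_weights(torch.tanh(self.proj(hidden + context)))
        attn = F.softmax(scores, dim=1)                            # (batch, seq, heads)
        # Weighted sum of hidden states per head -> (batch, heads, dim).
        return torch.einsum("bsh,bsd->bhd", attn, hidden)

x = torch.randn(2, 7, 128)                     # stand-in for a batch of GRU outputs
pooled = GlobalContextAttention(128, num_heads=4, rank=16)(x)
print(pooled.shape)                            # torch.Size([2, 4, 128])
```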
Automatic Generation of Headlines for Online Math Questions | Mathematical equations are an important part of dissemination and
communication of scientific information. Students, however, often feel
challenged in reading and understanding math content and equations. With the
development of the Web, students are posting their math questions online.
Nevertheless, constructing a concise math headline that gives a good
description of the posted detailed math question is nontrivial. In this study,
we explore a novel summarization task denoted as geNerating A concise Math
hEadline from a detailed math question (NAME). Compared to conventional
summarization tasks, this task has two extra and essential constraints: 1)
Detailed math questions consist of text and math equations which require a
unified framework to jointly model textual and mathematical information; 2)
Unlike text, math equations contain semantic and structural features, and both
of them should be captured together. To address these issues, we propose
MathSum, a novel summarization model which utilizes a pointer mechanism
combined with a multi-head attention mechanism for mathematical representation
augmentation. The pointer mechanism can either copy textual tokens or math
tokens from source questions in order to generate math headlines. The
multi-head attention mechanism is designed to enrich the representation of math
equations by modeling and integrating both its semantic and structural
features. For evaluation, we collect and make available two sets of real-world
detailed math questions along with human-written math headlines, namely
EXEQ-300k and OFEQ-10k. Experimental results demonstrate that our model
(MathSum) significantly outperforms state-of-the-art models for both the
EXEQ-300k and OFEQ-10k datasets.
| 2,020 | Computation and Language |