{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:11:18.339663Z"
},
"title": "Bridging the gap between supervised classification and unsupervised topic modelling for social-media assisted crisis management",
"authors": [
{
"first": "Mikael",
"middle": [],
"last": "Brunila",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University / Montreal",
"location": {
"region": "QC",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Rosie",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University / Montreal",
"location": {
"region": "QC",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Andrei",
"middle": [],
"last": "Mircea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University / Montreal",
"location": {
"region": "QC",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Sam",
"middle": [],
"last": "Lumley",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University / Montreal",
"location": {
"region": "QC",
"country": "Canada"
}
},
"email": "[email protected]"
},
{
"first": "Renee",
"middle": [],
"last": "Sieber",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "McGill University / Montreal",
"location": {
"region": "QC",
"country": "Canada"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Social media such as Twitter provide valuable information to crisis managers and affected people during natural disasters. Machine learning can help structure and extract information from the large volume of messages shared during a crisis; however, the constantly evolving nature of crises makes effective domain adaptation essential. Supervised classification is limited by unchangeable class labels that may not be relevant to new events, and unsupervised topic modelling by insufficient prior knowledge. In this paper, we bridge the gap between the two and show that BERT embeddings finetuned on crisis-related tweet classification can effectively be used to adapt to a new crisis, discovering novel topics while preserving relevant classes from supervised training, and leveraging bidirectional self-attention to extract topic keywords. We create a dataset of tweets from a snowstorm to evaluate our method's transferability to new crises, and find that it outperforms traditional topic models in both automatic, and human evaluations grounded in the needs of crisis managers. More broadly, our method can be used for textual domain adaptation where the latent classes are unknown but overlap with known classes from other domains.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Social media such as Twitter provide valuable information to crisis managers and affected people during natural disasters. Machine learning can help structure and extract information from the large volume of messages shared during a crisis; however, the constantly evolving nature of crises makes effective domain adaptation essential. Supervised classification is limited by unchangeable class labels that may not be relevant to new events, and unsupervised topic modelling by insufficient prior knowledge. In this paper, we bridge the gap between the two and show that BERT embeddings finetuned on crisis-related tweet classification can effectively be used to adapt to a new crisis, discovering novel topics while preserving relevant classes from supervised training, and leveraging bidirectional self-attention to extract topic keywords. We create a dataset of tweets from a snowstorm to evaluate our method's transferability to new crises, and find that it outperforms traditional topic models in both automatic, and human evaluations grounded in the needs of crisis managers. More broadly, our method can be used for textual domain adaptation where the latent classes are unknown but overlap with known classes from other domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "As climate change increases the frequency of extreme weather events and the vulnerability of affected people, effective crisis management is becoming increasingly important for mitigating the negative effects of these crises (Keim, 2008) . In the general crisis management literature, social media has been identified as a useful source of information for crisis managers to gauge reactions from and communicate with the public, increase situational awareness, and enable data-driven decision-making (Tobias, 2011; Alexander, 2014; Jin et al., 2014) . * Equal contribution.",
"cite_spans": [
{
"start": 225,
"end": 237,
"text": "(Keim, 2008)",
"ref_id": "BIBREF24"
},
{
"start": 500,
"end": 514,
"text": "(Tobias, 2011;",
"ref_id": "BIBREF56"
},
{
"start": 515,
"end": 531,
"text": "Alexander, 2014;",
"ref_id": "BIBREF1"
},
{
"start": 532,
"end": 549,
"text": "Jin et al., 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Social Media for Crisis Management",
"sec_num": "1.1"
},
{
"text": "The large volume and noise-to-signal ratio of social media platforms such as Twitter makes it difficult to extract actionable information, especially at a rate suited for the urgency of a crisis. This has motivated the application of natural language processing (NLP) techniques to help automatically filter information in real-time as a crisis unfolds (Imran et al., 2013; Emmanouil and Nikolaos, 2015) .",
"cite_spans": [
{
"start": 353,
"end": 373,
"text": "(Imran et al., 2013;",
"ref_id": "BIBREF20"
},
{
"start": 374,
"end": 403,
"text": "Emmanouil and Nikolaos, 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Need For NLP",
"sec_num": "1.2"
},
{
"text": "Other work has investigated the use of NLP models to automatically classify tweets into finergrained categories that can be more salient to crisis managers and affected people in rapidly evolving situations (Ragini and Anand, 2016; Schulz et al., 2014) . Training such classification models typically requires large-scale annotated corpora of crisis-related tweets such as that made available by Imran et al. (2016) , which covers a variety of countries and natural disasters including flooding, tropical storms, earthquakes, and forest fires.",
"cite_spans": [
{
"start": 219,
"end": 231,
"text": "Anand, 2016;",
"ref_id": "BIBREF44"
},
{
"start": 232,
"end": 252,
"text": "Schulz et al., 2014)",
"ref_id": "BIBREF52"
},
{
"start": 396,
"end": 415,
"text": "Imran et al. (2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Need For NLP",
"sec_num": "1.2"
},
{
"text": "Whereas supervised approaches work well for classifying tweets from the same event as their training data, they often fail to generalize to novel events (Nguyen et al., 2017) . A novel event may differ from past events in terms of location, type of event, or event characteristics; all of which can change the relevance of a tweet classification scheme.",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "(Nguyen et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of Current Methods",
"sec_num": "1.3"
},
{
"text": "Various methods of domain adaptation have been suggested for addressing this issue Sopova, 2017; Alrashdi and O'Keefe, 2020) . However, this type of supervised classification assumes that relevant classes remain the same from event to event. Probabilistic topic modelling approaches such as Latent Dirichlet Allocation (LDA) can overcome this limitation and identify novel categorizations (Blei et al., 2003) . Unfortunately, these unsupervised methods are typically difficult to apply to tweets due to issues of document length and non-standard language (Hong and Davison, 2010) .",
"cite_spans": [
{
"start": 83,
"end": 96,
"text": "Sopova, 2017;",
"ref_id": "BIBREF54"
},
{
"start": 97,
"end": 124,
"text": "Alrashdi and O'Keefe, 2020)",
"ref_id": "BIBREF2"
},
{
"start": 389,
"end": 408,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 555,
"end": 579,
"text": "(Hong and Davison, 2010)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of Current Methods",
"sec_num": "1.3"
},
{
"text": "Furthermore, the categorizations produced by these models can be difficult to interpret by humans, limiting their usefulness (Blekanov et al., 2020) .",
"cite_spans": [
{
"start": 125,
"end": 148,
"text": "(Blekanov et al., 2020)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations of Current Methods",
"sec_num": "1.3"
},
{
"text": "To address these issues, we propose a method for the unsupervised clustering of tweets from novel crises, using the representations learned from supervised classification. Specifically, we use the contextual embeddings of a pretrained language model finetuned on crisis tweet classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Contributions",
"sec_num": "1.4"
},
{
"text": "Our method bridges the gap between supervised approaches and unsupervised topic modelling, improving domain adaptation to new crises by allowing classification of tweets in novel topics while preserving relevant classes from supervised training. Our model is robust to idiosyncrasies of tweet texts such as short document length and nonstandard language, and leverages bi-directional selfattention to provide interpretable topic keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Contributions",
"sec_num": "1.4"
},
{
"text": "We assess our approach's transferability to novel crises by creating a dataset of tweets from Winter Storm Jacob, a severe winter storm that hit Newfoundland, Canada in January 2020. This event differs significantly from past crisis tweet classification datasets on which we finetune our model, and allows us to evaluate domain adaptation for novel events. We find that our approach indeed identifies novel topics that are distinct from the labels seen during supervised training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Contributions",
"sec_num": "1.4"
},
{
"text": "In line with human-centered machine learning principles (Ramos et al., 2019) , we also create a novel human evaluation task aligned with the needs of crisis managers. We find that, with high interrater reliability, our model provides consistently more interpretable and useful topic keywords than traditional approaches, while improving cluster coherence as measured by intruder detection. Automated coherence measures further support these findings. Our code and dataset are available at https://github.com/smacawi/bert-topics.",
"cite_spans": [
{
"start": 56,
"end": 76,
"text": "(Ramos et al., 2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Our Contributions",
"sec_num": "1.4"
},
{
"text": "Topic modelling in a variety of domains is widely studied. Although there are many existing approaches in the literature, most innovations are compared to the seminal Latent Dirichlet Allocation (LDA) model (Blei et al., 2003) . LDA is a generative probabilistic model that considers the joint distribution of observed variables (words) and hid-den variables (topics). While LDA has well-known issues with short text, approaches such as Biterm Topic Modelling (BTM) have been developed to address these (Yan et al., 2013) . BTM specifically addresses the sparsity of word co-occurrences: whereas LDA models word-document occurrences, BTM models word co-occurrences ('biterms') across the entire corpus.",
"cite_spans": [
{
"start": 207,
"end": 226,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 503,
"end": 521,
"text": "(Yan et al., 2013)",
"ref_id": "BIBREF60"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Modelling",
"sec_num": "2.1"
},
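{
"text": "To make the contrast with LDA concrete, the following minimal sketch (ours, not from the BTM paper; the tokenization and the window size are illustrative) shows how biterms can be collected from a tokenized corpus:

def extract_biterms(tokenized_docs, window=15):
    # BTM models unordered word pairs ('biterms') that co-occur within a
    # window, pooled across the whole corpus rather than per document.
    biterms = []
    for tokens in tokenized_docs:
        for i in range(len(tokens)):
            for j in range(i + 1, min(i + window, len(tokens))):
                biterms.append(tuple(sorted((tokens[i], tokens[j]))))
    return biterms

print(extract_biterms([['snow', 'power', 'outage']]))
# [('power', 'snow'), ('outage', 'snow'), ('outage', 'power')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Modelling",
"sec_num": "2.1"
},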
{
"text": "Topic modelling can be distinguished from clustering, where documents are usually represented in a multi-dimensional vector space and then grouped using vector similarity measures. These representations are typically formed through matrix factorization techniques (Levy and Goldberg, 2014) that compress words (Mikolov et al., 2013b; Pennington et al., 2014) or sentences (Mikolov et al., 2013a; Lau and Baldwin, 2016) into \"embeddings\". Recently, language representation models building on Transformer type neural networks (Vaswani et al., 2017) have upended much of NLP and provided new, \"contextualized\" embedding approaches (Devlin et al., 2019; Peters et al., 2018) . Among these is the Bidirectional Encoder Representations from Transformers (BERT) model, which is available pretrained on large amounts of text and can be finetuned on many different types of NLP tasks (Devlin et al., 2019) . Embeddings from BERT and other Transformer type language models can potentially serve as a basis for both topic modelling (Bianchi et al., 2020) and clustering , but many questions about their usefulness for these tasks remain open.",
"cite_spans": [
{
"start": 264,
"end": 289,
"text": "(Levy and Goldberg, 2014)",
"ref_id": "BIBREF31"
},
{
"start": 310,
"end": 333,
"text": "(Mikolov et al., 2013b;",
"ref_id": "BIBREF37"
},
{
"start": 334,
"end": 358,
"text": "Pennington et al., 2014)",
"ref_id": "BIBREF42"
},
{
"start": 372,
"end": 395,
"text": "(Mikolov et al., 2013a;",
"ref_id": "BIBREF36"
},
{
"start": 396,
"end": 418,
"text": "Lau and Baldwin, 2016)",
"ref_id": "BIBREF28"
},
{
"start": 524,
"end": 546,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
},
{
"start": 628,
"end": 649,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF11"
},
{
"start": 650,
"end": 670,
"text": "Peters et al., 2018)",
"ref_id": "BIBREF43"
},
{
"start": 875,
"end": 896,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1021,
"end": 1043,
"text": "(Bianchi et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "2.2"
},
{
"text": "The use of embeddings for clustering in the field of crisis management has been explored by Demszky et al. (2019) using trained GloVe embeddings. Zahera et al. (2019) used contextualized word embeddings from BERT to train a classifier for crisis-related tweets on a fixed set of labels. Using BERT to classify tweets in the field of disaster management was also studied by Ma (2019) by aggregating labelled tweets from the CrisisNLP and CrisisLexT26 datasets. While the aforementioned work requires data with a gold standard set of labels, our proposed clustering approach using finetuned BERT embeddings is applied in an unsupervised environment on unseen data -it invites domain expertise to determine an appropriate set of labels specific to the crisis at hand.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "Demszky et al. (2019)",
"ref_id": "BIBREF10"
},
{
"start": 373,
"end": 382,
"text": "Ma (2019)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "2.2"
},
{
"text": "A significant issue when clustering text documents is how the keywords of each cluster or topic are determined. Unlike standard topic models such as LDA, clustering approaches do not jointly model the distributions of keywords over topics and of topics over documents. In other words, clusters do not contain any obvious information about which keywords should represent the clusters as topics. While the previous generation of embedding models has been leveraged for interpretable linguistic analysis in a wide variety of settings (Garg et al., 2018; Hamilton et al., 2016; Kozlowski et al., 2019) , the interpretability of language models in general and BERT in particular remains a contested issue (Rogers et al., 2020) .",
"cite_spans": [
{
"start": 532,
"end": 551,
"text": "(Garg et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 552,
"end": 574,
"text": "Hamilton et al., 2016;",
"ref_id": "BIBREF18"
},
{
"start": 575,
"end": 598,
"text": "Kozlowski et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 701,
"end": 722,
"text": "(Rogers et al., 2020)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Keyword Extraction",
"sec_num": "2.3"
},
{
"text": "One promising line of research has been on the attention mechanism of Transformer models (Bahdanau et al., 2015; Jain and Wallace, 2019) . Clark et al. (2019) found that the attention heads of BERT contained a significant amount of syntactic and grammatical information, and Lin et al. (2019) concluded that this information is hierarchical, similar to syntactic tree structures. Kovaleva et al. (2019) noted that different attention heads often carry overlapping and redundant information. However, if attention is to be useful for selecting topic keywords, the crucial question is whether it captures semantic information. Jain and Wallace (2019) found that attention heads generally correlated poorly with traditional measures for feature importance in neural networks, such as gradients, while Serrano and Smith (2019) showed that attention can \"noisily\" predict the importance of features for overall model performance and Wiegreffe and Pinter (2019) argued that attention can serve plausible, although not faithful explanations of models. To the best of our knowledge, there is no previous work on leveraging attention to improve topic modelling interpretability.",
"cite_spans": [
{
"start": 89,
"end": 112,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 113,
"end": 136,
"text": "Jain and Wallace, 2019)",
"ref_id": "BIBREF22"
},
{
"start": 139,
"end": 158,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 275,
"end": 292,
"text": "Lin et al. (2019)",
"ref_id": "BIBREF33"
},
{
"start": 380,
"end": 402,
"text": "Kovaleva et al. (2019)",
"ref_id": "BIBREF26"
},
{
"start": 625,
"end": 648,
"text": "Jain and Wallace (2019)",
"ref_id": "BIBREF22"
},
{
"start": 798,
"end": 822,
"text": "Serrano and Smith (2019)",
"ref_id": "BIBREF53"
},
{
"start": 928,
"end": 955,
"text": "Wiegreffe and Pinter (2019)",
"ref_id": "BIBREF59"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Topic Keyword Extraction",
"sec_num": "2.3"
},
{
"text": "Whether based on clustering or probabilistic approaches, topic models are typically evaluated by their coherence. While human evaluation is preferable, several automated methods have been proposed to emulate that of human performance (Lau et al., 2014) . In both cases, coherence can be thought of formally as a measure of the extent to which keywords in a topic relate to each other as a set of semantically coherent facts (R\u00f6der et al., 2015; Aletras and Stevenson, 2013; Mimno et al., 2011) . If a word states a fact, then the coherence of a set of topic keywords can be measured by computing how strongly a word W is confirmed by a conditioning set of words W * . This can be done either directly where W is a word in a set of topic words and W * are the other words in the same set (e.g. Mimno et al., 2011) , or indirectly by computing context vectors for both W and W * and then comparing these (e.g. Aletras and Stevenson, 2013) . In a comprehensive comparison of different coherence measures, R\u00f6der et al. (2015) found that, when comparing the coherence scores assigned by humans to a set of topics against a large number of automated metrics, indirect confirmation measures tend to result in a higher correlation between human and automated coherence scores.",
"cite_spans": [
{
"start": 234,
"end": 252,
"text": "(Lau et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 424,
"end": 444,
"text": "(R\u00f6der et al., 2015;",
"ref_id": "BIBREF51"
},
{
"start": 445,
"end": 473,
"text": "Aletras and Stevenson, 2013;",
"ref_id": "BIBREF0"
},
{
"start": 474,
"end": 493,
"text": "Mimno et al., 2011)",
"ref_id": "BIBREF38"
},
{
"start": 793,
"end": 812,
"text": "Mimno et al., 2011)",
"ref_id": "BIBREF38"
},
{
"start": 908,
"end": 936,
"text": "Aletras and Stevenson, 2013)",
"ref_id": "BIBREF0"
},
{
"start": 1002,
"end": 1021,
"text": "R\u00f6der et al. (2015)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Coherence",
"sec_num": "2.4"
},
{
"text": "While automated coherence measures can be used to rapidly evaluate topic models such as LDA, studies have shown that these metrics can be uncorrelated -or negatively correlated -to human interpretability judgements. Chang et al. (2009) demonstrated that results given by perplexity measures differed from that of their proposed 'intrusion' tasks, where humans identify spurious words inserted in a topic, and mismatched topics assigned to a document. Tasks have been formulated in previous works to meaningfully enable human judgement when analyzing the topics (Chuang et al., 2013; Lee et al., 2017) .",
"cite_spans": [
{
"start": 216,
"end": 235,
"text": "Chang et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 561,
"end": 582,
"text": "(Chuang et al., 2013;",
"ref_id": "BIBREF8"
},
{
"start": 583,
"end": 600,
"text": "Lee et al., 2017)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Topic Models",
"sec_num": "2.5"
},
{
"text": "The latent topics given by these methods should provide a semantically meaningful decomposition of a given corpus. Formalizing the quality of the resulting latent topics via qualitative tasks or quantitative metrics is even less straightforward in an applied setting, where it is particularly important that the semantic meaning underlying the topic model is relevant to its users. Due to our focus on the comparison between the labels in the CrisisNLP dataset and novel topics discovered by our model, we restrict this part of the analysis to nine topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Topic Models",
"sec_num": "2.5"
},
{
"text": "Our work is motivated by structuring crude textual data for practical use by crisis managers, guiding corpus exploration and efficient information retrieval. It remains difficult to verify whether the latent space discovered by topic models is both interpretable and useful without a gold standard set of labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation of Topic Models",
"sec_num": "2.5"
},
{
"text": "In this section we describe the dataset of snowstorm-related tweets we created to evaluate our model's ability to discover novel topics and transfer to unseen crisis events. We then outline the process by which our model learns to extract crisis-relevant embeddings from tweets, and clusters them into novel topics from which it then extracts interpretable keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "3"
},
{
"text": "On January 18 2020, Winter Storm Jacob hit Newfoundland, Canada. As a result of the high winds and severe snowfall, 21,000 homes were left without power. A state of emergency was declared in the province as snowdrifts as high as 15 feet (4.6 m) trapped people indoors (Erdman, 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Snowstorm Dataset",
"sec_num": "3.1"
},
{
"text": "Following the Newfoundland Snowstorm, we collected 21,797 unique tweets from 8,471 users between January 17 and January 22 using the Twitter standard search API with the following search terms: #nlwhiteout, #nlweather, #Newfoundland, #nlblizzard2020, #NLStorm2020, #snowmaggedon2020, #stormageddon2020, #Snowpocalypse2020, #Snowmageddon, #nlstorm, #nltraffic, #NLwx, #NLblizzard. Based on past experience with the Twitter API, we opted to use hashtags to limit irrelevant tweets (e.g. searching for blizzard resulted in half the collected tweets being about the video game company with the same name). We filter retweets to only capture unique tweets and better work within API rate limits. We make the dataset publicly available with our code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Snowstorm Dataset",
"sec_num": "3.1"
},
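{
"text": "A minimal sketch of the retweet filtering and deduplication step (the 'retweeted_status' and 'id' fields come from the v1.1 standard search payload; the helper itself is ours):

def keep_unique_originals(tweets):
    # Drop retweets ('retweeted_status' is only set on retweets) and keep
    # one tweet per id, approximating the filtering described above.
    seen, unique = set(), []
    for tw in tweets:
        if 'retweeted_status' in tw or tw['id'] in seen:
            continue
        seen.add(tw['id'])
        unique.append(tw)
    return unique

demo = [{'id': 1, 'text': 'Snow everywhere #NLwx'},
        {'id': 2, 'text': 'RT ...', 'retweeted_status': {'id': 1}},
        {'id': 1, 'text': 'Snow everywhere #NLwx'}]
print(len(keep_unique_originals(demo)))  # 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Snowstorm Dataset",
"sec_num": "3.1"
},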
{
"text": "Our proposed approach, Finetuned Tweet Embeddings (FTE) involves training a model with bidirectional self-attention such as BERT (Devlin et al., 2019) to generate embeddings for tweets so that these can be clustered using common off-theshelf algorithms such as K-Means (Lloyd, 1982; Elkan, 2003) . We then combine activations from the model's attention layers with Term frequency-Inverse document frequency (Tf-Idf) to identify keywords for each cluster and improve model interpretability.",
"cite_spans": [
{
"start": 129,
"end": 150,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 269,
"end": 282,
"text": "(Lloyd, 1982;",
"ref_id": "BIBREF34"
},
{
"start": 283,
"end": 295,
"text": "Elkan, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuned Tweet Embeddings Model",
"sec_num": "3.2"
},
{
"text": "To build a model that extracts tweet embeddings containing information relevant to crisis management, we finetune a pretrained BERT language representation model on classifying tweets from various crisis events. Similar to Romascanu et al. (2020) , we finetune on CrisisNLP, a dataset of Crowdflower-labeled tweets from various types of crises, aimed at crisis managers (Imran et al., 2016) . These show a significant class imbalance with large amounts of tweets shunted into uninformative categories such as Other useful information, further motivating the need for unsupervised topic discovery. CrisisNLP label descriptions and counts are included in Appendix A for context. To address the issue of class imbalance, we create a random stratified train-validation split of 0.8 across the datasets, preserving the same proportions of labels.",
"cite_spans": [
{
"start": 223,
"end": 246,
"text": "Romascanu et al. (2020)",
"ref_id": "BIBREF49"
},
{
"start": 370,
"end": 390,
"text": "(Imran et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Finetuning",
"sec_num": "3.2.1"
},
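{
"text": "A sketch of the stratified split (the toy texts and labels stand in for the CrisisNLP data; scikit-learn handles the per-class proportions):

from sklearn.model_selection import train_test_split

texts = ['need water', 'road closed', 'stay safe', 'house damaged'] * 10
labels = ['donations', 'infrastructure', 'caution', 'infrastructure'] * 10

# Stratifying on the labels keeps the same label proportions in the
# 0.8 train / 0.2 validation split.
train_x, val_x, train_y, val_y = train_test_split(
    texts, labels, train_size=0.8, stratify=labels, random_state=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Finetuning",
"sec_num": "3.2.1"
},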
{
"text": "To finetune BERT, we add a dropout layer and a linear classification layer on top of the bert-base-uncased model, using the 768dimensional [CLS] last hidden state as input to our classifier. We train the model using the Adam optimizer with the default fixed weight decay, and a batch size of four over a single epoch. Our model obtains an accuracy of 0.78 on the withheld validation dataset. One advantage of BERT is its subword tokenization which can dynamically build representations of out-of-vocabulary words from subwords, allowing a robust handling of the nonstandard language found in tweets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Finetuning",
"sec_num": "3.2.1"
},
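{
"text": "A minimal sketch of this architecture using HuggingFace transformers and PyTorch (the dropout probability and learning rate are assumptions; the rest follows the description above):

import torch
from torch import nn
from transformers import BertModel

class CrisisClassifier(nn.Module):
    # bert-base-uncased with a dropout layer and a linear classification
    # layer, fed by the 768-dimensional [CLS] last hidden state.
    def __init__(self, num_labels=9, dropout=0.1):  # dropout prob. assumed
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(768, num_labels)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] last hidden state
        return self.classifier(self.dropout(cls))

model = CrisisClassifier()
# 'Adam with the default fixed weight decay' corresponds to AdamW;
# the learning rate is an assumption. Batch size 4, one epoch.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Finetuning",
"sec_num": "3.2.1"
},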
{
"text": "Once the model is trained, we use a mean-pooling layer across the last hidden states to generate a tweet embedding, similar to Reimers and Gurevych (2019) who found mean-pooling to work best for semantic similarity tasks. Whereas the hidden state for the [CLS] token contains sufficient information to separate tweets between the different Cri-sisNLP labels, the hidden states for the other tokens in the tweet allow our model to capture token-level information that can help in identifying novel topics beyond the supervised labels. For example, tweets that use similar words -even those not occurring in the CrisisNLP dataset -will have more similar embeddings and thus be more likely to cluster in the same topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweet Embedding",
"sec_num": "3.2.2"
},
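{
"text": "A sketch of the pooling step (ignoring padding positions via the attention mask is our assumed implementation detail):

import torch

def tweet_embedding(last_hidden_state, attention_mask):
    # Mean-pool the last hidden states into one 768-d vector per tweet.
    mask = attention_mask.unsqueeze(-1).float()     # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)  # (batch, 768)
    counts = mask.sum(dim=1).clamp(min=1e-9)        # real tokens per tweet
    return summed / counts

h = torch.randn(2, 8, 768)  # toy batch: 2 tweets, 8 positions
m = torch.tensor([[1, 1, 1, 1, 0, 0, 0, 0],
                  [1, 1, 1, 1, 1, 1, 1, 0]])
print(tweet_embedding(h, m).shape)  # torch.Size([2, 768])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tweet Embedding",
"sec_num": "3.2.2"
},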
{
"text": "Given the embeddings for each tweet, we apply an optimized version of the K-Means clustering algorithm to find our candidate topics (Elkan, 2003) . We use the K-Means implementation in Sklearn with the default 'k-means++' initialization and n_int equal to 10.",
"cite_spans": [
{
"start": 132,
"end": 145,
"text": "(Elkan, 2003)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "3.2.3"
},
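{
"text": "The clustering step as a sketch (the random array stands in for the pooled tweet embeddings):

import numpy as np
from sklearn.cluster import KMeans

embeddings = np.random.rand(200, 768)  # stand-in for tweet embeddings

km = KMeans(n_clusters=9, init='k-means++', n_init=10, random_state=0)
topic_ids = km.fit_predict(embeddings)  # one candidate-topic id per tweet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Clustering",
"sec_num": "3.2.3"
},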
{
"text": "To extract keywords from topics generated by our model and ensure their interpretability, we experiment with two approaches and their combination.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},
{
"text": "We first identify relevant keywords for each cluster using Tf-Idf (Sp\u00e4rck Jones, 2004) , combining each cluster into one document to address the issue of low term frequencies in short-text tweets. During automatic evaluation, we perform a comprehensive grid search over Tf-Idf and other hyperparameters:",
"cite_spans": [
{
"start": 74,
"end": 86,
"text": "Jones, 2004)",
"ref_id": "BIBREF55"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},
{
"text": "1. maximum document frequency (mdf ) between 0.6 and 1.0 with intervals of 0.1 (to ignore snowstorm related terms common to many clusters); 2. sublinear Tf-Idf (shown to be advantageous by Paltoglou and Thelwall (2010)); 3. phrasing (grouping of frequently co-occurring words proposed by Mikolov et al. (2013b)) We find an mdf of 0.6 and sublinear Tf-Idf perform best for FTE with our number of topics, and we use these hyperparameters in our experiments. Phrasing makes no significant difference and we only include unigrams in our keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},
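{
"text": "A sketch of the Tf-Idf step with the hyperparameters found above (the toy clusters stand in for the real ones; each cluster is joined into one pseudo-document):

from sklearn.feature_extraction.text import TfidfVectorizer

clusters = {0: ['power out downtown', 'outage on water street'],
            1: ['sending prayers', 'thinking of everyone', 'stay warm']}

docs = [' '.join(tweets) for tweets in clusters.values()]  # one doc per cluster
vec = TfidfVectorizer(max_df=0.6, sublinear_tf=True)       # mdf = 0.6, sublinear Tf
X = vec.fit_transform(docs).toarray()

terms = vec.get_feature_names_out()
for cid, row in zip(clusters, X):
    print(cid, [terms[i] for i in row.argsort()[::-1][:10]])  # top-10 keywords",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},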
{
"text": "However, Tf-Idf only uses frequency and does not leverage the crisis-related knowledge learned during finetuning. Based on the observation by Clark et al. (2019) , we use BERT's last layer of attention for the [CLS] token to identify keywords that are important for classifying tweets along crisis management related labels. For each cluster, we score keywords by summing their attention values (averaged across subwords) across tweets where they occur, better capturing the relevance of a keyword to crisis management.",
"cite_spans": [
{
"start": 142,
"end": 161,
"text": "Clark et al. (2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},
{
"text": "We also experiment with the combination of Tf-Idf and attention by multiplying the two scores for each token, allowing us to down-weight frequent but irrelevant words and up-weight rarer but relevant words. For all three approaches, we drop stopwords, hashtags, special characters, and URLs based on preliminary experiments that found these contributing substantially to noise in the topic keywords.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},
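{
"text": "A sketch of the attention-based scoring and the combined score (the vanilla model below stands in for our finetuned one; the WordPiece merging details are our assumptions):

import torch
from collections import defaultdict
from transformers import BertModel, BertTokenizerFast

tok = BertTokenizerFast.from_pretrained('bert-base-uncased')
bert = BertModel.from_pretrained('bert-base-uncased')
bert.eval()

def attention_scores(tweets):
    # Sum, over a cluster's tweets, the last-layer [CLS] attention mass each
    # word receives, averaging attention over a word's subword pieces.
    scores = defaultdict(float)
    for tweet in tweets:
        enc = tok(tweet, return_tensors='pt')
        with torch.no_grad():
            out = bert(**enc, output_attentions=True)
        # last layer: (heads, seq, seq) -> average heads, take the [CLS] row
        att = out.attentions[-1][0].mean(dim=0)[0].tolist()
        word, vals = '', []
        for piece, a in zip(enc.tokens(), att):
            if piece in ('[CLS]', '[SEP]'):
                continue
            if piece.startswith('##'):  # continuation of the previous word
                word += piece[2:]
                vals.append(a)
            else:
                if word:
                    scores[word] += sum(vals) / len(vals)
                word, vals = piece, [a]
        if word:
            scores[word] += sum(vals) / len(vals)
    return scores

att = attention_scores(['power outage downtown, stay safe'])
# Combined score: multiply with the Tf-Idf scores from the previous step:
# combined = {w: tfidf.get(w, 0.0) * a for w, a in att.items()}
print(sorted(att.items(), key=lambda kv: -kv[1])[:10])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Keyword Extraction",
"sec_num": "3.2.4"
},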
{
"text": "In this section we describe the baselines, as well as the automatic and human evaluations used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "We report on two standard topic modelling techniques: BTM and LDA, for which we train models ranging from five to fifteen topics under 10 passes and 100 iterations, following the work of Blei et al. (2003) . For LDA, we generate clusters of tweets by giving a weighted topic assignment to each word present in a given document according to the topic distribution over all words present in the corpus. Clusters can be generated similarly with BTM, but according to the topic distribution over biterms (with a window size of 15).",
"cite_spans": [
{
"start": 187,
"end": 205,
"text": "Blei et al. (2003)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},
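{
"text": "A sketch of the LDA baseline configuration using gensim (the toy corpus and preprocessing are illustrative):

from gensim.corpora import Dictionary
from gensim.models import LdaModel

tokenized = [['snow', 'power', 'outage'], ['stay', 'safe', 'snow'],
             ['roads', 'closed', 'power']]
dictionary = Dictionary(tokenized)
corpus = [dictionary.doc2bow(doc) for doc in tokenized]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=9,
               passes=10, iterations=100, random_state=0)
print(lda.show_topics(num_topics=9, num_words=10))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},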
{
"text": "We also report on BERT, which is simply our method without the finetuning step, i.e. using vanilla pretrained BERT embeddings with K-Means clustering and keyword extraction based on Tf-Idf and attention. For further comparison with the FTE model, we focus our analysis on trained baselines with nine topics, the number of labels in the CrisisNLP dataset (Imran et al., 2016) .",
"cite_spans": [
{
"start": 354,
"end": 374,
"text": "(Imran et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.1"
},
{
"text": "To evaluate topics we calculate C N P M I and C V , two topic coherence metrics based on direct and indirect confirmation respectively (R\u00f6der et al., 2015) . These are described in Appendix B. For subsequent human evaluation, we select the configuration ( \u00a73.2.4) of each model with the highest C V for nine topics. This allows us to directly compare with the nine labels from the CrisisNLP dataset and assess our model's ability to learn novel topics. We focus on C V as R\u00f6der et al. (2015) found it to correlate best with human judgements (in contrast to U M ASS, another commonly used coherence metric that was not included in our analysis due to poor correlation with human judgements).",
"cite_spans": [
{
"start": 135,
"end": 155,
"text": "(R\u00f6der et al., 2015)",
"ref_id": "BIBREF51"
},
{
"start": 472,
"end": 491,
"text": "R\u00f6der et al. (2015)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.2"
},
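{
"text": "A sketch of the automated evaluation using gensim's CoherenceModel (the topics and tokenized tweets are toy stand-ins for the model outputs and corpus):

from gensim.corpora import Dictionary
from gensim.models import CoherenceModel

tokenized = [['snow', 'power', 'outage'], ['stay', 'safe', 'snow', 'power']]
topics = [['snow', 'power'], ['stay', 'safe']]  # top keywords per topic

dictionary = Dictionary(tokenized)
for metric in ('c_v', 'c_npmi'):
    cm = CoherenceModel(topics=topics, texts=tokenized,
                        dictionary=dictionary, coherence=metric)
    print(metric, cm.get_coherence())",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Evaluation",
"sec_num": "4.2"
},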
{
"text": "We performed anonymous evaluation through four annotators 1 . Since these models are primarily for use by crisis managers, we aim to concentrate our evaluation from the perspective of annotators working in the field. Specifically, we propose two evaluation methods focused on (1) topic keywords and (2) document clustering within topics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "4.3"
},
{
"text": "To assess the quality of topic keywords, annotators were presented with the top 10 keywords for each topic (Table 1 ) and asked to assign an interpretability score and a usefulness score on a threepoint scale. Following the criteria of Rosner et al. (2013), we define interpretability as good (eight to ten words are related to each other), neutral (four to seven words are related), or bad (at most three words are related). Usefulness in turn considers the ease of assigning a short label to describe a topic based on its keywords, similar to Newman et al. (2010) except we further require the label should be useful for crisis managers. We score usefulness on a three-point scale: useful, average, or useless.",
"cite_spans": [],
"ref_spans": [
{
"start": 107,
"end": 115,
"text": "(Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Keyword Evaluation",
"sec_num": "4.3.1"
},
{
"text": "The second task assesses -from the perspective of a crisis manager -the interpretability and usefulness of the actual documents clustered within a topic, instead of only analyzing topic keywords as done in previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Evaluation",
"sec_num": "4.3.2"
},
{
"text": "Given an anonymized model, for each topic we sample 10 sets of four documents within its cluster along with one document -the 'intruder'outside of that topic. For each set of documents, all four annotators were tasked with identifying the intruder from the sample of five documents, as well as assigning an interpretability score and a usefulness score to each sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Evaluation",
"sec_num": "4.3.2"
},
{
"text": "The task of intrusion detection is a variation of Chang et al. (2009) . However, instead of intruder topics or topic words, we found that assessing intruder tweets would give us a better sense of the differences in the clusters produced by our models. Participants were also given the option of labeling the intruder as 'unsure' to discourage guessing.",
"cite_spans": [
{
"start": 50,
"end": 69,
"text": "Chang et al. (2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Evaluation",
"sec_num": "4.3.2"
},
{
"text": "The interpretability score was graded on a threepoint scale: good (3-4 tweets seem to be part of a coherent topic beyond \"snowstorm\"), neutral, and bad (no tweets seem to be part of a coherent topic beyond \"snowstorm\"). The cluster usefulness score was similar to the keyword usefulness score, but formulated as a less ambiguous binary assignment of useful or useless for crisis managers wanting to filter information during a crisis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster Evaluation",
"sec_num": "4.3.2"
},
{
"text": "While our human cluster evaluation provides a good estimate of how topic clusters appear to humans, it does not necessarily establish the difference between two models' document clusters due to the random sampling involved. In other words, two models may have different cluster evaluation results, but similar topic clusters. We define 'agreement' as a measure of the overlap between two unsupervised classification models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "4.4"
},
{
"text": "Given a model A, its agreement Agr A with a model B is",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "4.4"
},
{
"text": "Agr A (B) = N i=0 max j p(A i , B j ) N (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "4.4"
},
{
"text": "where A i is the set of documents in the i th cluster of A and p(A i , B j ) is the proportion of documents in A i that are also in B j . We further define model agreement between A and B as the average of Agr A (B) and Agr B (A). Figure 1 shows that combining attention and Tf-Idf produces the highest automated C V coherence scores for our method, across a range of topic numbers. However, the improvement of adding attention is marginal and attention alone performed much worse, suggesting it is suboptimal for identifying keywords. Recent work by Kobayashi et al. (2020) proposes a norm-based analysis which may improve upon this. Figure 2 shows that FTE significantly outperforms the LDA and BTM baselines, with similar scores to the BERT baseline. We observed similar trends for C N P M I . However, despite BERT's high C V scores, we found that the topics it generated were of very low quality, as described below.",
"cite_spans": [
{
"start": 551,
"end": 574,
"text": "Kobayashi et al. (2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 231,
"end": 239,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 635,
"end": 643,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "4.4"
},
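{
"text": "A sketch of the agreement measure of Equation 1 (clusters represented as sets of document ids; the toy clusterings are illustrative):

def agreement(A, B):
    # A, B: lists of sets, each holding the document ids of one cluster.
    def agr(X, Y):
        # For each cluster of X, the best overlap proportion with any
        # cluster of Y, averaged over X's clusters (Equation 1).
        return sum(max(len(x & y) / len(x) for y in Y) for x in X) / len(X)
    return (agr(A, B) + agr(B, A)) / 2  # symmetrized model agreement

A = [{0, 1, 2}, {3, 4}]
B = [{0, 1}, {2, 3, 4}]
print(agreement(A, B))  # 0.833...",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "4.4"
},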
{
"text": "Topic keywords are shown in Table 1 . By restricting the number of topics to the number of labels in the CrisisNLP dataset, we were able to ask our annotators to identify overlap with these original labels. Conversely, this allows us to show that our approach indeed bridges the gap between supervised classification and unsupervised topic modelling by identifying novel topics in addition to salient topics from supervised training.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 35,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Qualitative Analysis of Keywords",
"sec_num": "5.2"
},
{
"text": "Annotators identified overlap between generated topics and relevant classes from the CrisisNLP dataset: Topic 4 capturing donation needs and volunteering services, Topic 5 expressing sympathy Topic Model 1 2 3 4 5 6 7 8 9 FTE reporting ivyparkxadidas outage assistance prayer blowingsnow trapped monster bread monster mood campus assist praying alert stranded meteorologist song snowiest song widening troop pray advisory hydrant drifting coffee recorded blackswan advisory volunteer wish caution ambulance perspective milk peak le reported providing wishing advised dead stormofthecentury feelin temperature snowdoor impassable relief humanity stormsurge garbage mood pin cloudy perspective remaining aid brave wreckhouse rescue snowdrift enjoying and emotional support, Topic 7 covering missing or trapped people, and Topic 2 seemingly covering unrelated information. Distinct novel topics were also identified in the meteorological information in Topic 1, and information about power outages and closures in Topic 3. Topics 8 and 9 were less clear to annotators, but the former seemed to carry information about how extreme the storm was thought to be and the latter about citizens bundling up indoors with different foods and activities.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 823,
"text": "Topic Model 1 2 3 4 5 6 7 8 9 FTE reporting ivyparkxadidas outage assistance prayer blowingsnow trapped monster bread monster mood campus assist praying alert stranded meteorologist song snowiest song widening troop pray advisory hydrant drifting coffee recorded blackswan advisory volunteer wish caution ambulance perspective milk peak le reported providing wishing advised dead stormofthecentury feelin temperature snowdoor impassable relief humanity stormsurge garbage mood pin cloudy perspective remaining aid brave wreckhouse rescue snowdrift enjoying",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "FTE Keywords",
"sec_num": "5.2.1"
},
{
"text": "The topics in BTM were less semantically meaningful to annotators, although they found interesting topics there as well, with Topic 5 showing information about the need to stockpile provisions, Topic 7 relating to traffic conditions, and Topic 8 potentially providing information about a state of emergency and closed businesses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTM Keywords",
"sec_num": "5.2.2"
},
{
"text": "The topics in BERT were largely incoherent for annotators, with the exceptions being Topic 9 (positive sentiment) and Topic 5 (services and advisories). This is in stark contrast to the large automated coherence scores obtained by this method, indicating the importance of pairing automatic evaluation with human evaluation. Table 3 : Cluster Evaluation scores averaged across topics, number of topics with average scores greater than 0.5, and inter-rater agreements (Fleiss' \u03ba).",
"cite_spans": [],
"ref_spans": [
{
"start": 325,
"end": 332,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "BERT Keywords",
"sec_num": "5.2.3"
},
{
"text": "In Figure 3 , we compare topic-level results for keyword evaluations (averaged across annotators), and for cluster evaluations (averaged across samples and annotators). We summarize these results for BTM and FTE in Table 2 and Table 3 , leaving out LDA and BERT due to significantly lower scores. We find that our method outperforms the various baselines on both the keyword and cluster evaluations. In particular, the improvement over BERT further confirms the importance of the finetuning step in our method. In contrast, the improvements in average scores over BTM reported in Table 3 are marginal. Nevertheless, we find that the number of interpretable and useful topic clusters was greater for our approach. Indeed, while the BTM baseline had more semiinterpretable (i.e. only a subset of the sampled tweets seemed related) but non-useful topics, our method had a much clearer distinction between interpretable/useful and non-interpretable/non-useful topics, suggesting that tweets marked as hard to interpret and not useful are consistently irrelevant. This may be preferable for downstream applications, as it allows users to better filter our irrelevant content.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": null
},
{
"start": 215,
"end": 234,
"text": "Table 2 and Table 3",
"ref_id": "TABREF3"
},
{
"start": 580,
"end": 587,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.3"
},
{
"text": "The annotators also identified intruder tweets in topic samples from FTE more reliably and with less uncertainty, as measured by the number of correct intruders predicted and the number of times an intruder could not be predicted. Interestingly, BTM topics rated for high interpretability had lower rates of correct intruder detection, suggesting that these topics may seem misleadingly coherent to annotators. Inter-rater agreements as measured by Fleiss' \u03ba further confirm that annotators more often dis-agreed on intruder prediction and interpretability scoring for BTM topics. This is undesirable for downstream applications, where poor interpretability of topics can lead to a misinterpretation of data with real negative consequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human evaluation",
"sec_num": "5.3"
},
{
"text": "Agreement for the models was 32.7% between FTE and BERT, 26.5% between FTE and BTM, 19.9% between FTE and LDA and 20.7% between BTM and LDA. This confirms that the different models also generate different clusters.",
"cite_spans": [
{
"start": 43,
"end": 56,
"text": "FTE and BERT,",
"ref_id": null
},
{
"start": 57,
"end": 83,
"text": "26.5% between FTE and BTM,",
"ref_id": null
},
{
"start": 84,
"end": 140,
"text": "19.9% between FTE and LDA and 20.7% between BTM and LDA.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model agreement",
"sec_num": "5.4"
},
{
"text": "This paper introduces a novel approach for extracting useful information from social media and assisting crisis managers. We propose a simple method that bridges the gap between supervised classification and unsupervised topic modelling to address the issue of domain adaptation to novel crises.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our model (FTE, Finetuned Tweet Embeddings) incorporates crisis-related knowledge from supervised finetuning while also being able to generate salient topics for novel crises. To evaluate domain adaptation, we create a dataset of tweets from a crisis that significantly differs from existing crisis Twitter datasets: the 2020 Winter Storm Jacob.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our paper also introduces human evaluation methods better aligned with downstream use cases of topic modelling in crisis management, emphasizing human-centered machine learning. In both these human evaluations and traditional automatic evaluations, our method outperforms existing topic modelling methods, consistently producing more coherent, interpretable and useful topics for crisis managers. Interestingly, our annotators reported that several coherent topics seemed to be composed of related subtopics. In future work, the number of topics as a hyper-parameter could be explored to see if our approach captures these salient subtopics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Our method, while simple, is not specific to crisis management and can be more generally used for textual domain adaptation problems where the latent classes are unknown but likely to overlap with known classes from other domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Here we include class distributions (Table 4) and descriptions (Table 5) (Imran et al., 2016) B Automated Coherence Metrics",
"cite_spans": [
{
"start": 73,
"end": 93,
"text": "(Imran et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 36,
"end": 45,
"text": "(Table 4)",
"ref_id": "TABREF5"
},
{
"start": 63,
"end": 72,
"text": "(Table 5)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "The direct confirmation C N P M I , uses a token-bytoken ten word sliding window, where each step determines a new virtual document. Co-occurrence in these documents is used to compute the normalized pointwise mutual information (NPMI) between a given topic keyword W and each member in the conditioning set of other topic keywords W * , such that:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "N P M I = ( P M I(W , W * ) \u2212log(P (W , W * ) + ) ) \u03b3 P M I = log P (W , W * ) + P (W ) * P (W * )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "The coherence of a topic is then calculated by taking the arithmetic mean of these confirmation values, with as a small value for preventing the log of zero. The indirect confirmation C V is instead based on comparing the contexts in which W and W * appear. W and W * are represented as vectors of the size of the total word set W . Each value in these vectors consist of a direct confirmation between the word that the vector represents and the words in W . However, now the context is just the tweet that each word appears in. The indirect confirmation between each word in the topic is the cosine similarity of each pair of context vectors such that cos( u, w) =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "|W | i=1 u i \u2022 u i || u|| 2 || w|| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "where u = v(W ) and w = v(W * ). Once again, the arithmetic mean of these similarity values gives the coherence of the topic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
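{
"text": "A minimal from-scratch sketch of the NPMI building block over sliding-window counts (the \epsilon and \gamma values are common defaults, used here as assumptions):

import math
from collections import Counter

def window_counts(tokenized_docs, window=10):
    # Virtual documents: every ten-token sliding window, stepped token by token.
    singles, pairs, n = Counter(), Counter(), 0
    for tokens in tokenized_docs:
        for i in range(max(1, len(tokens) - window + 1)):
            win = set(tokens[i:i + window])
            n += 1
            for w in win:
                singles[w] += 1
            for a in win:
                for b in win:
                    if a < b:
                        pairs[(a, b)] += 1
    return singles, pairs, n

def npmi(w1, w2, singles, pairs, n, eps=1e-12, gamma=1.0):
    p1, p2 = singles[w1] / n, singles[w2] / n
    p12 = pairs[tuple(sorted((w1, w2)))] / n
    pmi = math.log((p12 + eps) / (p1 * p2))
    return (pmi / -math.log(p12 + eps)) ** gamma

docs = [['snow', 'power', 'outage'], ['stay', 'safe', 'snow']]
s, p, n = window_counts(docs)
print(npmi('power', 'outage', s, p, n))  # ~1.0: they always co-occur",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Automated Coherence Metrics",
"sec_num": null
},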
{
"text": "C V was found by R\u00f6der et al. (2015) to be the most interpretable topic coherence metric when compared to human judgement and has later been used extensively on assessing the coherence of short texts like tweets as well (Zeng et al., 2018; Habibabadi and Haghighi, 2019; Wallner et al., 2019) . We also use C N P M I , which has been one of the most successful topic coherence measures based on direct confirmation (R\u00f6der et al., 2015; Lau et al., 2014) . See R\u00f6der et al. (2015) for further details on both C V and C N P M I .",
"cite_spans": [
{
"start": 17,
"end": 36,
"text": "R\u00f6der et al. (2015)",
"ref_id": "BIBREF51"
},
{
"start": 220,
"end": 239,
"text": "(Zeng et al., 2018;",
"ref_id": "BIBREF62"
},
{
"start": 240,
"end": 270,
"text": "Habibabadi and Haghighi, 2019;",
"ref_id": "BIBREF17"
},
{
"start": 271,
"end": 292,
"text": "Wallner et al., 2019)",
"ref_id": "BIBREF58"
},
{
"start": 415,
"end": 435,
"text": "(R\u00f6der et al., 2015;",
"ref_id": "BIBREF51"
},
{
"start": 436,
"end": 453,
"text": "Lau et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 460,
"end": 479,
"text": "R\u00f6der et al. (2015)",
"ref_id": "BIBREF51"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A CrisisNLP Dataset",
"sec_num": null
},
{
"text": "Injured or dead people 1 Reports of casualties and/or injured people due to the crisis Missing, trapped, or found people 2 Reports and/or questions about missing or found people Displaced people and evacuations 3 People who have relocated due to the crisis, even for a short time (includes evacuations Infrastructure and utilities damage 4 Reports of damaged buildings, roads, bridges, or utilities/services interrupted or restored Donation needs or offers or volunteering services 5 Reports of urgent needs or donations of shelter and/or supplies such as food, water, clothing, money, medical supplies or blood; and volunteering services Caution and advice 6 Reports of warnings issued or lifted, guidance and tips Sympathy and emotional support 7 Prayers, thoughts, and emotional support Other useful information 8 Other useful information that helps one understand the situation Not related or irrelevant 9 Unrelated to the situation or irrelevant Table 5 : Label descriptions and id's in (Imran et al., 2016) ",
"cite_spans": [
{
"start": 992,
"end": 1012,
"text": "(Imran et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 951,
"end": 958,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Label Id Description",
"sec_num": null
},
{
"text": "Student researchers familiar with the crisis management literature and the needs of crisis managers as described in \u00a71.1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful for the funding from Environment and Climate Change Canada (ECCC GCXE19M010). Mikael Brunila also thanks the Kone Foundation for their support.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "Here we present one of the set of tweets presented to human annotators for each model and topic. We also show the ground truth intruder tweet which the annotators were asked to predict. Non-ASCII characters were removed here, but included in tweets shown to human annotators.C.1 FTE C.1.1 Topic 0\u2022 How bad was the blizzard in St. John's, Newfoundland?Here's what a seniors home looks like the day after.photo: https://t.co/6n8txqnuWl\u2022 East End of St. Johns 4 days post blizzard. #NLwx #NLtraffic #snowmeged-don2020 https://t.co/5s2p9Ivejf\u2022 Well here's the big mother storm en route.Currently the size of Nova Scotia, nbd. #nlwx https://t.co/CFi9szzunK\u2022 INTRUDER: I hope I don't have to go to work on Monday because I don't remember my password anymore. #nlwx #stormaged-don2020 https://t.co/BswQppEFVh\u2022 #StJohns declares #StateOfEmergency and #Newfoundland and #Labrador get pounded with #SnowFall with more to come https://t.co/JNxnIF5mNx\u2022 Gentle heart of Jesus #nlwx https://t.co/cEbvv3it5f\u2022 This was the moment I fell in love with #Newfoundland, #Canada when I first entered the Gros Morne National Park. https://t.co/2mSD79u6qC\u2022 #nlwx https://t.co/md4pRSafW5\u2022 @UGEABC families-send a pic of what youre reading!!! @NLESD @PowersGr6 @AndreaCoffin76 @MrBlackmoreGr1 https://t.co/FwEbRe9YJs\u2022 INTRUDER: So much for the snow I was really hoping to make a snowman Did anyone get their snowman built? #wheresthesnow https://t.co/FT3IJqIVsS \u2022 Not everyone has the funds to go to the grocery store (not just during a SOE) Hats off to the food banks that are o https://t.co/sljOgZZUmf\u2022 Up to 300 troops from across Canada will be asked to work on the response to the unprecedented #nlblizzard2020. Gag https://t.co/Yw7jUAWUJE\u2022 INTRUDER: Beautiful @Downtown-StJohns the morning after #Snowmageddon https://t.co/HS7jhLIt4l\u2022 So pleased to see the joint efforts of @GovNL with the cities/towns during #snow-maggedon2020. From calling it a SOE https://t.co/0oKer9CLmo\u2022 Updated story: Five days is a long time without food for people who can't afford to stock up. Staff at one St. John https://t.co/gNJSXMsv3B C.1.5 Topic 4\u2022 Were quite buried in Paradise right now! Luckily still have power for now. Stay safe everyone! #nlwx https://t.co/sOC7TL4Al7\u2022 Hoping everyone's pets are safe inside your homes. #nlblizzard2020\u2022 @IDontBlog Yikes! Hoping everyone there stays safe.... #nlblizzard2020 #NLStorm2020\u2022 @DanKudla The weather channel is calling for snow in Toronto, but nothing like that. Hope youre all safe in Newfou https://t.co/0KN2AD3JrS\u2022 INTRUDER: BREAKING -Province is calling in the military @NTVNewsNL #Nlwx https://t.co/ggfOlrbYCz C.1.6 Topic 5\u2022 We've made it through the worst of the storm, but #yyt's State of Emergency remains in effect. Pls stay inside & sa https://t.co/YGVxNOEzf1\u2022 INTRUDER: How COLD is it? Coby took less than 30 seconds to use the facilities this morning! #snowmaggedon2020 #snowstorm https://t.co/f4zZ38MJol\u2022 Stay safe St. Johns! #Newfoundland #snow-maggedon2020\u2022 Take your time cleaning this up, Newfoundland friends! It is a brutal amount of snow! Be safe! #nlwx https://t.co/POwfBYWHFy\u2022 Meanwhile in ON, 15-25cm w/ 50km wind forecast and asked to stay off streets. #nlbliz-zard2020 https://t.co/wwKVnqmM3F C.1.7 Topic 6\u2022 @VOCMNEWS This is amazing. 
In the next couple of days we could really get so many of our neighbourhood hydrants dug https://t.co/7Z2TOfwpwB\u2022 @weathernetwork batteries charging and camera gear drying out after 16 hours shooting in blizzard. @MurphTWN #nlwx https://t.co/yGUMV93Yrw\u2022 So we visited friends last night and left at the height of the storm ... somewhere under there are a couple of cars https://t.co/NYHHxXT4EI\u2022 @KrissyHolmes #nltraffic just like any other weekday morning coming out of Cbs. Two solid lines of traffic\u2022 INTRUDER: @BrianWalshWX so its 7:00 pm , how much more snow potentially will fall before noon tomorrow? #nlblizzard2020 #nlwx C.1.8 Topic 7\u2022 @StormchaserUKEU A glimpse into the future @yyt #nlwx\u2022 Its official -weve named this storm #Betty-WhiteOut2020 in honour of #BettyWhites-Birthday #nlwx\u2022 Even for just a moment with the front door open, snow is hitting you in the face and the wind is taking your breath https://t.co/pJkpwwqDfF\u2022 INTRUDER: #nlwx https://t.co/2mnaF7TkSl\u2022 Open those curtains, let all the sunshine heat in that you can! Solar gain will help us through. #nlwx https://t.co/AFAubBvWwg C.1.9 Topic 8\u2022 INTRUDER: Plow came by and then the neighbours started a snow clearing party. So thankful #nlblizzard2020 #nlwx https://t.co/QpIapG1N4Y\u2022 Y seguimos con la supernevada (fuera de lo comn, tambin hay que decirlo) de #Newfoundland , en #Canad ! Laia ll https://t.co/O1adKSXbFa\u2022 If you have to wear a full snowsuit to go shopping, STAY HOME. You do NOT need to be out. You do NOT need a fondue https://t.co/qig24xkrK5\u2022 @NewfieScumbag Pretty much... love of God! In case you didnt get the memo, keep your packin vehicle off the roads https://t.co/AJe6RpgWBS\u2022 So our front door looks like a neat little burrow now, at least. #nlwx #Snowmageddon2020 https://t.co/qUCdntRJgE \u2022 With a 9:30pm snow total of 20cm, today is #Gander's snowiest day so far this winter. #NLWx https://t.co/ZYhyV2Xpsm\u2022 #nlblizzard2020 #nlstorm look at the winds... the gusts are Cat 4 hurricane equivalent https://t.co/JjFbiMJAdg\u2022 10min avg wind speeds of 132.0km/h with max gust of 167.4km/h through 12:10pm at Green Island, Fortune Bay. #nlwx https://t.co/GtisE875av\u2022 INTRUDER: This is Lovely. I have always had a soft spot in my heart for #Newfoundland and the wonderful people there. https://t.co/CqPyzW3vLH\u2022 There has been the equivalent of 32.7 mm of precipitation since Fri 04:30 at \"ST JOHNS WEST CLIMATE\" #NLStorm",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Human evaluation tweet samples",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Evaluating Topic Coherence Using Distributional Semantics",
"authors": [
{
"first": "Nikolaos",
"middle": [],
"last": "Aletras",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 10th International Conference on Computational Semantics (IWCS 2013) -Long Papers",
"volume": "",
"issue": "",
"pages": "13--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikolaos Aletras and Mark Stevenson. 2013. Evalu- ating Topic Coherence Using Distributional Seman- tics. In Proceedings of the 10th International Con- ference on Computational Semantics (IWCS 2013) - Long Papers, pages 13-22, Potsdam, Germany. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Social Media in Disaster Risk Reduction and Crisis Management",
"authors": [
{
"first": "David",
"middle": [
"E"
],
"last": "Alexander",
"suffix": ""
}
],
"year": 2014,
"venue": "Science and Engineering Ethics",
"volume": "20",
"issue": "3",
"pages": "717--733",
"other_ids": {
"DOI": [
"10.1007/s11948-013-9502-z"
]
},
"num": null,
"urls": [],
"raw_text": "David E. Alexander. 2014. Social Media in Disaster Risk Reduction and Crisis Management. Science and Engineering Ethics, 20(3):717-733.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Automatic Labeling of Tweets for Crisis Response Using Distant Supervision",
"authors": [
{
"first": "Reem",
"middle": [],
"last": "Alrashdi",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "O'Keefe",
"suffix": ""
}
],
"year": 2020,
"venue": "Companion Proceedings of the Web Conference 2020, WWW '20",
"volume": "",
"issue": "",
"pages": "418--425",
"other_ids": {
"DOI": [
"10.1145/3366424.3383757"
]
},
"num": null,
"urls": [],
"raw_text": "Reem Alrashdi and Simon O'Keefe. 2020. Automatic Labeling of Tweets for Crisis Response Using Dis- tant Supervision. In Companion Proceedings of the Web Conference 2020, WWW '20, pages 418-425, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Pre-training is a Hot Topic",
"authors": [
{
"first": "Federico",
"middle": [],
"last": "Bianchi",
"suffix": ""
},
{
"first": "Silvia",
"middle": [],
"last": "Terragni",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2020,
"venue": "Contextualized Document Embeddings Improve Topic Coherence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2004.03974[cs].ArXiv:2004.03974"
]
},
"num": null,
"urls": [],
"raw_text": "Federico Bianchi, Silvia Terragni, and Dirk Hovy. 2020. Pre-training is a Hot Topic: Contextualized Document Embeddings Improve Topic Coherence. arXiv:2004.03974 [cs]. ArXiv: 2004.03974.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [
"Y"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"I"
],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Ma- chine Learning Research, 3(Jan):993-1022.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The Ideal Topic: Interdependence of Topic Interpretability and Other Quality Features in Topic Modelling for Short Texts",
"authors": [
{
"first": "Ivan",
"middle": [
"S"
],
"last": "Blekanov",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [
"S"
],
"last": "Bodrunova",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Zhuravleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Smoliarova",
"suffix": ""
},
{
"first": "Nikita",
"middle": [],
"last": "Tarasov",
"suffix": ""
}
],
"year": 2020,
"venue": "Social Computing and Social Media. Design, Ethics, User Behavior, and Social Network Analysis",
"volume": "",
"issue": "",
"pages": "19--26",
"other_ids": {
"DOI": [
"10.1007/978-3-030-49570-1_2"
]
},
"num": null,
"urls": [],
"raw_text": "Ivan S. Blekanov, Svetlana S. Bodrunova, Nina Zhu- ravleva, Anna Smoliarova, and Nikita Tarasov. 2020. The Ideal Topic: Interdependence of Topic Inter- pretability and Other Quality Features in Topic Mod- elling for Short Texts. In Social Computing and So- cial Media. Design, Ethics, User Behavior, and So- cial Network Analysis, Lecture Notes in Computer Science, pages 19-26, Cham. Springer International Publishing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reading Tea Leaves: How Humans Interpret Topic Models",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Gerrish",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jordan",
"middle": [
"L"
],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "David",
"middle": [
"M"
],
"last": "Blei",
"suffix": ""
}
],
"year": 2009,
"venue": "Advances in Neural Information Processing Systems",
"volume": "22",
"issue": "",
"pages": "288--296",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Chang, Sean Gerrish, Chong Wang, Jordan L. Boyd-graber, and David M. Blei. 2009. Reading Tea Leaves: How Humans Interpret Topic Models. In Y. Bengio, D. Schuurmans, J. D. Lafferty, C. K. I. Williams, and A. Culotta, editors, Advances in Neu- ral Information Processing Systems 22, pages 288- 296. Curran Associates, Inc.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Topic Model Diagnostics: Assessing Domain Relevance via Topical Alignment",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Chuang",
"suffix": ""
},
{
"first": "Sonal",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Heer",
"suffix": ""
}
],
"year": 2013,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1938--7228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason Chuang, Sonal Gupta, Christopher Manning, and Jeffrey Heer. 2013. Topic Model Diagnostics: As- sessing Domain Relevance via Topical Alignment. In International Conference on Machine Learning, pages 612-620. PMLR. ISSN: 1938-7228.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "What Does BERT Look at? An Analysis of BERT's Attention",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Urvashi",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "276--286",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4828"
]
},
"num": null,
"urls": [],
"raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What Does BERT Look at? An Analysis of BERT's Attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Analyzing polarization in social media: Method and application to tweets on 21 mass shootings",
"authors": [
{
"first": "Dorottya",
"middle": [],
"last": "Demszky",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Rob",
"middle": [],
"last": "Voigt",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Shapiro",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Gentzkow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2970--3005",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1304"
]
},
"num": null,
"urls": [],
"raw_text": "Dorottya Demszky, Nikhil Garg, Rob Voigt, James Zou, Jesse Shapiro, Matthew Gentzkow, and Dan Ju- rafsky. 2019. Analyzing polarization in social me- dia: Method and application to tweets on 21 mass shootings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2970-3005, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Un- derstanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using the triangle inequality to accelerate k-means",
"authors": [
{
"first": "Charles",
"middle": [],
"last": "Elkan",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Twentieth International Conference on International Conference on Machine Learning, ICML'03",
"volume": "",
"issue": "",
"pages": "147--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Charles Elkan. 2003. Using the triangle inequality to accelerate k-means. In Proceedings of the Twenti- eth International Conference on International Con- ference on Machine Learning, ICML'03, pages 147- 153, Washington, DC, USA. AAAI Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Big data analytics in prevention, preparedness, response and recovery in crisis and disaster management",
"authors": [
{
"first": "Dontas",
"middle": [],
"last": "Emmanouil",
"suffix": ""
},
{
"first": "Doukas",
"middle": [],
"last": "Nikolaos",
"suffix": ""
}
],
"year": 2015,
"venue": "The 18th International Conference on Circuits, Systems, Communications and Computers (CSCC 2015), Recent Advances in Computer Engineering Series",
"volume": "32",
"issue": "",
"pages": "476--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dontas Emmanouil and Doukas Nikolaos. 2015. Big data analytics in prevention, preparedness, response and recovery in crisis and disaster management. In The 18th International Conference on Circuits, Systems, Communications and Computers (CSCC 2015), Recent Advances in Computer Engineering Series, volume 32, pages 476-482.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Crippling Newfoundland, Canada, Blizzard From Bomb Cyclone Smashes All-Time Daily Snow Record. The Weather Channel",
"authors": [
{
"first": "Jonathan",
"middle": [
"Erdman"
],
"last": "",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Erdman. 2020. Crippling Newfoundland, Canada, Blizzard From Bomb Cyclone Smashes All- Time Daily Snow Record. The Weather Channel.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Word embeddings quantify 100 years of gender and ethnic stereotypes. Proceedings of the National Academy of Sciences",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Londa",
"middle": [],
"last": "Schiebinger",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Zou",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "115",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1073/pnas.1720347115"
]
},
"num": null,
"urls": [],
"raw_text": "Nikhil Garg, Londa Schiebinger, Dan Jurafsky, and James Zou. 2018. Word embeddings quantify 100 years of gender and ethnic stereotypes. Pro- ceedings of the National Academy of Sciences, 115(16):E3635-E3644. ISBN: 9781720347118",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Publisher: National Academy of Sciences Section",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Publisher: National Academy of Sciences Section: PNAS Plus.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Topic Modelling for Identification of Vaccine Reactions in Twitter",
"authors": [
{
"first": "Sedigheh",
"middle": [
"Khademi"
],
"last": "Habibabadi",
"suffix": ""
},
{
"first": "Pari",
"middle": [
"Delir"
],
"last": "Haghighi",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Australasian Computer Science Week Multiconference, ACSW 2019",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1145/3290688.3290735"
]
},
"num": null,
"urls": [],
"raw_text": "Sedigheh Khademi Habibabadi and Pari Delir Haghighi. 2019. Topic Modelling for Identification of Vaccine Reactions in Twitter. In Proceedings of the Australasian Computer Science Week Multicon- ference, ACSW 2019, pages 1-10, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change",
"authors": [
{
"first": "William",
"middle": [
"L"
],
"last": "Hamilton",
"suffix": ""
},
{
"first": "Jure",
"middle": [],
"last": "Leskovec",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2116--2121",
"other_ids": {
"DOI": [
"10.18653/v1/D16-1229"
]
},
"num": null,
"urls": [],
"raw_text": "William L. Hamilton, Jure Leskovec, and Dan Jurafsky. 2016. Cultural Shift or Linguistic Drift? Comparing Two Computational Measures of Semantic Change. In Proceedings of the 2016 Conference on Empiri- cal Methods in Natural Language Processing, pages 2116-2121, Austin, Texas. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Empirical study of topic modeling in Twitter",
"authors": [
{
"first": "Liangjie",
"middle": [],
"last": "Hong",
"suffix": ""
},
{
"first": "Brian",
"middle": [
"D."
],
"last": "Davison",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the First Workshop on Social Media Analytics, SOMA '10",
"volume": "",
"issue": "",
"pages": "80--88",
"other_ids": {
"DOI": [
"10.1145/1964858.1964870"
]
},
"num": null,
"urls": [],
"raw_text": "Liangjie Hong and Brian D. Davison. 2010. Empiri- cal study of topic modeling in Twitter. In Proceed- ings of the First Workshop on Social Media Analyt- ics, SOMA '10, pages 80-88, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Extracting information nuggets from disaster-Related messages in social media",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Imran",
"suffix": ""
},
{
"first": "Shady",
"middle": [],
"last": "Elbassuoni",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "D\u00edaz",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Meier",
"suffix": ""
}
],
"year": 2013,
"venue": "ISCRAM 2013 Conference Proceedings -10th International Conference on Information Systems for Crisis Response and Management",
"volume": "",
"issue": "",
"pages": "2411--3387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Imran, Shady Elbassuoni, Carlos Castillo, Fernando D\u00edaz, and Patrick Meier. 2013. Extract- ing information nuggets from disaster-Related mes- sages in social media. In ISCRAM 2013 Conference Proceedings -10th International Conference on In- formation Systems for Crisis Response and Man- agement, pages 791-801, KIT; Baden-Baden. Karl- sruher Institut fur Technologie. ISSN: 2411-3387 Journal Abbreviation: ISCRAM 2013.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Twitter as a Lifeline: Human-annotated Twitter Corpora for NLP of Crisis-related Messages",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Imran",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Castillo",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)",
"volume": "",
"issue": "",
"pages": "1638--1643",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Imran, Prasenjit Mitra, and Carlos Castillo. 2016. Twitter as a Lifeline: Human-annotated Twit- ter Corpora for NLP of Crisis-related Messages. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1638-1643, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Attention is not Explanation",
"authors": [
{
"first": "Sarthak",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "3543--3556",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1357"
]
},
"num": null,
"urls": [],
"raw_text": "Sarthak Jain and Byron C. Wallace. 2019. Attention is not Explanation. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Examining the Role of Social Media in Effective Crisis Management: The Effects of Crisis Origin, Information Form, and Source on Publics' Crisis Responses",
"authors": [
{
"first": "Yan",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Brooke",
"middle": [
"Fisher"
],
"last": "Liu",
"suffix": ""
},
{
"first": "Lucinda",
"middle": [
"L"
],
"last": "Austin",
"suffix": ""
}
],
"year": 2014,
"venue": "Communication Research",
"volume": "41",
"issue": "1",
"pages": "",
"other_ids": {
"DOI": [
"10.1177/0093650211423918"
]
},
"num": null,
"urls": [],
"raw_text": "Yan Jin, Brooke Fisher Liu, and Lucinda L. Austin. 2014. Examining the Role of Social Media in Effec- tive Crisis Management: The Effects of Crisis Ori- gin, Information Form, and Source on Publics' Cri- sis Responses. Communication Research, 41(1):74- 94. ZSCC: 0000391 Publisher: SAGE Publications Inc.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Building Human Resilience: The Role of Public Health Preparedness and Response As an Adaptation to Climate Change",
"authors": [
{
"first": "Mark",
"middle": [
"E"
],
"last": "Keim",
"suffix": ""
}
],
"year": 2008,
"venue": "American Journal of Preventive Medicine",
"volume": "35",
"issue": "5",
"pages": "508--516",
"other_ids": {
"DOI": [
"10.1016/j.amepre.2008.08.022"
]
},
"num": null,
"urls": [],
"raw_text": "Mark E. Keim. 2008. Building Human Resilience: The Role of Public Health Preparedness and Response As an Adaptation to Climate Change. American Journal of Preventive Medicine, 35(5):508-516.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is not only a weight: Analyzing transformers with vector norms",
"authors": [
{
"first": "Goro",
"middle": [],
"last": "Kobayashi",
"suffix": ""
},
{
"first": "Tatsuki",
"middle": [],
"last": "Kuribayashi",
"suffix": ""
},
{
"first": "Sho",
"middle": [],
"last": "Yokoi",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Inui",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "7057--7075",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.574"
]
},
"num": null,
"urls": [],
"raw_text": "Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi, and Kentaro Inui. 2020. Attention is not only a weight: Analyzing transformers with vector norms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7057-7075, Online. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Revealing the Dark Secrets of BERT",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "4365--4374",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1445"
]
},
"num": null,
"urls": [],
"raw_text": "Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the Dark Secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 4365-4374, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings",
"authors": [
{
"first": "Austin",
"middle": [
"C"
],
"last": "Kozlowski",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Taddy",
"suffix": ""
},
{
"first": "James",
"middle": [
"A"
],
"last": "Evans",
"suffix": ""
}
],
"year": 2019,
"venue": "American Sociological Review",
"volume": "84",
"issue": "5",
"pages": "905--949",
"other_ids": {
"DOI": [
"10.1177/0003122419877135"
]
},
"num": null,
"urls": [],
"raw_text": "Austin C. Kozlowski, Matt Taddy, and James A. Evans. 2019. The Geometry of Culture: Analyzing the Meanings of Class through Word Embeddings. American Sociological Review, 84(5):905-949.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "An Empirical Evaluation of doc2vec with Practical Insights into Document Embedding Generation",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 1st Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "78--86",
"other_ids": {
"DOI": [
"10.18653/v1/W16-1609"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau and Timothy Baldwin. 2016. An Empir- ical Evaluation of doc2vec with Practical Insights into Document Embedding Generation. In Proceed- ings of the 1st Workshop on Representation Learning for NLP, pages 78-86, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Machine Reading Tea Leaves: Automatically Evaluating Topic Coherence and Topic Model Quality",
"authors": [
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "530--539",
"other_ids": {
"DOI": [
"10.3115/v1/E14-1056"
]
},
"num": null,
"urls": [],
"raw_text": "Jey Han Lau, David Newman, and Timothy Baldwin. 2014. Machine Reading Tea Leaves: Automati- cally Evaluating Topic Coherence and Topic Model Quality. In Proceedings of the 14th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 530-539, Gothenburg, Sweden. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The human touch: How non-expert users perceive, interpret, and fix topic models",
"authors": [
{
"first": "Tak",
"middle": [
"Yeon"
],
"last": "Lee",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Seppi",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Elmqvist",
"suffix": ""
},
{
"first": "Jordan",
"middle": [],
"last": "Boyd-Graber",
"suffix": ""
},
{
"first": "Leah",
"middle": [],
"last": "Findlater",
"suffix": ""
}
],
"year": 2017,
"venue": "International Journal of Human-Computer Studies",
"volume": "105",
"issue": "",
"pages": "28--42",
"other_ids": {
"DOI": [
"10.1016/j.ijhcs.2017.03.007"
]
},
"num": null,
"urls": [],
"raw_text": "Tak Yeon Lee, Alison Smith, Kevin Seppi, Niklas Elmqvist, Jordan Boyd-Graber, and Leah Findlater. 2017. The human touch: How non-expert users per- ceive, interpret, and fix topic models. International Journal of Human-Computer Studies, 105:28-42.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Neural word embedding as implicit matrix factorization",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Goldberg",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 27th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "2177--2185",
"other_ids": {
"DOI": [
"https://dl.acm.org/doi/10.5555/2969033.2969070"
]
},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Pro- ceedings of the 27th International Conference on Neural Information Processing Systems -Volume 2, NIPS'14, pages 2177-2185, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Disaster response aided by tweet classification with a domain adaptation approach",
"authors": [
{
"first": "Hongmin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Doina",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "Cornelia",
"middle": [],
"last": "Caragea",
"suffix": ""
},
{
"first": "Nic",
"middle": [],
"last": "Herndon",
"suffix": ""
}
],
"year": 2018,
"venue": "Journal of Contingencies and Crisis Management",
"volume": "26",
"issue": "1",
"pages": "16--27",
"other_ids": {
"DOI": [
"10.1111/1468-5973.12194"
]
},
"num": null,
"urls": [],
"raw_text": "Hongmin Li, Doina Caragea, Cornelia Caragea, and Nic Herndon. 2018. Disaster response aided by tweet classification with a domain adaptation approach. Journal of Contingen- cies and Crisis Management, 26(1):16-27. eprint: https://onlinelibrary.wiley.com/doi/pdf/10.1111/1468- 5973.12194.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Open Sesame: Getting inside BERT's Linguistic Knowledge",
"authors": [
{
"first": "Yongjie",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chern Tan",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Frank",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP",
"volume": "",
"issue": "",
"pages": "241--253",
"other_ids": {
"DOI": [
"10.18653/v1/W19-4825"
]
},
"num": null,
"urls": [],
"raw_text": "Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open Sesame: Getting inside BERT's Linguistic Knowledge. In Proceedings of the 2019 ACL Work- shop BlackboxNLP: Analyzing and Interpreting Neu- ral Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Least squares quantization in PCM",
"authors": [
{
"first": "S",
"middle": [],
"last": "Lloyd",
"suffix": ""
}
],
"year": 1982,
"venue": "Conference Name: IEEE Transactions on Information Theory",
"volume": "28",
"issue": "",
"pages": "129--137",
"other_ids": {
"DOI": [
"10.1109/TIT.1982.1056489"
]
},
"num": null,
"urls": [],
"raw_text": "S. Lloyd. 1982. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129-137. Conference Name: IEEE Transac- tions on Information Theory.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Tweets Classification with BERT in the Field of Disaster Management",
"authors": [
{
"first": "Guoqin",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guoqin Ma. 2019. Tweets Classification with BERT in the Field of Disaster Management. page 15.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tom\u00e1s",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "1st International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1s Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word represen- tations in vector space. In 1st International Con- ference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 26th International Conference on Neural Information Processing Systems",
"volume": "2",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Cor- rado, and Jeffrey Dean. 2013b. Distributed repre- sentations of words and phrases and their composi- tionality. In Proceedings of the 26th International Conference on Neural Information Processing Sys- tems -Volume 2, NIPS'13, pages 3111-3119, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Optimizing Semantic Coherence in Topic Models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Mimno",
"suffix": ""
},
{
"first": "Hanna",
"middle": [],
"last": "Wallach",
"suffix": ""
},
{
"first": "Edmund",
"middle": [],
"last": "Talley",
"suffix": ""
},
{
"first": "Miriam",
"middle": [],
"last": "Leenders",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "262--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Mimno, Hanna Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing Semantic Coherence in Topic Models. In Proceedings of the 2011 Conference on Empiri- cal Methods in Natural Language Processing, pages 262-272, Edinburgh, Scotland, UK. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Automatic evaluation of topic coherence",
"authors": [
{
"first": "David",
"middle": [],
"last": "Newman",
"suffix": ""
},
{
"first": "Jey",
"middle": [
"Han"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Karl",
"middle": [],
"last": "Grieser",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "100--108",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Newman, Jey Han Lau, Karl Grieser, and Tim- othy Baldwin. 2010. Automatic evaluation of topic coherence. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Lin- guistics, HLT '10, pages 100-108, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Robust classification of crisis-related data on social networks using convolutional neural networks",
"authors": [
{
"first": "Dat",
"middle": [
"Tien"
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Kamela",
"middle": [
"Ali",
"Al"
],
"last": "Mannai",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [],
"last": "Joty",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Sajjad",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Imran",
"suffix": ""
},
{
"first": "Prasenjit",
"middle": [],
"last": "Mitra",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017, Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017",
"volume": "",
"issue": "",
"pages": "632--635",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Tien Nguyen, Kamela Ali Al Mannai, Shafiq Joty, Hassan Sajjad, Muhammad Imran, and Prasenjit Mi- tra. 2017. Robust classification of crisis-related data on social networks using convolutional neural net- works. In Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017, Proceedings of the 11th International Conference on Web and Social Media, ICWSM 2017, pages 632- 635. AAAI press.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "A Study of Information Retrieval Weighting Schemes for Sentiment Analysis",
"authors": [
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1386--1395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Georgios Paltoglou and Mike Thelwall. 2010. A Study of Information Retrieval Weighting Schemes for Sentiment Analysis. In Proceedings of the 48th Annual Meeting of the Association for Computa- tional Linguistics, pages 1386-1395, Uppsala, Swe- den. Association for Computational Linguistics.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {
"DOI": [
"10.3115/v1/D14-1162"
]
},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Deep Contextualized Word Representations",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "2227--2237",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1202"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep Contextualized Word Rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "An empirical analysis and classification of crisis related tweets",
"authors": [
{
"first": "J.",
"middle": [
"Rexiline"
],
"last": "Ragini",
"suffix": ""
},
{
"first": "P",
"middle": [
"M"
],
"last": "Rubesh Anand",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)",
"volume": "",
"issue": "",
"pages": "2473--943",
"other_ids": {
"DOI": [
"10.1109/ICCIC.2016.7919608"
]
},
"num": null,
"urls": [],
"raw_text": "J. Rexiline Ragini and P. M. Rubesh Anand. 2016. An empirical analysis and classification of crisis related tweets. In 2016 IEEE International Conference on Computational Intelligence and Computing Re- search (ICCIC), pages 1-4. ISSN: 2473-943X.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Emerging Perspectives in Human-Centered Machine Learning",
"authors": [
{
"first": "Gonzalo",
"middle": [],
"last": "Ramos",
"suffix": ""
},
{
"first": "Jina",
"middle": [],
"last": "Suh",
"suffix": ""
},
{
"first": "Soroush",
"middle": [],
"last": "Ghorashi",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Banks",
"suffix": ""
},
{
"first": "Saleema",
"middle": [],
"last": "Amershi",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Fiebrink",
"suffix": ""
},
{
"first": "Alison",
"middle": [],
"last": "Smith-Renner",
"suffix": ""
},
{
"first": "Gagan",
"middle": [],
"last": "Bansal",
"suffix": ""
}
],
"year": 2019,
"venue": "Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {
"DOI": [
"10.1145/3290607.3299014"
]
},
"num": null,
"urls": [],
"raw_text": "Gonzalo Ramos, Jina Suh, Soroush Ghorashi, Christo- pher Meek, Richard Banks, Saleema Amershi, Re- becca Fiebrink, Alison Smith-Renner, and Gagan Bansal. 2019. Emerging Perspectives in Human- Centered Machine Learning. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, CHI EA '19, pages 1-8, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3982--3992",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1410"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence- BERT: Sentence Embeddings using Siamese BERT- Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Classification and Clustering of Arguments with Contextualized Word Embeddings",
"authors": [
{
"first": "Nils",
"middle": [],
"last": "Reimers",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "Tilman",
"middle": [],
"last": "Beck",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Daxenberger",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Stab",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "567--578",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1054"
]
},
"num": null,
"urls": [],
"raw_text": "Nils Reimers, Benjamin Schiller, Tilman Beck, Jo- hannes Daxenberger, Christian Stab, and Iryna Gurevych. 2019. Classification and Clustering of Arguments with Contextualized Word Embeddings. In Proceedings of the 57th Annual Meeting of the As- sociation for Computational Linguistics, pages 567- 578, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "A primer in BERTology: What we know about how BERT works",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Rogers",
"suffix": ""
},
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "842--866",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00349"
]
},
"num": null,
"urls": [],
"raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Using deep learning and social network analysis to understand and manage extreme flooding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Romascanu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ker",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Sieber",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brunila",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Greenidge",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Lumley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bush",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Morgan",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Contingencies and Crisis Management",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1111/1468-5973.12311"
]
},
"num": null,
"urls": [],
"raw_text": "A. Romascanu, H. Ker, R. Sieber, R. Zhao, M. Brunila, S. Greenidge, S. Lumley, D. Bush, and S. Morgan. 2020. Using deep learning and social network anal- ysis to understand and manage extreme flooding. Journal of Contingencies and Crisis Management.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Evaluating topic coherence measures",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Rosner",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hinneburg",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "R\u00f6der",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Nettling",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Both",
"suffix": ""
}
],
"year": 2013,
"venue": "Topic Models: Computation, Application, and Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Rosner, Alexander Hinneburg, Michael R\u00f6der, Martin Nettling, and Andreas Both. 2013. Evalu- ating topic coherence measures. In Topic Models: Computation, Application, and Evaluation.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Exploring the Space of Topic Coherence Measures",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "R\u00f6der",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Both",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Hinneburg",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, WSDM '15",
"volume": "",
"issue": "",
"pages": "399--408",
"other_ids": {
"DOI": [
"10.1145/2684822.2685324"
]
},
"num": null,
"urls": [],
"raw_text": "Michael R\u00f6der, Andreas Both, and Alexander Hinneb- urg. 2015. Exploring the Space of Topic Coherence Measures. In Proceedings of the Eighth ACM Inter- national Conference on Web Search and Data Min- ing, WSDM '15, pages 399-408, Shanghai, China. Association for Computing Machinery.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Evaluating multi-label classification of incident-related tweet",
"authors": [
{
"first": "Axel",
"middle": [],
"last": "Schulz",
"suffix": ""
},
{
"first": "Eneldo",
"middle": [],
"last": "Loza Menc\u00eda",
"suffix": ""
},
{
"first": "Thanh-Tung",
"middle": [],
"last": "Dang",
"suffix": ""
},
{
"first": "Benedikt",
"middle": [],
"last": "Schmidt",
"suffix": ""
}
],
"year": 2014,
"venue": "# MSM",
"volume": "",
"issue": "",
"pages": "26--33",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Axel Schulz, Eneldo Loza Menc\u00eda, Thanh-Tung Dang, and Benedikt Schmidt. 2014. Evaluating multi-label classification of incident-related tweet. In # MSM, pages 26-33. Citeseer.",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Is Attention Interpretable?",
"authors": [
{
"first": "Sofia",
"middle": [],
"last": "Serrano",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2931--2951",
"other_ids": {
"DOI": [
"10.18653/v1/P19-1282"
]
},
"num": null,
"urls": [],
"raw_text": "Sofia Serrano and Noah A. Smith. 2019. Is Attention Interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Lin- guistics, pages 2931-2951, Florence, Italy. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Domain adaptation for classifying disaster-related Twitter data",
"authors": [
{
"first": "Oleksandra",
"middle": [],
"last": "Sopova",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oleksandra Sopova. 2017. Domain adaptation for clas- sifying disaster-related Twitter data. Report, Kansas State University.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "A statistical interpretation of term specificity and its application in retrieval",
"authors": [
{
"first": "Karen Sp\u00e4rck",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Documentation",
"volume": "60",
"issue": "5",
"pages": "493--502",
"other_ids": {
"DOI": [
"10.1108/00220410410560573"
]
},
"num": null,
"urls": [],
"raw_text": "Karen Sp\u00e4rck Jones. 2004. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 60(5):493-502. Pub- lisher: Emerald Group Publishing Limited.",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Using Twitter and other social media platforms to provide situational awareness during an incident",
"authors": [
{
"first": "Ed",
"middle": [],
"last": "Tobias",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Business Continuity & Emergency Planning",
"volume": "5",
"issue": "3",
"pages": "208--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ed Tobias. 2011. Using Twitter and other social media platforms to provide situational awareness during an incident. Journal of Business Continuity & Emer- gency Planning, 5(3):208-223. Publisher: Henry Stewart Publications LLP.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {
"DOI": [
"10.5555/3295222.3295349"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, NIPS'17, pages 6000-6010, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Tweeting your Destiny: Profiling Users in the Twitter Landscape around an Online Game",
"authors": [
{
"first": "G\u00fcnter",
"middle": [],
"last": "Wallner",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "Kriglstein",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "Drachen",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE Conference on Games (CoG)",
"volume": "",
"issue": "",
"pages": "2325--4289",
"other_ids": {
"DOI": [
"10.1109/CIG.2019.8848079"
]
},
"num": null,
"urls": [],
"raw_text": "G\u00fcnter Wallner, Simone Kriglstein, and Anders Drachen. 2019. Tweeting your Destiny: Profiling Users in the Twitter Landscape around an Online Game. In 2019 IEEE Conference on Games (CoG), pages 1-8. ISSN: 2325-4289.",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Attention is not not Explanation",
"authors": [
{
"first": "Sarah",
"middle": [],
"last": "Wiegreffe",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Pinter",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "11--20",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1002"
]
},
"num": null,
"urls": [],
"raw_text": "Sarah Wiegreffe and Yuval Pinter. 2019. Attention is not not Explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Process- ing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "A biterm topic model for short texts",
"authors": [
{
"first": "Xiaohui",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Jiafeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Yanyan",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Xueqi",
"middle": [],
"last": "Cheng",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 22nd international conference on World Wide Web, WWW '13",
"volume": "",
"issue": "",
"pages": "1445--1456",
"other_ids": {
"DOI": [
"10.1145/2488388.2488514"
]
},
"num": null,
"urls": [],
"raw_text": "Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd international conference on World Wide Web, WWW '13, pages 1445-1456, Rio de Janeiro, Brazil. Association for Computing Machinery.",
"links": null
},
"BIBREF61": {
"ref_id": "b61",
"title": "Ibrahim Elgendy, and Mohamed Ahmed Sherif. 2019. Fine-tuned BERT Model for Multi-Label Tweets Classification",
"authors": [
{
"first": "Rricha",
"middle": [],
"last": "Hamada M Zahera",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jalota",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hamada M Zahera, Rricha Jalota, Ibrahim Elgendy, and Mohamed Ahmed Sherif. 2019. Fine-tuned BERT Model for Multi-Label Tweets Classification. page 7.",
"links": null
},
"BIBREF62": {
"ref_id": "b62",
"title": "Topic Memory Networks for Short Text Classification",
"authors": [
{
"first": "Jichuan",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Cuiyun",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"R"
],
"last": "Lyu",
"suffix": ""
},
{
"first": "Irwin",
"middle": [],
"last": "King",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3120--3131",
"other_ids": {
"DOI": [
"10.18653/v1/D18-1351"
]
},
"num": null,
"urls": [],
"raw_text": "Jichuan Zeng, Jing Li, Yan Song, Cuiyun Gao, Michael R. Lyu, and Irwin King. 2018. Topic Mem- ory Networks for Short Text Classification. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 3120- 3131, Brussels, Belgium. Association for Computa- tional Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Effect of keyword extraction strategies."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Comparison of FTE with baselines."
},
"TABREF1": {
"type_str": "table",
"text": "Topic keywords for our FTE model and the BTM and BERT baselines used in human evaluation.",
"content": "<table/>",
"num": null,
"html": null
},
"TABREF2": {
"type_str": "table",
"text": "Figure 3: Topic-level results for keyword and cluster evaluations, aligned with topics from Table 1. All scores are rescaled to values between 0 and 1, then averaged across annotators and samples.",
"content": "<table><tr><td/><td colspan=\"3\">Average Score Topic Count</td><td>Fleiss' \u03ba</td></tr><tr><td>Score</td><td colspan=\"4\">BTM FTE BTM FTE BTM FTE</td></tr><tr><td colspan=\"2\">Interpretability 31.94 65.28</td><td>1</td><td>5</td><td>15.01 17.97</td></tr><tr><td>Usefulness</td><td>27.78 59.72</td><td>1</td><td>5</td><td>12.36 21.55</td></tr></table>",
"num": null,
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "Keyword Evaluation scores averaged across topics, number of topics with average scores greater than 0.5, and inter-rater agreements (Fleiss' \u03ba).",
"content": "<table><tr><td/><td colspan=\"3\">Average Score Topic Count</td><td colspan=\"2\">Fleiss' \u03ba</td></tr><tr><td>Score</td><td colspan=\"5\">BTM FTE BTM FTE BTM FTE</td></tr><tr><td>Interpretability</td><td>50.28 51.53</td><td>3</td><td>4</td><td colspan=\"2\">11.05 23.45</td></tr><tr><td>Usefulness</td><td>45.46 46.11</td><td>3</td><td>5</td><td colspan=\"2\">21.82 21.60</td></tr><tr><td>Correct Intruders</td><td>35.28 44.17</td><td>2</td><td>4</td><td colspan=\"2\">25.78 31.50</td></tr><tr><td colspan=\"2\">Unknown Intruders 26.39 8.89</td><td>0</td><td>0</td><td>-</td><td>-</td></tr></table>",
"num": null,
"html": null
},
"TABREF4": {
"type_str": "table",
"text": "from the CrisisNLP Dataset.",
"content": "<table><tr><td/><td/><td/><td/><td/><td>Label Id</td><td/><td/><td/><td/></tr><tr><td>Crisis Dataset</td><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td><td>8</td><td>9</td></tr><tr><td>2013 pak eq</td><td>351</td><td>5</td><td>16</td><td>29</td><td>325</td><td>75</td><td>112</td><td>764</td><td>336</td></tr><tr><td>2014 cali eq</td><td>217</td><td>6</td><td>4</td><td>351</td><td>83</td><td>84</td><td>83</td><td colspan=\"2\">1028 157</td></tr><tr><td>2014 chile eq</td><td>119</td><td>6</td><td>63</td><td>26</td><td>10</td><td>250</td><td>541</td><td>634</td><td>364</td></tr><tr><td>2014 odile</td><td>50</td><td colspan=\"3\">39 153 848</td><td>248</td><td>77</td><td>166</td><td>380</td><td>52</td></tr><tr><td colspan=\"2\">2014 india floods 959</td><td>14</td><td>27</td><td>67</td><td>48</td><td>44</td><td>30</td><td>312</td><td>502</td></tr><tr><td>2014 pak floods</td><td colspan=\"3\">259 117 106</td><td>94</td><td>529</td><td>56</td><td>127</td><td>698</td><td>27</td></tr><tr><td>2014 hagupit</td><td>66</td><td>8</td><td>130</td><td>92</td><td>113</td><td>349</td><td>290</td><td>732</td><td>233</td></tr><tr><td>2015 pam</td><td>143</td><td>18</td><td>49</td><td>212</td><td>364</td><td>93</td><td>95</td><td>542</td><td>497</td></tr><tr><td>2015 nepal eq</td><td colspan=\"3\">346 189 85</td><td>132</td><td>890</td><td>35</td><td>525</td><td>639</td><td>177</td></tr><tr><td>Total</td><td colspan=\"9\">2510 402 633 1851 2610 1063 1969 5729 2345</td></tr></table>",
"num": null,
"html": null
},
"TABREF5": {
"type_str": "table",
"text": "Label counts for the different datasets labeled by Crowdflower workers in",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}