{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T11:10:50.168128Z" }, "title": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification in Social Media Conversations", "authors": [ { "first": "Jianfei", "middle": [], "last": "Yu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanjing University of Science & Technology", "location": { "country": "China" } }, "email": "jfyu@njust.edu.cn" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Singapore Management University", "location": { "country": "Singapore" } }, "email": "jingjiang@smu.edu.sg" }, { "first": "Ling", "middle": [], "last": "Min", "suffix": "", "affiliation": {}, "email": "klingmin@dso.org.sg" }, { "first": "Serena", "middle": [], "last": "Khoo", "suffix": "", "affiliation": { "laboratory": "", "institution": "DSO National Laboratories", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Hai", "middle": [], "last": "Leong Chieu", "suffix": "", "affiliation": { "laboratory": "", "institution": "DSO National Laboratories", "location": { "country": "Singapore" } }, "email": "" }, { "first": "Rui", "middle": [], "last": "Xia", "suffix": "", "affiliation": { "laboratory": "", "institution": "Nanjing University of Science & Technology", "location": { "country": "China" } }, "email": "rxia@njust.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Lee Kuan Yew died already. www.pmo.gov.sg/lky. Source Post Query Is it true? Lee Kuan Yew Died? Can anyone confirm it? No, I don't believe it is true. R2: Reply Post R21: Reply Post Deny Support He died several days ago. They didn't announce until now. R1: Reply Post I also think so. He was on TV last week.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Lee Kuan Yew died already. www.pmo.gov.sg/lky. Source Post Query Is it true? Lee Kuan Yew Died? Can anyone confirm it? No, I don't believe it is true. R2: Reply Post R21: Reply Post Deny Support He died several days ago. They didn't announce until now. R1: Reply Post I also think so. He was on TV last week.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The prevalent use of social media enables rapid spread of rumors on a massive scale, which leads to the emerging need of automatic rumor verification (RV). A number of previous studies focus on leveraging stance classification to enhance RV with multi-task learning (MTL) methods. However, most of these methods failed to employ pre-trained contextualized embeddings such as BERT, and did not exploit inter-task dependencies by using predicted stance labels to improve the RV task. Therefore, in this paper, to extend BERT to obtain thread representations, we first propose a Hierarchical Transformer 1 , which divides each long thread into shorter subthreads, and employs BERT to separately represent each subthread, followed by a global Transformer layer to encode all the subthreads. We further propose a Coupled Transformer Module to capture the inter-task interactions and a Post-Level Attention layer to use the predicted stance labels for RV, respectively. 
Experiments on two benchmark datasets show the superiority of our Coupled Hierarchical Transformer model over existing MTL approaches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Recent years have witnessed a profound revolution in social media, as many individuals gradually turn to different social platforms to share the latest news and voice personal opinions. Meanwhile, the flourish of social media also enables rapid dissemination of unverified information (i.e., rumors) on a massive scale, which may cause serious harm to our society (e.g., impacting presidential election decisions (Allcott and Gentzkow, 2017) ). Since manually checking a sheer quantity of rumors on Figure 1 : An example conversation thread with both rumor veracity label and stance labels. Each post has a stance label towards the claim in the source post, and the source claim was later identified as false rumor.", "cite_spans": [ { "start": 413, "end": 441, "text": "(Allcott and Gentzkow, 2017)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 499, "end": 507, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "social media is naturally labor-intensive and timeconsuming, it is crucial to develop an automatic rumor verification approach to mitigate their harmful effect.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "Rumor verification is typically defined as a task of determining whether the source claim in a conversation thread is false rumor, true rumor, or unverified rumor (Zubiaga et al., 2018a) . In the literature, much work has been done for rumor verification (Liu et al., 2015; Ma et al., 2016; Ruchansky et al., 2017; Chen et al., 2018; Kochkina and Liakata, 2020) . Among them, one appealing line of work focuses on exploiting stance signals to enhance rumor verification , since it is observed that people's stances in reply posts usually provide important clues to rumor verification (e.g., in Fig. 1 , if the source claim is denied or queried by most replies, it is highly probable that the source claim contains misinformation and is false rumor).", "cite_spans": [ { "start": 163, "end": 186, "text": "(Zubiaga et al., 2018a)", "ref_id": "BIBREF37" }, { "start": 255, "end": 273, "text": "(Liu et al., 2015;", "ref_id": "BIBREF17" }, { "start": 274, "end": 290, "text": "Ma et al., 2016;", "ref_id": "BIBREF19" }, { "start": 291, "end": 314, "text": "Ruchansky et al., 2017;", "ref_id": "BIBREF30" }, { "start": 315, "end": 333, "text": "Chen et al., 2018;", "ref_id": "BIBREF4" }, { "start": 334, "end": 361, "text": "Kochkina and Liakata, 2020)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 594, "end": 600, "text": "Fig. 1", "ref_id": null } ], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "This line of work has attracted increasing attention in recent years. A number of multi-task learning (MTL) methods have been proposed to jointly perform stance classification (SC) and rumor verification (RV) over conversation threads, including Sequential LSTM-based methods (Li et al., 2019) , Tree LSTM-based methods (Kumar and Carley, 2019) , and Graph Convolutional Network-based methods . These MTL approaches are mainly constructed upon the MTL2 framework proposed in Kochkina et al. 
(2018) , which aims to first learn shared representations with shared layers in the low level, followed by learning task-specific representations with separate stance-specific layers and rumor-specific layers in the high level.", "cite_spans": [ { "start": 276, "end": 293, "text": "(Li et al., 2019)", "ref_id": "BIBREF16" }, { "start": 320, "end": 344, "text": "(Kumar and Carley, 2019)", "ref_id": "BIBREF14" }, { "start": 475, "end": 497, "text": "Kochkina et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "Although these MTL approaches have shown the usefulness of stance signals to rumor verification, they still suffer from the following shortcomings:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "(1) The first obstacle lies in their single-task models for SC or RV, whose randomly initialized text encoders such as LSTM tend to overfit existing small annotated corpora. With the recent trend of pre-training, many pre-trained text encoders such as BERT have been shown to overcome the overfitting problem and achieve significant improvements in many NLP tasks (Devlin et al., 2019) . However, unlike previous sentence-level tasks, our SC and RV tasks require the language understanding over conversation threads in social media. Since BERT is unable to process arbitrarily long sequences due to its maximum length constraint in the pre-training stage, it remains an open question how to extend BERT to our SC and RV tasks. (2) Another important limitation of previous studies lies in their multi-task learning framework. First, the MTL2 framework used in existing methods fails to explicitly model the inter-task interactions between the stance-specific and rumor-specific layers. Second, although it has been observed that people's stances in reply posts are crucial to rumor verification, the stance distributions predicted from stance-specific layers have not been utilized for rumor veracity prediction in the MTL2 framework.", "cite_spans": [ { "start": 364, "end": 385, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "To address the above two shortcomings, we explore the potential of BERT for stance-aware rumor verification, and propose a new multi-task learning model based on Transformer (Vaswani et al., 2017) , named Coupled Hierarchical Transformer. Our main contributions can be summarized as follows:", "cite_spans": [ { "start": 174, "end": 196, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "\u2022 To extend BERT as our single-task model for SC and RV, we propose a Hierarchical Transformer architecture. Specifically, we first flatten all the posts in a conversation thread into a long sequence, and then decompose them evenly into multiple subthreads, each within the length constraint of BERT. Next, each subthread is encoded with BERT to capture the local interactions be-tween posts within the subthread, and then a Transformer layer is stacked on top of all the subthreads to capture the global interactions between posts in the whole conversation thread. \u2022 To tackle the limitations of the MTL2 frame- work, we first design a Coupled Transformer Module to capture the inter-task interactions between the stance-specific and the rumor-specific layers. 
Moreover, to utilize the stance distributions predicted for each post, we propose to concatenate them with its associated post representations, followed by a post-level attention mechanism to automatically learn the importance of each post for the final rumor verification task.", "cite_spans": [], "ref_spans": [ { "start": 566, "end": 612, "text": "\u2022 To tackle the limitations of the MTL2 frame-", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "Evaluations on two benchmark datasets demonstrate the following: First, compared with existing single-task models, our Hierarchical Transformer brings consistent performance gains on Macro-F 1 for both SC and RV tasks. Second, our Coupled Hierarchical Transformer outperforms the state-ofthe-art multi-task learning approach by 9.2% and 6.3% on Macro-F 1 for the two benchmarks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "1" }, { "text": "Stance Classification: Although stance classification has been well studied in different contexts such as online forums (Hasan and Ng, 2013; Lukasik et al., 2016; Ferreira and Vlachos, 2016; Mohammad et al., 2016) , a recent trend is to study stance classification towards rumors in different social media platforms (Mendoza et al., 2010; Qazvinian et al., 2011) . These studies can be roughly categorized into two groups. One line of work aims to design different features to capture the sequential property of conversation threads Aker et al., 2017; Pamungkas et al., 2018; Zubiaga et al., 2018b; Giasemidis et al., 2018 ). Another line of work attempts to apply recent deep learning models to automatically capture effective stance features (Kochkina et al., 2017; Veyseh et al., 2017) . Our work extends the latter line of work by proposing a hierarchical Transformer based on the recent pre-trained BERT for this task. Moreover, we notice that our BERT-based hierarchical Transformer is similar to the model proposed in (Pappagari et al., 2019), but we want to point out that our model design in the input and output layers is specific to stance classification, which is different from their work. Rumor Verification: Due to the negative impact of various rumors spreading on social media, rumor verification has attracted increasing attention in recent years. Existing approaches to single-task rumor verification generally belong to two groups. The first line of work focuses on either employing a myriad of hand-crafted features (Qazvinian et al., 2011; Yang et al., 2012; Kwon et al., 2013; Ma et al., 2015) including post contents, user profiles, information credibility features (Castillo et al., 2011) , and propagation patterns, or resorting to various kinds of kernels to model the event propagation structure (Wu et al., 2015; Ma et al., 2017) . The second line of work applies variants of several neural network models to automatically capture important features among all the propagated posts (Ma et al., 2016; Ruchansky et al., 2017; Chen et al., 2018) . Different from these studies, the goal in this paper is to leverage stance classification to improve rumor verification with a multi-task learning architecture. Stance-Aware Rumor Verification: The recent advance in rumor verification is to exploit stance information to enhance rumor verification with different multi-task learning approaches. Specifically, Ma et al. (2018a) and Kochkina et al. 
(2018) respectively proposed two multi-task learning architectures to jointly optimize stance classification and rumor verification based on two different variants of RNN, i.e., GRU and LSTM. More recently, Kumar and Carley (2019) proposed another multi-task LSTM model based on tree structures for stanceaware rumor verification. Our work bears the same intuition to these previous studies, and aims to explore the potential of the pre-trained BERT to this multi-task learning task.", "cite_spans": [ { "start": 120, "end": 140, "text": "(Hasan and Ng, 2013;", "ref_id": "BIBREF9" }, { "start": 141, "end": 162, "text": "Lukasik et al., 2016;", "ref_id": "BIBREF18" }, { "start": 163, "end": 190, "text": "Ferreira and Vlachos, 2016;", "ref_id": "BIBREF7" }, { "start": 191, "end": 213, "text": "Mohammad et al., 2016)", "ref_id": "BIBREF25" }, { "start": 316, "end": 338, "text": "(Mendoza et al., 2010;", "ref_id": "BIBREF24" }, { "start": 339, "end": 362, "text": "Qazvinian et al., 2011)", "ref_id": "BIBREF29" }, { "start": 533, "end": 551, "text": "Aker et al., 2017;", "ref_id": "BIBREF0" }, { "start": 552, "end": 575, "text": "Pamungkas et al., 2018;", "ref_id": "BIBREF27" }, { "start": 576, "end": 598, "text": "Zubiaga et al., 2018b;", "ref_id": "BIBREF38" }, { "start": 599, "end": 622, "text": "Giasemidis et al., 2018", "ref_id": "BIBREF8" }, { "start": 744, "end": 767, "text": "(Kochkina et al., 2017;", "ref_id": "BIBREF12" }, { "start": 768, "end": 788, "text": "Veyseh et al., 2017)", "ref_id": "BIBREF32" }, { "start": 1537, "end": 1561, "text": "(Qazvinian et al., 2011;", "ref_id": "BIBREF29" }, { "start": 1562, "end": 1580, "text": "Yang et al., 2012;", "ref_id": "BIBREF35" }, { "start": 1581, "end": 1599, "text": "Kwon et al., 2013;", "ref_id": "BIBREF15" }, { "start": 1600, "end": 1616, "text": "Ma et al., 2015)", "ref_id": "BIBREF20" }, { "start": 1690, "end": 1713, "text": "(Castillo et al., 2011)", "ref_id": "BIBREF3" }, { "start": 1824, "end": 1841, "text": "(Wu et al., 2015;", "ref_id": "BIBREF34" }, { "start": 1842, "end": 1858, "text": "Ma et al., 2017)", "ref_id": "BIBREF21" }, { "start": 2010, "end": 2027, "text": "(Ma et al., 2016;", "ref_id": "BIBREF19" }, { "start": 2028, "end": 2051, "text": "Ruchansky et al., 2017;", "ref_id": "BIBREF30" }, { "start": 2052, "end": 2070, "text": "Chen et al., 2018)", "ref_id": "BIBREF4" }, { "start": 2432, "end": 2449, "text": "Ma et al. (2018a)", "ref_id": "BIBREF22" }, { "start": 2454, "end": 2476, "text": "Kochkina et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we first formulate the task of stance classification (SC) and rumor verification (RV). We then describe our single-task model for SC and RV, followed by introducing our multi-task learning framework for stance-aware rumor verification.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Given a Twitter corpus, let us first use D = {C 1 , C 2 , . . . , C |D| } to denote a set of conversation threads in the corpus. Each thread C i is then assumed to consist of a post with the source claim S 0 and a sequence of reply posts sorted in chronological order, denoted by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "R 1 , R 2 , ... 
, R N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "For the SC task, given an input thread C i , we assume that each post (including a source post and reply posts) in the thread is annotated with a stance label towards the source claim, namely support, deny, query, and comment. Formally, let s = (s 0 , s 1 , ..., s N ) denote the sequence of stance labels, and the goal of SC is to learn a sequence classification function g:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "S 0 , R 1 , . . . , R N \u2192 s 0 , s 1 , . . . , s N .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "For the RV task, we assume that each input thread C i is associated with a rumor label y i , which belongs to one of the three classes, namely false rumor, true rumor, and unverified rumor. The goal of RV is to learn a classification function f :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "C i \u2192 y i .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task Formulation", "sec_num": "3.1" }, { "text": "In this subsection, we present our proposed Hierarchical Transformer, which is a single-task learning framework encompassing the tasks of SC and RV. Fig. 2 illustrates the overview of our model, which mainly consists of four modules, including input thread transformation, local context encoding, global context encoding, and output layers. Motivation: Although BERT has been widely adopted in various NLP tasks (Devlin et al., 2019) , its application to our SC and RV tasks is not trivial. First, most previous studies employed BERT to obtain token-level representations for sentence or paragraph understanding, while our SC and RV tasks primarily require sentence-level representations for conversation thread understanding. Second, due to the maximum length constraint during the pre-training stage, BERT cannot be directly applied to encode arbitrarily long sequences, e.g., conversation threads in our tasks. Although truncating the input sequences is a feasible solution, it will inevitably ignore many posts that might be crucial for rumor verification. Our main idea to address the limitations above is to divide the long sequence of a thread into shorter sequences, each within the length constraint of BERT, and to use a hierarchical model to capture the global interactions at the top layer. Input Thread Transformation: First, to obtain post-level representations, we insert two special tokens, i.e., [CLS] and [SEP] , to the beginning and the end of each post, where the [CLS] token is intended to represent the semantic meaning of the post following it. We then sort the transformed posts in each thread C i in chronological order, followed by flattening them into a long sequence. Second, to eliminate the maximum length constraint, we propose to decompose the flattened sequence into multiple subthreads, so that each subthread has the same number of posts, and the sequence length of each subthread satisfies the length constraint.", "cite_spans": [ { "start": 412, "end": 433, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" }, { "start": 1413, "end": 1418, "text": "[CLS]", "ref_id": null }, { "start": 1423, "end": 1428, "text": "[SEP]", "ref_id": null } ], "ref_spans": [ { "start": 149, "end": 155, "text": "Fig. 
2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Hierarchical Transformer for Stance Classification and Rumor Verification", "sec_num": "3.2" }, { "text": "True/False/Unverified S 1 S n-1 S n S 0 S 2n-1 S (k-1)n S kn-1 Task 2 Stance Label Support/Deny/Query/Comment \u2026... \u2026... \u2026... \u2026...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Task 1 Rumor Label", "sec_num": null }, { "text": "Formally, let C i = (S 0 , R 1 , . . . , R N ) denote the flattened thread, where S 0 is the source post, and R j refers to the j-th reply post. As shown in the bottom of Fig. 2 , we assume that C i is decomposed into k subthreads, each subthread consists of n consecutive posts, and each post consists of m tokens 2 . For the j-th post in the thread C i , let us use", "cite_spans": [], "ref_spans": [ { "start": 171, "end": 177, "text": "Fig. 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "P j = (x j CLS , x j 1 , . . . , x j m\u22122 , x j SEP )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "to denote its input representations, where each token x is represented by summing up its word embeddings, segment embeddings and position embeddings. For the l-th subthread in C i , we use B l = (P l0 , P l1 , . . . , P l(n\u22121) ) to refer to it. Local Context Encoding (LCE): Next, we employ the pre-trained BERT to separately process the k subthreads to capture the local interactions between adjacent posts within each subthread:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h l = BERT(B l ), l = 1, 2, . . . , k", "eq_num": "(1)" } ], "section": "Input Conversation Thread", "sec_num": null }, { "text": "where h l \u2208 R nm\u00d7d is the hidden representation generated for the l-th subthread.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "Global Context Encoding (GCE): To further capture the global interactions between all the posts in the whole conversation thread, we propose to first concatenate the hidden representations of each subthread:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "h = h 1 \u2295 h 2 \u2295 . . . 
\u2295 h k .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "We then feed h to a standard Transformer layer as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "h = LN(h + MH-ATT(h)),", "eq_num": "(2)" } ], "section": "Input Conversation Thread", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H = LN( h + FFN( h)),", "eq_num": "(3)" } ], "section": "Input Conversation Thread", "sec_num": null }, { "text": "where MH-ATT and FFN respectively refer to the multi-head self-attention and the feed-forward network (Vaswani et al., 2017) , and LN refers to layer normalization (Ba et al., 2016) . Output Layers: Based on the global hidden representation H, we further stack the output layers to make predictions for SC and RV, respectively. Specifically, for the SC task, we treat the hidden state of the j-th [CLS] token as the representation for the j-th post, followed by adding a softmax layer to classify its stance towards the source claim:", "cite_spans": [ { "start": 102, "end": 124, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF31" }, { "start": 164, "end": 181, "text": "(Ba et al., 2016)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(s j | H j CLS ) = softmax(W s H j CLS + b s ),", "eq_num": "(4)" } ], "section": "Input Conversation Thread", "sec_num": null }, { "text": "where W s \u2208 R d\u00d74 and b s \u2208 R 4 are learnable parameters. Moreover, for the RV task, we add a softmax layer over the last hidden state of the first [CLS] token for rumor veracity prediction:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y | H 0 CLS ) = softmax(W r H 0 CLS + b r ),", "eq_num": "(5)" } ], "section": "Input Conversation Thread", "sec_num": null }, { "text": "where W r \u2208 R d\u00d73 and b r \u2208 R 3 are weight and bias parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Input Conversation Thread", "sec_num": null }, { "text": "Based on the above single-task model (i.e., Hierarchical Transformer), we describe our proposed multi-task learning (MTL) framework for stance-aware rumor verification in this subsection. Our baseline framework is the MTL2 model proposed in Kochkina et al. (2018) , which assumes that the SC and RV tasks share the low-level neural layers but the high-level layers are specific to each task. As illustrated in Fig. 3 , to adapt our Hierarchical Transformer to this MTL2 framework, we propose to share the input and LCE modules between SC and RV, followed by employing separate GCE and output modules for these two tasks, respectively. Motivation: However, as mentioned before, this baseline MTL framework has two major limitations. First, it fails to consider the inter-task interaction. 
Since the GCE module in SC is supervised to capture salient stance-specific features such as no doubt, agree and fake news, these features can be leveraged to guide the GCE module in RV to capture those important rumor-specific features closely related to stance features. Moreover, since both stance-specific and rumor-specific features are intuitively crucial to RV, it is necessary to effectively integrate them. Second, it ignores the sequential stance labels predicted from the output module in SC. In fact, the stance distributions predicted for each post can capture the temporal evolution of public stances towards the source claim, which may provide indicative clues for veracity prediction. Coupled Transformer Module: To model inter-task interactions, we devise a Coupled Transformer Module with two coupled components in Fig. 4 : a stance-specific Transformer and a cross-task Transformer.", "cite_spans": [ { "start": 227, "end": 249, "text": "Kochkina et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 396, "end": 402, "text": "Fig. 3", "ref_id": "FIGREF1" }, { "start": 1607, "end": 1613, "text": "Fig. 4", "ref_id": null } ], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "Concretely, we first employ a standard Transformer layer (i.e., Eqn (2) and Eqn (3)) to obtain stance-specific representations P in the right channel. Next, to learn the inter-task interactions in the left channel, we design a multi-head stance-aware attention mechanism (MH-SATT) by treating P as queries, and h as keys and values, which essentially leverages stance-specific features in P to guide our model to pay more attention to stance-aware rumor-specific features. Specifically, the i-th head of MH-SATT is defined as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "SATT i (P, h) = softmax( [W q P]^T [W k h] / \u221a(d/z) ) [W v h]^T ,", "eq_num": "(6)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "where {W q , W k , W v } \u2208 R d/z\u00d7d are parameters, and z is the number of heads. 
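To make this concrete, the following is a minimal PyTorch sketch of MH-SATT. It is an illustrative reconstruction, not the exact implementation: it relies on PyTorch's nn.MultiheadAttention, which internally applies the per-head W q , W k , W v projections and the scaled dot-product of Eqn (6), plus a standard output projection.

```python
import torch
import torch.nn as nn

class MHSATT(nn.Module):
    """Illustrative sketch of multi-head stance-aware attention (Eqn (6)):
    stance-specific representations P act as queries, and the shared hidden
    states h act as keys and values. d and z follow the paper's notation."""

    def __init__(self, d: int, z: int):
        super().__init__()
        assert d % z == 0, "hidden size d must be divisible by the number of heads z"
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=z, batch_first=True)

    def forward(self, P: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # P, h: (batch, sequence_length, d); cross-attention from P onto h.
        out, _ = self.attn(query=P, key=h, value=h, need_weights=False)
        return out  # fed into the residual connection and layer norm described next
```
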
Moreover, to integrate stance-specific and rumor-specific features, we propose to add a layer norm together with a residual connection as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V = LN(P + MH-SATT(P, h)).", "eq_num": "(7)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "Finally, we add a feed-forward network and a layer normalization to get the rumor-stance hybrid representations V:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "V = LN( V + FFN( V)).", "eq_num": "(8)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "Post-Level Attention with Stance Labels: To address the second limitation, we propose to concatenate each post's stance distribution and its corresponding hidden representation, followed by a post-level attention layer to automatically learn the importance of each post. Specifically, as shown in Fig. 4 , we first use Eqn (4) to predict the stance distribution of the j-th post in the right channel, denoted by p j . We then treat the hybrid representation of the j-th [CLS] token (i.e., V j CLS ) as the representation of the j-th post, and concatenate it with p j , followed by feeding them to a post-level attention layer to obtain the stance label-aware thread representation U:", "cite_spans": [], "ref_spans": [ { "start": 297, "end": 303, "text": "Fig. 
4", "ref_id": null }, { "start": 684, "end": 795, "text": "SemEval-17 325 5,568 1,004 415 464 3,685 145 74 106 PHEME 2,402 105,354 -1,067 638 697 Table 1", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "uj = v tanh W h (V j CLS \u2295 pj) ,", "eq_num": "(9)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03b1j = exp(uj) N l=1 exp(u l ) ,", "eq_num": "(10)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "U = N j=1 \u03b1j(V j CLS \u2295 pj).", "eq_num": "(11)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "Output Layers: Finally, since V 0 CLS and U can be considered as the token-level thread representation and the post-level thread representation respectively, we propose to concatenate them to predict the veracity label of the source claim:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p(y | V 0 CLS , U) = softmax W (V 0 CLS \u2295 U) + b ,", "eq_num": "(12)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "where W \u2208 R (2d+4)\u00d73 and b \u2208 R 3 are weight and bias terms. Model Training: To optimize all the parameters in our Coupled Hierarchical Transformer, we adopt the alternating optimization strategy to minimize the following objective function, which is a combination of the cross-entropy loss of the two tasks:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "J = \u2212 1 M M i=1 log p(yi | V 0 CLS , U) + 1 M M k=1 N j=1 log p(s j | P j CLS ) ,", "eq_num": "(13)" } ], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "where M and M refer to the number of samples for the tasks of RV and SC, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coupled Hierarchical Transformer for Stance-Aware Rumor Verification", "sec_num": "3.3" }, { "text": "In this section, we first evaluate our single-task model on both stance classification (SC) and rumor verification (RV), followed by evaluating our multitask learning model on RV. 
Finally, we perform further analysis to provide deeper insights into our proposed multi-task learning model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "Dataset: To demonstrate the effectiveness of our proposed approaches, we carry out experiments on two benchmark datasets, i.e., SemEval-2017 and PHEME. Table 1 shows the basic statistics of the two datasets.", "cite_spans": [], "ref_spans": [ { "start": 152, "end": 159, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Experiment Setting", "sec_num": "4.1" }, { "text": "[Table 1: Basic statistics of the SemEval-2017 dataset and the PHEME dataset. SemEval-17: 325 threads, 5,568 posts; stance labels: 1,004 support, 415 deny, 464 query, 3,685 comment; veracity labels: 145 true, 74 false, 106 unverified. PHEME: 2,402 threads, 105,354 posts; no stance labels; veracity labels: 1,067 true, 638 false, 697 unverified.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": null }, { "text": "Specifically, SemEval-2017 is a widely used dataset from SemEval-2017 Task 8, which contains 325 Twitter conversation threads discussing rumors. The dataset has been split into training, development, and test sets, where the former two sets are related to eight events and the test set covers two additional events. Since each thread is annotated with a rumor veracity label and each post in the thread is annotated with its stance towards the source claim, this dataset is used for evaluating both the SC and RV tasks in this work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "4.1" }, { "text": "PHEME is a well-known dataset for RV, which contains 2,402 Twitter conversation threads discussing nine events. For a fair comparison with existing approaches, we perform cross-validation experiments under the leave-one-event-out setting: for each fold, all the threads related to one event are used for testing, and all the threads related to the other eight events are used for training. Following previous studies (Kochkina et al., 2018) , PHEME is only used for evaluating the performance of RV.", "cite_spans": [ { "start": 414, "end": 437, "text": "(Kochkina et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "4.1" }, { "text": "Since the class distributions of the two datasets are imbalanced, we employ Macro-F 1 as the main evaluation metric and accuracy as the secondary metric for both tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "4.1" }, { "text": "Parameter Settings: Our models are based on the pre-trained uncased BERT base model (Devlin et al., 2019) , where the number of BERT layers is 12 and the number of attention heads is z = 12. For both Hierarchical Transformer and Coupled Hierarchical Transformer, we set the learning rate to 5e-5 and the dropout rate to 0.1. Due to memory limitations, for each conversation thread, the number of subthreads is set to k = 6, and the maximum input length of each subthread is set to 512. For each subthread, the number of posts is set to n = 17, and the number of tokens in each post is fixed to m = 30. The batch size is set to 4 for Hierarchical Transformer and 2 for Coupled Hierarchical Transformer. We implement all the models in PyTorch and train them on a 24GB NVIDIA TITAN RTX GPU. ", "cite_spans": [ { "start": 84, "end": 105, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment Setting", "sec_num": "4.1" }, { "text": "In this subsection, we compare our proposed Hierarchical Transformer with existing single-task models for SC and RV, respectively. 
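As a concrete reference for the setup above, the sketch below shows how a conversation thread can be decomposed under these hyperparameters (k = 6 subthreads, n = 17 posts per subthread, m = 30 tokens per post, so each subthread spans n x m = 510 <= 512 tokens), following Section 3.2. It is a simplified illustration that assumes the HuggingFace BertTokenizer; the paper's own preprocessing code is not shown here.

```python
from transformers import BertTokenizer

K, N, M = 6, 17, 30  # subthreads per thread, posts per subthread, tokens per post

def build_subthreads(posts, tokenizer):
    """Flatten a thread into k subthreads of n posts; each post is wrapped in
    [CLS] ... [SEP] and padded/truncated to m tokens (see Section 3.2)."""
    encoded = []
    for post in posts[: K * N]:  # keep at most k*n posts per thread
        ids = tokenizer.encode(post, add_special_tokens=True, max_length=M,
                               truncation=True, padding="max_length")
        encoded.append(ids)
    pad_post = [tokenizer.pad_token_id] * M
    while len(encoded) == 0 or len(encoded) % N != 0:  # pad to whole subthreads
        encoded.append(pad_post)
    # group consecutive posts into subthreads B_1, ..., B_k (each n*m = 510 tokens)
    return [encoded[i : i + N] for i in range(0, len(encoded), N)]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
subthreads = build_subthreads(["Lee Kuan Yew died already.", "Is it true?"], tokenizer)
```

Each subthread B l produced this way is encoded independently by BERT (Eqn (1)), and the global Transformer layer then attends across all k subthreads. 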
Stance Classification (SC): We first consider the following competitive approaches that focus on SC only: (1) SVM is a baseline method that feeds conversation-based and affective-based features to linear SVM (Pamungkas et al., 2018); (2) BranchLSTM is an LSTM-based architecture designed by Kochkina et al. (2018) , which focuses on modeling the sequential branches in each thread;", "cite_spans": [ { "start": 422, "end": 444, "text": "Kochkina et al. (2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Single-Task Models", "sec_num": "4.2.1" }, { "text": "(3) Temporal ATT is an attention-based model proposed by Veyseh et al. 2017, which treats each post's adjacent posts in a conversation timeline as its local context, followed by employing attention mechanism over the local context to learn the importance of each adjacent post; (4) Conversational GCN is the state-of-the-art approach recently proposed by , which leverages graph convolutional network to model the relations between posts in each thread. We report the SC results in Table 2 . First, it is clear to observe that our Hierarchical Transformer model performs much better than all the compared systems on Macro-F 1 . Second, compared with previous approaches, our model shows its strong capability of detecting posts belonging to the support and deny stances. This is crucial for veracity prediction, because the support and deny stances usually provide important clues to identify the true and false rumors respectively (see Fig. 5 ). All these observations demonstrate the general effectiveness of our Hierarchical Transformer model.", "cite_spans": [], "ref_spans": [ { "start": 482, "end": 489, "text": "Table 2", "ref_id": "TABREF5" }, { "start": 937, "end": 943, "text": "Fig. 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Evaluation on Single-Task Models", "sec_num": "4.2.1" }, { "text": "We then consider several competitive systems that focus on RV only: (1) RvNN is a recursive neural network model based on top-down tree structure, which is proposed by Ma et al. (2018b) ; (2) Hierarchical GCN-RNN is a variant of Conversational GCN for veracity prediction;", "cite_spans": [ { "start": 168, "end": 185, "text": "Ma et al. (2018b)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Rumor Verification (RV):", "sec_num": null }, { "text": "(3) PLAN is the state-of-the-art approach recently proposed by Khoo et al. (2020) , which uses a randomly initialized Transformer to encode each conversation thread.", "cite_spans": [ { "start": 63, "end": 81, "text": "Khoo et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Rumor Verification (RV):", "sec_num": null }, { "text": "We report the RV results of compared systems on SemEval-2017 and PHEME in the top part of Table 3 . First, compared with earlier methods for RV, we observe that our Hierarchical Transformer model gains significant improvements, outperforming Hierarchical GCN-RNN by 5.2 and 5.5 absolute percentage points on Macro-F 1 for the two datasets, respectively. Second, even compared with the recent state-of-the-art model PLAN, our model can still bring moderate performance gains on the two datasets. 
Since PLAN is based on randomly initialized Transformer whereas our model is based on pre-trained Transformer (i.e., BERT), this shows the usefulness of employing pre-trained models for RV, which agrees with our first motivation.", "cite_spans": [], "ref_spans": [ { "start": 90, "end": 97, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Rumor Verification (RV):", "sec_num": null }, { "text": "In this subsection, we evaluate the effectiveness of our Coupled Hierarchical Transformer model, and consider several multi-task learning frameworks for stance-aware rumor verification: (1) BranchLSTM+NileTMRG is a pipeline approach, which first trains a BranchLSTM model for SC, followed by a SVM classifier for RV (Kochkina et al., 2018) ; (2) MTL2 is the MTL framework proposed in (Kochkina et al., 2018) , which shares a single LSTM channel but uses two separate output layers for SC and RV, respectively; (3) Hierarchical PSV is a hierarchical model proposed by , which first learns content and stance features via Conversational-GCN, followed by exploiting temporal evolution for RV via Stance-Aware RNN;", "cite_spans": [ { "start": 316, "end": 339, "text": "(Kochkina et al., 2018)", "ref_id": "BIBREF13" }, { "start": 384, "end": 407, "text": "(Kochkina et al., 2018)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation on Multi-Task Models", "sec_num": "4.2.2" }, { "text": "(4) MTL2-Hierarchical Transformer is our adapted MTL2 model which is introduced in Section 3.3. In the bottom part of Table 3 , we can first find that all the multi-task learning models achieve better performance than their corresponding singletask baselines across the two datasets, which verifies the usefulness of stance signals for RV. Second, among all the multi-task learning approaches, it is clear to observe that our Coupled Hierarchical Transformer model consistently achieves the best results on both SemEval-2017 and PHEME, which outperforms the second best method by 2.3 and 2.1 absolute percentage points on Macro-F 1 for the two datasets, respectively. These observations show the superiority of our proposed model over previous multi-task learning methods for stanceaware rumor verification.", "cite_spans": [], "ref_spans": [ { "start": 118, "end": 125, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Evaluation on Multi-Task Models", "sec_num": "4.2.2" }, { "text": "To examine the impact of each key component in our single-task and multi-task approaches, we fur- ther perform ablation study in this subsection.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.3" }, { "text": "As shown in Table 4 , for our proposed Hierarchical Transformer, we can see that if we directly apply BERT to our RV task (i.e., truncating the input thread and removing the global Transformer layer), the performance will drop significantly. This is in line with our first motivation, and also demonstrates the effectiveness of our proposed model. Moreover, for our multi-task learning framework (i.e., Coupled Hierarchical Transformer), the postlevel attention layer shows its indispensable role because of the significant performance drop after removal. 
Meanwhile, replacing our cross-task Transformer with a standard Transformer leads to a moderate performance drop on both datasets, which also suggests its importance to our full model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.3" }, { "text": "To better understand the usefulness of stance signals for veracity prediction in our Coupled Hierarchical Transformer, we first analyze the correlation between predicted stance classes and predicted veracity labels on our two datasets. Since the comment stance is not crucial for rumor verification, we focus on the other three stance classes, i.e., deny, query, and support.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation Between Predicted Stance Labels and Veracity Labels", "sec_num": "4.4" }, { "text": "As shown in Fig. 5 , we can clearly see that true rumor is more closely associated with the support stance, whereas false rumor is generally dominated by the other two stances, deny and query. This suggests that our multi-task learning model has implicitly learnt that the stance signal can provide important clues to rumor verification. Case Study: To provide deeper insights into our Coupled Hierarchical Transformer, we select one representative sample from our test set, and show the stance and veracity prediction results as well as the attention weight of each post learnt in the post-level attention layer. Due to space limitations, we only show the five posts with the top-5 attention weights in the thread.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 18, "text": "Fig. 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Correlation Between Predicted Stance Labels and Veracity Labels", "sec_num": "4.4" }, { "text": "[Figure 6: The selected conversation thread with predicted stance labels and post-level attention weights. Source post: 'These are not timid colours; soldiers back guarding tomb of unknown soldier after today's shooting #standforcanada'. The displayed replies include '@user1@user2 apparently a hoax. best to take tweet down.', '@user3 not a hoax. This is before the shooting', '@user1@user4 I don't believe there are soldiers guarding this area right now. 2/2', and '@user5@user1 who wants to have a \"go\"???', each shown with its predicted stance (e.g., deny, query).]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Correlation Between Predicted Stance Labels and Veracity Labels", "sec_num": null }, { "text": "In Fig. 6 , we can see that although the source claim is supported by some replies, our model learns to assign much higher attention weights to the two posts with the deny stance while largely ignoring the other posts, which may help our model correctly predict its veracity label as false rumor.", "cite_spans": [], "ref_spans": [ { "start": 3, "end": 9, "text": "Fig. 
6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Query", "sec_num": null }, { "text": "In this paper, we first examined the limitations of existing approaches to stance classification (SC) and rumor verification (RV). To tackle these limitations, we first proposed a single-task model (i.e., Hierarchical Transformer) for SC and RV, followed by designing a multi-task learning framework with a Coupled Transformer module to capture intertask interactions and a Post-Level Attention Layer to use stance distributions for the RV task. Experiments on two benchmarks show the effectiveness of our single-task and multi-task learning methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Note that the concept of hierarchy in this paper is different from that inYang et al. (2016), as we use hierarchy to refer to a neural structure that first models the local interactions among posts within each subthread, followed by modeling the global interactions among all the posts in the whole thread.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that for parallel computing, each post is padded or truncated to have the same number of tokens, i.e., m, and each subthread is padded to have the same number of posts, i.e., n.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank three anonymous reviewers for their valuable comments. This research is supported by DSO grant DSOCL18009, the Natural Science Foundation of China (No. 61672288, 62076133, and 62006117), and the Natural Science Foundation of Jiangsu Province for Young Scholars (SBK2020040749) and Distinguished Young Scholars (SBK2020010154).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Simple open stance classification for rumour analysis", "authors": [ { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" }, { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", "volume": "", "issue": "", "pages": "31--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ahmet Aker, Leon Derczynski, and Kalina Bontcheva. 2017. Simple open stance classification for rumour analysis. In Proceedings of the International Con- ference Recent Advances in Natural Language Pro- cessing, RANLP 2017, pages 31-39.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Social media and fake news in the 2016 election", "authors": [ { "first": "Hunt", "middle": [], "last": "Allcott", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Gentzkow", "suffix": "" } ], "year": 2017, "venue": "Journal of economic perspectives", "volume": "31", "issue": "2", "pages": "211--247", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hunt Allcott and Matthew Gentzkow. 2017. Social me- dia and fake news in the 2016 election. 
Journal of economic perspectives, 31(2):211-36.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Information credibility on twitter", "authors": [ { "first": "Carlos", "middle": [], "last": "Castillo", "suffix": "" }, { "first": "Marcelo", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Poblete", "suffix": "" } ], "year": 2011, "venue": "Proceedings of WWW", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlos Castillo, Marcelo Mendoza, and Barbara Poblete. 2011. Information credibility on twitter. In Proceedings of WWW.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Call attention to rumors: Deep attention based recurrent neural networks for early rumor detection", "authors": [ { "first": "Tong", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Xue", "middle": [], "last": "Li", "suffix": "" }, { "first": "Hongzhi", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of PAKDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tong Chen, Xue Li, Hongzhi Yin, and Jun Zhang. 2018. Call attention to rumors: Deep attention based recur- rent neural networks for early rumor detection. In Proceedings of PAKDD.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Geraldine Wong Sak Hoi, and Arkaitz Zubiaga", "authors": [ { "first": "Leon", "middle": [], "last": "Derczynski", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" } ], "year": 2017, "venue": "Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for rumours. In Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Leon Derczynski, Kalina Bontcheva, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Arkaitz Zubiaga. 2017. Semeval-2017 task 8: Rumoureval: Determining rumour veracity and support for ru- mours. In Proceedings of SemEval.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of NAACL.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Emergent: a novel data-set for stance classification", "authors": [ { "first": "William", "middle": [], "last": "Ferreira", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "Vlachos", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "William Ferreira and Andreas Vlachos. 2016. Emer- gent: a novel data-set for stance classification. 
In Proceedings of NAACL.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A semi-supervised approach to message stance classification", "authors": [ { "first": "Georgios", "middle": [], "last": "Giasemidis", "suffix": "" }, { "first": "Nikolaos", "middle": [], "last": "Kaplis", "suffix": "" } ], "year": 2018, "venue": "IEEE TKDE", "volume": "32", "issue": "1", "pages": "1--11", "other_ids": {}, "num": null, "urls": [], "raw_text": "Georgios Giasemidis, Nikolaos Kaplis, Ioannis Agrafi- otis, and Jason Nurse. 2018. A semi-supervised approach to message stance classification. IEEE TKDE, 32(1):1-11.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Stance classification of ideological debates: Data, models, features, and constraints", "authors": [ { "first": "Saidul", "middle": [], "last": "Kazi", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Hasan", "suffix": "" }, { "first": "", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2013, "venue": "Proceedings of IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kazi Saidul Hasan and Vincent Ng. 2013. Stance clas- sification of ideological debates: Data, models, fea- tures, and constraints. In Proceedings of IJCNLP.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Interpretable rumor detection in microblogs by attending to user interactions", "authors": [ { "first": "Serena", "middle": [], "last": "Ling Min", "suffix": "" }, { "first": "Hai", "middle": [ "Leong" ], "last": "Khoo", "suffix": "" }, { "first": "Zhong", "middle": [], "last": "Chieu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Qian", "suffix": "" }, { "first": "", "middle": [], "last": "Jiang", "suffix": "" } ], "year": 2020, "venue": "Proceedings of AAAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ling Min Serena Khoo, Hai Leong Chieu, Zhong Qian, and Jing Jiang. 2020. Interpretable rumor detection in microblogs by attending to user interactions. In Proceedings of AAAI.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Estimating predictive uncertainty for rumour verification models", "authors": [ { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" } ], "year": 2020, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Kochkina and Maria Liakata. 2020. Estimating predictive uncertainty for rumour verification mod- els. In Proceedings of ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Turing at semeval-2017 task 8: Sequential approach to rumour stance classification with branch-lstm", "authors": [ { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2017, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Kochkina, Maria Liakata, and Isabelle Augen- stein. 2017. Turing at semeval-2017 task 8: Sequen- tial approach to rumour stance classification with branch-lstm. 
In Proceedings of SemEval.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "All-in-one: Multi-task learning for rumour verification", "authors": [ { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" } ], "year": 2018, "venue": "Proceedings of COLING", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Elena Kochkina, Maria Liakata, and Arkaitz Zubiaga. 2018. All-in-one: Multi-task learning for rumour verification. In Proceedings of COLING.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Tree LSTMs with convolution units to predict stance and rumor veracity in social media conversations", "authors": [ { "first": "Sumeet", "middle": [], "last": "Kumar", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Carley", "suffix": "" } ], "year": 2019, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sumeet Kumar and Kathleen Carley. 2019. Tree LSTMs with convolution units to predict stance and rumor veracity in social media conversations. In Proceedings of ACL.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Prominent features of rumor propagation in online social media", "authors": [ { "first": "Sejeong", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Meeyoung", "middle": [], "last": "Cha", "suffix": "" }, { "first": "Kyomin", "middle": [], "last": "Jung", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Yajun", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2013, "venue": "Proceedings of ICDM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sejeong Kwon, Meeyoung Cha, Kyomin Jung, Wei Chen, and Yajun Wang. 2013. Prominent features of rumor propagation in online social media. In Proceedings of ICDM.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Rumor detection by exploiting user credibility information, attention and multi-task learning", "authors": [ { "first": "Quanzhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Qiong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Luo", "middle": [], "last": "Si", "suffix": "" } ], "year": 2019, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Quanzhi Li, Qiong Zhang, and Luo Si. 2019. Rumor detection by exploiting user credibility information, attention and multi-task learning. In Proceedings of ACL.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Real-time rumor debunking on twitter", "authors": [ { "first": "Xiaomo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Armineh", "middle": [], "last": "Nourbakhsh", "suffix": "" }, { "first": "Quanzhi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Sameena", "middle": [], "last": "Shah", "suffix": "" } ], "year": 2015, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaomo Liu, Armineh Nourbakhsh, Quanzhi Li, Rui Fang, and Sameena Shah. 2015. Real-time rumor debunking on twitter.
In Proceedings of CIKM.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter", "authors": [ { "first": "Michal", "middle": [], "last": "Lukasik", "suffix": "" }, { "first": "PK", "middle": [], "last": "Srijith", "suffix": "" }, { "first": "Duy", "middle": [], "last": "Vu", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2016, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michal Lukasik, PK Srijith, Duy Vu, Kalina Bontcheva, Arkaitz Zubiaga, and Trevor Cohn. 2016. Hawkes processes for continuous time sequence classification: an application to rumour stance classification in twitter. In Proceedings of ACL.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Detecting rumors from microblogs with recurrent neural networks", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Prasenjit", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Sejeong", "middle": [], "last": "Kwon", "suffix": "" }, { "first": "Bernard", "middle": [ "J" ], "last": "Jansen", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Meeyoung", "middle": [], "last": "Cha", "suffix": "" } ], "year": 2016, "venue": "Proceedings of IJCAI", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, Prasenjit Mitra, Sejeong Kwon, Bernard J Jansen, Kam-Fai Wong, and Meeyoung Cha. 2016. Detecting rumors from microblogs with recurrent neural networks. In Proceedings of IJCAI.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Detect rumors using time series of social context information on microblogging websites", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhongyu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Yueming", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2015, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, Zhongyu Wei, Yueming Lu, and Kam-Fai Wong. 2015. Detect rumors using time series of social context information on microblogging websites. In Proceedings of CIKM.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Detect rumors in microblog posts using propagation structure via kernel learning", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2017. Detect rumors in microblog posts using propagation structure via kernel learning.
In Proceedings of ACL.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Detect rumor and stance jointly by neural multi-task learning", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "Companion Proceedings of The Web Conference 2018", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2018a. Detect rumor and stance jointly by neural multi-task learning. In Companion Proceedings of The Web Conference 2018.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Rumor detection on twitter with tree-structured recursive neural networks", "authors": [ { "first": "Jing", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2018, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jing Ma, Wei Gao, and Kam-Fai Wong. 2018b. Rumor detection on twitter with tree-structured recursive neural networks. In Proceedings of ACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Twitter under crisis: can we trust what we rt?", "authors": [ { "first": "Marcelo", "middle": [], "last": "Mendoza", "suffix": "" }, { "first": "Barbara", "middle": [], "last": "Poblete", "suffix": "" }, { "first": "Carlos", "middle": [], "last": "Castillo", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 3rd Workshop on Social Network Mining and Analysis", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marcelo Mendoza, Barbara Poblete, and Carlos Castillo. 2010. Twitter under crisis: can we trust what we rt? In Proceedings of the 3rd Workshop on Social Network Mining and Analysis, SNAKDD 2009, Paris, France, June 28, 2009.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "SemEval-2016 task 6: Detecting stance in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" }, { "first": "Parinaz", "middle": [], "last": "Sobhani", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Colin", "middle": [], "last": "Cherry", "suffix": "" } ], "year": 2016, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad, Svetlana Kiritchenko, Parinaz Sobhani, Xiaodan Zhu, and Colin Cherry. 2016.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "SemEval-2016 task 6: Detecting stance in tweets", "authors": [], "year": null, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "SemEval-2016 task 6: Detecting stance in tweets. In Proceedings of SemEval.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Stance classification for rumour analysis in twitter: Exploiting affective information and conversation structure", "authors": [ { "first": "EW", "middle": [], "last": "Pamungkas", "suffix": "" }, { "first": "V", "middle": [], "last": "Basile", "suffix": "" }, { "first": "V", "middle": [], "last": "Patti", "suffix": "" } ], "year": 2018, "venue": "2nd International Workshop on Rumours and Deception in Social Media (RDSM 2018)", "volume": "2482", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "EW Pamungkas, V Basile, and V Patti. 2018.
Stance classification for rumour analysis in twitter: Exploiting affective information and conversation structure. In 2nd International Workshop on Rumours and Deception in Social Media (RDSM 2018), volume 2482, pages 1-7.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Hierarchical transformers for long document classification", "authors": [ { "first": "Raghavendra", "middle": [], "last": "Pappagari", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Zelasko", "suffix": "" }, { "first": "Jes\u00fas", "middle": [], "last": "Villalba", "suffix": "" }, { "first": "Yishay", "middle": [], "last": "Carmiel", "suffix": "" }, { "first": "Najim", "middle": [], "last": "Dehak", "suffix": "" } ], "year": 2019, "venue": "2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Raghavendra Pappagari, Piotr Zelasko, Jes\u00fas Villalba, Yishay Carmiel, and Najim Dehak. 2019. Hierarchical transformers for long document classification. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU).", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Rumor has it: Identifying misinformation in microblogs", "authors": [ { "first": "Vahed", "middle": [], "last": "Qazvinian", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Rosengren", "suffix": "" }, { "first": "Dragomir", "middle": [ "R" ], "last": "Radev", "suffix": "" }, { "first": "Qiaozhu", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2011, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vahed Qazvinian, Emily Rosengren, Dragomir R Radev, and Qiaozhu Mei. 2011. Rumor has it: Identifying misinformation in microblogs. In Proceedings of EMNLP.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "CSI: A hybrid deep model for fake news detection", "authors": [ { "first": "Natali", "middle": [], "last": "Ruchansky", "suffix": "" }, { "first": "Sungyong", "middle": [], "last": "Seo", "suffix": "" }, { "first": "Yan", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2017, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Natali Ruchansky, Sungyong Seo, and Yan Liu. 2017. CSI: A hybrid deep model for fake news detection. In Proceedings of CIKM.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Proceedings of NIPS.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "A temporal attentional model for rumor stance classification", "authors": [ { "first": "Amir Pouran", "middle": [], "last": "Ben Veyseh", "suffix": "" }, { "first": "Javid", "middle": [], "last": "Ebrahimi", "suffix": "" }, { "first": "Dejing", "middle": [], "last": "Dou", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Lowd", "suffix": "" } ], "year": 2017, "venue": "Proceedings of CIKM", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amir Pouran Ben Veyseh, Javid Ebrahimi, Dejing Dou, and Daniel Lowd. 2017. A temporal attentional model for rumor stance classification. In Proceedings of CIKM.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Modeling conversation structure and temporal dynamics for jointly predicting rumor stance and veracity", "authors": [ { "first": "Penghui", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Wenji", "middle": [], "last": "Mao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Penghui Wei, Nan Xu, and Wenji Mao. 2019. Modeling conversation structure and temporal dynamics for jointly predicting rumor stance and veracity. In Proceedings of EMNLP.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "False rumors detection on sina weibo by propagation structures", "authors": [ { "first": "Ke", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Song", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Kenny", "middle": [ "Q" ], "last": "Zhu", "suffix": "" } ], "year": 2015, "venue": "Proceedings of ICDE", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ke Wu, Song Yang, and Kenny Q. Zhu. 2015. False rumors detection on sina weibo by propagation structures. In Proceedings of ICDE.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Automatic detection of rumor on sina weibo", "authors": [ { "first": "Fan", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiaohui", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, MDS '12", "volume": "", "issue": "", "pages": "13:1--13:7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fan Yang, Yang Liu, Xiaohui Yu, and Min Yang. 2012. Automatic detection of rumor on sina weibo.
In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics, MDS '12, pages 13:1-13:7.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of NAACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of NAACL.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Detection and resolution of rumours in social media: A survey", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Ahmet", "middle": [], "last": "Aker", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" } ], "year": 2018, "venue": "ACM Computing Surveys (CSUR)", "volume": "51", "issue": "2", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Ahmet Aker, Kalina Bontcheva, Maria Liakata, and Rob Procter. 2018a. Detection and resolution of rumours in social media: A survey. ACM Computing Surveys (CSUR), 51(2):32.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Discourse-aware rumour stance classification in social media using sequential classifiers", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Elena", "middle": [], "last": "Kochkina", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Lukasik", "suffix": "" }, { "first": "Kalina", "middle": [], "last": "Bontcheva", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2018, "venue": "Information Processing & Management", "volume": "54", "issue": "2", "pages": "273--290", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Elena Kochkina, Maria Liakata, Rob Procter, Michal Lukasik, Kalina Bontcheva, Trevor Cohn, and Isabelle Augenstein. 2018b. Discourse-aware rumour stance classification in social media using sequential classifiers.
Information Processing & Management, 54(2):273-290.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Analysing how people orient to and spread rumours in social media by looking at conversational threads", "authors": [ { "first": "Arkaitz", "middle": [], "last": "Zubiaga", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Liakata", "suffix": "" }, { "first": "Rob", "middle": [], "last": "Procter", "suffix": "" }, { "first": "Geraldine", "middle": [], "last": "Wong Sak Hoi", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Tolmie", "suffix": "" } ], "year": 2016, "venue": "PLoS ONE", "volume": "11", "issue": "3", "pages": "e0150989", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arkaitz Zubiaga, Maria Liakata, Rob Procter, Geraldine Wong Sak Hoi, and Peter Tolmie. 2016. Analysing how people orient to and spread rumours in social media by looking at conversational threads. PLoS ONE, 11(3):e0150989.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "Our Single-Task Model (Hierarchical Transformer) for Stance Classification and Rumor Verification." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "Baseline Multi-Task Learning Framework (MTL2) for Stance-Aware Rumor Verification." }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "Correlation between predicted stance classes (y-axis) and predicted rumor labels (x-axis) from our Coupled Hierarchical Transformer on the test sets of our two datasets." }, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "Stance classes and rumor labels predicted by the Coupled Hierarchical Transformer on a test sample from the PHEME dataset." }, "TABREF0": { "type_str": "table", "num": null, "html": null, "content": "
[Figure residue, not table data: input layout of the Hierarchical Transformer. Each post (source tokens S1 ... Sm-2, reply tokens R1 ... Rm-2) is wrapped in [CLS] ... [SEP] tokens; the source post and replies R_1 through R_kn-1 are divided into k subthreads of n posts each, from the 1st Subthread (source plus R_1, ..., R_n-1) to the k-th Subthread (R_(k-1)n, ..., R_kn-1).]
", "text": "" }, "TABREF1": { "type_str": "table", "num": null, "html": null, "content": "
[Figure residue, not table data: output layer of the Hierarchical Transformer. Each of the k subthreads (1st Subthread through k-th Subthread) is encoded by BERT (LCE with BERT); the per-post [CLS] representations yield the stance labels s_0, s_1, ... and the rumor label y.]
", "text": "" }, "TABREF2": { "type_str": "table", "num": null, "html": null, "content": "
[Figure residue, not table data: architecture of the Coupled Hierarchical Transformer. The Input Conversation Thread is split into k subthreads (1st, 2nd, ..., k-th), each encoded by BERT (LCE with BERT). A GCE layer with a Stance-Specific Transformer (self-attention over Q, K, V; add & norm; feed-forward) and a Cross-Task Transformer (stance-aware attention over Q, K, V; add & norm; feed-forward) encodes all subthreads. Post-Level Attention over the post representations p_0, p_1, ..., p_kn-1, weighted by the predicted stance labels s_0, ..., s_kn-1 (Support, Deny, Query, Comment), produces the rumor label y.]
", "text": "" }, "TABREF5": { "type_str": "table", "num": null, "html": null, "content": "
[Table header residue: result columns for the SemEval-2017 Dataset and the PHEME Dataset; the table body was not recovered.]
", "text": "Results of stance classification on the SemEval-2017 dataset." }, "TABREF6": { "type_str": "table", "num": null, "html": null, "content": "", "text": "Results of rumor veracity prediction. Single-Task indicates that stance labels are not used during the training stage. \u2020 indicates that our Coupled Hieararchical Transformer model is significantly better than the best compared system with p-value < 0.05 based on McNemar's significance test." }, "TABREF8": { "type_str": "table", "num": null, "html": null, "content": "
", "text": "Ablation study on the PHEME dataset." } } } }