{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:43:03.236411Z"
},
"title": "Transformers on Sarcasm Detection with Context",
"authors": [
{
"first": "Amardeep",
"middle": [],
"last": "Kumar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Indian Institute of Technology (ISM)",
"location": {
"settlement": "Dhanbad"
}
},
"email": ""
},
{
"first": "Vivek",
"middle": [],
"last": "Anand",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IIIT Hyderabad",
"location": {
"country": "India"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Sarcasm Detection with Context, a shared task of Second Workshop on Figurative Language Processing (co-located with ACL 2020), is study of effect of context on Sarcasm detection in conversations of Social media. We present different techniques and models, mostly based on transformer for Sarcasm Detection with Context. We extended latest pre-trained transformers like BERT, RoBERTa, spanBERT on different task objectives like single sentence classification, sentence pair classification, etc. to understand role of conversation context for sarcasm detection on Twitter conversations and conversation threads from Reddit. We also present our own architecture consisting of LSTM and Transformers to achieve the objective.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Sarcasm Detection with Context, a shared task of Second Workshop on Figurative Language Processing (co-located with ACL 2020), is study of effect of context on Sarcasm detection in conversations of Social media. We present different techniques and models, mostly based on transformer for Sarcasm Detection with Context. We extended latest pre-trained transformers like BERT, RoBERTa, spanBERT on different task objectives like single sentence classification, sentence pair classification, etc. to understand role of conversation context for sarcasm detection on Twitter conversations and conversation threads from Reddit. We also present our own architecture consisting of LSTM and Transformers to achieve the objective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "With advent of Internet and Social media platforms, it is important to know actual sentiments and beliefs of its users, and recognizing Sarcasm is very important for this. We can't always decide if a sentence is sarcastic or not without knowing its context. For example, consider below two sentences S1 and S2. S1: \"What you love on weekends?\" S2: \"I love going to the doctor.\" Just by looking at the 'S2' sentence we can tag the sentence 'S2' as \"not sarcastic\", but imagine this sentence as a reply to the sentence 'S1' , now we would like to tag the sentence 'S2' as \"sarcastic\". Hence it is necessary to know the context of a sentence to know sarcasm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We were provided with conversation threads from two of popular social media, Reddit and Twitter. For this objective We used different pre-trained language model and famous transformer architecture like BERT (Devlin et al., 2019) , RoBERTa (Liu et al., 2019) and spanBERT (Joshi et al., 2020) . We also propose our own architecture made of Transformers (Vaswani et al., 2017) and LSTM (Hochreiter and Schmidhuber, 1997) .",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 239,
"end": 257,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF6"
},
{
"start": 271,
"end": 291,
"text": "(Joshi et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 352,
"end": 374,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 384,
"end": 418,
"text": "(Hochreiter and Schmidhuber, 1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Two types of Datasets were used, corpus from Twitter conversations and conversation threads from Reddit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Twitter Corpus Ghosh et al. (2018) introduced a self label twitter conversations corpus. The sarcastic tweets were collected by relying upon hashtags, like sarcasm, sarcastic, etc., that users assign to their sarcastic tweets. For non-sarcastic they adopted a methodology, according to which Nonsarcastic tweet doesn't contain sarcasm hashtag instead they were having sentiments hashtag like happy, positive, sad, etc.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "Ghosh et al. (2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Reddit Corpus Khodak et al. (2018) collected 1.5 million sarcastic statement and many of nonsarcastic statement from Reddit. They self annotated all of these Reddit corpus manually.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "For both datasets, the training and testing data was provided in json format where each utterance contains the following fields: 1) \"label\" : SAR-CASM or NOT SARCASM. For test data, label was not provided. 2) \"response\" : the sarcastic response, whether a sarcastic Tweet or a Reddit post. 3) \"context\" : the conversation context of the \"response\". 4) \"id\" : unique id to identify and label each data point in test dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "Twitter data set is of 5,000 English Tweets balanced between the \"SARCASM\" and \"NOT SARCASM\" classes and Reddit dataset is of 4,400 Reddit posts balanced between the \"SARCASM\" and \"NOT SARCASM\" classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "2"
},
{
"text": "We used different text pre-processing technique to remove noise from text provided to us. We removed unwanted punctuation, multiple spaces, URL tags, etc. We changed different abbreviations to their proper format, for example: \"I'm\" was changed to \"I am\", \"idk\" to \"I don't know\", etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Process",
"sec_num": "3"
},
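{
"text": "A minimal Python sketch of the kind of cleaning described above; the exact regular expressions and the abbreviation table are illustrative assumptions rather than our exact rules.\n\nimport re\n\n# Illustrative abbreviation table; the mapping actually used was larger.\nABBREVIATIONS = {\"i'm\": \"i am\", \"idk\": \"i do not know\", \"can't\": \"cannot\"}\n\ndef clean(text):\n    text = text.lower()\n    text = re.sub(r\"https?://\\S+|@\\w+\", \" \", text)  # drop URLs and user tags\n    for short, full in ABBREVIATIONS.items():\n        text = text.replace(short, full)  # expand abbreviations\n    text = re.sub(r\"[^a-z0-9' ]\", \" \", text)  # remove unwanted punctuation\n    return re.sub(r\"\\s+\", \" \", text).strip()  # collapse multiple spaces",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Process",
"sec_num": "3"
},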
{
"text": "We experimented with different transformers and pretrained models like BERT , RoBERTa, span-BERT and our own architecture built over these Transformers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For both datasets, each training and testing utterance contains two major fields: \"response\" (i.e, the sarcastic response, whether a sarcastic Tweet or a Reddit post), \"context\" (i.e., the conversation context of the \"response\"). The \"context\" is an ordered list of dialogue, i.e., if the \"context\" contains three elements, \"c1\", \"c2\", \"c3\", in that order, then \"c2\" is a reply to \"c1\" and \"c3\" is a reply to \"c2\". Further, if the sarcastic \"response\" is \"r\", then \"r\" is a reply to \"c3\". For instance, for the following example, \"label\": \"SARCASM\", \"response\": \"Did Kelly just call someone else messy? Baaaahaaahahahaha\", \"context\": [\"X is looking a First Lady should\", \"didn't think it was tailored enough it looked messy\"]. The response tweet, \"Did Kelly...\" is a reply to its immediate context \"didn't think it was tailored...\" which is a reply to \"X is looking...\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For each utterance in datasets, We defined 'response' as response string and concatenation of all the 'context' in reverse order as context string. response string = \"response\" context string = \"c3\" + \"c2\" + \"c1\" We approached this classification task in two ways, first as Single sentence classification task and second as Sentence pair classification tasks. We also experimented single sentence classification only with response string. Throughout the experiment we used 'transformers' library by Hugging Face (Wolf et al., 2019) for experimenting with BERT and RoBERTa models and for span-BERT we used their official released code, and incorporated new methods to suit our task.",
"cite_spans": [
{
"start": 512,
"end": 531,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
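{
"text": "As a concrete illustration, the following sketch builds the two strings from one utterance of the JSON data described in Section 2 (the helper name and the separator are our assumptions).\n\ndef build_inputs(utterance):\n    # 'response' is the reply to be classified; 'context' is the ordered list c1, c2, c3, ...\n    response_string = utterance[\"response\"]\n    # reverse the context so that the most recent turn (e.g. c3) comes first\n    context_string = \" \".join(reversed(utterance[\"context\"]))\n    return response_string, context_string",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},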
{
"text": "As name indicates, to obtain a single sentence for classification, we concatenated response string and context string. Figure 1 represents general architecture of models used in subsection 4.1.1, 4.1.2 and 4.1.3, for single sentence classification where:",
"cite_spans": [],
"ref_spans": [
{
"start": 119,
"end": 127,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},
{
"text": "\u2022 Input : response string + context string \u2022 Transformer: layer could be any of the model from BERT, RoBERTa or spanBERT as transformer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},
{
"text": "\u2022 Embedding output: is representation of \"[CLS]\" token by transformer, used for classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},
{
"text": "\u2022 Feed Forward Network : has multiple dense and dropout layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},
{
"text": "\u2022 Softmax: classifier for binary classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},
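{
"text": "A minimal sketch of this single sentence setup using the Hugging Face 'transformers' library; the head sizes and dropout rates are illustrative assumptions, and the transformer layer can be swapped for any of the checkpoints discussed in 4.1.1-4.1.3.\n\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass SingleSentenceClassifier(nn.Module):\n    def __init__(self, name=\"bert-large-uncased\", hidden=1024):\n        super().__init__()\n        self.transformer = AutoModel.from_pretrained(name)\n        # feed forward network: dense and dropout layers with a 2-way output\n        self.head = nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden, 256), nn.ReLU(), nn.Dropout(0.1), nn.Linear(256, 2))\n\n    def forward(self, input_ids, attention_mask):\n        out = self.transformer(input_ids=input_ids, attention_mask=attention_mask)\n        cls = out.last_hidden_state[:, 0]  # embedding output: the \"[CLS]\" representation\n        return self.head(cls)  # logits; softmax is applied in the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Single sentence Classification Task",
"sec_num": "4.1"
},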
{
"text": "Devlin et al. 2019introduced Bidirectional Encoder Representations from Transformers(BERT). BERT's key technical innovation is applying bidirectional training of Transformers to language modeling. BERT is pre-trained on two objectives, Masked language modeling (MLM) and next sentence prediction (NSP). We used 'bert-base-uncased' and 'bert-largeuncased' pretrained model in transformer layer. 'bert-base-uncased' has 12-layers, 768-hidden state size, 12-attention heads and 110M parameters, with each hidden state of (max seq len, 768) size and embedding output of 768 length. 'bert-largeuncased' has 24-layers, 1024-hidden state size, 16attention heads and 340M parameters. it has each hidden state of (max seq len, 1024) size and embedding output of 1024 length. 'bert-large-uncased' gave better results than 'bert-base-uncased' on both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT",
"sec_num": "4.1.1"
},
{
"text": "Joshi et al. (2020) introduced pretraining method to represent and predict span instead of words. This approach is different from BERT based pretraining methods in two ways:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SpanBERT",
"sec_num": "4.1.2"
},
{
"text": "1. Masking contiguous random spans instead of masking random tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SpanBERT",
"sec_num": "4.1.2"
},
{
"text": "2. Span Boundary Objective: Predicting entire content of masked span with help of hidden states of boundary token of masked span.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SpanBERT",
"sec_num": "4.1.2"
},
{
"text": "We used 'spanbert-base-cased' and 'spanbert-largecased' pretrained model as transformer layer. 'spanbert-base-cased' has 12-layers, 768-hidden state size, 12-attention heads and 110M parameters, with each hidden state of (max seq len, 768) size and embedding output of 768 length. 'spanbertlarge-cased' has 24-layers, 1024-hidden state size, 16-attention heads and 340M parameters. It has each hidden state of (max seq len, 1024) size and embedding output of 1024 length. 'spanbertlarge-cased' gave better results than 'spanbertbase-cased', 'bert-base-uncased' and 'bert-largeuncased' respectively on both datasets .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "SpanBERT",
"sec_num": "4.1.2"
},
{
"text": "Liu et al. 2019presented a replication study of BERT pre-training, related to impact of key hyperparameter and size of training data on which it was pre-trained, and found BERT as significantly untrained. We tried only roberta large models, which has 24layers, 1024-hidden state size, 24-attention heads and 355M parameters. it has each hidden state of (max seq len, 1024) size and embedding output of 1024 length. 'roberta-large' gave better results than all previous models. \u2022 Concatenation: layer concatenate two or more tensors along suitable axis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "RoBERTa",
"sec_num": "4.1.3"
},
{
"text": "\u2022 LSTM: Hochreiter and Schmidhuber (1997) In this model, last two hidden states are concatenated and passed through LSTM to get more contextual representation of text. Later output of LSTM and embedding output of transformer is concatenated and fed through feed forward Neural network for classification. We tried 'bert-large-uncased' and 'Roberta large' as transformer layer in this architecture. 'Roberta large' gave best f1-score among all. This model also gave best result on classification using only 'response string' as input on both datsets.",
"cite_spans": [
{
"start": 8,
"end": 41,
"text": "Hochreiter and Schmidhuber (1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM over Transformer",
"sec_num": "4.1.4"
},
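{
"text": "A sketch of the LSTM over Transformer model; the concatenation axis, the LSTM size and the classification head are assumptions, since these details are not spelled out above.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass LSTMOverTransformer(nn.Module):\n    def __init__(self, name=\"roberta-large\", hidden=1024, lstm_hidden=256):\n        super().__init__()\n        self.transformer = AutoModel.from_pretrained(name, output_hidden_states=True)\n        self.lstm = nn.LSTM(2 * hidden, lstm_hidden, batch_first=True)\n        self.head = nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden + lstm_hidden, 2))\n\n    def forward(self, input_ids, attention_mask):\n        out = self.transformer(input_ids=input_ids, attention_mask=attention_mask)\n        hs = out.hidden_states  # hs[-1] and hs[-2] are HS[-1] and HS[-2] in Figure 2\n        x = torch.cat([hs[-1], hs[-2]], dim=-1)  # concatenate the last two hidden states\n        _, (h_n, _) = self.lstm(x)  # LSTM summary of the concatenated states\n        cls = out.last_hidden_state[:, 0]  # embedding output of the transformer\n        return self.head(torch.cat([cls, h_n[-1]], dim=-1))  # feed forward classifier",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSTM over Transformer",
"sec_num": "4.1.4"
},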
{
"text": "In this Sentence Pair classification task, we give a pair of text as input for binary classification. We present following two models:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence pair Classification task",
"sec_num": "4.2"
},
{
"text": "Our architecture was inspired from two things, first is intuition that it may be a case that only 'response' is Sarcastic but not concatenation of 'response' and 'context', and second, Siamese network (Mueller and Thyagarajan, 2016) . Figure 3 represents our Siamese Transformer, where: 'input 1' is response string, 'input 2' is response string + context string, 'Softmax' is last softmax layer intuitively work as 'OR' logical gate.",
"cite_spans": [
{
"start": 201,
"end": 232,
"text": "(Mueller and Thyagarajan, 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 235,
"end": 243,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Siamese Transformer",
"sec_num": "4.2.1"
},
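{
"text": "A sketch of the Siamese Transformer; sharing one encoder between the two branches and concatenating the two representations before the final layer are assumptions about details not given above.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass SiameseTransformer(nn.Module):\n    def __init__(self, name=\"roberta-large\", hidden=1024):\n        super().__init__()\n        self.encoder = AutoModel.from_pretrained(name)  # shared weights for both inputs\n        self.head = nn.Linear(2 * hidden, 2)\n\n    def forward(self, ids1, mask1, ids2, mask2):\n        # input 1: response string; input 2: response string + context string\n        cls1 = self.encoder(input_ids=ids1, attention_mask=mask1).last_hidden_state[:, 0]\n        cls2 = self.encoder(input_ids=ids2, attention_mask=mask2).last_hidden_state[:, 0]\n        return self.head(torch.cat([cls1, cls2], dim=-1))  # softmax applied in the loss",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Siamese Transformer",
"sec_num": "4.2.1"
},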
{
"text": "We expected improvement in result over previous models, but it didn't happen. This also establishes that context is necessary for Sarcasm Detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Siamese Transformer",
"sec_num": "4.2.1"
},
{
"text": "Length of context string is larger than response string so it might be that their combined contextual representation is dominated by 'context string'. To overcome this, we pass them through different transformers to get their individual representation of equal size. These representation are then concatenated and passed through Bi-LSTM to get contextual representation of the Figure 4 represents our architecture of Dual transformer, where: 'input 1' is response string, 'input 2' is context string, 'BiL-STM' is bidirectional LSTM (Schuster and Paliwal, 1997) Last hidden state output of both transformers are concatenated and passed over Bi-LSTM to get a better contextual, output of which is passed through a classification layer. This model didn't give better results as expected. We guessed lack of training data as one of the possible reason. Table 2 depict results of all models and tasks on Twitter and Reddit datasets respectively. In both table 'SS' denotes single sentence classification task, 'LoT' denotes LSTM over Transformer(4.1.4), 'DT' denotes Dual Transformer(4.2.2) and 'ST' denotes Siamese Transformer (4.2.1).",
"cite_spans": [
{
"start": 533,
"end": 561,
"text": "(Schuster and Paliwal, 1997)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 377,
"end": 385,
"text": "Figure 4",
"ref_id": "FIGREF3"
},
{
"start": 850,
"end": 857,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dual Transformer",
"sec_num": "4.2.2"
},
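{
"text": "A sketch of the Dual Transformer; using two separate encoders and concatenating their outputs along the sequence axis before the Bi-LSTM are assumptions about details not given above.\n\nimport torch\nimport torch.nn as nn\nfrom transformers import AutoModel\n\nclass DualTransformer(nn.Module):\n    def __init__(self, name=\"roberta-large\", hidden=1024, lstm_hidden=256):\n        super().__init__()\n        self.response_encoder = AutoModel.from_pretrained(name)  # input 1: response string\n        self.context_encoder = AutoModel.from_pretrained(name)  # input 2: context string\n        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True, bidirectional=True)\n        self.head = nn.Linear(2 * lstm_hidden, 2)\n\n    def forward(self, resp_ids, resp_mask, ctx_ids, ctx_mask):\n        r = self.response_encoder(input_ids=resp_ids, attention_mask=resp_mask).last_hidden_state\n        c = self.context_encoder(input_ids=ctx_ids, attention_mask=ctx_mask).last_hidden_state\n        x = torch.cat([r, c], dim=1)  # (batch, seq_r + seq_c, hidden)\n        _, (h_n, _) = self.bilstm(x)  # final states of both directions\n        summary = torch.cat([h_n[-2], h_n[-1]], dim=-1)  # (batch, 2 * lstm_hidden)\n        return self.head(summary)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Transformer",
"sec_num": "4.2.2"
},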
{
"text": "Using only 'response string' (i.e without using context) we got best f1-score of 67.50 and 63.2 on Twitter and Reddit datsets respectively. Using response as well as context, LSTM over Transformer model (sub-section 4.1.4) with 'robert-large' as transformer layer performed best. We tried different maximum sequence legth, 126 on Twitter conversation and 80 on Reddit Conversation text gave the best results. We didn't benchmark our results with Ghosh et al. (2018) , Zhang et al. (2016) , etc. related works, becuase those models were trained on different datasets. To do a fair comparison, we would have to re-train those models on our dataset, but due to computational constraints we were unable to do this.",
"cite_spans": [
{
"start": 446,
"end": 465,
"text": "Ghosh et al. (2018)",
"ref_id": "BIBREF1"
},
{
"start": 468,
"end": 487,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dual Transformer",
"sec_num": "4.2.2"
},
{
"text": "Most of the existing works are on detecting sarcasm without considering context. Joshi et al. (2016) , Zhang et al. (2016) , Ghosh et al. (2018) have considered context and utterances separately for sarcasm detection and showed how context is helpful in sarcasm detection.",
"cite_spans": [
{
"start": 81,
"end": 100,
"text": "Joshi et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 103,
"end": 122,
"text": "Zhang et al. (2016)",
"ref_id": "BIBREF11"
},
{
"start": 125,
"end": 144,
"text": "Ghosh et al. (2018)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "To conclude, we showed effective method for sarcasm detection and how much context is necessary for it. We didn't use any dataset (reddit and twitter) specific pre-processing or hyperparameter tuning in order to evaluate effectiveness of models across various types of data. In future, we would like to experiment with supplementing external data or merging different types of data on this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Sarcasm analysis using conversation context",
"authors": [
{
"first": "Debanjan",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"R"
],
"last": "Fabbri",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
}
],
"year": 2018,
"venue": "Computational Linguistics",
"volume": "44",
"issue": "4",
"pages": "755--792",
"other_ids": {
"DOI": [
"10.1162/coli_a_00336"
]
},
"num": null,
"urls": [],
"raw_text": "Debanjan Ghosh, Alexander R. Fabbri, and Smaranda Muresan. 2018. Sarcasm analysis using conversa- tion context. Computational Linguistics, 44(4):755- 792.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural Computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Harnessing sequence labeling for sarcasm detection in dialogue from TV series 'Friends",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Vaibhav",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "Carman",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "146--155",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1015"
]
},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Vaibhav Tripathi, Pushpak Bhat- tacharyya, and Mark J. Carman. 2016. Harnessing sequence labeling for sarcasm detection in dialogue from TV series 'Friends'. In Proceedings of The 20th SIGNLL Conference on Computational Natu- ral Language Learning, pages 146-155, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Spanbert: Improving pre-training by representing and predicting spans",
"authors": [
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
}
],
"year": 2020,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "8",
"issue": "",
"pages": "64--77",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00300"
]
},
"num": null,
"urls": [],
"raw_text": "Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S. Weld, Luke Zettlemoyer, and Omer Levy. 2020. Spanbert: Improving pre-training by representing and predicting spans. Transactions of the Associa- tion for Computational Linguistics, 8:64-77.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A large self-annotated corpus for sarcasm",
"authors": [
{
"first": "Mikhail",
"middle": [],
"last": "Khodak",
"suffix": ""
},
{
"first": "Nikunj",
"middle": [],
"last": "Saunshi",
"suffix": ""
},
{
"first": "Kiran",
"middle": [],
"last": "Vodrahalli",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikhail Khodak, Nikunj Saunshi, and Kiran Vodra- halli. 2018. A large self-annotated corpus for sar- casm. In Proceedings of the Eleventh International Conference on Language Resources and Evalua- tion (LREC-2018), Miyazaki, Japan. European Lan- guages Resources Association (ELRA).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Roberta: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Siamese recurrent architectures for learning sentence similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "thirtieth AAAI conference on artificial intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In thirtieth AAAI conference on artificial intelli- gence.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Bidirectional recurrent neural networks",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Kuldip",
"middle": [
"K"
],
"last": "Paliwal",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE transactions on Signal Processing",
"volume": "45",
"issue": "11",
"pages": "2673--2681",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Schuster and Kuldip K Paliwal. 1997. Bidirec- tional recurrent neural networks. IEEE transactions on Signal Processing, 45(11):2673-2681.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R'emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface's trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Tweet sarcasm detection using deep neural network",
"authors": [
{
"first": "Meishan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Guohong",
"middle": [],
"last": "Fu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2449--2460",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Meishan Zhang, Yue Zhang, and Guohong Fu. 2016. Tweet sarcasm detection using deep neural network. In Proceedings of COLING 2016, the 26th Inter- national Conference on Computational Linguistics: Technical Papers, pages 2449-2460, Osaka, Japan. The COLING 2016 Organizing Committee.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Transformer model for single sentence classification"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Architecture of model 4.1.4To improvise, We modified previously used model architecture.Figure 2 represents architecture of our successful improvised model, where: \u2022 HS[-1], HS[-2]: represent last hidden state and second last hidden state output by transformer respectively."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Architecture of Proposed Siamese Transformer"
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Architecture of Dual Transformer model combination."
},
"TABREF1": {
"content": "<table><tr><td>Model</td><td>P</td><td>R</td><td>f1</td></tr><tr><td colspan=\"4\">response only SS 64.2 64.7 63.83</td></tr><tr><td>bert-base SS</td><td colspan=\"3\">66.5 66.6 66.47</td></tr><tr><td>bert-large SS</td><td colspan=\"3\">67.3 67.3 67.27</td></tr><tr><td>roberta-large SS</td><td colspan=\"3\">67.5 67.5 67.49</td></tr><tr><td colspan=\"4\">spanbert-base SS 66.9 67.3 66.75</td></tr><tr><td colspan=\"4\">spanbert-large SS 67.4 67.4 67.36</td></tr><tr><td>bert-large LoT</td><td colspan=\"3\">68.1 68.1 68.0</td></tr><tr><td colspan=\"4\">roberta-large LoT 69.3 69.9 69.11</td></tr><tr><td>roberta-large ST</td><td colspan=\"3\">67.9 68.1 67.86</td></tr><tr><td>roberta-large DT</td><td colspan=\"3\">68.1 68.1 68.1</td></tr></table>",
"type_str": "table",
"text": "Result on Twitter",
"num": null,
"html": null
},
"TABREF2": {
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null,
"html": null
},
"TABREF3": {
"content": "<table/>",
"type_str": "table",
"text": "",
"num": null,
"html": null
}
}
}
}