{
"paper_id": "S18-1005",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:44:28.315340Z"
},
"title": "SemEval-2018 Task 3: Irony Detection in English Tweets",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": "",
"affiliation": {
"laboratory": "LT3 Language and Translation Technology Team Ghent University Groot",
"institution": "",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent"
}
},
"email": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": "",
"affiliation": {
"laboratory": "LT3 Language and Translation Technology Team Ghent University Groot",
"institution": "",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent"
}
},
"email": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": "",
"affiliation": {
"laboratory": "LT3 Language and Translation Technology Team Ghent University Groot",
"institution": "",
"location": {
"addrLine": "Brittanni\u00eblaan 45",
"postCode": "9000",
"settlement": "Ghent"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents the first shared task on irony detection: given a tweet, automatic natural language processing systems should determine whether the tweet is ironic (Task A) and which type of irony (if any) is expressed (Task B). The ironic tweets were collected using irony-related hashtags (i.e. #irony, #sarcasm, #not) and were subsequently manually annotated to minimise the amount of noise in the corpus. Prior to distributing the data, hashtags that were used to collect the tweets were removed from the corpus. For both tasks, a training corpus of 3,834 tweets was provided, as well as a test set containing 784 tweets. Our shared tasks received submissions from 43 teams for the binary classification Task A and from 31 teams for the multiclass Task B. The highest classification scores obtained for both subtasks are respectively F 1 = 0.71 and F 1 = 0.51 and demonstrate that fine-grained irony classification is much more challenging than binary irony detection.",
"pdf_parse": {
"paper_id": "S18-1005",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents the first shared task on irony detection: given a tweet, automatic natural language processing systems should determine whether the tweet is ironic (Task A) and which type of irony (if any) is expressed (Task B). The ironic tweets were collected using irony-related hashtags (i.e. #irony, #sarcasm, #not) and were subsequently manually annotated to minimise the amount of noise in the corpus. Prior to distributing the data, hashtags that were used to collect the tweets were removed from the corpus. For both tasks, a training corpus of 3,834 tweets was provided, as well as a test set containing 784 tweets. Our shared tasks received submissions from 43 teams for the binary classification Task A and from 31 teams for the multiclass Task B. The highest classification scores obtained for both subtasks are respectively F 1 = 0.71 and F 1 = 0.51 and demonstrate that fine-grained irony classification is much more challenging than binary irony detection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The development of the social web has stimulated the use of figurative and creative language, including irony, in public (Ghosh et al., 2015) . From a philosophical/psychological perspective, discerning the mechanisms that underlie ironic speech improves our understanding of human reasoning and communication, and more and more, this interest in understanding irony also emerges in the machine learning community (Wallace, 2015) . Although an unanimous definition of irony is still lacking in the literature, it is often identified as a trope whose actual meaning differs from what is literally enunciated. Due to its nature, irony has important implications for natural language processing (NLP) tasks, which aim to understand and produce human language. In fact, automatic irony detection has a large potential for various applications in the domain of text mining, especially those that require semantic analysis, such as author profiling, detecting online harassment, and, maybe the most well-known example, sentiment analysis.",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Ghosh et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 414,
"end": 429,
"text": "(Wallace, 2015)",
"ref_id": "BIBREF48"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Due to its importance in industry, sentiment analysis research is abundant and significant progress has been made in the field (e.g. in the context of SemEval (Rosenthal et al., 2017) ). However, the SemEval-2014 shared task Sentiment Analysis in Twitter (Rosenthal et al., 2014) demonstrated the impact of irony on automatic sentiment classification by including a test set of ironic tweets. The results revealed that, while sentiment classification performance on regular tweets reached up to F 1 = 0.71, scores on the ironic tweets varied between F 1 = 0.29 and F 1 = 0.57. In fact, it has been demonstrated that several applications struggle to maintain high performance when applied to ironic text (e.g. Liu, 2012; Maynard and Greenwood, 2014; Ghosh and Veale, 2016) . Like other types of figurative language, ironic text should not be interpreted in its literal sense; it requires a more complex understanding based on associations with the context or world knowledge. Examples 1 and 2 are sentences that regular sentiment analysis systems would probably classify as positive, whereas the intended sentiment is undeniably negative.",
"cite_spans": [
{
"start": 159,
"end": 183,
"text": "(Rosenthal et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 255,
"end": 279,
"text": "(Rosenthal et al., 2014)",
"ref_id": "BIBREF41"
},
{
"start": 709,
"end": 719,
"text": "Liu, 2012;",
"ref_id": "BIBREF26"
},
{
"start": 720,
"end": 748,
"text": "Maynard and Greenwood, 2014;",
"ref_id": "BIBREF27"
},
{
"start": 749,
"end": 771,
"text": "Ghosh and Veale, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(1) I feel so blessed to get ocular migraines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "(2) Go ahead drop me hate, I'm looking forward to it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For human readers, it is clear that the author of example 1 does not feel blessed at all, which can be inferred from the contrast between the positive sentiment expression \"I feel so blessed\", and the negative connotation associated with getting ocular migraines. Although such connotative infor-mation is easily understood by most people, it is difficult to access by machines. Example 2 illustrates implicit cyberbullying; instances that typically lack explicit profane words and where the offense is often made through irony. Similarly to example 1, a contrast can be perceived between a positive statement (\"I'm looking forward to\") and a negative situation (i.e. experiencing hate). To be able to interpret the above examples correctly, machines need, similarly to humans, to be aware that irony is used, and that the intended sentiment is opposite to what is literally enunciated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The irony detection task 1 we propose is formulated as follows: given a single post (i.e. a tweet), participants are challenged to automatically determine whether irony is used and which type of irony is expressed. We thus defined two subtasks:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Task A describes a binary irony classification task to define, for a given tweet, whether irony is expressed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Task B describes a multiclass irony classification task to define whether it contains a specific type of irony (verbal irony by means of a polarity clash, situational irony, or another type of verbal irony, see further) or is not ironic. Concretely, participants should define which one out of four categories a tweet contains: ironic by clash, situational irony, other verbal irony or not ironic.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is important to note that by a tweet, we understand the actual text it contains, without metadata (e.g. user id, time stamp, location). Although such metadata could help to recognise irony, the objective of this task is to learn, at message level, how irony is linguistically realised.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As described by Joshi et al. (2017) , recent approaches to irony can roughly be classified as either rule-based or (supervised and unsupervised) machine learning-based. While rule-based approaches mostly rely upon lexical information and require no training, machine learning invariably makes use of training data and exploits different types of information sources (or features), such as bags of words, syntactic patterns, sentiment information or semantic relatedness.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "Joshi et al. (2017)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Irony Detection",
"sec_num": "2"
},
{
"text": "Previous work on irony detection mostly applied supervised machine learning mainly exploiting lexical features. Other features often include punctuation mark/interjection counts (e.g Davidov et al., 2010) , sentiment lexicon scores (e.g. Bouazizi and Ohtsuki, 2016; Far\u00edas et al., 2016) , emoji (e.g. Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011) , writing style, emotional scenarios, part of speechpatterns (e.g. Reyes et al., 2013) , and so on. Also beneficial for this task are combinations of different feature types (e.g. Van Hee et al., 2016b), author information (e.g. Bamman and Smith, 2015) , features based on (semantic or factual) oppositions (e.g Karoui et al., 2015; Gupta and Yang, 2017; Van Hee, 2017) and even eye-movement patterns of human readers (Mishra et al., 2016) . While a wide range of features are and have been used extensively over the past years, deep learning techniques have recently gained increasing popularity for this task. Such systems often rely on semantic relatedness (i.e. through word and character embeddings (e.g. Amir et al., 2016; Ghosh and Veale, 2016) ) deduced by the network and reduce feature engineering efforts.",
"cite_spans": [
{
"start": 183,
"end": 204,
"text": "Davidov et al., 2010)",
"ref_id": "BIBREF10"
},
{
"start": 238,
"end": 265,
"text": "Bouazizi and Ohtsuki, 2016;",
"ref_id": "BIBREF4"
},
{
"start": 266,
"end": 286,
"text": "Far\u00edas et al., 2016)",
"ref_id": "BIBREF11"
},
{
"start": 301,
"end": 330,
"text": "Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011)",
"ref_id": "BIBREF17"
},
{
"start": 398,
"end": 417,
"text": "Reyes et al., 2013)",
"ref_id": "BIBREF37"
},
{
"start": 560,
"end": 583,
"text": "Bamman and Smith, 2015)",
"ref_id": "BIBREF1"
},
{
"start": 643,
"end": 663,
"text": "Karoui et al., 2015;",
"ref_id": "BIBREF23"
},
{
"start": 664,
"end": 685,
"text": "Gupta and Yang, 2017;",
"ref_id": "BIBREF18"
},
{
"start": 686,
"end": 700,
"text": "Van Hee, 2017)",
"ref_id": "BIBREF44"
},
{
"start": 749,
"end": 770,
"text": "(Mishra et al., 2016)",
"ref_id": "BIBREF28"
},
{
"start": 1041,
"end": 1059,
"text": "Amir et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 1060,
"end": 1082,
"text": "Ghosh and Veale, 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Irony Detection",
"sec_num": "2"
},
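To make the feature families surveyed above concrete, the following minimal Python sketch extracts a few of them (punctuation counts, interjections, a sentiment-lexicon score) from a single tweet. The tiny lexicon, interjection set and helper name are hypothetical illustrations, not any participant's actual system.

```python
import re

# Hypothetical toy lexicon standing in for real resources such as AFINN.
TOY_LEXICON = {"love": 2, "great": 2, "awful": -2, "hate": -2}
INTERJECTIONS = {"wow", "yay", "ugh", "oh"}

def handcrafted_features(tweet: str) -> dict:
    tokens = re.findall(r"\w+|[!?]", tweet.lower())
    return {
        "num_exclamations": tokens.count("!"),                        # punctuation counts
        "num_interjections": sum(t in INTERJECTIONS for t in tokens),
        "lexicon_score": sum(TOY_LEXICON.get(t, 0) for t in tokens),  # sentiment lexicon score
        "num_hashtags": tweet.count("#"),
        "num_tokens": len(tokens),
    }

print(handcrafted_features("I just love when you test my patience!!"))
```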
{
"text": "Regardless of the methodology and algorithm used, irony detection often involves binary classification where irony is defined as instances that express the opposite of what is meant (e.g. Riloff et al., 2013; Joshi et al., 2017) . Twitter has been a popular data genre for this task, as it is easily accessible and provides a rapid and convenient method to find (potentially) ironic messages by looking for hashtags like #irony, #not and #sarcasm. As a consequence, irony detection research often relies on automatically annotated (i.e. based on irony-related hashtags) corpora, which contain noise (Kunneman et al., 2015; Van Hee, 2017) .",
"cite_spans": [
{
"start": 188,
"end": 208,
"text": "Riloff et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 209,
"end": 228,
"text": "Joshi et al., 2017)",
"ref_id": "BIBREF21"
},
{
"start": 599,
"end": 622,
"text": "(Kunneman et al., 2015;",
"ref_id": "BIBREF24"
},
{
"start": 623,
"end": 637,
"text": "Van Hee, 2017)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Automatic Irony Detection",
"sec_num": "2"
},
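As an illustration of the hashtag-based collection just described, here is a small sketch that keeps candidate tweets containing a seed hashtag and strips those hashtags before distribution. It assumes whitespace-separated hashtags, and the helper names are hypothetical.

```python
# Seed hashtags used to collect candidate ironic tweets.
IRONY_HASHTAGS = {"#irony", "#sarcasm", "#not"}

def is_candidate(tweet: str) -> bool:
    # Keep tweets containing a seed hashtag anywhere (whitespace-tokenised).
    return any(token in IRONY_HASHTAGS for token in tweet.lower().split())

def strip_seed_hashtags(tweet: str) -> str:
    # Remove the collection hashtags, as was done before distributing the data.
    return " ".join(t for t in tweet.split() if t.lower() not in IRONY_HASHTAGS)

tweet = "Had no sleep and have got school now #not happy"
if is_candidate(tweet):
    print(strip_seed_hashtags(tweet))  # -> "Had no sleep and have got school now happy"
```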
{
"text": "We propose two subtasks A and B for the automatic detection of irony on Twitter, for which we provide more details below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description",
"sec_num": "3"
},
{
"text": "The first subtask is a two-class (or binary) classification task where submitted systems have to predict whether a tweet is ironic or not. The following examples respectively present an ironic and nonironic tweet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task A: Binary Irony Classification",
"sec_num": "3.1"
},
{
"text": "(3) I just love when you test my patience!! #not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task A: Binary Irony Classification",
"sec_num": "3.1"
},
{
"text": "(4) Had no sleep and have got school now #not happy",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task A: Binary Irony Classification",
"sec_num": "3.1"
},
{
"text": "Note that the examples contain irony-related hashtags (e.g. #irony) that were removed from the corpus prior to distributing the data for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task A: Binary Irony Classification",
"sec_num": "3.1"
},
{
"text": "The second subtask is a multiclass classification task where submitted systems have to predict one out of four labels describing i) verbal irony realised through a polarity contrast, ii) verbal irony without such a polarity contrast (i.e. other verbal irony), iii) descriptions of situational irony, and iv) non-irony. The following paragraphs present a description and a number of examples for each label.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task B: Multiclass Irony Classification",
"sec_num": "3.2"
},
{
"text": "This category applies to instances containing an evaluative expression whose polarity (positive, negative) is inverted between the literal and the intended evaluation, as shown in examples 5 and 6:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbal irony by means of a polarity contrast",
"sec_num": null
},
{
"text": "(5) I love waking up with migraines #not (6) I really love this year's summer; weeks and weeks of awful weather",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbal irony by means of a polarity contrast",
"sec_num": null
},
{
"text": "In the above examples, the irony results from a polarity inversion between two evaluations. For instance, in example 6, the literal evaluation (\"I really love this year's summer\") is positive, while the intended one, which is implied by the context (\"weeks and weeks of awful weather\"), is negative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbal irony by means of a polarity contrast",
"sec_num": null
},
{
"text": "Other verbal irony This category contains instances that show no polarity contrast between the literal and the intended evaluation, but are nevertheless ironic. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Verbal irony by means of a polarity contrast",
"sec_num": null
},
{
"text": "A data set of 3,000 English tweets was constructed by searching Twitter for the hashtags #irony, #sarcasm and #not (hereafter referred to as the 'hashtag corpus'), which could occur anywhere in the tweet that was finally included in the corpus. All tweets were collected between 01/12/2014 and 04/01/2015 and represent 2,676 unique users.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Construction and Annotation",
"sec_num": "4"
},
{
"text": "To minimise the noise introduced by groundless irony hashtags, all tweets were manually labelled using a fine-grained annotation scheme for irony (Van Hee et al., 2016a) . Prior to data annotation, the entire corpus was cleaned by removing retweets, duplicates and non-English tweets and replacing XML-escaped characters (e.g. &). The corpus was entirely annotated by three students in linguistics and second-language speakers of English, with each student annotating one third of the whole corpus. All annotations were done using the brat rapid annotation tool (Stenetorp et al., 2012) . To assess the reliability of the annotations, and whether the guidelines allowed to carry out the task consistently, an interannotator agreement study was set up in two rounds. Firstly, inter-rater agreement was calculated between the authors of the guidelines to test the guidelines for usability and to assess whether changes or additional clarifications were recommended prior annotating the entire corpus. For this purpose, a subset of 100 instances from the SemEval-2015 Task Sentiment Analysis of Figurative Language in Twitter (Ghosh et al., 2015) dataset were annotated. Based on the results, some clarifications and refinements were added to the annotation scheme, which are thoroughly described in Van Hee (2017). Next, a second agreement study was carried out on a subset (i.e. 100 randomly chosen instances) of the corpus. As metric, we used Fleiss' Kappa (Fleiss, 1971 ), a widespread statistical measure in the field of computational linguistics for assessing annotator agreement on categorical ratings (Carletta, 1996) . The measure calculates the degree of agreement in classification over the agreement which would be expected by chance, i.e. when annotators would randomly assign class labels. Table 1 : Inter-annotator agreement scores (Kappa) in two annotation rounds. Table 1 presents the inter-rater scores for the binary irony distinction and for three-way irony classification ('other' includes both situational irony and other forms of verbal irony). We see that better inter-annotator agreement is obtained after the refinement of the annotation scheme, especially for the binary irony distinction. Given the difficulty of the task, a Kappa score of 0.72 for recognising irony can be interpreted as good reliability 2 .",
"cite_spans": [
{
"start": 146,
"end": 169,
"text": "(Van Hee et al., 2016a)",
"ref_id": "BIBREF45"
},
{
"start": 566,
"end": 590,
"text": "(Stenetorp et al., 2012)",
"ref_id": "BIBREF43"
},
{
"start": 1127,
"end": 1147,
"text": "(Ghosh et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1461,
"end": 1474,
"text": "(Fleiss, 1971",
"ref_id": "BIBREF13"
},
{
"start": 1610,
"end": 1626,
"text": "(Carletta, 1996)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 1805,
"end": 1812,
"text": "Table 1",
"ref_id": null
},
{
"start": 1882,
"end": 1889,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Corpus Construction and Annotation",
"sec_num": "4"
},
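The agreement computation described above can be reproduced with statsmodels' implementation of Fleiss' Kappa. The ratings matrix below is toy data (rows are instances, columns are per-category rating counts over three annotators), not the paper's annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# Toy ratings: 6 instances, 2 categories (ironic / not ironic);
# each row holds the rating counts of 3 annotators and sums to 3.
ratings = np.array([
    [3, 0],
    [2, 1],
    [3, 0],
    [0, 3],
    [1, 2],
    [0, 3],
])
print(round(fleiss_kappa(ratings), 3))  # agreement above chance, in [-1, 1]
```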
{
"text": "The distribution if the different irony types in the experimental corpus are presented in Table 2 Based on the annotations, 2,396 instances out of the 3,000 are ironic, while 604 are not. To balance the class distribution in our experimental corpus, 1,792 non-ironic tweets were added from a background corpus. The tweets in this corpus were collected from the same set of Twitter users as in the hashtag corpus, and within the same time span. It is important to note that these tweets do not contain irony-related hashtags (as opposed to the non-ironic tweets in the hashtag corpus), and were manually filtered from ironic tweets. Adding 2 According to magnitude guidelines by Landis and Koch (1977) . these non-ironic tweets to the experimental corpus brought the total amount of data to 4,792 tweets (2,396 ironic + 2,396 non-ironic). For this shared task, the corpus was randomly split into a class-balanced training (80% or 3,833 instances) and test (20%, or 958 instances) set. In an additional cleaning step, we removed ambiguous tweets (i.e. where additional context was required to understand their ironic nature), from the test corpus, resulting in a test set containing 784 tweets (consisting of 40% ironic and 60% nonironic tweets).",
"cite_spans": [
{
"start": 678,
"end": 700,
"text": "Landis and Koch (1977)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 90,
"end": 97,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Corpus Construction and Annotation",
"sec_num": "4"
},
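A minimal sketch of the 80/20 class-balanced split described above, using a stratified split in scikit-learn. The corpus and labels are placeholders, and the exact partition sizes may differ from the paper's figures by one instance due to rounding.

```python
from sklearn.model_selection import train_test_split

# Placeholder corpus: 2,396 ironic (label 1) and 2,396 non-ironic (label 0) tweets.
tweets = [f"tweet {i}" for i in range(4792)]
labels = [1] * 2396 + [0] * 2396

# Stratification keeps the ironic/non-ironic ratio identical in both partitions.
train_texts, test_texts, train_y, test_y = train_test_split(
    tweets, labels, test_size=0.20, stratify=labels, random_state=42
)
print(len(train_texts), len(test_texts))  # roughly the paper's 3,833 / 958 split
```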
{
"text": "To train their systems, participants were not restricted to the provided training corpus. They were allowed to use additional training data that was collected and annotated at their own initiative. In the latter case, the submitted system was considered unconstrained, as opposed to constrained if only the distributed training data were used for training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus Construction and Annotation",
"sec_num": "4"
},
{
"text": "It is important to note that participating teams were allowed ten submissions at CodaLab, and that they could submit a constrained and unconstrained system for each subtask. However, only their last submission was considered for the official ranking (see Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Corpus Construction and Annotation",
"sec_num": "4"
},
{
"text": "For both subtasks, participating systems were evaluated using standard evaluation metrics, including accuracy, precision, recall and F 1 score, calculated as follows: (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F1 = 2 \u2022 precision \u2022 recall precision + recall",
"eq_num": "(4)"
}
],
"section": "Evaluation",
"sec_num": "5"
},
{
"text": "While accuracy provides insights into the system performance for all classes, the latter three measures were calculated for the positive class only (Task A) or were macro-averaged over four class labels (Task B). Macro-averaging of the F 1 score implies that all class labels have equal weight in the final score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
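The scoring described above corresponds to standard scikit-learn metrics. A small sketch with hypothetical toy predictions, using positive-class scores for Task A and macro-averaging for Task B:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Task A (binary): precision/recall/F1 are computed on the positive (ironic) class.
y_true_a, y_pred_a = [1, 0, 1, 1, 0], [1, 0, 0, 1, 1]
p, r, f1, _ = precision_recall_fscore_support(
    y_true_a, y_pred_a, average="binary", pos_label=1
)
print(accuracy_score(y_true_a, y_pred_a), p, r, f1)

# Task B (labels 0-3): macro-averaging gives each of the four labels equal weight.
y_true_b, y_pred_b = [0, 1, 2, 3, 2], [0, 1, 2, 3, 1]
p, r, f1, _ = precision_recall_fscore_support(y_true_b, y_pred_b, average="macro")
print(p, r, f1)
```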
{
"text": "For both subtasks, two baselines were provided against which to compare the systems' performance. The first baseline randomly assigns irony labels and the second one is a linear SVM classifier with standard hyperparameter settings exploiting tf-idf word unigram features (implemented with scikit-learn (Pedregosa et al., 2011) ). The second baseline system is made available to the task participants via GitHub 3 .",
"cite_spans": [
{
"start": 302,
"end": 326,
"text": "(Pedregosa et al., 2011)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5"
},
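A minimal re-creation of the second baseline as described (a linear SVM over tf-idf word unigrams with default hyperparameters). The official implementation is in the GitHub repository referenced above, so this sketch only mirrors the described setup on toy data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Linear SVM over tf-idf word unigrams, default hyperparameters.
baseline = make_pipeline(TfidfVectorizer(ngram_range=(1, 1)), LinearSVC())

# Hypothetical toy data standing in for the distributed training corpus.
train_texts = ["I love waking up with migraines", "great game last night"]
train_labels = [1, 0]  # 1 = ironic
baseline.fit(train_texts, train_labels)
print(baseline.predict(["what a lovely traffic jam"]))
```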
{
"text": "In total, 43 teams competed in Task A on binary irony classification. Table 3 presents each team's performance in terms of accuracy, precision, recall and F 1 score. In all tables, the systems are ranked by the official F 1 score (shown in the fifth column). Scores from teams that are marked with an asterisk should be interpreted carefully, as the number of predictions they submitted does not correspond to the number of test instances.",
"cite_spans": [],
"ref_spans": [
{
"start": 70,
"end": 77,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
{
"text": "As can be observed from the table, the SVM unigram baseline clearly outperforms the random class baseline and generally performs well for the task. Below we discuss the top five bestperforming teams for Task A, which all built a constrained (i.e. only the provided training data were used) system. The best system yielded an F 1 score of 0.705 and was developed by THU NGN (Wu et al., 2018) . Their architecture consists of densely connected LSTMs based on (pre-trained) word embeddings, sentiment features using the AffectiveTweet package (Mohammad and Bravo-Marquez, 2017) and syntactic features (e.g. PoS-tag features + sentence embedding features). Hypothesising that the presence of a certain irony hashtag correlates with the type of irony that is used, they constructed a multi-task model able to predict simultaneously 1) the missing irony hashtag, 2) whether a tweet is ironic or not and 3) which fine-grained type of irony is used in a tweet.",
"cite_spans": [
{
"start": 373,
"end": 390,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
{
"text": "Also in the top five are the teams NTUA-SLP (F 1 = 0.672), WLV (F 1 = 0.650), NLPRL-IITBHU (F 1 = 0.648) and NIHRIO (F 1 = 0.648). NTUA-SLP (Baziotis et al., 2018) built an ensemble classifier of two deep learning models: a word-and character-based (bi-directional) LSTM to capture semantic and syntactic information in tweets, respectively. As features, the team used pre-trained character and word embeddings on a corpus of 550 million tweets. Their ensem-ble classifier applied majority voting to combine the outcomes of the two models. WLV (Rohanian et al., 2018) developed an ensemble voting classifier with logistic regression (LR) and a support vector machine (SVM) as component models. They combined (through averaging) pretrained word and emoji embeddings with handcrafted features, including sentiment contrasts between elements in a tweet (i.e. left vs. right sections, hashtags vs. text, emoji vs. text), sentiment intensity and word-based features like flooding and capitalisation). For Task B, they used a slightly altered (i.e. ensemble LR models and concatenated word embeddings instead of averaged) model. NLPRL-IITBHU (Rangwani et al., 2018) ranked fourth and used an XGBoost Classifier to tackle Task A. They combined pre-trained CNN activations using DeepMoji (Felbo et al., 2017) with ten types of handcrafted features. These were based on polarity contrast information, readability metrics, context incongruity, character flooding, punctuation counts, discourse markers/intensifiers/interjections/swear words counts, general token counts, WordNet similarity, polarity scores and URL counts. The fifth best system for Task A was built by NIHRIO (Vu et al., 2018) and consists of a neural-networks-based architecture (i.e. Multilayer Perceptron). The system exploited lexical (word-and character-level unigrams, bigrams and trigrams), syntactic (PoS-tags with tfidf values), semantic features (word embeddings using GloVe (Pennington et al., 2014) , LSI features and Brown cluster features (Brown et al., 1992) ) and polarity features derived from the Hu and Liu Opinion Lexicon (Hu and Liu, 2004) .",
"cite_spans": [
{
"start": 140,
"end": 163,
"text": "(Baziotis et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 1136,
"end": 1159,
"text": "(Rangwani et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 1280,
"end": 1300,
"text": "(Felbo et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 1942,
"end": 1967,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF33"
},
{
"start": 2010,
"end": 2030,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF6"
},
{
"start": 2099,
"end": 2117,
"text": "(Hu and Liu, 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
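As a sketch of the LR + SVM voting setup described for WLV above: their actual features are omitted here, and plain tf-idf vectors plus hard voting are assumptions of this illustration.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Hard-voting ensemble: LR and SVM each predict, and the majority label wins.
ensemble = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(
        estimators=[("lr", LogisticRegression()), ("svm", SVC())],
        voting="hard",
    ),
)
texts = ["so glad it is monday again", "the match starts at nine"]
ensemble.fit(texts, [1, 0])  # toy labels: 1 = ironic
print(ensemble.predict(["thrilled to be stuck in traffic"]))
```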
{
"text": "As such, all teams in the top five approached the task differently, by exploiting various algorithms and features, but all of them clearly outperformed the baselines. Like most other teams, they also showed a better performance in terms of recall compared to precision. Table 3 displays the results of each team's official submission for Task A, i.e. no distinction is made between constrained and unconstrained systems. By contrast, Tables 4 and 5 present the rankings of the best (i.e. not necessarily the last, and hence official submission) constrained and unconstrained submissions for Task A.",
"cite_spans": [],
"ref_spans": [
{
"start": 270,
"end": 277,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
{
"text": "As can be deduced from In the top five unconstrained (i.e. using additional training data) systems for Task A are #NonDicevoSulSerio, INAOE-UPV, RM@IT, Va-lenTO and UTMN, with F 1 scores ranging between 0.622 and 0.527. #NonDicevoSulserio extended the training corpus with 3,500 tweets from existing irony corpora (e.g. Riloff et al. (2013) ; Barbieri and Saggion (2014) ; Pt\u00e1\u010dek et al. (2014) and built an SVM classifier exploiting structural features (e.g. hashtag count, text length), sentiment-(e.g. contrast between text and emoji sentiment), and emotion-based (i.e. emotion lexicon scores) features. INAOE-UPV combined pretrained word embeddings from the Google News corpus with word-based features (e.g. n-grams). They also extended the official training data with benchmark corpora previously used in irony research and trained their system with a total of 165,000 instances. RM@IT approached the task using an ensemble classifier based on attentionbased recurrent neural networks and the Fast-Text (Joulin et al., 2017) library for learning word representations. They enriched the provided training corpus with, on the one hand, the data sets provided for SemEval-2015 Task 11 (Ghosh et al., 2015) and, on the other hand, the sarcasm corpus composed by Pt\u00e1\u010dek et al. (2014) . Altogether, this generated a training corpus of approximately 110,000 tweets. ValenTO took advantage of irony corpora previously used in irony detection that were manually annotated or through crowdsourcing (e.g. Riloff et al., 2013; Pt\u00e1\u010dek et al., 2014) . In addition, they extended their corpus with an unspecified number of self-collected irony tweets using the hashtags #irony and #sarcasm. Finally, UTMN developed an SVM classifier exploiting binary bag-of-words features. They enriched the training set with 1,000 humorous tweets from SemEval-2017 Task 6 (Potash et al., 2017) and another 1,000 tweets with positive polarity from SemEval-2016 Task 4 (Nakov et al., 2016) , resulting in a training corpus of 5,834 tweets.",
"cite_spans": [
{
"start": 320,
"end": 340,
"text": "Riloff et al. (2013)",
"ref_id": "BIBREF38"
},
{
"start": 343,
"end": 370,
"text": "Barbieri and Saggion (2014)",
"ref_id": "BIBREF2"
},
{
"start": 373,
"end": 393,
"text": "Pt\u00e1\u010dek et al. (2014)",
"ref_id": "BIBREF35"
},
{
"start": 1007,
"end": 1028,
"text": "(Joulin et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1186,
"end": 1206,
"text": "(Ghosh et al., 2015)",
"ref_id": "BIBREF15"
},
{
"start": 1262,
"end": 1282,
"text": "Pt\u00e1\u010dek et al. (2014)",
"ref_id": "BIBREF35"
},
{
"start": 1498,
"end": 1518,
"text": "Riloff et al., 2013;",
"ref_id": "BIBREF38"
},
{
"start": 1519,
"end": 1539,
"text": "Pt\u00e1\u010dek et al., 2014)",
"ref_id": "BIBREF35"
},
{
"start": 1846,
"end": 1867,
"text": "(Potash et al., 2017)",
"ref_id": "BIBREF34"
},
{
"start": 1941,
"end": 1961,
"text": "(Nakov et al., 2016)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
{
"text": "Interestingly, when comparing the best constrained with the best unconstrained system for Task A, we see a difference of 10 points in favour of the constrained system, which indicates that adding more training data does not necessarily improve the classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and results for Task A",
"sec_num": "6"
},
{
"text": "While 43 teams competed in Task A, 31 teams submitted a system for Task B on multiclass irony classification. Table 6 presents the official ranking with each team's performance in terms of accuracy, precision, recall and F 1 score. Similar to Task A, we discuss the top five systems in the overall ranking (Table 6 ) and then zoom in on the best performing constrained and unconstrained systems (Tables 7 and 8) .",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 6",
"ref_id": "TABREF9"
},
{
"start": 306,
"end": 314,
"text": "(Table 6",
"ref_id": "TABREF9"
},
{
"start": 395,
"end": 411,
"text": "(Tables 7 and 8)",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Systems and Results for Task B",
"sec_num": "7"
},
{
"text": "For Task B, the top five is nearly similar to the top five for Task A and includes the following teams: UCDCC (Ghosh, 2018) , NTUA-SLP (Baziotis et al., 2018) , THU NGN (Wu et al., 2018) , NLPRL-IITBHU (Rangwani et al., 2018) and NIHRIO (Vu et al., 2018) . All of the teams tackled multiclass irony classification by applying (mostly) the same architecture as for Task A (see earlier). Inspired by siamese networks (Bromley et al., 1993) used in image classification, the UCDCC team developed a siamese architecture for irony detection in both subtasks. The neural network architecture makes use of Glove word embeddings as features and creates two identical subnetworks that are each fed with different parts of a tweet. Under the premise that ironic statements are often characterised by a form of opposition or contrast, the architecture captures this incongruity between two parts in an ironic tweet. NTUA-SLP, THU NGN and NIHRIO used the same system for both subtasks. NLPRL-IITBHU also used the same architecture, but given the data skew for Task B, they used SMOTE (Chawla et al., 2002) as an oversampling technique to make sure each irony class was equally represented in the training corpus, which lead to an F 1 score increase of 5 points. As can be deduced from Table 7 , the top five constrained systems correspond to the five bestperforming systems overall (Table 6 ). Only four unconstrained systems were submitted for Task B. Differently from their Task A submission, #NonDicevoSulSerio applied a cascaded approach for this task, i.e. the first algorithm served an ironic/non-ironic classification, followed by a system distinguishing between ironic by clash and other forms of irony. Lastly, a third classifier distinguished between situational and other verbal irony. To account for class imbalance in step two, the team added 869 tweets of the situational and other verbal irony categories. INAOE-UPV, INGEOTEC-IIMAS and IITG also added tweets to the original training corpus, but it is not entirely clear how many were added and how these extra tweets were annotated.",
"cite_spans": [
{
"start": 110,
"end": 123,
"text": "(Ghosh, 2018)",
"ref_id": "BIBREF14"
},
{
"start": 135,
"end": 158,
"text": "(Baziotis et al., 2018)",
"ref_id": "BIBREF3"
},
{
"start": 169,
"end": 186,
"text": "(Wu et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 202,
"end": 225,
"text": "(Rangwani et al., 2018)",
"ref_id": "BIBREF36"
},
{
"start": 230,
"end": 254,
"text": "NIHRIO (Vu et al., 2018)",
"ref_id": null
},
{
"start": 415,
"end": 437,
"text": "(Bromley et al., 1993)",
"ref_id": "BIBREF5"
},
{
"start": 1066,
"end": 1093,
"text": "SMOTE (Chawla et al., 2002)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1273,
"end": 1280,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 1370,
"end": 1378,
"text": "(Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Systems and Results for Task B",
"sec_num": "7"
},
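The SMOTE oversampling step mentioned above can be sketched with the imbalanced-learn library; the feature matrix and skewed label distribution below are placeholders, not the task data.

```python
import numpy as np
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                            # placeholder feature matrix
y = np.array([0] * 60 + [1] * 25 + [2] * 10 + [3] * 5)   # skewed label distribution

# Synthetic minority oversampling until every irony class is equally represented.
X_res, y_res = SMOTE(k_neighbors=3, random_state=0).fit_resample(X, y)
print(np.bincount(y_res))  # -> [60 60 60 60]
```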
{
"text": "Similar to Task A, the unconstrained systems do not seem to benefit from additional data, as they do not outperform the constrained submissions for the task. A closer look at the best and worst-performing systems for each subtask reveals that Task A benefits from systems that exploit a variety of handcrafted features, especially sentiment-based (e.g. sentiment lexicon values, polarity contrast), but also bags of words, semantic cluster features and PoS-based features. Other promising features for the task are word embeddings trained on large Twitter corpora (e.g. 5M tweets). The classifiers and algorithms used are (bidirectional) LSTMs, Random Forest, Multilayer Perceptron, and an optimised (i.e. using feature selection) voting classifier combining Support Vector Machines with Logistic Regression. Neural networkbased systems exploiting word embeddings derived from the training dataset or generated from Wikipedia corpora perform less well for the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results for Task B",
"sec_num": "7"
},
{
"text": "Similarly, Task B seems to benefit from (ensemble) neural-network architectures exploiting large corpus-based word embeddings and sentiment features. Oversampling and adjusting class weights are used to overcome the class imbalance of labels 2 and 3 versus 1 and 0 and tend to improve the classification performance. Ensemble classifiers outperform multi-step approaches and combined binary classifiers for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Systems and Results for Task B",
"sec_num": "7"
},
{
"text": "Task B challenged the participants to distinguish between different types of irony. The class distributions in the training and test corpus are natural (i.e. no additional data were added after the annotation process) and imbalanced. For the evaluation of the task, F 1 scores were macro-averaged; on the one hand, this gives each label equal weight in the evaluation, but on the other hand, it does not show each class contribution to the average score. Table 9 therefore presents the participating teams' performance on each of the subtypes of irony in Task B. As can be deduced from Table 9 , all teams performed best on the non ironic and ironic by clash classes, while identifying situational irony and other irony seems to be much more challenging. Although the scores for these two classes are the lowest, we observe an important difference between situational and other verbal irony. This can probably be explained by the heterogeneous nature of the other category, which collects diverse realisations of verbal irony. A careful and manual annotation of this class, which is currently being conducted, should provide more detailed insights into this category of ironic tweets.",
"cite_spans": [],
"ref_spans": [
{
"start": 455,
"end": 462,
"text": "Table 9",
"ref_id": "TABREF14"
},
{
"start": 586,
"end": 593,
"text": "Table 9",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Systems and Results for Task B",
"sec_num": "7"
},
{
"text": "The systems that were submitted for both subtasks represent a variety of neural-network-based approaches (i.e. CNNs, RNNs and (bi-)LSTMs) exploiting word-and character embeddings as well as handcrafted features. Other popular classification algorithms include Support Vector Machines, Maximum Entropy, Random Forest, and Na\u00efve Bayes. While most approaches were based on one algorithm, some participants experimented with ensemble learners (e.g. SVM + LR, CNN + bi-LSTM, stacked LSTMs), implemented a voting system or built a cascaded architecture (for Task B) that first distinguished ironic from nonironic tweets and subsequently differentiated between the fine-grained irony categories.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "Among the most frequently used features are lexical features (e.g. n-grams, punctuation and hashtag counts, emoji presence) and sentimentor emotion-lexicon features (e.g. based on Sen-ticNet (Cambria et al., 2016) , VADER (Hutto and Gilbert, 2014), aFinn (Nielsen, 2011) ). Also important but to a lesser extent were syntactic (e.g. PoS-patterns) and semantic features, based on word, character and emoji embeddings or semantic clusters. The best systems for Task A and Task B obtained an F 1 score of respectively 0.705 and 0.507 and clearly outperformed the baselines provided for this task. When looking at the scores per class label in Task B, we observe that high scores were obtained for the non-ironic and ironic by clash classes, and that other irony appears to be the most challenging irony type. Among all submissions, a wide variety of preprocessing tools, machine learning libraries and lexicons were explored.",
"cite_spans": [
{
"start": 191,
"end": 213,
"text": "(Cambria et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 255,
"end": 270,
"text": "(Nielsen, 2011)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "As the provided datasets were relatively small, participants were allowed to include additional training data for both subtasks. Nevertheless, most submissions were constrained (i.e. only the provided training data were used): only nine unconstrained submissions were made for Task A, and four for Task B. When comparing constrained to unconstrained systems, it can be observed that adding more training data does not necessarily benefit the classification results. A possible explanation for this is that most unconstrained systems added training data from related irony research that were annotated differently (e.g. automatically) than the distributed corpus, which presumably limited the beneficial effect of increasing the training corpus size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "This paper provides some general insights into the main methodologies and bottlenecks for binary and multiclass irony classification. We observed that, overall, systems performed much better on Task A than Task B and the classification results for the subtypes of irony indicate that ironic by clash is most easily recognised (top F 1 = 0.697), while other types of verbal irony and situational irony are much harder (top F 1 scores are 0.114 and 0.376, respectively).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "8"
},
{
"text": "All practical information, data download links and the final results can be consulted via the CodaLab website of our task: https://competitions.codalab.org/competitions/17468.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Cyvhee/SemEval2018-Task3/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Modelling Context with User Embeddings for Sarcasm Detection in Social Media",
"authors": [
{
"first": "Silvio",
"middle": [],
"last": "Amir",
"suffix": ""
},
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Lyu",
"suffix": ""
},
{
"first": "Paula",
"middle": [],
"last": "Carvalho",
"suffix": ""
},
{
"first": "M\u00e1rio",
"middle": [
"J"
],
"last": "Silva",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Silvio Amir, Byron C. Wallace, Hao Lyu, Paula Car- valho, and M\u00e1rio J. Silva. 2016. Modelling Context with User Embeddings for Sarcasm Detection in So- cial Media. CoRR, abs/1607.00976.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Contextualized Sarcasm Detection on Twitter",
"authors": [
{
"first": "David",
"middle": [],
"last": "Bamman",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Ninth International Conference on Web and Social Media (ICWSM'15)",
"volume": "",
"issue": "",
"pages": "574--577",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Bamman and Noah A. Smith. 2015. Contextual- ized Sarcasm Detection on Twitter. In Proceedings of the Ninth International Conference on Web and Social Media (ICWSM'15), pages 574-577, Oxford, UK. AAAI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modelling Irony in Twitter",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Student Research Workshop at the 14th Conference of the European Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "56--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri and Horacio Saggion. 2014. Mod- elling Irony in Twitter. In Proceedings of the Stu- dent Research Workshop at the 14th Conference of the European Chapter of the ACL, pages 56-64, Gothenburg, Sweden. ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "NTUA-SLP at SemEval-2018 Task 3: Deep Character and Word-level RNNs with Attention for Irony Detection in Twitter",
"authors": [
{
"first": "Christos",
"middle": [],
"last": "Baziotis",
"suffix": ""
},
{
"first": "Nikolaos",
"middle": [],
"last": "Athanasiou",
"suffix": ""
},
{
"first": "Pinelopi",
"middle": [],
"last": "Papalampidi",
"suffix": ""
},
{
"first": "Athanasia",
"middle": [],
"last": "Kolovou",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christos Baziotis, Nikolaos Athanasiou, Pinelopi Papalampidi, Athanasia Kolovou, Georgios Paraskevopoulos, Nikolaos Ellinas, and Alexandros Potamianos. 2018. NTUA-SLP at SemEval-2018 Task 3: Deep Character and Word-level RNNs with Attention for Irony Detection in Twitter. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sarcasm detection in twitter: \"all your products are incredibly amazing!!!\" -are they really?",
"authors": [
{
"first": "Mondher",
"middle": [],
"last": "Bouazizi",
"suffix": ""
},
{
"first": "Tomoaki",
"middle": [],
"last": "Ohtsuki",
"suffix": ""
}
],
"year": 2016,
"venue": "Global Communications Conference, GLOBECOM 2015",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mondher Bouazizi and Tomoaki Ohtsuki. 2016. Sar- casm detection in twitter: \"all your products are in- credibly amazing!!!\" -are they really? In Global Communications Conference, GLOBECOM 2015, pages 1-6. IEEE.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Signature verification using a \"siamese\" time delay neural network",
"authors": [
{
"first": "Jane",
"middle": [],
"last": "Bromley",
"suffix": ""
},
{
"first": "Isabelle",
"middle": [],
"last": "Guyon",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "S\u00e4ckinger",
"suffix": ""
},
{
"first": "Roopak",
"middle": [],
"last": "Shah",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the 6th International Conference on Neural Information Processing Systems, NIPS'93",
"volume": "",
"issue": "",
"pages": "737--744",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard S\u00e4ckinger, and Roopak Shah. 1993. Signature ver- ification using a \"siamese\" time delay neural net- work. In Proceedings of the 6th International Con- ference on Neural Information Processing Systems, NIPS'93, pages 737-744, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Class-based N-gram Models of Natural Language",
"authors": [
{
"first": "F",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Desouza",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based N-gram Models of Natural Language. Computational Linguistics, 18(4):467-479.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives",
"authors": [
{
"first": "Erik",
"middle": [],
"last": "Cambria",
"suffix": ""
},
{
"first": "Soujanya",
"middle": [],
"last": "Poria",
"suffix": ""
},
{
"first": "Rajiv",
"middle": [],
"last": "Bajpai",
"suffix": ""
},
{
"first": "Bjoern",
"middle": [],
"last": "Schuller",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, 26th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2666--2677",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik Cambria, Soujanya Poria, Rajiv Bajpai, and Bjo- ern Schuller. 2016. SenticNet 4: A Semantic Re- source for Sentiment Analysis Based on Concep- tual Primitives. In Proceedings of COLING 2016, 26th International Conference on Computational Linguistics, pages 2666-2677, Osaka, Japan. ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Assessing Agreement on Classification Tasks: The Kappa Statistic",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Carletta",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "2",
"pages": "249--254",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Carletta. 1996. Assessing Agreement on Classi- fication Tasks: The Kappa Statistic. Computational Linguistics, 22(2):249-254.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "SMOTE: Synthetic Minority Over-sampling Technique",
"authors": [
{
"first": "V",
"middle": [],
"last": "Nitesh",
"suffix": ""
},
{
"first": "Kevin",
"middle": [
"W"
],
"last": "Chawla",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [
"O"
],
"last": "Bowyer",
"suffix": ""
},
{
"first": "W",
"middle": [
"Philip"
],
"last": "Hall",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kegelmeyer",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Artificial Intelligence Research",
"volume": "16",
"issue": "1",
"pages": "321--357",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitesh V. Chawla, Kevin W. Bowyer, Lawrence O. Hall, and W. Philip Kegelmeyer. 2002. SMOTE: Synthetic Minority Over-sampling Technique. Jour- nal of Artificial Intelligence Research, 16(1):321- 357.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semi-supervised Recognition of Sarcastic Sentences in Twitter and Amazon",
"authors": [
{
"first": "Dmitry",
"middle": [],
"last": "Davidov",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Tsur",
"suffix": ""
},
{
"first": "Ari",
"middle": [],
"last": "Rappoport",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning (CoNLL'10)",
"volume": "",
"issue": "",
"pages": "107--116",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised Recognition of Sarcastic Sentences in Twitter and Amazon. In Proceedings of the Four- teenth Conference on Computational Natural Lan- guage Learning (CoNLL'10), pages 107-116, Upp- sala, Sweden. ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Irony detection in twitter: The role of affective content",
"authors": [
{
"first": "Delia Iraz\u00fa Herna\u0144dez",
"middle": [],
"last": "Far\u00edas",
"suffix": ""
},
{
"first": "Viviana",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
}
],
"year": 2016,
"venue": "ACM Transactions on Internet Technology",
"volume": "16",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Delia Iraz\u00fa Herna\u0144dez Far\u00edas, Viviana Patti, and Paolo Rosso. 2016. Irony detection in twitter: The role of affective content. ACM Transactions on Internet Technology, 16(3):19:1-19:24.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm",
"authors": [
{
"first": "Bjarke",
"middle": [],
"last": "Felbo",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Mislove",
"suffix": ""
},
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Iyad",
"middle": [],
"last": "Rahwan",
"suffix": ""
},
{
"first": "Sune",
"middle": [],
"last": "Lehmann",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1615--1625",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using mil- lions of emoji occurrences to learn any-domain rep- resentations for detecting sentiment, emotion and sarcasm. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Pro- cessing, pages 1615-1625, Copenhagen, Denmark. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Measuring nominal scale agreement among many raters",
"authors": [
{
"first": "Joseph",
"middle": [
"L"
],
"last": "Fleiss",
"suffix": ""
}
],
"year": 1971,
"venue": "Psychological Bulletin",
"volume": "76",
"issue": "5",
"pages": "378--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bul- letin, 76(5):378-382.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "IronyMagnet at SemEval-2018 Task 3: A Siamese network for Irony detection in Social media",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh. 2018. IronyMagnet at SemEval- 2018 Task 3: A Siamese network for Irony detection in Social media. In Proceedings of the 12th Interna- tional Workshop on Semantic Evaluation, SemEval- 2018, New Orleans, LA, USA. ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Guofu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Ekaterina",
"middle": [],
"last": "Shutova",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Barnden",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "470--478",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh, Guofu Li, Tony Veale, Paolo Rosso, Ekaterina Shutova, John Barnden, and Antonio Reyes. 2015. SemEval-2015 Task 11: Sentiment Analysis of Figurative Language in Twitter. In Pro- ceedings of the 9th International Workshop on Se- mantic Evaluation (SemEval 2015), pages 470-478, Denver, Colorado. ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Fracking Sarcasm using Neural Network",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "161--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking Sarcasm using Neural Network. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 161-169, San Diego, California. ACL.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Identifying Sarcasm in Twitter: A Closer Look",
"authors": [
{
"first": "Roberto",
"middle": [],
"last": "Gonz\u00e1lez-Ib\u00e1\u00f1ez",
"suffix": ""
},
{
"first": "Smaranda",
"middle": [],
"last": "Muresan",
"suffix": ""
},
{
"first": "Nina",
"middle": [],
"last": "Wacholder",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the ACL: Human Language Technologies (HLT'11)",
"volume": "",
"issue": "",
"pages": "581--586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1\u00f1ez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying Sarcasm in Twit- ter: A Closer Look. In Proceedings of the 49th An- nual Meeting of the ACL: Human Language Tech- nologies (HLT'11), pages 581-586, Portland, Ore- gon. ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Crys-talNest at SemEval-2017 Task 4: Using Sarcasm Detection for Enhancing Sentiment Classification and Quantification",
"authors": [
{
"first": "Raj",
"middle": [],
"last": "Kumar Gupta",
"suffix": ""
},
{
"first": "Yinping",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "626--633",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raj Kumar Gupta and Yinping Yang. 2017. Crys- talNest at SemEval-2017 Task 4: Using Sarcasm De- tection for Enhancing Sentiment Classification and Quantification. In Proceedings of the 11th Interna- tional Workshop on Semantic Evaluation (SemEval- 2017), pages 626-633. ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mining and summarizing customer reviews",
"authors": [
{
"first": "Minqing",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04",
"volume": "",
"issue": "",
"pages": "168--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. ACM.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text",
"authors": [
{
"first": "Clayton",
"middle": [
"J"
],
"last": "Hutto",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Gilbert",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Conference on Weblogs and Social Media (ICWSM-14)",
"volume": "",
"issue": "",
"pages": "216--225",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clayton J. Hutto and Eric Gilbert. 2014. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. In Proceedings of the 8th International Conference on Weblogs and Social Media (ICWSM-14), pages 216-225. AAAI.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Automatic Sarcasm Detection:A Survey",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J. Car- man. 2017. Automatic Sarcasm Detection:A Sur- vey. ACM Computing Surveys (CSUR), 50(5):73:1- 73:22.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Bag of Tricks for Efficient Text Classification",
"authors": [
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter",
"volume": "2",
"issue": "",
"pages": "427--431",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of Tricks for Efficient Text Classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Towards a Contextual Pragmatic Model to Detect Irony in Tweets",
"authors": [
{
"first": "Jihen",
"middle": [],
"last": "Karoui",
"suffix": ""
},
{
"first": "Benamara",
"middle": [],
"last": "Farah",
"suffix": ""
},
{
"first": "Moriceau",
"middle": [],
"last": "V\u00e9ronique",
"suffix": ""
},
{
"first": "Nathalie",
"middle": [],
"last": "Aussenac-Gilles",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Hadrich-Belguith",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "644--650",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihen Karoui, Benamara Farah, V\u00e9ronique MORICEAU, Nathalie Aussenac-Gilles, and Lamia Hadrich-Belguith. 2015. Towards a Contex- tual Pragmatic Model to Detect Irony in Tweets. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 644-650, Beijing, China. ACL.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Signaling sarcasm: From hyperbole to hashtag",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Kunneman",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Liebrecht",
"suffix": ""
},
{
"first": "Margot",
"middle": [],
"last": "Van Mulken",
"suffix": ""
},
{
"first": "Antal",
"middle": [],
"last": "Van Den Bosch",
"suffix": ""
}
],
"year": 2015,
"venue": "Information Processing Management",
"volume": "51",
"issue": "4",
"pages": "500--509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Florian Kunneman, Christine Liebrecht, Margot van Mulken, and Antal van den Bosch. 2015. Signaling sarcasm: From hyperbole to hashtag. Information Processing Management, 51(4):500-509.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The measurement of observer agreement for categorical data",
"authors": [
{
"first": "J",
"middle": [
"Richard"
],
"last": "Landis",
"suffix": ""
},
{
"first": "Gary",
"middle": [
"G"
],
"last": "Koch",
"suffix": ""
}
],
"year": 1977,
"venue": "Biometrics",
"volume": "",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Richard Landis and Gary G. Koch. 1977. The mea- surement of observer agreement for categorical data. Biometrics, 33(1).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Sentiment Analysis and Opinion Mining",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2012,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Min- ing. Synthesis Lectures on Human Language Tech- nologies. Morgan & Claypool Publishers.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Greenwood",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "4238--4243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Maynard and Mark Greenwood. 2014. Who cares about Sarcastic Tweets? Investigating the Im- pact of Sarcasm on Sentiment Analysis. In Pro- ceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 4238-4243, Reykjavik, Iceland. European Language Resources Association.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Harnessing Cognitive Features for Sarcasm Detection",
"authors": [
{
"first": "Abhijit",
"middle": [],
"last": "Mishra",
"suffix": ""
},
{
"first": "Diptesh",
"middle": [],
"last": "Kanojia",
"suffix": ""
},
{
"first": "Seema",
"middle": [],
"last": "Nagar",
"suffix": ""
},
{
"first": "Kuntal",
"middle": [],
"last": "Dey",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1095--1104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhijit Mishra, Diptesh Kanojia, Seema Nagar, Kuntal Dey, and Pushpak Bhattacharyya. 2016. Harnessing Cognitive Features for Sarcasm Detection. In Pro- ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 1095-1104, Berlin, Germany. ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Emotion Intensities in Tweets",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017",
"volume": "",
"issue": "",
"pages": "65--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad and Felipe Bravo-Marquez. 2017. Emotion Intensities in Tweets. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics, *SEM @ACM 2017, pages 65-77.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "SemEval-2016 Task 4: Sentiment Analysis in Twitter",
"authors": [
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Fabrizio",
"middle": [],
"last": "Sebastiani",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)",
"volume": "",
"issue": "",
"pages": "1--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Sebastiani, and Veselin Stoyanov. 2016. SemEval- 2016 Task 4: Sentiment Analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1-18, San Diego, California. ACL.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A new ANEW: evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn\u00e5rup",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the ESWC2011 Workshop on 'Making Sense of Microposts': Big things come in small packages",
"volume": "718",
"issue": "",
"pages": "93--98",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finn\u00c5rup Nielsen. 2011. A new ANEW: evaluation of a word list for sentiment analysis in microblogs. In Proceedings of the ESWC2011 Workshop on 'Mak- ing Sense of Microposts': Big things come in small packages, volume 718, pages 93-98.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Scikit-learn: Machine Learning in Python",
"authors": [
{
"first": "F",
"middle": [],
"last": "Pedregosa",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Varoquaux",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Gramfort",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Michel",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Thirion",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Grisel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Blondel",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Prettenhofer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weiss",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dubourg",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Vanderplas",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Passos",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Cournapeau",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Brucher",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Perrot",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Duchesnay",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2825--2830",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Pretten- hofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Pas- sos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine Learn- ing in Python. Journal of Machine Learning Re- search, 12:2825-2830.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christo- pher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. ACL.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Potash",
"suffix": ""
},
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Potash, Alexey Romanov, and Anna Rumshisky. 2017. SemEval-2017 Task 6: #HashtagWars: Learning a Sense of Humor. In Proceedings of the 11th International Workshop on Semantic Evalua- tion (SemEval-2017), pages 49-57. ACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sarcasm detection on czech and english twitter",
"authors": [
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Pt\u00e1\u010dek",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Habernal",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Hong",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "213--223",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom\u00e1\u0161 Pt\u00e1\u010dek, Ivan Habernal, and Jun Hong. 2014. Sarcasm detection on czech and english twitter. In Proceedings of COLING 2014, the 25th Inter- national Conference on Computational Linguistics: Technical Papers, pages 213-223, Dublin, Ireland. Dublin City University and ACL.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji pre-trained CNN for Irony Detection in Tweets",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Rangwani",
"suffix": ""
},
{
"first": "Devang",
"middle": [],
"last": "Kulshreshtha",
"suffix": ""
},
{
"first": "Anil Kumar",
"middle": [],
"last": "Sing",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harsh Rangwani, Devang Kulshreshtha, and Anil Ku- mar Sing. 2018. NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji pre-trained CNN for Irony Detection in Tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. ACL.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Multidimensional Approach for Detecting Irony in Twitter. Language Resources and Evaluation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Reyes",
"suffix": ""
},
{
"first": "Paolo",
"middle": [],
"last": "Rosso",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "47",
"issue": "",
"pages": "239--268",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Reyes, Paolo Rosso, and Tony Veale. 2013. A Multidimensional Approach for Detecting Irony in Twitter. Language Resources and Evaluation, 47(1):239-268.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Sarcasm as Contrast between a Positive Sentiment and Negative Situation",
"authors": [
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "Ashequl",
"middle": [],
"last": "Qadir",
"suffix": ""
},
{
"first": "Prafulla",
"middle": [],
"last": "Surve",
"suffix": ""
},
{
"first": "Lalindra De",
"middle": [],
"last": "Silva",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Ruihong",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'13)",
"volume": "",
"issue": "",
"pages": "704--714",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellen Riloff, Ashequl Qadir, Prafulla Surve, Lalin- dra De Silva, Nathan Gilbert, and Ruihong Huang. 2013. Sarcasm as Contrast between a Positive Sen- timent and Negative Situation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'13), pages 704-714, Seattle, Washington, USA. ACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "WLV at SemEval-2018 Task 3: Dissecting Tweets in Search of Irony",
"authors": [
{
"first": "Omid",
"middle": [],
"last": "Rohanian",
"suffix": ""
},
{
"first": "Shiva",
"middle": [],
"last": "Taslimipoor",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Mitkov",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omid Rohanian, Shiva Taslimipoor, Richard Evans, and Ruslan Mitkov. 2018. WLV at SemEval-2018 Task 3: Dissecting Tweets in Search of Irony. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "SemEval-2017 Task 4: Sentiment Analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Noura",
"middle": [],
"last": "Farra",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)",
"volume": "",
"issue": "",
"pages": "502--518",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Noura Farra, and Preslav Nakov. 2017. SemEval-2017 Task 4: Sentiment Analysis in Twitter. In Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 502-518, Vancouver, Canada. ACL.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "SemEval-2014 Task 9: Sentiment Analysis in Twitter",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Ritter",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "73--80",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Rosenthal, Alan Ritter, Preslav Nakov, and Veselin Stoyanov. 2014. SemEval-2014 Task 9: Sentiment Analysis in Twitter. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 73-80, Dublin, Ireland. ACL and Dublin City University.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The bicoherence theory of situational irony",
"authors": [
{
"first": "Cameron",
"middle": [],
"last": "Shelley",
"suffix": ""
}
],
"year": 2001,
"venue": "Cognitive Science",
"volume": "25",
"issue": "5",
"pages": "775--818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cameron Shelley. 2001. The bicoherence theory of sit- uational irony. Cognitive Science, 25(5):775-818.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "BRAT: A Web-based Tool for NLPassisted Text Annotation",
"authors": [
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sampo",
"middle": [],
"last": "Pyysalo",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Topi\u0107",
"suffix": ""
},
{
"first": "Tomoko",
"middle": [],
"last": "Ohta",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the ACL, EACL'12",
"volume": "",
"issue": "",
"pages": "102--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pontus Stenetorp, Sampo Pyysalo, Goran Topi\u0107, Tomoko Ohta, Sophia Ananiadou, and Jun'ichi Tsu- jii. 2012. BRAT: A Web-based Tool for NLP- assisted Text Annotation. In Proceedings of the 13th Conference of the European Chapter of the ACL, EACL'12, pages 102-107, Avignon, France. ACL.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Can machines sense irony? Exploring automatic irony detection on social media",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee. 2017. Can machines sense irony? Exploring automatic irony detection on social me- dia. Ph.D. thesis, Ghent University.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Guidelines for Annotating Irony in Social Media Text, version 2.0",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2016a. Guidelines for Annotating Irony in Social Media Text, version 2.0. Technical Report 16-01, LT3, Language and Translation Technology Team- Ghent University.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Monday mornings are my fave #not: Exploring the Automatic Recognition of Irony in English tweets",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, 26th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2730--2739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2016b. Monday mornings are my fave #not: Explor- ing the Automatic Recognition of Irony in English tweets. In Proceedings of COLING 2016, 26th In- ternational Conference on Computational Linguis- tics, pages 2730-2739, Osaka, Japan.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "NIHRIO at SemEval-2018 Task 3: A Simple and Accurate Neural Network Model for Irony Detection in Twitter",
"authors": [],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh Vu, Dat Quoc Nguyen, Xuan-Son Vu, Dai Quoc Nguyen, Michael Catt, and Michael Trenell. 2018. NIHRIO at SemEval-2018 Task 3: A Simple and Accurate Neural Network Model for Irony Detection in Twitter. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. ACL.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Computational irony: A survey and new perspectives",
"authors": [
{
"first": "Byron",
"middle": [
"C"
],
"last": "Wallace",
"suffix": ""
}
],
"year": 2015,
"venue": "Artificial Intelligence Review",
"volume": "43",
"issue": "4",
"pages": "467--483",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Byron C. Wallace. 2015. Computational irony: A sur- vey and new perspectives. Artificial Intelligence Re- view, 43(4):467-483.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "THU NGN at SemEval-2018 Task 3: Tweet Irony Detection with Densely Connected LSTM and Multi-task Learning",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Junxin",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuhan Wu, Fangzhao Wu, Sixing Wu, Junxin Liu, Zhigang Yuan, and Yongfeng Huang. 2018. THU NGN at SemEval-2018 Task 3: Tweet Irony Detection with Densely Connected LSTM and Multi-task Learning. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. ACL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"type_str": "figure",
"text": "accuracy = true positives + true negatives total number of instances (1) precision = true positives true positives + f alse positives (2) recall = true positives true positives + f alse negatives"
},
"TABREF2": {
"content": "<table><tr><td>class label</td><td># instances</td></tr><tr><td>Verbal irony by means of a polarity contrast</td><td>1,728</td></tr><tr><td>Other types of verbal irony</td><td>267</td></tr><tr><td>Situational irony</td><td>401</td></tr><tr><td>Non-ironic</td><td>604</td></tr></table>",
"text": ".",
"num": null,
"type_str": "table",
"html": null
},
"TABREF3": {
"content": "<table/>",
"text": "Distribution of the different irony categories in the corpus",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>, when consid-</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>team UCDCC THU NGN NTUA-SLP WLV NLPRL-IITBHU NCL RM@IT #NonDicevo-SulSerio DLUTNLP-1 ELiRF-UPV</td><td>acc 0.797 0.735 0.732 0.643 0.661 0.702 0.691 0.666 0.628 0.611</td><td>precision recall 0.788 0.669 0.724 F1 0.630 0.801 0.705 0.654 0.691 0.672 0.532 0.836 0.650 0.551 0.788 0.648 0.609 0.691 0.648 0.598 0.679 0.636 0.562 0.717 0.630 0.520 0.797 0.629 0.506 0.833 0.629</td></tr></table>",
"text": "Official (CodaLab) results for Task A, ranked by F 1 score. The highest scores in each column are shown in bold and the baselines are indicated in purple. that the UCDCC team ranks first (F 1 = 0.724), followed by THU NGN, NTUA-SLP, WLV and NLPRL-IITBHU, whose approach was discussed earlier in this paper. The UCDCC-system is an LSTM model exploiting Glove word embedding features.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table><tr><td>team #NonDicevo-SulSerio INAOE-UPV RM@IT ValenTO UTMN IITG LDR milkstouts INGEOTEC-IIMAS</td><td>acc 0.679 0.651 0.649 0.598 0.603 0.556 0.571 0.584 0.643</td><td>precision recall 0.583 0.666 0.622 F1 0.546 0.714 0.618 0.544 0.714 0.618 0.496 0.781 0.607 0.500 0.556 0.527 0.450 0.540 0.491 0.455 0.408 0.431 0.427 0.142 0.213 0.897 0.113 0.200</td></tr></table>",
"text": "Best constrained systems for Task A.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table/>",
"text": "Best unconstrained systems for Task A.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF9": {
"content": "<table/>",
"text": "Official (CodaLab) results for Task B, ranked by F 1 score. The highest scores in each column are shown in bold and the baselines are indicated in purple.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF10": {
"content": "<table><tr><td>team UCDCC NTUA-SLP THU NGN NLPRL-IITBHU NCL Random Decision-Syntax Trees ELiRF-UPV WLV AI-KU</td><td>acc 0.732 0.652 0.605 0.603 0.659 0.633 0.633 0.671 0.584</td><td>precision recall 0.577 0.504 0.507 F1 0.496 0.512 0.496 0.486 0.541 0.495 0.466 0.506 0.474 0.545 0.448 0.444 0.487 0.439 0.435 0.412 0.440 0.421 0.431 0.415 0.415 0.422 0.402 0.393</td></tr></table>",
"text": "NLPRL-IITBHU built a Random Forest classifier making use of pre-trained DeepMoji embeddings, character embeddings (using Tweet2Vec) and sentiment lexicon features.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF11": {
"content": "<table><tr><td>team #NonDicevo SulSerio INGEOTEC-IIMAS INAOE-UPV IITG</td><td>acc 0.545 0.647 0.495 0.486</td><td>precision recall 0.409 0.441 0.413 F1 0.508 0.386 0.407 0.347 0.379 0.350 0.336 0.291 0.278</td></tr></table>",
"text": "Best constrained systems for Task B. The highest scores in each column are shown in bold.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF12": {
"content": "<table/>",
"text": "Unconstrained systems for Task B. The highest scores in each column are shown in bold.",
"num": null,
"type_str": "table",
"html": null
},
"TABREF14": {
"content": "<table/>",
"text": "Results for Task B, reporting the F 1 score for the class labels. The highest scores in each column are shown in bold.",
"num": null,
"type_str": "table",
"html": null
}
}
}
}
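
Note on the evaluation metrics (FIGREF0, equations 1-3): the sketch below is a minimal, illustrative Python implementation of accuracy, precision, recall, and the positive-class F1 score by which the result tables above are ranked. It is not the task's official CodaLab scorer; the function name `binary_metrics` and the toy labels are invented for illustration.

```python
def binary_metrics(gold, pred, positive=1):
    """Accuracy, precision, recall and F1 for one positive class.

    Implements eqs. (1)-(3) plus the harmonic-mean F1; `positive`
    marks which label counts as the positive (ironic) class.
    """
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    tn = sum(g != positive and p != positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    accuracy = (tp + tn) / len(gold)                 # eq. (1)
    precision = tp / (tp + fp) if tp + fp else 0.0   # eq. (2)
    recall = tp / (tp + fn) if tp + fn else 0.0      # eq. (3)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1


# Toy usage with hypothetical labels (1 = ironic, 0 = non-ironic):
gold = [1, 0, 1, 1, 0, 0, 1, 0]
pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(binary_metrics(gold, pred))  # (0.75, 0.75, 0.75, 0.75)
```

For the multiclass Task B, a natural extension (assuming macro-averaging) would apply `binary_metrics` once per class label and average the per-class scores.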