{
"paper_id": "S18-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:43:49.909013Z"
},
"title": "THU NGN at SemEval-2018 Task 3: Tweet Irony Detection with Densely Connected LSTM and Multi-task Learning",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "Asia {wuch15,wu-sx15,ljx16,yuanzg14"
}
},
"email": "[email protected]"
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Junxin",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Detecting irony is an important task to mine fine-grained information from social web messages. Therefore, the Semeval-2018 task 3 is aimed to detect the ironic tweets (subtask A) and their irony types (subtask B). In order to address this task, we propose a system based on a densely connected LSTM network with multi-task learning strategy. In our dense LSTM model, each layer will take all outputs from previous layers as input. The last LSTM layer will output the hidden representations of texts, and they will be used in three classification task. In addition, we incorporate several types of features to improve the model performance. Our model achieved an F-score of 70.54 (ranked 2/43) in the subtask A and 49.47 (ranked 3/29) in the subtask B. The experimental results validate the effectiveness of our system.",
"pdf_parse": {
"paper_id": "S18-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "Detecting irony is an important task to mine fine-grained information from social web messages. Therefore, the Semeval-2018 task 3 is aimed to detect the ironic tweets (subtask A) and their irony types (subtask B). In order to address this task, we propose a system based on a densely connected LSTM network with multi-task learning strategy. In our dense LSTM model, each layer will take all outputs from previous layers as input. The last LSTM layer will output the hidden representations of texts, and they will be used in three classification task. In addition, we incorporate several types of features to improve the model performance. Our model achieved an F-score of 70.54 (ranked 2/43) in the subtask A and 49.47 (ranked 3/29) in the subtask B. The experimental results validate the effectiveness of our system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Figurative languages such as irony are widely used in web messages such as tweets to convey different sentiment. Identifying the ironic texts can help to understand the social web better and has many applications such as sentiment analysis (Ghosh and Veale, 2016) . Irony detecting techniques are important to improve the performance of sentiment analysis. For example, the tweet \"Monday mornings are my fave:)# not\" is an irony with negative sentiment, but it will be probably classified as a positive one by a standard sentiment analysis model (Van Hee et al., 2016b) . Thus, capturing the ironic information in texts is useful to predict sentiment more accurately (Van Hee et al., 2016a) .",
"cite_spans": [
{
"start": 240,
"end": 263,
"text": "(Ghosh and Veale, 2016)",
"ref_id": "BIBREF6"
},
{
"start": 546,
"end": 569,
"text": "(Van Hee et al., 2016b)",
"ref_id": "BIBREF16"
},
{
"start": 667,
"end": 690,
"text": "(Van Hee et al., 2016a)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, determining whether a text is ironic is a challenging task since the the differences between ironic and non-ironic texts are usually subtle. For example, the tweet \"Love this weather #not\" is ironic, but a similar tweet \"Hate this weather #not happy\" is non-ironic. Different approaches are proposed to recognize the complex irony in texts. Existing methods to detect irony are mainly based on rules or machine learning techniques (Joshi et al., 2017) . Rules based methods usually depend on lexicons to identify irony (Khattri et al., 2015; Maynard and Greenwood, 2014) . However, these methods cannot utilize the contextual information from texts. Traditional machine learning based methods such as SVM (Desai and Dave, 2016) are also effective in this task, but they usually need manually feature engineering (Barbieri et al., 2014) . Recently, deep learning techniques are successfully applied to this task. For example, Ghosh et al. (2016) propose to use a CNN-LSTM model to classify the ironic and non-ironic tweets. Their method can significantly improve the classification performance without heavy feature engineering. However, existing methods are aimed to detect irony in tweets with explicit irony related hashtags. For example, tweets with #irony or #sarcasm hashtags are very likely to be ironic. Therefore, models may focus on these hashtags rather than the contextual information.",
"cite_spans": [
{
"start": 440,
"end": 460,
"text": "(Joshi et al., 2017)",
"ref_id": "BIBREF8"
},
{
"start": 528,
"end": 550,
"text": "(Khattri et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 551,
"end": 579,
"text": "Maynard and Greenwood, 2014)",
"ref_id": "BIBREF10"
},
{
"start": 714,
"end": 736,
"text": "(Desai and Dave, 2016)",
"ref_id": "BIBREF5"
},
{
"start": 821,
"end": 844,
"text": "(Barbieri et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 934,
"end": 953,
"text": "Ghosh et al. (2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To fill this gap, the SemEval-2018 task 3 1 aims to detect irony of tweets without explicit irony hashtags (Van Hee et al., 2018) . The subtask A is aimed to determine whether a tweet is ironic. the subtask B is aimed to identify the irony types of tweets: Verbal irony by means of a polarity contrast, other verbal irony and situational irony. Several examples are as follows:",
"cite_spans": [
{
"start": 107,
"end": 129,
"text": "(Van Hee et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 verbal irony by means of a polarity contrast: I love waking up with migraines #not \u2022 situational irony: most of us didn't focus in the #ADHD lecture. #irony",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to address this problem, we propose a system 2 based on a densely connected LSTM model (Wu et al., 2017) with multitask learning techniques. In our model, each LSTM layer will take all outputs of previous LSTM layers as input. Then different levels of contextual information can be learned at the same time. Our model is required to predict in three tasks simultaneously: 1) identifying the missing irony related hashtags; 2) classify ironic or non-ironic; 3) irony type classification. By using multitask learning strategy, the model can combine the information in the different tasks to improve the performance. The experimental results in both subtasks validate the effectiveness of our method.",
"cite_spans": [
{
"start": 96,
"end": 113,
"text": "(Wu et al., 2017)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The architecture of our densely connected LSTM model is shown in Figure 1 . We denote this model as Dense-LSTM. The detailed information will be introduced in the following paragraphs. In our model, the embedding layer is used to convert the input tweets into a sequence of dense vectors. The POS tag features P i are one-hot encoded and concatenated with the word embedding vectors E i . Usually the affective words and creative languages in tweets are important irony clues. Since these words usually have specific POS tags, adding these features can help our model to capture the ironic information better. We use tweetokenize 3 tool to tokenize and the Ark-Tweet-NLP 4 tool to obtain the POS tags of tweets (Owoputi et al., 2013) .",
"cite_spans": [
{
"start": 711,
"end": 733,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 65,
"end": 73,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
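{
"text": "As a concrete illustration of this input construction, here is a minimal sketch in Python (our own illustration, not the authors' released code; the embedding matrix, the tag-set size of 25, and all dimensions are assumptions):\n\nimport numpy as np\n\ndef build_inputs(token_ids, pos_ids, emb_matrix, n_pos_tags):\n    # Look up the pre-trained embedding vector E_i of each token.\n    word_vecs = emb_matrix[token_ids]            # (seq_len, emb_dim)\n    # One-hot encode the POS tag feature P_i of each token.\n    pos_onehot = np.eye(n_pos_tags)[pos_ids]     # (seq_len, n_pos_tags)\n    # Concatenate word embeddings with POS features as the model input.\n    return np.concatenate([word_vecs, pos_onehot], axis=-1)\n\n# Toy usage: 5 tokens, 700-dim embeddings (400 + 300), 25 POS tags.\nemb = np.random.randn(1000, 700)\nx = build_inputs(np.array([3, 7, 42, 7, 9]), np.array([1, 4, 0, 4, 2]), emb, 25)\nprint(x.shape)  # (5, 725)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},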
{
"text": "The first Bi-LSTM layer takes the sequential vectors as input. For the j th Bi-LSTM layer, its output H j will input all LSTM layers after it. As shown in Figure 1 , the blue dashed lines represent such over-layer connections. All inputs of an LSTM layer will be concatenated together. Thus, the input of the j th (j > 1) layer is [H 1 ; ...; H j\u22121 ]. It indicates that each layer can learn different levels of information at the same time. Since the irony information is complex, jointly using all levels of information is beneficial to predict irony more accurately. The last LSTM layer will output the hidden representation H of texts. It will be concatenated with the sentiment features and the sentence embedding features. The sentiment features can provide additional sentiment information to detect irony, such as the sentiment polarity assigned by lexicons. The sentiment features are generated via the Af-fectiveTweets 5 package in weka provided by Mohammad et al. (Mohammad and Bravo-Marquez, 2017) . We use the TweetToLexiconFeatureVector (Bravo-Marquez et al., 2014) and TweetToSen-tiStrengthFeatureVector (Thelwall et al., 2012) filters in this package. The embedding of a sentence is obtained by taking the average of all words in this sentence using the 100-dim pre-trained embedding weights provided by Bravo et al. (Bravo-Marquez et al., 2016) . By incorporating the vector representation of tweet sentence, the irony information can be easier to be captured.",
"cite_spans": [
{
"start": 958,
"end": 1008,
"text": "Mohammad et al. (Mohammad and Bravo-Marquez, 2017)",
"ref_id": "BIBREF11"
},
{
"start": 1050,
"end": 1078,
"text": "(Bravo-Marquez et al., 2014)",
"ref_id": "BIBREF4"
},
{
"start": 1118,
"end": 1141,
"text": "(Thelwall et al., 2012)",
"ref_id": "BIBREF14"
},
{
"start": 1332,
"end": 1360,
"text": "(Bravo-Marquez et al., 2016)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 155,
"end": 163,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
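{
"text": "The dense connectivity can be sketched with a Keras-style functional API (a sketch under our assumptions; the 4 layers and 200-dim hidden states follow Section 3.1, while the input dimension and everything else are illustrative):\n\nfrom tensorflow.keras.layers import Input, LSTM, Bidirectional, Concatenate\n\ninp = Input(shape=(None, 725))   # per-token vectors: word embedding + POS features\nlayer_outputs = []\nfor j in range(4):               # 4 Bi-LSTM layers with 200-dim hidden states\n    if j == 0:\n        x = inp                  # the first layer reads the input sequence\n    elif j == 1:\n        x = layer_outputs[0]\n    else:\n        # dense connection: concatenate ALL previous layers' output sequences\n        x = Concatenate()(layer_outputs)\n    last = (j == 3)\n    out = Bidirectional(LSTM(200, return_sequences=not last))(x)\n    layer_outputs.append(out)\nH = layer_outputs[-1]            # final hidden representation of the tweet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},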
{
"text": "Three dense layers with ReLU activation are used to predict for three different tasks including: determining the missing ironic hashtags (i.e. #not, #sarcasm, #irony or none of them) (task1); identifying ironic or non-ironic (task2) ; identifying the irony types (task3). Thus, the objective function of our model can be formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u03b1 1 L 1 + \u03b1 2 L 2 + \u03b1 3 L 3 ,",
"eq_num": "(1)"
}
],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": "where L i and \u03b1 i denote the loss function and its weight of task i. L 1 and L 2 are categorical and binary cross-entropy respectively. In addition, the numbers of tweets with different irony types are very unbalanced. Motivated by the cost-sensitive entropy used by Santos et al. 2009, we formulate L 3 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L = \u2212 N i=1 w y i y i log(\u0177 i ),",
"eq_num": "(2)"
}
],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": "where N is the number of tweets, y i is the irony type of the i th tweet,\u0177 i is the prediction score, and w y i is the loss weight of irony type label",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": "y i . w y i is defined as C k=1 N k Ny i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
{
"text": ", where C is the number of irony types and N j is the number of tweets with irony type label j. Thus, the infrequent irony types will gain relatively larger loss weights. By using this multi-task learning method, our model can incorporate different information such as the irony hashtags. In addition, classifying ironic/non-ironic and the irony types are similar tasks. Therefore, the performance of both tasks can be improved by combining the information of both tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
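{
"text": "As a worked example of the loss weights in Eq. (2): w_{y_i} = (sum_k N_k) / N_{y_i}, so infrequent irony types receive larger weights. The class counts below are hypothetical, not the dataset statistics:\n\nimport numpy as np\n\n# Hypothetical tweet counts for the C = 3 irony types (illustrative only).\ncounts = np.array([1390, 316, 205])     # N_1, N_2, N_3\nweights = counts.sum() / counts         # w_j = (sum_k N_k) / N_j\nprint(weights)                          # rarer classes -> larger loss weights\n\ndef cost_sensitive_ce(y_true_onehot, y_pred, w):\n    # L_3 = -sum_i w_{y_i} * y_i * log(y_hat_i), averaged over tweets\n    per_tweet = -(w * y_true_onehot * np.log(y_pred + 1e-12)).sum(axis=1)\n    return per_tweet.mean()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},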
{
"text": "In order to improve the performance of our system, we use an ensemble strategy by averaging the classification results predicted by 10 models. Each model will be trained using a random dropout rate. Therefore in this way, the classification results will be voted by different models, which can improve the model performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},
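{
"text": "A minimal sketch of this ensemble step under our reading (Keras-style models exposing a predict method are assumed):\n\nimport numpy as np\n\ndef ensemble_predict(models, x):\n    # Average the class-probability outputs of the 10 models, then vote by argmax.\n    probs = np.mean([m.predict(x) for m in models], axis=0)\n    return probs.argmax(axis=-1)\n\n# During training, each of the 10 models draws its own dropout rate, e.g.:\n# dropout_rate = np.random.uniform(0.2, 0.4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Densely Connected LSTM with Multi-task Learning",
"sec_num": "2"
},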
{
"text": "The detailed statistics of the dataset 6 in this task are shown in Table 1 . V-irony, O-irony and S-irony represent the three types respectively: verbal irony by means of a polarity contrast, other types of verbal irony and situational irony (Van Hee et al., 2018) . In subtask A, the performance of systems is evaluated by F-score for the positive class. In subtask B, the macro-averaged F-score over all classes is used as the metric. We combine two pre-trained word embeddings: 1) the embeddings provided by Godin et al. (2015) , which are trained on a corpus with 400 million tweets; 2) the embeddings provided by Barbieri et al. (2016) , which are trained on 20 million tweets. The dimensions of them are 400 and 300 respectively. They are concatenated together as the embeddings of words.",
"cite_spans": [
{
"start": 242,
"end": 264,
"text": "(Van Hee et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 511,
"end": 530,
"text": "Godin et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 618,
"end": 640,
"text": "Barbieri et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 67,
"end": 74,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Dataset and Experimental Settings",
"sec_num": "3.1"
},
{
"text": "In our network, the Dense-LSTM model has 4 LSTM layers with 200-dim hidden states. The hidden dimensions of dense layers are set to 300. The dropout rate of each layer is set to a random number between 0.2 to 0.4, and it will be set to a fixed value 0.3 in the comparative experiments without ensemble strategy. In subtask A, the loss weights \u03b1 of the three task are set to 0.5, 1 and 0.5 respectively. In subtask B, they are 0.5, 0.5 and 1. We use RMSProp as the optimizer, and the batch size is set to 64. In addition, we use 10% training data for validation to select the hyperparameters above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Experimental Settings",
"sec_num": "3.1"
},
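{
"text": "A hedged Keras-style sketch of this multi-task training configuration (the dense stand-in for the Dense-LSTM encoder, the head sizes and output names are our assumptions; the loss weights and the RMSProp optimizer follow the text, and plain cross-entropy stands in for the cost-sensitive L_3):\n\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.optimizers import RMSprop\n\ninp = Input(shape=(725,))\nh = Dense(300, activation='relu')(inp)    # stand-in for the Dense-LSTM encoder\nout1 = Dense(4, activation='softmax', name='task1')(h)   # missing hashtag\nout2 = Dense(1, activation='sigmoid', name='task2')(h)   # ironic or not\nout3 = Dense(3, activation='softmax', name='task3')(h)   # irony type\nmodel = Model(inp, [out1, out2, out3])\nmodel.compile(optimizer=RMSprop(),\n              loss=['categorical_crossentropy',    # L_1\n                    'binary_crossentropy',         # L_2\n                    'categorical_crossentropy'],   # L_3 (cost-sensitive in the paper)\n              loss_weights=[0.5, 1.0, 0.5])        # Subtask A setting\n\n# model.fit(x_train, [y1, y2, y3], batch_size=64, validation_split=0.1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset and Experimental Settings",
"sec_num": "3.1"
},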
{
"text": "We compare the performance of different methods including: 1) SVM, the benchmark system using SVM and BOW model; 2) CNN, using CNN with a global average pooling layer to obtain the hidden vector h, which is used to predict in the three tasks; 3) LSTM, using one Bi-LSTM layer in the network to get h; 4) 2-layer LSTM, using 2 Bi-LSTM layers; 5) Dense-LSTM, using our Dense-LSTM model; 6) Dense-LSTM+ens, using our Dense-LSTM model and ensemble strategy. In addition, we apply multi-task learning technique to all models except the benchmark system based on SVM. The results are shown in Table 1 . The experimental results show that our Dense-LSTM model significantly outperforms the baselines. Since the layers in our Dense-LSTM can learn from all previous outputs, our model can combine different levels of contextual information to capture the high-level irony clues. In addition, our model can predict more accurately via ensemble. Since models with random dropout can extract different information, we can take advantage of all models by voting. The ensemble strategy can reduce the noise in the dataset and make our system more stable (Xia et al., 2011) . ",
"cite_spans": [
{
"start": 1141,
"end": 1159,
"text": "(Xia et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 587,
"end": 595,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Performance Evaluation",
"sec_num": "3.2"
},
{
"text": "The performance of our Dense-LSTM model using different combinations of training tasks is shown in Table 3 . Note that we don't apply model ensemble here. Compared with the models trained in task2 or task3 only, the combination of both tasks can improve the performance. It may be because the two tasks have inherent relatedness and can share rich mutual information. Learning to predict the missing ironic hashtags (task1) can also improve the model performance. Since the ironic hashtags are often important ironic clues, identifying such clues can help our model to mine ironic information better.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effectiveness of Multi-task Learning",
"sec_num": "3.3"
},
{
"text": "We compare the performance using different combinations of pre-trained embeddings in our model. The results are illustrated in Table 4 . The results show that the pre-trained embeddings are important to capture irony information, and using the Table 4 : Influence of pre-trained word embedding. The emb1 and emb2 denote the embeddings provided by Godin et al. (2015) and Barbieri et al. (2016) respectively.",
"cite_spans": [
{
"start": 347,
"end": 366,
"text": "Godin et al. (2015)",
"ref_id": "BIBREF7"
},
{
"start": 371,
"end": 393,
"text": "Barbieri et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 127,
"end": 134,
"text": "Table 4",
"ref_id": null
},
{
"start": 244,
"end": 251,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Pre-trained Word Embedding",
"sec_num": "3.4"
},
{
"text": "The influence of different features on our model is shown in Table 5 . According to this table, all features can improve the classification performance in both subtasks, and the combination of the three features can achieve better performance. The improvement brought by POS tags is most significant. Affective words are important irony clues and they are usually verbs, adjectives or hashtags. Thus, incorporating the POS tag features can help to identify these words and capture the ironic information better. The sentiment features also improve our model, which can be inferred from the results. The sentiment polarities of ironic tweets are usually negative, but these texts often contain positive sentiment words. Since our sentiment features are obtained by several different sentiment or emotion lexicons, they can be used to assign the sentiment scores of texts, which can provide rich information to detect irony. The sentence embedding can also slightly improve the performance. The sentence embedding contains information of each word in the sentence. Thus, it can help to capture the word information better, which is ben-eficial to identify the overall sentiment of texts. The combination of all three types of features can take advantage of them and gain significant performance improvement. It validates the effectiveness of each type of features. ",
"cite_spans": [],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 5",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Influence of Additional Features",
"sec_num": "3.5"
},
{
"text": "Detecting irony in web texts is an important task to mine fine-grained sentiment information. In order to address this problem, we develop a system based on a densely connected LSTM model to participate in the SemEval-2018 Task 3. In our model, every LSTM layer will take all outputs of previous layers as inputs. Thus, the different levels of information can be learned at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "In addition, we propose to combine three different tasks to train our model jointly, which includes identifying the missing irony hashtags, determining ironic or non-ironic and classifying the irony types. These tasks have inherent relatedness thus the performance can be improved by sharing the mutual information. Our system achieved an Fscore of 70.54 and 49.47 which ranked the 2nd and 3rd place in the two subtasks. The experimental results validates the effectiveness of our method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "https://competitions.codalab.org/competitions/17468",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/wuch15/SemEval-2018-task3-THU NGN.git",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/jaredks/tweetokenize 4 http://www.cs.cmu.edu/ ark/TweetNLP 5 https://github.com/felipebravom/AffectiveTweets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/Cyvhee/SemEval2018-Task3/tree/master/datasets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank the reviewers for their insightful comments and constructive suggestions on improving this work. This work was supported in part by the National Key Research and Development Program of China under Grant 2016YFB0800402 and in part by the National Natural Science Foundation of China under Grant U1705261, Grant U1536207, Grant U1536201 and U1636113.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How cosmopolitan are emojis?: Exploring emojis usage and meaning over different languages with distributional semantics",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM on Multimedia Conference",
"volume": "",
"issue": "",
"pages": "531--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, German Kruszewski, Francesco Ronzano, and Horacio Saggion. 2016. How cos- mopolitan are emojis?: Exploring emojis usage and meaning over different languages with distributional semantics. In Proceedings of the 2016 ACM on Mul- timedia Conference, pages 531-535. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Modelling sarcasm in twitter, a novel approach",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, Horacio Saggion, and Francesco Ronzano. 2014. Modelling sarcasm in twitter, a novel approach. In Proceedings of the 5th Work- shop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 50-58.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Determining word-emotion associations from tweets by multilabel classification",
"authors": [
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
}
],
"year": 2016,
"venue": "Web Intelligence (WI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felipe Bravo-Marquez, Eibe Frank, Saif M Moham- mad, and Bernhard Pfahringer. 2016. Determining word-emotion associations from tweets by multi- label classification. In Web Intelligence (WI), 2016",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "ACM International Conference on",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "536--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "IEEE/WIC/ACM International Conference on, pages 536-539. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Meta-level sentiment models for big social data analysis. Knowledge-Based Systems",
"authors": [
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Mendoza",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "69",
"issue": "",
"pages": "86--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felipe Bravo-Marquez, Marcelo Mendoza, and Bar- bara Poblete. 2014. Meta-level sentiment models for big social data analysis. Knowledge-Based Systems, 69:86-99.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Sarcasm detection in hindi sentences using support vector machine",
"authors": [
{
"first": "Nikita",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Anandkumar D Dave",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal",
"volume": "4",
"issue": "7",
"pages": "8--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikita Desai and Anandkumar D Dave. 2016. Sar- casm detection in hindi sentences using support vec- tor machine. International Journal, 4(7):8-15.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Fracking sarcasm using neural network",
"authors": [
{
"first": "Aniruddha",
"middle": [],
"last": "Ghosh",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Veale",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "161--169",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aniruddha Ghosh and Tony Veale. 2016. Fracking sarcasm using neural network. In Proceedings of the 7th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 161-169.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Multimedia lab @ acl wnut ner shared task: Named entity recognition for twitter microposts using distributed word representations",
"authors": [
{
"first": "Fr\u00e9deric",
"middle": [],
"last": "Godin",
"suffix": ""
},
{
"first": "Baptist",
"middle": [],
"last": "Vandersmissen",
"suffix": ""
},
{
"first": "Wesley",
"middle": [],
"last": "De Neve",
"suffix": ""
},
{
"first": "Rik",
"middle": [],
"last": "Van De Walle",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Workshop on Noisy User-generated Text",
"volume": "",
"issue": "",
"pages": "146--153",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fr\u00e9deric Godin, Baptist Vandersmissen, Wesley De Neve, and Rik Van de Walle. 2015. Multimedia lab @ acl wnut ner shared task: Named entity recog- nition for twitter microposts using distributed word representations. In Proceedings of the Workshop on Noisy User-generated Text, pages 146-153.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatic sarcasm detection: A survey",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"J"
],
"last": "Car",
"suffix": ""
}
],
"year": 2017,
"venue": "ACM Computing Surveys (CSUR)",
"volume": "50",
"issue": "5",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Joshi, Pushpak Bhattacharyya, and Mark J Car- man. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):73.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Your sentiment precedes you: Using an authors historical tweets to predict sarcasm",
"authors": [
{
"first": "Anupam",
"middle": [],
"last": "Khattri",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Pushpak",
"middle": [],
"last": "Bhattacharyya",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Carman",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anupam Khattri, Aditya Joshi, Pushpak Bhat- tacharyya, and Mark Carman. 2015. Your sentiment precedes you: Using an authors historical tweets to predict sarcasm. In Proceedings of the 6th Work- shop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 25-30.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis",
"authors": [
{
"first": "Diana",
"middle": [],
"last": "Maynard",
"suffix": ""
},
{
"first": "Mark",
"middle": [
"A"
],
"last": "Greenwood",
"suffix": ""
}
],
"year": 2014,
"venue": "Lrec",
"volume": "",
"issue": "",
"pages": "4238--4243",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the im- pact of sarcasm on sentiment analysis. In Lrec, pages 4238-4243.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Wassa-2017 shared task on emotion intensity",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.03700"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "Brendan",
"middle": [],
"last": "O'Connor",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Cost-sensitive classification based on bregman divergences for medical diagnosis",
"authors": [
{
"first": "Ra\u00fal",
"middle": [],
"last": "Santos-Rodr\u00edguez",
"suffix": ""
},
{
"first": "Dar\u00edo",
"middle": [],
"last": "Garc\u00eda-Garc\u00eda",
"suffix": ""
},
{
"first": "Jes\u00fas",
"middle": [],
"last": "Cid-Sueiro",
"suffix": ""
}
],
"year": 2009,
"venue": "Machine Learning and Applications, 2009. ICMLA'09. International Conference on",
"volume": "",
"issue": "",
"pages": "551--556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ra\u00fal Santos-Rodr\u00edguez, Dar\u00edo Garc\u00eda-Garc\u00eda, and Jes\u00fas Cid-Sueiro. 2009. Cost-sensitive classifi- cation based on bregman divergences for medi- cal diagnosis. In Machine Learning and Applica- tions, 2009. ICMLA'09. International Conference on, pages 551-556. IEEE.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sentiment strength detection for the social web",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
},
{
"first": "Kevan",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "63",
"issue": "1",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall, Kevan Buckley, and Georgios Pal- toglou. 2012. Sentiment strength detection for the social web. Journal of the Association for Informa- tion Science and Technology, 63(1):163-173.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Exploring the realization of irony in twitter data",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2016a. Exploring the realization of irony in twitter data. In LREC.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Monday mornings are my fave:)# not exploring the automatic recognition of irony in english tweets",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2730--2739",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2016b. Monday mornings are my fave:)# not ex- ploring the automatic recognition of irony in en- glish tweets. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2730-2739.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "SemEval-2018 Task 3: Irony Detection in English Tweets",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Van Hee",
"suffix": ""
},
{
"first": "Els",
"middle": [],
"last": "Lefever",
"suffix": ""
},
{
"first": "V\u00e9ronique",
"middle": [],
"last": "Hoste",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 12th International Workshop on Semantic Evaluation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cynthia Van Hee, Els Lefever, and V\u00e9ronique Hoste. 2018. SemEval-2018 Task 3: Irony Detection in English Tweets. In Proceedings of the 12th Interna- tional Workshop on Semantic Evaluation, SemEval- 2018, New Orleans, LA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Thu ngn at ijcnlp-2017 task 2: Dimensional sentiment analysis for chinese phrases with deep lstm",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IJCNLP 2017, Shared Tasks",
"volume": "",
"issue": "",
"pages": "47--52",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chuhan Wu, Fangzhao Wu, Yongfeng Huang, Sixing Wu, and Zhigang Yuan. 2017. Thu ngn at ijcnlp- 2017 task 2: Dimensional sentiment analysis for chi- nese phrases with deep lstm. Proceedings of the IJCNLP 2017, Shared Tasks, pages 47-52.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Ensemble of feature sets and classification algorithms for sentiment classification",
"authors": [
{
"first": "Rui",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Shoushan",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2011,
"venue": "Information Sciences",
"volume": "181",
"issue": "6",
"pages": "1138--1152",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rui Xia, Chengqing Zong, and Shoushan Li. 2011. En- semble of feature sets and classification algorithms for sentiment classification. Information Sciences, 181(6):1138-1152.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Architecture of our Dense-LSTM model. The V-irony, O-irony and S-irony denote the three different irony types respectively(Van Hee et al., 2018)."
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"html": null,
"type_str": "table",
"text": "The performance of different methods. P, R, F represent precision, recall and F-score respectively.",
"content": "<table/>"
},
"TABREF6": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"5\">: The performance in two subtasks using differ-ent combinations of training tasks.</td></tr><tr><td colspan=\"5\">combination of two different word embeddings</td></tr><tr><td colspan=\"5\">can improve the model performance. It proves</td></tr><tr><td colspan=\"5\">that this method can reduce the out-of-vocabulary</td></tr><tr><td colspan=\"5\">words in the single embedding file and provide</td></tr><tr><td colspan=\"3\">richer semantic information.</td><td/><td/></tr><tr><td>Feature</td><td>P</td><td>Subtask A R</td><td>F</td><td>Subtask B Macro-F</td></tr><tr><td colspan=\"4\">w/o pre-trained 56.25 67.14 61.21</td><td>42.28</td></tr><tr><td>+emb1</td><td colspan=\"3\">60.96 69.95 65.14</td><td>47.69</td></tr><tr><td>+emb2</td><td colspan=\"3\">61.77 70.59 65.89</td><td>47.24</td></tr><tr><td colspan=\"4\">+emb1 +emb2 62.78 72.69 67.36</td><td>48.28</td></tr></table>"
},
"TABREF8": {
"num": null,
"html": null,
"type_str": "table",
"text": "Influence of different features on our model.",
"content": "<table/>"
}
}
}
}