{
"paper_id": "S18-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:44:37.928805Z"
},
"title": "THU NGN at SemEval-2018 Task 1: Fine-grained Tweet Sentiment Intensity Analysis with Attention CNN-LSTM",
"authors": [
{
"first": "Chuhan",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Fangzhao",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Microsoft Research",
"location": {
"addrLine": "Asia {wuch15,wu-sx15,ljx16,yuanzg14"
}
},
"email": "[email protected]"
},
{
"first": "Junxin",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Zhigang",
"middle": [],
"last": "Yuan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Sixing",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": ""
},
{
"first": "Yongfeng",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Tsinghua University",
"location": {
"postCode": "100084",
"settlement": "Beijing",
"country": "China"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Traditional sentiment analysis approaches mainly focus on classifying the sentiment polarities or emotion categories of texts. However, they can't exploit the sentiment intensity information. Therefore, the SemEval-2018 Task 1 is aimed to automatically determine the intensity of emotions or sentiment of tweets to mine fine-grained sentiment information. In order to address this task, we propose a system based on an attention CNN-LSTM model. In our model, LSTM is used to extract the long-term contextual information from texts. We apply attention techniques to selecting this information. A CNN layer with different kernel sizes is used to extract local features. The dense layers take the pooled CNN feature maps and predict the intensity scores. Our system achieves an average Pearson correlation score of 0.722 (ranked 12/48) in the emotion intensity regression task, and 0.810 in the valence regression task (ranked 15/38). It indicates that our system can be further extended.",
"pdf_parse": {
"paper_id": "S18-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "Traditional sentiment analysis approaches mainly focus on classifying the sentiment polarities or emotion categories of texts. However, they can't exploit the sentiment intensity information. Therefore, the SemEval-2018 Task 1 is aimed to automatically determine the intensity of emotions or sentiment of tweets to mine fine-grained sentiment information. In order to address this task, we propose a system based on an attention CNN-LSTM model. In our model, LSTM is used to extract the long-term contextual information from texts. We apply attention techniques to selecting this information. A CNN layer with different kernel sizes is used to extract local features. The dense layers take the pooled CNN feature maps and predict the intensity scores. Our system achieves an average Pearson correlation score of 0.722 (ranked 12/48) in the emotion intensity regression task, and 0.810 in the valence regression task (ranked 15/38). It indicates that our system can be further extended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Detecting the intensity of sentiment is an important task for fine-grained sentiment analysis (Kiritchenko et al., 2016; Mohammad and Bravo-Marquez, 2017) . Intensity refers to the degree or amount of an emotion or degree of sentiment. For example, we can express our emotion by \"very happy\" or \"a little angry\". The intensity can be analysis in multiple categories (i.e. low, moderate and high) or real-valued. Identifying the intensity information of sentiment has potential to applications such as electronic business, social computing and public health (Wilson, 2008) .",
"cite_spans": [
{
"start": 94,
"end": 120,
"text": "(Kiritchenko et al., 2016;",
"ref_id": null
},
{
"start": 121,
"end": 154,
"text": "Mohammad and Bravo-Marquez, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 557,
"end": 571,
"text": "(Wilson, 2008)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Twitter is a social platform which contains rich textual content. There have been many approaches to twitter sentiment analysis (Khan et al., 2015; Severyn and Moschitti, 2015; Philander et al., 2016) . However, twitter sentiment analysis is challenging because tweets usually contain nonstandard languages, including emoticons, emojis, creatively spelled words, and hash tags (Mohammad and Bravo-Marquez, 2017) . In order to improve the collective techniques on tweet sentiment intensity analysis, the SemEval-2018 Task 1 is aimed to identify the categorical and real-valued intensity of emotions or sentiment for English, Arabic, and Spanish (Mohammad et al., 2018) .",
"cite_spans": [
{
"start": 128,
"end": 147,
"text": "(Khan et al., 2015;",
"ref_id": null
},
{
"start": 148,
"end": 176,
"text": "Severyn and Moschitti, 2015;",
"ref_id": "BIBREF12"
},
{
"start": 177,
"end": 200,
"text": "Philander et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 377,
"end": 411,
"text": "(Mohammad and Bravo-Marquez, 2017)",
"ref_id": "BIBREF5"
},
{
"start": 615,
"end": 667,
"text": "English, Arabic, and Spanish (Mohammad et al., 2018)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Existing approaches to analysis the intensity of emotions or sentiment are mainly based on lexicons and supervised learning. Lexicon-based methods usually rely on lexicons to assign the intensity scores of affective words in texts (Mohammad and Bravo-Marquez, 2017). However, these method can't utilize the contextual information from texts. Supervised methods are mainly based on SVR (Madisetty and Desarkar, 2017) , linear regression (John and Vechtomova, 2017) and neural networks (Goel et al., 2017; K\u00f6per et al., 2017) . Usually neural network-based methods outperform SVR and linear regression-based methods siginificantly. Motivated by the successful applications of neural models in this task, we propose a system using a CNN-LSTM model with attention mechanism. Firstly, a tweet will be converted into a sequence of dense vectors by an embedding layer. Next, we use a Bi-LSTM layer to extract contextual information from them. The sequential features will be selected by an attention layer. Then we apply a CNN with different kernel sizes to extracting different local information. Thus, our model can exploit both local and longterm information by combining CNN and LSTM. Finally, two dense layers are used to predict the intensity scores. The system performance quantified by an average Pearson correlation score is 0.722 in the emotion intensity regression task (EIreg) and 0.810 in the valence regression task (V-reg). Our model outperforms several baseline neural networks, which proves that our model can identify the intensity of emotions and sentiment effectively.",
"cite_spans": [
{
"start": 385,
"end": 415,
"text": "(Madisetty and Desarkar, 2017)",
"ref_id": "BIBREF2"
},
{
"start": 484,
"end": 503,
"text": "(Goel et al., 2017;",
"ref_id": null
},
{
"start": 504,
"end": 523,
"text": "K\u00f6per et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Sentiment analysis in social media such as Twitter is an important task for opinion mining (Severyn and Moschitti, 2015) . Traditional Twitter sentiment analysis methods mainly focus on identifying the polarities (Da Silva et al., 2014; dos Santos and Gatti, 2014) or emotion categories (Dini and Bittar, 2016) of tweets. However, it's a difficult task to analysis the noisy tweets. They usually contain various nonstandard languages including emoticons, emojis, creatively spelled words and hash tags. In addition, these languages usually contain rich sentiment information. In order to capture such information, several lexicon-based methods are proposed. Nielsen et al. (2011) proposed to use a dictionary to incorporate emoticon information into tweet analysis models. Mohammad et al. proposed to use hash tags to identify emotion categories of tweets (2015). These lexicon-based methods are free from manual annotation, but they rely on the emotion lexicons and can't mine high-level contextual information from tweets. Supervised methods such as neural networks are also applied to tweet sentiment analysis. For example, Dos et al. (2014) propose to classify tweets using a deep convolutional neural network. Approaches based on deep neural networks need sufficient samples to train, but they usually outperforms lexicon-based methods in these tasks.",
"cite_spans": [
{
"start": 91,
"end": 120,
"text": "(Severyn and Moschitti, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 213,
"end": 236,
"text": "(Da Silva et al., 2014;",
"ref_id": null
},
{
"start": 237,
"end": 264,
"text": "dos Santos and Gatti, 2014)",
"ref_id": "BIBREF11"
},
{
"start": 658,
"end": 679,
"text": "Nielsen et al. (2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "However, these approaches usually ignore the intensity of emotions and sentiment, which provides important information for fine-grained sentiment analysis. Therefore, in order to capture such information, Mohammad et al. proposed to identify the emotion and sentiment intensity (valence) of texts (2016). Different approaches have been proposed to detect the tweet emotion intensity in the EmoInt-2017 shared task (Mohammad and Bravo-Marquez, 2017). For example, Madisetty et al. (2017) proposed an ensemble model based on SVR. Goel et al. (2017) and Koper et al. (2017) applied CNN-LSTM architecture to this task. These systems reached the top ranks in the EmoInt shared task.",
"cite_spans": [
{
"start": 463,
"end": 486,
"text": "Madisetty et al. (2017)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Motivated by the successful application of CNN-LSTM model Chen et al., 2016 ) and the attention mechanism for text classification (Yin et al., 2015) , we propose a system using attention-based CNN-LSTM model to address this task. In our model, we first use LSTM to extract sequential information, and select features via attention layer. Then we combine CNN with different kernel sizes to learn local information. Finally the dense layers are used to predict the intensity scores. In addition, several features are incorporated into our model. The evaluation results show that our system outperform several baseline neural networks and can be further extended.",
"cite_spans": [
{
"start": 58,
"end": 75,
"text": "Chen et al., 2016",
"ref_id": null
},
{
"start": 130,
"end": 148,
"text": "(Yin et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our network architecture is shown in Figure 1 . We will explain the detailed information of our system in the following subsections.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Attention CNN-LSTM Model",
"sec_num": "3"
},
{
"text": "As shown in Figure 1 , an embedding layer is used to provide word embedding and one-hot encoded part-of-speech (POS) tags of the input tweets. The Bi-LSTM layer takes the concatenated word embedding and POS tags as input, and output each hidden states. Let h i be the output hidden state at time step i. Then its attention weight \u03b1 i can be formulated as follows:",
"cite_spans": [],
"ref_spans": [
{
"start": 12,
"end": 20,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Network Architecture",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "m i = tanh(h i ), \u03b1 i = w i m i + b i , \u03b1 i = exp(\u03b1 i ) j exp(\u03b1 j ) ,",
"eq_num": "(1)"
}
],
"section": "Network Architecture",
"sec_num": "3.1"
},
{
"text": "where w i m i + b i denote a linear transformation of m i . Therefore, the output representation r i is given by:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "r i = \u03b1 i h i .",
"eq_num": "(2)"
}
],
"section": "Network Architecture",
"sec_num": "3.1"
},
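{
"text": "The attention computation in Eq. (1)-(2) can be sketched as a small custom layer. This is our own minimal illustration in Keras-style Python rather than the submitted code, and it assumes a single score vector w shared across time steps:\nimport tensorflow as tf\nfrom tensorflow.keras import layers\nclass TimeAttention(layers.Layer):\n    # m_i = tanh(h_i); score_i = w . m_i + b; alpha = softmax over time; r_i = alpha_i * h_i\n    def build(self, input_shape):\n        dim = input_shape[-1]\n        self.w = self.add_weight(name='w', shape=(dim, 1), initializer='glorot_uniform')\n        self.b = self.add_weight(name='b', shape=(1,), initializer='zeros')\n    def call(self, h):\n        m = tf.tanh(h)                          # (batch, T, dim)\n        scores = tf.matmul(m, self.w) + self.b  # (batch, T, 1)\n        alpha = tf.nn.softmax(scores, axis=1)   # normalize over time steps\n        return alpha * h                        # attention-weighted sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture",
"sec_num": "3.1"
},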
{
"text": "Based on such text representation, the sequence of features will be assigned with different attention weights. Thus, important information such as affective words can be identified more easily. The convolutional layer takes the text representation r i as input. We use CNN with four different kernel sizes to learn local information with different contextual length. Based on this architecture, our model can combine both long-term and local information, which can help to identify sentiment information better. The output CNN feature maps are concatenated together, and will be squeezed by a global max pooling layer. They are concatenated with the lexicon features. We use two dense layers with ReLU and sigmoid activation respectively to predict the final intensity score. In order to mitigate overfitting, we apply dropout technique at each layer to regularize our model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture",
"sec_num": "3.1"
},
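{
"text": "To make the overall architecture concrete, a minimal Keras sketch follows. It is our own reconstruction under the hyper-parameters reported in Section 4.2; the POS one-hot width pos_dim and the Adam optimizer are assumptions, TimeAttention is the layer sketched above, and for simplicity it feeds precomputed embedding vectors directly rather than fine-tuning a trainable embedding layer:\nfrom tensorflow.keras import layers, Model\ndef build_model(seq_len=50, emb_dim=700, pos_dim=25, lex_dim=49, dropout=0.2):\n    tokens = layers.Input(shape=(seq_len, emb_dim + pos_dim))  # word vectors + POS one-hots\n    lexicon = layers.Input(shape=(lex_dim,))                   # 49-dim lexicon features\n    h = layers.Bidirectional(layers.LSTM(300, return_sequences=True))(tokens)\n    r = TimeAttention()(layers.Dropout(dropout)(h))\n    convs = [layers.GlobalMaxPooling1D()(layers.Conv1D(200, k, activation='relu')(r)) for k in (3, 5, 7, 9)]\n    x = layers.Concatenate()(convs + [lexicon])                # pooled CNN maps + lexicon features\n    x = layers.Dropout(dropout)(layers.Dense(200, activation='relu')(x))\n    out = layers.Dense(1, activation='sigmoid')(x)             # intensity score in [0, 1]\n    model = Model([tokens, lexicon], out)\n    model.compile(optimizer='adam', loss='mae')                # MAE loss as in Section 4.2\n    return model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Network Architecture",
"sec_num": "3.1"
},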
{
"text": "We use Word2Vec (Mikolov et al., 2013) as the vector representation of the words in tweets. We combine two kinds of word embeddings: The first embeddings are provided by Godin et al. (2015) . They are trained on a corpus with 400 million tweets. The second embeddings are provided by Barbier et al. (2016) . They are trained on 20 million geolocalized tweets. The dimensions of two embeddings are 400 and 300 respectively. We fine-tune the word embeddings during the network training.",
"cite_spans": [
{
"start": 16,
"end": 38,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 170,
"end": 189,
"text": "Godin et al. (2015)",
"ref_id": null
},
{
"start": 284,
"end": 305,
"text": "Barbier et al. (2016)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding",
"sec_num": "3.2"
},
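{
"text": "A minimal sketch of the embedding lookup follows; the file names are hypothetical placeholders for the 400-dimensional embeddings of Godin et al. (2015) and the 300-dimensional embeddings of Barbieri et al. (2016):\nimport numpy as np\nfrom gensim.models import KeyedVectors\nemb1 = KeyedVectors.load_word2vec_format('godin_tweets_400d.bin', binary=True)\nemb2 = KeyedVectors.load_word2vec_format('barbieri_tweets_300d.bin', binary=True)\ndef embed(word):\n    # Concatenate the two embeddings into one 700-dim vector; zero vectors for OOV words.\n    v1 = emb1[word] if word in emb1 else np.zeros(400, dtype=np.float32)\n    v2 = emb2[word] if word in emb2 else np.zeros(300, dtype=np.float32)\n    return np.concatenate([v1, v2])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word Embedding",
"sec_num": "3.2"
},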
{
"text": "We incorporate POS tags and lexicon features into our model. POS tags usually contain rich semantic information. For example, sentiment intensity can be expressed by adjectives like \"very\" and \"slight\". POS tags can help the neural model to identify such words. We use the Ark-Tweet-NLP 1 tool to obtain the POS tags of tweets (Owoputi et al., 2013) . The POS tag feature of each word is concatenated with the word embedding. Usually affective words in tweets such as specific hashtags express sentiment explicitly. Therefore, incorporating lexicon information can help our model to predict intensity more accurately. We use the AffectiveTweets 2 (Mohammad and Bravo-Marquez, 2017) package in Weka 3 to obtain the lexicon features of tweets. We use the Tweet-ToLexiconFeatureVector (Bravo-Marquez et al., 2014), TweetToSentiStrengthFeatureVector (Thelwall et al., 2012) and TweetToInputLexiconFea-tureVector filters in AffectiveTweets. In our experiment, the lexicon features are 49-dim. These lexicon features are concatenated with the pooled CNN feature maps.",
"cite_spans": [
{
"start": 327,
"end": 349,
"text": "(Owoputi et al., 2013)",
"ref_id": "BIBREF9"
},
{
"start": 846,
"end": 869,
"text": "(Thelwall et al., 2012)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Features",
"sec_num": "3.3"
},
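{
"text": "The per-token feature construction can be sketched as follows, reusing the embed helper above; the tag list here is a hypothetical subset of the Ark-Tweet-NLP tagset extended with the URL tag from Section 4.1:\nimport numpy as np\nTAGS = ['N', 'V', 'A', 'R', 'P', ',', '#', '@', 'E', 'U', 'URL']\nTAG_IDX = {t: i for i, t in enumerate(TAGS)}\ndef token_features(word, tag):\n    # One-hot encode the POS tag and append it to the 700-dim word embedding.\n    pos = np.zeros(len(TAGS), dtype=np.float32)\n    pos[TAG_IDX.get(tag, 0)] = 1.0  # unseen tags fall back to index 0 (an assumption)\n    return np.concatenate([embed(word), pos])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Additional Features",
"sec_num": "3.3"
},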
{
"text": "We use an ensemble strategy to improve the model performance. Our model is trained for 10 times by using randomly selected dropout rate. Then the final predictions on the test set are given by the average of all model predictions. In this way, the random error of our system can be reduced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.4"
},
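{
"text": "A minimal sketch of this ensemble strategy, assuming the build_model helper sketched above and an unspecified number of training epochs:\nimport numpy as np\ndef ensemble_predict(x_train, y_train, x_test, n_runs=10):\n    # Train n_runs models with dropout rates drawn uniformly from [0.1, 0.3]\n    # (Section 4.2) and average their test predictions to reduce random error.\n    preds = []\n    for _ in range(n_runs):\n        model = build_model(dropout=np.random.uniform(0.1, 0.3))\n        model.fit(x_train, y_train, batch_size=8, epochs=10, verbose=0)  # epochs=10 is an assumption\n        preds.append(model.predict(x_test))\n    return np.mean(preds, axis=0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Ensemble",
"sec_num": "3.4"
},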
{
"text": "In order to process the noisy tweet texts, we use tweetokenize 4 for tokenizing, and use Ark-Tweet-NLP tool for POS tagging. In addition, we refine the texts and POS tags using several rules: 1) all URLs will be replaced with the word \"URL\", and their POS tags will be set to \"URL\"; 2) all @users will be replaced with \"USERNAME\", and their POS tags will be set to @; 3) POS tags of hashtags are set to \"#\"; 4) POS tags of emojis and emoticons are set to \"E\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocess",
"sec_num": "4.1"
},
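{
"text": "The four refinement rules can be sketched as follows; this is our own illustration over parallel token and POS-tag lists produced by the tokenizer and tagger:\nimport re\ndef refine(tokens, tags):\n    out_tok, out_tag = [], []\n    for tok, tag in zip(tokens, tags):\n        if re.match(r'https?://', tok):\n            out_tok.append('URL'); out_tag.append('URL')     # rule 1\n        elif tok.startswith('@') and len(tok) > 1:\n            out_tok.append('USERNAME'); out_tag.append('@')  # rule 2\n        elif tok.startswith('#'):\n            out_tok.append(tok); out_tag.append('#')         # rule 3\n        else:\n            out_tok.append(tok); out_tag.append(tag)         # rule 4: emojis/emoticons keep tag E\n    return out_tok, out_tag",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preprocess",
"sec_num": "4.1"
},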
{
"text": "The details of English datasets 5 we use is shown in Table 1 . The intensity in both task is annotated between 0 and 1. In the EI-reg task, the Pearson correlation scores across all four emotions will be averaged as the final score. In the V-reg task, the correlation score for valence is used as the competition metric. In our network, the dimension of word embeddings is 400 + 300. The hidden states of Bi-LSTM are 2\u00d7300-dim. The kernel sizes of CNN are 3, 5, 7 and 9 respectively. The number of feature maps are 4 \u00d7 200. The dimension of the first dense layer is set to 200. The padding length of tweets is set to 50. The dropout rate is a random number between 0.1 and 0.3. The loss function we use is MAE, and the batch size is set to 8. We combine the training and development sets in our experiment. We use 90% for training and reserve 10% for cross validation. In our official submissions, we use the full training and development sets to train models.",
"cite_spans": [],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 1",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.2"
},
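{
"text": "For reference, the competition metric can be computed as below; this is a sketch, and the official evaluation script may differ in detail:\nfrom scipy.stats import pearsonr\ndef ei_reg_score(preds, golds):\n    # Macro-average the Pearson correlation over the four EI-reg emotions.\n    emotions = ('anger', 'fear', 'joy', 'sadness')\n    return sum(pearsonr(preds[e], golds[e])[0] for e in emotions) / len(emotions)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "4.2"
},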
{
"text": "We compare the performance of our model and several baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.3"
},
{
"text": "The models to be compared include: 1) CNN, using CNN and dense layers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.3"
},
{
"text": "2) LSTM, using LSTM and dense layers. 3) CNN+LSTM, combing CNN with LSTM to predict. 4) CNN+LSTM+att, adding attention mechanism to CNN-LSTM model. 5) CNN+LSTM+att+ensemble, using ensemble strategy in the attention-based CNN-LSTM model. The results in the EI-reg and V-reg tasks are shown in Table 2 . In comparison, we also present the cross validation results. Our system reaches average Pearson correlation score of 0.722 in the EI-reg task and 0.810 in the V-reg task. The results indicate that our CNN-LSTM model outperforms the CNN and LSTM baselines. It proves that CNN-LSTM model can combine the long-term information and local information in texts. The attention mechanism can also improve the model performance. Since the attention layer can select important information, our model can focus on important words in texts (e.g. affective words) to predict the intensity of emotions and sentiment more accurately. Although our system still needs to be improved compared with the top systems, our model outperforms the common baseline models, which validates the effectiveness of our model.",
"cite_spans": [],
"ref_spans": [
{
"start": 292,
"end": 299,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation Results",
"sec_num": "4.3"
},
{
"text": "We compare the performance using different pretrained embeddings in the EI-reg task. The results are shown in Table 3 . The results show that the pre-trained embeddings are important, and combining different word embedding can improve the model performance. It may be because the combination of embedding can cover more out-ofvocabulary words and provide rich semantic information.",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 117,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Influence of Pre-trained Word Embedding",
"sec_num": "4.4"
},
{
"text": "The influence of the POS tag features and lexicon features is shown in Table 4 . The results show that POS tags can improve the model performance significantly. Affective words, emojis and hashtags usually contain rich sentiment information. POS tags can be used to identify such words. Therefore, incorporating the POS information into our neural model can help to identify these words in tweets better. The lexicon features can also improve our model. The lexicon features are obtained by the sentiment words in tweets. Thus, incorporating these features into neural networks can improve the performance of our system.",
"cite_spans": [],
"ref_spans": [
{
"start": 71,
"end": 78,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Influence of Additional Features",
"sec_num": "4.5"
},
{
"text": "In the EI-reg and V-reg tasks, an automatically generated mystery set is used for testing the inappropriate biases in NLP systems, such as gender and race (i.e. African American and European American names). For example, the pairs of sentences \"She is happy.\" and \"He is happy.\"; \"Jamel feels angry.\" and \"Harry feels angry.\" should be assigned wit the same intensity by an unbiased NLP system. The score differences are calculated for such sentence pairs. The average score difference, the p-value, and whether the score differences are statistically significant are shown in Ta- Table 3 : Influence of using different combinations of pre-trained word embeddings. The emb1 and emb2 denote the embeddings provided by Godin et al. (2015) and Barbieri et al. (2016) respectively. ble 5. Although the average differences are small, but they are statistical significant in most tasks.",
"cite_spans": [
{
"start": 717,
"end": 736,
"text": "Godin et al. (2015)",
"ref_id": null
},
{
"start": 741,
"end": 763,
"text": "Barbieri et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 581,
"end": 588,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Inappropriate Biases",
"sec_num": "4.6"
},
{
"text": "Our system is based on word embedding, and we fine-tune the weights during the network training. Thus, our system will be influenced by the distribution of training data, which may lead to these biases. Valence 0.001 0.00382 \u00d7 -0.021 0 \u221a Table 5 : The average differences, p-value and statistical significance of predictions on the mystery set in each task. We denote them as Avg-D, p and Sig respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 238,
"end": 245,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of Inappropriate Biases",
"sec_num": "4.6"
},
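{
"text": "The paired comparison behind Table 5 can be sketched as follows; the organizers' exact significance test is not specified here, and a paired t-test is one plausible choice:\nimport numpy as np\nfrom scipy.stats import ttest_rel\ndef bias_test(scores_a, scores_b, alpha=0.05):\n    # scores_a and scores_b are intensity predictions for matched sentence pairs,\n    # e.g. predictions for 'She is happy.' versus 'He is happy.'\n    avg_d = float(np.mean(np.asarray(scores_a) - np.asarray(scores_b)))\n    _, p = ttest_rel(scores_a, scores_b)\n    return avg_d, p, p < alpha",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of Inappropriate Biases",
"sec_num": "4.6"
},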
{
"text": "Attention mechanism can encourage the neural model to focus on important words in texts. In order to prove its effectiveness of the attention layer, we present several examples in Table 6 . The green color represents low attention, while red color represents high attention. We can see that the affec-tive words (e.g. Happy) and hashtags (e.g. #funny) have high attention weights. It indicates that our attention-based model can capture important sentiment information to predict the intensity of tweets better.",
"cite_spans": [],
"ref_spans": [
{
"start": 180,
"end": 187,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visualization of Attention Mechanism",
"sec_num": "4.7"
},
{
"text": "Identifying the intensity of emotions or sentiment is important for fine-grained sentiment analysis. Thus, the Semeval-2018 task 1 is aimed to analyze the affective intensity of tweets. In this paper, we introduce the system participating in this task. We apply an attention-based CNN-LSTM model to predict the intensity scores of emotions and sentiment. We also use additional features to improve the performance of our system. Our system ranked 12/48 and 15/38 in the EI-reg and V-reg subtasks respectively. It indicates that our system can be further extended.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "http://www.cs.cmu.edu/ ark/TweetNLP 2 https://github.com/felipebravom/AffectiveTweets 3 https://www.cs.waikato.ac.nz/ml/weka 4 https://github.com/jaredks/tweetokenize",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.saifmohammad.com/WebDocs/AIT-2018/AIT2018-DATA",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors thank the reviewers for their insightful comments and constructive suggestions on improving this work. This work was supported in part by the National Key Research and Development Program of China under Grant 2016YFB0800402 and in part by the National Natural Science Foundation of China under Grant U1705261, Grant U1536207, Grant U1536201 and U1636113.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Tweets with visual attention weights someone cheer me up Happy birthday to me h #blessed What are some good #funny #entertaining #interesting accounts I should follow ? My twitter is dry ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How cosmopolitan are emojis?: Exploring emojis usage and meaning over different languages with distributional semantics",
"authors": [
{
"first": "Francesco",
"middle": [],
"last": "Barbieri",
"suffix": ""
},
{
"first": "German",
"middle": [],
"last": "Kruszewski",
"suffix": ""
},
{
"first": "Francesco",
"middle": [],
"last": "Ronzano",
"suffix": ""
},
{
"first": "Horacio",
"middle": [],
"last": "Saggion",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 ACM on Multimedia Conference",
"volume": "",
"issue": "",
"pages": "531--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francesco Barbieri, German Kruszewski, Francesco Ronzano, and Horacio Saggion. 2016. How cos- mopolitan are emojis?: Exploring emojis usage and meaning over different languages with distributional semantics. In Proceedings of the 2016 ACM on Mul- timedia Conference, pages 531-535. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Meta-level sentiment models for big social data analysis. Knowledge-Based Systems",
"authors": [
{
"first": "Felipe",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Mendoza",
"suffix": ""
},
{
"first": "Barbara",
"middle": [],
"last": "Poblete",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "69",
"issue": "",
"pages": "86--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felipe Bravo-Marquez, Marcelo Mendoza, and Bar- bara Poblete. 2014. Meta-level sentiment models for big social data analysis. Knowledge-Based Systems, 69:86-99.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Nsemo at emoint-2017: an ensemble to predict emotion intensity in tweets",
"authors": [
{
"first": "Sreekanth",
"middle": [],
"last": "Madisetty",
"suffix": ""
},
{
"first": "Maunendra",
"middle": [],
"last": "Sankar Desarkar",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "219--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sreekanth Madisetty and Maunendra Sankar Desarkar. 2017. Nsemo at emoint-2017: an ensemble to pre- dict emotion intensity in tweets. In Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 219-224.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Sentiment analysis: Detecting valence, emotions, and other affectual states from text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2016,
"venue": "Emotion measurement",
"volume": "",
"issue": "",
"pages": "201--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad. 2016. Sentiment analysis: De- tecting valence, emotions, and other affectual states from text. In Emotion measurement, pages 201-237. Elsevier.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wassa-2017 shared task on emotion intensity",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1708.03700"
]
},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Felipe Bravo-Marquez. 2017. Wassa-2017 shared task on emotion intensity. arXiv preprint arXiv:1708.03700.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semeval-2018 Task 1: Affect in tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Felipe",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Bravo-Marquez",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Using hashtags to capture fine emotion categories from tweets",
"authors": [
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Intelligence",
"volume": "31",
"issue": "2",
"pages": "301--326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion cate- gories from tweets. Computational Intelligence, 31(2):301-326.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A new anew: Evaluation of a word list for sentiment analysis in microblogs",
"authors": [
{
"first": "Finn\u00e5rup",
"middle": [],
"last": "Nielsen",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1103.2903"
]
},
"num": null,
"urls": [],
"raw_text": "Finn\u00c5rup Nielsen. 2011. A new anew: Evaluation of a word list for sentiment analysis in microblogs. arXiv preprint arXiv:1103.2903.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved part-of-speech tagging for online conversational text with word clusters. Association for Computational Linguistics",
"authors": [
{
"first": "Olutobi",
"middle": [],
"last": "Owoputi",
"suffix": ""
},
{
"first": "O'",
"middle": [],
"last": "Brendan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Connor",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah A",
"middle": [],
"last": "Schneider",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Twitter sentiment analysis: capturing sentiment from integrated resort tweets",
"authors": [
{
"first": "Kahlil",
"middle": [],
"last": "Philander",
"suffix": ""
},
{
"first": "Yunying",
"middle": [],
"last": "Zhong",
"suffix": ""
}
],
"year": 2016,
"venue": "International Journal of Hospitality Management",
"volume": "55",
"issue": "",
"pages": "16--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kahlil Philander, YunYing Zhong, et al. 2016. Twitter sentiment analysis: capturing sentiment from inte- grated resort tweets. International Journal of Hos- pitality Management, 55:16-24.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep convolutional neural networks for sentiment analysis of short texts",
"authors": [
{
"first": "Santos",
"middle": [],
"last": "Cicero Dos",
"suffix": ""
},
{
"first": "Maira",
"middle": [],
"last": "Gatti",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "69--78",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cicero dos Santos and Maira Gatti. 2014. Deep con- volutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69-78.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Twitter sentiment analysis with deep convolutional 191",
"authors": [
{
"first": "Aliaksei",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. Twitter sentiment analysis with deep convolutional 191",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "959--962",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "neural networks. In Proceedings of the 38th Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval, pages 959- 962. ACM.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Sentiment strength detection for the social web",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Thelwall",
"suffix": ""
},
{
"first": "Kevan",
"middle": [],
"last": "Buckley",
"suffix": ""
},
{
"first": "Georgios",
"middle": [],
"last": "Paltoglou",
"suffix": ""
}
],
"year": 2012,
"venue": "Journal of the Association for Information Science and Technology",
"volume": "63",
"issue": "1",
"pages": "163--173",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Thelwall, Kevan Buckley, and Georgios Pal- toglou. 2012. Sentiment strength detection for the social web. Journal of the Association for Informa- tion Science and Technology, 63(1):163-173.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Fine-grained subjectivity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states",
"authors": [
{
"first": "Wilson",
"middle": [],
"last": "Theresa Ann",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Theresa Ann Wilson. 2008. Fine-grained subjectiv- ity and sentiment analysis: recognizing the intensity, polarity, and attitudes of private states. University of Pittsburgh.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Abcnn: Attention-based convolutional neural network for modeling sentence pairs",
"authors": [
{
"first": "Wenpeng",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Xiang",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.05193"
]
},
"num": null,
"urls": [],
"raw_text": "Wenpeng Yin, Hinrich Sch\u00fctze, Bing Xiang, and Bowen Zhou. 2015. Abcnn: Attention-based convo- lutional neural network for modeling sentence pairs. arXiv preprint arXiv:1512.05193.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A c-lstm neural network for text classification",
"authors": [
{
"first": "Chunting",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chonglin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Francis",
"middle": [],
"last": "Lau",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.08630"
]
},
"num": null,
"urls": [],
"raw_text": "Chunting Zhou, Chonglin Sun, Zhiyuan Liu, and Fran- cis Lau. 2015. A c-lstm neural network for text clas- sification. arXiv preprint arXiv:1511.08630.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The architecture of our attention CNN-LSTM model."
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Detailed statistics of the English datasets in our experiment"
},
"TABREF3": {
"html": null,
"num": null,
"content": "<table><tr><td>Model</td><td>macro-avg val test</td><td>anger val test</td><td>EI-reg fear val test</td><td>val</td><td>joy</td><td>test</td><td>sadness val test</td><td>V-reg valence val test</td></tr><tr><td>CNN</td><td>0.743 0</td><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"text": ".710 0.700 0.726 0.759 0.701 0.771 0.727 0.742 0.686 0.809 0.790 LSTM 0.741 0.706 0.701 0.720 0.751 0.694 0.766 0.726 0.746 0.683 0.802 0.785 CNN+LSTM 0.743 0.713 0.705 0.730 0.758 0.701 0.770 0.735 0.740 0.687 0.815 0.796 CNN+LSTM+att 0.749 0.718 0.706 0.731 0.760 0.706 0.774 0.739 0.756 0.695 0.828 0.801 CNN+LSTM+att+ensemble 0.758 0.722 0.720 0.734 0.771 0.710 0.782 0.743 0.760 0.700 0.845 0.810"
},
"TABREF4": {
"html": null,
"num": null,
"content": "<table><tr><td>Embedding</td><td>avg anger fear</td><td>joy</td><td>sadness</td></tr><tr><td colspan=\"3\">w/o pre-trained 0.669 0.678 0.672 0.682</td><td>0.645</td></tr><tr><td>+emb1</td><td colspan=\"2\">0.717 0.728 0.706 0.737</td><td>0.695</td></tr><tr><td>+emb2</td><td colspan=\"2\">0.709 0.716 0.702 0.728</td><td>0.691</td></tr><tr><td colspan=\"3\">+emb1+emb2 0.722 0.734 0.710 0.743</td><td>0.700</td></tr></table>",
"type_str": "table",
"text": "Evaluation and cross validation performance of our model ande baselines."
},
"TABREF6": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Influence of POS tags and lexicon features."
}
}
}
}