|
{ |
|
"paper_id": "S18-1039", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:43:38.480072Z" |
|
}, |
|
"title": "PlusEmo2Vec at SemEval-2018 Task 1: Exploiting emotion knowledge from emoji and #hashtags", |
|
"authors": [ |
|
{ |
|
"first": "Ji", |
|
"middle": [ |
|
"Ho" |
|
], |
|
"last": "Park", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Hong Kong University of Science and Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Peng", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Hong Kong University of Science and Technology", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Hong Kong University of Science and Technology", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our system that has been submitted to SemEval-2018 Task 1: Affect in Tweets (AIT) to solve five subtasks. We focus on modeling both sentence and word level representations of emotion inside texts through large distantly labeled corpora with emojis and hashtags. We transfer the emotional knowledge by exploiting neural network models as feature extractors and use these representations for traditional machine learning models such as support vector regression (SVR) and logistic regression to solve the competition tasks. Our system is placed among the Top3 for all subtasks we participated.", |
|
"pdf_parse": { |
|
"paper_id": "S18-1039", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our system that has been submitted to SemEval-2018 Task 1: Affect in Tweets (AIT) to solve five subtasks. We focus on modeling both sentence and word level representations of emotion inside texts through large distantly labeled corpora with emojis and hashtags. We transfer the emotional knowledge by exploiting neural network models as feature extractors and use these representations for traditional machine learning models such as support vector regression (SVR) and logistic regression to solve the competition tasks. Our system is placed among the Top3 for all subtasks we participated.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Finding a good representation of texts is very challenging since texts are sequences of words which are represented in a discrete space of the vocabulary. For this reason, many past works have investigated in finding the mapping of words (Mikolov et al., 2013; Pennington et al., 2014) or sentences (Kiros et al., 2015) to continuous spaces, so that each text can be represented by a fixed-size, realvalued N-dimensional vector. This vector representation then can be applied to machine learning models to solve problems like classification and regression. A good representation should contain essential information inside each text and be a useful input for statistical models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 260, |
|
"text": "(Mikolov et al., 2013;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 261, |
|
"end": 285, |
|
"text": "Pennington et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 299, |
|
"end": 319, |
|
"text": "(Kiros et al., 2015)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Emotions in texts further deepen the complexity of modeling natural language since they not only depend on the semantics of a language but also are inherently subjective and ambiguous. Despite the difficulty, accounting for emotion is important in achieving true natural language understanding, especially in areas involving human-computer interactions such as dialogue systems (Fung, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 378, |
|
"end": 390, |
|
"text": "(Fung, 2015)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Humans can naturally capture and express different emotions in texts, so machines should also be able to infer them. Many works (Tang et al., 2014; Felbo et al., 2017; Thelwall, 2017) explored modeling sentiment or emotion in texts in various directions. This work is highly related to these efforts.", |
|
"cite_spans": [ |
|
{ |
|
"start": 128, |
|
"end": 147, |
|
"text": "(Tang et al., 2014;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 148, |
|
"end": 167, |
|
"text": "Felbo et al., 2017;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 183, |
|
"text": "Thelwall, 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Semeval-2018 Task 1: Affect in Tweets (AIT-2018) encourages more efforts in this area with the task of sentiment analysis, which is one of the most practical applications of modeling emotional text representations. We have participated in five subtasks regarding English tweets: emotion intensity regression, emotion intensity ordinal classification, valence (sentiment) regression, valence ordinal classification, and emotion classification (More details on the tasks in Mohammad et al. (2018)).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Although these five tasks take different formats, the most important objective is finding a good representation of the tweets regarding emotions. However, the given competition training datasets are too small to achieve our goal (Table 3) . Therefore, we explore utilizing larger datasets that are distantly supervised by emojis and hashtags to learn a robust representation and transfer the knowledge of each dataset to the competition datasets to solve the tasks. We aim to minimize the use of lexicons and linguistic features by replacing them with continuous vector representations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 229, |
|
"end": 239, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Thanks to the endless stream of social media such as Twitter and Facebook, researchers nowadays are lucky enough to have access to almost an unlimited number of texts generated every day. Nevertheless, annotating these texts with explicit emotion or sentiment human labels is very expensive and difficult. For this reason, many works naturally focused on finding direct or indirect evidence of emotion inside each text, such as hash- Figure 1 : 11 clusters of emojis used as categorical labels and their distributions in the training set. Because some emojis appear much less frequently than others, we group the 34 emojis into 11 clusters according to the distance on the correlation matrix of the hierarchical clustering from Felbo et al. (2017) and use them as categorical labels tags and emoticons (Suttles and Ide, 2013; Wang et al., 2012) , and found them useful to distantly label an emotion of each text. Furthermore, the recent popular culture of using emojis (Wood and Ruder, 2016) inside social media posts and messages provides us even richer evidence of different emotions, and they have been proven to be very effective in learning rich representations for various affect-related tasks (Felbo et al., 2017) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 728, |
|
"end": 747, |
|
"text": "Felbo et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 825, |
|
"text": "(Suttles and Ide, 2013;", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 826, |
|
"end": 844, |
|
"text": "Wang et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 991, |
|
"text": "(Wood and Ruder, 2016)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1200, |
|
"end": 1220, |
|
"text": "(Felbo et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 434, |
|
"end": 442, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Emoji sentence representations", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In this paper, we compare two models using two different emoji dataset to transform the competition data into robust sentence representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology & Emoji Dataset", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "First model is the pre-trained DeepMoji model (Felbo et al., 2017) , which is trained through emoji predictions on a dataset of 1.2 billion tweets with 64 common emoji labels. We use the pretrained deep learning network, which consists of Bidirectional Long Short Term Memory (Bi-LSTM) with attention, except the last softmax layer, as a feature extractor of the original competition datasets. As a result, each sample is transformed into a 2304dimensional vector from the model. The second model is our proposed emoji cluster model. We crawled 8.1 million tweets with each of which has 34 different facial and hand emojis, assuming these kinds of emojis are more relevant to emotions. Since some emojis appear much less frequently than others, we cluster the 34 emojis into 11 clusters ( Figure 1 ) according to the distance on the correlation matrix of the hierarchical clustering from Felbo et al. (2017) . Sam-ples with emojis in the same cluster are assigned the same categorical label for prediction. Samples with multiple emojis are duplicated in the training set, whereas in the dev and test set we only use samples with one emoji to avoid confusion. We then train a one-layer Bi-LSTM classifier with 512 hidden units to predict the emoji cluster of each sample. We take part of the dataset to construct a balanced dev set with 15,000 samples per class (total 165,000) for hyperparameter tuning and early stopping. We use 200 dimension Glove vectors pre-trained on a much larger Twitter corpus to initialize the embedding layer.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 66, |
|
"text": "(Felbo et al., 2017)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 888, |
|
"end": 907, |
|
"text": "Felbo et al. (2017)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 789, |
|
"end": 797, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Methodology & Emoji Dataset", |
|
"sec_num": "2.1" |
|
}, |
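A minimal PyTorch sketch (not the authors' released code) of a one-layer Bi-LSTM emoji-cluster classifier of the kind described above, later reused as a feature extractor by dropping the softmax layer. The class and method names are illustrative, and the hidden size is set to 256 per direction so that the concatenated final states match the 512-dimensional representation reported in Section 2.2.

import torch
import torch.nn as nn

class EmojiClusterEncoder(nn.Module):
    """One-layer Bi-LSTM trained to predict 11 emoji clusters,
    then reused (without the softmax layer) as a sentence encoder."""
    def __init__(self, vocab_size, emb_dim=200, hidden=256, n_clusters=11):
        super().__init__()
        # in practice the embedding would be initialized from 200-d GloVe Twitter vectors
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_clusters)  # softmax applied inside the loss

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, 512) sentence representation
        _, (h_n, _) = self.lstm(self.embedding(token_ids))
        return torch.cat([h_n[0], h_n[1]], dim=-1)            # forward + backward final states

    def forward(self, token_ids):
        return self.classifier(self.encode(token_ids))        # logits over the 11 clusters

In this reading, encode() would be applied to the competition tweets after the classifier is trained on the emoji-cluster prediction task.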
|
{ |
|
"text": "The motivation for exploring two different models is that, firstly, we want to replicate the effectiveness of using emoji for representing emotions from the previous work (Felbo et al., 2017 ) with a smaller dataset and a simpler model. Note that the dataset size of the emoji cluster model is less than 1% of that of the first model, whereas DeepMoji uses more than 1 billion training samples. Moreover, the first model implements a two-layer Bi-LSTM with self-attention, which has much more parameters than the second model's simple onelayer Bi-LSTM does. Secondly, we want to verify that ensembling both emoji representations trained from different datasets to boost our performance. We will present the result of the comparisons and the ensembles in Section 5.2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 171, |
|
"end": 190, |
|
"text": "(Felbo et al., 2017", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology & Emoji Dataset", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "One thing i dislike is laggers man I hate inconsistency The paper is irritating me As of right now i hate dre im sick of crying im tired of trying why body pain why uuugh i really have nothing to do right now i dont wanna go back to mex looking forward to holiday well today am on lake garda enjoying the life perfect time to read book im feeling great enjoying my holiday ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology & Emoji Dataset", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "As a result, the model can achieve 29.8% top-1 accuracy and 61.0% top-3 accuracy on the emoji cluster prediction task. Since the objective of this model is not to predict the cluster label but to find a good sentence representation, we visual-ize the test set samples to discover that samples with similar semantics and emotions are grouped together (Table 1) . Finally, similar to the first model, we use this model as a feature extractor on the competition datasets. Each text sample in the competition datasets is transformed into a 512dimensional vector through the model except the last class predicting softmax layer.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 350, |
|
"end": 359, |
|
"text": "(Table 1)", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "In conclusion, we trained two deep learning models with two different emoji datasets to extract emoji representations of the competition datasets. They are transformed into high dimensional, realvalued, and continuous vectors, which can be used as features for the classification and regression tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "From now on, we will call the vectors from the first model, DeepMoji representations, and those from the second, Emoji Cluster representations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We also explore word-level representations, along with emoji sentence representations. Although sentence-level representations already build up from word representations (in particular we use pretrained Glove vectors (Pennington et al., 2014) ), they may not be enough to attend to the valence that each word contains. Previous works (Tang et al., 2014) examine the significance of using sentiment-specific word embedding for related tasks. For this reason, we train emotional word vectors that not only cluster together direct emotion words such as anger and joy, but also capture emotions inside indirect emotion words, such as anger inside headache and joy inside beach. We learn these vectors by training a Convolutional Neural Network (CNN) from another separate Twitter corpus distantly labeled with hashtags.", |
|
"cite_spans": [ |
|
{ |
|
"start": 217, |
|
"end": 242, |
|
"text": "(Pennington et al., 2014)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 334, |
|
"end": 353, |
|
"text": "(Tang et al., 2014)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Emotional word vectors (EVEC)", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Our intuition to learn effective emotion word vectors is that given a document labeled with emotion there exists one or more emotionally significant words inside. Nevertheless, we do not know which ones are more important. We assume that a deep learning model, which learns the representations of the data with different level of abstractions (LeCun et al., 2015) will be able to capture those words and encode the information in its word embedding layer while classifying the documents label.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For the model structure, we use CNN since it is proven to be effective in text classification tasks by looking at the documents n-gram features and its gradient can be directly back-propagated to the word embedding, whereas Recurrent Neural Network (RNN) models are updated sequentially. We use a similar structure used by (Kim, 2014) , which includes a max-pooling layer to force the network to find the most relevant feature for predicting the emotion class correctly. After the CNN network learns how to classify the documents into different emotion categories, we extract emotional word vectors from the network's embedding layer and use them as same as how other word embeddings, such as word2vec (Mikolov et al., 2013) or Glove, are used, treating them as features for other classification or regression models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 323, |
|
"end": 334, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 702, |
|
"end": 724, |
|
"text": "(Mikolov et al., 2013)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methodology", |
|
"sec_num": "3.1" |
|
}, |
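A minimal PyTorch sketch, under the assumption of a Kim (2014)-style architecture with global max-pooling, of how a hashtag-emotion CNN could be trained and its embedding layer harvested as emotional word vectors. The class name, filter settings, and extraction snippet are illustrative, not the paper's exact configuration.

import torch
import torch.nn as nn

class HashtagCNN(nn.Module):
    """Kim (2014)-style CNN: the embedding layer learned while classifying
    the four hashtag emotions is what gets reused as emotional word vectors."""
    def __init__(self, vocab_size, emb_dim=300, n_filters=100,
                 kernel_sizes=(3, 4, 5), n_classes=4):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k) for k in kernel_sizes])
        self.fc = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids).transpose(1, 2)                       # (batch, emb_dim, seq_len)
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]   # global max-pool per filter
        return self.fc(torch.cat(pooled, dim=1))                            # emotion logits

# after training, the emotional word vectors would simply be the embedding weights, e.g.
# evec = trained_model.embedding.weight.detach().cpu().numpy()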
|
{ |
|
"text": "To accumulate a large corpus of emotion-labeled texts, we use a distant supervision method by using hashtags of tweets to automatically annotate emotions. Such method has proven to provide relevant emotion labels by previous works (Wang et al., 2012) . Their source of the emotion words came from emotion words list made from Shaver et al. (1987) , where the authors organize emotions into a hierarchy in which the first layer contains six basic emotions and each emotion has a list of emotion words. Wang et al. (2012) again expanded the list by including their lexical variants and also introduced some filtering heuristics, such as only using tweets with emotional hashtags at the end of tweets to make the distant supervision more relevant to human annotation. We combine their dataset, another public dataset 1 , which used the same method, and our own extracted tweets between January and October 2017 using the Twitter Firehose API.", |
|
"cite_spans": [ |
|
{ |
|
"start": 231, |
|
"end": 250, |
|
"text": "(Wang et al., 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 326, |
|
"end": 346, |
|
"text": "Shaver et al. (1987)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 501, |
|
"end": 519, |
|
"text": "Wang et al. (2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Hashtag Dataset", |
|
"sec_num": "3.2" |
|
}, |
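A small illustrative sketch of the end-of-tweet hashtag heuristic described above; the hashtag-to-emotion dictionary shown is a tiny hypothetical subset, not the Shaver-derived word list actually used.

import re

# tiny illustrative subset of emotion-word hashtags; the real list is much larger
HASHTAG2EMOTION = {"joy": "joy", "happy": "joy", "sad": "sadness", "depressed": "sadness",
                   "angry": "anger", "furious": "anger", "scared": "fear", "afraid": "fear"}

def distant_label(tweet):
    """Return (text_without_label_hashtag, emotion) if the tweet ends with an
    emotion hashtag, else None (the tweet is discarded)."""
    m = re.search(r"#(\w+)\s*$", tweet)           # hashtag must be at the very end
    if not m:
        return None
    emotion = HASHTAG2EMOTION.get(m.group(1).lower())
    if emotion is None:
        return None
    return tweet[:m.start()].rstrip(), emotion

print(distant_label("stuck in traffic again #furious"))   # ('stuck in traffic again', 'anger')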
|
{ |
|
"text": "For the emotion labels, we focus on four emotion categories: joy, sadness, anger, and fear, since the competition tasks are only limited to those categories. In total, our hashtag dataset consists of 1.9 million tweets (Table 2) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 219, |
|
"end": 228, |
|
"text": "(Table 2)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Hashtag Dataset", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "For every sample in the SemEval competition dataset, we extract all emotional word vectors of the words in the sentence and simply average them Table 2 : Description of the Twitter hashtag corpus. Hashtags at the end were removed from the document and used as labels. It is hard to construct a well-balanced dataset for all four classes since Twitter users tend to use more hashtags related to happy and sad emotions.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 151, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "into one vector. For words out of vocabulary of the hashtag corpus, we add zero vectors with the same dimension. As a result, every sentence is transformed into a 300-dimension vector to be used as features for the competition tasks. We expect these emotional word vectors can replace sentiment or emotion lexicons, since they are continuous representations learned from a large corpus, which can be more robust and rich in information about emotions inside words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
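A minimal sketch of the averaging step described above, assuming `evec` is the emotional word vector matrix extracted from the CNN and `vocab` maps tokens of the hashtag corpus to rows of that matrix; both names are illustrative.

import numpy as np

def sentence_evec(tokens, evec, vocab, dim=300):
    """Average the emotional word vectors of a tokenized tweet;
    out-of-vocabulary words contribute zero vectors."""
    vectors = [evec[vocab[t]] if t in vocab else np.zeros(dim) for t in tokens]
    return np.mean(vectors, axis=0) if vectors else np.zeros(dim)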
|
{ |
|
"text": "4 System Description", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "These are the three features that are used as input for our system to solve SemEval-2018 Task 1. Emoji Sentence Representations: Two models will be compared -DeepMoji representations (2304 dimensions) and Emoji cluster representations (512 dimensions). See Section 2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Emotional Word Vectors (EVEC): Average of emotional word vectors learned from hashtag corpus (300 dimensions). See Section 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "Tweet-specific features: We employ Tweetspecific features to capture information that two previous representations cannot. Inspired from the previous SemEval papers (Zhou et al., 2016 ; Balikas and Amini, 2016), we choose five features, (1) number of words in uppercase, (2) number of positive and negative emoticons, (3) Sum of emoji valence score 2 , (4) number of elongated words, and (5) number of exclamation & question marks. Note that we do not use any linguistic features or sentiment/emotion lexicons for our system.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 183, |
|
"text": "(Zhou et al., 2016", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "4.1" |
|
}, |
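An illustrative sketch of how the five tweet-specific features could be computed; the emoticon sets and emoji valence dictionary are small stand-ins for the emoji-emotion scores referenced in footnote 2, and the elongation rule (a character repeated three or more times) is an assumption.

import re

POS_EMOTICONS = {":)", ":-)", ":D", ";)"}             # illustrative lists only
NEG_EMOTICONS = {":(", ":-(", ":'("}
EMOJI_VALENCE = {"\U0001F602": 1, "\U0001F62D": -1}   # stand-in for the emoji-emotion scores

def tweet_features(text):
    tokens = text.split()
    n_upper = sum(t.isupper() and len(t) > 1 for t in tokens)            # (1) upper-case words
    n_pos = sum(t in POS_EMOTICONS for t in tokens)                      # (2) positive emoticons
    n_neg = sum(t in NEG_EMOTICONS for t in tokens)                      #     and negative emoticons
    valence = sum(EMOJI_VALENCE.get(ch, 0) for ch in text)               # (3) emoji valence sum
    n_elong = sum(bool(re.search(r"(\w)\1{2,}", t)) for t in tokens)     # (4) elongated words
    n_punct = text.count("!") + text.count("?")                          # (5) ! and ? marks
    return [n_upper, n_pos, n_neg, valence, n_elong, n_punct]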
|
{ |
|
"text": "Tweets in the competition datasets are tokenized after all non-alphanumeric characters are removed, except for extracting tweet-specific features. Some words, especially for hashtags, are merged together (e.g. #iloveyou), so unknown 2 https://github.com/words/emoji-emotion words in the vocabulary is put into a wordsegment library 3 to preserve the right segment (e.g. i, love, you). Then, the tokens are transformed into emoji sentence representations (2304 or 512 dimensions) and emotional word vectors (300 dimensions), according to the vocabulary of the emoji and hashtag dataset. These datasets respectively have 262,975 and 48,929 words in their vocabularies.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing", |
|
"sec_num": "4.2" |
|
}, |
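A minimal usage sketch of the wordsegment library referenced in footnote 3 for splitting merged hashtag words.

from wordsegment import load, segment

load()                         # loads the library's unigram/bigram frequency data
print(segment("iloveyou"))     # -> ['i', 'love', 'you']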
|
{ |
|
"text": "Due to the fact that the datasets of regression tasks (EI-reg & V-reg) and ordinal classification tasks (EI-oc & V-oc) have the same sample sentences, we assume that regression labels are more informative than the ordinals, since they tell us the rank among the samples within the same ordinal class. Therefore, we first train a regression model and then use it to predict ordinals, rather than training a separate classifier. We later prove that this trick yields a better result in ordinal classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regression and Ordinal Classification", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "For regression, since our features are extracted from deep learning models, we find Support Vector Regression (SVR) and Kernel-Ridge Regression methods, which are effective for nonlinear features, perform better than linear methods. We tune the hyper-parameters with the given development (dev) set and later merge both train and dev set to train the final model with the best hyperparameter found. Also, we try ensembles by averaging the final regression predictions of different methods or feature combinations to boost performance. The best groups of models are selected by the development set results of many combinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regression and Ordinal Classification", |
|
"sec_num": "4.3" |
|
}, |
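A scikit-learn sketch of the regression setup described above: SVR and kernel ridge models fit on the concatenated features, with their predictions averaged as a simple ensemble. The random data, feature dimension, and hyperparameter values are placeholders; in the actual system the hyperparameters were tuned on the development set.

import numpy as np
from sklearn.svm import SVR
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
# stand-in data: 2304 + 512 + 300 = 3116 concatenated features, intensity labels in [0, 1]
X_train, y_train = rng.normal(size=(100, 3116)), rng.uniform(size=100)
X_test = rng.normal(size=(10, 3116))

models = [SVR(kernel="rbf", C=1.0, epsilon=0.1),      # placeholder hyperparameters
          KernelRidge(kernel="rbf", alpha=1.0)]
for m in models:
    m.fit(X_train, y_train)

# ensemble = simple average of the individual regression predictions
y_pred = np.mean([m.predict(X_test) for m in models], axis=0)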
|
{ |
|
"text": "Another important finding is that the mapping between the regression labels and ordinal labels are very different among emotion categories. For example in Figure 2 , Class 0 for fear is distributed in [0,0.6], whereas class 0 for joy is distributed in [0, 0.4]. Therefore, we try to find the mapping from the regression values (continuous) to ordinal 3. polynomial mapping: fits a polynomial regression function from the training data and finds the closest ordinal label.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 163, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Regression and Ordinal Classification", |
|
"sec_num": "4.3" |
|
}, |
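A sketch of the polynomial mapping described above, assuming a cubic fit as in Figure 3; the function names are illustrative.

import numpy as np

def fit_polynomial_mapping(train_reg, train_ord, degree=3):
    """Fit a polynomial (cubic by default) from regression labels to ordinal labels."""
    return np.poly1d(np.polyfit(train_reg, train_ord, degree))

def map_to_ordinal(poly, reg_pred, ordinals=(0, 1, 2, 3)):
    """Evaluate the polynomial and snap each value to the closest ordinal class."""
    values = poly(np.asarray(reg_pred))
    ordinals = np.asarray(ordinals)
    return ordinals[np.argmin(np.abs(values[:, None] - ordinals[None, :]), axis=1)]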
|
{ |
|
"text": "This task of multi-label classification is different from previous tasks in that the model needs to predict the binary label for each of the 11 classes given a tweet. The task is difficult in terms of three aspects. Firstly, some of the classes have opposite emotions (such as optimism and pessimism) but may have been labeled both as true. Secondly, it is not trivial to distinguish similar emotions such as joy, love, and optimism, which will include a lot of noise in the labels and make it hard to perform classification during training. Lastly, most of the tweets are labeled with no more than 3 categories out of 11 classes, which make the labels very sparse and imbalanced (Table 4 ) . We propose to train two models to tackle this problem: regularized linear regression and logistic regression classifier chain (Read et al., 2009) . Both models aim to exploit labels' correlation to perform multi-label classification.", |
|
"cite_spans": [ |
|
{ |
|
"start": 819, |
|
"end": 838, |
|
"text": "(Read et al., 2009)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 680, |
|
"end": 690, |
|
"text": "(Table 4 )", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-label Classification", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "We formulate the multi-label classification problem as a linear regression with label distance as the regularization term. We denote the features for i-th tweet as x i \u2208 R N where N is the number of features and the number of categories as C. Our prediction is y", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "\u2032 i = W * x i where W \u2208 R M * C", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "is the weight of the linear regression model. We take the following formula as loss function to minimize. The loss consists of two parts. First part aims to minimize the mean square loss between our prediction y \u2032 i and ground truth label y i . The second part is the regularization term to capture relationship among different emotion labels. To model the correlations among emotions, we implicitly treat each emotion category as a vertice in an undirected graph g and use Laplacian matrix of g for regularization (Grone et al., 1990; Shahid et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 515, |
|
"end": 535, |
|
"text": "(Grone et al., 1990;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 536, |
|
"end": 556, |
|
"text": "Shahid et al., 2016)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "loss = 1 M M \u2211 i (y \u2032 \u2212 y) 2 + \u03bby \u2032 T i Ly \u2032 i L = D \u2212 A", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "where M is the number of samples, L \u2208 R C * C is the Laplacian matrix, A \u2208 R C * C is the Euclidean matrix, D \u2208 R C * C is the Degree matrix. To derive L, we first compute the co-occurrence matrix O \u2208 R C * C among the emotion labels and take each row/column O i \u2208 R C as the representation of each emotion. Then we compute the distance matrix A by taking the Euclidean distance of different labels. That is", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "A ij = (O i \u2212 O j ) 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Here, A can be regarded as the adjacency matrix of the graph g. Afterwards, we calculate the degree matrix D by summing up each row/column and making it a diagonal matrix. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regularized linear regression model", |
|
"sec_num": "4.4.1" |
|
}, |
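A PyTorch sketch of the regularized linear regression described above: it builds L = D - A from the label co-occurrence matrix and minimizes the regularized loss by plain gradient descent. The initialization (normal, std 0.1), the learning rate of 1.0, and lambda = -0.0001 follow Section 5.4; the epoch count and function names are assumptions.

import torch

def laplacian_from_labels(Y):
    """Y: (M, C) binary label matrix. Build L = D - A from label co-occurrences."""
    Y = Y.float()
    O = Y.T @ Y                          # (C, C) co-occurrence matrix
    A = torch.cdist(O, O) ** 2           # A_ij = squared Euclidean distance between rows
    D = torch.diag(A.sum(dim=1))         # degree matrix
    return D - A

def train_regularized_regression(X, Y, lam=-1e-4, lr=1.0, epochs=200):
    """Minimize mean squared error + lam * y'_i^T L y'_i by gradient descent.
    lam = -0.0001 and lr = 1.0 are the values reported in Section 5.4."""
    X, Y = X.float(), Y.float()
    L = laplacian_from_labels(Y)
    W = (0.1 * torch.randn(X.shape[1], Y.shape[1])).requires_grad_()
    for _ in range(epochs):
        pred = X @ W                                                    # (M, C) predictions
        loss = ((pred - Y) ** 2).mean() + lam * ((pred @ L) * pred).sum(dim=1).mean()
        loss.backward()
        with torch.no_grad():
            W -= lr * W.grad
            W.grad.zero_()
    return W.detach()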
|
{ |
|
"text": "Classifier chain is another method to capture the correlation of emotion labels. It treats the multilabel problem as a sequence of binary classification problem while taking the prediction of the previous classifier as extra input. For example, when training the i-th emotion category, we take both the features of input tweet and also the 1st, 2nd, \u2022 \u2022 \u2022 , (i-1)-th prediction as the input of our logistic regression classifier to predict the i-th emotion label of input tweet. We further ensemble 10 logistic regression chains by shuffling the sequence of 11 emotion labels to achieve better generalization ability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Logistic regression classifier chain", |
|
"sec_num": "4.4.2" |
|
}, |
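A scikit-learn sketch of the classifier-chain ensemble described above, using ClassifierChain with logistic regression base classifiers and ten random label orderings whose probabilities are averaged; the 0.5 decision threshold is an assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

def chain_ensemble_predict(X_train, Y_train, X_test, n_chains=10):
    """Average the probabilities of 10 logistic-regression chains,
    each trained with a different random ordering of the 11 labels."""
    chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                              order="random", random_state=i)
              for i in range(n_chains)]
    for chain in chains:
        chain.fit(X_train, Y_train)
    proba = np.mean([chain.predict_proba(X_test) for chain in chains], axis=0)
    return (proba >= 0.5).astype(int)          # binary multi-label predictions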
|
{ |
|
"text": "Most of our system experiments were implemented by using PyTorch (Paszke et al., 2017) and Scikit-learn (Pedregosa et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 65, |
|
"end": 86, |
|
"text": "(Paszke et al., 2017)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 128, |
|
"text": "(Pedregosa et al., 2011)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments & Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "SemEval-2018 Affect in Tweets (AIT) is created by human annotators through crowd-sourcing methods . Total three datasets are given: emotion intensity (with four emotion categories; Subtask 1a & 2a), sentiment intensity (subtask 3a & 4a), and multilabel emotion classification (subtask 5a). For emotion and sentiment intensity datasets, each tweet sample has both an ordinal label (coarse; {0,1,2,3} for emotion, {-3,-2,-1,0,1,2,3} for sentiment) and real-value regression label (fine-grained; [0,1]). For multi-label emotion classification dataset, each can have none or up to six number of multi-labels (Table 4) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 604, |
|
"end": 613, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Competition dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We used the given development set to tune the hyper-parameters and select models. For the fi-nal submission, we merged the train & development set together to retrain the model with the best hyper-parameter found (Table 3) . Table 5 shows the test set results on regression tasks, Subtask 1a&3a. We experimented with different features that we introduced before to analyze the effectiveness of each representation. For emoji sentence representations, emoji cluster worked better on sadness and sentiment, whereas DeepMoji outperformed in anger, fear, and joy. We presumed such difference was due to the different emoji types of the two datasets used to train each model. Emoji cluster only used 11 classes of emojis that were clustered together, but DeepMoji used 64 emoji classes. It may be possible clustering of emoji classes made it easy for regression models to predict the intensities in certain emotion categories, whereas some emotion categories needed more detailed representations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 213, |
|
"end": 222, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 232, |
|
"text": "Table 5", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Competition dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "The emotional word vectors overall did help enhance the performance of the regression model for all emotion categories. This shows that emotional word vectors can serve as additional word-level information which are helpful for solving this task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regression: Subtask 1a & 3a", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Tweet-specific features boosted the performance, notably for sentiment, since features like capital letters, emojis, elongated words, and the number of exclamation marks, could help to figure out the subtle difference of the emotion intensities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regression: Subtask 1a & 3a", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "One thing to note is that our system's rank in the fear category (7th) is relatively lower than other emotion categories. We found out from the previous literature (Wood and Ruder, 2016) that fear emojis were the most ambiguous, having the least correlation with human-annotated emotion labels among the six emotion categories. On the other hand, joy emojis were the most highly correlated. This may explain our best performance in the joy category and worst performance in the fear category. Future systems using emojis as a dataset may need to take this shortcoming into account.", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 186, |
|
"text": "(Wood and Ruder, 2016)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Regression: Subtask 1a & 3a", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "As mentioned in Sec 4.3, we used our best regression model to also predict ordinal labels. Since each emotion category has a different distribution of regression labels and ordinal labels, we experimented three different mappings, naive mapping, scope mapping, and polynomial mapping. Using the training set, we found the ideal mapping function to match the regression predictions and the ordinal predictions. Test set results (Table 6 ) on ordinal classification show that our mapping methods are indeed much more effective. For anger, fear, and sentiment categories, polynomial mapping performed the best, whereas scope mapping outperformed for joy and sadness categories. With our method, we achieved higher ranks in ordinal classification tasks (2a & 4a), placed both in 2nd. Figure 3 shows how a cubic function is fitted to find the mapping between regression labels and ordinal labels.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 427, |
|
"end": 435, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 788, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ordinal Classification: Subtask 2a& 4a", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Additionally, we report some results better than the final submission. The change is due to a new model selection strategy. For the final submission, we searched for the optimal pair of regression model & mapping method by looking at the Figure 3 : Plot of test labels and the mapping function derived from the training set. A polynomial function is fitted to map the regression predictions into ordinal predictions ordinal classification results on the development set. However, it turned out that always using the best ensemble prediction and then searching for the optimal mapping method with respect to the development set was better.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 238, |
|
"end": 246, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Ordinal Classification: Subtask 2a& 4a", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We found the best hyper-parameters by evaluating on our development set. We initialized the weight matrix W with a normal distribution of standard deviation of 0.1. We used gradient descent to optimize this function and set the learning rate to 1.0. Table 8 : Average differences of the system's bias. Gender difference is from female to male, and race differences is from African American names to European American names (sign of the percentage indicates the direction). \"Ours\" indicate the bias of our system, and \"Avg\" is the average of the biases of all systems from the competition.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 250, |
|
"end": 257, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-label Classification: Subtask 5a", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "The optimal \u03bb we found was -0.0001. We found that regularized linear regression model was always better than classifier chain model. The ensemble of classifier chain and regularized linear regression of both features combination(underlined elements in Table 7 ) achieved best performance than any single model (Table 7) .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 259, |
|
"text": "Table 7", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 310, |
|
"end": 319, |
|
"text": "(Table 7)", |
|
"ref_id": "TABREF9" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Multi-label Classification: Subtask 5a", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "In this year's competition, the organizers gave out a mystery test set that was included in the regression tasks (subtask 1a & 3a) . At the end of the evaluation period, they announced that these were set of pair sentences that differ only in the subject's or object's gender or racial names (See the task paper Mohammad et al. (2018) for details). It turned out that our system also included some biases like most other systems did, but fairly small, less than 1.5% for gender bias and 3.5% for racial bias (Table 8) . We believe that this is an interesting experiment and look forward to discussing more about the issue during the workshop.", |
|
"cite_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 130, |
|
"text": "(subtask 1a & 3a)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 508, |
|
"end": 517, |
|
"text": "(Table 8)", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Analysis of system's gender/racial biases", |
|
"sec_num": "5.5" |
|
}, |
|
{ |
|
"text": "In this paper, we explored a couple of different methods to find good representations of emo- : Official final scoreboard on all 5 subtasks that we participated. Scores for Subtask 1-4 are macro-average of the Pearson scores of 4 emotion categories and 5 is Jaccard index. About 35 participants are in each task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "tions inside tweets for solving 5 subtasks of predicting emotion/sentiment intensity and emotion labels. We used external datasets, which were much larger than the competition dataset but distantly labeled with emojis and #hashtags, to exploit the transferred knowledge to build a more robust machine learning system to solve the task. We avoided using traditional NLP features like linguistic features and emotion/sentiment lexicons by substituting them with continuous vector representations learned from huge corpora.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We performed experiments to show that emoji sentence representations and emotional word vectors trained from neural networks can be used with tweet-specific features as input for other traditional regression models, such as SVR and Kernel Regression, to solve the task of regression and ordinal classification. We proved the effectiveness of finding the mapping of the relationship between regression and ordinal labels from the training set to perform ordinal classification. Moreover, we tried using classifier chain and regularized logistic regression to deal with multi-label classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As a final official result (Table 9) , our system ranked among the top three in every subtask of the competition we participated. For future work, we want to work further on employing these emotion representations on other tasks, such as text generation, while we gather more data and improve the model to train the representations.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 27, |
|
"end": 36, |
|
"text": "(Table 9)", |
|
"ref_id": "TABREF11" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "http://hci.epfl.ch/sharing-emotion-lexicons-anddata#emo-hash-data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://www.grantjenks.com/docs/wordsegment/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "This work is partially funded by ITS/319/16FP of Innovation Technology Commission, HKUST 16214415 & 16248016 of Hong Kong Research Grants Council, and RDC 1718050-0 of EMOS.AI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgments", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Twise at semeval-2016 task 4: Twitter sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Georgios", |
|
"middle": [], |
|
"last": "Balikas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massih-Reza", |
|
"middle": [], |
|
"last": "Amini", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Georgios Balikas and Massih-Reza Amini. 2016. Twise at semeval-2016 task 4: Twitter sentiment classification. Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm", |
|
"authors": [ |
|
{ |
|
"first": "Bjarke", |
|
"middle": [], |
|
"last": "Felbo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Mislove", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anders", |
|
"middle": [], |
|
"last": "S\u00f8gaard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Iyad", |
|
"middle": [], |
|
"last": "Rahwan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sune", |
|
"middle": [], |
|
"last": "Lehmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bjarke Felbo, Alan Mislove, Anders S\u00f8gaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain represen- tations for detecting sentiment, emotion and sar- casm. EMNLP 2017.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Robots with heart", |
|
"authors": [ |
|
{ |
|
"first": "Pascale", |
|
"middle": [], |
|
"last": "Fung", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Scientific American", |
|
"volume": "313", |
|
"issue": "5", |
|
"pages": "60--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pascale Fung. 2015. Robots with heart. Scientific American, 313(5):60-63.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The laplacian spectrum of a graph", |
|
"authors": [ |
|
{ |
|
"first": "Robert", |
|
"middle": [], |
|
"last": "Grone", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Russell", |
|
"middle": [], |
|
"last": "Merris", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V S", |
|
"middle": [], |
|
"last": "Sunder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "SIAM Journal on Matrix Analysis and Applications", |
|
"volume": "11", |
|
"issue": "2", |
|
"pages": "218--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Robert Grone, Russell Merris, and V S Sunder. 1990. The laplacian spectrum of a graph. SIAM Journal on Matrix Analysis and Applications, 11(2):218-238.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Convolutional neural networks for sentence classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "pub-- lisher=Citeseer", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP, pub- lisher=Citeseer,.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Skip-thought vectors", |
|
"authors": [ |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Kiros", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yukun", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Ruslan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Salakhutdinov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raquel", |
|
"middle": [], |
|
"last": "Zemel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Urtasun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sanja", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Fidler", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3294--3302", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In NIPS, pages 3294-3302.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Deep learning", |
|
"authors": [ |
|
{ |
|
"first": "Yann", |
|
"middle": [], |
|
"last": "Lecun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoshua", |
|
"middle": [], |
|
"last": "Bengio", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Hinton", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Nature", |
|
"volume": "521", |
|
"issue": "7553", |
|
"pages": "436--444", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature, 521(7553):436-444.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Distributed representations of words and phrases and their compositionality", |
|
"authors": [ |
|
{ |
|
"first": "Tomas", |
|
"middle": [], |
|
"last": "Mikolov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ilya", |
|
"middle": [], |
|
"last": "Sutskever", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kai", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Greg", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Corrado", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeff", |
|
"middle": [], |
|
"last": "Dean", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "3111--3119", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111-3119.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Semeval-2018 Task 1: Affect in tweets", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felipe", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Bravo-Marquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Salameh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of International Workshop on Semantic Evaluation (SemEval-2018)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad, Felipe Bravo-Marquez, Mo- hammad Salameh, and Svetlana Kiritchenko. 2018. Semeval-2018 Task 1: Affect in tweets. In Proceed- ings of International Workshop on Semantic Evalu- ation (SemEval-2018), New Orleans, LA, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Understanding emotions: A dataset of tweets to study interactions between affect categories", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Saif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad and Svetlana Kiritchenko. 2018. Understanding emotions: A dataset of tweets to study interactions between affect categories. In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference, Miyazaki, Japan.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Automatic differentiation in pytorch", |
|
"authors": [ |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Paszke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Gross", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumith", |
|
"middle": [], |
|
"last": "Chintala", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Chanan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zachary", |
|
"middle": [], |
|
"last": "Devito", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zeming", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alban", |
|
"middle": [], |
|
"last": "Desmaison", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luca", |
|
"middle": [], |
|
"last": "Antiga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Lerer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gre- gory Chanan, Edward Yang, Zachary DeVito, Zem- ing Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Scikit-learn: Machine learning in python", |
|
"authors": [ |
|
{ |
|
"first": "Fabian", |
|
"middle": [], |
|
"last": "Pedregosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Varoquaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandre", |
|
"middle": [], |
|
"last": "Gramfort", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Michel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bertrand", |
|
"middle": [], |
|
"last": "Thirion", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Olivier", |
|
"middle": [], |
|
"last": "Grisel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mathieu", |
|
"middle": [], |
|
"last": "Blondel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Prettenhofer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ron", |
|
"middle": [], |
|
"last": "Weiss", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vincent", |
|
"middle": [], |
|
"last": "Dubourg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Journal of machine learning research", |
|
"volume": "12", |
|
"issue": "", |
|
"pages": "2825--2830", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Fabian Pedregosa, Ga\u00ebl Varoquaux, Alexandre Gram- fort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. Journal of machine learning research, 12(Oct):2825-2830.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Glove: Global vectors for word representation", |
|
"authors": [ |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Pennington", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "EMNLP2014", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1532--1543", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP2014, pages 1532-1543.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Classifier chains for multilabel classification", |
|
"authors": [ |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Read", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Pfahringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoff", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Joint European Conference on Machine Learning and Knowledge Discovery in Databases", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "254--269", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. 2009. Classifier chains for multi- label classification. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 254-269. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Pca using graph total variation", |
|
"authors": [ |
|
{ |
|
"first": "Nauman", |
|
"middle": [], |
|
"last": "Shahid", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathanael", |
|
"middle": [], |
|
"last": "Perraudin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vassilis", |
|
"middle": [], |
|
"last": "Kalofolias", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Benjamin", |
|
"middle": [], |
|
"last": "Ricaud", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pierre", |
|
"middle": [], |
|
"last": "Vandergheynst", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4668--4672", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nauman Shahid, Nathanael Perraudin, Vassilis Kalo- folias, Benjamin Ricaud, and Pierre Vandergheynst. 2016. Pca using graph total variation. In Acous- tics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 4668- 4672. Ieee.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Emotion knowledge: Further exploration of a prototype approach", |
|
"authors": [ |
|
{ |
|
"first": "Phillip", |
|
"middle": [], |
|
"last": "Shaver", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Judith", |
|
"middle": [], |
|
"last": "Schwartz", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Kirson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cary O'", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1987, |
|
"venue": "Journal of personality and social psychology", |
|
"volume": "52", |
|
"issue": "6", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Phillip Shaver, Judith Schwartz, Donald Kirson, and Cary O'connor. 1987. Emotion knowledge: Further exploration of a prototype approach. Journal of per- sonality and social psychology, 52(6):1061.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Distant supervision for emotion classification with discrete binary values", |
|
"authors": [ |
|
{ |
|
"first": "Jared", |
|
"middle": [], |
|
"last": "Suttles", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nancy", |
|
"middle": [], |
|
"last": "Ide", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--136", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jared Suttles and Nancy Ide. 2013. Distant supervision for emotion classification with discrete binary val- ues. In International Conference on Intelligent Text Processing and Computational Linguistics, pages 121-136. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Learning sentimentspecific word embedding for twitter sentiment classification", |
|
"authors": [ |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1555--1565", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentiment- specific word embedding for twitter sentiment clas- sification. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1555- 1565.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "The heart and soul of the web? sentiment strength detection in the social web with sentistrength", |
|
"authors": [ |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Thelwall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Cyberemotions", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "119--134", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mike Thelwall. 2017. The heart and soul of the web? sentiment strength detection in the social web with sentistrength. In Cyberemotions, pages 119-134. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Harnessing twitter\" big data\" for automatic emotion identification", |
|
"authors": [ |
|
{ |
|
"first": "Wenbo", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lu", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Krishnaprasad", |
|
"middle": [], |
|
"last": "Thirunarayan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit P", |
|
"middle": [], |
|
"last": "Sheth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Privacy, Security, Risk and Trust (PASSAT), 2012 International Conference on and 2012 International Confernece on Social Computing (Social-Com)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "587--592", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P Sheth. 2012. Harnessing twitter\" big data\" for automatic emotion identification. In Privacy, Security, Risk and Trust (PASSAT), 2012 International Conference on and 2012 Interna- tional Confernece on Social Computing (Social- Com), pages 587-592. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Emoji as emotion tags for tweets", |
|
"authors": [ |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Wood", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Ruder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Emotion and Sentiment Analysis Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ian Wood and Sebastian Ruder. 2016. Emoji as emo- tion tags for tweets. In Emotion and Sentiment Anal- ysis Workshop, at LREC2016. LREC2016.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Ecnu at semeval-2016 task 4: An empirical investigation of traditional nlp features and word embedding features for sentence-level and topic-level sentiment analysis in twitter", |
|
"authors": [ |
|
{ |
|
"first": "Yunxiao", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zhihua", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Man", |
|
"middle": [], |
|
"last": "Lan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "256--261", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yunxiao Zhou, Zhihua Zhang, and Man Lan. 2016. Ecnu at semeval-2016 task 4: An empirical inves- tigation of traditional nlp features and word embed- ding features for sentence-level and topic-level sen- timent analysis in twitter. In Proceedings of the 10th International Workshop on Semantic Evalua- tion (SemEval-2016), pages 256-261.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Distribution of regression labels (x-axis) and ordinal labels (y-axis) on the training dataset of Task 1a & 2a. Class 0 for fear is distributed in [0,0.6], whereas class 0 for joy is distributed in [0, 0.4]. Vertical lines are boundaries between ordinal classes, which are used for scope mapping method values (discrete) from the training dataset. We experiment with three different mapping: 1. naive mapping: divides [0,1] into same size segments according to the number of ordinals 2. scope mapping: finds the boundary of each segment in the training dataset (vertical lines on Figure 2)", |
|
"num": null |
|
}, |
|
"FIGREF1": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>Test samples from the Emoji Cluster</td></tr><tr><td>model and their top-3 nearest sentences according</td></tr><tr><td>to the learned representations. It shows that emo-</td></tr><tr><td>tionally similar sentences are clustered together</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"type_str": "table", |
|
"text": "5% It's been such a great week #happy Sadness 33.8% I think I miss my boyfriend #lonely Anger 23.5% Ignoring me isn't going to make our problems go away. #annoyed Fear 6% What to wear for this job orientation.. #nervous", |
|
"content": "<table><tr><td>Emotion Label % Joy 36.</td><td>Samples</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"text": "Statistics of the competition dataset for all 5 subtasks", |
|
"content": "<table><tr><td># of labels %</td><td>0 2.9 14.3 40.6 30.9 9.6 1.4 0.2 1 2 3 4 5 6</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF4": { |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table><tr><td>: Number of multi-labels. Most samples</td></tr><tr><td>have from 1-3 labels, but can have no labels or up</td></tr><tr><td>to 6 labels. (subtask 5a)</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF6": { |
|
"type_str": "table", |
|
"text": "Test set results on Subtask 1a & 3a. For 1a, separate regression models were trained for each emotion category. The number next to the best result(bold & underlined) indicates our ranking of the competition. Underlined ones show the models that were selected for ensemble according to the dev set.", |
|
"content": "<table><tr><td>Task 2a (EI-oc) 4a (V-oc)</td><td>Anger Fear Joy Sadness Valence</td><td>Pearson (all instances) Naive Scope Poly .654 .664 .704(2) .498 .562 .570(*) .632 .720(1) .712 .645 .697(*) .692 .813 .816 .833(2)</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF7": { |
|
"type_str": "table", |
|
"text": "Test set results on Subtask 2a & 4a.", |
|
"content": "<table><tr><td>The predictions of the best regression models are</td></tr><tr><td>mapped into ordinal predictions. The number next</td></tr><tr><td>to the best result(bold & underlined) indicates our</td></tr><tr><td>ranking of the competition. (*) indicates better re-</td></tr><tr><td>sults that we acquired after our final submission</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF9": { |
|
"type_str": "table", |
|
"text": "Test set results on Subtask 5a. The competition metric is Jaccard index.", |
|
"content": "<table><tr><td>Gender Ours Avg 0.5% 0.1% -0.9% -0.3% 3.3% Race Ours 1% 0.4% Avg 0.5% -1.2% 0.4% -0.9% -0.7% 0% 0.2% 1.3% 0.8% Valence -0.6% 0.5% Anger Fear Joy Sadness -1% -0.6%</td></tr></table>", |
|
"num": null, |
|
"html": null |
|
}, |
|
"TABREF11": { |
|
"type_str": "table", |
|
"text": "", |
|
"content": "<table/>", |
|
"num": null, |
|
"html": null |
|
} |
|
} |
|
} |
|
} |