|
{ |
|
"paper_id": "S16-1032", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:26:19.528155Z" |
|
}, |
|
"title": "VCU-TSA at Semeval-2016 Task 4: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Gerard", |
|
"middle": [], |
|
"last": "Briones", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Kasun", |
|
"middle": [], |
|
"last": "Amarasinghe", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Bridget", |
|
"middle": [ |
|
"T" |
|
], |
|
"last": "Mcinnes", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "The aim of this paper is to produce a methodology for analyzing sentiments of selected Twitter messages, better known as Tweets. This project elaborates on two experiments carried out to analyze the sentiment of Tweets from SemEval-2016 Task 4 Subtask A and Subtask B. Our method is built from a simple unigram model baseline with three main feature enhancements incorporated into the model: 1) emoticon retention, 2) word stemming, and 3) token saliency calculation. Our results indicate an increase in classification accuracy with the addition of emoticon retention and word stemming, while token saliency shows mixed performance. These results elucidate a possible classification feature model that could aid in the sentiment analysis of Twitter feeds and other microblogging environments.", |
|
"pdf_parse": { |
|
"paper_id": "S16-1032", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "The aim of this paper is to produce a methodology for analyzing sentiments of selected Twitter messages, better known as Tweets. This project elaborates on two experiments carried out to analyze the sentiment of Tweets from SemEval-2016 Task 4 Subtask A and Subtask B. Our method is built from a simple unigram model baseline with three main feature enhancements incorporated into the model: 1) emoticon retention, 2) word stemming, and 3) token saliency calculation. Our results indicate an increase in classification accuracy with the addition of emoticon retention and word stemming, while token saliency shows mixed performance. These results elucidate a possible classification feature model that could aid in the sentiment analysis of Twitter feeds and other microblogging environments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Twitter is a widely used microblogging environment which serves as a medium to share opinions on various events and products. Because of this, analyzing Twitter has the potential to reveal opinions of the general public regarding these topics. However, mining the content of Twitter messages is a challenging task due to a multitude of reasons, such as the shortness of the posted content and the informal and unstructured nature of the language used. The aim of this study is to produce a methodology for analyzing sentiments of selected Twitter messages, better known as Tweets. This project elaborates on two experiments carried out to analyze the sentiment of Tweets, namely, Subtask A and Subtask B from SemEval-2016 Task 4 (Nakov et al., 2016) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 729, |
|
"end": 749, |
|
"text": "(Nakov et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Subtask A: Message Polarity Classification. The goal of this subtask was to predict a given Tweet's sentiment from three classes: 1) positive, 2) neutral, or 3) negative.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Subtask B: Tweet classification according to a two-point scale. The goal of this subtask was to classify a given Tweet's sentiment towards a given topic. The sentiments were limited to positive and negative, unlike Subtask A.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We viewed both tasks as classification problems. We represented the Tweets in a statistical feature matrix and performed the classification using supervised machine learning classification algorithms. Several different feature vectors were experimented with and the same set of feature vectors were applied to both Subtask A and Subtask B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Method", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our methods consume Tweet data and output a matrix where each row represents a Tweet and each column represents a feature. The values in this matrix are the frequency of appearance of a feature in the Tweet. As reference for the rest of the project, please note that n-grams are a continuous set of n terms in a document. Thus, when n=1, we are representing a unigram, or a single word; when n=2, we are representing a bigram, or a pair of words, and so on. We evaluated unigram, bigram and trigram models, but discuss only the unigram model. The bigram and trigram models results showed to be less effective.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Vectors", |
|
"sec_num": "2.1" |
|
}, |
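{

"text": "As a minimal illustration (our own sketch, not from the paper; the function name is hypothetical), an n-gram extractor is a sliding window over a token list:\n\ndef ngrams(tokens, n):\n    # Slide a window of size n across the token list.\n    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]\n\n# ngrams(['great', 'game', 'tonight'], 2) -> [('great', 'game'), ('game', 'tonight')]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Vectors",

"sec_num": null

},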
|
{ |
|
"text": "As a baseline, a unigram model was used as the primary feature vector. The unigram model consists of several one-state finite automatas, splitting the probabilities of different features in a context. The probability of occurrence for each feature is independent. In our project, each word in a Tweet represents a feature.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram Model", |
|
"sec_num": "2.1.1" |
|
}, |
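{

"text": "As a sketch of this representation (names are ours, not the paper's), a Tweet's token list maps to a row of unigram frequencies over a fixed vocabulary:\n\ndef unigram_vector(tokens, vocabulary):\n    # One column per vocabulary word; the value is its frequency in the Tweet.\n    return [tokens.count(word) for word in vocabulary]\n\n# unigram_vector(['good', 'good', 'game'], ['good', 'game', 'bad']) -> [2, 1, 0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unigram Model",

"sec_num": null

},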
|
{ |
|
"text": "For the first step of creating the unigram feature vector, numbers and special characters were removed from each Tweet, since they carry little information when taken out of context. The Tweets were then converted to all lower case to reduce the dimensionality of the data, whereby different users' capitalization does not factor in as a new, separate feature. Next, the Tweets were tokenized by breaking up the messages into single word units that each represent a unique feature. All stop words were then removed from these token sets. Stop words, such as \"the\" and \"a\", are the most commonly occurring words in a language and are considered to carry little to no information due to their high frequency of appearance (Yao and Ze-wen, 2011) . Their presence in the dataset has the potential of adversely affecting the classification results. The most frequent words in the dataset were then identified based on a specified frequency threshold, filtering out all tokens that appeared less than the threshold. This was done to reduce the size of the resultant feature vector and identify the most general set of terms that represents the dataset.", |
|
"cite_spans": [ |
|
{ |
|
"start": 720, |
|
"end": 742, |
|
"text": "(Yao and Ze-wen, 2011)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram Model", |
|
"sec_num": "2.1.1" |
|
}, |
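{

"text": "A minimal Python sketch of these preprocessing steps (the stop-word subset and helper name are illustrative; the paper's released package contains the actual implementation):\n\nimport re\n\nSTOP_WORDS = {'the', 'a', 'an', 'is', 'to'}  # illustrative subset of the manually created list\n\ndef preprocess(tweet):\n    tweet = tweet.lower()                    # normalize capitalization\n    tweet = re.sub(r'[^a-z\\s]', ' ', tweet)  # drop numbers and special characters\n    tokens = tweet.split()                   # tokenize into single-word units\n    return [t for t in tokens if t not in STOP_WORDS]\n\n# preprocess('The game was GREAT!!! 10/10') -> ['game', 'was', 'great']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unigram Model",

"sec_num": null

},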
|
{ |
|
"text": "In addition to the baseline methodology, we applied a technique known as stemming, which reduces words to their basic forms. This process combines words with similar basic forms, for example the words \"running\" and \"ran\" are reduced to the base form of \"run,\" thus reducing the overall feature count and increasing the co-occurrence count.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with feature reduction through stemming", |
|
"sec_num": "2.1.2" |
|
}, |
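{

"text": "A minimal sketch of this step using the Porter stemmer from NLTK, the implementation named in Section 5.1; note that stems are not always dictionary words:\n\nfrom nltk.stem.porter import PorterStemmer\n\nstemmer = PorterStemmer()\nprint([stemmer.stem(t) for t in ['running', 'runs', 'easily']])  # ['run', 'run', 'easili']",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unigram model with feature reduction through stemming",

"sec_num": null

},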
|
{ |
|
"text": "One of the disadvantages of removing special characters from the Tweets was that the emoticons, text representations of emotions, were lost. Emoticons are good indicators of expression and emotion, and are frequently used in Tweets. We again performed the steps in the previous methods, but before the removal of special characters from the Tweet, we implemented a series of regular expressions to capture a specific set of emoticons and convert them into unique key words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with feature enhancement through emoticon retention", |
|
"sec_num": "2.1.3" |
|
}, |
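{

"text": "A minimal sketch of the conversion (a hypothetical subset of the Table 3 mapping; the paper's full regular-expression set ships with its package):\n\nimport re\n\nEMOTICON_TOKENS = {':)': 'smileEmoticon', ':(': 'frownEmoticon', ';)': 'winkEmoticon', ':D': 'grinEmoticon'}\n\ndef retain_emoticons(tweet):\n    # Replace each emoticon with a unique keyword before special characters are stripped.\n    for emoticon, token in EMOTICON_TOKENS.items():\n        tweet = re.sub(re.escape(emoticon), ' ' + token + ' ', tweet)\n    return tweet\n\n# retain_emoticons('loved it :)') -> 'loved it  smileEmoticon '",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unigram model with feature enhancement through emoticon retention",

"sec_num": null

},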
|
{ |
|
"text": "The saliency, or quality, of the terms in the unigram model were calculated using the Term Frequency -Inverse Document Frequency (TF-IDF) score. The values of the matrix were modified to the TF-IDF score through the equations below.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "TF(Tweet, term) = frequencyOfTerm(Tweet) totalTerms(Tweet)", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
|
{ |
|
"text": "IDF(term) = log totalDocuments numDocumentsContaining(term)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
|
{ |
|
"text": "TF-IDF(Tweet, term) = TF(Tweet, term)\u2022IDF(term)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
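{

"text": "A direct transcription of Equations 1-3 in Python (function names are ours). Each document is a tokenized Tweet; IDF is only defined for terms that appear in at least one Tweet:\n\nimport math\n\ndef tf(term, tweet_tokens):\n    # Equation 1: relative frequency of the term within the Tweet.\n    return tweet_tokens.count(term) / len(tweet_tokens)\n\ndef idf(term, all_tweets):\n    # Equation 2: log of total documents over documents containing the term.\n    containing = sum(1 for tokens in all_tweets if term in tokens)\n    return math.log(len(all_tweets) / containing)\n\ndef tf_idf(term, tweet_tokens, all_tweets):\n    # Equation 3: product of the two scores.\n    return tf(term, tweet_tokens) * idf(term, all_tweets)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Unigram model with word saliency statistics",

"sec_num": null

},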
|
{ |
|
"text": "3 Classification", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
|
{ |
|
"text": "Once the feature vectors were created, the final step of classification was accomplished using supervised machine learning algorithms. In the presented methodology, classification was carried out using single classifiers as well as multiple classifiers. As a reminder, Subtask A is a three class problem, while Subtask B is a two class problem.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Unigram model with word saliency statistics", |
|
"sec_num": "2.1.4" |
|
}, |
|
{ |
|
"text": "For this classification method, only a single classifier was used to perform the classification. The feature vector of a Tweet was used as input and the classifier returned the predicted sentiment class for that Tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Single Classifier", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "For this classification method, multiple classifiers were utilized to produce the final sentiment class of the Tweet based on a voting system. Each classifier is given a single vote and performs the classification of the Tweet on its own; casting its vote for which classification should be assigned for that Tweet. The predicted class with the majority of votes is then assigned as the class for that Tweet. We refer to this process as voting.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Multiple Classifiers", |
|
"sec_num": "3.2" |
|
}, |
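{

"text": "A minimal sketch of the voting scheme (assuming each trained classifier exposes a predict method; the paper does not specify tie-breaking, which here falls to the first-encountered class):\n\nfrom collections import Counter\n\ndef vote(classifiers, feature_vector):\n    # Each classifier casts one vote; the majority class wins.\n    votes = [clf.predict(feature_vector) for clf in classifiers]\n    return Counter(votes).most_common(1)[0][0]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Multiple Classifiers",

"sec_num": null

},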
|
{ |
|
"text": "The SemEval-2016 (Nakov et al., 2016) training datasets were used for both tasks. The datasets consisted of Tweets with pre-labeled sentiments. Table 1 and Table 2 show the class distributions of the training data. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 17, |
|
"end": 37, |
|
"text": "(Nakov et al., 2016)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 144, |
|
"end": 163, |
|
"text": "Table 1 and Table 2", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "This section discusses the parameters and assumptions made in the implementation of our systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Specifics", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our system is freely available for download 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Implementation Specifics", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "For the feature vector creation, the stemming process was carried out using the Porter stemmer (Porter, 1997) supplied in the Natural Language Toolkit (NLTK) (Loper and Bird, 2002 ) platform 2 . The stop word list was manually created and is freely available in our package. The emoticons were retained by converting them to unique tokens using regular expressions. Table 3 shows the emoticons used by the system and their conversion. For our implementation, we used a frequency threshold of five to filter our features. This parameter was determined during initial development of the system by evaluating several thresholds using 10-fold cross validation over the training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 95, |
|
"end": 109, |
|
"text": "(Porter, 1997)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 158, |
|
"end": 179, |
|
"text": "(Loper and Bird, 2002", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 366, |
|
"end": 373, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Vector Creation", |
|
"sec_num": "5.1" |
|
}, |
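{

"text": "A plain-Python outline of how candidate thresholds can be compared with 10-fold cross validation (our own sketch, not the paper's Weka-based procedure; train_fn is a hypothetical function returning a trained classifier with a predict method):\n\ndef kfold_accuracy(train_fn, X, y, k=10):\n    accs = []\n    for i in range(k):\n        test_idx = set(range(i, len(X), k))  # every k-th example held out\n        train_X = [x for j, x in enumerate(X) if j not in test_idx]\n        train_y = [c for j, c in enumerate(y) if j not in test_idx]\n        clf = train_fn(train_X, train_y)\n        correct = sum(clf.predict(X[j]) == y[j] for j in test_idx)\n        accs.append(correct / len(test_idx))\n    return sum(accs) / len(accs)\n\n# Rebuild the feature vectors at each candidate threshold (e.g., 1, 3, 5, 10)\n# and keep the threshold with the best kfold_accuracy.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Feature Vector Creation",

"sec_num": null

},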
|
{ |
|
"text": "Three classifiers were tested for both subtasks. Subtask A utilized the Naive Bayes Multinomial, Naive Bayes, and J48 decision tree classifiers. Similarly, 1 https://github.com/gerardBriones/twitter-sentimentanalysis 2 http://www.nltk.org/ (Hall et al., 2009) data mining package 3 . We used Weka's default learning parameters in our experiments. Table 4 shows the overall accuracies acquired by the different classifiers tested for Subtask A. We chose to use a baseline of a unigram model with the frequency threshold set to one. The Naive Bayes Multinomial classifier produced the highest results for Subtask A from the classifiers tested. Further, the enhancements done to the unigram model did not yield a significant increase of accuracy in our tests. The highest accuracy was achieved with the basic unigram model in conjunction with the Naive Bayes Multinomial classifier. With that being said, the basic unigram model with a frequency threshold of five was able to outperform our selected baseline. Table 5 illustrates the overall accuracies obtained for Subtask B. We swapped the Naive Bayes Multinomial classifier with the Support Vector Machines classifier due to our use of the Tweet's topic as categorical data. The SVM classifier produced the highest overall results. Further, all classifiers except for Naive Bayes performed better than the baseline. Voting did not perform as well with the J48 and SVM algorithms, but still outperformed Naive Bayes. Our highest accuracy was achieved using the unigram model with stemming as features into the SVM classifier. Table 6 shows the results using our unigram, stemming, emoticon retention, and TF-IDF methodology on Subtask A. Our average F1 and average recall scores are higher than the baseline, with our accuracy score having a smaller, but still noticeable increase. Table 7 shows the results using our unigram, stemming, emoticon retention, and TF-IDF methodology on Subtask B. Our average F1 and average recall scores are slightly better than the baseline, while our accuracy also slightly decreased. In this paper, we present a method to predict the sentiment of Twitter feeds. We evaluated using a unigram model with three feature modifications:", |
|
"cite_spans": [ |
|
{ |
|
"start": 240, |
|
"end": 259, |
|
"text": "(Hall et al., 2009)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 347, |
|
"end": 354, |
|
"text": "Table 4", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 1007, |
|
"end": 1014, |
|
"text": "Table 5", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 1575, |
|
"end": 1582, |
|
"text": "Table 6", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 1831, |
|
"end": 1838, |
|
"text": "Table 7", |
|
"ref_id": "TABREF6" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Classifiers", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "(1) stemming, (2) emoticon retention, and (3) word saliency. These modifications were applied to the unigram model and consumed with machine learning algorithms from the Weka datamining package. The results showed that using a unigram model with a frequency threshold of five in conjunction with the Naive Bayes Multinomial classifiers obtained the highest accuracy for Subtask A, and the unigram model with stemming in combination with the Support Vector Machine classifier achieved the highest accuracy for Subtask B. Analysis of the results showed that the unstructured nature of word spelling may have played a role in our relatively low accuracies, causing multiple features to be seen as unique, when in actuality they should in fact map to the same feature. We also believe that the mixed results from the inclusion of the TF-IDF score is due in part to the heavily skewed nature of the data. In both Subtask A and Subtask B, the training data was mostly comprised of positively tagged sentiments, overwhelming the other classifications. In the future, we plan to explore incorporating synonym set evaluations, acronym expansion, and spelling correction to aid in feature reduction. Efforts will also be made to include more contextual information, like sentiment lexicon, and to explore other multiple classifier methods, such as cotraining.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "6" |
|
}
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "The weka data mining software: An update", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Hall", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eibe", |
|
"middle": [], |
|
"last": "Frank", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Pfahringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Reutemann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Witten", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "SIGKDD Explor. Newsl", |
|
"volume": "11", |
|
"issue": "1", |
|
"pages": "10--18", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. SIGKDD Explor. Newsl., 11(1):10-18, November.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Nltk: The natural language toolkit", |
|
"authors": [ |
|
{ |
|
"first": "Edward", |
|
"middle": [], |
|
"last": "Loper", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Steven", |
|
"middle": [], |
|
"last": "Bird", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "63--70", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natural language toolkit. In Proceedings of the ACL-02 Work- shop on Effective Tools and Methodologies for Teach- ing Natural Language Processing and Computational Linguistics -Volume 1, ETMTNLP '02, pages 63-70, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "SemEval-2016 task 4: Sentiment analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Veselin Stoy- anov, and Fabrizio Sebastiani. 2016. SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '16, San Diego, California, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Readings in information retrieval. chapter An Algorithm for Suffix Stripping", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Porter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. F. Porter. 1997. Readings in information retrieval. chapter An Algorithm for Suffix Stripping, pages 313-", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Research on the construction and filter method of stop-word list in text preprocessing", |
|
"authors": [ |
|
{ |
|
"first": "Zhou", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cao", |
|
"middle": [], |
|
"last": "Ze-Wen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Intelligent Computation Technology and Automation (ICICTA), 2011 International Conference on", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "217--221", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhou Yao and Cao Ze-wen. 2011. Research on the con- struction and filter method of stop-word list in text pre- processing. In Intelligent Computation Technology and Automation (ICICTA), 2011 International Conference on, volume 1, pages 217-221. IEEE.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"text": "", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"3\">: Dataset for Subtask A</td></tr><tr><td># of Tweets</td><td>P os</td><td colspan=\"2\">N eu N eg</td></tr><tr><td>705</td><td colspan=\"2\">345 (48.93%) 164</td><td>196</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF1": { |
|
"text": "", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\">: Dataset for Subtask B</td></tr><tr><td colspan=\"2\"># of Tweets Topics</td><td>P os</td><td>N eg</td></tr><tr><td>3890</td><td>59</td><td colspan=\"2\">3215 (82.64%) 675</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"text": "", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>: Emoticons</td></tr><tr><td>smileEmoticon frownEmoticon winkEmoticon tongueEmoticon concernEmoticon grinEmoticon mirrorGrinEmoticon D : :) : ( ; ) : P : / : D winkGrinEmoticon ; D surpriseEmoticon : O tearSmileEmoticon : ) tearFrownEmoticon : (</td></tr><tr><td>Subtask B used the Naive Bayes, J48 decision tree,</td></tr><tr><td>and Support Vector Machine classifiers. These clas-</td></tr><tr><td>sifiers were used individually as well as with vot-</td></tr><tr><td>ing. All classifiers were implemented using the open</td></tr><tr><td>source, freely available Weka</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"text": "Overall classification accuracies for Subtask A", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Uni</td><td colspan=\"3\">Uni + Stem Uni+ Stem + Emot Uni + Stem + Emot + TF-IDF</td></tr><tr><td>NBM</td><td>0.577</td><td>0.572</td><td>0.569</td><td>0.557</td></tr><tr><td>NB</td><td>0.550</td><td>0.539</td><td>0.540</td><td>0.552</td></tr><tr><td>J48</td><td>0.516</td><td>0.549</td><td>0.552</td><td>0.515</td></tr><tr><td colspan=\"2\">Voting 0.569</td><td>0.566</td><td>0.562</td><td>0.533</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF4": { |
|
"text": "Overall classification accuracies for Subtask B", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Uni</td><td colspan=\"3\">Uni + Stem Uni+ Stem + Emot Uni + Stem + Emot + TF-IDF</td></tr><tr><td>NBM</td><td>0.692</td><td>0.674</td><td>0.674</td><td>0.612</td></tr><tr><td>J48</td><td>0.864</td><td>0.870</td><td>0.870</td><td>0.872</td></tr><tr><td>SVM</td><td>0.879</td><td>0.881</td><td>0.881</td><td>0.870</td></tr><tr><td colspan=\"2\">Voting 0.867</td><td>0.865</td><td>0.863</td><td>0.876</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF5": { |
|
"text": "", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: Final Evaluation Results for Subtask A</td></tr><tr><td># System</td><td>AvgF1 AvgR Acc</td></tr><tr><td>1 SwissCheese</td><td>0.633 0.667 0.646</td></tr><tr><td>2 SENSEI-LIF</td><td>0.630 0.670 0.617</td></tr><tr><td>3 unimelb</td><td>0.617 0.641 0.616</td></tr><tr><td>4 INESC-ID</td><td>0.610 0.663 0.600</td></tr><tr><td colspan=\"2\">5 aueb.twitter.sentiment 0.605 0.644 0.629</td></tr><tr><td>31 VCU-TSA</td><td>0.372 0.390 0.382</td></tr><tr><td>35 baseline (positive)</td><td>0.255 0.333 0.342</td></tr></table>", |
|
"num": null |
|
}, |
|
"TABREF6": { |
|
"text": "", |
|
"html": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">: Final Evaluation Results for Subtask B</td></tr><tr><td># System</td><td>AvgF1 AvgR Acc</td></tr><tr><td>1 Tweester</td><td>0.797 0.799 0.862</td></tr><tr><td>2 LYS</td><td>0.791 0.720 0.762</td></tr><tr><td>3 thecerealkiller</td><td>0.784 0.762 0.823</td></tr><tr><td>4 ECNU</td><td>0.768 0.770 0.843</td></tr><tr><td>5 INSIGHT-1</td><td>0.767 0.786 0.864</td></tr><tr><td>19 VCU-TSA</td><td>0.502 0.448 0.775</td></tr><tr><td colspan=\"2\">20 baseline (positive) 0.500 0.438 0.778</td></tr></table>", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |