|
{ |
|
"paper_id": "S14-2009", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:32:51.319714Z" |
|
}, |
|
"title": "SemEval-2014 Task 9: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Columbia University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Johns Hopkins University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "We describe the Sentiment Analysis in Twitter task, ran as part of SemEval-2014. It is a continuation of the last year's task that ran successfully as part of SemEval-2013. As in 2013, this was the most popular SemEval task; a total of 46 teams contributed 27 submissions for subtask A (21 teams) and 50 submissions for subtask B (44 teams). This year, we introduced three new test sets: (i) regular tweets, (ii) sarcastic tweets, and (iii) LiveJournal sentences. We further tested on (iv) 2013 tweets, and (v) 2013 SMS messages. The highest F1score on (i) was achieved by NRC-Canada at 86.63 for subtask A and by TeamX at 70.96 for subtask B.", |
|
"pdf_parse": { |
|
"paper_id": "S14-2009", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "We describe the Sentiment Analysis in Twitter task, ran as part of SemEval-2014. It is a continuation of the last year's task that ran successfully as part of SemEval-2013. As in 2013, this was the most popular SemEval task; a total of 46 teams contributed 27 submissions for subtask A (21 teams) and 50 submissions for subtask B (44 teams). This year, we introduced three new test sets: (i) regular tweets, (ii) sarcastic tweets, and (iii) LiveJournal sentences. We further tested on (iv) 2013 tweets, and (v) 2013 SMS messages. The highest F1score on (i) was achieved by NRC-Canada at 86.63 for subtask A and by TeamX at 70.96 for subtask B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In the past decade, new forms of communication have emerged and have become ubiquitous through social media. Microblogs (e.g., Twitter), Weblogs (e.g., LiveJournal) and cell phone messages (SMS) are often used to share opinions and sentiments about the surrounding world, and the availability of social content generated on sites such as Twitter creates new opportunities to automatically study public opinion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Working with these informal text genres presents new challenges for natural language processing beyond those encountered when working with more traditional text genres such as newswire. The language in social media is very informal, with creative spelling and punctuation, misspellings, slang, new words, URLs, and genrespecific terminology and abbreviations, e.g., RT for re-tweet and #hashtags 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Moreover, tweets and SMS messages are short: a sentence or a headline rather than a document.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "How to handle such challenges so as to automatically mine and understand people's opinions and sentiments has only recently been the subject of research (Jansen et al., 2009; Barbosa and Feng, 2010; Bifet et al., 2011; Davidov et al., 2010; O'Connor et al., 2010; Pak and Paroubek, 2010; Tumasjan et al., 2010; Kouloumpis et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 153, |
|
"end": 174, |
|
"text": "(Jansen et al., 2009;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 198, |
|
"text": "Barbosa and Feng, 2010;", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 199, |
|
"end": 218, |
|
"text": "Bifet et al., 2011;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 240, |
|
"text": "Davidov et al., 2010;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 241, |
|
"end": 263, |
|
"text": "O'Connor et al., 2010;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 287, |
|
"text": "Pak and Paroubek, 2010;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 288, |
|
"end": 310, |
|
"text": "Tumasjan et al., 2010;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 311, |
|
"end": 335, |
|
"text": "Kouloumpis et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Several corpora with detailed opinion and sentiment annotation have been made freely available, e.g., the MPQA newswire corpus (Wiebe et al., 2005) , the movie reviews corpus (Pang et al., 2002) , or the restaurant and laptop reviews corpora that are part of this year's SemEval Task 4 (Pontiki et al., 2014) . These corpora have proved very valuable as resources for learning about the language of sentiment in general, but they do not focus on tweets. While some Twitter sentiment datasets were created prior to SemEval-2013, they were either small and proprietary, such as the isieve corpus (Kouloumpis et al., 2011) or focused solely on message-level sentiment.", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 147, |
|
"text": "(Wiebe et al., 2005)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 175, |
|
"end": 194, |
|
"text": "(Pang et al., 2002)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 286, |
|
"end": 308, |
|
"text": "(Pontiki et al., 2014)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 594, |
|
"end": 619, |
|
"text": "(Kouloumpis et al., 2011)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Thus, the primary goal of our SemEval task is to promote research that will lead to better understanding of how sentiment is conveyed in Social Media. Toward that goal, we created the Se-mEval Tweet corpus as part of our inaugural Sentiment Analysis in Twitter Task, SemEval-2013 Task 2 (Nakov et al., 2013) . It contains tweets and SMS messages with sentiment expressions annotated with contextual phrase-level and messagelevel polarity. This year, we extended the corpus by adding new tweets and LiveJournal sentences.", |
|
"cite_spans": [ |
|
{ |
|
"start": 287, |
|
"end": 307, |
|
"text": "(Nakov et al., 2013)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Another interesting phenomenon that has been studied in Twitter is the use of the #sarcasm hashtag to indicate that a tweet should not be taken literally (Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011; Liebrecht et al., 2013) . In fact, sarcasm indicates that the message polarity should be flipped. With this in mind, this year, we also evaluate on sarcastic tweets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 154, |
|
"end": 184, |
|
"text": "(Gonz\u00e1lez-Ib\u00e1\u00f1ez et al., 2011;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 185, |
|
"end": 208, |
|
"text": "Liebrecht et al., 2013)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the remainder of this paper, we first describe the task, the dataset creation process and the evaluation methodology. We then summarize the characteristics of the approaches taken by the participating systems, and we discuss their scores.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "As SemEval-2013 Task 2, we included two subtasks: an expression-level subtask and a messagelevel subtask. Participants could choose to participate in either or both. Below we provide short descriptions of the objectives of these two subtasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Task Description", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Given a message containing a marked instance of a word or a phrase, determine whether that instance is positive, negative or neutral in that context. The instance boundaries were provided: this was a classification task, not an entity recognition task.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask A: Contextual Polarity Disambiguation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Given a message, decide whether it is of positive, negative, or neutral sentiment. For messages conveying both positive and negative sentiment, the stronger one is to be chosen.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask B: Message Polarity Classification", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Each participating team was allowed to submit results for two different systems per subtask: one constrained, and one unconstrained. A constrained system could only use the provided data for training, but it could also use other resources such as lexicons obtained elsewhere. An unconstrained system could use any additional data as part of the training process; this could be done in a supervised, semi-supervised, or unsupervised fashion.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask B: Message Polarity Classification", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that constrained/unconstrained refers to the data used to train a classifier. For example, if other data (excluding the test data) was used to develop a sentiment lexicon, and the lexicon was used to generate features, the system would still be constrained. However, if other data (excluding the test data) was used to develop a sentiment lexicon, and this lexicon was used to automatically label additional Tweet/SMS messages and then used with the original data to train the classifier, then such a system would be considered unconstrained.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Subtask B: Message Polarity Classification", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this section, we describe the process of collecting and annotating the 2014 testing tweets, including the sarcastic ones, and LiveJournal sentences. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Datasets", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We annotated the new tweets as in 2013: by identifying tweets from popular topics that contain sentiment-bearing words by using SentiWordNet (Baccianella et al., 2010) as a filter. We altered the annotation task for the sarcastic tweets, displaying them to the Mechanical Turk annotators without the #sarcasm hashtag; the Turkers had to determine whether the tweet is sarcastic on their own. Moreover, we asked Turkers to indicate the degree of sarcasm as (a) definitely sarcastic, (b) probably sarcastic, and (c) not sarcastic. As in 2013, we combined the annotations using intersection, where a word had to appear in 2/3 of the annotations to be accepted. An annotated example from each source is shown in Table 3 . Table 3 : Example of polarity for each source of messages. The target phrases are marked in [. . .] , and are followed by their polarity; the sentence-level polarity is shown in the last column.", |
|
"cite_spans": [ |
|
{ |
|
"start": 141, |
|
"end": 167, |
|
"text": "(Baccianella et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 810, |
|
"end": 817, |
|
"text": "[. . .]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 708, |
|
"end": 715, |
|
"text": "Table 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 718, |
|
"end": 725, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotation", |
|
"sec_num": "3.2" |
|
}, |
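The 2/3-intersection rule described above can be made concrete with a small sketch. This is an editorial illustration, not the organizers' annotation code: the function name combine_spans and the token-index representation of an annotated span are assumptions.

```python
# Illustrative sketch of the 2/3-intersection aggregation described above
# (not the organizers' code): a token is kept in the marked phrase only if
# at least two of the three Mechanical Turk annotators included it in their span.
from collections import Counter

def combine_spans(annotations, min_votes=2):
    """annotations: one set of token indices per annotator (hypothetical format)."""
    votes = Counter(token for ann in annotations for token in ann)
    return {token for token, count in votes.items() if count >= min_votes}

# Three annotators marked slightly different spans for the same tweet:
print(sorted(combine_spans([{3, 4, 5}, {4, 5}, {4, 5, 6}])))  # -> [4, 5]
```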
|
{ |
|
"text": "We did not deliver the annotated tweets to the participants directly; instead, we released annotation indexes, a list of corresponding Twitter IDs, and a download script that extracts the corresponding tweets via the Twitter API. 2 We provided the tweets in this manner in order to ensure that Twitter's terms of service are not violated. Unfortunately, due to this restriction, the task participants had access to different number of training tweets depending on when they did the downloading. This varied between a minimum of 5,215 tweets and the full set of 10,882 tweets. On average the teams were able to collect close to 9,000 tweets; for teams that did not participate in 2013, this was about 8,500. The difference in training data size did not seem to have had a major impact. In fact, the top two teams in subtask B (coooolll and TeamX) trained on less than 8,500 tweets.", |
|
"cite_spans": [ |
|
{ |
|
"start": 230, |
|
"end": 231, |
|
"text": "2", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tweets Delivery", |
|
"sec_num": "3.3" |
|
}, |
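The ID-based delivery described above means each participant re-downloaded the tweets themselves. The sketch below shows roughly what such a download script might look like; it is not the official script released with the task, and the tweepy library, the placeholder credentials, and the file names tweet_ids.txt / tweets.tsv are all assumptions.

```python
# Hypothetical re-implementation of the ID-based tweet download (not the official
# script): read the released tweet IDs and fetch each tweet's text via the Twitter
# API, here through the third-party tweepy library.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")  # placeholder credentials
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

with open("tweet_ids.txt") as ids_file, open("tweets.tsv", "w") as out:
    for line in ids_file:
        tweet_id, label = line.rstrip("\n").split("\t")  # assumed "id<TAB>label" format
        try:
            status = api.get_status(tweet_id)            # fetch the tweet text by ID
            out.write(f"{tweet_id}\t{label}\t{status.text}\n")
        except Exception:
            pass  # deleted or protected tweets can no longer be downloaded
```

Deleted or protected tweets simply fail to download, which is why different teams ended up with training sets of different sizes, as noted above.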
|
{ |
|
"text": "The participating systems were required to perform a three-way classification for both subtasks. A particular marked phrase (for subtask A) or an entire message (for subtask B) was to be classified as positive, negative or objective/neutral. We scored the systems by computing a score for predicting positive/negative phrases/messages. For instance, to compute positive precision, p pos , we find the number of phrases/messages that a system correctly predicted to be positive, and we divide that number by the total number it predicted to be positive. To compute positive recall, r pos , we find the number of phrases/messages correctly predicted to be positive and we divide that number by the total number of positives in the gold standard. We then calculate F1-score for the positive class as follows F pos = 2(ppos+rpos) ppos * rpos . We carry out a similar computation for F neg , for the negative phrases/messages. The overall score is then F = (F pos + F neg )/2.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring", |
|
"sec_num": "4" |
|
}, |
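A minimal sketch of the metric just described may help make it concrete. It is written for clarity rather than as the official scorer; the function name task_score and the label strings are assumptions.

```python
# Minimal sketch of the task metric (not the official scorer):
# F = (F_pos + F_neg) / 2, where each term is the F1-score of that class.
def task_score(gold, pred):
    """gold, pred: equal-length lists of labels in {"positive", "negative", "neutral"}."""
    def class_f1(cls):
        tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
        n_pred = sum(1 for p in pred if p == cls)   # predicted as cls
        n_gold = sum(1 for g in gold if g == cls)   # instances of cls in the gold standard
        precision = tp / n_pred if n_pred else 0.0
        recall = tp / n_gold if n_gold else 0.0
        return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return (class_f1("positive") + class_f1("negative")) / 2

# Toy example: four messages, two classified correctly.
print(task_score(["positive", "negative", "neutral", "positive"],
                 ["positive", "neutral", "neutral", "negative"]))
```

Note that the neutral class enters only through the precision and recall of the positive and negative classes; it has no F1 term of its own.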
|
{ |
|
"text": "We used the two test sets from 2013 and the three from 2014, which we combined into one test set and we shuffled to make it hard to guess which set a sentence came from. This guaranteed that participants would submit predictions for all five test sets. It also allowed us to test how well systems trained on standard tweets generalize to sarcastic tweets and to LiveJournal sentences, without the participants putting extra efforts into this. The participants were also not informed about the source the extra test sets come from.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We provided the participants with a scorer that outputs the overall score F and a confusion matrix for each of the five test sets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Scoring", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "The results are shown in Tables 4 and 5, and the team affiliations are shown in Table 6 . Tables 4 and 5 contain results on the two progress test sets (tweets and SMS messages), which are the official test sets from the 2013 edition of the task, and on the three new official 2014 testsets (tweets, tweets with sarcasm, and LiveJournal). The tables further show macro-and micro-averaged results over the 2014 datasets. There is an index for each result showing the relative rank of that result within the respective column. The participating systems are ranked by their score on the Twitter-2014 testset, which is the official ranking for the task; all remaining rankings are secondary.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 80, |
|
"end": 87, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Participants and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "As we mentioned above, the participants were not told that the 2013 test sets would be included in the big 2014 test set, so that they do not overtune their systems on them. However, the 2013 test sets were made available for development, but it was explicitly forbidden to use them for training. Still, some participants did not notice this restriction, which resulted in their unusually high scores on Twitter2013-test; we did our best to identify all such cases, and we asked the authors to submit corrected runs. The tables mark such resubmissions accordingly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Participants and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Most of the submissions were constrained, with just a few unconstrained: 7 out of 27 for subtask A, and 8 out of 50 for subtask B. In any case, the best systems were constrained. Some teams participated with both a constrained and an unconstrained system, but the unconstrained system was not always better than the constrained one: sometimes it was worse, sometimes it performed the same. Thus, we decided to produce a single ranking, including both constrained and unconstrained systems, where we mark the latter accordingly. Table 4 shows the results for subtask A, which attracted 27 submissions from 21 teams. There were seven unconstrained submissions: five teams submitted both a constrained and an unconstrained run, and two teams submitted an unconstrained run only. The best systems were constrained. All participating systems outperformed the majority class baseline by a sizable margin.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 528, |
|
"end": 535, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Participants and Results", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "The results for subtask B are shown in Table 5 . The subtask attracted 50 submissions from 44 teams. There were eight unconstrained submissions: six teams submitted both a constrained and an unconstrained run, and two teams submitted an unconstrained run only. As for subtask A, the best systems were constrained. Again, all participating systems outperformed the majority class baseline; however, some systems were very close to it.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 39, |
|
"end": 46, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Subtask B", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Overall, we observed similar trends as in SemEval-2013 Task 2. Almost all systems used supervised learning. Most systems were constrained, including the best ones in all categories. As in 2013, we observed several cases of a team submitting a constrained and an unconstrained run and the constrained run performing better.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "It is unclear why unconstrained systems did not outperform constrained ones. It could be because participants did not use enough external data or because the data they used was too different from Twitter or from our annotation method. Or it could be due to our definition of unconstrained, which labels as unconstrained systems that use additional tweets directly, but considers unconstrained those that use additional tweets to build sentiment lexicons and then use these lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "As in 2013, the most popular classifiers were SVM, MaxEnt, and Naive Bayes. Moreover, two submissions used deep learning, coooolll (Harbin Institute of Technology) and ThinkPositive (IBM Research, Brazil), which were ranked second and tenth on subtask B, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The features used were quite varied, including word-based (e.g., word and character ngrams, word shapes, and lemmata), syntactic, and Twitter-specific such as emoticons and abbreviations. The participants still relied heavily on lexicons of opinion words, the most popular ones being the same as in 2013: MPQA, SentiWord-Net and Bing Liu's opinion lexicon. Popular this year was also the NRC lexicon (Mohammad et al., 2013) , created by the best-performing team in 2013, which is top-performing this year as well.", |
|
"cite_spans": [ |
|
{ |
|
"start": 400, |
|
"end": 423, |
|
"text": "(Mohammad et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Preprocessing of tweets was still a popular technique. In addition to standard NLP steps such as tokenization, stemming, lemmatization, stopword removal and POS tagging, most teams applied some kind of Twitter-specific processing such as substitution/removal of URLs, substitution of emoticons, word normalization, abbreviation lookup, and punctuation removal. Finally, several of the teams used Twitter-tuned NLP tools such as part of speech and named entity taggers (Gimpel et al., 2011; Ritter et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 468, |
|
"end": 489, |
|
"text": "(Gimpel et al., 2011;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 490, |
|
"end": 510, |
|
"text": "Ritter et al., 2011)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "The similarity of preprocessing techniques, NLP tools, classifiers and features used in 2013 and this year is probably partially due to many teams participating in both years. As Table 6 shows, 18 out of the 46 teams are returning teams.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 179, |
|
"end": 186, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Comparing the results on the progress Twitter test in 2013 and 2014, we can see that NRC-Canada, the 2013 winner for subtask A, have now improved their F1 score from 88.93 to 90.14, which is the 2014 best score. The best score on the Progress SMS in 2014 of 89.31 belongs to ECNU; this is a big jump compared to their 2013 score of 76.69, but it is less compared to the 2013 best of 88.37 achieved by GU-MLT-LT. For subtask B, on the Twitter progress testset, the 2013 winner NRC-Canada improves their 2013 result from 69.02 to 70.75, which is the second best in 2014; the winner in 2014, TeamX, achieves 72.12. On the SMS progress test, the 2013 winner NRC-Canada improves its F1 score from 68.46 to 70.28. Overall, we see consistent improvements on the progress testset for both subtasks: 0-1 and 2-3 points absolute for subtasks A and B, respectively. Table 4 : Results for subtask A. The * indicates system resubmissions (because they initially trained on Twitter2013-test), and the indicates a system that includes a task co-organizer as a team member. The systems are sorted by their score on the Twitter2014 test dataset; the rankings on the individual datasets are indicated with a subscript. The last two columns show macro-and micro-averaged results across the three 2014 test datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 855, |
|
"end": 862, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, note that for both subtasks, the best systems on the Twitter-2014 dataset are those that performed best on the 2013 progress Twitter dataset: NRC-Canada for subtask A, and TeamX (Fuji Xerox Co., Ltd.) for subtask B.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "It is interesting to note that the best results for Twitter2014-test are lower than those for Twitter2013-test for both subtask A (86.63 vs. 90.14) and subtask B (70.96 vs 72.12). This is so despite the baselines for Twitter2014-test being higher than those for Twitter2013-test: 42.2 vs. 38.1 for subtask A, and 34.6 vs. 29.2 for subtask B. Most likely, having access to Twitter2013-test at development time, teams have overfitted on it. It could be also the case that some of the sentiment dictionaries that were built in 2013 have become somewhat outdated by 2014.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Finally, note that while some teams such as NRC-Canada performed well across all test sets, other such as TeamX, which used a weighting scheme tuned specifically for class imbalances in tweets, were only strong on Twitter datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Discussion", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We have described the data, the experimental setup and the results for SemEval-2014 Task 9. As in 2013, our task was the most popular one at SemEval-2014, attracting 46 participating teams: 21 in subtask A (27 submissions) and 44 in subtask B (50 submissions).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We introduced three new test sets for 2014: an in-domain Twitter dataset, an out-of-domain Live-Journal test set, and a dataset of tweets containing sarcastic content. While the performance on the LiveJournal test set was mostly comparable to the in-domain Twitter test set, for most teams there was a sharp drop in performance for sarcastic tweets, highlighting better handling of sarcastic language as one important direction for future work in Twitter sentiment analysis.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "We plan to run the task again in 2015 with the inclusion of a new sub-evaluation on detecting sarcasm with the goal of stimulating research in this area; we further plan to add one more test domain. Table 5 : Results for subtask B. The * indicates system resubmissions (because they initially trained on Twitter2013-test), and the indicates a system that includes a task co-organizer as a team member. The systems are sorted by their score on the Twitter2014 test dataset; the rankings on the individual datasets are indicated with a subscript. The last two columns show macro-and micro-averaged results across the three 2014 test datasets.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 206, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "In the 2015 edition of the task, we might also remove the constrained/unconstrained distinction.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Finally, as there are multiple opinions about a topic in Twitter, we would like to focus on detecting the sentiment trend towards a topic.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "https://dev.twitter.com", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We would like to thank Kathleen McKeown and Smaranda Muresan for funding the 2014 Twitter test sets. We also thank the anonymous reviewers.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgements", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining", |
|
"authors": [ |
|
{ |
|
"first": "Stefano", |
|
"middle": [], |
|
"last": "Baccianella", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrea", |
|
"middle": [], |
|
"last": "Esuli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Seventh International Conference on Language Resources and Evaluation, LREC '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stefano Baccianella, Andrea Esuli, and Fabrizio Se- bastiani. 2010. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Proceedings of the Seventh International Conference on Language Resources and Evaluation, LREC '10, Valletta, Malta.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Robust sentiment detection on Twitter from biased and noisy data", |
|
"authors": [ |
|
{ |
|
"first": "Luciano", |
|
"middle": [], |
|
"last": "Barbosa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Junlan", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "36--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Luciano Barbosa and Junlan Feng. 2010. Robust sen- timent detection on Twitter from biased and noisy data. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING '10, pages 36-44, Beijing, China.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Detecting sentiment change in Twitter streaming data", |
|
"authors": [ |
|
{ |
|
"first": "Albert", |
|
"middle": [], |
|
"last": "Bifet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Geoffrey", |
|
"middle": [], |
|
"last": "Holmes", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bernhard", |
|
"middle": [], |
|
"last": "Pfahringer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ricard", |
|
"middle": [], |
|
"last": "Gavald\u00e0", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings Track", |
|
"volume": "17", |
|
"issue": "", |
|
"pages": "5--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Albert Bifet, Geoffrey Holmes, Bernhard Pfahringer, and Ricard Gavald\u00e0. 2011. Detecting sentiment change in Twitter streaming data. Journal of Ma- chine Learning Research, Proceedings Track, 17:5- 11.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Creating a live, public short message service corpus: the NUS SMS corpus. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Tao", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min-Yen", |
|
"middle": [], |
|
"last": "Kan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "", |
|
"volume": "47", |
|
"issue": "", |
|
"pages": "299--335", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Tao Chen and Min-Yen Kan. 2013. Creating a live, public short message service corpus: the NUS SMS corpus. Language Resources and Evaluation, 47(2):299-335.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Semi-supervised recognition of sarcasm in Twitter and Amazon", |
|
"authors": [ |
|
{ |
|
"first": "Dmitry", |
|
"middle": [], |
|
"last": "Davidov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Tsur", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ari", |
|
"middle": [], |
|
"last": "Rappoport", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning, CoNLL '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "107--116", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dmitry Davidov, Oren Tsur, and Ari Rappoport. 2010. Semi-supervised recognition of sarcasm in Twitter and Amazon. In Proceedings of the Fourteenth Con- ference on Computational Natural Language Learn- ing, CoNLL '10, pages 107-116, Uppsala, Sweden.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Part-of-speech tagging for Twitter: Annotation, features, and experiments", |
|
"authors": [ |
|
{ |
|
"first": "Kevin", |
|
"middle": [], |
|
"last": "Gimpel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nathan", |
|
"middle": [], |
|
"last": "Schneider", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dipanjan", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Das", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jacob", |
|
"middle": [], |
|
"last": "Mills", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Eisenstein", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dani", |
|
"middle": [], |
|
"last": "Heilman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [], |
|
"last": "Yogatama", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Flanigan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "42--47", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, ACL-HLT '11, pages 42- 47, Portland, Oregon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Identifying sarcasm in Twitter: a closer look", |
|
"authors": [ |
|
{ |
|
"first": "Roberto", |
|
"middle": [], |
|
"last": "Gonz\u00e1lez-Ib\u00e1\u00f1ez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Smaranda", |
|
"middle": [], |
|
"last": "Muresan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nina", |
|
"middle": [], |
|
"last": "Wacholder", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies -Short Papers, ACL-HLT '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "581--586", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Roberto Gonz\u00e1lez-Ib\u00e1\u00f1ez, Smaranda Muresan, and Nina Wacholder. 2011. Identifying sarcasm in Twit- ter: a closer look. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies -Short Pa- pers, ACL-HLT '11, pages 581-586, Portland, Ore- gon, USA.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Twitter power: Tweets as electronic word of mouth", |
|
"authors": [ |
|
{ |
|
"first": "Bernard", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mimi", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Sobel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Abdur", |
|
"middle": [], |
|
"last": "Chowdury", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "J. Am. Soc. Inf. Sci. Technol", |
|
"volume": "60", |
|
"issue": "11", |
|
"pages": "2169--2188", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bernard Jansen, Mimi Zhang, Kate Sobel, and Abdur Chowdury. 2009. Twitter power: Tweets as elec- tronic word of mouth. J. Am. Soc. Inf. Sci. Technol., 60(11):2169-2188.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Twitter sentiment analysis: The good the bad and the OMG!", |
|
"authors": [ |
|
{ |
|
"first": "Efthymios", |
|
"middle": [], |
|
"last": "Kouloumpis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Johanna", |
|
"middle": [], |
|
"last": "Moore", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Fifth International Conference on Weblogs and Social Media, ICWSM '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Efthymios Kouloumpis, Theresa Wilson, and Johanna Moore. 2011. Twitter sentiment analysis: The good the bad and the OMG! In Proceedings of the Fifth International Conference on Weblogs and Social Media, ICWSM '11, Barcelona, Catalonia, Spain.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "The perfect solution for detecting sarcasm in tweets #not", |
|
"authors": [ |
|
{ |
|
"first": "Christine", |
|
"middle": [], |
|
"last": "Liebrecht", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Florian", |
|
"middle": [], |
|
"last": "Kunneman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Antal", |
|
"middle": [], |
|
"last": "Van Den", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Bosch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--37", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Christine Liebrecht, Florian Kunneman, and Antal Van den Bosch. 2013. The perfect solution for de- tecting sarcasm in tweets #not. In Proceedings of the 4th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 29-37, Atlanta, Georgia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "NRC-Canada: Building the state-ofthe-art in sentiment analysis of tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Svetlana", |
|
"middle": [], |
|
"last": "Kiritchenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodan", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh international workshop on Semantic Evaluation Exercises, SemEval-2013", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "321--327", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRC-Canada: Building the state-of- the-art in sentiment analysis of tweets. In Proceed- ings of the Seventh international workshop on Se- mantic Evaluation Exercises, SemEval-2013, pages 321-327, Atlanta, Georgia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "SemEval-2013 task 2: Sentiment analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zornitsa", |
|
"middle": [], |
|
"last": "Kozareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation, SemEval '13", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "312--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 task 2: Sentiment analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Semantic Evaluation, SemEval '13, pages 312-320, Atlanta, Georgia, USA.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "From tweets to polls: Linking text sentiment to public opinion time series", |
|
"authors": [ |
|
{ |
|
"first": "O'", |
|
"middle": [], |
|
"last": "Brendan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ramnath", |
|
"middle": [], |
|
"last": "Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [], |
|
"last": "Balasubramanyan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noah", |
|
"middle": [], |
|
"last": "Routledge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Smith", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourth International Conference on Weblogs and Social Media, ICWSM '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Brendan O'Connor, Ramnath Balasubramanyan, Bryan Routledge, and Noah Smith. 2010. From tweets to polls: Linking text sentiment to public opinion time series. In Proceedings of the Fourth Inter- national Conference on Weblogs and Social Media, ICWSM '10, Washington, DC, USA.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Twitter based system: Using Twitter for disambiguating sentiment ambiguous adjectives", |
|
"authors": [ |
|
{ |
|
"first": "Alexander", |
|
"middle": [], |
|
"last": "Pak", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrick", |
|
"middle": [], |
|
"last": "Paroubek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 5th International Workshop on Semantic Evaluation, SemEval '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "436--439", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twit- ter based system: Using Twitter for disambiguating sentiment ambiguous adjectives. In Proceedings of the 5th International Workshop on Semantic Evalu- ation, SemEval '10, pages 436-439, Uppsala, Swe- den.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Thumbs up?: Sentiment classification using machine learning techniques", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivakumar", |
|
"middle": [], |
|
"last": "Vaithyanathan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing -Volume 10, EMNLP '02, pages 79-86.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "SemEval-2014 task 4: Aspect based sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maria", |
|
"middle": [], |
|
"last": "Pontiki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Harris", |
|
"middle": [], |
|
"last": "Papageorgiou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 8th International Workshop on Semantic Evaluation, SemEval '14", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maria Pontiki, Harris Papageorgiou, Dimitrios Gala- nis, Ion Androutsopoulos, John Pavlopoulos, and Suresh Manandhar. 2014. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evalu- ation, SemEval '14, Dublin, Ireland.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Named entity recognition in tweets: An experimental study", |
|
"authors": [ |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sam", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mausam", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Oren", |
|
"middle": [], |
|
"last": "Etzioni", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '11", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1524--1534", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An ex- perimental study. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, EMNLP '11, pages 1524-1534, Edinburgh, Scotland, UK.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Predicting elections with Twitter: What 140 characters reveal about political sentiment", |
|
"authors": [ |
|
{ |
|
"first": "Andranik", |
|
"middle": [], |
|
"last": "Tumasjan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Timm", |
|
"middle": [], |
|
"last": "Sprenger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Philipp", |
|
"middle": [], |
|
"last": "Sandner", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Isabell", |
|
"middle": [ |
|
"Welpe" |
|
], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the Fourth International Conference on Weblogs and Social Media, ICWSM '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Andranik Tumasjan, Timm Sprenger, Philipp Sandner, and Isabell Welpe. 2010. Predicting elections with Twitter: What 140 characters reveal about politi- cal sentiment. In Proceedings of the Fourth Inter- national Conference on Weblogs and Social Media, ICWSM '10, Washington, DC, USA.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Annotating expressions of opinions and emotions in language. Language Resources and Evaluation", |
|
"authors": [ |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "39", |
|
"issue": "", |
|
"pages": "165--210", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emo- tions in language. Language Resources and Evalu- ation, 39(2-3):165-210.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"content": "<table><tr><td>3.1 Datasets Used</td><td/><td/><td/></tr><tr><td colspan=\"4\">For training and development, we released the</td></tr><tr><td colspan=\"4\">Twitter train/dev/test datasets from SemEval-2013</td></tr><tr><td colspan=\"4\">task 2, as well as the SMS test set, which uses mes-</td></tr><tr><td colspan=\"4\">sages from the NUS SMS corpus (Chen and Kan,</td></tr><tr><td colspan=\"4\">2013), which we annotated for sentiment in 2013.</td></tr><tr><td colspan=\"4\">We further added a new 2014 Twitter test set,</td></tr><tr><td colspan=\"4\">as well as a small set of tweets that contained</td></tr><tr><td colspan=\"4\">the #sarcasm hashtag to determine how sarcasm</td></tr><tr><td colspan=\"4\">affects the tweet polarity. Finally, we included</td></tr><tr><td colspan=\"4\">sentences from LiveJournal in order to determine</td></tr><tr><td colspan=\"4\">how systems trained on Twitter perform on other</td></tr><tr><td colspan=\"4\">sources. The statistics for each dataset and for</td></tr><tr><td colspan=\"3\">each subtask are shown in Tables 1 and 2.</td><td/></tr><tr><td>Corpus</td><td colspan=\"3\">Positive Negative Objective</td></tr><tr><td/><td/><td colspan=\"2\">/ Neutral</td></tr><tr><td>Twitter2013-train</td><td>3,662</td><td>1,466</td><td>4,600</td></tr><tr><td>Twitter2013-dev</td><td>575</td><td>340</td><td>739</td></tr><tr><td>Twitter2013-test</td><td>1,572</td><td>601</td><td>1,640</td></tr><tr><td>SMS2013-test</td><td>492</td><td>394</td><td>1,207</td></tr><tr><td>Twitter2014-test</td><td>982</td><td>202</td><td>669</td></tr><tr><td>Twitter2014-sarcasm</td><td>33</td><td>40</td><td>13</td></tr><tr><td>LiveJournal2014-test</td><td>427</td><td>304</td><td>411</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Dataset statistics for Subtask A." |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"text": "Source Example Polarity Twitter Why would you [still]-wear shorts when it's this cold?! I [love]+ how Britain see's a bit of sun and they're [like 'OOOH]+ LET'S STRIP!' positive SMS [Sorry]-I think tonight [cannot]-and I [not feeling well]-after my rest. negative LiveJournal [Cool]+ posts , dude ; very [colorful]+ , and [artsy]+ . positive Twitter Sarcasm [Thanks]+ manager for putting me on the schedule for Sunday negative" |
|
} |
|
} |
|
} |
|
} |