|
{ |
|
"paper_id": "S16-1028", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:27:00.194433Z" |
|
}, |
|
"title": "SentiSys at SemEval-2016 Task 4: Feature-Based System for Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Hamdan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper describes our sentiment analysis system which has been built for Sentiment Analysis in Twitter Task of SemEval-2016. We have used a Logistic Regression classifier with different groups of features. This system is an improvement to our previous system Lsislif in Semeval-2015 after removing some features and adding new features extracted from a new automatic constructed sentiment lexicon.", |
|
"pdf_parse": { |
|
"paper_id": "S16-1028", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper describes our sentiment analysis system which has been built for Sentiment Analysis in Twitter Task of SemEval-2016. We have used a Logistic Regression classifier with different groups of features. This system is an improvement to our previous system Lsislif in Semeval-2015 after removing some features and adding new features extracted from a new automatic constructed sentiment lexicon.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sentiment analysis in Twitter is different from document level sentiment analysis. Normally, in document level, each document is classified as positive or negative, the document is long enough to obtain a good representation using only the existing words (bag-of-words). For example, in movie reviews we can get f-score of 85% using bag-of-words representation with SVM classifier while in Twitter it is about 60% according to our experiments in previous SemEval workshops. This lower performance in Twitter domain is not surprising if we know the limitations of such task when applied to Twitter:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The size of a tweet is limited to 140 characters which leads to sparseness where the tweets do not provide enough word co-occurrence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The informal language and non-standard expressions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "\u2022 The numerous spelling errors.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For dealing with the previous limitations, we have decided to extend the bag-of-words representation. Therefore, many group of features have been extracted. Uni-gram, bi-gram and 3-grams of words features to capture the text of tweet and the context. Negation features to handle the negated context. Sentiment lexicons features can help the classification because it contains positive and negative words which can add a useful information about the polarity of a tweet, they also contain a lot of terms which may not appear in the training data which can be very useful. Semantic features as Brown clusters can also give a rich representation which can be useful for reducing the sparsity. For evaluating our system, we have participated in SemEval-2016 competition for sentiment analysis in Twitter (message polarity subtask A) 1 (Nakov et al., 2016) . Our system has been ranked six over 34, this system is derived from our previous system LsisLif which has been ranked third in SemEval-2015.", |
|
"cite_spans": [ |
|
{ |
|
"start": 831, |
|
"end": 851, |
|
"text": "(Nakov et al., 2016)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The rest of this chapter is organized as follows. Section 2 presents the problem formulation. Section 3 gives an overview of our proposed approach. The features we extracted for training the classifier are presented in Section 4. Our experiments are described in Section 5. The related work is presented in Section 6. The conclusion and future work are presented in Section 7.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Let T = t 1 , t 2 , .., t n be a collection of n tweets. Each tweet t i will be represented by a subset of all possible features F = f 1 , f 2 , .., f m that can appear in t i . The features can be single words, bigrams, ngrams, stemmed words or other syntactic or semantic features. If a feature f i exists in a tweet t j , the tweet can be represented as a vector of weighted features t j = (w 1 , w 2 , .., w m ) where w i is the weight of the feature f i in the tweet t j . w i can represent the presence or absence of the feature or the frequency or any other function of the feature frequency in the tweet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
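The weighted-feature-vector formulation above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' code; the toy feature space and the binary presence/absence weighting are assumptions (the paper allows any function of feature frequency).

```python
# Sketch of the tweet-as-weighted-feature-vector formulation.
# Binary presence/absence (0/1) is one possible choice of weight w_i.

def vectorize(tweet_features, feature_space):
    """Map a tweet (its set of features) onto the global feature space F,
    using presence/absence as the weight w_i."""
    return [1 if f in tweet_features else 0 for f in feature_space]

F = ["good", "bad", "movie", "not"]   # toy feature space f_1..f_m
t1 = {"good", "movie"}                # features present in tweet t_1
print(vectorize(t1, F))               # [1, 0, 1, 0]
```

A frequency- or TF-IDF-based weight could replace the 0/1 weight without changing the surrounding formulation.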
|
{ |
|
"text": "Let us have three classes C = c 1 , c 2 , c 3 where c 1 represents the negative class, c 2 the neutral class and c 3 the positive class. Our task is to assign each tweet t j to a class c i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Problem Formulation", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our proposed approach for sentiment polarity classification consists of three steps:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Proposed Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "1. We tokenize each tweet to get the feature space which contains the words, punctuations and emoticons that appear in the tweets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Proposed Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "2. We extend the feature space by extracting some features using different resources (Sentiment lexicons, Twitter dictionary) and some semantic features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Proposed Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "3. We train a supervised classifier to get a trained model in order to predict the sentiment of the new tweets.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Proposed Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "The next section describes the features we have extracted.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Overview of the Proposed Approach", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Before extracting the features, we should tokenize the tweet. Tokenization is a challenging problem for Twitter text. Happytokenizer 2 is the tokenizer which we used. It can capture the words, emoticons and punctuations. For example, for this tweet:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "\"RT @ #happyfuncoding: this is a typical Twitter tweet :-)\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "It returns the following terms:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "{rt, @, #happyfuncoding, :, this, is, a, typical, twitter, tweet, :-)}", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We also replaced each web link by the word url and each user name by uuser. Then, several groups of features have been extracted to improve the bag-ofwords representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Feature Extraction", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Unigram, bigram and 3-gram are extracted for each term in the tweet without any stemming or stopword removing, all terms with occurrence less than 3 are removed from the feature space. Therefore, for this tweet:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word ngrams", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "\"i'am going to chapel hill on sat. :)\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word ngrams", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "The feature vector produced by this group of feature will be: 'going', 'to', 'chapel', 'hill', 'on', 'sat', '.', ':) ', \"i'm going\", 'going to', 'to chapel', 'chapel hill', 'hill on', 'on sat', 'sat .', '. :)', \"i'm going to\", 'going to chapel', 'to chapel hill', 'chapel hill on', 'hill on sat', 'on sat .', 'sat . :)'}.", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 116, |
|
"text": "'going', 'to', 'chapel', 'hill', 'on', 'sat', '.', ':)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Word ngrams", |
|
"sec_num": "4.1" |
|
}, |
|
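The n-gram group above (unigrams, bigrams and trigrams with an occurrence threshold of 3) can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation; the function names are ours.

```python
from collections import Counter

def extract_ngrams(tokens, n_max=3):
    """Unigrams, bigrams and trigrams of a token list, joined by spaces."""
    grams = []
    for n in range(1, n_max + 1):
        grams += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return grams

def build_feature_space(tweets_tokens, min_count=3):
    """Keep only n-grams occurring at least min_count times in the corpus,
    mirroring the occurrence threshold of 3 described in the text."""
    counts = Counter(g for toks in tweets_tokens for g in extract_ngrams(toks))
    return {g for g, c in counts.items() if c >= min_count}

toks = ["i'am", "going", "to", "chapel", "hill", "on", "sat", ".", ":)"]
print(extract_ngrams(toks)[:3])   # the first three unigrams
```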
|
{ |
|
"text": "The rule-based algorithm presented in Christopher Potts' Sentiment Symposium Tutorial 3 is implemented. This algorithm appends a negation suffix to all words that appear within a negation scope which is determined by a negation key and a punctuation or a connector belonging to [\",\", \";\", \".\", \"!\", \"?\", \"but\", \"-\", \"so\"]. All the negated words are added to the feature space. For example, for this tweet: \"I'am not happy\"", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "The feature vector generated by the words n-gram features with negation features is:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation Features", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "{\"i'am\", 'not', 'happy Neg', 'happy', \"i'am not\", 'not happy', \"i'am not happy\"} happy NEG is added by the negation features while the others are the ngrams features. Obviously, we have chosen to add the negated feature to the vector without removing the original feature happy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Negation Features", |
|
"sec_num": "4.2" |
|
}, |
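The negation marking described above can be sketched as follows. This is a rough Python sketch of a Potts-style algorithm, not the authors' code; the negation-key list is a small illustrative subset of the tutorial's full list.

```python
import re

# Illustrative subset of negation keys; the scope-closing tokens follow the
# list given in the text.
NEGATION_KEYS = {"not", "no", "never", "cannot"}
CLAUSE_ENDERS = {",", ";", ".", "!", "?", "but", "-", "so"}

def mark_negation(tokens):
    """Append the suffix _NEG to every token between a negation key and the
    next clause-level punctuation mark or connector."""
    out, in_scope = [], False
    for tok in tokens:
        if tok.lower() in CLAUSE_ENDERS:
            in_scope = False
            out.append(tok)
        elif in_scope:
            out.append(tok + "_NEG")
        else:
            out.append(tok)
            if tok.lower() in NEGATION_KEYS or re.search(r"n't$", tok.lower()):
                in_scope = True
    return out

print(mark_negation(["i'am", "not", "happy"]))
```

As in the text, both the negated form (happy_NEG) and the original token (happy, kept by the n-gram features) end up in the feature vector.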
|
{ |
|
"text": "We constructed a dictionary for the abbreviations and the slang words used in Twitter in order to overcome the ambiguity of these terms which may increase the similarity between two similar tweets written in two different ways. This dictionary maps certain Twitter expressions and emotion icons to their meaning or their corresponding sentiment. It contains about 125 terms collected from different pages on the Web. All terms presented in a tweet and in the Twitter dictionary are mapped to their corresponding terms in the dictionary and added to the feature space. For this tweet: \"i'am going to chapel hill on sat. :)\", the term veryhappy will be added to the tweet vector because the emoticon :) will be replaced by veryhappy as indicated in the dictionary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Twitter Dictionary", |
|
"sec_num": "4.3" |
|
}, |
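The dictionary lookup described above can be sketched as follows. The entries shown are illustrative stand-ins, since the authors' ~125-entry dictionary is not reproduced here.

```python
# Toy fragment of the Twitter dictionary; the real entries are not published
# in the paper, so these mappings are illustrative.
TWITTER_DICT = {":)": "veryhappy", ":(": "verysad", "lol": "laugh", "gr8": "great"}

def dictionary_features(tokens):
    """Add the dictionary translation of every matching token to the vector."""
    return [TWITTER_DICT[t] for t in tokens if t in TWITTER_DICT]

print(dictionary_features(["i'am", "going", "to", "chapel", "hill", ":)"]))
```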
|
{ |
|
"text": "The semantic representation of a text may bring some important hidden information, which may result in a better document representation and a better classification system. Usually, the semantic features can help to overcome the problem of spareness in short text. Externally resources may be important to get such representation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Semantic Features", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "From over 56 million English tweets (837 million tokens), 1000 hierarchical clusters have been constructed over 217 thousand words (Owoputi et al., 2013) . Note that in cluster A1, the term lololol (an extension of lol for \"laughing out loud\") is grouped with a large number of laughter acronyms.", |
|
"cite_spans": [ |
|
{ |
|
"start": 131, |
|
"end": 153, |
|
"text": "(Owoputi et al., 2013)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Brown Dictionary Features", |
|
"sec_num": "4.4.1" |
|
}, |
|
{ |
|
"text": "Each word in the text is mapped to its cluster in Brown dictionary, 1000 features are added to feature space where each feature represents the number of words in the text belonging to each cluster.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Brown Dictionary Features", |
|
"sec_num": "4.4.1" |
|
}, |
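The Brown-cluster count features can be sketched as follows. The word-to-cluster mapping would come from the Owoputi et al. (2013) cluster file; the toy mapping here is an assumption.

```python
from collections import Counter

# Toy word-to-cluster mapping; the real one is read from the published
# 1000-cluster file of Owoputi et al. (2013).
WORD2CLUSTER = {"lol": 17, "lololol": 17, "happy": 4, "sad": 9}
N_CLUSTERS = 1000

def brown_features(tokens):
    """1000 counts: how many tokens of the tweet fall into each cluster."""
    counts = Counter(WORD2CLUSTER[t] for t in tokens if t in WORD2CLUSTER)
    return [counts[c] for c in range(N_CLUSTERS)]

vec = brown_features(["lol", "lololol", "happy"])
print(len(vec), vec[17], vec[4])  # 1000 2 1
```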
|
{ |
|
"text": "The system extracts four features from the manual constructed lexicons and six features from the automatic ones. For each sentence the number of positive words, the number of negative ones, the number of positive words divided by the number of negative ones and the polarity of the last word are extracted from manual constructed lexicons. In addition to the sum of the positive scores and the sum of the negative scores from the automatic constructed lexicons.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Lexicons", |
|
"sec_num": "4.5" |
|
}, |
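The four manual-lexicon features described above can be sketched as follows; the tiny lexicon is an illustrative stand-in for MPQA and the Bing Liu lexicon.

```python
# Toy stand-in for the manually constructed lexicons (MPQA, Bing Liu).
LEXICON = {"happy": "positive", "great": "positive", "sad": "negative", "bad": "negative"}

def manual_lexicon_features(tokens):
    """The four per-sentence features named in the text: positive count,
    negative count, their ratio, and the polarity of the last lexicon word."""
    pos = sum(1 for t in tokens if LEXICON.get(t) == "positive")
    neg = sum(1 for t in tokens if LEXICON.get(t) == "negative")
    ratio = pos / neg if neg else float(pos)
    last = next((LEXICON[t] for t in reversed(tokens) if t in LEXICON), "none")
    return pos, neg, ratio, last

print(manual_lexicon_features(["not", "bad", "great"]))  # (1, 1, 1.0, 'positive')
```

Handling of a zero negative count is our assumption; the paper does not specify it.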
|
{ |
|
"text": "The manual lexicons are: MPQA Subjectivity Lexicon 4 and Bing Liu Lexicon 5 . The automatic ones are: NRC Hashtag Sentiment Lexicon and our lexicon based on natural entropy measure (Hamdan et al., 2015c) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 181, |
|
"end": 203, |
|
"text": "(Hamdan et al., 2015c)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Lexicons", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Thus, this feature group adds 20 features to the tweet vector, some of this features are integer numbers others are floats. The lexicons which we used are the following:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Sentiment Lexicons", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "Two manual constructed lexicons have been exploited:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Manually Constructed Sentiment Lexicons", |
|
"sec_num": "4.5.1" |
|
}, |
|
{ |
|
"text": "Multi-Perspective Question Answering Subjectivity Lexicon is maintained by (Wilson et al., 2005) , a lexicon of over 8,000 subjectivity single-word clues, each clue is classified as positive or negative. This is a fragment illustrating this lexicon structure: ", |
|
"cite_spans": [ |
|
{ |
|
"start": 75, |
|
"end": 96, |
|
"text": "(Wilson et al., 2005)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MPQA Subjectivity Lexicon", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "A list of positive and negative opinion words or sentiment words for English (around 6800 words). This list was compiled over many years starting from this paper (Hu and Liu, 2004a Score is a real number indicates the sentiment score. #positive is the number of times the term co-occurred with a positive marker such as a positive emoticon or a positive hashtag. #negative is the number of times the term cooccurred with a negative marker such as a negative emoticon or a negative hashtag.", |
|
"cite_spans": [ |
|
{ |
|
"start": 162, |
|
"end": 180, |
|
"text": "(Hu and Liu, 2004a", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Bing Liu Lexicon", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "PMI metric has been widely used to compute the semantic orientation of words in order to construct the automatic lexicons. Sentiment140 lexicon is constructed using semantic orientation on Sentiment140 corpus (Go et al., 2009) , a collection of 1.6 million tweets that contain positive and negative emoticons 6 . But this corpus is a balanced corpus, it contains the same number of positive and negative tweets. Therefore, semantic orientation can be rewritten as following:", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 226, |
|
"text": "(Go et al., 2009)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "SO(w) = P M I(w, +) \u2212 P M I(w, \u2212) = log( p(w,+) p(w).p(+) ) \u2212 log( p(w,\u2212) p(w).p(\u2212) )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "(1) As p(+) = p(\u2212) = 0.5 in the balanced corpus:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "So(w) = 1 + log(p(+|w)) \u2212 1 \u2212 log(p(\u2212|w)) = log(a/c) (2)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "where + stands for the positive class, -stands for negative class, a is the number of documents containing the word w in the positive class, c is the number of documents containing the word w in the negative class. Thus, the semantic orientation is positive if a>c else it is negative. We should note that the probability of the classes does not affect the final semantic orientation score, therefore we propose another metric which depends on the distribution of the word over the classes which seems more relevant in the balanced corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "We constructed a lexicon from sentiment140 corpus, we calculated Natural Entropy (ne) score for each term in this manner:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "ne(w) = 1 \u2212 (\u2212(p(+|w).log(p(+|w)) \u2212 p(\u2212|w).log(p(\u2212|w))))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "(3) where p(+|w): The probability of the positive class given the word w. p(\u2212|w): The probability of the negative class given the word w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "The more uneven the distribution of documents where a term occurs, the larger the Natural Entropy of this term is. Thus, the entropy of the term can express the uncertainty of the classes given the term. One minus this degree of uncertainty boosts the terms that unevenly distributed between the two classes (Wu and Gu, 2014) . ne score is always between 0 and 1, and it assigns a high score for the words unevenly distributed over the classes, but it cannot discriminate the positive words from the negative ones. Therefore, we have used the a and c for discriminating the positive words from the negative ones; if a>c then the word is considered positive else it is considered negative.", |
|
"cite_spans": [ |
|
{ |
|
"start": 308, |
|
"end": 325, |
|
"text": "(Wu and Gu, 2014)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
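The two scores above can be computed directly from the per-class document counts a and c. This is an illustrative sketch; the logarithm bases are assumptions, as the paper does not state them.

```python
import math

# a = documents containing w in the positive class,
# c = documents containing w in the negative class.

def so_score(a, c):
    """Semantic orientation in the balanced corpus: SO(w) = log(a/c)."""
    return math.log(a / c)

def ne_score(a, c):
    """Natural-entropy score ne(w) = 1 - H(class | w): high for words
    unevenly distributed over the classes, 0 for an even split."""
    p_pos, p_neg = a / (a + c), c / (a + c)
    h = -sum(p * math.log2(p) for p in (p_pos, p_neg) if p > 0)
    return 1.0 - h

print(ne_score(5, 5))                   # evenly distributed word -> 0.0
print(ne_score(9, 1) > ne_score(6, 4))  # more uneven -> higher score: True
```

The sign of the lexicon entry is then taken from a vs. c, as in the text.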
|
{ |
|
"text": "Using this lexicon instead of sentiment140 can improve the performance of a state-of-the-art sentiment classifier as shown in (Hamdan et al., 2015c) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 148, |
|
"text": "(Hamdan et al., 2015c)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Our Sentiment Lexicon", |
|
"sec_num": "4.6" |
|
}, |
|
{ |
|
"text": "Twitter datasets have been provided by SemEval organizers since 2013 for message polarity classification subtask of sentiment analysis in Twitter (Nakov et al., 2013) . The participants have been provided with training tweets annotated positive, negative or neutral. In addition to a script for downloading the tweets. After executing the given script, we got the whole training dataset which consists of 9684 tweets. The organizers have also provided a development set containing 1654 tweets for tuning a machine learner. ", |
|
"cite_spans": [ |
|
{ |
|
"start": 146, |
|
"end": 166, |
|
"text": "(Nakov et al., 2013)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Twitter Dataset", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We trained the L1-regularized logistic regression classifier implemented in LIBLINEAR (Fan et al., 2008) , we had also tested L2 regularization technique but it gives less performance than L1. The classifier is trained on the training dataset using the features in the previous section with the three polarities (positive, negative, and neutral) as labels. A weighting schema is adapted for each class, we use the weighting option \u2212w i which enables a use of different cost parameter C for different classes.", |
|
"cite_spans": [ |
|
{ |
|
"start": 76, |
|
"end": 104, |
|
"text": "LIBLINEAR (Fan et al., 2008)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Since the training data is unbalanced, this weighting schema adjusts the probability of each label. Thus, we tuned the classifier in adjusting the cost parameter C of logistic regression, weight w pos of positive class and weight w neg of negative class. We used the development set for tuning the three parameters, all combinations of C in range [0.1 .. 4] by step of 0.1, w pos in range [1 .. 8] by step of 0.1, w neg in range [1 .. 8] by step of 0.1 are tested. The combination C=0.3, w pos =7.6, w neg =5.2 have given the best F1score for the development set and therefore it was selected for our experiments on test set 2016.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment Setup", |
|
"sec_num": "5.2" |
|
}, |
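The tuning loop described above can be sketched with scikit-learn, whose 'liblinear' solver wraps LIBLINEAR. The random data and the coarse grid below are stand-ins for the real tweet feature vectors and the fine-grained sweep.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

rng = np.random.RandomState(0)
X_train, y_train = rng.rand(60, 5), rng.randint(0, 3, 60)  # 0=neg, 1=neut, 2=pos
X_dev, y_dev = rng.rand(30, 5), rng.randint(0, 3, 30)

best = (None, -1.0)
for C in (0.1, 0.3, 1.0):              # coarse grid for the sketch;
    for w_pos in (1.0, 4.0, 7.6):      # the paper sweeps finer steps
        for w_neg in (1.0, 5.2):
            clf = LogisticRegression(
                penalty="l1", solver="liblinear", C=C,
                class_weight={0: w_neg, 1: 1.0, 2: w_pos},
            ).fit(X_train, y_train)
            # averaged F1 of the positive and negative classes only,
            # as in the task's evaluation measure
            f1 = f1_score(y_dev, clf.predict(X_dev), labels=[0, 2], average="macro")
            if f1 > best[1]:
                best = ((C, w_pos, w_neg), f1)

print(best[0])  # best (C, w_pos, w_neg) on the dev set
```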
|
{ |
|
"text": "The evaluation score used by the task organizers was the averaged F1-score of the positive and negative classes. In the SemEval-2016 competition, our submission is ranked six (59.8%) over 34 submissions while it was ranked third in SemEval-2015. Table 4 shows the results of our experiments after removing a feature group at each run for the four test set 2016.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 246, |
|
"end": 253, |
|
"text": "Table 4", |
|
"ref_id": "TABREF8" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "Test-2016 All features 59.8 all-lexicons 56.9 all-ngram 58.1 all-brown 58.4 The results show that the sentiment lexicons features are the most important ones which conforms with the conclusion in different studies (Hamdan et al., 2015c; Mohammad et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 214, |
|
"end": 236, |
|
"text": "(Hamdan et al., 2015c;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 237, |
|
"end": 259, |
|
"text": "Mohammad et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Run", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "There are two principally different approaches to opinion mining: lexicon-based and supervised. The lexicon-based approach goes from the word level in order to constitute the polarity of the text. This approach depends on a sentiment lexicon to get the word polarity score. While the supervised approach goes from the text level and learn a model which assigns a polarity score to the whole text, this approach needs a labeled corpus to learn the model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Lexicon-based approaches decide the polarity of a document based on sentiment lexicons. The sentiment of a text is a function of the common words between the text and the sentiment lexicons. Much of the first lexicon-based research has focused on using adjectives as indicators of the seman-tic orientation of text (Hatzivassiloglou and McKeown, 1997; Hu and Liu, 2004b) . (Taboada et al., 2011) proposed another function called SO-CAL (Semantic Orientation CALculator) which uses dictionaries of words annotated with their semantic orientation (polarity and strength), and incorporates intensification and negation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 315, |
|
"end": 351, |
|
"text": "(Hatzivassiloglou and McKeown, 1997;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 352, |
|
"end": 370, |
|
"text": "Hu and Liu, 2004b)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 373, |
|
"end": 395, |
|
"text": "(Taboada et al., 2011)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon-Based Approach", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "Thus, the sentiment lexicon is the most important part of this approach. Three different ways can be used to construct such lexicons: Manual Approach, Dictionary-Based Approach and Corpus-Based Approach.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Lexicon-Based Approach", |
|
"sec_num": "6.1" |
|
}, |
|
{ |
|
"text": "The supervised approach is a machine learning approach. Sentiment classification can be seen as a text classification problem (Pang et al., 2002; Liu, 2012) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 145, |
|
"text": "(Pang et al., 2002;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 146, |
|
"end": 156, |
|
"text": "Liu, 2012)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The research papers in sentiment classification have mainly focused on the two steps: document representation and classification methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "While some papers have extended the bag-ofword representation by adding different types of features (Pang et al., 2002; Mohammad et al., 2013; Hamdan et al., 2013; Hamdan et al., 2015c) , others have proposed different weighting schemas to weight the features such as PMI, Information Gain and chi-square \u03c7 2 (Martineau and Finin, 2009; Paltoglou and Thelwall, 2010; Deng et al., 2014) . Recently, after the success of deep learning techniques in many classification systems, several studies have learned the features instead of extracting them (Socher et al., 2013; Severyn and Moschitti, 2015) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 100, |
|
"end": 119, |
|
"text": "(Pang et al., 2002;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 120, |
|
"end": 142, |
|
"text": "Mohammad et al., 2013;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 163, |
|
"text": "Hamdan et al., 2013;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 164, |
|
"end": 185, |
|
"text": "Hamdan et al., 2015c)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 309, |
|
"end": 336, |
|
"text": "(Martineau and Finin, 2009;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 366, |
|
"text": "Paltoglou and Thelwall, 2010;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 367, |
|
"end": 385, |
|
"text": "Deng et al., 2014)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 545, |
|
"end": 566, |
|
"text": "(Socher et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 567, |
|
"end": 595, |
|
"text": "Severyn and Moschitti, 2015)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "The work of (Pang et al., 2002) was the first to apply this approach to classify the movie reviews into two classes positive or negative. They tested several classifiers (Naive Bayes, SVM, Maximum entropy) with several features.", |
|
"cite_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 31, |
|
"text": "(Pang et al., 2002)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Later on, many studies have proposed different features and some feature selection methods to choose the best feature set. Many features have been exploited :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Terms and their weights: The features are the unigrams or n-grams with the associated frequency or weight given by a weighting schema like TF-IDF or PMI.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Part of Speech (POS): The words can indicate different sentiment according to their parts of speech (POS). Some papers treated the adjectives as special features.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Sentiment Lexicons: The words and expressions which express an opinion have been used to add additional features as the number of positive and negative terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Sentiment Shifters: The terms that are used to change the sentiment orientation, from positive to negative or vice versa such as not and never. Taking into account these features can improve the sentiment classification.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "\u2022 Semantic Features: The named entities, concepts and topics have been extracted to get the semantic of the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "Many systems which have worked on feature extraction have achieved a state-of-the-art performance in many competitions like SemEval 7 . For example, (Mohammad et al., 2013) used SVM model with several types of features including terms, POS and sentiment lexicons in Twitter data set. (Hamdan et al., 2015a; Hamdan et al., 2015c; Hamdan et al., 2015b) have also proved the importance of feature extraction with logistic regression classifier in Twitter and reviews of restaurants and laptops. They extracted terms, sentiment lexicon and some semantic features like topics. And (Hamdan et al., 2013) has proposed to extract the concepts from DBPedia. Recently, some research papers have applied deep learning techniques to sentiment classification. (Socher et al., 2013) proposed to use recursive neural network to capture the compositionality in the phrases, (Tang et al., 2014) combined the handcrafted features with learned features. They used neural network for learning sentiment-specific word embedding, then they combined hand-crafted features with these word embedding to produce a stateof-the-art system in sentiment analysis in Twitter. (Kim, 2014) proposed a simple convolutional neural network with one layer of convolution which performs remarkably well. Their results add to the wellestablished evidence that unsupervised pre-training of word vectors is an important ingredient in deep learning for Natural language processing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 172, |
|
"text": "(Mohammad et al., 2013)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 284, |
|
"end": 306, |
|
"text": "(Hamdan et al., 2015a;", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 307, |
|
"end": 328, |
|
"text": "Hamdan et al., 2015c;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 329, |
|
"end": 350, |
|
"text": "Hamdan et al., 2015b)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 576, |
|
"end": 597, |
|
"text": "(Hamdan et al., 2013)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 747, |
|
"end": 768, |
|
"text": "(Socher et al., 2013)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 858, |
|
"end": 877, |
|
"text": "(Tang et al., 2014)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 1145, |
|
"end": 1156, |
|
"text": "(Kim, 2014)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Supervised Approach", |
|
"sec_num": "6.2" |
|
}, |
|
{ |
|
"text": "In this paper, we tested the impact of combining several groups of features on the sentiment classification of tweets. A logistic regression classifier with weighting schema was used, the sentiment lexiconbased features seem to get the most influential effect with the combination. As the sentiment lexicons features seem to be so important in sentiment classification, we think that it is important to orient our future work on this direction. Improving the automatic construction of sentiment lexicons may lead to an important improvement on sentiment classification. For example, taking the context in the consideration may help such process. Another important direction is using deep learning techniques which have recently proved their performance in several studies. Thus, we can learn the features instead of extracting them.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "http://alt.qcri.org/semeval2016/task4/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://sentiment.christopherpotts.net/tokenizing.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://sentiment.christopherpotts.net/lingstruc.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://mpqa.cs.pitt.edu/lexicons/subj lexicon/ 5 http://www.cs.uic.edu/ liub/FBS/sentimentanalysis.html#lexicon", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://help.sentiment140.com/for-students", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://www.cs.york.ac.uk/semeval-2013/task2.html", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A study of supervised term weighting scheme for sentiment analysis", |
|
"authors": [ |
|
{ |
|
"first": "Zhi-Hong", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kun-Hu", |
|
"middle": [], |
|
"last": "Luo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hong-Liang", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Expert Systems with Applications", |
|
"volume": "41", |
|
"issue": "7", |
|
"pages": "3506--3513", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi-Hong Deng, Kun-Hu Luo, and Hong-Liang Yu. 2014. A study of supervised term weighting scheme for sentiment analysis. Expert Systems with Applica- tions, 41(7):3506 -3513.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "LIBLINEAR: A Library for Large Linear Classification", |
|
"authors": [ |
|
{

"first": "Rong-En",

"middle": [],

"last": "Fan",

"suffix": ""

},

{

"first": "Kai-Wei",

"middle": [],

"last": "Chang",

"suffix": ""

},

{

"first": "Cho-Jui",

"middle": [],

"last": "Hsieh",

"suffix": ""

},

{

"first": "Xiang-Rui",

"middle": [],

"last": "Wang",

"suffix": ""

},

{

"first": "Chih-Jen",

"middle": [],

"last": "Lin",

"suffix": ""

}
|
], |
|
"year": 2008, |
|
"venue": "Journal of Machine Learning Research", |
|
"volume": "9", |
|
"issue": "", |
|
"pages": "1871--1874", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A Li- brary for Large Linear Classification. Journal of Ma- chine Learning Research, 9:1871-1874.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Twitter Sentiment Classification using Distant Supervision. Processing", |
|
"authors": [ |
|
{ |
|
"first": "Alec", |
|
"middle": [], |
|
"last": "Go", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richa", |
|
"middle": [], |
|
"last": "Bhayani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lei", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--6", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alec Go, Richa Bhayani, and Lei Huang. 2009. Twit- ter Sentiment Classification using Distant Supervision. Processing, pages 1-6.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Experiments with DBpedia, WordNet and SentiWordNet as resources for sentiment analysis in micro-blogging", |
|
"authors": [ |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Hamdan", |
|
"suffix": "" |
|
}, |
|
{

"first": "Fr\u00e9d\u00e9ric",

"middle": [],

"last": "Bechet",

"suffix": ""

},

{

"first": "Patrice",

"middle": [],

"last": "Bellot",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "International Workshop on Semantic Evaluation SemEval-2013 (NAACL Workshop)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussam Hamdan, Fr\u00c3 c d\u00c3 c ric Bechet, and Patrice Bellot. 2013. Experiments with DBpedia, WordNet and SentiWordNet as resources for sentiment analysis in micro-blogging. In International Workshop on Se- mantic Evaluation SemEval-2013 (NAACL Workshop), Atlanta, Georgia (USA), April.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Lsislif: CRF and Logistic Regression for Opinion Target Extraction and Sentiment Polarity Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Hamdan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrice", |
|
"middle": [], |
|
"last": "Bellot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederic", |
|
"middle": [], |
|
"last": "Bechet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "753--758", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussam Hamdan, Patrice Bellot, and Frederic Bechet. 2015a. Lsislif: CRF and Logistic Regression for Opin- ion Target Extraction and Sentiment Polarity Analy- sis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 753- 758, Denver, Colorado, June. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Lsislif: Feature Extraction and Label Weighting for Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Hamdan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrice", |
|
"middle": [], |
|
"last": "Bellot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederic", |
|
"middle": [], |
|
"last": "Bechet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "568--573", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussam Hamdan, Patrice Bellot, and Frederic Bechet. 2015b. Lsislif: Feature Extraction and Label Weight- ing for Sentiment Analysis in Twitter. In Proceedings of the 9th International Workshop on Semantic Eval- uation (SemEval 2015), pages 568-573, Denver, Col- orado, June. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Sentiment Lexicon-Based Features for Sentiment Analysis in Short Text", |
|
"authors": [ |
|
{ |
|
"first": "Hussam", |
|
"middle": [], |
|
"last": "Hamdan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Patrice", |
|
"middle": [], |
|
"last": "Bellot", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frederic", |
|
"middle": [], |
|
"last": "Bechet", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceeding of the 16th International Conference on Intelligent Text Processing and Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hussam Hamdan, Patrice Bellot, and Frederic Bechet. 2015c. Sentiment Lexicon-Based Features for Sen- timent Analysis in Short Text. In Proceeding of the 16th International Conference on Intelligent Text Pro- cessing and Computational Linguistics, Cairo, Egypt.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Predicting the Semantic Orientation of Adjectives", |
|
"authors": [ |
|
{ |
|
"first": "Vasileios", |
|
"middle": [], |
|
"last": "Hatzivassiloglou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kathleen", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Mckeown", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "Proceedings of the Eighth Conference on European Chapter of the Association for Computational Linguistics, EACL '97", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "174--181", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1997. Predicting the Semantic Orientation of Adjec- tives. In Proceedings of the Eighth Conference on Eu- ropean Chapter of the Association for Computational Linguistics, EACL '97, pages 174-181, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Mining and Summarizing Customer Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Minqing", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minqing Hu and Bing Liu. 2004a. Mining and Summa- rizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Mining and Summarizing Customer Reviews", |
|
"authors": [ |
|
{ |
|
"first": "Minqing", |
|
"middle": [], |
|
"last": "Hu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '04", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "168--177", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Minqing Hu and Bing Liu. 2004b. Mining and Summa- rizing Customer Reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, KDD '04, pages 168-177, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Convolutional Neural Networks for Sentence Classification", |
|
"authors": [ |
|
{ |
|
"first": "Yoon", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. CoRR, abs/1408.5882.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Sentiment Analysis and Opinion Mining", |
|
"authors": [ |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Synthesis Lectures on Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bing Liu. 2012. Sentiment Analysis and Opinion Min- ing. Synthesis Lectures on Human Language Tech- nologies. Morgan & Claypool Publishers.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Delta TFIDF: An Improved Feature Space for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Justin", |
|
"middle": [], |
|
"last": "Martineau", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tim", |
|
"middle": [], |
|
"last": "Finin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "ICWSM", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Justin Martineau and Tim Finin. 2009. Delta TFIDF: An Improved Feature Space for Sentiment Analysis. In ICWSM.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "NRCCanada: Building the State-of-the-Art in Sentiment Analysis of Tweets", |
|
"authors": [ |
|
{

"first": "Saif",

"middle": [

"M"

],

"last": "Mohammad",

"suffix": ""

},

{

"first": "Svetlana",

"middle": [],

"last": "Kiritchenko",

"suffix": ""

},

{

"first": "Xiaodan",

"middle": [],

"last": "Zhu",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the International Workshop on Semantic Evaluation, SemEval '13", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif M. Mohammad, Svetlana Kiritchenko, and Xiaodan Zhu. 2013. NRCCanada: Building the State-of-the- Art in Sentiment Analysis of Tweets. In Proceedings of the International Workshop on Semantic Evalua- tion, SemEval '13.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "#Emotional Tweets", |
|
"authors": [ |
|
{ |
|
"first": "Saif", |
|
"middle": [], |
|
"last": "Mohammad", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "246--255", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Saif Mohammad. 2012. #Emotional Tweets. In *SEM 2012: The First Joint Conference on Lexical and Com- putational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 246-255, Montr\u00e9al, Canada, June. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "SemEval-2013 Task 2: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Zornitsa", |
|
"middle": [], |
|
"last": "Kozareva", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the Seventh International Workshop on Semantic Evaluation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "312--320", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Sara Rosenthal, Zornitsa Kozareva, Veselin Stoyanov, Alan Ritter, and Theresa Wilson. 2013. SemEval-2013 Task 2: Sentiment Analysis in Twitter. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Se- mantic Evaluation (SemEval 2013), pages 312-320, Atlanta, Georgia, USA, June. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "SemEval-2016 Task 4: Sentiment Analysis in Twitter", |
|
"authors": [ |
|
{ |
|
"first": "Preslav", |
|
"middle": [], |
|
"last": "Nakov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alan", |
|
"middle": [], |
|
"last": "Ritter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sara", |
|
"middle": [], |
|
"last": "Rosenthal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fabrizio", |
|
"middle": [], |
|
"last": "Sebastiani", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval '2016. Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Preslav Nakov, Alan Ritter, Sara Rosenthal, Fabrizio Se- bastiani, and Veselin Stoyanov. 2016. SemEval-2016 Task 4: Sentiment Analysis in Twitter. In Proceed- ings of the 10th International Workshop on Semantic Evaluation, SemEval '2016. Association for Compu- tational Linguistics, June.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Improved part-of-speech tagging for online conversational text with word clusters", |
|
"authors": [ |
|
{ |
|
"first": "Olutobi", |
|
"middle": [], |
|
"last": "Owoputi", |
|
"suffix": "" |
|
}, |
|
{

"first": "Brendan",

"middle": [],

"last": "O'Connor",

"suffix": ""

},

{

"first": "Chris",

"middle": [],

"last": "Dyer",

"suffix": ""

},

{

"first": "Kevin",

"middle": [],

"last": "Gimpel",

"suffix": ""

},

{

"first": "Nathan",

"middle": [],

"last": "Schneider",

"suffix": ""

},

{

"first": "Noah",

"middle": [

"A"

],

"last": "Smith",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "380--390", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conver- sational text with word clusters. In Proceedings of NAACL-HLT, pages 380-390.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "A Study of Information Retrieval Weighting Schemes for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Georgios", |
|
"middle": [], |
|
"last": "Paltoglou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mike", |
|
"middle": [], |
|
"last": "Thelwall", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1386--1395", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Georgios Paltoglou and Mike Thelwall. 2010. A Study of Information Retrieval Weighting Schemes for Sen- timent Analysis. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguis- tics, ACL '10, pages 1386-1395, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Thumbs Up?: Sentiment Classification Using Machine Learning Techniques", |
|
"authors": [ |
|
{ |
|
"first": "Bo", |
|
"middle": [], |
|
"last": "Pang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lillian", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shivakumar", |
|
"middle": [], |
|
"last": "Vaithyanathan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "10", |
|
"issue": "", |
|
"pages": "79--86", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs Up?: Sentiment Classification Using Machine Learning Techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing -Volume 10, EMNLP '02, pages 79-86, Stroudsburg, PA, USA. Association for Com- putational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "UNITN: Training Deep Convolutional Neural Network for Twitter Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Aliaksei", |
|
"middle": [], |
|
"last": "Severyn", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alessandro", |
|
"middle": [], |
|
"last": "Moschitti", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "Proceedings of the 9th International Workshop on Semantic Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "464--469", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Aliaksei Severyn and Alessandro Moschitti. 2015. UNITN: Training Deep Convolutional Neural Net- work for Twitter Sentiment Classification. In Proceed- ings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 464-469, Denver, Colorado, June. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank", |
|
"authors": [ |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Socher", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Perelygin", |
|
"suffix": "" |
|
}, |
|
{

"first": "Jean",

"middle": [

"Y"

],

"last": "Wu",

"suffix": ""

},

{

"first": "Jason",

"middle": [],

"last": "Chuang",

"suffix": ""

},

{

"first": "Christopher",

"middle": [

"D"

],

"last": "Manning",

"suffix": ""

},

{

"first": "Andrew",

"middle": [

"Y"

],

"last": "Ng",

"suffix": ""

},

{

"first": "Christopher",

"middle": [],

"last": "Potts",

"suffix": ""

}
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts Potts. 2013. Recursive Deep Mod- els for Semantic Compositionality Over a Sentiment Treebank. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Lexicon-based Methods for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Maite", |
|
"middle": [], |
|
"last": "Taboada", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Brooke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Milan", |
|
"middle": [], |
|
"last": "Tofiloski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kimberly", |
|
"middle": [], |
|
"last": "Voll", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Manfred", |
|
"middle": [], |
|
"last": "Stede", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Comput. Linguist", |
|
"volume": "37", |
|
"issue": "2", |
|
"pages": "267--307", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Maite Taboada, Julian Brooke, Milan Tofiloski, Kim- berly Voll, and Manfred Stede. 2011. Lexicon-based Methods for Sentiment Analysis. Comput. Linguist., 37(2):267-307, June.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification", |
|
"authors": [ |
|
{ |
|
"first": "Duyu", |
|
"middle": [], |
|
"last": "Tang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Furu", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nan", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ming", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bing", |
|
"middle": [], |
|
"last": "Qin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "1555--1565", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1555-1565, Baltimore, Mary- land, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "OpinionFinder: A System for Subjectivity Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Theresa", |
|
"middle": [], |
|
"last": "Wilson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Paul", |
|
"middle": [], |
|
"last": "Hoffmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Swapna", |
|
"middle": [], |
|
"last": "Somasundaran", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jason", |
|
"middle": [], |
|
"last": "Kessler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Janyce", |
|
"middle": [], |
|
"last": "Wiebe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Claire", |
|
"middle": [], |
|
"last": "Cardie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ellen", |
|
"middle": [], |
|
"last": "Riloff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Siddharth", |
|
"middle": [], |
|
"last": "Patwardhan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of HLT/EMNLP on Interactive Demonstrations, HLT-Demo '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "34--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Theresa Wilson, Paul Hoffmann, Swapna Somasun- daran, Jason Kessler, Janyce Wiebe, Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. OpinionFinder: A System for Subjectivity Analysis. In Proceedings of HLT/EMNLP on Interactive Demon- strations, HLT-Demo '05, pages 34-35, Stroudsburg, PA, USA. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Reducing Over-Weighting in Supervised Term Weighting for Sentiment Analysis", |
|
"authors": [ |
|
{ |
|
"first": "Haibing", |
|
"middle": [], |
|
"last": "Wu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaodong", |
|
"middle": [], |
|
"last": "Gu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1322--1330", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haibing Wu and Xiaodong Gu. 2014. Reducing Over- Weighting in Supervised Term Weighting for Senti- ment Analysis. In COLING 2014, 25th International Conference on Computational Linguistics, Proceed- ings of the Conference: Technical Papers, August 23- 29, 2014, Dublin, Ireland, pages 1322-1330.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Twitter expression Meaning</td></tr><tr><td>:)</td><td>veryhappy</td></tr><tr><td>: )</td><td>veryhappy</td></tr><tr><td>b/c</td><td>Because</td></tr><tr><td>FWIW</td><td>For what it's worth</td></tr><tr><td>Gr8</td><td>Great</td></tr><tr><td>IMHO</td><td>In my honest opinion or in my humble opinion</td></tr><tr><td>J/K</td><td>Just kidding</td></tr><tr><td>LOL</td><td>Laughing out loud funny</td></tr><tr><td>OMG</td><td>Oh my God</td></tr><tr><td>PLZ</td><td>Please</td></tr><tr><td>ROFL</td><td>Rolling on the floor laughing</td></tr><tr><td>RTHX</td><td>Thanks for the retweet</td></tr><tr><td>hahaha</td><td>laughing funny</td></tr><tr><td>wow</td><td>amazing surprised</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows a part of the dictionary." |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF2": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Cluster Top words (by frequency)</td></tr><tr><td>A1</td><td>lmao lmfao lmaoo lmaooo hahahahaha lool ctfu rofl loool lmfaoo lmfaooo lmaoooo lmbo</td></tr><tr><td/><td>lololol</td></tr><tr><td>A2</td><td>haha hahaha hehe hahahaha hahah aha hehehe ahaha hah hahahah kk hahaa ahah</td></tr><tr><td>A3</td><td>yes yep yup nope yess yesss yessss ofcourse yeap likewise yepp yesh yw yuup yus</td></tr><tr><td>A4</td><td>yeah yea nah naw yeahh nooo yeh noo noooo yeaa ikr nvm yeahhh nahh nooooo</td></tr><tr><td>A5</td><td>smh jk #fail #random #fact smfh #smh #winning #realtalk smdh #dead #justsaying</td></tr></table>", |
|
"type_str": "table", |
|
"text": "shows an example of five clusters." |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Example Twitter word clusters: we list the most probable words, starting with the most probable, in descending order." |
|
}, |
|
"TABREF6": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td/><td/><td colspan=\"3\">shows the distribution of each</td></tr><tr><td colspan=\"2\">label in each dataset.</td><td/><td/><td/></tr><tr><td>Data</td><td>All</td><td colspan=\"3\">Positive Negative Neutral</td></tr><tr><td>train</td><td>9684</td><td>3640</td><td>1458</td><td>4586</td></tr><tr><td>dev</td><td>1654</td><td>739</td><td>340</td><td>575</td></tr><tr><td colspan=\"2\">test-2016 -</td><td>-</td><td>-</td><td>-</td></tr></table>", |
|
"type_str": "table", |
|
"text": "" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "Sentiment labels distribution in the training, testing and development datasets in Twitter." |
|
}, |
|
"TABREF8": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"text": "The F1 score for each run, All features run exploits all features while the others remove a feature group at each run lexicons, n-gram and brown cluster, respectively." |
|
} |
|
} |
|
} |
|
} |