|
{ |
|
"paper_id": "C10-1047", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T12:55:22.344381Z" |
|
}, |
|
"title": "Finding the Storyteller: Automatic Spoiler Tagging using Linguistic Cues", |
|
"authors": [ |
|
{ |
|
"first": "Sheng", |
|
"middle": [], |
|
"last": "Guo", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
|
{ |
|
"first": "Naren", |
|
"middle": [], |
|
"last": "Ramakrishnan", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Given a movie comment, does it contain a spoiler? A spoiler is a comment that, when disclosed, would ruin a surprise or reveal an important plot detail. We study automatic methods to detect comments and reviews that contain spoilers and apply them to reviews from the IMDB (Internet Movie Database) website. We develop topic models, based on Latent Dirichlet Allocation (LDA), but using linguistic dependency information in place of simple features from bag of words (BOW) representations. Experimental results demonstrate the effectiveness of our technique over four movie-comment datasets of different scales.", |
|
"pdf_parse": { |
|
"paper_id": "C10-1047", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Given a movie comment, does it contain a spoiler? A spoiler is a comment that, when disclosed, would ruin a surprise or reveal an important plot detail. We study automatic methods to detect comments and reviews that contain spoilers and apply them to reviews from the IMDB (Internet Movie Database) website. We develop topic models, based on Latent Dirichlet Allocation (LDA), but using linguistic dependency information in place of simple features from bag of words (BOW) representations. Experimental results demonstrate the effectiveness of our technique over four movie-comment datasets of different scales.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In everyday parlance, the notion of 'spoilers' refers to information, such as a movie plot, whose advance revelation destroys the enjoyment of the consumer. For instance, consider the movie Derailed which features Clive Owen and Jennifer Aniston. In the script, Owen is married and meets Aniston on a train during his daily commute to work. The two of them begin an affair. The adultery is noticed by some inscrupulous people who proceed to blackmail Owen and Aniston. To experience a spoiler, consider this comment from imdb.com:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "I can understand why Aniston wanted to do this role, since she gets to play majorly against type (as the supposedly 'nice' girl who's really -oh no! -part of the scam), but I'm at a loss to figure out what Clive Owen is doing in this sub-par, unoriginal, ugly and overly violent excuse for a thriller.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "i.e., we learn that Aniston's character is actually a not-so-nice person who woos married men for later blackmail, and thus a very suspenseful piece of information is revealed. Automatic ways to detect spoilers are crucial in large sites that host reviews and opinions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Arguably, what constitutes a spoiler is inherently a subjective assessment and, for movies/books with intricate storylines, some comments are likely to contain more spoilers than others. We therefore cast the spoiler detection problem as a ranking problem so that comments that are more likely to be spoilers are to be ranked higher than others. In particular, we rank user comments w.r.t. (i.e., given) the movie's synopsis which, according to imdb, is '[a detailed description of the movie, including spoilers, so that users who haven't seen a movie can read anything about the title]'.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our contributions are three fold. (i) We formulate the novel task of spoiler detection in reviews and cast it as ranking user comments against a synopsis. We demonstrate how simple bagof-words (BOW) representations need to be augmented with linguistic cues in order to satisfactorily detect spoilers. (ii) We showcase the ability of dependency parses to extract discriminatory linguistic cues that can distinguish spoilers from non-spoilers. We utilize an LDA-based model (Wei and Croft, 2006) to probabilistically rank spoilers. Our approach does not require manual tagging of positive and negative examples -an advantage that is crucial to large scale implementation. (iii) We conduct a detailed experimental evaluation with imdb to assess the effectiveness of our framework. Using manually tagged com-ments for four diverse movies and suitably configured design choices, we evaluate a total of 12 ranking strategies.", |
|
"cite_spans": [ |
|
{ |
|
"start": 472, |
|
"end": 493, |
|
"text": "(Wei and Croft, 2006)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Probabilistic topic modeling has attracted significant attention with techniques such as probabilistic latent semantic analysis (PLSA) (Hofmann, 1999) and LDA (Blei et al., 2003; Griffiths and Steyvers, 2004; Heinrich, 2008; Steyvers and Griffiths, 2007) . We discuss LDA in detail due to its centrality to our proposed techniques. As a generative model, LDA describes how text could be generated from a latent set of variables denoting topics. Each document is modeled as a mixture of topics, and topics are modeled as multinomial distributions on words.", |
|
"cite_spans": [ |
|
{ |
|
"start": 135, |
|
"end": 150, |
|
"text": "(Hofmann, 1999)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 159, |
|
"end": 178, |
|
"text": "(Blei et al., 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 208, |
|
"text": "Griffiths and Steyvers, 2004;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 209, |
|
"end": 224, |
|
"text": "Heinrich, 2008;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 225, |
|
"end": 254, |
|
"text": "Steyvers and Griffiths, 2007)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "An unlabeled training corpus can be used to estimate an LDA model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many inference methods have been proposed, e.g., variational methods (Blei et al., 2003) , expectation propagation (Griffiths and Steyvers, 2004) , Gibbs sampling (Griffiths and Steyvers, 2004) , and a collapsed variational Bayesian inference method (Teh et al., 2007) . Gibbs sampling, as a specific form of Markov chain Monte Carlo (MCMC), is a popular method for estimating LDA models. After an LDA model is estimated, it can be used in a very versatile manner: to analyze new documents, for inference tasks, or for retrieval/comparison functions. For instance, we can calculate the probability that a given word appears in a document conditioned on other words. Furthermore, two kinds of similarities can be assessed: between documents and between words (Steyvers and Griffiths, 2007) . The similarity between two documents can also be used to retrieve documents relevant to a query document (Heinrich, 2008 ). Yet another application is to use LDA as a dimensionality reduction tool for text classification (Blei et al., 2003) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 69, |
|
"end": 88, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 115, |
|
"end": 145, |
|
"text": "(Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 163, |
|
"end": 193, |
|
"text": "(Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 250, |
|
"end": 268, |
|
"text": "(Teh et al., 2007)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 758, |
|
"end": 788, |
|
"text": "(Steyvers and Griffiths, 2007)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 896, |
|
"end": 911, |
|
"text": "(Heinrich, 2008", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 1012, |
|
"end": 1031, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "To improve LDA's expressiveness, we can relax the bag-of-words assumption and plug in more sophisticated topic models (Griffiths et al., 2005; Griffiths et al., 2007; Wallach, 2006; Wallach, 2008; Wang and Mccallum, 2005; Wang et al., 2007) . sLDA (supervised LDA), as a statistical model of labeled collections, focuses on the prediction problem (Blei and Mcauliffe, 2007) . The correlated topic model (CTM) (Blei and Lafferty, 2007) addresses plain LDA's inability to model topic correlation. The author-topic model (AT) (Steyvers et al., 2004) considers not only topics but also authors of the documents, and models documents as if they were generated by a two-stage stochastic process.", |
|
"cite_spans": [ |
|
{ |
|
"start": 118, |
|
"end": 142, |
|
"text": "(Griffiths et al., 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 143, |
|
"end": 166, |
|
"text": "Griffiths et al., 2007;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 167, |
|
"end": 181, |
|
"text": "Wallach, 2006;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 182, |
|
"end": 196, |
|
"text": "Wallach, 2008;", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 221, |
|
"text": "Wang and Mccallum, 2005;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 222, |
|
"end": 240, |
|
"text": "Wang et al., 2007)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 347, |
|
"end": 373, |
|
"text": "(Blei and Mcauliffe, 2007)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 409, |
|
"end": 434, |
|
"text": "(Blei and Lafferty, 2007)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 523, |
|
"end": 546, |
|
"text": "(Steyvers et al., 2004)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Based on the fact that a spoiler should be topically close to the synopsis, we propose three methods to solve the spoiler ranking problem. The first two use LDA as a preprocessing stage, whereas the third requires positive training data. Predictive perplexity: Our first method is motivated by the use of LDA-based predictive perplexity (PP) for collaborative filtering (Blei et al., 2003) . Here, the PP metric is evaluated over a fixed test dataset in order to empirically compare LDA with other models (pLSI, mixture of unigrams). In our work, we view documents as analogous to users, and words inside documents as analogous to movies. Given a group of known words, we predict the other group of unkown words. We can either calculate the predictive perplexity value from each movie comment Com to the unique synopsis (PP1), or from the synopsis Syn to each comment (PP2).", |
|
"cite_spans": [ |
|
{ |
|
"start": 370, |
|
"end": 389, |
|
"text": "(Blei et al., 2003)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "P P 1(Syn, w com ) = exp{\u2212 PM syn d=1 log p(w d |wcom) Msyn } P P 2(Com, w syn ) = exp{\u2212 P Mcom d=1 log p(w d |wsyn) Mcom }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In the equations above, p(w d |w com ) and p(w d |w syn ) are the probabilities to generate the word (w d ) from a group of observed words w obs (either a comment w com or a synopsis w syn ). p(w|w obs ) = z p(w|z)p(z|\u03b8)p(\u03b8|w obs )d\u03b8 M com or M syn is the length of a comment or a synopsis. Notice that p(\u03b8|w obs ) can be easily calculated after estimating LDA model by Gibbs sampling. It is also discussed as \"predictive likelihood ranking\" in (Heinrich, 2008) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 445, |
|
"end": 461, |
|
"text": "(Heinrich, 2008)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Methods", |
|
"sec_num": "3.1" |
|
}, |
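
{

"text": "As a concrete illustration, consider a minimal Python sketch of the predictive-perplexity computation (the variable names are hypothetical; it assumes an already-estimated LDA model given as a K x V topic-word matrix phi whose rows sum to one, and a length-K topic posterior theta_obs inferred from the observed words):\n\nimport numpy as np\n\ndef predictive_perplexity(held_out_ids, phi, theta_obs):\n    # p(w | w_obs) = sum_z p(w | z) p(z | w_obs): a length-V marginal\n    word_probs = theta_obs @ phi\n    log_probs = np.log(word_probs[held_out_ids])\n    # exp of the negative average held-out log-likelihood, matching PP1/PP2\n    return float(np.exp(-log_probs.mean()))\n\n# PP1: predict the synopsis words from the comment's topic posterior\n# pp1 = predictive_perplexity(synopsis_ids, phi, theta_comment)\n# PP2: the reverse direction\n# pp2 = predictive_perplexity(comment_ids, phi, theta_synopsis)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Methods",

"sec_num": "3.1"

},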
|
{ |
|
"text": "Since documents are modeled as mixtures of topics in LDA, we can calculate the similarity between synopsis and comment by measuring their topic distributions' similarity. We adopt the widely-used symmetrized Kullback Leibler (KL) divergence (Heinrich, 2008; Steyvers and Griffiths, 2007) to measure the difference between the two documents' topic distributions,", |
|
"cite_spans": [ |
|
{ |
|
"start": 241, |
|
"end": 257, |
|
"text": "(Heinrich, 2008;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 258, |
|
"end": 287, |
|
"text": "Steyvers and Griffiths, 2007)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetrized KL-divergence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "sKL(Syn, Com) = 1 2 [D KL (Syn Com) + D KL (Com Syn)]", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetrized KL-divergence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetrized KL-divergence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "D KL (p q) = T j=1 p j log 2 p j q j LPU:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetrized KL-divergence:", |
|
"sec_num": null |
|
}, |
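
{

"text": "For completeness, a small self-contained Python sketch of this ranking step (theta_synopsis and the per-comment distributions are hypothetical names for the inferred topic mixtures):\n\nimport numpy as np\n\ndef skl(p, q, eps=1e-12):\n    # symmetrized KL divergence between two topic distributions;\n    # eps guards against zero-probability topics\n    p = np.asarray(p, dtype=float) + eps\n    q = np.asarray(q, dtype=float) + eps\n    kl_pq = np.sum(p * np.log2(p / q))\n    kl_qp = np.sum(q * np.log2(q / p))\n    return 0.5 * (kl_pq + kl_qp)\n\n# smaller sKL = topically closer to the synopsis, hence more spoiler-like\n# ranked = sorted(range(len(thetas)), key=lambda i: skl(theta_synopsis, thetas[i]))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Symmetrized KL-divergence:",

"sec_num": null

},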
|
{ |
|
"text": "Viewing the spoiler ranking problem as a retrieval task given the (long) query synopsis, we also consider the LPU (Learning from Positive and Unlabeled Data) method (Liu et al., 2003) . We apply LPU as if the comment collection was the unlabeled dataset, and the synopsis together with few obvious spoiler comments as the positive training data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 165, |
|
"end": 183, |
|
"text": "(Liu et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Symmetrized KL-divergence:", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "LDA, as a topic model, is widely used as a clustering method and dimensionality reduction tool.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "It models text as a mixture of topics. However, topics extracted by LDA are not necessarily the same topics as judged by humans since the definition of topic is very subjective. For instance, when conducting sentimental polarity analysis, we hope that topics are clusters concerning one certain kind of subjective sentiment. But for other purposes, we may desire topics focusing on broad 'plots.' Since LDA merely processes a collection according to the statistical distribution of words, its results might not fit either of these two cases mentioned above.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In a basic topic model (section 3.1), neither the order of a sequence of words nor the semantic connections between two words affect the probabilistic modeling. Documents are generated only based on a BOW assumption. However, word order information is very important for most textrelated tasks, and simply discarding the order information is inappropriate. Significant work has gone in to address this problem. Griffiths et al. use order information by incorporating collocations (Griffiths et al., 2005; Griffiths et al., 2007) . They give an example of the collocation \"united kingdom\", which is ideally treated as a single chunk than two independent words. However, this model can only be used to capture collocations involving sequential terms. Their extended model (Griffiths et al., 2007) integrates topics and syntax, and identifies syntactic classes of words based on their distribution. More sophisticated models exist (Wallach, 2006; Wang and Mccallum, 2005; Wang et al., 2007; Wallach, 2008 ) but all of them are focused on solving linguistic analysis tasks using topic models. In this paper, however, our focus is on utilizing dependency information as a preprocessing step to help improve the accuracy of LDA models.", |
|
"cite_spans": [ |
|
{ |
|
"start": 480, |
|
"end": 504, |
|
"text": "(Griffiths et al., 2005;", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 505, |
|
"end": 528, |
|
"text": "Griffiths et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 770, |
|
"end": 794, |
|
"text": "(Griffiths et al., 2007)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 928, |
|
"end": 943, |
|
"text": "(Wallach, 2006;", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 944, |
|
"end": 968, |
|
"text": "Wang and Mccallum, 2005;", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 969, |
|
"end": 987, |
|
"text": "Wang et al., 2007;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 988, |
|
"end": 1001, |
|
"text": "Wallach, 2008", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "In more detail, we utilize dependency parsing to breakup sentences and treat parses as independent 'virtual words,' to be added to the original BOWbased LDA model. In our experiments we employ the Stanford typed dependency parser 1 (Marneffe et al., 2006) as our parsing tool. We use collapsed typed dependencies (a.k.a. grammatical relations) to form the virtual words. However, we do not incorporate all the dependencies. We only retain dependencies whose terms have the part-of-speech tags such as \"NN\", \"VB\", \"JJ\", \"PRP\" and \"RB\" 2 , since these terms have strong plot meaning, and are close to the movie topic. Fig. 2 shows a typical parsing result from one sample sentence. This sentence is taken from a review of Unbreakable. Consider Fig. 1 , which depicts five sample sentences all containing two words: \"Dunn\" and \"survivor\". Although these sentences appear different, these two words above refer to the same individual. By treating dependencies as virtual words, we can easily integrate these plot-related relations into an LDA model. Notice that among these five sentences, the grammatical relations between these two words are different: in the fourth sentence, \"survivor\" serves as an appositional modifier of the term \"Dunn\"(appos), whereas in David Dunn is the sole survivor of this terrible disaster.", |
|
"cite_spans": [ |
|
{ |
|
"start": 232, |
|
"end": 255, |
|
"text": "(Marneffe et al., 2006)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 616, |
|
"end": 622, |
|
"text": "Fig. 2", |
|
"ref_id": "FIGREF0" |
|
}, |
|
{ |
|
"start": 742, |
|
"end": 748, |
|
"text": "Fig. 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{

"text": "Figure 1 sentences: (1) David Dunn is the sole survivor of this terrible disaster. (2) David Dunn (Bruce Willis) is the only survivor in a horrific train trash. (3) David Dunn, a man caught in what appears to be a loveless, deteriorating marriage, is the sole survivor of a Philadelphia train wreck. (4) In this Bruce Willis plays David Dunn, the sole survivor of a passenger train accident. (5) Then the story moves to security guard David Dunn (Bruce Willis) miraculously being the lone survivor of a mile-long train crash (that you find out later was not accidental), and with no injuries what-so-ever. Figure 1: Five sentences with the same topical connection between \"Dunn\" and \"survivor\"; we integrate this relation into LDA by treating it as a virtual word \"Dunn-survivor\".",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing",

"sec_num": "3.2"

},
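
{

"text": "A minimal sketch of the virtual-word construction described above, assuming the collapsed typed dependencies are available as (relation, (governor, governor_pos), (dependent, dependent_pos)) tuples produced by the parser:\n\nKEEP_POS_PREFIXES = (\"NN\", \"VB\", \"JJ\", \"PRP\", \"RB\")\n\ndef virtual_words(dependencies):\n    kept = []\n    for rel, (gov, gov_pos), (dep, dep_pos) in dependencies:\n        # keep only dependencies whose two terms carry plot-related POS tags;\n        # prefix matching also covers NNS, VBN, etc.\n        if gov_pos.startswith(KEEP_POS_PREFIXES) and dep_pos.startswith(KEEP_POS_PREFIXES):\n            kept.append(gov + \"-\" + dep)  # e.g. \"Dunn-survivor\"\n    return kept\n\n# mixed representation: the original BOW tokens plus the virtual words\n# doc_tokens = bow_tokens + virtual_words(deps)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing",

"sec_num": "3.2"

},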
|
{ |
|
"text": "We evaluate topic drift among the results from plain LDA. We mainly check whether plain LDA will assign the same topic to those terms that have specific linguistic dependency relations. We only consider the following four types of dependencies for evaluation 3 :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Relations with two noun terms: <NN, NN>, such as \"appos\", \"nn\", \"abbrev\" etc.;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Relations with one noun and one adjective: <NN, JJ>, like \"amod\";", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Relations with one noun and one verb: <NN, VB>, such as \"agent\", \"dobj\", etc.;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "\u2022 Relations with only one noun: <NN, *>, which is the relaxed version of <NN, NN>;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "We experimented with different pre-set topic numbers (500, 50, and 2) and conducted experiments on four different movie comment collections with LDA analysis. Table 1 shows that <NN, NN> dependency has the highest chance to be topic-matched 4 than other relations. However, all dependencies have very low percentage to be topic-matched, and with a topic number of 2, there remained a significant amount of unmatched <NN, NN> dependencies, demonstrating that simply doing plain LDA may not capture the plot \"topic\" as we desire. Observing the results above, each method from section 3.1 (PP1, PP2, sKL and LPU) can be extended by: (1) using BOW-based words, (2) using only dependency-based words, or (3) using a mix of BOW and dependency (dependencies as virtual words). This induces 12 different ranking strategies. The whole film is somewhat slow and it would've been possible to add more action scenes. Even though I liked it very much (6.8/10) I think it is less impressive than \"The Sixth Sense\" (8.0/10). I would like to be more specific with each scene but it will turn this comment into a spoiler so I will leave it there. I recommend you to see the movie if you come from the basic Sci-Fi generation, otherwise you may feel uncomfortable with it. Anyway once upon a time you were a kid in wonderland and everything was possible.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 159, |
|
"end": 166, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dependency Parsing", |
|
"sec_num": "3.2" |
|
}, |
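
{

"text": "The topic-match check itself is straightforward; the following hedged Python sketch assumes per-term topic assignments taken from one Gibbs sample of the fitted LDA model:\n\ndef topic_matched_fraction(dep_pairs, topic_of):\n    # dep_pairs: iterable of (left_term, right_term) dependency pairs\n    # topic_of: dict mapping a term to its sampled topic id\n    pairs = [(l, r) for l, r in dep_pairs if l in topic_of and r in topic_of]\n    if not pairs:\n        return 0.0\n    matched = sum(topic_of[l] == topic_of[r] for l, r in pairs)\n    return matched / len(pairs)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing",

"sec_num": "3.2"

},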
|
{

"text": "[Table 2: Sample IMDb comments (e.g., for Unbreakable, tt0217869) whose official spoiler labels are questionable: the first four rows are labeled as spoilers by IMDb but actually are not, while the last two rows are not labeled by IMDb even though they expose plot twists.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Dependency Parsing",

"sec_num": "3.2"

},
|
{ |
|
"text": "IMDb boasts a collection of more than 203,000 movies (from 1999 to 2009) , and the number of comments and reviews for these movies number nearly 970,000. For those movies with synopsis provided by IMDb, the average length of their synopses is about 2422 characters 5 . Our experimental setup, for evaluation purposes, requires some amount of labeled data. We choose four movies from IMDb, together with 2148 comments. As we can see in Table 3 , these four movies have different sizes of comment sets: the movie \"Unbreakable\" (2000) has more than 1000 comments, whereas the movie \"Role Models\" (2008) has only 123 comments. We labeled all the 2148 comments for these four movies manually, and as Table 3 shows, 5 Those movies without synopsis are not included. about 20% of each movie's comments are spoilers. Our labeling result is a little different from the current labeling in IMDb: among the 2148 comments, although 1659 comments have the same labels with IMDb, the other 489 are different (205 are treated as spoilers by IMDb but non-spoilers by us; vice versa with 284) The current labeling system in IMDb is very coarse: as shown in Table 2, the first four rows of comments are labeled as spoilers by IMDb, but actually they are not. The last two rows of comments are ignored by IMDb; however, they do expose the plots about the twisting ends.", |
|
"cite_spans": [ |
|
{ |
|
"start": 53, |
|
"end": 72, |
|
"text": "(from 1999 to 2009)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 711, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 435, |
|
"end": 442, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 695, |
|
"end": 702, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data preparation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "After crawling all the comments of these four movies, we performed sentence chunking using the LingPipe toolkit and obtained 356 sentences for the four movies' synopses, and 26964 sentences for all the comments of these four movies. These sentences were parsed to extract dependency information: we obtained 5655 dependencies for all synopsis sentences and 448170 dependencies for all comment sentences. From these, we only retain those dependencies that have at least one noun term in either left side or the right side. For measures which require the dependency information, the dependencies are re-organized and treated as a new term planted in the text.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Data preparation", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "One of the shortcomings of LDA-based methods is that they require setting a number of topics in advance. Numerous ways have been proposed to handle this problem (Blei et al., 2004; Blei et al., 2003; Griffiths and Steyvers, 2004; Heinrich, 2008; Steyvers and Griffiths, 2007; Teh et al., 2006) . Perplexity, which is widely used in the language modeling community, is also used to predict the best number of topics. It is a measure of how well the model fits the unseen documents, and is calculated as average per-word held-out likelihood. The lower the perplexity is, the better the model is, and therefore, the number of topic is specified as the one leading to the best performance. Griffiths and Steyvers (Griffiths and Steyvers, 2004) also discuss the standard Bayesian method which computes the posterior probability of different models given the observed data. Another method from non-parametric Bayesian statistics automatically helps choose the appropriate number of topics, with flexibility to still choose hyperparameters (Blei et al., 2004; Teh et al., 2006) . Although the debate of choosing an appropriate number of topics continues (Boyd-Graber et al., 2009) , we utilized the classic perplexity method in our work. Heinrich (Heinrich, 2008) demonstrated that perplexity can be calculated by:", |
|
"cite_spans": [ |
|
{ |
|
"start": 161, |
|
"end": 180, |
|
"text": "(Blei et al., 2004;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 181, |
|
"end": 199, |
|
"text": "Blei et al., 2003;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 200, |
|
"end": 229, |
|
"text": "Griffiths and Steyvers, 2004;", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 245, |
|
"text": "Heinrich, 2008;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 246, |
|
"end": 275, |
|
"text": "Steyvers and Griffiths, 2007;", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 276, |
|
"end": 293, |
|
"text": "Teh et al., 2006)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 686, |
|
"end": 739, |
|
"text": "Griffiths and Steyvers (Griffiths and Steyvers, 2004)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 1033, |
|
"end": 1052, |
|
"text": "(Blei et al., 2004;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 1053, |
|
"end": 1070, |
|
"text": "Teh et al., 2006)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1147, |
|
"end": 1173, |
|
"text": "(Boyd-Graber et al., 2009)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1240, |
|
"end": 1256, |
|
"text": "(Heinrich, 2008)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic number analysis", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "P (W|M) = M m=1 p(\u02dc w m |M) \u2212 1 N = exp{\u2212 P M m=1 log p(\u02dc wm|M) P M m=1 Nm }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic number analysis", |
|
"sec_num": "4.2.1" |
|
}, |
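
{

"text": "Schematically, the model-selection loop looks as follows (a sketch, not our exact experimental code: train_lda and held_out_perplexity are hypothetical helpers standing in for Gibbs-sampling LDA estimation and the perplexity formula above):\n\ncandidate_ks = [50, 100, 200, 400, 600, 800]\nalpha, beta = 0.1, 0.01  # document-topic and topic-word priors from the text\n\nbest_k, best_pp = None, float(\"inf\")\nfor k in candidate_ks:\n    model = train_lda(train_docs, num_topics=k, alpha=alpha, beta=beta)\n    pp = held_out_perplexity(model, held_out_docs)  # 20% held-out comments\n    if pp < best_pp:\n        best_k, best_pp = k, pp  # lower perplexity = better fit",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Topic number analysis",

"sec_num": "4.2.1"

},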
|
{ |
|
"text": "We chose different topic numbers and calculated the perplexity value for the 20% held-out comments. A good number of topics was found to be between 200 and 600 for both Bow-based strategy and Bow+Dependency strategy, and is also affected by the size of movie comment collections. (We used 0.1 as the document topic prior, and 0.01 as the topic word prior.)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Topic number analysis", |
|
"sec_num": "4.2.1" |
|
}, |
|
{ |
|
"text": "As discussed earlier, our task is to rank all the comments according to their possibilities of being a spoiler. We primarily used four methods to do the ranking: PP1, PP2, sKL, and the LPU method. For each method, we tried the basic model using \"bag-of-words\", and the model using dependency parse information (only), and also with both BOW and dependency information mixed. We utilize LingPipe LDA clustering component which uses Gibbs sampling.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA analysis process", |
|
"sec_num": "4.2.2" |
|
}, |
|
{ |
|
"text": "Among the four methods studied here, PP1, PP2 and sKL are based on LDA preprocessing. After obtaining the topic-word distribution and the posterior distributions for topics in each document, the PP1 and PP2 metrics can be easily calculated. The symmetrized KL divergence between each pair of synopsis and comment is calculated by comparing their topic distributions. LPU method, as a text classifier, requires a set of positive training data. We selected those comments which contain terms or phrases as strong hint of spoiler (using a list of 20 phrases as the filter, such as \"spoiler alert\", \"spoiler ahead\", etc). These spoiler comments together with the synopsis, are treated as the positive training data. We then utilized LPU to label each comment with a real number for ranking.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA analysis process", |
|
"sec_num": "4.2.2" |
|
}, |
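
{

"text": "A minimal sketch of how the positive set can be assembled (illustrative names; the phrase list shows only two of the 20 filter phrases):\n\nHINT_PHRASES = (\"spoiler alert\", \"spoiler ahead\")\n\ndef positive_set(synopsis, comments):\n    # comments containing an explicit spoiler warning become positive seeds\n    seeds = [c for c in comments if any(p in c.lower() for p in HINT_PHRASES)]\n    return [synopsis] + seeds  # everything else stays in the unlabeled set",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "LDA analysis process",

"sec_num": "4.2.2"

},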
|
{ |
|
"text": "To evaluate the ranking effects of the 12 strategies, we plot n-best precision and recall graphs, which are widely used for assessing collocation measures (Evert and Krenn, 2001; Pecina and Schlesinger, 2006) . Fig. 3 visualizes the precision-recall graphs from 12 different measures for the four movie comment collections. The x-axis represents the proportion of the ranking list, while the y-axis depicts the corresponding precision or recall value. The upper part of the figure is the result for the movie which contains more than 1000 comments, while the bottom part demonstrates the result for the relatively small comment collection. The n-best evaluation shows that for all the four movie comment collections, PP1_mix and PP1 perform significantly better than the other methods, and the dependency information helps to increase the accuracy significantly, especially for the larger size collection. The LPU method, though using part of the positive training data, did not perform very well. The reason could be that although some of the users put the warning phrases (like \"spoiler alert\") ahead of their comments, the comment might contain only indirect plot-revealing information. This also reflects that a spoiler tagging method by us- The PP1 method with BOW and dependency information mixed performs the best among all the measures. Other six methods such as dependency only and KL-based which do not give good performance are ignored in this figure to make it readable. Full comparison is available at: http://sites.google.com/site/ldaspoiler/ ing only keywords typically will not work. Finally, the approach to directly calculating the symmetrized KL divergence seems to be not suitable, either.", |
|
"cite_spans": [ |
|
{ |
|
"start": 155, |
|
"end": 178, |
|
"text": "(Evert and Krenn, 2001;", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 179, |
|
"end": 208, |
|
"text": "Pecina and Schlesinger, 2006)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 217, |
|
"text": "Fig. 3", |
|
"ref_id": "FIGREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We also compared the average precision values and normalized discounted cumulative gain (nDCG) values (Croft et al., 2009; J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) of the ranking results with different parameters for Gibbs sampling, such as burnin period and sample size. Average precision is calculated by averaging the precision values from the ranking positions where a valid spoiler is found, and the nDCG value for the top-p list is calculated as nDCG p = DCGp IDCG \u2022 DCG p is defined as:", |
|
"cite_spans": [ |
|
{ |
|
"start": 102, |
|
"end": 122, |
|
"text": "(Croft et al., 2009;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 123, |
|
"end": 153, |
|
"text": "J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA iteration analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "DCG p = rel 1 + p i=2 rel i log 2 i", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LDA iteration analysis", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "where rel i is 1 when the i-th comment in the list is judged as a real spoiler, and 0, otherwise. IDCG denotes the maximum possible DCG value when all the real spoilers are ranked at the top (perfect ranking) (J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002) . As we can see from Table 4 , the accuracy is not affected too much as long as the burin period for the MCMC process is longer than 50 and the sample size retained is larger than 10. In our experiments, we use 100 as the burin parameter, and beyond that, 100 samples were retained with sample lag of 2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 240, |
|
"text": "(J\u00e4rvelin and Kek\u00e4l\u00e4inen, 2002)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 269, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "LDA iteration analysis", |
|
"sec_num": "4.4" |
|
}, |
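
{

"text": "Both measures are easy to reproduce; a self-contained Python sketch for binary relevance (rels[i] is 1 if the comment at rank i+1 is a true spoiler):\n\nimport math\n\ndef dcg(rs):\n    # DCG_p = rel_1 + sum_{i=2..p} rel_i / log2(i)\n    return rs[0] + sum(r / math.log2(i) for i, r in enumerate(rs[1:], start=2))\n\ndef ndcg_at_p(rels, p):\n    ideal = dcg(sorted(rels, reverse=True)[:p])  # all real spoilers on top\n    return dcg(rels[:p]) / ideal if ideal > 0 else 0.0\n\ndef average_precision(rels):\n    hits, precisions = 0, []\n    for i, r in enumerate(rels, start=1):\n        if r:\n            hits += 1\n            precisions.append(hits / i)  # precision at each spoiler position\n    return sum(precisions) / len(precisions) if precisions else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "LDA iteration analysis",

"sec_num": "4.4"

},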
|
{ |
|
"text": "As shown in Table 5 , we find that the basic BOW strategy prefers the longer comments whereas the strategy that uses dependency information prefers the shorter ones. Although it is reasonable that a longer comment would have a higher probabil-ity of revealing the plot, methods which prefers the longer comments usually leave out the short spoiler comments. By incorporating the dependency information together with the basic BOW, the new method reduces this shortcoming. For instance, consider one short comment for the movie \"Unbreakable (2000)\":", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 12, |
|
"end": 19, |
|
"text": "Table 5", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Representative results", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "This is the same formula as Sixth Sense -from the ability to see things other people don't, to the shocking ending. Only this movie is just not plausible -I mean Elijah goes around causing disasters, trying to see if anyone is \"Unbreakable\" -it's gonna take a lot of disasters because its a big world.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Representative results", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "whcih is ranked as the 27th result in the PP1_mix method, whereas the BOW based PP1 method places it at the 398th result in the list. Obviously, this comment reveals the twisting end that it is Elijah who caused the disasters. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Representative results", |
|
"sec_num": "4.5" |
|
}, |
|
{ |
|
"text": "We have introduced the spoiler detection problem and proposed using topic models to rank movie comments according to the extent they reveal the movie's plot. In particular, integrating linguistic cues from dependency information into our topic model significantly improves the ranking accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In future work, we seek to study schemes which can segment comments to potentially identify the relevant spoiler portion automatically. The automatic labeling idea of (Mei et al., 2007) can also be studied in our framework. Deeper linguistic analysis, such as named entity recognition and semantic role labeling, can also be conducted. In addition, evaluating topic models or choosing the right number of topics using dependency information can be further studied. Finally, integrating the dependency relationships more directly into the probabilistic graphical model is also worthy of study.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 185, |
|
"text": "(Mei et al., 2007)", |
|
"ref_id": "BIBREF16" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and future work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "http://nlp.stanford.edu/software, V1.6 2 In the implementation, we actually considered all the POS tags with these five tags as prefix, such as \"NNS\", \"VBN\", etc.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Here we use <NN, JJ> to express relations having NN and JJ terms, but not necessarily in that order. Also, NN represents all tags related with nouns in the Penn Treebank Tagset, such as NNS. This applies to all the four expressions here.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "When both the left term and the right term of a dependency share the same topic, the relation is topic-matched.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A correlated topic model of science", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Annals of Applied Statistics", |
|
"volume": "1", |
|
"issue": "1", |
|
"pages": "17--35", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blei, David M. and John D. Lafferty. 2007. A cor- related topic model of science. Annals of Applied Statistics, 1(1):17-35.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Supervised topic models", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jon", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Mcauliffe", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 21st Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blei, David M. and Jon D. Mcauliffe. 2007. Super- vised topic models. In Proceedings of the 21st An- nual Conference on Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{

"first": "Andrew",

"middle": [

"Y"

],

"last": "Ng",

"suffix": ""

},

{

"first": "Michael",

"middle": [

"I"

],

"last": "Jordan",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Journal of machine learning research", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "993--1022", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. Journal of ma- chine learning research, 3:993-1022.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Hierarchical topic models and the nested chinese restaurant process", |
|
"authors": [ |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Gri", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 18th Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Blei, David M., T. Gri, M. Jordan, and J. Tenenbaum. 2004. Hierarchical topic models and the nested chi- nese restaurant process. In Proceedings of the 18th Annual Conference on Neural Information Process- ing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Reading tea leaves: How humans interpret topic models", |
|
"authors": [ |
|
{ |
|
"first": "Jordan", |
|
"middle": [], |
|
"last": "Boyd-Graber", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jonathan", |
|
"middle": [], |
|
"last": "Chang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sean", |
|
"middle": [], |
|
"last": "Gerrish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chong", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 23rd Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boyd-Graber, Jordan, Jonathan Chang, Sean Gerrish, Chong Wang, and David Blei. 2009. Reading tea leaves: How humans interpret topic models. In Pro- ceedings of the 23rd Annual Conference on Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Search Engines: Information Retrieval in Practice", |
|
"authors": [ |
|
{ |
|
"first": "Bruce", |
|
"middle": [], |
|
"last": "Croft", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Donald", |
|
"middle": [], |
|
"last": "Metzler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Trevor", |
|
"middle": [], |
|
"last": "Strohman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Croft, Bruce, Donald Metzler, and Trevor Strohman. 2009. Search Engines: Information Retrieval in Practice. Addison Wesley, 1 edition.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Methods for the qualitative evaluation of lexical association measures", |
|
"authors": [ |
|
{ |
|
"first": "Stefan", |
|
"middle": [], |
|
"last": "Evert", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Brigitte", |
|
"middle": [], |
|
"last": "Krenn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of 39th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Evert, Stefan and Brigitte Krenn. 2001. Methods for the qualitative evaluation of lexical association mea- sures. In Proceedings of 39th Annual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Finding scientific topics", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the National Academy of Sciences of the United States of America", |
|
"volume": "101", |
|
"issue": "", |
|
"pages": "5228--5235", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Griffiths, Thomas L. and M. Steyvers. 2004. Find- ing scientific topics. In Proceedings of the National Academy of Sciences of the United States of Amer- ica, 101 Suppl 1:5228-5235, April.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Integrating topics and syntax", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 19th Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Griffiths, Thomas L., Mark Steyvers, David M. Blei, and Joshua B. Tenenbaum. 2005. Integrating topics and syntax. In Proceedings of the 19th Annual Con- ference on Neural Information Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Topics in semantic representation", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joshua", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Tenenbaum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Psychological Review", |
|
"volume": "114", |
|
"issue": "2", |
|
"pages": "211--244", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Griffiths, Thomas L., Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representa- tion. Psychological Review, 114(2):211-244, April.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Parameter estimation for text analysis", |
|
"authors": [ |
|
{ |
|
"first": "Gregor", |
|
"middle": [], |
|
"last": "Heinrich", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Heinrich, Gregor. 2008. Parameter estimation for text analysis. Technical report, University of Leipzig.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Probabilistic latent semantic analysis", |
|
"authors": [ |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Hofmann", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of 15th Conference on Uncertainty in Artificial Intelligence", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hofmann, Thomas. 1999. Probabilistic latent seman- tic analysis. In Proceedings of 15th Conference on Uncertainty in Artificial Intelligence.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Cumulated gain-based evaluation of IR techniques", |
|
"authors": [ |
|
{ |
|
"first": "Kalervo", |
|
"middle": [], |
|
"last": "J\u00e4rvelin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jaana", |
|
"middle": [], |
|
"last": "Kek\u00e4l\u00e4inen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "ACM Transactions on Information Systems", |
|
"volume": "20", |
|
"issue": "4", |
|
"pages": "422--446", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "J\u00e4rvelin, Kalervo and Jaana Kek\u00e4l\u00e4inen. 2002. Cumu- lated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20(4):422- 446.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Building text classifiers using positive and unlabeled examples", |
|
"authors": [ |
|
{

"first": "Bing",

"middle": [],

"last": "Liu",

"suffix": ""

},

{

"first": "Yang",

"middle": [],

"last": "Dai",

"suffix": ""

},

{

"first": "Xiaoli",

"middle": [],

"last": "Li",

"suffix": ""

},

{

"first": "Wee Sun",

"middle": [],

"last": "Lee",

"suffix": ""

},

{

"first": "Philip",

"middle": [

"S"

],

"last": "Yu",

"suffix": ""

}
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 3rd IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yu. 2003. Building text classifiers using positive and unlabeled examples. In Proceedings of the 3rd IEEE International Conference on Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Generating typed dependency parses from phrase structure parses", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Maccartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 5th International Conference on Language Resources and Evaluation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marneffe, M., B. Maccartney, and C. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the 5th Inter- national Conference on Language Resources and Evaluation.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Automatic labeling of multinomial topic models", |
|
"authors": [ |
|
{ |
|
"first": "Qiaozhu", |
|
"middle": [], |
|
"last": "Mei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xuehua", |
|
"middle": [], |
|
"last": "Shen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chengxiang", |
|
"middle": [], |
|
"last": "Zhai", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 13th ACM SIGKDD conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mei, Qiaozhu, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic models. In Proceedings of the 13th ACM SIGKDD conference.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "Combining association measures for collocation extraction", |
|
"authors": [ |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Pecina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Schlesinger", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pecina, Pavel and Pavel Schlesinger. 2006. Com- bining association measures for collocation extrac- tion. In Proceedings of the 21st International Con- ference on Computational Linguistics and 44th An- nual Meeting of the Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Probabilistic Topic Models", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steyvers, Mark and Tom Griffiths, 2007. Probabilistic Topic Models. Lawrence Erlbaum Associates.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Probabilistic author-topic models for information discovery", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Steyvers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Padhraic", |
|
"middle": [], |
|
"last": "Smyth", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michal", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Zvi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Thomas", |
|
"middle": [], |
|
"last": "Griffiths", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of the 10th ACM SIGKDD conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Steyvers, Mark, Padhraic Smyth, Michal R. Zvi, and Thomas Griffiths. 2004. Probabilistic author-topic models for information discovery. In Proceedings of the 10th ACM SIGKDD conference.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Hierarchical dirichlet processes", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Whye", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "I", |
|
"middle": [], |
|
"last": "Jordan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Beal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Matthew", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Blei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "David", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Journal of the American Statistical Association", |
|
"volume": "101", |
|
"issue": "476", |
|
"pages": "1566--1581", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teh, Yee Whye, Jordan, I. Michael, Beal, J. Matthew, Blei, and M. David. 2006. Hierarchical dirichlet processes. Journal of the American Statistical As- sociation, 101(476):1566-1581, December.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A collapsed variational bayesian inference algorithm for latent dirichlet allocation", |
|
"authors": [ |
|
{ |
|
"first": "Yee", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Teh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Newman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Max", |
|
"middle": [], |
|
"last": "Welling", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 21st Annual Conference on Neural Information Processing Systems", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Teh, Yee W., David Newman, and Max Welling. 2007. A collapsed variational bayesian inference algo- rithm for latent dirichlet allocation. In Proceedings of the 21st Annual Conference on Neural Informa- tion Processing Systems.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Topic modeling: beyond bag-of-words", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 23rd International Conference on Machine Learning", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wallach, Hanna M. 2006. Topic modeling: beyond bag-of-words. In Proceedings of the 23rd Interna- tional Conference on Machine Learning.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Structured topic models for language", |
|
"authors": [ |
|
{ |
|
"first": "Hanna", |
|
"middle": [ |
|
"M" |
|
], |
|
"last": "Wallach", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wallach, Hanna M. 2008. Structured topic models for language. Ph.D. thesis, University of Cambridge.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A note on topical n-grams", |
|
"authors": [ |
|
{ |
|
"first": "Xuerui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, Xuerui and Andrew Mccallum. 2005. A note on topical n-grams. Technical report, University of Massachusetts Amherst.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Topical n-grams: Phrase and topic discovery, with an application to information retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Xuerui", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 7th IEEE International Conference on Data Mining", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wang, Xuerui, Andrew McCallum, and Xing Wei. 2007. Topical n-grams: Phrase and topic discovery, with an application to information retrieval. In Pro- ceedings of the 7th IEEE International Conference on Data Mining.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Lda-based document models for ad-hoc retrieval", |
|
"authors": [ |
|
{ |
|
"first": "Xing", |
|
"middle": [], |
|
"last": "Wei", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bruce", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Croft", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 29th Annual International ACM SIGIR Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Wei, Xing and Bruce W. Croft. 2006. Lda-based doc- ument models for ad-hoc retrieval. In Proceedings of the 29th Annual International ACM SIGIR Con- ference.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": "Dependency parse of \"David Dunn is the sole survivor of this terrible disaster\"." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"num": null, |
|
"uris": null, |
|
"text": ": N-best(top nth) evaluation (Burnin period = 100): comparison of precision-recall for different methods on four movie comment collections." |
|
}, |
|
"TABREF0": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td/><td>topic number = 500</td><td/><td/></tr><tr><td>Movie Name</td><td><NN, NN></td><td><NN, JJ></td><td><NN, VB></td><td><NN, *></td></tr><tr><td>Unbreakable</td><td>772/3024</td><td>412/4411</td><td>870/19498</td><td>5672/61251</td></tr><tr><td>Blood Diamond</td><td>441/1775</td><td>83/553</td><td>80/1012</td><td>609/3496</td></tr><tr><td>Shooter</td><td>242/1846</td><td>42/1098</td><td>114/2150</td><td>1237/15793</td></tr><tr><td>Role Models</td><td>409/2978</td><td>60/1396</td><td>76/2529</td><td>559/7276</td></tr><tr><td/><td/><td>topic number = 50</td><td/><td/></tr><tr><td>Movie Name</td><td><NN, NN></td><td><NN, JJ></td><td><NN, VB></td><td><NN, *></td></tr><tr><td>Unbreakable</td><td>1326/3024</td><td>953/4411</td><td>3354/19498</td><td>14067/61251</td></tr><tr><td>Blood Diamond</td><td>806/1775</td><td>151/553</td><td>210/1012</td><td>1194/3496</td></tr><tr><td>Shooter</td><td>584/1846</td><td>204/1098</td><td>392/2150</td><td>3435/15793</td></tr><tr><td>Role Models</td><td>1156/2978</td><td>190/1396</td><td>309/2529</td><td>1702/7276</td></tr><tr><td/><td/><td>topic number = 2</td><td/><td/></tr><tr><td>Movie Name</td><td><NN, NN></td><td><NN, JJ></td><td><NN, VB></td><td><NN, *></td></tr><tr><td>Unbreakable</td><td>2379/3024</td><td>3106/4411</td><td>13606/19498</td><td>43876/61251</td></tr><tr><td>Blood Diamond</td><td>1391/1775</td><td>404/553</td><td>761/1012</td><td>2668/3496</td></tr><tr><td>Shooter</td><td>1403/1846</td><td>768/1098</td><td>1485/2150</td><td>11008/15793</td></tr><tr><td>Role Models</td><td>2185/2978</td><td>908/1396</td><td>1573/2529</td><td>4920/7276</td></tr></table>", |
|
"text": "Topic match analysis for plain LDA (Each entry is the ratio of topic-matched dependencies to all dependencies)" |
|
}, |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td>No.</td><td>Tag by IMDb</td><td>Comment in IMDb</td></tr><tr><td>1</td><td>Spoiler</td><td/></tr></table>", |
|
"text": "Some examples of incorrect spoiler tagging in IMDb (italicized sentences are spoilers)." |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"3\">Movie Name IMDB ID #Comments #Spoilers</td></tr><tr><td>Unbreakable tt0217869</td><td>1219</td><td>205</td></tr><tr><td>Blood Diamond tt0450259</td><td>538</td><td>147</td></tr><tr><td>Shooter tt0822854</td><td>268</td><td>73</td></tr><tr><td>Role Models tt0430922</td><td>123</td><td>39</td></tr></table>", |
|
"text": "Evaluation dataset about four movies with different numbers of comments." |
|
}, |
|
"TABREF4": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td colspan=\"2\"><S=100; Lag=2></td><td colspan=\"2\"><S=10; Lag=2></td><td colspan=\"2\"><S=1; Lag=2></td></tr><tr><td>Burnin</td><td>AvgP (%)</td><td>nDCG</td><td>AvgP (%)</td><td>nDCG</td><td>AvgP (%)</td><td>nDCG</td></tr><tr><td>400</td><td>80.85</td><td>0.951</td><td>78.2</td><td>0.938</td><td>78.1</td><td>0.94</td></tr><tr><td>200</td><td>80.95</td><td>0.951</td><td>80.5</td><td>0.948</td><td>79.1</td><td>0.94</td></tr><tr><td>100</td><td>87.25</td><td>0.974</td><td>80.2</td><td>0.943</td><td>82.4</td><td>0.96</td></tr><tr><td>50</td><td>81.5</td><td>0.958</td><td>79.5</td><td>0.942</td><td>80.0</td><td>0.94</td></tr><tr><td>10</td><td>78.9</td><td>0.944</td><td>79.5</td><td>0.949</td><td>75.9</td><td>0.92</td></tr><tr><td>1</td><td>79.4</td><td>0.940</td><td>79.2</td><td>0.952</td><td>58.0</td><td>0.86</td></tr></table>", |
|
"text": "Comparison of ranking by PP_mix using different parameters for Gibbs sampling (analyzed on the top 150 ranking lists, and the values in the table are the mean of the accuracy from four movie comment collections)." |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"type_str": "table", |
|
"content": "<table><tr><td/><td>Role Models</td><td>Shooter</td><td>Blood Diamond</td><td>Unbreakable</td></tr><tr><td>BOW</td><td>2162.14</td><td>2259.36</td><td>2829.86</td><td>1389.18</td></tr><tr><td>Dependency</td><td>1596.14</td><td>1232.12</td><td>2435.58</td><td>1295.72</td></tr></table>", |
|
"text": "Comparison of average length of the top-50 comments of 4 movies from 2 strategies." |
|
} |
|
} |
|
} |
|
} |