{
"paper_id": "S14-1016",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:32:20.361851Z"
},
"title": "Extracting Latent Attributes from Video Scenes Using Text as Background Knowledge",
"authors": [
{
"first": "Anh",
"middle": [],
"last": "Tran",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
},
{
"first": "Paul",
"middle": [],
"last": "Cohen",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We explore the novel task of identifying latent attributes in video scenes, such as the mental states of actors, using only large text collections as background knowledge and minimal information about the videos, such as activity and actor types. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms. We develop and test several largely unsupervised information extraction models that identify the mental states of human participants in video scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models as well as other baseline methods. This work is licenced under a Creative Commons Attribution 4.0 International License.",
"pdf_parse": {
"paper_id": "S14-1016",
"_pdf_hash": "",
"abstract": [
{
"text": "We explore the novel task of identifying latent attributes in video scenes, such as the mental states of actors, using only large text collections as background knowledge and minimal information about the videos, such as activity and actor types. We formalize the task and a measure of merit that accounts for the semantic relatedness of mental state terms. We develop and test several largely unsupervised information extraction models that identify the mental states of human participants in video scenes. We show that these models produce complementary information and their combination significantly outperforms the individual models as well as other baseline methods. This work is licenced under a Creative Commons Attribution 4.0 International License.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "\"Labeling a narrowly avoided vehicular manslaughter as approach(car, person) is missing something.\" 1 The recognition of activities, participants, and objects in videos has advanced considerably in recent years (Li et al., 2010; Poppe, 2010; Weinland et al., 2011; Yang and Ramanan, 2011; Ng et al., 2012) . However, identifying latent attributes of scenes, such as the mental states of human participants, has not been addressed. Latent attributes matter: If a video surveillance system detects one person chasing another, the response from law enforcement should be radically different if the people are happy (e.g., children playing) or afraid and angry (e.g., a person running from an assailant).",
"cite_spans": [
{
"start": 211,
"end": 228,
"text": "(Li et al., 2010;",
"ref_id": "BIBREF11"
},
{
"start": 229,
"end": 241,
"text": "Poppe, 2010;",
"ref_id": "BIBREF23"
},
{
"start": 242,
"end": 264,
"text": "Weinland et al., 2011;",
"ref_id": "BIBREF29"
},
{
"start": 265,
"end": 288,
"text": "Yang and Ramanan, 2011;",
"ref_id": "BIBREF30"
},
{
"start": 289,
"end": 305,
"text": "Ng et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Attributes that are latent in visual representations are often explicit in textual representations. This suggests a novel method for inferring latent attributes: Use explicit features of videos to query text corpora, and from the resulting texts extract attributes that are latent in the videos, such as mental states. The contributions of this work are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "1: We formalize the novel task of latent attribute identification from video scenes, focusing on the identification of actors' mental states. The input for the task is contextual information about the scene, such as detections about the activity (e.g., chase) and actor types (e.g., policeman or child), and the output is a distribution over mental state labels. We show that gold standard annotations for this task can be reliably generated using crowd sourcing. We define a novel evaluation measure, called constrained weighted similarity-aligned F 1 score, that accounts for both the differences between mental state distributions and the semantic relatedness of mental state terms (e.g., partial credit is given for irate when the target is angry).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose several robust and largely unsupervised information extraction (IE) models for identifying the mental state labels of human participants in a scene, given solely the activity and actor types: a lexical semantic (LS) model that extracts mental state labels that are highly similar to the context of the scene in a latent, conceptual vector space; and an information retrieval (IR) model that identifies labels commonly appearing in sentences related to the explicit scene context. We show that these models are complementary and their combination performs better than either model, alone.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2:",
"sec_num": null
},
{
"text": "3: Furthermore, we show that an event-centric model that focuses on the mental state labels of the participants in the relevant event (identified using syntactic patterns and coreference resolution) outperforms the above shallower models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2:",
"sec_num": null
},
{
"text": "As far as we know, the task proposed here is novel. We can, however, review work relevant to each part of the problem and our solution. Mental state inference is often formulated as a classification problem, where the goal is to predict target mental state labels based on low-level sensory input data. Most solutions try to learn classification models based on large amounts of training data, while some require human engineering of domain knowledge. Hidden Markov Models (HMMs) and Dynamic Bayesian Networks (DBNs) are popular representations because they can model the temporal evolution of mental states. For instance, the mental states of students can be inferred from unintentional body gestures using a DBN (Abbasi et al., 2009) . Likewise, an HMM can also be used to model the emotional states of humans (Liu and Wang, 2011) . Some solutions combine HMMs and DBNs in a Bayesian inference framework to yield a multi-layer representation that can do realtime inference of complex mental and emotional states (El Kaliouby and Robinson, 2004; Baltrusaitis et al., 2011) . Our work differs from these approaches in several ways: It is mostly unsupervised, multi-modal, and requires little training.",
"cite_spans": [
{
"start": 710,
"end": 735,
"text": "DBN (Abbasi et al., 2009)",
"ref_id": null
},
{
"start": 812,
"end": 832,
"text": "(Liu and Wang, 2011)",
"ref_id": "BIBREF12"
},
{
"start": 1018,
"end": 1046,
"text": "Kaliouby and Robinson, 2004;",
"ref_id": "BIBREF5"
},
{
"start": 1047,
"end": 1073,
"text": "Baltrusaitis et al., 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Relevant video processing technology includes object detection (e.g., (Felzenszwalb et al., 2008) ), person detection, and pose detection (e.g., (Yang and Ramanan, 2011) ). Many tracking algorithms have been developed, such as group tracking (McKenna et al., 2000) , tracking by learning appearances (Ramanan et al., 2007) , and tracking in 3D space (Giebel et al., 2004; Brau et al., 2013) . For human action recognition, current state-of-the-art techniques are capable of achieving near perfect performance on the commonly used KTH Actions dataset (Schuldt et al., 2004) and high performance rates on other more challenging datasets (O'Hara and Draper, 2012; Sadanand and Corso, 2012) .",
"cite_spans": [
{
"start": 70,
"end": 97,
"text": "(Felzenszwalb et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 145,
"end": 169,
"text": "(Yang and Ramanan, 2011)",
"ref_id": "BIBREF30"
},
{
"start": 242,
"end": 264,
"text": "(McKenna et al., 2000)",
"ref_id": "BIBREF15"
},
{
"start": 300,
"end": 322,
"text": "(Ramanan et al., 2007)",
"ref_id": "BIBREF24"
},
{
"start": 350,
"end": 371,
"text": "(Giebel et al., 2004;",
"ref_id": "BIBREF9"
},
{
"start": 372,
"end": 390,
"text": "Brau et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 550,
"end": 572,
"text": "(Schuldt et al., 2004)",
"ref_id": "BIBREF27"
},
{
"start": 635,
"end": 660,
"text": "(O'Hara and Draper, 2012;",
"ref_id": "BIBREF21"
},
{
"start": 661,
"end": 686,
"text": "Sadanand and Corso, 2012)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To extract mental state information from texts, one might use any or all of the technologies of natural language processing, so a complete review of relevant technologies is impossible, here. Of immediate relevance is the work of de Marneffe et al. (2010), which identified the latent meaning behind scalar adjectives (e.g., which ages people have in mind when talking about \"little kids\"). The authors learned these meanings by extracting scalars, such as children's ages, that were commonly collocated with phrases, such as \"little kids,\" in web documents. Mohtarami et al. (2011) tried to infer yes/no answers from indirect yes/no question-answer pairs (IQAPs) by predicting the uncertainty of sentiment adjectives in indirect answers. Their method employs antonyms, synonyms, word sense disambiguation as well as the semantic association between the sentiment adjectives that appear in the IQAP to assign a degree of certainty to each answer. Sokolova and Lapalme (2011) further showed how to learn a model for predicting the opinions of users based on their written contents, such as reviews and product descriptions, on the Web. Gabbard et al. (2011) found that coreference resolution can significantly improve the recall rate of relations extraction without much expense to the precision rate.",
"cite_spans": [
{
"start": 559,
"end": 582,
"text": "Mohtarami et al. (2011)",
"ref_id": "BIBREF19"
},
{
"start": 947,
"end": 974,
"text": "Sokolova and Lapalme (2011)",
"ref_id": "BIBREF28"
},
{
"start": 1135,
"end": 1156,
"text": "Gabbard et al. (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work builds on these efforts by combining information retrieval, lexical semantics, and event extraction to extract latent scene attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "For the experiments in this paper, we focus solely on videos containing chase scenes. Chases often invoke clear mental state inferences, and depending on context can suggest very different mental state distributions for the actors involved.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We compiled a video dataset of 26 chase videos found on the Web. Of these, five involve police officers, seven involve children, four show sportsrelated scenes, and twelve describe different chase scenarios involving civilian adults (two videos involve children playing sports). The average duration of the dataset is 8.8 seconds with a range of [4, 18] . Most videos involve a single chaser and a single chasee (a person being chased) while a few have several chasers and/or chasees.",
"cite_spans": [
{
"start": 346,
"end": 349,
"text": "[4,",
"ref_id": null
},
{
"start": 350,
"end": 353,
"text": "18]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Video Corpus",
"sec_num": "3.1"
},
{
"text": "For each video, we used Amazon Mechanical Turk (MTurk) to identify both the actors and their mental states. Each worker was asked to view a video in its entirety before answering some questions about the scene. We give no prior training to the workers. The questions were carefully phrased to apply to all participants of a particular role, for example all chasers (if there are more than one). We also ask obvious validation questions about the participants in each role (e.g., are the chasers running towards the camera?) and use the answers to these questions to filter out poor responses. In gen-eral, we found that most responses were good and only a few incomplete submissions were rejected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Corpus",
"sec_num": "3.1"
},
{
"text": "In the first experiment, we asked MTurk workers to select the actor types and various other detections from a predefined list of tags. This labeling task is a proxy for a computer vision detection system that functions at a human level of performance. Indeed, we restricted the actor type labels to a set that can be reasonably expected from automatic detection algorithms: person, police officer, child, and (non-human) object. For instance, police officers often wear distinctive color uniforms that can be learned using the Felzenszwalb detector (Felzenszwalb et al., 2008) , whereas children can be reliably differentiated by their heights under a 3D-tracking model (Brau et al., 2013) . Each video was annotated by three different workers and the union of their annotations is produced. The overall accuracy of the annotation was excellent. The MTurk workers correctly identified the important actors in every video.",
"cite_spans": [
{
"start": 549,
"end": 576,
"text": "(Felzenszwalb et al., 2008)",
"ref_id": "BIBREF6"
},
{
"start": 670,
"end": 689,
"text": "(Brau et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Video Corpus",
"sec_num": "3.1"
},
{
"text": "Next, we collected a gold standard list of mental state labels for each video by asking MTurk workers to identify all applicable mental state adjectives for the actors involved. We used a text-box to allow for free-form input. Studies have shown that people of different cultures can perceive emotions very differently, and having forced choice options cannot always capture their true perception (Gendron et al., 2014) . Therefore, we did not restrict the response of the workers in any way. Workers could abstain from answering if they felt the video was too ambiguous. Each video was evaluated by ten different workers. We converted each term provided to the closest adjective form if possible. Terms with no equivalent adjective forms were left in place. On rare occasions, workers provided sentence descriptions despite being asked for single-word adjectives. These sentences were either removed, or collapsed into a single word if appropriate. The overall quality of the annotations was good and generally followed common intuition. Asides from the frequently used terms, we also received some colorful (yet informative) descriptions, like incredulous and vindictive. In general, chases involving police scenarios often contained violent and angry states while chases involving children received more cheerful labels. There were unexpected descriptions, such as annoy for a playful chase between two children. Upon review of the video, we agreed that one child did indeed look annoyed. Thus, the resulting descriptions were subjective, but very few were hard to rationalize. By aggregating the answers from the workers, we generated a gold standard distribution of mental state terms for each video. 2",
"cite_spans": [
{
"start": 397,
"end": 419,
"text": "(Gendron et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Video Corpus",
"sec_num": "3.1"
},
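{
"text": "A minimal Python sketch (ours, not the authors' code) of one way to aggregate the per-worker labels into a gold standard distribution. Simple count normalization is assumed, since the paper does not spell out the exact aggregation scheme, and the function and variable names are illustrative only:
from collections import Counter

def gold_distribution(worker_labels):
    # worker_labels: one list of mental state adjectives per MTurk worker.
    counts = Counter(label for labels in worker_labels for label in labels)
    total = sum(counts.values())
    return {label: c / total for label, c in counts.items()}

# Example: three workers annotating one playful chase video.
print(gold_distribution([['happy', 'playful'], ['happy'], ['annoyed', 'happy']]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Video Corpus",
"sec_num": "3.1"
},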
{
"text": "The text corpus used for our models is the English Gigaword 5th Edition corpus 3 , made available by the Linguistics Data Consortium and indexed by Lucene 4 . It is a comprehensive archive of newswire text data (approximately 26 GB), acquired over several years. It is in this corpus that we expect to find mental state terms cued by contextual information from videos.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Corpus",
"sec_num": "3.2"
},
{
"text": "We developed several individual models based on the neighborhood paradigm, that is, the hypothesis that relevant mental state labels will appear \"near\" text cued by the visual features of a scene.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},
{
"text": "The models take as input the context extracted from a video scene, defined simply as a list of \"activity and actor-type\" tuples (e.g., (chase, police)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},
{
"text": "Multiple actor types will result in multiple tuples for a video. The actors can be either a person, a policeman, a child, or a (non-human) object. If the detections describe the actor as both a person and a child, or a person and a policeman, we automatically remove the person label as it is a Word-Net (Miller, 1995) hypernym of both child and policeman. For each human actor type, we further increase our coverage by retrieving the synonym set (synset) of its most frequent sense (i.e., sense #1) from WordNet. For example, a chase involving a policeman would generate the following tuples: (chase, policeman) and (chase, officer).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},
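{
"text": "A minimal sketch of this query-tuple construction, using NLTK's WordNet interface for illustration (the paper specifies WordNet but not a particular toolkit; the function name and the exact output are our assumptions):
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data

def query_tuples(activity, actor_types):
    # Drop the generic 'person' label when a more specific human type is present,
    # since 'person' is a WordNet hypernym of both 'child' and 'policeman'.
    if 'person' in actor_types and any(a in actor_types for a in ('child', 'policeman')):
        actor_types = [a for a in actor_types if a != 'person']
    tuples = []
    for actor in actor_types:
        terms = {actor}
        synsets = wn.synsets(actor, pos=wn.NOUN)
        if synsets:  # expand with the lemmas of the most frequent sense (sense #1)
            terms.update(l.name().replace('_', ' ') for l in synsets[0].lemmas())
        tuples.extend((activity, t) for t in sorted(terms))
    return tuples

print(query_tuples('chase', ['person', 'policeman']))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},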
{
"text": "We call these query tuples because they are used to query text for sentences that -if all goes wellwill contain relevant mental state labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},
{
"text": "Given query tuples, our models use an initial seed set of 160 mental state adjectives to produce a single distribution over mental state labels, referred to as the response distribution, for each video. The seed set is compiled from popular mental and emotional state dictionaries, including the Profile of Mood States (POMS) (McNair et al., 1971 ) and Plutchik's wheel of emotion. We also included frequently used labels gathered from synsets found in WordNet (see Table 1 for examples). Note that the gold standard annotations produced by MTurk workers (Sec. 3) was not a source for this set, nor was it restricted to these terms.",
"cite_spans": [
{
"start": 326,
"end": 346,
"text": "(McNair et al., 1971",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 466,
"end": 473,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Neighborhood Models",
"sec_num": "4"
},
{
"text": "Our first model uses the recurrent neural network language model (RNNLM) of Mikolov et al. (2013) to project both mental state labels and query tuples into a latent conceptual space. Similarity is then trivially computed as the cosine similarity between these vectors. In all of our experiments, we used a RNNLM computed over the Gigaword corpus with 600-dimensional vectors. For this vector space (vec) model, we separate the query tuples into different levels of back-off context. The first level includes the set of activity types as singleton context tuples, e.g., (chase), while the second level includes all (activity, actor) context tuples. Hence, each query tuple will yield two different context tuples, one for each back-off level. For each context tuple with multiple terms, such as (chase, policeman), we find the vector representation for the context by aggregating the vectors representing the search terms:",
"cite_spans": [
{
"start": 76,
"end": 97,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "vec(chase, policeman) = vec(chase) + vec(policeman) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "The vector representation for a singleton context tuple is just the vector of the single search term. We then calculate the distance of each mental state label m to the normalized vector representation of the context tuple by computing the cosine similarity score between the two vectors:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "cos(\u0398 m ) = vec(m) \u2022 vec(context tuple) ||vec(m)|| ||vec(context tuple)|| .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "The hypothesis here is that mental state labels that are related to the search context will have a RNNLM vector that is closer to the context tuple vector, resulting in a high cosine similarity score. Because the number of latent dimensions is relatively small (when compared to vocabulary size), cosine similarity scores in this latent space tend to be close. To further separate these scores, we raise them to an exponential power: score(m) = e cos(\u0398m)+1 \u2212 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
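{
"text": "A minimal sketch of this scoring step (ours, not the authors' code); it assumes the word vectors are available in a plain dict named vectors, and the function names are illustrative:
import numpy as np

def vec_model_scores(context_terms, mental_state_labels, vectors):
    # vectors: dict mapping words to their RNNLM (or other embedding) vectors.
    ctx = np.sum([vectors[t] for t in context_terms], axis=0)  # e.g. vec(chase) + vec(policeman)
    ctx = ctx / np.linalg.norm(ctx)
    scores = {}
    for m in mental_state_labels:
        v = vectors[m]
        cos = float(np.dot(v, ctx) / np.linalg.norm(v))  # cosine similarity to the context tuple
        scores[m] = np.exp(cos + 1.0) - 1.0              # sharpen: score(m) = e^(cos+1) - 1
    total = sum(scores.values())
    return {m: s / total for m, s in scores.items()}     # normalize into a distribution",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},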
{
"text": "The processing of each context tuple yields 160 different scores, one for each mental state label. We normalize these scores to form a single distribution of scores for each context tuple. The distributions are then integrated into a single distribution representative of the complete activity as follows: (a) the distributions at each context back-off level are averaged to generate a single distribution per level -for the second level (which includes activity and actor types), it means distributions for all (activity, actor) tuples are averaged, whereas the first level only has a single distribution from the singleton activity tuple (chase); and (b) distributions for the different levels are linearly interpolated, similar to the back-off strategy of (Collins, 1997) . Let e 1 and e 2 represent the weights of some mental state label m from the average distribution at the first and second level, respectively. Then the interpolated distribution score e for m is:",
"cite_spans": [
{
"start": 759,
"end": 774,
"text": "(Collins, 1997)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "e = \u03bbe 1 + (1 \u2212 \u03bb)e 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
{
"text": "Compiling the distribution scores for each m produces the final distribution representing the activity modeled. We prune this final distribution by taking the top ranked items that make up some \u03b3 proportion of the distribution. We delay the discussion of how \u03b3 is tuned to Section 6. The final pruned distribution is normalized to produce the response distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},
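{
"text": "A minimal sketch (ours) of the averaging, interpolation, and \u03b3-pruning steps, assuming each distribution is a dict over the same 160 labels:
def average(distributions):
    # Average several distributions defined over the same label set.
    return {m: sum(d[m] for d in distributions) / len(distributions) for m in distributions[0]}

def interpolate(level1, level2, lam):
    # Linear back-off interpolation: e = lambda * e1 + (1 - lambda) * e2.
    return {m: lam * level1[m] + (1.0 - lam) * level2[m] for m in level1}

def prune(dist, gamma):
    # Keep the top-ranked labels covering a gamma fraction of the mass, then renormalize.
    kept, mass = {}, 0.0
    for m, p in sorted(dist.items(), key=lambda kv: kv[1], reverse=True):
        if mass >= gamma:
            break
        kept[m] = p
        mass += p
    total = sum(kept.values())
    return {m: p / total for m, p in kept.items()}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-off Interpolation in Vector Space",
"sec_num": "4.1"
},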
{
"text": "Our second model, the sent model, extracts mental state labels based on the likelihood that they appear in sentences cued by query tuples. For each tuple, we estimate the conditional probability that we will see a mental state label m in a sentence, where m is from the seed set, given that we already observed the desired activity and actor type in the same sentence: P (m|activity, actor). In this case, we refer to the sentence length as the neighborhood window. Furthermore, all terms must appear as the correct part-of-speech (POS): m must appear as an adjective or verb, the activity as a verb, and the actor as a noun. (Mental state adjectives are allowed to appear as verbs because some are often mis-tagged as verbs; e.g., agitated, determined, welcoming.) We used Stanford's CoreNLP toolkit for tokenization and POS tagging. 5 Note that this probability is similar to a trigram probability in POS tagging, except the triples need not form an ordered sequence but must appear in the same sentence and under the correct POS tag. Unfortunately, we cannot always compute this trigram probability directly from the corpus because there might be too few instances of each trigram to compute a probability reliably. As is common, we instead estimate it as a linear interpolation of unigrams, bigrams, and trigrams. We define the maximum likelihood probabilitiesP , derived from relative frequencies f , for the unigrams, bigrams, and trigrams as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
{
"text": "P (m) = f (m) N P (m|activity) = f (m, activity) f (activity) P (m|activity, actor) = f (m, activity, actor) f (activity, actor)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
{
"text": "for all mental state labels m, activities, and actor types in our queries. N is the total number of tokens in the corpus. The aforementioned POS requirement is enforced: f (m) is the number of occurrences of m as an adjective or verb. We defin\u00ea P = 0 if the corresponding numerator and denominator are zero. The desired trigram probability is then estimated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
{
"text": "P (m|activity, actor) = \u03bb 1P (m) + \u03bb 2P (m|activity) + \u03bb 3P (m|activity, actor) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
{
"text": "As \u03bb 1 + \u03bb 2 + \u03bb 3 = 1, P represents a probability distribution. We use the deleted interpolation algorithm (Brants, 2000) to estimate one set of lambda values for the model, based on all trigrams.",
"cite_spans": [
{
"start": 108,
"end": 122,
"text": "(Brants, 2000)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
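{
"text": "A minimal sketch (ours) of the interpolated trigram estimate; the count tables and their keys are assumptions about how the sentence-level, POS-filtered frequencies might be stored, and the lambdas are taken as given from deleted interpolation:
def interpolated_trigram(m, activity, actor, counts, lambdas, n_tokens):
    # counts: precomputed sentence-level co-occurrence frequencies, e.g.
    #   counts['m'][m], counts['act'][activity], counts['m,act'][(m, activity)],
    #   counts['act,actor'][(activity, actor)], counts['m,act,actor'][(m, activity, actor)].
    # lambdas: (l1, l2, l3) from deleted interpolation (Brants, 2000), with l1 + l2 + l3 = 1.
    l1, l2, l3 = lambdas
    p1 = counts['m'].get(m, 0) / n_tokens
    p2 = (counts['m,act'].get((m, activity), 0) / counts['act'][activity]
          if counts['act'].get(activity) else 0.0)
    p3 = (counts['m,act,actor'].get((m, activity, actor), 0) / counts['act,actor'][(activity, actor)]
          if counts['act,actor'].get((activity, actor)) else 0.0)
    return l1 * p1 + l2 * p2 + l3 * p3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},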
{
"text": "For each query tuple generated in a video, 160 different trigrams are computed, one for each mental state label in the seed set, resulting in 160 conditional probability scores. We normalize these scores into a single distribution -the mental state distribution for that query tuple. We then combine all resulting distributions, one from each query tuple, and take the average to produce a single distribution over mental state labels for the video. As before, we prune this distribution by taking the top-ranked items that cover a large fraction \u03b3 of total probability. The pruned distribution is renormalized to yield the final response distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sentence Co-occurrence with Deleted Interpolation",
"sec_num": "4.2"
},
{
"text": "The sent model has two limitations. On one hand, it is too sparse: the single sentence neighborhood window is too small to reliably estimate the frequencies of trigrams for the probabilities of mental state terms. On the other hand, it may be too lenient, as it extracts all mental state mentions appearing in the same sentence with the activity, or event, under consideration, regardless if they apply to this event or not. We address these limitations next with an event-centric model (event).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event-centric with Deleted Interpolation",
"sec_num": "4.3"
},
{
"text": "Intuitively, the event model focuses on the mental state labels of event participants. Formally, these mental state terms are extracted as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event-centric with Deleted Interpolation",
"sec_num": "4.3"
},
{
"text": "1: We identify event participants (or actors). We do this by analyzing the syntactic dependencies of sentences containing the target verb (e.g., chase) to find the subject and object. In most cases, the nominal subject of the verb chase is the chaser and the direct object is the person being chased. We implemented additional patterns to model passive voice and other exceptions. We used Stanford's CoreNLP toolkit for syntactic dependency parsing and the downstream coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event-centric with Deleted Interpolation",
"sec_num": "4.3"
},
{
"text": "Once the phrases that point to actors are identified, we identify all mentions of these actors in the entire document by traversing the coreference chains containing the phrases extracted in the previous step. The sentences traversed in the chains define the neighborhood area for this model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "2:",
"sec_num": null
},
{
"text": "Lastly, we identify the mental state terms of event participants using a second set of syntactic patterns. First, we inspect several copulative verbs, such as to be and feel, and extract mental state labels from these structures if the corresponding subject is one of the mentions detected above. Second, we search for mental states along adjectival modifier relations, where the head is an actor mention. For all patterns, we make sure to filter for only mental state complements belonging to the initial seed list. The same POS restriction as in the other models also applies. We increment the joint frequency f for the n-gram once for each neighborhood that properly contain all search terms from the n-gram in the correct POS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
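{
"text": "A minimal sketch of steps 1 and 3 above. The paper uses Stanford CoreNLP; spaCy is used here purely for illustration, the copular patterns and the coreference expansion of step 2 are omitted, and the function name and example sentence are ours:
import spacy

nlp = spacy.load('en_core_web_sm')  # illustration only; the paper uses Stanford CoreNLP

def actor_mental_states(sentence, seed_labels, verb_lemma='chase'):
    # Step 1 (simplified): subjects/objects of the target verb are the chaser/chasee.
    # Step 3 (simplified): mental states appear as adjectival modifiers of those mentions.
    doc = nlp(sentence)
    states = []
    for tok in doc:
        if tok.pos_ == 'VERB' and tok.lemma_ == verb_lemma:
            actors = [c for c in tok.children if c.dep_ in ('nsubj', 'nsubjpass', 'dobj')]
            for actor in actors:
                states += [a.lemma_ for a in actor.children
                           if a.dep_ == 'amod' and a.lemma_ in seed_labels]
    return states

print(actor_mental_states('The furious officer chased the terrified suspect.',
                          {'furious', 'terrified', 'afraid'}))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Event-centric with Deleted Interpolation",
"sec_num": null
},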
{
"text": "The event model addresses both limitations of the sent model: it avoids the lenient extraction of mental state labels by focusing on labels associated with event participants; it addresses sparsity by considering all mentions of event participants in a document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "To understand the impact of this model, we compare it against two additional baselines. The first baseline investigates the importance of focusing on mental state terms associated with event participants. This model, called coref, implements the first two steps of the above algorithm, but instead of extracting only mental state terms associated with event actors (last step), it considers all mentions appearing anywhere in the coreference neighborhood. That is, all unique sentences traversed by the relevant coreference chains are first pieced together to define a single neighborhood for a given document; then the relative joint frequencies of n-grams are computed by incrementing f once for each neighborhood that contains all terms with correct POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "The second baseline analyzes the importance of coreference resolution to our problem. This model is similar to sent, with the modification that it increases the size of the neighborhood window to include the immediate neighbors of target sentences that contain activity labels. We call this the win-n model: The window around a target verb contains 2n + 1 sentences. We build the context neighborhood by concatenating all target sentences and their windows together for a given document. This defines a single neighborhood for each document. This contrasts with the sent model, in which the neighborhood is defined for each sentence containing the activity label in the document, resulting in several possible neighborhoods in a document. The joint frequency f for each n-gram -where n > 1 -is computed similarly with the coref model: it is incremented once for each neighborhood that contains all the terms from the n-gram in the correct POS. Frequencies for unigrams are computed similar to sent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "As before, 160 different trigrams are generated for each query tuple, one for each mental state label in the seed set, resulting in 160 conditional probability scores. We similarly combine these scores and generate a single pruned distribution as the response for each of the model above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "3:",
"sec_num": null
},
{
"text": "(irate, 0.8), (afraid, 0.2) R 1 (angry, 0.6), (mad, 0.4) R 2 (irate, 0.2), (afraid, 0.8) R 3 (mad, 0.4), (irate, 0.4), (scared, 0.2) Table 2 : We show an example gold standard distribution G and several candidate response distributions to be matched against G. Here, R 3 best matches the shape and meaning of G, because (irate, mad) and (afraid, scared) are close synonyms. R 2 appears to match G semantically, but matches its shape poorly. R 1 misses one of the mental state labels, afraid, but contains labels that are semantically close to the weightiest term in G.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "G",
"sec_num": null
},
{
"text": "We combined the results from the event and vec models to produce an ensemble model (ens) which, for a mental state label m, returns the average of m's scores according to the response distributions of the two individual models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble Model",
"sec_num": "4.4"
},
{
"text": "Let R denote the response distribution over mental state labels produced for a single video by one of the models described in the previous section, and let G denote the gold standard distribution produced for the same video by MTurk workers. If R is similar to G then our models produce similar mental state terms as the workers. There are many ways to compare distributions (e.g., KL distance, chi-square statistics) but these give bad results when distributions are sparse. More importantly, for our purposes, the measures that compare the shapes of distributions do not allow semantic comparisons at the level of distribution elements. Suppose R assigns high scores to angry and mad, only, while G assigns a high score to happy, only. Clearly, R is wrong. But if instead G had assigned a high score to irate, only, then R would be more right than wrong because, at the level of the individual elements, angry and mad are similar to irate but not similar to happy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5"
},
{
"text": "We describe a series of measures, starting with the familiar F 1 score, and discuss their applicability. To illustrate the effectiveness of each measure, we will use the examples shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 188,
"end": 195,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Measures",
"sec_num": "5"
},
{
"text": "The F 1 score measures the similarity between two sets of elements, R and G. F 1 = 1 when R = G and F 1 = 0 when R and G share no elements. F 1 is the harmonic mean of precision and recall:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F 1 Score",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "precision = |R \u2229 G| |R| , recall = |R \u2229 G| |G| ,",
"eq_num": "(1)"
}
],
"section": "F 1 Score",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "F 1 = 2 \u2022 precision \u2022 recall precision + recall .",
"eq_num": "(2)"
}
],
"section": "F 1 Score",
"sec_num": "5.1"
},
{
"text": "The F 1 score penalizes the responses in Table 3 that include semantically similar labels to those in G, and fails to reflect the weights of the labels in G and R.",
"cite_spans": [],
"ref_spans": [
{
"start": 41,
"end": 48,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "F 1 Score",
"sec_num": "5.1"
},
{
"text": "Although the standard F 1 does not immediately fit our needs, it is a good starting point. We can incorporate the semantic similarity of distribution elements by generalizing the formulas for precision and recall as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "precision = 1 |R| r\u2208R max g\u2208G \u03c3(r, g) , recall = 1 |G| g\u2208G max r\u2208R \u03c3(r, g) ,",
"eq_num": "(3)"
}
],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "where \u03c3 \u2208 [0, 1] is a function that yields the similarity between two elements. The standard F 1 has:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "\u03c3(r, g) = 1 , if r = g 0 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "otherwise , but clearly \u03c3 can be defined to take values proportional to the similarity of r and g. We can choose from a wide range of semantic similarity and relatedness measures that are based on Word-Net (Pedersen et al., 2004) . The recent RNNLM of Mikolov opens the door to even more similarity measures based on vector space representations of words (Mikolov et al., 2013) . After experimentations, we settled on one proposed by Hirst and St-Onge (1998) . It represents two lexicalized concepts as semantically close if their WordNet synsets are connected by a path that is not too long and that \"does not change direction too often\" (Hirst and St-Onge, 1998) . We chose this metric because it has a finite range, accommodates numerous POS pairs, and works well in practice. Given the generalized precision and recall formulas in Eq 3, our similarity-aligned (SA) F 1 score can be computed in the usual way, as the harmonic mean of precision and recall (Eq 2).",
"cite_spans": [
{
"start": 206,
"end": 229,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF22"
},
{
"start": 355,
"end": 377,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 434,
"end": 458,
"text": "Hirst and St-Onge (1998)",
"ref_id": "BIBREF10"
},
{
"start": 639,
"end": 664,
"text": "(Hirst and St-Onge, 1998)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
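{
"text": "A minimal sketch (ours) of the similarity-aligned precision, recall, and F 1 of Eq 3; the similarity function is passed in as a parameter, since the WordNet-based Hirst and St-Onge relatedness itself is not implemented here:
def sa_f1(response, gold, sigma):
    # response, gold: sets of mental state labels; sigma(a, b) -> similarity in [0, 1].
    precision = sum(max(sigma(r, g) for g in gold) for r in response) / len(response)
    recall = sum(max(sigma(r, g) for r in response) for g in gold) / len(gold)
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},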
{
"text": "SA-F 1 is inspired by the Constrained Entity-Aligned F-Measure (CEAF) metric proposed",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "F 1 SA-F 1 CWSA-F 1 p r f 1 p r f 1 p r f 1 R 1 0 0 0 1 .5 2 3 1 .8 .89 R 2 1 1 1 1 1 1 .4 .4 .4 R 3 1 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": ".5 .4 1 1 1 1 1 1 Table 3 : The precision (p), recall (r), and F 1 (f 1 ) scores under various evaluation models are presented for the examples from Table 2 . Suppose that \u03c3(irate, angry) = \u03c3(irate, mad) = \u03c3(afraid, scared) = 1, with \u03c3 of any two identical strings being 1, and \u03c3 of all other pairs are 0.",
"cite_spans": [],
"ref_spans": [
{
"start": 18,
"end": 25,
"text": "Table 3",
"ref_id": null
},
{
"start": 149,
"end": 156,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "by (Luo, 2005) for coreference resolution. CEAF computes an optimal one-to-one mapping between subsets of reference and system entities before it computes recall, precision and F. Similarly, SA-F 1 finds optimal mappings between the labels of the two sets based on \u03c3 (this is what the max terms in Eq 3 do). Table 3 shows that SA-F 1 correctly rewards the use of synonyms. The high scores given to R 2 , however, indicate that it does not measure the similarity between distribution shapes.",
"cite_spans": [
{
"start": 3,
"end": 14,
"text": "(Luo, 2005)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 308,
"end": 315,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Similarity-Aligned F 1 Score",
"sec_num": "5.2"
},
{
"text": "Let R(r) and G(r) be the probabilities of label r in the R and G distributions, respectively. Let \u03c3 * S ( ) denote the best similarity score achievable when comparing elements from set S to using the similarity function \u03c3. That is, \u03c3 * S ( ) = max e\u2208S \u03c3( , e). We can easily weight \u03c3 * S ( ) by the probability of . For example, we might redefine precision as r\u2208R R(r) \u2022 \u03c3 * G (r). However, this would not account for the probability of r in the gold standard distribution, G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},
{
"text": "An analogy might help here: Suppose we have an unknown \"mystery bag\" of 100 colored pencils that we will try to match with a \"response bag\" of pencils. If we fill our response bag with 100 crimson pencils, while the mystery bag contains only 25 crimson pencils, then our precision score should get points only for the first 25 pencils, while the remaining 75 in the response bag should not be rewarded. For recall, the reward given for each color in the mystery bag is capped by the number of pencils of that color in the response bag. The analogy is complete when we consider that crimson pencils should perhaps be partially rewarded when matched by cardinal, rose or cerise pencils. In other words, a similarity mea-sure should account for an accumulated mass of synonyms. Let M S ( ) denote the subset of terms from S that have the best similarity score to : M S ( ) = {e | \u03c3( , e) = \u03c3 * S ( ), \u2200e \u2208 S} .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},
{
"text": "We define new forms of precision and recall as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},
{
"text": "p = r\u2208R min \uf8eb \uf8ed R(r), e\u2208M G (r) G(e) \uf8f6 \uf8f8 \u03c3 * G (r) , r = g\u2208G min \uf8eb \uf8ed G(g), e\u2208M R (g) R(e) \uf8f6 \uf8f8 \u03c3 * R (g) .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},
{
"text": "(4) The resulting constrained weighted similarityaligned (CWSA) F 1 score is the harmonic mean of these new precision and recall scores. Table 3 shows that CWSA-F 1 yields the most intuitive evaluation of the response distributions, downweighting R 2 in favor of R 3 and R 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},
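{
"text": "A minimal sketch (ours) of the CWSA-F 1 computation from Eq 4; R and G are dicts mapping labels to probabilities, and sigma is again passed in as a parameter:
def cwsa_f1(R, G, sigma):
    def side(A, B):
        # One direction of Eq 4: for each label a in A, cap its reward by the total
        # probability mass of its best-matching labels in B, weighted by the best similarity.
        total = 0.0
        for a, p_a in A.items():
            best = max(sigma(a, b) for b in B)                               # sigma*_B(a)
            mass = sum(p_b for b, p_b in B.items() if sigma(a, b) == best)   # mass of M_B(a)
            total += min(p_a, mass) * best
        return total
    p, r = side(R, G), side(G, R)
    return 2 * p * r / (p + r) if p + r else 0.0
With the \u03c3 assumed in Table 3, this sketch reproduces, for example, p = 1, r = .8, and F 1 \u2248 .89 for R 1 against G.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Weighted Similarity-Aligned F 1 Score",
"sec_num": "5.3"
},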
{
"text": "As described in Section 3, MTurk workers annotated 26 videos by identifying the actor types and mental state labels for each video. The actor types become query tuples of the form (activity, actor) and the mental state labels are compiled into one probability distribution over labels for each video, designated G. The query tuples were provided to our neighborhood models (Sec. 4), which returned a response distribution over mental state labels for each video, designated R.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Procedure",
"sec_num": "6"
},
{
"text": "We selected four videos of the 26 to calibrate the prune parameters \u03b3 and the interpolation parameters \u03bb (Sec. 4). One of these videos contains children, one has police involvement, and two contain adults. We asked additional MTurk workers to annotate these videos, yielding an independent set of annotations to be used solely for calibration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Procedure",
"sec_num": "6"
},
{
"text": "The experimental question is, how well does G match R for each video?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Procedure",
"sec_num": "6"
},
{
"text": "We report the average performance of our models along with two additional baseline methods in Table 4 . The na\u00efve baseline method unif simply binds R to the initial seed set of 160 mental state labels with uniform probability, while the stronger freq baseline uses the occurrence frequency distribution of the labels from the Gigaword corpus (note that only occurrences tagged as adjectives or verbs were counted). All average improvements of the ensemble model over the baseline models are significant (p < 0.01). Significance tests were one-tailed and were based on nonparametric bootstrap resampling with 10, 000 iterations. Using the classical F 1 measure, the coref model scored highest on precision, while the ensemble method did best on F 1 . Not surprisingly, no model can top the baseline methods on recall as both baselines use the entire seed set of 160 terms. Even so, the average recall for the baselines were only .750, which means that the initial seed set did not include words that were used by the MTurk annotators. As we've mentioned, the classical F 1 is misleading because it does not credit synonyms. For example, in one movie, one of our models was rewarded once for matching the label angry and penalized six times for also reporting irate, enraged, raging, upset, furious, and mad. Frequently, our models were penalized for using the terms scared and afraid instead of fearful.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "7"
},
{
"text": "Under the CWSA-F 1 evaluation measure, which correctly accounts for both synonyms and label probabilities, our ensemble model performed best. The average CWSA-F 1 score of the ensemble model improves upon the simple uniform baseline unif by almost 75%, and over the stronger freq baseline by over 40%. The ensemble method also outperforms each individual method in all measured scores. These improvements were also found to be significant. This strongly suggests that the vec and event models are complementary, and not entirely redundant. Furthermore, Table 4 shows that the event model performs considerably better than coref. This result emphasizes the importance of focusing on the mental state labels of event participants rather than considering all mental state terms collocated in the same sentence with an actor or action verb.",
"cite_spans": [],
"ref_spans": [
{
"start": 553,
"end": 560,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "7"
},
{
"text": "Models CWSA-F1 Versus coref p-value win-0 0.388682 \u22120.027512 0.0067 win-1 0.415328 \u22120.000866 0.4629 win-2 0.399777 \u22120.016417 0.0311 win-3 0.392832 \u22120.023362 0.0029 Table 5 : The average CWSA-F 1 scores for the win-n model with different window parameters are shown in comparison to the coref model. The coref model outperformed all tested configurations, though the difference is not significant for n = 1. The p-value based on the average differences were obtained using one-tailed nonparametric bootstrap resampling with 10, 000 iterations. Table 5 explores the effectiveness of coreference resolution in expanding the neighborhood area. The coref model outperformed the simple windowing method under every tested configuration. However, the improvement over windowing with n = 1 is not significant. This can be explained by fact that immediately neighboring sentences are more likely to be related. Moreover, since newswire articles tend to be short, the neighborhoods generated by win-1 tend to be similar to those generated by coref. In general, coref does not do worse than a simple windowing method and has the bonus advantage of providing references to the actors of interest for downstream processes.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 171,
"text": "Table 5",
"ref_id": null
},
{
"start": 543,
"end": 550,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "7"
},
{
"text": "In Table 6 , we show the performance results based on the types of chase scenarios happening in the videos. The average scores under the uniform baseline unif for chase videos involving children and sporting events are lower than for police and other chases. This suggests that our seed set of 160 mental state labels is biased towards the latter types of events, and is not as fit to describe chases involving children.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "7"
},
{
"text": "On average, videos involving police officers show the biggest improvement in the CWSA-F 1 scores over the unif baseline (+0.2693), whereas videos involving children received the lowest gain (+0.1517). We believe this is the effect of the Gigaword text corpus, which is a comprehensive archive of newswire text, and thus is heavily biased towards high-speed and violent chases involving the police. The Gigaword corpus is not the place to find children happily chasing each other. Similarly, sports-related chases, which are also news-worthy, have a higher gain than children's videos on average. Table 6 : The average CWSA-F 1 scores for the ensemble model are shown in comparison to the uniform baseline method, categorize by video types.",
"cite_spans": [],
"ref_spans": [
{
"start": 596,
"end": 603,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results & Discussions",
"sec_num": "7"
},
{
"text": "We introduced the novel task of identifying latent attributes in video scenes, specifically the mental states of actors in chase scenes. We showed that these attributes can be identified by using explicit features of videos to query text corpora, and from the resulting texts extract attributes that are latent in the videos. We presented several largely unsupervised methods for identifying distributions of actors' mental states in video scenes. We defined a similarity measure, CWSA-F 1 , for comparing distributions of mental state labels that accounts for both semantic relatedness of the labels and their probabilities in the corresponding distributions. We showed that very little information from videos is needed to produce good results that significantly outperform baseline methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "In the future, we plan to add more detection types. Additional contextual information from videos (e.g., scene locations) should help improve performance, especially on tougher videos (e.g., videos involving children chases). Moreover, we believe that the initial seed set of mental state labels can be learned simultaneously with the extraction patterns of the event model using a mutual bootstrapping method, similar to that of (Riloff and Jones, 1999) .",
"cite_spans": [
{
"start": 430,
"end": 454,
"text": "(Riloff and Jones, 1999)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Currently, our experiments assume one distribution of mental state labels for each video. They do not distinguish between the mental states of the chaser and chasee, while in reality these participants may be in very different states of mind. Our event model is capable of making this distinction and we will test its performance on this task in the future. We also plan to test the effectiveness of our models with actual computer vision detectors. As a first approximation, we will simulate the noisy nature of detectors by degrading the quality of annotated data. Using artificial noise on ground-truth data, we can simulate the performance of real detectors and test the robustness of our models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "All videos and annotations are available at: http://trananh.github.io/vlsa3 Linguistics Data Consortium catalog no. LDC2011T07 4 Apache Lucene: http://lucene.apache.org",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://nlp.stanford.edu/software/ corenlp.shtml.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Student mental state inference from unintentional body gestures using dynamic Bayesian networks",
"authors": [
{
"first": "Abdul Rehman",
"middle": [],
"last": "Abbasi",
"suffix": ""
},
{
"first": "Matthew",
"middle": [
"N"
],
"last": "Dailey",
"suffix": ""
},
{
"first": "Nitin",
"middle": [
"V"
],
"last": "Afzulpurkar",
"suffix": ""
},
{
"first": "Takeaki",
"middle": [],
"last": "Uno",
"suffix": ""
}
],
"year": 2009,
"venue": "Journal on Multimodal User Interfaces",
"volume": "3",
"issue": "1-2",
"pages": "21--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abdul Rehman Abbasi, Matthew N. Dailey, Nitin V. Afzulpurkar, and Takeaki Uno. 2009. Student men- tal state inference from unintentional body gestures using dynamic Bayesian networks. Journal on Mul- timodal User Interfaces, 3(1-2):21-31, December.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Real-time inference of mental states from facial expressions and upper body gestures",
"authors": [
{
"first": "Tadas",
"middle": [],
"last": "Baltrusaitis",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Mcduff",
"suffix": ""
},
{
"first": "Ntombikayise",
"middle": [],
"last": "Banda",
"suffix": ""
},
{
"first": "Marwa",
"middle": [],
"last": "Mahmoud",
"suffix": ""
},
{
"first": "Rana",
"middle": [],
"last": "El Kaliouby",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Robinson",
"suffix": ""
},
{
"first": "Rosalind",
"middle": [],
"last": "Picard",
"suffix": ""
}
],
"year": 2011,
"venue": "Face and Gesture",
"volume": "",
"issue": "",
"pages": "909--914",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tadas Baltrusaitis, Daniel McDuff, Ntombikayise Banda, Marwa Mahmoud, Rana el Kaliouby, Peter Robinson, and Rosalind Picard. 2011. Real-time inference of mental states from facial expressions and upper body gestures. In Face and Gesture 2011, pages 909-914. IEEE, March.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "TnT: A statistical part-ofspeech tagger",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the sixth conference on Applied natural language processing",
"volume": "",
"issue": "",
"pages": "224--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants. 2000. TnT: A statistical part-of- speech tagger. In Proceedings of the sixth confer- ence on Applied natural language processing, pages 224-231, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bayesian 3D Tracking from monocular video",
"authors": [
{
"first": "Ernesto",
"middle": [],
"last": "Brau",
"suffix": ""
},
{
"first": "Jinyan",
"middle": [],
"last": "Guan",
"suffix": ""
},
{
"first": "Kyle",
"middle": [],
"last": "Simek",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Del Pero",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"Reimer"
],
"last": "Dawson",
"suffix": ""
},
{
"first": "Kobus",
"middle": [],
"last": "Barnard",
"suffix": ""
}
],
"year": 2013,
"venue": "The IEEE International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ernesto Brau, Jinyan Guan, Kyle Simek, Luca Del Pero, Colin Reimer Dawson, and Kobus Barnard. 2013. Bayesian 3D Tracking from monocular video. In The IEEE International Conference on Computer Vision (ICCV), December.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Three generative, lexicalised models for statistical parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "16--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th annual meeting on Association for Computa- tional Linguistics -, pages 16-23, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Real-Time Inference of Complex Mental States from Facial Expressions and Head Gestures",
"authors": [
{
"first": "R",
"middle": [
"El"
],
"last": "Kaliouby",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Robinson",
"suffix": ""
}
],
"year": 2004,
"venue": "2004 Conference on Computer Vision and Pattern Recognition Workshop",
"volume": "",
"issue": "",
"pages": "154--154",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. El Kaliouby and P. Robinson. 2004. Real-Time In- ference of Complex Mental States from Facial Ex- pressions and Head Gestures. In 2004 Conference on Computer Vision and Pattern Recognition Work- shop, pages 154-154. IEEE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A discriminatively trained, multiscale, deformable part model",
"authors": [
{
"first": "Pedro",
"middle": [],
"last": "Felzenszwalb",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Mcallester",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
}
],
"year": 2008,
"venue": "IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pedro Felzenszwalb, David McAllester, and Deva Ra- manan. 2008. A discriminatively trained, multi- scale, deformable part model. In 2008 IEEE Confer- ence on Computer Vision and Pattern Recognition, pages 1-8. IEEE, June.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Coreference for learning to extract relations: yes, Virginia, coreference matters",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Gabbard",
"suffix": ""
},
{
"first": "Marjorie",
"middle": [],
"last": "Freedman",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers",
"volume": "2",
"issue": "",
"pages": "288--293",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Gabbard, Marjorie Freedman, and RM Weischedel. 2011. Coreference for learn- ing to extract relations: yes, Virginia, coreference matters. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2, pages 288-293.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Cultural relativity in perceiving emotion from vocalizations",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Gendron",
"suffix": ""
},
{
"first": "Debi",
"middle": [],
"last": "Roberson",
"suffix": ""
},
{
"first": "Jacoba",
"middle": [],
"last": "Marieta Van Der Vyver",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"Feldman"
],
"last": "Barrett",
"suffix": ""
}
],
"year": 2014,
"venue": "Psychological science",
"volume": "25",
"issue": "4",
"pages": "911--931",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Gendron, Debi Roberson, Jacoba Marieta van der Vyver, and Lisa Feldman Barrett. 2014. Cultural relativity in perceiving emotion from vo- calizations. Psychological science, 25(4):911-20, April.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "A bayesian framework for multi-cue 3d object tracking",
"authors": [
{
"first": "J",
"middle": [],
"last": "Giebel",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Gavrila",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schn\u00f6rr",
"suffix": ""
}
],
"year": 2004,
"venue": "Computer Vision-ECCV 2004",
"volume": "",
"issue": "",
"pages": "241--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J Giebel, DM Gavrila, and C Schn\u00f6rr. 2004. A bayesian framework for multi-cue 3d object track- ing. In Computer Vision-ECCV 2004, pages 241- 252.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Lexical chains as representations of context for the detection and correction of malapropisms",
"authors": [
{
"first": "Graeme",
"middle": [],
"last": "Hirst",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "St-Onge",
"suffix": ""
}
],
"year": 1998,
"venue": "WordNet: An Electronic Lexical Database (Language, Speech, and Communication)",
"volume": "",
"issue": "",
"pages": "305--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graeme Hirst and D St-Onge. 1998. Lexical chains as representations of context for the detection and cor- rection of malapropisms. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database (Language, Speech, and Communication), pages 305-332. The MIT Press.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Object bank: A high-level image representation for scene classification & semantic feature sparsification",
"authors": [
{
"first": "L",
"middle": [
"J"
],
"last": "Li",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
},
{
"first": "E",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2010,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "LJ Li, Hao Su, L Fei-Fei, and EP Xing. 2010. Ob- ject bank: A high-level image representation for scene classification & semantic feature sparsifica- tion. In Advances in Neural Information Processing Systems.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Emotion recognition using hidden Markov models from facial temperature sequence",
"authors": [
{
"first": "Zhilei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Shangfei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2011,
"venue": "ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction -Volume Part II",
"volume": "",
"issue": "",
"pages": "240--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhilei Liu and Shangfei Wang. 2011. Emotion recog- nition using hidden Markov models from facial tem- perature sequence. In ACII'11 Proceedings of the 4th international conference on Affective computing and intelligent interaction -Volume Part II, pages 240-247.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "On coreference resolution performance metrics",
"authors": [
{
"first": "Xiaoqiang",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing -HLT '05",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the confer- ence on Human Language Technology and Empiri- cal Methods in Natural Language Processing -HLT '05, pages 25-32, Morristown, NJ, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Learning the meaning of scalar adjectives",
"authors": [
{
"first": "M C",
"middle": [],
"last": "De Marneffe",
"suffix": ""
},
{
"first": "C D",
"middle": [],
"last": "Manning",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Potts",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "167--176",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "MC De Marneffe, CD Manning, and Christopher Potts. 2010. \"Was it good? It was provocative.\" Learning the meaning of scalar adjectives. In Proceedings of the 48th Annual Meeting of the Association for Com- putational Linguistics, pages 167-176.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Tracking Groups of People. Computer Vision and Image Understanding",
"authors": [
{
"first": "Stephen",
"middle": [
"J."
],
"last": "McKenna",
"suffix": ""
},
{
"first": "Sumer",
"middle": [],
"last": "Jabri",
"suffix": ""
},
{
"first": "Zoran",
"middle": [],
"last": "Duric",
"suffix": ""
},
{
"first": "Azriel",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
},
{
"first": "Harry",
"middle": [],
"last": "Wechsler",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "80",
"issue": "",
"pages": "42--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen J. McKenna, Sumer Jabri, Zoran Duric, Azriel Rosenfeld, and Harry Wechsler. 2000. Tracking Groups of People. Computer Vision and Image Un- derstanding, 80(1):42-56, October.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Profile of Mood States (POMS)",
"authors": [
{
"first": "D M",
"middle": [],
"last": "McNair",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lorr",
"suffix": ""
},
{
"first": "L F",
"middle": [],
"last": "Droppleman",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D M McNair, M Lorr, and L F Droppleman. 1971. Profile of Mood States (POMS).",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--12",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, pages 1-12.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "WordNet: a lexical database for English",
"authors": [
{
"first": "George",
"middle": [
"A."
],
"last": "Miller",
"suffix": ""
}
],
"year": 1995,
"venue": "Communications of the ACM",
"volume": "38",
"issue": "11",
"pages": "39--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41, November.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Predicting the uncertainty of sentiment adjectives in indirect answers",
"authors": [
{
"first": "Mitra",
"middle": [],
"last": "Mohtarami",
"suffix": ""
},
{
"first": "Hadi",
"middle": [],
"last": "Amiri",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 20th ACM international conference on Information and knowledge management -CIKM '11",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitra Mohtarami, Hadi Amiri, Man Lan, and Chew Lim Tan. 2011. Predicting the uncertainty of sentiment adjectives in indirect answers. In Pro- ceedings of the 20th ACM international conference on Information and knowledge management -CIKM '11, page 2485, New York, New York, USA. ACM Press.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Recognizing human gender in computer vision: a survey",
"authors": [
{
"first": "C B",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Y H",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "B M",
"middle": [],
"last": "Goi",
"suffix": ""
}
],
"year": 2012,
"venue": "PRICAI 2012: Trends in Artificial Intelligence",
"volume": "7458",
"issue": "",
"pages": "335--346",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "CB Ng, YH Tay, and BM Goi. 2012. Recognizing hu- man gender in computer vision: a survey. PRICAI 2012: Trends in Artificial Intelligence, 7458:335- 346.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Scalable action recognition with a subspace forest",
"authors": [
{
"first": "S",
"middle": [],
"last": "O'Hara",
"suffix": ""
},
{
"first": "B",
"middle": [
"A."
],
"last": "Draper",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1210--1217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S O'Hara and B. A. Draper. 2012. Scalable action recognition with a subspace forest. In 2012 IEEE Conference on Computer Vision and Pattern Recog- nition, pages 1210-1217. IEEE, June.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "WordNet::Similarity: measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Nineteenth National Conference on Artificial Intelligence (AAAI-04)",
"volume": "",
"issue": "",
"pages": "1024--1025",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, S Patwardhan, and J Michelizzi. 2004. WordNet::Similarity: measuring the relatedness of concepts. In Proceedings of the Nineteenth Na- tional Conference on Artificial Intelligence (AAAI- 04), pages 1024-1025, San Jose, CA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A survey on vision-based human action recognition",
"authors": [
{
"first": "Ronald",
"middle": [],
"last": "Poppe",
"suffix": ""
}
],
"year": 2010,
"venue": "Image and Vision Computing",
"volume": "28",
"issue": "6",
"pages": "976--990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald Poppe. 2010. A survey on vision-based human action recognition. Image and Vision Computing, 28(6):976-990, June.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Tracking people by learning their appearance",
"authors": [
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "David A Forsyth",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Zisserman",
"suffix": ""
}
],
"year": 2007,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "29",
"issue": "",
"pages": "65--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deva Ramanan, David a Forsyth, and Andrew Zisser- man. 2007. Tracking people by learning their ap- pearance. IEEE transactions on pattern analysis and machine intelligence, 29(1):65-81, January.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning dictionaries for information extraction by multi-level bootstrapping",
"authors": [
{
"first": "E",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the sixteenth national conference on Artificial intelligence (AAAI-1999)",
"volume": "",
"issue": "",
"pages": "474--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E Riloff and R Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the sixteenth national conference on Artificial intelligence (AAAI-1999), pages 474- 479.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Action bank: A high-level representation of activity in video",
"authors": [
{
"first": "S",
"middle": [],
"last": "Sadanand",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Corso",
"suffix": ""
}
],
"year": 2012,
"venue": "2012 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1234--1241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Sadanand and J. J. Corso. 2012. Action bank: A high-level representation of activity in video. In 2012 IEEE Conference on Computer Vision and Pat- tern Recognition, pages 1234-1241. IEEE, June.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Recognizing human actions: a local SVM approach",
"authors": [
{
"first": "C",
"middle": [],
"last": "Schuldt",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Laptev",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Caputo",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 17th International Conference on Pattern Recognition",
"volume": "3",
"issue": "",
"pages": "32--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C Schuldt, I Laptev, and B Caputo. 2004. Recognizing human actions: a local SVM approach. In Proceed- ings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004., pages 32-36 Vol.3. IEEE.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning opinions in user-generated web content",
"authors": [
{
"first": "M",
"middle": [],
"last": "Sokolova",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Lapalme",
"suffix": ""
}
],
"year": 2011,
"venue": "Natural Language Engineering",
"volume": "17",
"issue": "04",
"pages": "541--567",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Sokolova and G. Lapalme. 2011. Learning opin- ions in user-generated web content. Natural Lan- guage Engineering, 17(04):541-567, March.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "A survey of vision-based methods for action representation, segmentation and recognition. Computer Vision and Image Understanding",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Weinland",
"suffix": ""
},
{
"first": "Remi",
"middle": [],
"last": "Ronfard",
"suffix": ""
},
{
"first": "Edmond",
"middle": [],
"last": "Boyer",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "115",
"issue": "",
"pages": "224--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Weinland, Remi Ronfard, and Edmond Boyer. 2011. A survey of vision-based methods for action representation, segmentation and recognition. Com- puter Vision and Image Understanding, 115(2):224- 241, February.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Articulated pose estimation with flexible mixtures-of-parts",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
}
],
"year": 2011,
"venue": "CVPR 2011",
"volume": "",
"issue": "",
"pages": "1385--1392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Yang and Deva Ramanan. 2011. Articulated pose estimation with flexible mixtures-of-parts. In CVPR 2011, pages 1385-1392. IEEE, June.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"content": "<table/>",
"text": "Source Example Mental State LabelsPOMSalert, annoyed, energetic, exhausted, helpful, sad, terrified, unworthy, weary, etc. Plutchik angry, disgusted, fearful, joyful/joyous, sad, surprised, trusting, etc. Others agitated, competitive, cynical, disappointed, excited, giddy, happy, inebriated, violent, etc.",
"html": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"content": "<table/>",
"text": "The initial seed set contains 160 mental state labels, compiled from different sources like the popular Profile of Mood States dictionary and Plutchik's wheel of emotion.",
"html": null,
"num": null
},
"TABREF3": {
"type_str": "table",
"content": "<table/>",
"text": "The average evaluation performance across 26 different chase videos are shown against 2 different baselines for all proposed models. Bold font indicates the best score in a given column.",
"html": null,
"num": null
}
}
}
}