|
{ |
|
"paper_id": "S14-1015", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T15:33:13.553499Z" |
|
}, |
|
"title": "See No Evil, Say No Evil: Description Generation from Densely Labeled Images", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Yatskar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Washington Seattle", |
|
"location": { |
|
"postCode": "98195", |
|
"region": "WA", |
|
"country": "USA" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Lucy", |
|
"middle": [], |
|
"last": "Vanderwende", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "", |
|
"affiliation": {}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper studies generation of descriptive sentences from densely annotated images. Previous work studied generation from automatically detected visual information but produced a limited class of sentences, hindered by currently unreliable recognition of activities and attributes. Instead, we collect human annotations of objects, parts, attributes and activities in images. These annotations allow us to build a significantly more comprehensive model of language generation and allow us to study what visual information is required to generate human-like descriptions. Experiments demonstrate high quality output and that activity annotations and relative spatial location of objects contribute most to producing high quality sentences. * This work was conducted at Microsoft Research. 1 While object recognition is improving (ImageNet accuracy is over 90% for 1000 classes) progress in activity recog", |
|
"pdf_parse": { |
|
"paper_id": "S14-1015", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper studies generation of descriptive sentences from densely annotated images. Previous work studied generation from automatically detected visual information but produced a limited class of sentences, hindered by currently unreliable recognition of activities and attributes. Instead, we collect human annotations of objects, parts, attributes and activities in images. These annotations allow us to build a significantly more comprehensive model of language generation and allow us to study what visual information is required to generate human-like descriptions. Experiments demonstrate high quality output and that activity annotations and relative spatial location of objects contribute most to producing high quality sentences. * This work was conducted at Microsoft Research. 1 While object recognition is improving (ImageNet accuracy is over 90% for 1000 classes) progress in activity recog", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Image descriptions compactly summarize complex visual scenes. For example, consider the descriptions of the image in Figure 1 , which vary in content but focus on the women and what they are doing. Automatically generating such descriptions is challenging: a full system must understand the image, select the relevant visual content to present, and construct complete sentences. Existing systems aim to address all of these challenges but use visual detectors for only a small vocabulary of words, typically nouns, associated with objects that can be reliably found. 1 Such systems are blind Figure 1 : An annotated image with human generated sentence descriptions. Each bounding polygon encompasses one or more objects and is associated with a count and text labels.This image has 9 high level objects annotated with over 250 textual labels.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 117, |
|
"end": 125, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 592, |
|
"end": 600, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "to much of the visual content needed to generate complete, human-like sentences.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we instead study generation with more complete visual support, as provided by human annotations, allowing us to develop more comprehensive models than previously considered. Such models have the dual benefit of (1) providing new insights into how to construct more human-like sentences and (2) allowing us to perform experiments that systematically study the contribution of different visual cues in generation, suggesting which automatic detectors would be most beneficial for generation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In an effort to approximate relatively complete visual recognition, we collected manually labeled representations of objects, parts, attributes and activities for a benchmark caption generation dataset that includes images paired with human authored descriptions . 2 As seen in Figure 1 , the labels include object boundaries and descriptive text, here including the facts that the children are \"riding\" and \"walking\" and that they are \"young.\" Our goal is to be as exhaustive as possible, giving equal treatment to all objects. For example, the annotations in Figure 1 contain enough information to generate the first three sentences and most of the content in the remaining two. Labels gathered in this way are a type of feature norms (McRae et al., 2005) , which have been used in the cognitive science literature to approximate human perception and were recently used as a visual proxy in distributional semantics (Silberer and Lapata, 2012) . We present the first effort, that we are aware of, for using feature norms to study image description generation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 265, |
|
"end": 266, |
|
"text": "2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 737, |
|
"end": 757, |
|
"text": "(McRae et al., 2005)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 918, |
|
"end": 945, |
|
"text": "(Silberer and Lapata, 2012)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 278, |
|
"end": 286, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 561, |
|
"end": 569, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Such rich data allows us to develop significantly more comprehensive generation models. We divide generation into choices about which visual content to select and how to realize a sentence that describes that content. Our approach is grammarbased, feature-rich, and jointly models both decisions. The content selection model includes latent variables that align phrases to visual objects and features that, for example, measure how visual salience and spatial relationships influence which objects are mentioned. The realization approach considers a number of cues, including language model scores, word specificity, and relative spatial information (e.g. to produce the best spatial prepositions), when producing the final sentence. When used with a reranking model, including global cues such as sentence length, this approach provides a full generation system.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our experiments demonstrate high quality visual content selection, within 90% of human performance on unigram BLEU, and improved complete sentence generation, nearly halving the difference from human performance to two baselines on 4-gram BLEU. In ablations, we measure the importance of different annotations and visual cues, showing that annotation of activities and relative bounding box information between objects are crucial to generating human-like description.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "A number of approaches have been proposed for constructing sentences from images, including copying captions from other images (Farhadi 2 Available at : http://homes.cs.washington.edu/\u02dcmy89/ Ordonez et al., 2011) , using text surrounding an image in a news article (Feng and Lapata, 2010) , filling visual sentence templates Yang et al., 2011; Elliott and Keller, 2013) , and stitching together existing sentence descriptions (Gupta and Mannem, 2012; Kuznetsova et al., 2012) . However, due to the lack of reliable detectors, especially for activities, many previous systems have a small vocabulary and must generate many words, including verbs, with no direct visual support. These problems also extend to video caption systems (Yu and Siskind, 2013; Krishnamoorthy et al., 2013) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 127, |
|
"end": 137, |
|
"text": "(Farhadi 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 191, |
|
"end": 212, |
|
"text": "Ordonez et al., 2011)", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 265, |
|
"end": 288, |
|
"text": "(Feng and Lapata, 2010)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 325, |
|
"end": 343, |
|
"text": "Yang et al., 2011;", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 344, |
|
"end": 369, |
|
"text": "Elliott and Keller, 2013)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 450, |
|
"text": "(Gupta and Mannem, 2012;", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 451, |
|
"end": 475, |
|
"text": "Kuznetsova et al., 2012)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 729, |
|
"end": 751, |
|
"text": "(Yu and Siskind, 2013;", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 752, |
|
"end": 780, |
|
"text": "Krishnamoorthy et al., 2013)", |
|
"ref_id": "BIBREF14" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The Midge algorithm (Mitchell et al., 2012) is most closely related to our approach, and will provide a baseline in our experiments. Midge is syntax-driven but again uses a small vocabulary without direct visual support for every word. It outputs a large set of sentences to describe all triplets of recognized objects in the scene, but does not include a content selection model to select the best sentence. We extend Midge with content and sentence selection rules to use it as a baseline.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 43, |
|
"text": "(Mitchell et al., 2012)", |
|
"ref_id": "BIBREF20" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The visual facts we annotate are motivated by research in machine vision. Attributes are a good intermediate representation for categorization (Farhadi et al., 2009) . Activity recognition is an emerging area in images (Li and Fei-Fei, 2007; Yao et al., 2011; Sharma et al., 2013) and video (Weinland et al., 2011) , although less studied than object recognition. Also, parts have been widely used in object recognition (Felzenszwalb et al., 2010 ). Yet, no work tests the contribution of these labels for sentence generation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 143, |
|
"end": 165, |
|
"text": "(Farhadi et al., 2009)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 219, |
|
"end": 241, |
|
"text": "(Li and Fei-Fei, 2007;", |
|
"ref_id": "BIBREF17" |
|
}, |
|
{ |
|
"start": 242, |
|
"end": 259, |
|
"text": "Yao et al., 2011;", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 260, |
|
"end": 280, |
|
"text": "Sharma et al., 2013)", |
|
"ref_id": "BIBREF27" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 314, |
|
"text": "(Weinland et al., 2011)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 420, |
|
"end": 446, |
|
"text": "(Felzenszwalb et al., 2010", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "There is also a significant amount of work on other grounded language problems, where related models have been developed. Visual referring expression generation systems (Krahmer and Van Deemter, 2012; Mitchell et al., 2013; FitzGerald et al., 2013) aim to identify specific objects, a sub-problem we deal with when describing images more generally. Other research generates descriptions in simulated worlds and, like this work, uses feature rich models (Angeli et al., 2010) , or syntactic structures like PCFGs (Chen et al., 2010; Konstas and Lapata, 2012) but does not combine the two. Finally, Zitnick and Parikh (2013) study sentences describing clipart scenes. They present a number of factors influencing overall descriptive quality, several of which we use in sentence generation for the first time.", |
|
"cite_spans": [ |
|
{ |
|
"start": 169, |
|
"end": 200, |
|
"text": "(Krahmer and Van Deemter, 2012;", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 201, |
|
"end": 223, |
|
"text": "Mitchell et al., 2013;", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 248, |
|
"text": "FitzGerald et al., 2013)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 453, |
|
"end": 474, |
|
"text": "(Angeli et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 531, |
|
"text": "(Chen et al., 2010;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 532, |
|
"end": 557, |
|
"text": "Konstas and Lapata, 2012)", |
|
"ref_id": "BIBREF12" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "We collected a dataset of richly annotated images to approximate gold standard visual recognition. In collecting the data, we sought a visual annotation with sufficient coverage to support the generation of as many of the words in the original image descriptions as possible. We also aimed to make it as visually exhaustive as possible-giving equal treatment to all visible objects. This ensures less bias from annotators' perception about which objects are important, since one of the problems we would like to solve is content selection. This dataset will be available for future experiments.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We built on the dataset from which contained 8,000 Flickr images and associated descriptions gathered using Amazon Mechanical Turk (MTurk). Restricting ourselves to Creative Commons images, we sampled 500 images for annotation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We collected annotations of images in three stages using MTurk, and assigned each annotation task to 3-5 workers to improve quality through redundancy (Callison-Burch, 2009 ). Below we describe the process for annotating a single image.", |
|
"cite_spans": [ |
|
{ |
|
"start": 151, |
|
"end": 172, |
|
"text": "(Callison-Burch, 2009", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Stage 1: We prompted five turkers to list all objects in an image, ignoring objects that are parts of larger objects (e.g., the arms of a person), which we collected later in Stage 3. This list also included groups, such as crowds of people.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Stage 2: For each unique object label from Stage 1, we asked two turkers to draw a polygon around the object identified. 3 In cases where the object is a group, we also asked for the number of objects present (1-6 or many). Finally, we created a list of all references to the object from the first stage, which we call the Object facet.", |
|
"cite_spans": [ |
|
{ |
|
"start": 121, |
|
"end": 122, |
|
"text": "3", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Stage 3: For each object or group, we prompted three turkers to provide descriptive phrases of:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\u2022 Doing -actions the object participates in, e.g. \"jumping.\" \u2022 Parts -physical parts e.g. \"legs\", or other items in the possession of the object e.g. \"shirt.\" \u2022 Attributes -adjectives describing the object, e.g. \"red.\" \u2022 Isa -alternative names for a object e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "\"boy\", \"rider.\" Figure 1 shows more examples for objects 3 We modified LabelMe (Torralba et al., 2010) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 57, |
|
"end": 58, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 79, |
|
"end": 102, |
|
"text": "(Torralba et al., 2010)", |
|
"ref_id": "BIBREF30" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 16, |
|
"end": 24, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "in a labeled image. 4 We refer to all of these annotations, including the merged Object labels, as facets. These labels provide feature norms (McRae et al., 2005) , which have recently used as a visual proxy in distributional semantics (Silberer and Lapata, 2012; Silberer et al., 2013) but have not been previous studied for generation. This annotation of 500 images (2500 sentences) yielded over 4000 object instances and 100,000 textual labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 20, |
|
"end": 21, |
|
"text": "4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 142, |
|
"end": 162, |
|
"text": "(McRae et al., 2005)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 236, |
|
"end": 263, |
|
"text": "(Silberer and Lapata, 2012;", |
|
"ref_id": "BIBREF28" |
|
}, |
|
{ |
|
"start": 264, |
|
"end": 286, |
|
"text": "Silberer et al., 2013)", |
|
"ref_id": "BIBREF29" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Dataset", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Given such rich annotations, we can now develop significantly more comprehensive generation models. In this section, we present an approach that first uses a generative model and then a reranker. The generative model defines a distribution over content selection and content realization choices, using diverse cues from the image annotations. The reranker trades off our generative model score, language model score (to encourage fluency), and length to produce the final sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Approach", |
|
"sec_num": "4" |
|
}, |
|
{

"text": "We want to generate a sentence w = w_1 . . . w_n where each word w_i \u2208 V",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generative Model",

"sec_num": null

},
|
{ |
|
"text": "comes from a fixed vocabulary V . The vocabulary V includes all 2700 words used in descriptive sentences in the training set. 5 The model conditions on an annotated image I that contains a set of objects O, where each object o \u2208 O has a bounding polygon and a number of facets containing string labels. To model the naming of specific objects, words w i can be associated with alignment variables a i that range over O. One such variable is introduced for each head noun in the sentence. Figure 2 shows alignment variable settings with colors that match objects in the image. Finally, as a byproduct of the hierarchical generative process, we construct an undirected dependency tree d over the words in w.", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 127, |
|
"text": "5", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 488, |
|
"end": 496, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The complete generative model defines the probability p( w, a, d | I) of a sentence w, word alignments a, and undirected dependency tree d, given the annotated input image I. The overall process unfolds recursively, as seen in Figure 3 . The main clause is produced by first selecting the subject alignment a s followed by the subject word w s . It then chooses the verb and optionally the object alignment a o and word w o . The process then continues recursively, modifying the subject, verb, and object of the sentence with noun and prepositional modifiers. The recursion begins at Step 2 in Figure 3 . Given a parent word w and that word's relevant alignment variable a, the model creates attachments where w is the grammatical head of subsequently produced words. Choices about whether to create noun modifiers or prepositional modifiers are made in steps (a) and (b). The process chooses values for the alignment variables and then chooses content words, adding connective prepositions in the case of prepositional modifiers. It then chooses to end or submits new wordalignment pairs to be recursively modified.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 227, |
|
"end": 235, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
}, |
|
{ |
|
"start": 595, |
|
"end": 603, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
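
To make this factorization concrete, the sketch below scores a toy (sentence, alignment) pair as a product of locally normalized maximum-entropy decisions, in the spirit of the process above. It is only an illustration: the objects, feature functions, and weights are invented placeholders, and the paper's model conditions each decision on the full dependency context and the image annotations.

```python
import math
from typing import Callable, Dict, List, Tuple

def local_prob(choice: str, options: List[str],
               features: Callable[[str], Dict[str, float]],
               weights: Dict[str, float]) -> float:
    """One locally normalized maximum-entropy decision over a finite option set."""
    def score(opt: str) -> float:
        return sum(weights.get(name, 0.0) * val
                   for name, val in features(opt).items())
    z = sum(math.exp(score(o)) for o in options)
    return math.exp(score(choice)) / z

def sentence_log_prob(decisions: List[Tuple[str, List[str], Callable]],
                      weights: Dict[str, float]) -> float:
    """log p(w, a, d | I) as the sum of log-probabilities of the individual
    decisions (alignments, words, stop choices)."""
    return sum(math.log(local_prob(choice, options, feats, weights))
               for choice, options, feats in decisions)

# Toy image with two annotated objects; feature names are invented placeholders.
objects = ["obj:women", "obj:bicycle"]
weights = {"is_person": 1.5, "word_matches_object_label": 2.0}
decisions = [
    # Choose the subject alignment among the annotated objects.
    ("obj:women", objects,
     lambda o: {"is_person": 1.0 if o == "obj:women" else 0.0}),
    # Choose the subject word given that alignment.
    ("women", ["women", "bicycle", "street"],
     lambda w: {"word_matches_object_label": 1.0 if w == "women" else 0.0}),
]
print(sentence_log_prob(decisions, weights))
```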
|
{ |
|
"text": "Each line defines a decision that must be made according to a local probability distribution. For example, Step 1.a defines the probability of aligning a subject word to various objects in the image. The distributions are maximum entropy models, similar to previous work (Angeli et al., 2010) , using features described in the next section. The induced undirected dependency tree d has an edge between each word and the previously generated word (or the input word w in Steps 2.a.i and 2.a.ii, when no previous word is available). Figure 2 shows a possible output from the process, along with the Bayesian network that encodes what each decision was conditioned on during generation.", |
|
"cite_spans": [ |
|
{ |
|
"start": 271, |
|
"end": 292, |
|
"text": "(Angeli et al., 2010)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 531, |
|
"end": 539, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{

"text": "[Figure 3: the recursive generative process. Step 1 selects the subject alignment and word, the verb, and optionally the object alignment and word; Step 2 recursively modifies a (word, alignment) pair with noun and prepositional modifiers drawn from pn, pp, and pa, with pstop decisions on whether to continue.]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generative Model",

"sec_num": null

},

{

"text": "Learning We learn the model from data {(w_i, d_i, I_i) | i = 1 . . . m} containing sentences w_i, dependency trees d_i, and annotated images I_i. The dependency trees define the path that was taken through the generative process in Figure 3 and are used to create a Bayesian network for every sentence, like in Figure 2 . However, object alignments a_i are latent during learning and we must marginalize over them.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Generative Model",

"sec_num": null

},
|
{ |
|
"text": "The model is trained to maximize the conditional marginal log-likelihood of the data with regularization:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "L(\u03b8) = i log a p( a, w i , d i | I i ; \u03b8) \u2212 r|\u03b8| 2", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "where \u03b8 is the set of parameters and r is the regularization coefficient. In essence, we maximize the likelihood of every sentence's observed Bayesian network, while marginalizing over content selection variables we did not observe.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Because the model only includes pairwise dependencies between the hidden alignment variables a, the inference problem is quadratic in the number of objects and non-convex because a is unobserved. We optimize this objective directly with L-BFGS, using the junction-tree algorithm to compute the sum and the gradient. 6 Inference To describe an image, we need to maximize over word, alignment, and the dependency parse variables:", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 317, |
|
"text": "6", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "arg max w, a, d p( w, a, d | I)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This computation is intractable because we need to consider all possible sentences, so we use beam search for strings up to a fixed length. Reranking Generating directly from the process in Figure 3 results in sentences that may be short and repetitive because the model score is a product of locally normalized distributions. The reranker takes as input a candidate list c, for an image I, as decoded from the generative model. The candidate list includes the top-k scoring hypotheses for each sentence length up to a fixed maximum. A linear scoring function is used for reranking optimized with MERT (Och, 2003) to maximize BLEU-2.", |
|
"cite_spans": [ |
|
{ |
|
"start": 602, |
|
"end": 613, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 190, |
|
"end": 198, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Generative Model", |
|
"sec_num": null |
|
}, |
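
The search itself can be pictured as a generic beam search over word sequences up to a fixed length, as sketched below; the continuation function and its scores are stand-ins for the model's locally normalized distributions, so this is an outline of the procedure rather than the paper's decoder.

```python
import heapq
from typing import Callable, List, Tuple

def beam_search(extend: Callable[[Tuple[str, ...]], List[Tuple[str, float]]],
                beam_size: int, max_len: int) -> List[Tuple[float, Tuple[str, ...]]]:
    """Beam search over word sequences up to max_len; `extend` returns
    (next_word, log_prob) continuations of a partial sentence, and the model
    score is the running sum of local log-probabilities."""
    beam = [(0.0, tuple())]
    candidates_by_length = []
    for _ in range(max_len):
        candidates = []
        for logp, seq in beam:
            for word, lp in extend(seq):
                candidates.append((logp + lp, seq + (word,)))
        beam = heapq.nlargest(beam_size, candidates, key=lambda x: x[0])
        candidates_by_length.extend(beam)   # keep top hypotheses of every length for the reranker
    return candidates_by_length

# Toy continuation function (an assumption for illustration only).
def toy_extend(seq):
    vocab = {"two": -0.7, "women": -0.9, "walk": -1.2, "outside": -1.6}
    return list(vocab.items())

hyps = beam_search(toy_extend, beam_size=5, max_len=4)
print(max(hyps, key=lambda x: x[0]))
```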
|
{ |
|
"text": "We construct indicator features to capture variation in usage in different parts of the sentence, types of objects that are mentioned, visual salience, and semantic and visual coordination between objects. The features are included in the maximum entropy models used to parameterize the distributions described in Figure 3 . Whenever possible, we use WordNet Synsets (Miller, 1995) instead of lexical features to limit over-fitting.", |
|
"cite_spans": [ |
|
{ |
|
"start": 367, |
|
"end": 381, |
|
"text": "(Miller, 1995)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 314, |
|
"end": 322, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Features in the generative model use tests for local properties, such as the identity of a synset of a word in WordNet, conjoined with an identifier that indicates context in the generative process. 7 Generative model features indicate (1) visual and semantic information about objects in distributions over alignments (content selection) and (2) preferences for referring to objects in distributions over words (content realization). Features in the reranking model indicate global properties of candidate sentences. Exact formulas for computing the features are in the appendix.", |
|
"cite_spans": [ |
|
{ |
|
"start": 199, |
|
"end": 200, |
|
"text": "7", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Visual features, such as an object's position in the image, are used for content selection. Pairwise visual information between two objects, for example the bounding box overlap between objects or the relative position of the two objects, is included in distributions where selection of an alignment variable conditions on previously generated alignments. For verbs (Step 1.d in Figure 3 ) and prepositions (Step 2.b.ii), these features are conjoined with the stem of the connective.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 379, |
|
"end": 387, |
|
"text": "Figure 3", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Semantic types of objects are also used in content selection. We define semantic types by finding synsets of labels in objects that correspond to high level types, a list motivated by the animacy hierarchy (Zaenen et al., 2004) . 8 Type features indicate the type of the object referred to by an alignment variable as well as the cross product of types when an alignment variable is on conditioning side of a distribution (e.g. Step 1.d). Like above, in the presence of a connective word, these features are conjoined with the stem of the connective.", |
|
"cite_spans": [ |
|
{ |
|
"start": 206, |
|
"end": 227, |
|
"text": "(Zaenen et al., 2004)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 230, |
|
"end": 231, |
|
"text": "8", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Content realization features help select words when conditioning on chosen alignments (e.g.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Step 1.b). These features include the identity of the WordNet synset corresponding to a word, the word's depth in the synset hierarchy, the language model score for adding that word 9 and whether the word matches labels in facets corresponding to the object referenced by an alignment variable.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Reranking features are primarily used to overcome issues of repetition and length in the generative distributions, more commonly used for alignment, than to create sentences. We use only four features: length, the number of repetitions, generative model score, and language model score.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Features", |
|
"sec_num": "5" |
|
}, |
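
A sketch of such a reranker is shown below: each candidate is scored with a linear combination of exactly these four features and the best one is returned. The candidate fields and the hand-set weights are assumptions for illustration; in the paper the weights are tuned with MERT against BLEU-2.

```python
from collections import Counter
from typing import Dict, List

def rerank(candidates: List[Dict], weights: Dict[str, float]) -> Dict:
    """Pick the candidate maximizing a linear combination of the four
    reranking features: length, repeated words, generative model score,
    and language model score."""
    def features(c: Dict) -> Dict[str, float]:
        counts = Counter(c["words"])
        return {
            "length": float(len(c["words"])),
            "repetitions": float(sum(n - 1 for n in counts.values() if n > 1)),
            "gen_score": c["gen_score"],
            "lm_score": c["lm_score"],
        }
    def score(c: Dict) -> float:
        f = features(c)
        return sum(weights[k] * f[k] for k in weights)
    return max(candidates, key=score)

# Toy candidates and hand-set weights (MERT tuning not shown).
candidates = [
    {"words": ["two", "women", "walk"], "gen_score": -4.1, "lm_score": -6.0},
    {"words": ["women", "women", "walk", "outside"], "gen_score": -3.9, "lm_score": -7.5},
]
weights = {"length": 0.3, "repetitions": -1.0, "gen_score": 1.0, "lm_score": 0.5}
print(rerank(candidates, weights)["words"])
```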
|
{ |
|
"text": "Data We used 70% of the data for training (1750 sentences, 350 images), 15% for development, and 15% for testing (375 sentences, 75 images).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Parameters The regularization parameter was set on the held out data to r = 8. The reranker candidate list included the top 500 sentences for each sentence length up to 15 and weights were optimized with Z-MERT (Zaidan, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 211, |
|
"end": 225, |
|
"text": "(Zaidan, 2009)", |
|
"ref_id": "BIBREF36" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "Metrics Our evaluation is based on BLEU-n (Papineni et al., 2001) , which considers all ngrams up to length n. To assess human performance using BLEU, we score each of the five references against the four other ones and finally average the five BLEU scores. In order to make these results comparable to BLEU scores for our model and baselines, we perform the same five-fold averaging when computing BLEU for each system. We also compute accuracy for different syntactic positions in the sentence. We look at a number of categories: the main clause's components (S,V,O), prepositional phrase components, the preposition (Pp) and their objects (Po) and noun modifying words (N), including determiners. Phrases match if they have an exact string match and share context identifiers as defined in the features sections. Human Evaluation Annotators rated sentences output by our full model against either human or a baseline system generated descriptions. Three criteria were evaluated: grammaticality, which sentence is more complete and well formed; truthfulness, which sentence is more accurately capturing something true in the image; and salience, which sentence is capturing important things in the image while still being concise. Two annotators annotated all test pairs for all criteria for a given pair of systems. Six annotators were used (none authors) and agreement was high (Cohen's kappa = 0.963, 0.823 and 0.703 for grammar, truth and salience). Machine Translation Baseline The first baseline is designed to see if it is possible to generate good sentences from the facet string labels alone, with no visual information. We use an extension of phrase-based machine translation techniques (Och et al., 1999) . We created a virtual bitext by pairing each image description (the target sentence) with a sequence 10 of visual identifiers (the source \"sentence\") listing strings from the facet labels. Since phrases produced by turkers lack many of the functions words needed to create fluent sentences, we added one of 47 function words either at the start or the end of each output phrase.", |
|
"cite_spans": [ |
|
{ |
|
"start": 42, |
|
"end": 65, |
|
"text": "(Papineni et al., 2001)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 1699, |
|
"end": 1717, |
|
"text": "(Och et al., 1999)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6" |
|
}, |
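
The five-fold BLEU averaging can be sketched as follows, using NLTK's sentence-level BLEU with smoothing as a stand-in for the paper's exact BLEU implementation; the reference sentences here are invented.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def five_fold_bleu(hypothesis, references, n=4):
    """Average of five BLEU-n scores, each computed against a different
    four-reference subset, mirroring the averaging used for the systems."""
    weights = tuple(1.0 / n for _ in range(n))
    smooth = SmoothingFunction().method1
    scores = [sentence_bleu(references[:i] + references[i + 1:], hypothesis,
                            weights=weights, smoothing_function=smooth)
              for i in range(len(references))]
    return sum(scores) / len(scores)

refs = [r.split() for r in [
    "two women walk down a sidewalk",
    "two women are walking outside",
    "a pair of women stroll along the street",
    "two ladies walking on the sidewalk",
    "women walking down the street",
]]
system = "two women are walking down the street".split()
print(five_fold_bleu(system, refs))

# Human performance: score each reference against the other four, then average.
human = sum(sentence_bleu(refs[:i] + refs[i + 1:], refs[i],
                          weights=(0.25,) * 4,
                          smoothing_function=SmoothingFunction().method1)
            for i in range(len(refs))) / len(refs)
print(human)
```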
|
{ |
|
"text": "The translation model included standard features such as language model score (using our caption language model described previously), word count, phrase count, linear distortion, and the count of deleted source words. We also define three features that count the number of Object, Isa, and Doing phrases, to learn a preference for types of phrases. The feature weights are tuned with MERT (Och, 2003) to maximize BLEU-4. Midge Baseline As described in related work, the Midge system creates a set of sentences to describe everything in an input image. These sen- tences must all be true, but do not have to select the same content that a person would. It can be adapted to our task by adding object selection and sentence ranking rules. For object selection, we choose the three most frequently named objects in the scene according to a background corpus of image descriptions. For sentence selection, we take all sentences within one word of the average length of a sentence in our corpus, 11, and select the one with best Midge generation score.", |
|
"cite_spans": [ |
|
{ |
|
"start": 390, |
|
"end": 401, |
|
"text": "(Och, 2003)", |
|
"ref_id": "BIBREF23" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "6" |
|
}, |
|
{ |
|
"text": "We report experiments for our generation pipeline and ablations that remove data and features. Overall Performance Table 1 shows the results on the test set. The full model consistently achieves the highest BLEU scores. Overall, these numbers suggest strong content selection by getting high recall for individual words (BLEU-1), but fall further behind human performance as the length of the n-gram grows (BLEU-2 through BLEU-4). These number match our perception that the model is learning to produce high quality sentences, but does not always describe all of the important aspects of the scene or use exactly the expected wording. Table 4 presents example output, which we will discuss in more detail shortly. amples of poor performance (bottom). Each image has two captions, the system output S and a human reference R. Table 2 presents the results of a human evaluation. The full model outperforms all baselines on every measure, but is not always competitive with human descriptions. It performs the best on grammaticality, where it is judged to be as grammatical as humans. However, surprisingly, in many cases it is also often judged equal to the other baselines. Examination of baseline output reveals that the MT baseline often generates short sentences, having little chance of being judged ungrammatical. Furthermore, the Midge baseline, like our system, is a syntax-based system and therefore often produces grammatical sentences. Although our system performs well with respect to the baselines on truthfulness, often the system constructs sentences with incorrect prepositions, an issue that could be improved with better estimates of 3-d position in the image. On truthfulness, the MT baseline is comparable to our system, often being judged equal, because its output is short. Our system's strength is salience, a factor the baselines do not model. Table 3 shows annotation ablation experiments on the development set, where we remove different classes of data labels to measure the performance that can be achieved with less visual information. In all cases, the overall behavior of the system varies, as it tries to learn to compensate for the missing information. Ablating actions is by far the most detrimental. Overall BLEU score suffers and prediction accuracy of the verb (V) degrades significantly causing cascading errors that affect the object of the verb (O). Removing count information affects noun attachment (N) performance. Images where determiner use is important or where groups of objects are best identified by the number (for example, three dogs) are difficult to describe naturally. Finally, we see a tradeoff when removing properties. There is an increase in noun modifier accuracy (N) but a decrease in content selection quality (BL-1), showing recall has gone down. In essence, the approach learns to stop trying to generate adjectives and other modifiers that would rely on the missing properties. The difference in BLEU score with the Full-Model is small, even without these modifiers, because there often still exists a a short output with high accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 115, |
|
"end": 122, |
|
"text": "Table 1", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 635, |
|
"end": 642, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
}, |
|
{ |
|
"start": 825, |
|
"end": 832, |
|
"text": "Table 2", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1866, |
|
"end": 1873, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "7" |
|
}, |
|
{ |
|
"text": "Feature Ablation The bottom two rows in Table 3 show ablations of the visual and pairwise features, measuring the contribution of the visual information provided by the bounding box annotations. The ablated visual information includes bounding-box positions and relative pairwise visual information. The pairwise ablation removes the ability to model any interactions between objects, for example, relative bounding box or pairwise object type information.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 40, |
|
"end": 47, |
|
"text": "Table 3", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Ablation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Overall, prepositional phrase accuracy is most affected. Ablating visual features significantly impacts accuracy of prepositional phrases (Pp and Po), affecting the use of preposition words the most, and lowering fluency (BL-4). Precision in the object of the verb (O) rises; the model makes \u223c 50% fewer predictions in that position than the Full-Model because it lacks features to coordinate subject and object of the verb. Ablating pairwise features has similar results. While the model corrects errors in the object of the preposition (Po) with the addition of visual features, fluency is still worse than Full-Model, as reflected by BL-4. Table 4 has examples of good and bad system output. The first two images are good examples, including both system output (S) and a human reference (R). The second two contain lower quality outputs. Overall, the model captures common ways to refer to people and scenes. However, it does better for images with fewer sentient objects because content selection is less ambiguous.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 643, |
|
"end": 650, |
|
"text": "Table 4", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Data Ablation", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Our system does well at finding important objects. For example, in the first good image, we mention the guitar instead of the house, both of which are prominent and have high overlap with the woman. In the second case, we identify that both dogs and humans tend to be important actors in scenes but poorly identify their relationship.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The bad examples show difficult scenes. In the first description the broad context is not identified, instead focusing on the bench (highlighted in red). The second example identifies a weakness in our annotation: it encodes contradictory groupings of the people. The groupings covers all of the children, including the boy running, and many subsets of the people near the grass. This causes ambiguity and our methods cannot differentiate them, incorrectly mentioning just the children and picking an inappropriate verb (one participant in the group is not sitting). Improved annotation of groups would enable the study of generation for more complex scenes, such as these.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Qualitative Results", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this work we used dense annotations of images to study description generation. The annotations allowed us to not only develop new models, better capable of generating human-like sentences, but also to explore what visual information is crucial for description generation. Experiments showed that activity and bounding-box information is important and demonstrated areas of future work. In images that are more complex, for example multiple sentient objects, object grouping and reference will be important to generating good descriptions.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{ |
|
"text": "Issues of this type can be explored with annotations of increasing complexity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "8" |
|
}, |
|
{

"text": "Table 5: Feature families, the distributions that include them, and the generation steps where they apply. \u2297 indicates the cross-product of the indicator features; distributions are listed more than once to indicate they use multiple feature families. Features | Included In | Steps: (1) CONTEXT(a', dc) \u2297 {TYPE(a'), MENTION(a', do), MENTION(a', obj), VISUAL(a')} | pa(a | dc), pa(a | a, w, dc) | 1.a, 1.d, 2.b.ii; (2) CONTEXT(a', dc) \u2297 {TYPE(a) \u2297 TYPE(a'), VISUAL2(a, a')} | pa(a | a, w, dc) | 1.d, 2.b.i; (3) CONTEXT(a', dc) \u2297 {TYPE(a) \u2297 TYPE(a') \u2297 STEM(w), VISUAL2(a, a') \u2297 STEM(w)} | pa(a | a, w, dc) | 1.d, 2.b.i; (4) CONTEXT(a, dc) \u2297 {WORDNET(w), MATCH(w, a), SPECIFICITY(w, a), ADJECTIVE(w, a), DETERMINER(w, a)} | pn(w | a, dc) | 1.b, 1.e, 2.a.i, 2.b.ii; (5) CONTEXT(a, dc) \u2297 {MATCH(w, a), TYPE(a) \u2297 STEM(w)} | pv(w | a, dc) | 1.c; (6) CONTEXT(a', dc) \u2297 TYPE(a) \u2297 STEM(wp) | pp(w | a, dc), pp(w | a, wv, dc) | 2.b.i; (7) CONTEXT(a', dc) \u2297 STEM(wv) \u2297 STEM(w) | pp(w | a, wv, dc) | 2.b.i",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Included In",

"sec_num": null

},
|
{ |
|
"text": "VISUAL(a) returns indicators for visual facts about the object that a aligns to. There is an indicator for two quantities: (1) overlap of object's polygon with every horizontal third of the image, as a fraction of the object's area, and (2) the object's distance to the center of the image as fraction of the diagonal of the image. Each quantity, v, is put into three overlapping buckets: if v > .1, if v > .5, and if v > .9.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "VISUAL2(a, a ) indicates pairwise visual facts about two objects. There is an indicator for the following quantities bucketed: the amount of overlap between the polygons for a and a as a fraction of the size of a's polygon, the distance between the center of the polygon for a and a as a fraction of image's diagonal, and the slope between the center of a and a . Each quantity, v, is put into three overlapping buckets: if v > .1, if v > .5, and if v > .9. There is an indicator for the relative position of extremities a and a : whether the rightmost point of a is further right than a 's rightmost or leftmost point, and the same for top, left, and bottom.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
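
A small sketch of this bucketing is given below, approximating the annotated polygons with axis-aligned boxes (an assumption made for brevity); VISUAL2 would apply the same bucketing to the pairwise overlap, center distance, and slope between two such boxes.

```python
def buckets(name: str, v: float) -> dict:
    """Three overlapping indicator buckets used for every visual quantity."""
    return {f"{name}>{t}": 1.0 for t in (0.1, 0.5, 0.9) if v > t}

def visual_features(box, img_w, img_h):
    """Sketch of VISUAL(a) with boxes instead of polygons: overlap with each
    horizontal third of the image as a fraction of the object's area, and
    distance to the image center as a fraction of the diagonal, each bucketed."""
    x0, y0, x1, y1 = box
    area = max((x1 - x0) * (y1 - y0), 1e-9)
    feats = {}
    for k in range(3):                                   # horizontal thirds of the image
        t0, t1 = k * img_h / 3.0, (k + 1) * img_h / 3.0
        overlap = max(0.0, min(y1, t1) - max(y0, t0)) * (x1 - x0)
        feats.update(buckets(f"third{k}", overlap / area))
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    dist = ((cx - img_w / 2.0) ** 2 + (cy - img_h / 2.0) ** 2) ** 0.5
    diag = (img_w ** 2 + img_h ** 2) ** 0.5
    feats.update(buckets("center_dist", dist / diag))
    return feats

print(visual_features((50, 20, 150, 220), img_w=400, img_h=300))
```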
|
{ |
|
"text": "WORDNET(w) returns indicators for all hypernyms of a word w. The two most specific synsets are not used when there at least 8 options.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MENTION(a, facet) returns the union of the WORDNET(w) features for all words w in the facet facet for the object referred to alignment a. ADJECTIVE(w, a) indicates four types of features specific to adjective usage. If MENTION(w, Attributes) is not empty, indicate : (1) the satellite adjective synset of w in Wordnet, (2) the head adjective synset of w in Wordnet, (3) the head adjective synset conjoined with TYPE(a), and (4) the number of times there exists a label in the Attributes facet of a that has the same head adjective synset as w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "DETERMINER(w, a) indicates four determiner specific features. If w is a determiner, then indicate : (1) the identity of w conjoined with the count (the label for numerosity) of a, (2) the identity of w conjoined with an indicator for if the count of a is greater than one, (3) the identity of w conjoined with TYPE(a) and (4) the frequency with which w appears before its head word in the Flikr corpus (Ordonez et al., 2011) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 402, |
|
"end": 424, |
|
"text": "(Ordonez et al., 2011)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "MATCH(w, a), indicates all facets of object a that contain words with the same stem as w.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "SPECIFICITY(w, a) is an indicator of the specificity of the word w when referring to the object aligned to a. Indicates the relative depth of w in Wordnet, as compared to all words w where MATCH(w , a) is not empty. The depth is bucketed into quintiles. STEM(w) returns the Porter2 stem of w. 13 The distribution for stopping, p stop (ST OP | d c , w), contains two types of features. (1) Structural features indicating for the number of times a contextual identifier has appeared so far in the derivation and (2) mention features indicating the types of objects mentioned. 14 To compute mention features, we consider all possible types of objects, t, then there is an indicator for: (1) if \u2203o, \u2203w \u2208 w : MATCH(w, o) = \u2205 \u2227 TYPE(o) = t, (2) whether \u2203o, \u2203w \u2208 w : MATCH(w, o) = \u2205 \u2227 TYPE(o) = t and (3) if (1) does not hold but (2) does.", |
|
"cite_spans": [ |
|
{ |
|
"start": 293, |
|
"end": 295, |
|
"text": "13", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Included In", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In the experiments, Parts and Isa facets do not improve performance, so we do not use them in the final model. Isa is redundant with the Object facet, as seen inFigure 1. Also parts like clothing, were often annotated as separate objects.5 We do not generate from image facets directly, because only 20% of the sentences in our data can be produced like this. Instead, we develop features which consider the similarity between labels in the image and words in the vocabulary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "To compute the gradient, we differentiate the recurrence in the junction-tree algorithm by applying the product rule.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For example, inFigure 2the context for the word \"sidewalk\" would be \"word,syntactic-object,verb,preposition\" indicating it is a word, in the syntactic object of a preposition, which was attached to a verb modifying prepositional phrase.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "For example, human, animal, artifact (a human created object), natural body (trees, water, ect.), or natural artifact (stick, leaf, rock).9 We use tri-grams with Kneser-Ney smoothing over the 1 million caption data set(Ordonez et al., 2011).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We defined a consistent ordering of visual identifiers and set the distortion limit of the phrase-based decoder to infinity.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Similarly \"large\" is \"word,noun,subject,preposition\" while \"girls\" is special cased to \"word,subject,root\" because it has no initial attachment. The alignment variable above the word handbags is \"alignment,syntacticobject,subject,preposition\" because it an alignment variable, is in the syntactic object position of a preposition and can be located by following a subject attached pp.12 WordNet divides these into synsets expressing water, weather, nature and a few more.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "http://snowball.tartarus.org/algorithms/english/stemmer.html 14 Object mention features cannot contain a because that creates large dependencies in inference for learning.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "Acknowledgments This work is partially funded by DARPA CSSG (D11AP00277) and ARO (W911NF-12-1-0197). We ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "acknowledgement", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This appendix describes the feature templates for the generative model in greater detail.Features in the generative model conjoin indicators for local tests, such as STEM(w) which indicates the stem of a word w, with a global contextual identifier CONTEXT(v, d) that indicates properties of the generation history, as described in detail below. Table 5 provides a reference for which feature templates are used in the generative model distributions, as defined in Figure 3 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 345, |
|
"end": 352, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 472, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Appendix A", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "CONTEXT(n, d) is an indicator for a contextual identifier for a variable n in the model depending on the dependency structure d. There is an indicator for all combinations of the type of n (alignment or word), the position of n (subject, syntactic object, verb, noun-modifier, or preposition), the position of the earliest variable along the path to generate n, and the type of attachment to that variable (noun or prepositional modifier). For example, in Figure 2 the context for the word \"sidewalk\" would be \"word,syntacticobject,verb,preposition\" indicating it is a word, the object of a preposition, whose path was along a verb modifying prepositional phrase. 11 TYPE(a) indicates the high level type of an object referred to by alignment variable a. We use synsets to define high level types including human, animal, artifact, natural artifact and various synsets that capture scene information, 12 a list motivated by the animacy hierarchy (Zaenen et al., 2004) . Each object is assigned a type by finding the synset for its name (object facet), and tracing the hypernym structure in Wordnet to find the appropriate class, if one exists. Additionally, the type indicates whether the object is a group or not. For example, in Figure 2 , the blue polygon has type \"person,group\", or the red bike polygon has type \"artifact,single.\"", |
|
"cite_spans": [ |
|
{ |
|
"start": 946, |
|
"end": 967, |
|
"text": "(Zaenen et al., 2004)", |
|
"ref_id": "BIBREF35" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 464, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1231, |
|
"end": 1239, |
|
"text": "Figure 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Feature Templates", |
|
"sec_num": "8.1" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A simple domain-independent probabilistic approach to generation", |
|
"authors": [ |
|
{ |
|
"first": "Gabor", |
|
"middle": [], |
|
"last": "Angeli", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gabor Angeli, Percy Liang, and Dan Klein. 2010. A simple domain-independent probabilistic approach to generation. In Empirical Methods in Natural Lan- guage Processing (EMNLP).", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk", |
|
"authors": [ |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Callison-Burch", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "286--295", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Chris Callison-Burch. 2009. Fast, cheap, and creative: Evaluating translation quality using Amazon's Me- chanical Turk. In EMNLP, pages 286-295, August.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Training a multilingual sportscaster: Using perceptual context to learn language", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "JAIR", |
|
"volume": "37", |
|
"issue": "", |
|
"pages": "397--435", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. JAIR, 37:397-435.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Generating typed dependency parses from phrase structure parses", |
|
"authors": [ |
|
{ |
|
"first": "Marie-Catherine", |
|
"middle": [], |
|
"last": "De Marneffe", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bill", |
|
"middle": [], |
|
"last": "MacCartney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "LREC", |
|
"volume": "6", |
|
"issue": "", |
|
"pages": "449--454", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Marie-Catherine de Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating typed dependency parses from phrase structure parses. In LREC, volume 6, pages 449-454.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Image Description using Visual Dependency Representations", |
|
"authors": [ |
|
{ |
|
"first": "Desmond", |
|
"middle": [], |
|
"last": "Elliott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Frank", |
|
"middle": [], |
|
"last": "Keller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Desmond Elliott and Frank Keller. 2013. Image De- scription using Visual Dependency Representations. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Describing objects by their attributes", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ian", |
|
"middle": [], |
|
"last": "Endres", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Hoiem", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Forsyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Computer Vision and Pattern Recognition", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1778--1785", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Farhadi, Ian Endres, Derek Hoiem, and David Forsyth. 2009. Describing objects by their at- tributes. In Computer Vision and Pattern Recogni- tion, 2009. CVPR 2009. IEEE Conference on, pages 1778-1785. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Every picture tells a story: Generating sentences from images", |
|
"authors": [ |
|
{ |
|
"first": "Ali", |
|
"middle": [], |
|
"last": "Farhadi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohsen", |
|
"middle": [], |
|
"last": "Hejrati", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [ |
|
"Amin" |
|
], |
|
"last": "Sadeghi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cyrus", |
|
"middle": [], |
|
"last": "Rashtchian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "Forsyth", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 11th European conference on Computer Vision, ECCV'10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "15--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every pic- ture tells a story: Generating sentences from images. In Proceedings of the 11th European conference on Computer Vision, ECCV'10, pages 15-29.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Object detection with discriminatively trained part-based models. Pattern Analysis and Machine Intelligence", |
|
"authors": [ |
|
{ |
|
"first": "Pedro", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Felzenszwalb", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ross", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Girshick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "David", |
|
"middle": [], |
|
"last": "McAllester", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Deva", |
|
"middle": [], |
|
"last": "Ramanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "IEEE Transactions on", |
|
"volume": "32", |
|
"issue": "9", |
|
"pages": "1627--1645", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pedro F Felzenszwalb, Ross B Girshick, David McAllester, and Deva Ramanan. 2010. Object detection with discriminatively trained part-based models. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(9):1627-1645.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "How many words is a picture worth? Automatic caption generation for news images", |
|
"authors": [ |
|
{ |
|
"first": "Yansong", |
|
"middle": [], |
|
"last": "Feng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1239--1249", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yansong Feng and Mirella Lapata. 2010. How many words is a picture worth? Automatic caption gener- ation for news images. In ACL, pages 1239-1249.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning distributions over logical forms for referring expression generation", |
|
"authors": [ |
|
{ |
|
"first": "Nicholas", |
|
"middle": [], |
|
"last": "Fitzgerald", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yoav", |
|
"middle": [], |
|
"last": "Artzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nicholas FitzGerald, Yoav Artzi, and Luke Zettle- moyer. 2013. Learning distributions over logi- cal forms for referring expression generation. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "From image annotation to image description", |
|
"authors": [ |
|
{ |
|
"first": "Ankush", |
|
"middle": [], |
|
"last": "Gupta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Prashanth", |
|
"middle": [], |
|
"last": "Mannem", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "NIPS", |
|
"volume": "7667", |
|
"issue": "", |
|
"pages": "196--204", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ankush Gupta and Prashanth Mannem. 2012. From image annotation to image description. In NIPS, volume 7667, pages 196-204.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Conceptto-text generation via discriminative reranking", |
|
"authors": [ |
|
{ |
|
"first": "Ioannis", |
|
"middle": [], |
|
"last": "Konstas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "369--378", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ioannis Konstas and Mirella Lapata. 2012. Concept- to-text generation via discriminative reranking. In ACL, pages 369-378.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Computational generation of referring expressions: A survey", |
|
"authors": [ |
|
{ |
|
"first": "Emiel", |
|
"middle": [], |
|
"last": "Krahmer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kees", |
|
"middle": [], |
|
"last": "Van Deemter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Computational Linguistics", |
|
"volume": "38", |
|
"issue": "1", |
|
"pages": "173--218", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Emiel Krahmer and Kees Van Deemter. 2012. Compu- tational generation of referring expressions: A sur- vey. Computational Linguistics, 38(1):173-218.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Generating natural-language video descriptions using text-mined knowledge", |
|
"authors": [ |
|
{ |
|
"first": "Niveda", |
|
"middle": [], |
|
"last": "Krishnamoorthy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Malkarnenkar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Raymond", |
|
"middle": [], |
|
"last": "Mooney", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kate", |
|
"middle": [], |
|
"last": "Saenko", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sergio", |
|
"middle": [], |
|
"last": "Guadarrama", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Procedings of AAAI", |
|
"volume": "2013", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Niveda Krishnamoorthy, Girish Malkarnenkar, Ray- mond Mooney, Kate Saenko, and Sergio Guadar- rama. 2013. Generating natural-language video de- scriptions using text-mined knowledge. Procedings of AAAI, 2013(2):3.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Baby talk: Understanding and generating simple image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Premraj", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Dhar", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Computer Vision and Pattern Recognition (CVPR)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1601--1608", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A.C. Berg, and T.L. Berg. 2011. Baby talk: Understand- ing and generating simple image descriptions. In Computer Vision and Pattern Recognition (CVPR), pages 1601-1608.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Collective generation of natural image descriptions", |
|
"authors": [ |
|
{ |
|
"first": "Polina", |
|
"middle": [], |
|
"last": "Kuznetsova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexander", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yejin", |
|
"middle": [], |
|
"last": "Choi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "359--368", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Polina Kuznetsova, Vicente Ordonez, Alexander C Berg, Tamara L Berg, and Yejin Choi. 2012. Col- lective generation of natural image descriptions. In ACL, pages 359-368.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "What, where and who? Classifying events by scene and object recognition", |
|
"authors": [ |
|
{ |
|
"first": "Li-Jia", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "ICCV", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Li-Jia Li and Li Fei-Fei. 2007. What, where and who? Classifying events by scene and object recognition. In ICCV, pages 1-8. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Semantic feature production norms for a large set of living and nonliving things", |
|
"authors": [ |
|
{ |
|
"first": "Ken", |
|
"middle": [], |
|
"last": "Mcrae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Cree", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mark", |
|
"middle": [ |
|
"S" |
|
], |
|
"last": "Seidenberg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Chris", |
|
"middle": [], |
|
"last": "Mcnorgan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Behavior Research Methods", |
|
"volume": "37", |
|
"issue": "4", |
|
"pages": "547--559", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris Mcnorgan. 2005. Semantic feature produc- tion norms for a large set of living and nonliving things. Behavior Research Methods, 37(4):547- 559.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "WordNet: a lexical database for english", |
|
"authors": [ |
|
{ |
|
"first": "George", |
|
"middle": [ |
|
"A" |
|
], |
|
"last": "Miller", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Communications of the ACM", |
|
"volume": "38", |
|
"issue": "11", |
|
"pages": "39--41", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "George A Miller. 1995. WordNet: a lexical database for english. Communications of the ACM, 38(11):39-41.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Midge: Generating image descriptions from computer vision detections", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xufeng", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jesse", |
|
"middle": [], |
|
"last": "Dodge", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alyssa", |
|
"middle": [], |
|
"last": "Mensch", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amit", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kota", |
|
"middle": [], |
|
"last": "Yamaguchi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [], |
|
"last": "Berg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Karl", |
|
"middle": [], |
|
"last": "Stratos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "747--756", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret Mitchell, Xufeng Han, Jesse Dodge, Alyssa Mensch, Amit Goyal, Alex Berg, Kota Yamaguchi, Tamara Berg, Karl Stratos, and Hal Daum\u00e9, III. 2012. Midge: Generating image descriptions from computer vision detections. In EACL, pages 747- 756.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "Generating expressions that refer to visible objects", |
|
"authors": [ |
|
{ |
|
"first": "Margaret", |
|
"middle": [], |
|
"last": "Mitchell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kees", |
|
"middle": [], |
|
"last": "van Deemter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ehud", |
|
"middle": [], |
|
"last": "Reiter", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of NAACL-HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1174--1184", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Margaret Mitchell, Kees van Deemter, and Ehud Re- iter. 2013. Generating expressions that refer to vis- ible objects. In Proceedings of NAACL-HLT, pages 1174-1184.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Improved alignment models for statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Tillmann", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Ney", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proc. of the Joint Conf. of Empirical Methods in Natural Language Processing and Very Large Corpora", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "20--28", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "F. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint Conf. of Empirical Methods in Natural Language Processing and Very Large Cor- pora, pages 20-28.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Minimum error rate training in statistical machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Franz Josef", |
|
"middle": [], |
|
"last": "Och", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "160--167", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL, pages 160- 167.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "Im2Text: Describing images using 1 million captioned photographs", |
|
"authors": [ |
|
{ |
|
"first": "Vicente", |
|
"middle": [], |
|
"last": "Ordonez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Girish", |
|
"middle": [], |
|
"last": "Kulkarni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tamara", |
|
"middle": [ |
|
"L" |
|
], |
|
"last": "Berg", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "NIPS", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1143--1151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vicente Ordonez, Girish Kulkarni, and Tamara L. Berg. 2011. Im2Text: Describing images using 1 million captioned photographs. In NIPS, pages 1143-1151.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "BLEU: a method for automatic evaluation of machine translation", |
|
"authors": [ |
|
{ |
|
"first": "Kishore", |
|
"middle": [], |
|
"last": "Papineni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Salim", |
|
"middle": [], |
|
"last": "Roukos", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Todd", |
|
"middle": [], |
|
"last": "Ward", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wei-Jing", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proc. of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2001. BLEU: a method for automatic evaluation of machine translation. In Proc. of ACL.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Collecting image annotations using Amazon's Mechanical Turk", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Rashtchian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Young", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Hodosh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Hockenmaier", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "139--147", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Rashtchian, P. Young, M. Hodosh, and J. Hock- enmaier. 2010. Collecting image annotations us- ing Amazon's Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechan- ical Turk, pages 139-147.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Expanded parts model for human attribute and action recognition in still images", |
|
"authors": [ |
|
{ |
|
"first": "Gaurav", |
|
"middle": [], |
|
"last": "Sharma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fr\u00e9d\u00e9ric", |
|
"middle": [], |
|
"last": "Jurie", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Cordelia", |
|
"middle": [], |
|
"last": "Schmid", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CVPR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gaurav Sharma, Fr\u00e9d\u00e9ric Jurie, Cordelia Schmid, et al. 2013. Expanded parts model for human attribute and action recognition in still images. In CVPR.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Grounded models of semantic representation", |
|
"authors": [ |
|
{ |
|
"first": "Carina", |
|
"middle": [], |
|
"last": "Silberer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carina Silberer and Mirella Lapata. 2012. Grounded models of semantic representation. In EMNLP, July.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Models of semantic representation with visual attributes", |
|
"authors": [ |
|
{ |
|
"first": "Carina", |
|
"middle": [], |
|
"last": "Silberer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vittorio", |
|
"middle": [], |
|
"last": "Ferrari", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "572--582", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. In ACL, pages 572-582.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "LabelMe: Online image annotation and applications", |
|
"authors": [ |
|
{ |
|
"first": "Antonio", |
|
"middle": [], |
|
"last": "Torralba", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Bryan", |
|
"middle": [ |
|
"C" |
|
], |
|
"last": "Russell", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jenny", |
|
"middle": [], |
|
"last": "Yuen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the IEEE", |
|
"volume": "98", |
|
"issue": "8", |
|
"pages": "1467--1484", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Antonio Torralba, Bryan C Russell, and Jenny Yuen. 2010. LabelMe: Online image annotation and appli- cations. Proceedings of the IEEE, 98(8):1467-1484.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "A survey of vision-based methods for action representation, segmentation and recognition. Computer Vision and Image Understanding", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Weinland", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Remi", |
|
"middle": [], |
|
"last": "Ronfard", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Edmond", |
|
"middle": [], |
|
"last": "Boyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "", |
|
"volume": "115", |
|
"issue": "", |
|
"pages": "224--241", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Weinland, Remi Ronfard, and Edmond Boyer. 2011. A survey of vision-based methods for action representation, segmentation and recognition. Com- puter Vision and Image Understanding, 115(2):224- 241.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "Corpus-guided sentence generation of natural images", |
|
"authors": [ |
|
{ |
|
"first": "Yezhou", |
|
"middle": [], |
|
"last": "Yang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ching", |
|
"middle": [ |
|
"Lik" |
|
], |
|
"last": "Teo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hal", |
|
"middle": [], |
|
"last": "Daum\u00e9", |
|
"suffix": "III" |
|
}, |
|
{ |
|
"first": "Yiannis", |
|
"middle": [], |
|
"last": "Aloimonos", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yezhou Yang, Ching Lik Teo, Hal Daum\u00e9 III, and Yian- nis Aloimonos. 2011. Corpus-guided sentence gen- eration of natural images. In Empirical Methods in Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Action recognition by learning bases of action attributes and parts", |
|
"authors": [ |
|
{ |
|
"first": "Bangpeng", |
|
"middle": [], |
|
"last": "Yao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Xiaoye", |
|
"middle": [], |
|
"last": "Jiang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aditya", |
|
"middle": [], |
|
"last": "Khosla", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andy", |
|
"middle": [ |
|
"Lai" |
|
], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leonidas", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Guibas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Li", |
|
"middle": [], |
|
"last": "Fei-Fei", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "ICCV", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bangpeng Yao, Xiaoye Jiang, Aditya Khosla, Andy Lai Lin, Leonidas J. Guibas, and Li Fei-Fei. 2011. Action recognition by learning bases of action at- tributes and parts. In ICCV, Barcelona, Spain, November.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "Grounded language learning from video described with sentences", |
|
"authors": [ |
|
{ |
|
"first": "Haonan", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jeffrey", |
|
"middle": [ |
|
"Mark" |
|
], |
|
"last": "Siskind", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "1", |
|
"issue": "", |
|
"pages": "53--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Haonan Yu and Jeffrey Mark Siskind. 2013. Grounded language learning from video described with sen- tences. In Proceedings of the 51st Annual Meet- ing of the Association for Computational Linguis- tics, volume 1, pages 53-63.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Animacy encoding in English: why and how", |
|
"authors": [ |
|
{ |
|
"first": "Annie", |
|
"middle": [], |
|
"last": "Zaenen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jean", |
|
"middle": [], |
|
"last": "Carletta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Gregory", |
|
"middle": [], |
|
"last": "Garretson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joan", |
|
"middle": [], |
|
"last": "Bresnan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Koontz-Garboden", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tatiana", |
|
"middle": [], |
|
"last": "Nikitina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"Catherine" |
|
], |
|
"last": "O'Connor", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tom", |
|
"middle": [], |
|
"last": "Wasow", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "ACL Workshop on Discourse Annotation", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "118--125", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Annie Zaenen, Jean Carletta, Gregory Garretson, Joan Bresnan, Andrew Koontz-Garboden, Tatiana Nikitina, M Catherine O'Connor, and Tom Wasow. 2004. Animacy encoding in English: why and how. In ACL Workshop on Discourse Annotation, pages 118-125.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems", |
|
"authors": [ |
|
{ |
|
"first": "Omar", |
|
"middle": [ |
|
"F" |
|
], |
|
"last": "Zaidan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "The Prague Bulletin of Mathematical Linguistics", |
|
"volume": "91", |
|
"issue": "", |
|
"pages": "79--88", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Omar F. Zaidan. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics, 91:79-88.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Bringing semantics into focus using visual abstraction", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"Lawrence" |
|
], |
|
"last": "Zitnick", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Devi", |
|
"middle": [], |
|
"last": "Parikh", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "CVPR", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C. Lawrence Zitnick and Devi Parikh. 2013. Bring- ing semantics into focus using visual abstraction. In CVPR.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"text": "One path through the generative model and the Bayesian network it induces. The first row of colored circles are alignment variables to objects in the image. The second row is words, generated conditioned on alignments.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF1": { |
|
"text": "computed with the Stanford parser (de Marneffe et al., 2006), and images 1. for a main clause (d,e are optional), select: (a) subject as alignment from pa(a). (b) subject word ws from pn(w | as, dc) (c) verb word wv from pv(w | as, dc) (d) object alignment ao from pa(a | as, wv, dc)", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
|
"FIGREF2": { |
|
"text": "Generative process for producing words w, alignments a and dependencies d. Each distribution is conditioned on the partially complete path through generative process dc to establish sentence context. The notation pstop is short hand for pstop(ST OP | w, dc) the stopping distribution.", |
|
"uris": null, |
|
"num": null, |
|
"type_str": "figure" |
|
}, |
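As a reading aid, the following sketch (illustrative only; the distributions are stand-in callables rather than the paper's learned models, and the step labels follow the figure caption above) walks through the main-clause steps: draw a subject alignment, a subject word, a verb word, and then an object alignment, each conditioned on the partial generation history dc.

```python
# Illustrative sketch of the main-clause generation steps (a)-(d):
# each step samples from a conditional distribution given the partial
# generation history d_c. The distributions are stand-in callables that
# return {outcome: probability} maps, not the paper's learned models.
import random

def sample(dist, context):
    # Draw one outcome from the distribution dist(. | context).
    outcomes, probs = zip(*dist(context).items())
    return random.choices(outcomes, weights=probs, k=1)[0]

def generate_main_clause(p_a, p_n, p_v, p_a_obj, d_c):
    a_s = sample(p_a, d_c)                  # (a) subject alignment from p_a(a)
    w_s = sample(p_n, (a_s, d_c))           # (b) subject word from p_n(w | a_s, d_c)
    w_v = sample(p_v, (a_s, d_c))           # (c) verb word from p_v(w | a_s, d_c)
    a_o = sample(p_a_obj, (a_s, w_v, d_c))  # (d) object alignment from p_a(a | a_s, w_v, d_c)
    return a_s, w_s, w_v, a_o
```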
|
"TABREF1": { |
|
"text": "", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"text": "Ablation results on development data using BLEU1-4 and reporting match accuracy for sentence structures.", |
|
"content": "<table><tr><td>S: A girl playing a</td></tr><tr><td>guitar in the grass</td></tr><tr><td>R: A woman with a nylon stringed</td></tr><tr><td>guitar is playing in a field</td></tr><tr><td>S: A man playing with two</td></tr><tr><td>dogs in the water</td></tr><tr><td>R: A man is throwing a log into</td></tr><tr><td>a waterway while two dogs watch</td></tr><tr><td>S: Two men playing with</td></tr><tr><td>a bench in the grass</td></tr><tr><td>R: Nine men are playing a game</td></tr><tr><td>in the park, shirts versus skins</td></tr><tr><td>S: Three kids sitting on a road</td></tr><tr><td>R: A boy runs in a race</td></tr><tr><td>while onlookers watch</td></tr></table>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"text": "Two good examples of output (top), and two ex-", |
|
"content": "<table/>", |
|
"html": null, |
|
"num": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |