{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:42:46.632150Z"
},
"title": "ECOL-R: Encouraging Copying in Novel Object Captioning with Reinforcement Learning",
"authors": [
{
"first": "Yufei",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CSIRO Data61",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Ian",
"middle": [
"D"
],
"last": "Wood",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "CSIRO Data61",
"location": {
"settlement": "Sydney",
"country": "Australia"
}
},
"email": "[email protected]"
},
{
"first": "Stephen",
"middle": [],
"last": "Wan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Oracle Corporation",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Novel Object Captioning is a zero-shot Image Captioning task requiring describing objects not seen in the training captions, but for which information is available from external object detectors. The key challenge is to select and describe all salient detected novel objects in the input images. In this paper, we focus on this challenge and propose the ECOL-R model (Encouraging Copying of Object Labels with Reinforced Learning), a copy-augmented transformer model that is encouraged to accurately describe the novel object labels. This is achieved via a specialised reward function in the SCST reinforcement learning framework (Rennie et al., 2017) that encourages novel object mentions while maintaining the caption quality. We further restrict the SCST training to the images where detected objects are mentioned in reference captions to train the ECOL-R model. We additionally improve our copy mechanism via Abstract Labels, which transfer knowledge from known to novel object types, and a Morphological Selector, which determines the appropriate inflected forms of novel object labels. The resulting model sets new state-of-the-art on the nocaps (Agrawal et al., 2019) and held-out COCO (Hendricks et al., 2016) benchmarks.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Novel Object Captioning is a zero-shot Image Captioning task requiring describing objects not seen in the training captions, but for which information is available from external object detectors. The key challenge is to select and describe all salient detected novel objects in the input images. In this paper, we focus on this challenge and propose the ECOL-R model (Encouraging Copying of Object Labels with Reinforced Learning), a copy-augmented transformer model that is encouraged to accurately describe the novel object labels. This is achieved via a specialised reward function in the SCST reinforcement learning framework (Rennie et al., 2017) that encourages novel object mentions while maintaining the caption quality. We further restrict the SCST training to the images where detected objects are mentioned in reference captions to train the ECOL-R model. We additionally improve our copy mechanism via Abstract Labels, which transfer knowledge from known to novel object types, and a Morphological Selector, which determines the appropriate inflected forms of novel object labels. The resulting model sets new state-of-the-art on the nocaps (Agrawal et al., 2019) and held-out COCO (Hendricks et al., 2016) benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Novel Object Captioning is a zero-shot Image Captioning task where the captions should mention novel objects (i.e., not seen in the training captions), but for which information is available from external object detectors. To produce high-quality captions, the captioning models should select and describe all salient detected objects and avoid mentioning minor or irrelevant details in the input images. As shown in Figure 1 , caption A is the best caption among the three because A mentions all salient objects in the images without any unnecessary details while B mentions Bread which is just a Figure 1 : Caption A is the ground-truth caption for the image. Compared with B and C, A is the best caption because it mentions all salient objects (i.e, Hamburger, French Fries and Drinks). We use Abstract Labels, that is hypernyms of the objects' detected object labels in the object representations, transferring knowledge from the objects seen in the training captions to novel objects. Our copy mechanism also selects appropriate inflected forms of object labels (i.e., Hamburgers vs. Hamburger). minor detail; and C misses the salient object Drink. This paper aims to develop a captioning model that produces caption A.",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 1",
"ref_id": null
},
{
"start": 598,
"end": 606,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We use an advanced copy mechanism, similar to the one in , to effectively integrate novel objects. We follow the setup in and use two object detectors: one providing rich object visual features and another providing task specific (including novel) object labels as copy candidates. Our preliminary experiments show that the copy mechanism is infrequently triggered and unable to mention many salient objects in the input images. We propose the ECOL-R model (Encouraging Copying of Object Labels with Reinforced Learning), a copy-augmented transformer model trained in the Self-Critical Sequence Training (SCST) framework (Rennie et al., 2017) . SCST with a CIDEr reward (Vedantam et al., 2015 ) is a standard approach for training the captioning models (Anderson et al., 2018b) , but this paper will show that it does not sufficiently encourage the model to use copy operations. We design a new reward function that provides a reward for each copy operation proportional to the caption quality. We further restrict the SCST training to the images that contain at least one word in the ground truth captions that corresponds to one of the detected object labels. With these innovations, the ECOL-R model outperforms a SCST baseline and a strong inference encouragement baseline by a large margin.",
"cite_spans": [
{
"start": 621,
"end": 642,
"text": "(Rennie et al., 2017)",
"ref_id": "BIBREF24"
},
{
"start": 670,
"end": 692,
"text": "(Vedantam et al., 2015",
"ref_id": "BIBREF26"
},
{
"start": 753,
"end": 777,
"text": "(Anderson et al., 2018b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our copy mechanism and caption generator incorporate two enhancements to better choose and incorporate novel objects: a) Abstract Labels which correspond to hypernyms of the object labels and facilitate knowledge transfer between objects appearing in training captions and novel objects; b) a Morphological Selector which determines the correct inflected form of the copied task specific object labels which is similar in purpose to that proposed in (Lu et al., 2018b) .",
"cite_spans": [
{
"start": 450,
"end": 468,
"text": "(Lu et al., 2018b)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate the ECOL-R model on the novel object captioning benchmark nocaps and held-out COCO (Hendricks et al., 2016) . The ECOL-R model achieves a new state of the art on both benchmarks and generalizes well to in-domain images.",
"cite_spans": [
{
"start": 95,
"end": 119,
"text": "(Hendricks et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Popular Image Captioning models include LSTMbased (Anderson et al., 2018b) and Transformerbased decoders (Herdade et al., 2019; Cornia et al., 2020) . The visual encoders are often neural object detectors (Anderson et al., 2018b; producing Region-of-Interest (ROI) vectors. To train the model to copy novel object labels, the Neural Baby Talk model (NBT) (Lu et al., 2018a) and follow-up work (Wu et al., 2018; Yao et al., 2017; Li et al., 2019) use copy mechanisms (Vinyals et al., 2015) . The copying candidates are labels of salient objects produced by external object detectors. In this paper, we follow previous work by using the Visual Genome object detector from (Anderson et al., 2018b) as the visual feature extractor and a task specific object detector to provide object labels for copying.",
"cite_spans": [
{
"start": 50,
"end": 74,
"text": "(Anderson et al., 2018b)",
"ref_id": "BIBREF5"
},
{
"start": 105,
"end": 127,
"text": "(Herdade et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 128,
"end": 148,
"text": "Cornia et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 205,
"end": 229,
"text": "(Anderson et al., 2018b;",
"ref_id": "BIBREF5"
},
{
"start": 355,
"end": 373,
"text": "(Lu et al., 2018a)",
"ref_id": "BIBREF20"
},
{
"start": 393,
"end": 410,
"text": "(Wu et al., 2018;",
"ref_id": "BIBREF29"
},
{
"start": 411,
"end": 428,
"text": "Yao et al., 2017;",
"ref_id": "BIBREF31"
},
{
"start": 429,
"end": 445,
"text": "Li et al., 2019)",
"ref_id": "BIBREF17"
},
{
"start": 466,
"end": 488,
"text": "(Vinyals et al., 2015)",
"ref_id": "BIBREF27"
},
{
"start": 670,
"end": 694,
"text": "(Anderson et al., 2018b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "These models are typically trained with the Cross-Entropy loss (CE). This creates a mismatch between the training and testing environments because the evaluation metrics are non-differentiable text-based measures (Ranzato et al., 2015) . Self-Critical Sequence Training (SCST) (Rennie et al., 2017) was proposed to address this issue by directly optimizing the inference output using caption-level rewards, such as CIDEr-D (Vedantam et al., 2015) .",
"cite_spans": [
{
"start": 213,
"end": 235,
"text": "(Ranzato et al., 2015)",
"ref_id": "BIBREF23"
},
{
"start": 415,
"end": 446,
"text": "CIDEr-D (Vedantam et al., 2015)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There are two existing novel object captioning benchmarks: a) the held-out COCO Benchmark (Hendricks et al., 2016) , constructed by excluding images containing one of eight selected object classes from the standard COCO 2014 benchmark, and b) nocaps , which uses the COCO 2017 benchmark for training and provides new validation and test images from the Open Images Dataset with over 400 novel objects. Both benchmarks are object-centric and there is no reliable benchmarks that systematically evaluate the quality of generated actions or attributes. Figure 2 provides an overview of the ECOL-R model. We refer to the ECOL-R model without SCST training as ECOL. We describe this model in Sec. 3.1 and our novel reinforced copy encouragement training in Sec. 3.2.",
"cite_spans": [
{
"start": 90,
"end": 114,
"text": "(Hendricks et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 550,
"end": 558,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Input Image Objects: Following the setup in , we use two object detectors: the Visual Genome object detector from Anderson et al. (2018b) , producing image objects and regions G (represented by embedding vectors [x g 1 , . . . , x g k g ]) with detailed visual features; and a task specific object detector, producing image objects",
"cite_spans": [
{
"start": 114,
"end": 137,
"text": "Anderson et al. (2018b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "F (represented by [x f 1 , . . . , x f k f ])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "and their corresponding labels L f = [l 1 , . . . , l k f ] used as copy candidates during caption generation. We will introduce object representations x i below and define them in Eq. 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "Image Object Representations: Following Anderson et al. (2018b); Lu et al. (2018a), we represent both sets of objects with Region-Of-Interest (ROI, r i \u2208 R 2048 ) vectors from the Visual Genome object detector and object positional features (p i \u2208 R 8 ), including bounding box coordinates and size, and an object label confidence score. In addition, Figure 2 : Overview of the ECOL-R Model. X is the concatenated Object representations G and F from the two object detectors. The Transformer encoder produces H and the decoder provides h t at step t. We then estimate the probabilities for generating each vocabulary word (yellow box) and copying from each task specific image object (green box). The results are concatenated and jointly softmax (red box). We refine each copy probability into the concrete inflected word probability in MSelector . The final output P (y t ) concatenates all above word probabilities.",
"cite_spans": [],
"ref_spans": [
{
"start": 351,
"end": 359,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "to transfer knowledge from the seen objects to the novel ones, we propose Abstract Labels for the task specific objects, described below. Abstract Labels: The task specific object detectors we use provide taxonomies of object classes, and every detected object is assigned a label from that taxonomy. More general object classes conceptually include all the labels lower in the taxonomy. 1 This provides us with a mechanism for associating class labels not present in the training data with those that do occur in the training data by mapping them to a common ancestor in the hierarchy. Inspired by Ciaramita and Johnson (2003) , we define Abstract Labels to be a fixed set of ancestor class labels that spans the entire taxonomy (see Figure 3 ). Using the abstract labels to drive copy decisions allows the usage of known object types to inform the word generation of novel objects. Each object from the task specific detector is associated with its nearest abstract label ancestor. We choose the set of abstract labels such that the objects in the training data are evenly distributed across the set of abstract labels. We represent abstract labels with trainable embeddings e i \u2208 R d , where d is the hidden size of our base model. We use the Open Images V4 class hierarchy for the nocaps benchmark and a merged 8 coco super-categories hierarchy Lin et al. (2014) for the held-out COCO benchmark. The Figure 3 : A part of the class hierarchy from the Open Images V4 Dataset (Kuznetsova et al., 2018) . The green nodes are used as abstract object labels. For each label, its abstract label is its closest green ancestor.",
"cite_spans": [
{
"start": 599,
"end": 627,
"text": "Ciaramita and Johnson (2003)",
"ref_id": "BIBREF8"
},
{
"start": 1349,
"end": 1366,
"text": "Lin et al. (2014)",
"ref_id": "BIBREF19"
},
{
"start": 1477,
"end": 1502,
"text": "(Kuznetsova et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 735,
"end": 743,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1404,
"end": 1412,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
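{
"text": "To make the abstract label lookup concrete, the following is a minimal, illustrative Python sketch (not the authors' released code) of mapping a detected label to its nearest abstract-label ancestor; the toy PARENT taxonomy, the ABSTRACT_LABELS set and the Entity fallback are assumptions for illustration only.\n\n# Minimal sketch: map each detected label to its nearest abstract-label ancestor.\n# The taxonomy below is a toy example, not the real Open Images V4 hierarchy.\nPARENT = {\n    \"Hamburger\": \"Fast food\",\n    \"French fries\": \"Fast food\",\n    \"Fast food\": \"Food\",\n    \"Food\": None,\n}\nABSTRACT_LABELS = {\"Fast food\", \"Food\"}\n\ndef abstract_label(label):\n    # Walk up the hierarchy until an abstract label is reached.\n    node = label\n    while node is not None:\n        if node in ABSTRACT_LABELS:\n            return node\n        node = PARENT.get(node)\n    return \"Entity\"  # assumed fallback for labels outside the toy taxonomy\n\nprint(abstract_label(\"Hamburger\"))  # -> Fast food",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},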
{
"text": "final representation for each object x i is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "x i = LN (W r r i + e i ) + LN (W p p i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where LN is layer normalization and W r \u2208 R d\u00d72048 , W p \u2208 R d\u00d78 are trainable projections. The two sets of object representations are concatenated as X = F G where represents concatenation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
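{
"text": "As a concrete reference for Eq. 1, the following is a minimal PyTorch sketch (not the released implementation); the hidden size d = 768 follows the text, while the number of abstract labels (50) and the random toy inputs are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass ObjectEncoder(nn.Module):\n    # Sketch of Eq. 1: x_i = LN(W_r r_i + e_i) + LN(W_p p_i)\n    def __init__(self, d=768, n_abstract_labels=50):\n        super().__init__()\n        self.w_r = nn.Linear(2048, d, bias=False)               # W_r\n        self.w_p = nn.Linear(8, d, bias=False)                  # W_p\n        self.abstract_emb = nn.Embedding(n_abstract_labels, d)  # e_i\n        self.ln_vis = nn.LayerNorm(d)\n        self.ln_pos = nn.LayerNorm(d)\n\n    def forward(self, roi, pos, abstract_ids):\n        # roi: (k, 2048) ROI vectors; pos: (k, 8) positional features; abstract_ids: (k,)\n        return self.ln_vis(self.w_r(roi) + self.abstract_emb(abstract_ids)) + self.ln_pos(self.w_p(pos))\n\nenc = ObjectEncoder()\nx = enc(torch.randn(5, 2048), torch.randn(5, 8), torch.zeros(5, dtype=torch.long))  # (5, 768)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},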
{
"text": "Transformer Base Model: We use a transformer model (Vaswani et al., 2017) with an N enclayer encoder and an N dec -layer decoder (N enc = N dec = 3 in our experiments). We denote the encoder output H = Encoder (X). The decoder uses frozen word and positional embeddings WE and PE from GPT2 (Radford et al., 2019) which are helpful in producing better captions describing novel objects. In step t:",
"cite_spans": [
{
"start": 51,
"end": 73,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
},
{
"start": 290,
"end": 312,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w 1:t\u22121 = WE (y 1:t\u22121 ) + PE (y 1:t\u22121 ) (2) h t = Decoder (H, w 1:t\u22121 )",
"eq_num": "(3)"
}
],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where y 1:t\u22121 is the generation history and h t \u2208 R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
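{
"text": "The following is a minimal PyTorch sketch of the decoder step in Eqs. 2-3 (not the authors' implementation); the number of attention heads, the maximum length, and loading the actual frozen GPT2 weights are assumptions and are only indicated in comments.\n\nimport torch\nimport torch.nn as nn\n\nclass DecoderStep(nn.Module):\n    # Sketch of Eqs. 2-3 with frozen word/positional embeddings (WE, PE).\n    def __init__(self, d=768, vocab=50257, max_len=128, n_layers=3, n_heads=8):\n        super().__init__()\n        self.we = nn.Embedding(vocab, d)    # would be initialised from GPT2 and frozen\n        self.pe = nn.Embedding(max_len, d)  # would be initialised from GPT2 and frozen\n        for p in list(self.we.parameters()) + list(self.pe.parameters()):\n            p.requires_grad = False\n        layer = nn.TransformerDecoderLayer(d, n_heads, batch_first=True)\n        self.decoder = nn.TransformerDecoder(layer, n_layers)\n\n    def forward(self, H, y_prev):\n        # H: (1, k, d) encoder output; y_prev: (1, t-1) previously generated token ids\n        pos = torch.arange(y_prev.size(1), device=y_prev.device).unsqueeze(0)\n        w = self.we(y_prev) + self.pe(pos)   # Eq. 2\n        return self.decoder(w, H)[:, -1]     # h_t of Eq. 3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},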
{
"text": "Outputs With Copy Mechanism: The ECOL model either generates words from the vocabulary or copies from task specific objects. We deploy a copy mechanism similar to the dynamic pointer network in . Given the decoder output h t , we first calculate a raw score for each vocabulary word:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "VScore(h t ) = W e h t",
"eq_num": "(4)"
}
],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where W e \u2208 R |V |\u00d7d , |V | is the GPT2 vocabulary size. We then calculate raw additive attention scores over the encoder output of task specific image objects (i.e., H 1:k f ):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "OScore(H, h t ) i = w T c tanh(W f H i + W h h t ) (5) where i \u2208 [1, k f ] and W f \u2208 R d\u00d7d , W h \u2208 R d\u00d7d and w c \u2208 R d .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "Finally, we concatenate the raw scores from VScore and OScore and jointly softmax:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "[v t , c t ] = Softmax ([VScore(h t ) OScore(H, h t )]) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where represents concatenation. v t provides probabilities for GPT2 vocabulary words and c t provides probabilities for copying task specific object labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
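{
"text": "The following is a minimal PyTorch sketch of the joint softmax over vocabulary and copy scores in Eqs. 4-6 (an illustration, not the released code); the GPT2 vocabulary size of 50257 and the toy tensors are assumptions.\n\nimport torch\nimport torch.nn as nn\n\nclass CopyHead(nn.Module):\n    # Sketch of Eqs. 4-6: vocabulary scores and copy scores share one softmax.\n    def __init__(self, d=768, vocab=50257):\n        super().__init__()\n        self.w_e = nn.Linear(d, vocab, bias=False)  # VScore projection W_e\n        self.w_f = nn.Linear(d, d, bias=False)      # W_f\n        self.w_h = nn.Linear(d, d, bias=False)      # W_h\n        self.w_c = nn.Linear(d, 1, bias=False)      # w_c\n\n    def forward(self, h_t, H_obj):\n        # h_t: (d,) decoder state; H_obj: (k_f, d) encoder outputs of task specific objects\n        vscore = self.w_e(h_t)                                                      # Eq. 4\n        oscore = self.w_c(torch.tanh(self.w_f(H_obj) + self.w_h(h_t))).squeeze(-1)  # Eq. 5\n        probs = torch.softmax(torch.cat([vscore, oscore]), dim=-1)                  # Eq. 6\n        return probs[: vscore.size(0)], probs[vscore.size(0):]                      # v_t, c_t\n\nv_t, c_t = CopyHead()(torch.randn(768), torch.randn(4, 768))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},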
{
"text": "Morphological Selector: Object labels can appear in inflected forms in captions. For example, in Figure 1 , after selecting the object hamburger, the ECOL model should generate \"hamburgers\" after \"Two\". We propose a morphological selector (M Selector) to refine the copy probability of each task specific image object label l i (i.e., c t,i ) into the probabilities of generating all possible morphological forms y l i t (i.e., P (y l i t |l i )). Specifically, we use h t to choose an inflected form from its possible inflected forms (e.g., Singular or Plural in English):",
"cite_spans": [],
"ref_spans": [
{
"start": 97,
"end": 105,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y l i t |l i ) = softmax(W l i h t )",
"eq_num": "(7)"
}
],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "Here",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "W l i \u2208 R s i \u00d7d",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where s i is the number of inflected forms of label l i (in most cases 2 for English, singular and plural). Finally, the ECOL model concatenates the above refined probabilities as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "P (y v t ) = v t (8) P (y l i t ) = c t,i \u2022 P (y l i t |l i ) (9) P (y t ) = P (y v t ) P (y l 1 t ) \u2022 \u2022 \u2022 P (y l k f t ) (10)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "where represents concatenation. Some novel object labels are included in the GPT2 vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "However, these words are not present in the training captions and thus the model always assigns them very low probabilities in P (y v t ). The only way novel object labels can appear in captions is through copy operations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
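{
"text": "The following is a minimal PyTorch sketch of Eqs. 7-10 (illustrative only); the per-label projection heads, the number of inflected forms (two), and the toy inputs are assumptions.\n\nimport torch\nimport torch.nn as nn\n\ndef final_distribution(v_t, c_t, h_t, morph_heads):\n    # Sketch of Eqs. 7-10: refine each copy probability c_{t,i} into probabilities\n    # over the inflected forms of label l_i, then concatenate everything.\n    pieces = [v_t]                                # P(y^v_t) = v_t          (Eq. 8)\n    for i, w_li in enumerate(morph_heads):        # w_li plays the role of W_{l_i}\n        morph = torch.softmax(w_li(h_t), dim=-1)  # P(y^{l_i}_t | l_i)      (Eq. 7)\n        pieces.append(c_t[i] * morph)             # P(y^{l_i}_t)            (Eq. 9)\n    return torch.cat(pieces)                      # P(y_t)                  (Eq. 10)\n\n# toy usage: two detected labels, two inflected forms (singular/plural) each\nheads = [nn.Linear(768, 2, bias=False) for _ in range(2)]\np_t = final_distribution(torch.full((50257,), 1e-5), torch.tensor([0.2, 0.1]),\n                         torch.randn(768), heads)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},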
{
"text": "Model Application Scope In this paper, we focus on the Novel Object Captioning task. However, in general, our copy mechanism is capable of copying any type of information. The Abstract Label approach is general to zero shot learning problems where novel items share characteristics with training instances. The Morphological Selector is also applicable to linguistic copy mechanisms in other contexts such as Commonsense Reasoning where copied terms may require linguistic alignment with the generated text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The ECOL Model",
"sec_num": "3.1"
},
{
"text": "In this paper, we encourage the copying of object labels by using a suitable reward function in the Self-Critical Sequence Training (SCST) framework, which has proven effective for image captioning tasks. Compared with injecting additional loss terms together with the standard XE loss, using the SCST framework allows us to design arbitrary encouragement signals based on the inference output. It minimizes the negative expected reward score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L R (\u03b8) = \u2212E y 1:T \u223cp \u03b8 [r(y 1:T )]",
"eq_num": "(11)"
}
],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "where r is the reward function and p \u03b8 represents the models outputs. In this paper, following Cornia et al. (2020), we first pre-train the ECOL model with the CE loss, then switch to fine-tune the ECOL model with the above SCST loss.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "Inference Bias Baseline: We add an Inference Bias (IB) b \u2208 R + to increase P (y l i t ) at inference time. Eq. 9 is changed to:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (y l i t ) = b \u2022 c t,i \u2022 P (y l i t |l i )",
"eq_num": "(12)"
}
],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "and remaining probabilities normalised accordingly. IB is functionally equivalent to adjusting the threshold for the copy decision during inference. Surprisingly, this simple inference trick provides a strong baseline (see Table 3 ). This shows that after the CE training, many correct copy operations are assigned with low probabilities, compared to the fixed vocabulary items. However, we believe that it is better to train the model to increase the probabilities of these copy operations than adding ad hoc adjustments at inference time.",
"cite_spans": [],
"ref_spans": [
{
"start": 223,
"end": 230,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
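{
"text": "The following is a minimal sketch of the Inference Bias adjustment in Eq. 12 (for illustration, not the authors' code); the bias value b = 2.0 and the toy tensors are hypothetical.\n\nimport torch\n\ndef apply_inference_bias(v_t, copy_pieces, b=2.0):\n    # Eq. 12 sketch: scale every copy-derived probability P(y^{l_i}_t) by b > 1,\n    # then renormalise the whole output distribution.\n    boosted = [b * p for p in copy_pieces]\n    full = torch.cat([v_t] + boosted)\n    return full / full.sum()\n\np_t = apply_inference_bias(torch.full((50257,), 1e-5), [torch.tensor([0.1, 0.05])])",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},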
{
"text": "Can Standard SCST Encourage Copying? Rennie et al. 2017shows that SCST with the CIDEr reward fine-tuning leads to noticeable caption quality improvement for standard image captioning tasks (i.e, improvement in various automatic metrics). Previous work (Cornia et al., 2020; Anderson et al., 2018b) use CIDEr as the standard reward function in their SCST optimization. This shows suggests that the problem of overfitting of SCST training with CIDEr reward is minimal. Intuitively, the CIDEr reward is positively correlated with the number of salient object label mentions and should encourage the model to copy salient novel object labels. However, CIDEr equally rewards both generation of object labels present in training data via the vocabulary P (y v t ) and via copy operations P (y l i t ). Novel objects labels however can only be generated by copy operations (see Sec. 3.1), thus the CIDEr reward function does not sufficiently encourage these operations. We propose two orthogonal modifications to the standard SCST algorithm to address this issue:",
"cite_spans": [
{
"start": 252,
"end": 273,
"text": "(Cornia et al., 2020;",
"ref_id": "BIBREF9"
},
{
"start": 274,
"end": 297,
"text": "Anderson et al., 2018b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "Novel Encouragement Reward: We propose combining the standard CIDEr-D reward with a reward function that gives captions with words copied from object labels an extra bonus, which we intend to encourage copy operations. One straightforward way to implement this idea is to provide a constant bonus to each triggered copy operation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "R a (X) = CIDEr (X) + a * C (13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "where X is a generated caption, C is the number of copy actions in the caption X and a \u2208 R + is a fixed hyper-parameter. We refer this as additive bias. Optimizing with the additive bias, the captioning model only needs to trigger the copy operation for arbitrary objects at arbitrary generation steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "That is, the model may encourage copying object labels at the expense of caption quality (i.e., high CIDEr-D scores). Therefore, we propose a proportional bias that assigns different rewards to the copy operations in different images by making a connection between the copy bonus and the generated captions CIDEr-D score:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R p (X) = CIDEr (X) * (1.0 + p * C)",
"eq_num": "(14)"
}
],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "where p \u2208 R + is a fixed hyper-parameter. Although R a can effectively encourage the model to copy objects, it may introduce noisy object mentions. R p penalizes those noisy object mentions via the low caption CIDEr score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
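{
"text": "The two reward variants can be summarised by the short sketch below (illustrative only; the CIDEr value is assumed to come from an external scorer, and the self-critical loss comment only indicates how the reward would be used in SCST).\n\ndef additive_reward(cider, num_copies, a=0.3):\n    # Eq. 13 (R_a): a constant bonus a for every triggered copy operation.\n    return cider + a * num_copies\n\ndef proportional_reward(cider, num_copies, p=0.3):\n    # Eq. 14 (R_p): the copy bonus is proportional to the caption-level CIDEr score.\n    return cider * (1.0 + p * num_copies)\n\n# SCST-style usage (sketch): weight the sampled caption's log-probability by the\n# advantage over the greedy baseline, e.g.\n#   loss = -(proportional_reward(cider_sample, c_sample)\n#            - proportional_reward(cider_greedy, c_greedy)) * logprob_sample",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},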
{
"text": "Visual Object Aligned (VOA) Images: VOA Images refers to the set of training images where the reference captions contain at least one word from retained object labels. During SCST training, images that contain no object label words (i.e., non-VOA images) will not utilise copy operations, thus these images encourage the model NOT to copy. VOA images account for approximately 70% of the full COCO-2017 training images set. Although restricting training to VOA images can be done with arbitrary models, this may hurt the diversity of generated captions. Experiments in Table 3 confirm that restricting to VOA images only improves performance when used with SCST training.",
"cite_spans": [],
"ref_spans": [
{
"start": 569,
"end": 576,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
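{
"text": "A minimal sketch of the VOA filter follows (illustrative only); the caption/label data structures and the variants mapping from a label to its surface forms are assumptions.\n\ndef is_voa(reference_captions, detected_labels, variants):\n    # Keep an image for SCST training only if at least one reference caption\n    # mentions a detected object label (or one of its inflected variants).\n    words = {w.lower().strip(\".,\") for cap in reference_captions for w in cap.split()}\n    return any(words & variants.get(label, {label.lower()}) for label in detected_labels)\n\ncaps = [\"Two dogs play in the park.\", \"A dog runs on the grass.\"]\nprint(is_voa(caps, [\"Dog\"], {\"Dog\": {\"dog\", \"dogs\"}}))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},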
{
"text": "Hyper-Parameters For Copy Encouragement:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "The above approaches introduce two additional parameters: a and p. In our experiments, a and p range over 0.2, 0.3 and 0.4; we found that 0.3 works the best for both reward functions. Combined with restricting SCST training to VOA images, R p works better than R a and sets a new SOTA for novel object image captioning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Copying More Object Labels",
"sec_num": "3.2"
},
{
"text": "We conduct experiments on the nocaps and the held-out COCO (Hendricks et al., 2016) Benchmark. We set the layer and embedding size to d = 768 and use Adam optimisation (Kingma and Ba, 2014). We train our models 15 epochs with batch size 100 for CE loss and 15 epochs with batch size 10 for SCST loss.",
"cite_spans": [
{
"start": 59,
"end": 83,
"text": "(Hendricks et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We use CIDEr (Vedantam et al., 2015), SPICE (Anderson et al., 2016a) and METEOR (Banerjee and Lavie, 2005) to evaluate the caption quality. CIDEr measures the similarity between the reference captions and generated outputs using tf-idf weighted ngram overlap. SPICE is based on the scene graphs matching between the reference captions and generated outputs. METEOR focuses on the alignment between the words in reference captions and generated outputs, with an aim of 1:1 correspondence.",
"cite_spans": [
{
"start": 44,
"end": 68,
"text": "(Anderson et al., 2016a)",
"ref_id": "BIBREF1"
},
{
"start": 80,
"end": 106,
"text": "(Banerjee and Lavie, 2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "To measure the effectiveness of our copy encouragement approach, we report object F1 (Anderson et al., 2017) in the held-out COCO Benchmark. As the nocaps benchmark does not release its groundtruth captions, we instead report averaged number of mentioned objects (Ave. O) and CIDEr score for dummy captions that only contain copied object words (Object CIDEr, OC., details see Appendix). ",
"cite_spans": [
{
"start": 85,
"end": 108,
"text": "(Anderson et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "We compare our models ECOL + IB and ECOL-R with other state-of-the-art systems in Tables 1 and 2. On the nocaps benchmark (Table 1) , our models outperform previous work, including the recently proposed OSCAR L + CBS + SCST model (Li et al., 2020) , which is fine-turned from the BERT-LARGE model (Devlin et al., 2019) , by 2.0 CIDEr, 0.9 SPICE and set a new state of the art. Compared with the OSCAR L model, our models use far fewer model parameters (340M vs. 60M) and outperforms OSCAR L on both CIDEr and SPICE metrics. We train our model for about 10 hours for CE Loss and 24 hours for SCST Loss using a single Nvidia P100 GPU. As a comparison, the OSCAR L model which is fine-tuned from BERT-LARGE uses 60 -90 hours for training CE Loss and 60 -90 hours for training SCST Loss. 2 This shows that simply deploying a BERT-based language model is not sufficient for the Novel Object Captioning task.",
"cite_spans": [
{
"start": 231,
"end": 248,
"text": "(Li et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 298,
"end": 319,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 82,
"end": 98,
"text": "Tables 1 and 2.",
"ref_id": "TABREF1"
},
{
"start": 123,
"end": 132,
"text": "(Table 1)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with the State-of-the-art",
"sec_num": "4.2"
},
{
"text": "On the held-out COCO benchmark (Table 2) , the ECOL-R model produces more novel objects (+ 13.3 Object F1) and higher quality captions (+ 3.9 CIDEr on the out-of-domain split) than the ECOL model with run-time Inference Bias. Compared with previous work, the ECOL-R model achieves 10.9 CIDEr and 1.9 SPICE higher in the out-ofdomain split, 21.2 CIDEr and 2.8 SPICE higher in the in-domain split with the highest object F1. This shows that our copy encouragement approach successfully trains our model to correctly copy more novel objects and to produce high-quality captions. Compared with PS3 (Anderson et al., 2018a) and FDM-net model (Cao et al., 2020) which are trained on extra images containing novel objects and scene graphs, our models still outperform the PS3 model and 13.9 CIDEr higher than the FDMnet. We set a new state of the art in this benchmark without additional novel objects information. Table 3 presents ablation results for various ECOL-R components, including our copy encouragement approach. Table 4 shows that our encouragement of copying in the ECOL-R model does not benefit from additional Inference Bias. Table 5 shows the effect of Abstract Labels and the Morphological Selector in the ECOL-R model. Finally, Table 6 confirms the ECOL-R model's generalization ability for in-domain COCO images.",
"cite_spans": [
{
"start": 594,
"end": 618,
"text": "(Anderson et al., 2018a)",
"ref_id": "BIBREF4"
},
{
"start": 637,
"end": 655,
"text": "(Cao et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 31,
"end": 40,
"text": "(Table 2)",
"ref_id": "TABREF1"
},
{
"start": 908,
"end": 915,
"text": "Table 3",
"ref_id": "TABREF3"
},
{
"start": 1016,
"end": 1023,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 1133,
"end": 1140,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1238,
"end": 1245,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Comparison with the State-of-the-art",
"sec_num": "4.2"
},
{
"text": "The ECOL model produces better captions using the frozen GPT2 parameters (row 1 vs. 2). Our copy mechanism (C) helps the model to explicitly integrate novel objects, substantially improving the out-of-domain split by 15.3 CIDEr and 0.3 SPICE (row 2 vs. 3). The Inference Bias (IB) introduces noticeable performance improvement: 8.4 CIDEr and 0.3 SPICE (row 3 vs. 4) in models that do not use our reinforcement learning approach. The ECOL model trained with the standard SCST reward function obtains an overall 8.1 CIDEr improvement, but most of the improvement is from the in-domain and neardomain splits (row 8 vs. 6). Compared with the ECOL + IB model, the ECOL model trained with standard SCST algorithm is 8.1 CIDEr lower in the out-of-domain split (row 5 vs. 4). As discussed in Sec. 3.2, standard SCST cannot provide sufficient copy encouragement as object words can be generated from either pathways (fixed vocabulary or copy). Optimizing either the R a or R p reward functions improves the ECOL + CIDEr model by 7.0 CIDEr and 7.8 CIDEr respectively (row 7 and 5). R a achieves 3.7 CIDEr higher than R p in the out-of-domain split. Interestingly, after restricting the model training to VOA images, R p achieves 7.8 CIDEr improvement in the out-of-domain split (row 8 vs. 7), outperforming the ECOL + R a w/ VOA model by 1.4 CIDEr (row 10 vs. 8).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECOL-R Components:",
"sec_num": null
},
{
"text": "We directly measure the copy quantity by counting the number of copied object labels and Object CIDEr. Row 5 and 3 confirm that the standard SCST algorithm has little impact on the copy quantity (only + 0.1 object per image and + 1.4 Object CIDEr). Inference Bias (IB), R a and R p rewards substantially improve the quantity of copied objects (row 4, 6, 7 vs. 3). Among these three components, the models trained with R a and R p work better than the IB baseline (row 6, 7 vs. 4). The model trained with the R a reward copies more objects than the R p reward, especially training with all training images. This is because the R a reward assigns constant positive reward for all copied objects. However, such a naive reward appears to encourage noisy copying operations (i.e., copying non-salient objects). As a result, the ECOL + R a model performs worse than the ECOL + R p model in terms of caption quality (row 7 vs. 6). After restricting training with VOA images, the models trained with R a and R p copy similar amount of objects, but the model with R a produce better captions than the one with R a , especially in the out-of-domain split (row 10 vs. 8). The R p reward maintains a good balance between copying more objects and high caption quality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Copy Encouragement:",
"sec_num": null
},
{
"text": "Are The VOA images Always Useful? Restricting training to the VOA images can be done with any captioning models. However, this does not necessarily encourage copy operations and improve the output caption quality. When we restrict training to VOA images, the ECOL-R model performs consistently worse in all three splits compared to our proposed training scheme (row 9 vs. 10). The only difference is that the ECOL model is not trained with diverse images during the cross-entropy stage. That is, restricting to VOA images is only suitable for fine-tuning in the SCST stage.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effectiveness of Copy Encouragement:",
"sec_num": null
},
{
"text": "Sufficient Encouragement For Copy: Here we investigate whether our ECOL-R model mentions a sufficient number of salient objects. We apply an increasing amount of inference bias to the ECOL, ECOL + CIDEr and ECOL-R models in Table 4 . We note that only ECOL-R model is negatively impacted (measured by CIDEr score) by different Inference Bias values. This shows that the ECOL-R model does not benefit from further copy encouragement. Table 5 shows the effect of Abstract Labels (AL) and the M Selector (M) in the ECOL + IB and ECOL-R models. Removing AL and M from the ECOL + IB model drops 2.3 CIDEr, 0.6 SPICE and 4.6 CIDEr, 0.8 SPICE respectively. AL and M have a large positive impact on the SPICE ",
"cite_spans": [],
"ref_spans": [
{
"start": 224,
"end": 231,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 433,
"end": 440,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Effectiveness of Copy Encouragement:",
"sec_num": null
},
{
"text": "A bathroom with a shower curtain and a toilet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECOL-R",
"sec_num": null
},
{
"text": "An ostrich and a deer standing in a field.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECOL-R",
"sec_num": null
},
{
"text": "A red door of a red house with a red phone. \u00d7 ECOL + IB A white bath tub sitting next to a white toilet. \u00d7 Two ostriches and a deer in a grassy field. \u00d7 A red telephone booth sitting next to a brick wall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ECOL-R",
"sec_num": null
},
{
"text": "The bathtub is white and has a white shower curtain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GT",
"sec_num": null
},
{
"text": "An ostrich standing in grass with a few deer in the background.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GT",
"sec_num": null
},
{
"text": "A red phone booth is standing against a brick wall. The ECOL-R model accurately mentions the novel object curtain in the caption. Second Case: Both models talk about ostrich in their generated caption, but the ECOL + IB model uses the wrong modifier. Third Case: when detected object labels are too general, the ECOL-R model may produce inaccurate captions. score. As SPICE is sensitive to long-range object word relationships, such as attributes and predicate words, (Anderson et al., 2016a) Abstract Labels and the M Selector improve the semantic coherence and fluency of the captions. The performance gap in the ECOL-R model becomes smaller. Our copy encouragement approach contributes to the generation coherency and fluency. novel object captioning models (NBT + CBS and Up-Down + ELMo + CBS) reported in in Table 6 . Both of our models outperforms the Up-Down and NBT model by a large margin. Our models produce high-quality captions for images with novel objects as well as known objects.",
"cite_spans": [
{
"start": 468,
"end": 492,
"text": "(Anderson et al., 2016a)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 813,
"end": 820,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "GT",
"sec_num": null
},
{
"text": "Qualitative analysis on the nocaps validation set reveals that the ECOL-R model mentions the salient object in the input image (first example in Figure 4) , is able to generate more accurate descriptions of novel objects (second example in Figure 4 ), however may generate inaccurate captions due to the non-informative detected object labels (third example in Figure 4 ). In summary, the ECOL-R model is better at incorporating detected image objects into generated captions than the ECOL + IB model.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 154,
"text": "Figure 4)",
"ref_id": "FIGREF0"
},
{
"start": 240,
"end": 248,
"text": "Figure 4",
"ref_id": "FIGREF0"
},
{
"start": 361,
"end": 369,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Qualitative analysis on nocaps",
"sec_num": "4.4"
},
{
"text": "This paper proposes the ECOL-R model that includes a training scheme to encourage copying novel object labels using Reinforced Learning. Our experiments show that the ECOL-R model successfully integrates novel object information and achieves state-of-the-art performance on two Novel Object Caption Benchmarks. In the future, we plan to extend our SCST reward function to other metrics such as SPICE (Anderson et al., 2016b) and BertScore .",
"cite_spans": [
{
"start": 400,
"end": 424,
"text": "(Anderson et al., 2016b)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "5"
},
{
"text": "object categories are frequently mentioned in the training captions and that they often have variable, context-sensitive verbalisation (e.g., a person might be described as a sports player, a student, etc., depending on the context). For those objects, vocabulary based word generation often did a better job at selecting the correct verbalisation due to their frequency in training captions. On the other hand, novel objects typically have lower-frequencies and a fixed, single verbalisation. For example, elephants are usually only referred to with the word elephant. For this reason, we remove objects with high-frequency in training captions from the output of the task specific object detector, leaving their corresponding words to be generated via vocabulary softmax. We also remove the more abstract objects (higher in the object class hierarchy) when object regions overlap. Finally, we keep only one detected object for each label (the one with highest confidence score). We provide the downloadable link of filtered results in Sec B. We use exactly the same Visual Genome objects as described in Anderson et al. (2018b) . The Visual Genome object detector (Anderson et al., 2018b) can produce ROI vectors for arbitrary bounding boxes, hence we use it also to produce ROI vectors for objects from the task specific detector.",
"cite_spans": [
{
"start": 1106,
"end": 1129,
"text": "Anderson et al. (2018b)",
"ref_id": "BIBREF5"
},
{
"start": 1166,
"end": 1190,
"text": "(Anderson et al., 2018b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "5"
},
{
"text": "We use Beam Search with beam size 5 to decode the captions. We first do length normalization for the overall score of each decoded caption. We also penalize captions when they generate repeated bi-grams. Once the repetitions are found, the logprobability for that particular word is divided by a penalty value e 2 . All image objects are only allowed to be copied once. During the SCST optimization, we mask out words from the vocabulary that can be generated via copy operations to encourage the model to copy. All the above constraints are applied to all of our models in the ablation study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.2 ECOL-R Inference Details",
"sec_num": null
},
{
"text": "Object CIDEr score for dummy captions that only contain copied object words. This shows the correctness of our copy mechanism. High Object CIDEr score means many of the copied object labels are also mentioned in the ground-truth captions. We use this metrics because the nocaps benchmark does not release its ground-truth captions and only provide online evaluation APIs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Object CIDEr Details",
"sec_num": null
},
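{
"text": "A minimal sketch of how such a dummy caption could be built is shown below (illustrative only; the copied-token bookkeeping and the downstream CIDEr scorer are assumed, not shown).\n\ndef dummy_caption(tokens, copied_flags):\n    # Keep only the tokens produced by copy operations; the resulting string is\n    # then scored with CIDEr against the reference captions by an external scorer.\n    return \" \".join(t for t, copied in zip(tokens, copied_flags) if copied)\n\nprint(dummy_caption([\"two\", \"hamburgers\", \"and\", \"drinks\"], [False, True, False, True]))\n# -> \"hamburgers drinks\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.3 Object CIDEr Details",
"sec_num": null
},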
{
"text": "For the nocaps Benchmark, we train with the COCO-2017 dataset, which is available at http://images.cocodataset.org/ zips/train2017.zip.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Dataset Details",
"sec_num": null
},
{
"text": "The nocaps Validation and Test datasets are available from https://nocaps.org/download.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B Dataset Details",
"sec_num": null
},
{
"text": "Genome image object detection files can be found in https://github.com/nocaps-org/ updown-baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Visual",
"sec_num": null
},
{
"text": "For the held-out COCO Benchmark, the training and evaluation data can be found in https://github.com/LisaAnne/DCC. The Visual Genome image object detector is used for both benchmarks because COCO-2017 and COCO-2014 share the same set of images. The anonymous Google Drive includes the above data and the sets of task specific objects detected for the above two benchmarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Visual",
"sec_num": null
},
{
"text": "We find some images in COCO share exactly the same reference captions. We find it beneficial to remove those duplicates. We simply iterate over all reference captions and remove any captions if they have already been found previously. This removes 25463 captions from the training data of the nocaps Benchmark and 7059 captions from the training data of the held-out COCO Benchmark.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Duplicated Caption Removal",
"sec_num": null
},
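{
"text": "A minimal sketch of this de-duplication step is shown below (illustrative only; the record format with a caption field is an assumption).\n\ndef dedup_captions(records):\n    # Drop any reference caption whose text has already been seen.\n    seen, kept = set(), []\n    for rec in records:\n        if rec[\"caption\"] not in seen:\n            seen.add(rec[\"caption\"])\n            kept.append(rec)\n    return kept\n\nprint(len(dedup_captions([{\"caption\": \"a dog\"}, {\"caption\": \"a dog\"}, {\"caption\": \"a cat\"}])))  # 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.1 Duplicated Caption Removal",
"sec_num": null
},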
{
"text": "VOA (visual object aligned) images/reference caption pairs are those that mention at least one detected task specific image object label (or their linguistic variant).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "B.2 VOA (visual object aligned) Images",
"sec_num": null
},
{
"text": "Non-VOA image/caption pairs are removed from our SCST training process. We provide the reduced set of reference captions in the anonymous Google Drive (ddc captions/ddc train VOA.json and nocaps captions/nocaps train VOA.json). Table 9 and Table 10 show the number of images and annotated reference captions of the nocaps and held-out COCO Benchmark, respectively. On average, each image has five reference captions. The COCO Train in the nocaps Benchmark is larger than the held-out COCO Benchmark.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 248,
"text": "Table 9 and Table 10",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "B.2 VOA (visual object aligned) Images",
"sec_num": null
},
{
"text": "The nocaps Benchmark hosts its evaluation sever at https://evalai.cloudcv.org/web/ We provide an on-the-shelf version of this tool in the anonymous Google Drive (in tools).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "C Evaluation",
"sec_num": null
},
{
"text": "We only show the test performance on the heldout COCO Benchmark in our main paper. Here, we show the performance of our model performance on the validation Set in Table 8 . The models achieve similar level of performance on the Validation Set.",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 8",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "C.1 held-out COCO Benchmark Validation Performance",
"sec_num": null
},
{
"text": "If the task specific object detector does not provide such a taxonomy, a suitable taxonomy could be obtained from sources such as Wordnet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "According to the authors' comments on their official model code repo https://github.com/microsoft/ Oscar/issues/6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://drive.google.com/drive/ folders/1EToBXQ8WAWxn5uCd38HtfRYmchnBBbMo? usp=sharing",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank anonymous reviewers for their insightful suggestions to improve this paper. This research was supported by a Google award through the Natural Language Understanding Focused Program, by a MQ Research Excellence Scholarship and a CSIRO's DATA61 Top-up Scholarship, and under the Australian Research Councils Discovery Projects funding scheme (project number DP160102156).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The hyper-parameters of the ECOL-R model is shown in Table 7 . This architecture is basically from (Cornia et al., 2020) . We only change the hidden size of the model to 768 to fit the size of GPT2 (the smallest version). Our model has total 60.8 \u00d7 10 6 parameters and 43.0 \u00d7 10 6 trainable parameters. This scale is slightly smaller than the Transformer Base model (65.8 \u00d7 10 6 ) (Vaswani et al., 2017) .We optimise with Adam(\u03b1=0.9, \u03b2=0.98, =1e-9) (Kingma and Ba, 2014) and clip gradients to 0.1 for both Benchmarks. In Cross-entropy training, we vary the learning rate over the course of training using the heuristic:where S is the step number and W is the number of warm-up steps. We set W to 20000 steps for the nocaps Benchmark and 10000 steps for the heldout COCO Benchmark. The number of warm-up steps has some impact on both benchmark. We tried 20,000 and 10,000 for both Benchmarks. For SCST training, we set the initial learning rate 1e \u22126 and reduce it by half if the reward metric (Validatoin set CIDEr) does not improve for 3 evaluations. We conduct evaluation every 3000 steps.We use Pytorch 1.4.0 to implement our model. The Cross-Entropy Training takes about 8 hours and the SCST optimization takes about 15 hours in a single NVIDIA Tesla P100 GPU.Our source code is submitted in the Software. We setup an anonymous Google Drive to host large file 3 . ",
"cite_spans": [
{
"start": 99,
"end": 120,
"text": "(Cornia et al., 2020)",
"ref_id": "BIBREF9"
},
{
"start": 381,
"end": 403,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 53,
"end": 60,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendices A Model Details",
"sec_num": null
},
{
"text": "We follow the processing of input objects in . We observed that some",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A.1 Input Object Detector",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "nocaps: novel object captioning at scale",
"authors": [
{
"first": "Harsh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Karan",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "Yufei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xinlei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. 2019. no- caps: novel object captioning at scale. In Proceed- ings of the IEEE/CVF International Conference on Computer Vision (ICCV).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "ECCV",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016a. Spice: Semantic proposi- tional image caption evaluation. In ECCV.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Spice: Semantic propositional image caption evaluation",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2016,
"venue": "European Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "382--398",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016b. Spice: Semantic propo- sitional image caption evaluation. In European Conference on Computer Vision, pages 382-398. Springer.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Guided open vocabulary image captioning with constrained beam search",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Basura",
"middle": [],
"last": "Fernando",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "936--945",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2017. Guided open vocabulary im- age captioning with constrained beam search. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 936-945, Copenhagen, Denmark. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Partially-supervised image captioning",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2018,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "1875--1886",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Stephen Gould, and Mark Johnson. 2018a. Partially-supervised image captioning. In Advances in Neural Information Processing Systems, pages 1875-1886.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018b. Bottom-up and top-down attention for image captioning and visual question answering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the acl workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In Proceedings of the acl workshop on intrinsic and extrinsic evalu- ation measures for machine translation and/or sum- marization, pages 65-72.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Feature deformation meta-networks in image captioning of novel objects",
"authors": [
{
"first": "Tingjia",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Xiaomei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yanwei",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Yu-Gang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Xiangyang",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tingjia Cao, Ke Han, Xiaomei Wang, Lin Ma, Yanwei Fu, Yu-Gang Jiang, and Xiangyang Xue. 2020. Fea- ture deformation meta-networks in image captioning of novel objects. In Proceedings of the AAAI Confer- ence on Artificial Intelligence.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supersense tagging of unknown nouns in WordNet",
"authors": [
{
"first": "Massimiliano",
"middle": [],
"last": "Ciaramita",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "168--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimiliano Ciaramita and Mark Johnson. 2003. Su- persense tagging of unknown nouns in WordNet. In Proceedings of the 2003 Conference on Empiri- cal Methods in Natural Language Processing, pages 168-175.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Meshed-Memory Transformer for Image Captioning",
"authors": [
{
"first": "Marcella",
"middle": [],
"last": "Cornia",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Stefanini",
"suffix": ""
},
{
"first": "Lorenzo",
"middle": [],
"last": "Baraldi",
"suffix": ""
},
{
"first": "Rita",
"middle": [],
"last": "Cucchiara",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2020. Meshed-Memory Trans- former for Image Captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pat- tern Recognition.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Deep compositional captioning: Describing novel object categories without paired training data",
"authors": [
{
"first": "Lisa",
"middle": [
"Anne"
],
"last": "Hendricks",
"suffix": ""
},
{
"first": "Subhashini",
"middle": [],
"last": "Venugopalan",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
},
{
"first": "Kate",
"middle": [],
"last": "Saenko",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {
"DOI": [
"10.1109/CVPR.2016.8"
]
},
"num": null,
"urls": [],
"raw_text": "Lisa Anne Hendricks, Subhashini Venugopalan, Mar- cus Rohrbach, Raymond J. Mooney, Kate Saenko, and Trevor Darrell. 2016. Deep compositional cap- tioning: Describing novel object categories without paired training data. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, pages 1-10.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Image captioning: Transforming objects into words",
"authors": [
{
"first": "Simao",
"middle": [],
"last": "Herdade",
"suffix": ""
},
{
"first": "Armin",
"middle": [],
"last": "Kappeler",
"suffix": ""
},
{
"first": "Kofi",
"middle": [],
"last": "Boakye",
"suffix": ""
},
{
"first": "Joao",
"middle": [],
"last": "Soares",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "32",
"issue": "",
"pages": "11137--11147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. 2019. Image captioning: Transforming objects into words. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dAlch\u00e9 Buc, E. Fox, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 32, pages 11137-11147. Curran As- sociates, Inc.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Iterative answer prediction with pointer-augmented multimodal transformers for textvqa",
"authors": [
{
"first": "Ronghang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Amanpreet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Marcus",
"middle": [],
"last": "Rohrbach",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronghang Hu, Amanpreet Singh, Trevor Darrell, and Marcus Rohrbach. 2020. Iterative answer predic- tion with pointer-augmented multimodal transform- ers for textvqa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recog- nition (CVPR).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.6980"
]
},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Kuznetsova",
"suffix": ""
},
{
"first": "Hassan",
"middle": [],
"last": "Rom",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Alldrin",
"suffix": ""
},
{
"first": "Jasper",
"middle": [],
"last": "Uijlings",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Krasin",
"suffix": ""
},
{
"first": "Jordi",
"middle": [],
"last": "Pont-Tuset",
"suffix": ""
},
{
"first": "Shahab",
"middle": [],
"last": "Kamali",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Popov",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Malloci",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Duerig",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1811.00982"
]
},
"num": null,
"urls": [],
"raw_text": "Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Ka- mali, Stefan Popov, Matteo Malloci, Tom Duerig, et al. 2018. The open images dataset v4: Uni- fied image classification, object detection, and vi- sual relationship detection at scale. arXiv preprint arXiv:1811.00982.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Oscar: Objectsemantics aligned pre-training for vision-language tasks",
"authors": [
{
"first": "Xiujun",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Yin",
"suffix": ""
},
{
"first": "Chunyuan",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xiaowei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Pengchuan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Lijuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Houdong",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object- semantics aligned pre-training for vision-language tasks. ECCV.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Pointing novel objects in image captioning",
"authors": [
{
"first": "Yehao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Hongyang",
"middle": [],
"last": "Chao",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2019,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2019. Pointing novel objects in image captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "CommonGen: A constrained text generation challenge for generative commonsense reasoning",
"authors": [
{
"first": "Bill",
"middle": [
"Yuchen"
],
"last": "Lin",
"suffix": ""
},
{
"first": "Wangchunshu",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Pei",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chandra",
"middle": [],
"last": "Bhagavatula",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "1823--1840",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020. CommonGen: A constrained text gen- eration challenge for generative commonsense rea- soning. In Findings of the Association for Computa- tional Linguistics: EMNLP 2020, pages 1823-1840, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Microsoft coco: Common objects in context",
"authors": [
{
"first": "Tsung-Yi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Maire",
"suffix": ""
},
{
"first": "Serge",
"middle": [],
"last": "Belongie",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Hays",
"suffix": ""
},
{
"first": "Pietro",
"middle": [],
"last": "Perona",
"suffix": ""
},
{
"first": "Deva",
"middle": [],
"last": "Ramanan",
"suffix": ""
},
{
"first": "Piotr",
"middle": [],
"last": "Doll\u00e1r",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
}
],
"year": 2014,
"venue": "Computer Vision -ECCV 2014",
"volume": "",
"issue": "",
"pages": "740--755",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In Computer Vision - ECCV 2014, pages 740-755, Cham. Springer Inter- national Publishing.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Neural baby talk",
"authors": [
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Jianwei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "7219--7228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2018a. Neural baby talk. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7219-7228.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Deep learning paradigm with transformed monolingual word embeddings for multilingual sentiment analysis",
"authors": [
{
"first": "Yujie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Boyi",
"middle": [],
"last": "Ni",
"suffix": ""
},
{
"first": "Qijin",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Kotaro",
"middle": [],
"last": "Sakamoto",
"suffix": ""
},
{
"first": "Hideyuki",
"middle": [],
"last": "Shibuki",
"suffix": ""
},
{
"first": "Tatsunori",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32nd Pacific Asia Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yujie Lu, Boyi Ni, Qijin Ji, Kotaro Sakamoto, Hideyuki Shibuki, and Tatsunori Mori. 2018b. Deep learning paradigm with transformed monolingual word em- beddings for multilingual sentiment analysis. In Pro- ceedings of the 32nd Pacific Asia Conference on Lan- guage, Information and Computation, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sequence level training with recurrent neural networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1511.06732"
]
},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level train- ing with recurrent neural networks. arXiv preprint arXiv:1511.06732.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Self-critical sequence training for image captioning",
"authors": [
{
"first": "Steven",
"middle": [
"J"
],
"last": "Rennie",
"suffix": ""
},
{
"first": "Etienne",
"middle": [],
"last": "Marcheret",
"suffix": ""
},
{
"first": "Youssef",
"middle": [],
"last": "Mroueh",
"suffix": ""
},
{
"first": "Jerret",
"middle": [],
"last": "Ross",
"suffix": ""
},
{
"first": "Vaibhava",
"middle": [],
"last": "Goel",
"suffix": ""
}
],
"year": 2017,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Cider: Consensus-based image description evaluation",
"authors": [
{
"first": "Ramakrishna",
"middle": [],
"last": "Vedantam",
"suffix": ""
},
{
"first": "C",
"middle": [
"Lawrence"
],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image description evaluation. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Pointer networks",
"authors": [
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Meire",
"middle": [],
"last": "Fortunato",
"suffix": ""
},
{
"first": "Navdeep",
"middle": [],
"last": "Jaitly",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems",
"volume": "28",
"issue": "",
"pages": "2692--2700",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 28, pages 2692-2700. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Hierarchical attention network for image captioning",
"authors": [
{
"first": "Weixuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhihong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the AAAI Conference on Artificial Intelligence",
"volume": "33",
"issue": "",
"pages": "8957--8964",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weixuan Wang, Zhihong Chen, and Haifeng Hu. 2019. Hierarchical attention network for image captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8957-8964.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Decoupled novel object captioner",
"authors": [
{
"first": "Yu",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Linchao",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "2018 ACM Multimedia Conference on Multimedia Conference, MM 2018",
"volume": "",
"issue": "",
"pages": "1029--1037",
"other_ids": {
"DOI": [
"10.1145/3240508.3240640"
]
},
"num": null,
"urls": [],
"raw_text": "Yu Wu, Linchao Zhu, Lu Jiang, and Yi Yang. 2018. Decoupled novel object captioner. In 2018 ACM Multimedia Conference on Multimedia Conference, MM 2018, Seoul, Republic of Korea, October 22-26, 2018, pages 1029-1037.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Evalai: Towards better evaluation systems for ai agents",
"authors": [
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Prithvijit",
"middle": [],
"last": "Chattopadhyay",
"suffix": ""
},
{
"first": "Taranjeet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Akash",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Shiv",
"middle": [
"Baran"
],
"last": "Singh",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1902.03570"
]
},
"num": null,
"urls": [],
"raw_text": "Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi- jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. Evalai: Towards better evaluation systems for ai agents. arXiv preprint arXiv:1902.03570.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Incorporating copying mechanism in image captioning for learning novel objects",
"authors": [
{
"first": "Ting",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yingwei",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Yehao",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Mei",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6580--6588",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. 2017. Incorporating copying mechanism in image caption- ing for learning novel objects. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6580-6588.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Bertscore: Evaluating text generation with bert",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2020,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- uating text generation with bert. In International Conference on Learning Representations.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Three Examples generated by the ECOL-R and ECOL + IB model on the nocaps Val Set. First Case:",
"uris": null
},
"TABREF0": {
"type_str": "table",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">in-domain</td><td colspan=\"2\">near-domain</td><td colspan=\"2\">out-of-domain</td><td/><td/><td>Overall</td></tr><tr><td>Method</td><td>CIDEr</td><td>SPICE</td><td>CIDEr</td><td>SPICE</td><td>CIDEr</td><td colspan=\"2\">SPICE</td><td>Meteor</td><td>CIDEr</td><td>SPICE</td></tr><tr><td>Up-Down + BS</td><td>73.7</td><td>11.6</td><td>57.2</td><td>10.3</td><td>30.4</td><td>8.1</td><td/><td>22.9</td><td>54.5</td><td>10.1</td></tr><tr><td colspan=\"2\">Up-Down + ELMo + CBS 76.0</td><td>11.8</td><td>74.2</td><td>11.5</td><td>66.7</td><td>9.7</td><td/><td>24.4</td><td>73.1</td><td>11.2</td></tr><tr><td>NBT + BS</td><td>62.8</td><td>10.3</td><td>51.9</td><td>9.4</td><td>48.9</td><td>8.4</td><td/><td>21.8</td><td>54.3</td><td>9.4</td></tr><tr><td>NBT + CBS</td><td>61.9</td><td>10.4</td><td>57.3</td><td>9.6</td><td>61.8</td><td>8.6</td><td/><td>21.6</td><td>59.9</td><td>9.5</td></tr><tr><td>OSCARL + CBS + SCST</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td/><td>-</td><td>80.9</td><td>11.3</td></tr><tr><td>ECOL + IB (Ours)</td><td>81.7</td><td>12.9</td><td>77.2</td><td>12.1</td><td>67.0</td><td>10.3</td><td/><td>25.6</td><td>76.0</td><td>11.9</td></tr><tr><td>ECOL-R (Ours)</td><td>87.3</td><td>12.8</td><td>84.0</td><td>12.5</td><td>75.4</td><td>10.7</td><td/><td>25.7</td><td>82.9</td><td>12.2</td></tr><tr><td/><td/><td/><td colspan=\"2\">out-of-domain</td><td/><td/><td/><td colspan=\"2\">in-domain</td></tr><tr><td>Method</td><td/><td>Meteor</td><td>CIDEr</td><td>SPICE</td><td colspan=\"2\">Object F1</td><td colspan=\"2\">Meteor</td><td>CIDEr</td><td>SPICE</td></tr><tr><td colspan=\"2\">LSTM-P (Li et al., 2019)</td><td>23.4</td><td>88.3</td><td>16.6</td><td>60.9</td><td/><td>-</td><td/><td>-</td><td>-</td></tr><tr><td colspan=\"2\">Base + CBS (Anderson et al., 2017)</td><td>23.3</td><td>77.0</td><td>15.9</td><td>54.0</td><td/><td colspan=\"2\">24.9</td><td>88.0</td><td>18.4</td></tr><tr><td colspan=\"2\">NBT + CBS (Lu et al., 2018a)</td><td>24.1</td><td>86.0</td><td>17.4</td><td>70.5</td><td/><td colspan=\"2\">25.0</td><td>92.1</td><td>18.0</td></tr><tr><td>ECOL + IB (Ours)</td><td/><td>25.6</td><td>95.5</td><td>18.8</td><td>58.2</td><td/><td colspan=\"2\">27.0</td><td>108.3</td><td>20.4</td></tr><tr><td>ECOL-R (Ours)</td><td/><td>25.7</td><td>99.2</td><td>19.3</td><td>66.3</td><td/><td colspan=\"2\">26.8</td><td>113.3</td><td>20.4</td></tr><tr><td colspan=\"2\">ECOL-R + CBS (Ours)</td><td>25.7</td><td>99.1</td><td>19.1</td><td>71.8</td><td/><td colspan=\"2\">26.8</td><td>112.6</td><td>20.8</td></tr><tr><td colspan=\"2\">PS3 (Anderson et al., 2018a) \u00a7</td><td>25.4</td><td>94.5</td><td>17.9</td><td>63.0</td><td/><td colspan=\"2\">25.9</td><td>101.1</td><td>19.0</td></tr><tr><td colspan=\"2\">FDM-net (Cao et al., 2020) \u00a7</td><td>25.9</td><td>84.8</td><td>19.4</td><td>64.7</td><td/><td colspan=\"2\">27.2</td><td>109.7</td><td>20.2</td></tr><tr><td colspan=\"2\">FDM-net + CBS (Cao et al., 2020) \u00a7</td><td>25.6</td><td>85.3</td><td>19.6</td><td>85.7</td><td/><td colspan=\"2\">26.2</td><td>105.5</td><td>19.7</td></tr></table>",
"html": null,
"text": "Table 1: Comparison of the ECOL-R model with other state-of-the-art systems on the nocaps Test Split. The ECOL-R model sets new state of the art and improves previous work by 2.0 CIDEr and 0.9 SPICE. The performance of the Up-Down and NBT models are from. The OSCAR L model is fromLi et al. (2020)."
},
"TABREF1": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": ""
},
"TABREF3": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>curtain</td><td>ostrich, deer</td><td>door, house</td></tr></table>",
"html": null,
"text": "Ablation study of our model. OC. for Object CIDEr; Ave. O for Averaged Number of Mentioned Object in each image; C for Copy Mechanism; R for ROI; P for Position; AL for Abstract Label; R a and R p are the SCST reward function; VOA: Visual Object Aligned Images; VOA (all training): all training using VOA images."
},
"TABREF5": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "The CIDEr Score in nocaps Validation Set with different Inference Bias."
},
"TABREF7": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Generalization For In-Domain COCO: To fur-</td></tr><tr><td>ther show the generalization of our model for In-</td></tr><tr><td>Domain images (i.e., without novel objects), we run</td></tr><tr><td>the ECOL + IB and ECOL-R models on the COCO</td></tr><tr><td>2017 Validation Set and compare with another two</td></tr></table>",
"html": null,
"text": "The contribution of Abstract Label (AL) and Morphological Selector (M)."
},
"TABREF8": {
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null,
"text": "The performance of the best four nocaps models on the COCO 2017 Validation Set."
},
"TABREF10": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>COCO</td><td>COCO</td><td>COCO</td><td>nocaps</td><td>nocaps</td></tr><tr><td>Train</td><td>Train</td><td>Val</td><td>Val</td><td>Test</td></tr><tr><td/><td>VOA</td><td/><td/><td/></tr><tr><td colspan=\"2\">#Image 118,287 82,771</td><td>5,000</td><td>4,500</td><td>10,600</td></tr><tr><td colspan=\"3\">#Caption 591,753 299,502 25,014</td><td>-</td><td>-</td></tr></table>",
"html": null,
"text": "The performance of ECOL + IB, ECOL-R and ECOL-R + CBS model on held-out COCO Benchmark Validation Set."
},
"TABREF11": {
"type_str": "table",
"num": null,
"content": "<table><tr><td>Train Train</td><td>Val in-</td><td>Val out-</td><td>Test in-</td><td>Test out-</td></tr><tr><td>VOA</td><td>domain</td><td>domain</td><td>domain</td><td>domain</td></tr><tr><td colspan=\"5\">#Image 70,194 55,799 17,234 3,018 17,288 3,024</td></tr><tr><td colspan=\"5\">#Caption 351,134 197,061 86,230 1,5105 86,188 15,131</td></tr></table>",
"html": null,
"text": "Data Statistics for the nocaps Benchmark. The full set of annotations in nocaps Val and Test is not available. One can only access some of them via https://nocaps.org/explore."
},
"TABREF12": {
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">challenges/challenge-page/355/overview.</td></tr><tr><td colspan=\"4\">The held-out COCO Benchmark uses the</td></tr><tr><td>evaluation</td><td>tool</td><td>from</td><td>https://github.</td></tr><tr><td colspan=\"4\">com/ruotianluo/coco-caption/tree/</td></tr><tr><td colspan=\"4\">ea20010419a955fed9882f9dcc53f2dc1ac65092.</td></tr></table>",
"html": null,
"text": "Data Statistics for the held-out COCO Benchmark.The detailed setup instruction of the local submission environment to Evai (Yadav et al., 2019) is available at https: //github.com/nocaps-org/updown-baseline."
}
}
}
}