{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:31:42.849836Z" }, "title": "SeqDialN: Sequential Visual Dialog Networks in Joint Visual-Linguistic Representation Space", "authors": [ { "first": "Liu", "middle": [], "last": "Yang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel", "location": { "country": "China Research Center" } }, "email": "liu.y.yang@intel.com" }, { "first": "Fanqi", "middle": [], "last": "Meng", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Science", "location": { "country": "Technology of China" } }, "email": "" }, { "first": "Xiao", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Davis" } }, "email": "xioliu@ucdavis.edu" }, { "first": "Ming-Kuang", "middle": [], "last": "Daniel Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "danielwu@alumni.stanford.edu" }, { "first": "Vicent", "middle": [], "last": "Ying", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "vhying@stanford.edu" }, { "first": "Xianchao", "middle": [], "last": "Xu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Intel", "location": { "country": "China Research Center" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The key challenge of the visual dialog task is how to fuse features from multimodal sources and extract relevant information from dialog history to answer the current query. In this work, we formulate a visual dialog as an information flow in which each piece of information is encoded with the joint visuallinguistic representation of a single dialog round. Based on this formulation, we consider the visual dialog task as a sequence problem consisting of ordered visual-linguistic vectors. For featurization, we use a Dense Symmetric Co-Attention network (Nguyen and Okatani, 2018) as a lightweight vison-language joint representation generator to fuse multimodal features (i.e., image and text), yielding better computation and data efficiencies. For inference, we propose two Sequential Dialog Networks (SeqDialN): the first uses LSTM (Hochreiter and Schmidhuber, 1997) for information propagation (IP) and the second uses a modified Transformer (Vaswani et al., 2017) for multi-step reasoning (MR). Our architecture separates the complexity of multimodal feature fusion from that of inference, which allows simpler design of the inference engine. On VisDial v1.0 test-std dataset, our best single generative SeqDialN achieves 62.54% NDCG 1 and 48.63% MRR 2 ; our ensemble generative SeqDialN achieves 63.78% NDCG and 49.98% MRR, which set a new state-of-the-art generative visual dialog model. We fine-tune discriminative Se-qDialN with dense annotations 3 and boost the performance up to 72.41% NDCG and 55.11% MRR. In this work, we discuss the extensive experiments we have conducted to demonstrate the effectiveness of our model 1 Normalized Discounted Cumulative Gain 2 Mean Reciprocal Rank 3 Relevance scores for 100 answer options corresponding to each question on a subset of the training set, publicly available on visualdialog.org/data components. We also provide visualization for the reasoning process from the relevant conversation rounds and discuss our finetuning methods. 
The code is available at https://github.com/xiaoxiaoheimei/SeqDialN.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "The key challenge of the visual dialog task is how to fuse features from multimodal sources and extract relevant information from dialog history to answer the current query. In this work, we formulate a visual dialog as an information flow in which each piece of information is encoded with the joint visuallinguistic representation of a single dialog round. Based on this formulation, we consider the visual dialog task as a sequence problem consisting of ordered visual-linguistic vectors. For featurization, we use a Dense Symmetric Co-Attention network (Nguyen and Okatani, 2018) as a lightweight vison-language joint representation generator to fuse multimodal features (i.e., image and text), yielding better computation and data efficiencies. For inference, we propose two Sequential Dialog Networks (SeqDialN): the first uses LSTM (Hochreiter and Schmidhuber, 1997) for information propagation (IP) and the second uses a modified Transformer (Vaswani et al., 2017) for multi-step reasoning (MR). Our architecture separates the complexity of multimodal feature fusion from that of inference, which allows simpler design of the inference engine. On VisDial v1.0 test-std dataset, our best single generative SeqDialN achieves 62.54% NDCG 1 and 48.63% MRR 2 ; our ensemble generative SeqDialN achieves 63.78% NDCG and 49.98% MRR, which set a new state-of-the-art generative visual dialog model. We fine-tune discriminative Se-qDialN with dense annotations 3 and boost the performance up to 72.41% NDCG and 55.11% MRR. In this work, we discuss the extensive experiments we have conducted to demonstrate the effectiveness of our model 1 Normalized Discounted Cumulative Gain 2 Mean Reciprocal Rank 3 Relevance scores for 100 answer options corresponding to each question on a subset of the training set, publicly available on visualdialog.org/data components. We also provide visualization for the reasoning process from the relevant conversation rounds and discuss our finetuning methods. The code is available at https://github.com/xiaoxiaoheimei/SeqDialN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Visual Dialog has attracted increasing research interest as an emerging field, bringing together aspects of computer vision, natural language processing, and dialog systems. In this task, an AI agent is required to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a query about the image, the agent has to ground the query in image, infer context from history, and answer the query accurately (Das et al., 2017) .", "cite_spans": [ { "start": 491, "end": 509, "text": "(Das et al., 2017)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our work is inspired by the use of visuallinguistic joint representation to erase the modality gap, where we embed the visual signals into the text snippets for each dialog round. In this way, we convert a visual dialog into an ordered vector sequence, where each vector is the joint visual-linguistic representation of a specific dialog round. Rather than using ViLBERT , we chose Dense Symmetric Co-Attention (Nguyen and Okatani, 2018) as a lightweight joint visual-linguistic representation generator. 
In contrast to VisDial-BERT (Murahari et al., 2019) , which concatenates all rounds of the dialog history into a single textual input for ViLBERT , we keep each dialog round separate. Keeping this inherent sequential structure from the visual dialog allows us to reason across the dialog history to find the most query-relevant dialog rounds. By viewing visual dialog task as a vector sequence, We propose two sequential networks to tackle the problem. They are fed into the Dense Symmetric Co-Attention Network (Nguyen and Okatani, 2018) to produce a visual-linguistic vector sequence in the joint visual-linguistic feature space. Our baseline model, the Information Propagation Network (SeqIPN), which uses a LSTM (Hochreiter and Schmidhuber, 1997) to summarize the visual-linguistic sequence, outperforms other well-known baselines (Das et al., 2017; Lu et al., 2017) , on NDCG metric by a large margin > 0.5. Multi-step reasoning network (Se-qMRN) is based on Transformer (Vaswani et al., 2017) .", "cite_spans": [ { "start": 533, "end": 556, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 1017, "end": 1043, "text": "(Nguyen and Okatani, 2018)", "ref_id": "BIBREF14" }, { "start": 1221, "end": 1255, "text": "(Hochreiter and Schmidhuber, 1997)", "ref_id": "BIBREF5" }, { "start": 1340, "end": 1358, "text": "(Das et al., 2017;", "ref_id": "BIBREF2" }, { "start": 1359, "end": 1375, "text": "Lu et al., 2017)", "ref_id": "BIBREF10" }, { "start": 1481, "end": 1503, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We expect the multi-head attention mechanism of Transformer better captures the relationship within the visual linguistic sequence. We achieve multi-step reasoning by stacking several Transformers to refine attentions in high level semantic space. SeqMRN outperforms VisDial-BERT (Murahari et al., 2019) by > 1.5% on NDCG when trained with comparable amount of data, while using 30% less parameters. The pipeline in Fig.1 facilitates the combination of different word embeddings and SeqDialN models. In this work, we compare two kinds of pre-trained word representations: GloVe (Pennington et al., 2014) and DistilBert (Sanh et al., 2019) . The ablation test shows that SeqMRN with DistilBert embedding yields the best performance. Further experiment reveals SeqDialN sets a new state-of-the-art generative visual dialog model. VLDialog and NDCGFinetune (Murahari et al., 2019; Qi et al., 2019b) tune with dense annotations 3 . Training on the dense annotation 3 makes these models perform very well on the NDCG metric but poorly on the others because the dense annotation 3 dataset doesn't correlate well with the original ground-truth answer to the question (Murahari et al., 2019) . 
In this work, we propose a reweighting method to mitigate the damage to non-NDCG metrics in fine-tuning process, which make our best model outperform (Murahari et al., 2019; Qi et al., 2019b,a) on MRR by a large margin at the cost of a little lower NDCG than them.", "cite_spans": [ { "start": 280, "end": 303, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 578, "end": 603, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" }, { "start": 619, "end": 638, "text": "(Sanh et al., 2019)", "ref_id": "BIBREF20" }, { "start": 854, "end": 877, "text": "(Murahari et al., 2019;", "ref_id": "BIBREF13" }, { "start": 878, "end": 895, "text": "Qi et al., 2019b)", "ref_id": "BIBREF18" }, { "start": 1160, "end": 1183, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 1336, "end": 1359, "text": "(Murahari et al., 2019;", "ref_id": "BIBREF13" }, { "start": 1360, "end": 1379, "text": "Qi et al., 2019b,a)", "ref_id": null } ], "ref_spans": [ { "start": 416, "end": 421, "text": "Fig.1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main contributions of this paper is three fold. (1) We formulate the visual dialog task as reasoning from a sequence in the joint visuallinguistic representation space. (2) We propose two sequential networks to tackle the visual dia-log task in the joint visual-linguistic representation space. 3We set a new state-of-the-art generative visual dialog model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "VQA focuses on providing a natural language answer given an image and a free-form, open-ended question. Attention mechanisms have been deeply explored in VQA related work. In deep networks, the attention mechanism helps refine semantic meanings at different levels. SANs create stacked attention networks, producing multiple attention maps in a sequential manner to imitate multi-step reasoning. (Lu et al., 2016) introduces co-attention between image regions and words in the question. (Yu et al., 2017) utilizes image-guided attention to extract the language concept of an image and then combines this with a novel multi-modal feature fusion of image and question.", "cite_spans": [ { "start": 396, "end": 413, "text": "(Lu et al., 2016)", "ref_id": "BIBREF11" }, { "start": 487, "end": 504, "text": "(Yu et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "VQA", "sec_num": "2.1" }, { "text": "Recently, Dense Co-Attention Network (DCN) (Nguyen and Okatani, 2018) proposes a symmetric co-attention layer to address VQA tasks. DCN is \"dense symmetric\" because it makes each visual region aware of the existence of each question word and vice versa. This fine-granularity co-attention enables DCN to discriminate subtle differences or similarities between vision and language features. In this work, we use DCN as the generator of joint visual-linguistic representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "VQA", "sec_num": "2.1" }, { "text": "Previous research has tackled the visual dialog task from various theoretical perspectives. Early baselines include Late Fusion, Hierarchical Recurrent Encoder, and Memory Networks (Das et al., 2017) . (Guo et al., 2019) proposes a two-stage method which filters out the obviously irrelevant answers in primary stage, then re-ranks the rest answers in synergistic stage. (Guo et al., 2019) won the visual dialog challenge 4 in 2018. 
Several models try to leverage the dialog structure to conduct explicit reasoning. GNN abstracts visual dialog as a fully connected graph where each node represents a single dialog round and each edge represents semantic dependency of the two connected nodes. Recursive Visual Attention(RvA) designs sub-networks to infer the stopping condition when recursively traversing the dialog stack to resolve visual co-reference relationships. RvA won the visual dialog challenge 4 in 2019 by fine-tuning with dense annotations 3 . ReDAN (Gan et al., 2019) develops a recurrent dual attention network to progressively update the semantic representations of query, vision, and history, making them coaware through multiple steps to achieve multi-step reasoning. ReDAN (Gan et al., 2019) achieves 64.47% NDCG on the VisDial v1.0 test-std set, is still the highest score among all published work trained without dense annotations 3 .", "cite_spans": [ { "start": 181, "end": 199, "text": "(Das et al., 2017)", "ref_id": "BIBREF2" }, { "start": 202, "end": 220, "text": "(Guo et al., 2019)", "ref_id": "BIBREF4" }, { "start": 371, "end": 389, "text": "(Guo et al., 2019)", "ref_id": "BIBREF4" }, { "start": 963, "end": 981, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" }, { "start": 1192, "end": 1210, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "Based on ViLBERT , recent VisDial-BERT (Murahari et al., 2019) leverages the joint visual-linguistic representation to tackle visual dialog task. By fine-tuning with dense annotations, VisDial-BERT (Murahari et al., 2019) achieves state-of-the-art NDCG (74.47%) using a discriminative model. However, its non-NDCG performance is significantly lower. Futhermore, it's not easy to deploy a discriminative model in real applications. Similar performance degradation occurs to P1P2 (Qi et al., 2019a) , which also trained with dense annotations 3 .", "cite_spans": [ { "start": 39, "end": 62, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 198, "end": 221, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 478, "end": 496, "text": "(Qi et al., 2019a)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog", "sec_num": "2.2" }, { "text": "The visual dialog task (Das et al., 2017 ) is formulated as follows: at time t, given a query Q t grounded in image I, and dialog history (including the image caption C)", "cite_spans": [ { "start": 23, "end": 40, "text": "(Das et al., 2017", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "H t = {C, (Q 1 , A 1 ), \u2022 \u2022 \u2022 , (Q t\u22121 , A t\u22121", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": ")} as additional context. For discriminative task, the goal is to rank 100 candidate answers", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "A t = {A 1 t , A 2 t , \u2022 \u2022 \u2022 , A 100 t }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "For generative task, the goal is to generate an answer in natural language. 
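To make the formulation concrete, the sketch below spells out one t-round dialog instance as plain Python data; the container and field names are ours for illustration and are not part of the official dataset API.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VisDialInstance:
    # One t-round dialog instance D = (I, H_t, Q_t) plus the candidate answers A_t.
    image_id: int                    # identifies image I
    caption: str                     # caption C, the first element of the history
    history: List[Tuple[str, str]]   # [(Q_1, A_1), ..., (Q_{t-1}, A_{t-1})]
    query: str                       # current question Q_t
    options: List[str]               # 100 candidate answers A_t
    gt_index: int                    # position of the ground-truth answer in options
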
The task requires the agent to predict the ground truth answer and rank other feasible answers as high as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "As illustrated in Fig. 1 , we rely on Faster- RCNN (Ren et al., 2015) to extract features corresponding to salient image regions (Anderson et al., 2018) . The vision feature of image I is represented as F I \u2208 R nv\u00d7dv , where n v = 36 being the number of object-like region proposals in the image and d v = 2048 being the dimension of the feature vector. Q t and each item in H is padded or truncated to the same length d l . Thus, each sentence S is represented as F S \u2208 R d l \u00d7de , where d e being the dimension of the word embedding. To facilitate further discussion, we denote d h as the dimension of the hidden state throughout this section.", "cite_spans": [ { "start": 46, "end": 69, "text": "RCNN (Ren et al., 2015)", "ref_id": null }, { "start": 129, "end": 152, "text": "(Anderson et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 18, "end": 24, "text": "Fig. 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Approach", "sec_num": "3" }, { "text": "Dense Co-Attention Network (DCN) (Nguyen and Okatani, 2018) proposes using contents in sub-grids of a convolutional neuron network as visual region features. However, we turn to use Faster R-CNN proposals (Ren et al., 2015; Anderson et al., 2018) because people usually talk about objects in their conversations, so Faster R-CNN proposals better suit for the purpose of object identification. Given an image I with vision feature F I \u2208 R nv\u00d7dv and a sentence S with embedding F S \u2208 R d l \u00d7de , we define DCN (I, S) \u2208 R d h the Dense Co-attention (Nguyen and Okatani, 2018) representation of I and S. We define an instance of t round visual dialog by a tuple D = (I, H t , Q t ). Using DCN, we convert dialog history H t into the visual-linguistic vector sequence H t as:", "cite_spans": [ { "start": 205, "end": 223, "text": "(Ren et al., 2015;", "ref_id": "BIBREF19" }, { "start": 224, "end": 246, "text": "Anderson et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": "C = DCN (I, C) L i = DCN (I, (Q i , A i )), i = 1, \u2022 \u2022 \u2022 , t \u2212 1 H t = { C, L 1 , \u2022 \u2022 \u2022 , L t\u22121 } (1) Let Q t = DCN (I, Q t )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": ", the original visual dialog then turns into a new tuple D = ( H t , Q t ) in the joint visual-linguistic representation space. 
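A minimal sketch of this conversion (Eq. (1)), assuming a dcn(image_feats, text_emb) callable that stands in for the Dense Symmetric Co-Attention network and returns a single d_h-dimensional joint vector; the function and variable names are ours.

import torch

d_h = 512  # hidden size, illustrative

def dcn(image_feats, text_emb):
    # Stand-in for DCN(I, S): any module fusing region features (n_v x d_v)
    # with token embeddings (d_l x d_e) into one joint vector in R^{d_h}.
    return torch.randn(d_h)

def build_vl_sequence(image_feats, caption_emb, round_embs, query_emb):
    # Eq. (1): one joint visual-linguistic vector per dialog round.
    c_vec = dcn(image_feats, caption_emb)
    round_vecs = [dcn(image_feats, qa_emb) for qa_emb in round_embs]  # rounds 1..t-1
    h_seq = torch.stack([c_vec] + round_vecs, dim=0)                  # (t, d_h)
    q_vec = dcn(image_feats, query_emb)                               # current query Q_t
    return h_seq, q_vec
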
Note that the sequential structure of H t is exactly the same as that of H t and image I no longer exists in D as an explicit domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": "To facilitate discussion in section 3.2, we define the question history Q t by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": "Q i = DCN (I, Q i ), 1 \u2264 i \u2264 t Q t = { Q 1 , \u2022 \u2022 \u2022 , Q t\u22121 , Q t } (2)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": "Note, Q t includes the visual-linguistic vector of the query Q t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Visual Dialog as Visual-Linguistic Vector Sequence", "sec_num": "3.1" }, { "text": "As illustrated in Fig. 2 , Information Propagation Network is a 2-layer LSTM. After converting the visual dialog into a tuple D = ( H t , Q t ) in the joint visual-linguistic representation space, we apply a LSTM to the visual-linguistic vector sequence H t and use the hidden state at time t as the summary of visual-linguistic history. Specifically:", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 24, "text": "Fig. 2", "ref_id": null } ], "eq_spans": [], "section": "SeqIPN: Information Propagation Network", "sec_num": "3.2" }, { "text": "R L = LST M ( H t )[t], R L \u2208 R d h (3) Figure 2: Architecture of Information Propagation Net- work (SeqIPN)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqIPN: Information Propagation Network", "sec_num": "3.2" }, { "text": "We apply the same LSTM to question history Q t and use Q t 's hidden state R Q as the context aware query. Experiment shows introducing R Q can slightly drop the MRR (< 1%) but increase NDCG a lot (> 1.5%). The observation can be explained as R Q is the query distorted by LSTM, which fools the discriminator and results in the MRR drop. However, the impact is controllable because LSTM's forget gate makes the impact of previous questions gradually fade away along the propagation. On the other hand, R Q collects more semantic information to broaden the scope of candidate answers, which results in the NDCG increase.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqIPN: Information Propagation Network", "sec_num": "3.2" }, { "text": "[R L , R Q ] \u2208 R 2d h is linearly projected to R QL \u2208 R d h as the final representation of D. R QL is fed into the decoder to predict answer. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqIPN: Information Propagation Network", "sec_num": "3.2" }, { "text": "Transformer (Vaswani et al., 2017) was originally developed for sequence to sequence task using an encoder-decoder architecture.In this work, we modify Transformer's encoder by replacing its self-attention with the decoder's masked selfattention, while keeping other modules unchanged. We focus on the modifications to enable multistep reasoning via Transformer. 
For simplicity, we define three functions Query(), Key(), and V alue().", "cite_spans": [ { "start": 12, "end": 34, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": "Given a vector", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": "v \u2208 R d h , Query(v), Key(v)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": ", and V alue(v) are vectors in R d h and represent v's query, key, and value described in (Vaswani et al., 2017) respectively. Fig. 3 is a conceptual architecture of the proposed Multi-step Reasoning Network(SeqMRN). (Vaswani et al., 2017) . Given dialog tuple D = ( H t , Q t ), the position aware visual-linguistic sequence U t is defined by:", "cite_spans": [ { "start": 90, "end": 112, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" }, { "start": 217, "end": 239, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 127, "end": 133, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": "{P 0 , \u2022 \u2022 \u2022 , P t\u22121 } are position features defined in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": "U t = {U 0 , U 1 , \u2022 \u2022 \u2022 , U t\u22121 } U 0 = C + P 0 U i = L i + P i , 1 \u2264 i \u2264 t \u2212 1 (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "SeqMRN: Multi-step Reasoning Network", "sec_num": "3.3" }, { "text": "As illustrated in Fig. 3 , this layer applies masked self-attention within the position aware sequence U t . This layer allows a single dialog round to gather relevant information from previous conversations and embed the information into its own representation.", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 24, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "History Backward Self-Attention Layer", "sec_num": "3.3.1" }, { "text": "Specifically, for U i , 0 \u2264 i \u2264 t \u2212 1, its attention logits with respect to all the other rounds of dialog is defined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "History Backward Self-Attention Layer", "sec_num": "3.3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03c4 i : \u03c4 i j = Key(U j ) T Query(U i ) j \u2264 i \u2212\u221e i < j", "eq_num": "(5)" } ], "section": "History Backward Self-Attention Layer", "sec_num": "3.3.1" }, { "text": "where \u03c4 i \u2208 R t . Then, the context aware visual-linguistic sequence V t is defined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "History Backward Self-Attention Layer", "sec_num": "3.3.1" }, { "text": "w i = sof tmax(\u03c4 i / d h ), w i \u2208 R t V t = {V 0 , \u2022 \u2022 \u2022 , V t\u22121 } : V i = t\u22121 j=0 w i [j] \u2022 U j (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "History Backward Self-Attention Layer", "sec_num": "3.3.1" }, { "text": "In this layer, the query Q t renews its knowledge about the context based on V t . 
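Here V_t is the output of the preceding layer; a minimal single-head sketch of how it is computed (Eqs. (5)-(6)), assuming the Query and Key maps are plain linear layers (the actual model follows the multi-head Transformer formulation):

import math
import torch
import torch.nn as nn

d_h = 512
query_map, key_map = nn.Linear(d_h, d_h), nn.Linear(d_h, d_h)

def history_backward_self_attention(U):
    # U: (t, d_h) position-aware sequence from Eq. (4).
    t = U.size(0)
    logits = query_map(U) @ key_map(U).T                                # logits[i, j] = Key(U_j)^T Query(U_i)
    mask = torch.triu(torch.full((t, t), float('-inf')), diagonal=1)    # block attention to future rounds j > i
    w = torch.softmax((logits + mask) / math.sqrt(d_h), dim=-1)         # Eq. (6)
    return w @ U                                                        # V_i = sum_j w_i[j] * U_j, shape (t, d_h)
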
The attention weights reflect how Q t distributes its focus over V t , which enables reasoning across the dialog history. Specifically, the query's attention logits with respect to V t is defined by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u : u j = Key(V j ) T Query( Q t )/ d h 0 \u2264 j \u2264 t \u2212 1", "eq_num": "(7)" } ], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "However, we don't want history information in V t to overpower the query's own semantic meaning, thus we augment Q t by self-attention weight u q :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "u q = Key( Q t ) T Query( Q t )/ d h", "eq_num": "(8)" } ], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "Then, the query's correction \u25b3 Q t is defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w = sof tmax([u; u q ]), w \u2208 R t+1 \u25b3 Q t = t\u22121 i=0 w i V i + w t Q t", "eq_num": "(9)" } ], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "Note that Question Correction Layer keeps V t unchanged. Contrary to SeqIPN, we don't use question history Q t in SeqMRN because attention mechanism can make Q t indistinguishable from other questions in Q t .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Query Correction Layer", "sec_num": "3.3.2" }, { "text": "History Backward Self-Attention Layer and Question Correction Layer form the building blocks of our proposed Multi-step Reasoning Network. As illustrated in Fig. 3 , residual connection is used.", "cite_spans": [], "ref_spans": [ { "start": 157, "end": 163, "text": "Fig. 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "Q \u2032 t = Q t + \u25b3 Q t C \u2032 = V 0 + U 0 L \u2032 i = V i + U i , 1 \u2264 i \u2264 t \u2212 1", "eq_num": "(10)" } ], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "where the results Q \u2032 t , C \u2032 and L \u2032 t are vectors in R d h . We have refined the dialog tuple D = ( H t , Q t ) to be a new tuple", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "D \u2032 = ( H \u2032 t , Q \u2032 t ),", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "H \u2032 t = { C \u2032 , L \u2032 1 , \u2022 \u2022 \u2022 , L \u2032 t\u22121 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "Members in D \u2032 are more environment aware than their corresponding members in D. 
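A minimal single-head sketch of one such block (Eqs. (7)-(10)); query_map and key_map are assumed to be plain linear layers, V is the context-aware sequence V_t produced by the History Backward Self-Attention Layer, and the feed-forward and layer-norm sub-modules of the Transformer block are omitted for brevity:

import math
import torch
import torch.nn as nn

d_h = 512
query_map, key_map = nn.Linear(d_h, d_h), nn.Linear(d_h, d_h)

def query_correction(V, q):
    # V: (t, d_h) context-aware history, q: (d_h,) query vector Q_t.
    u = key_map(V) @ query_map(q) / math.sqrt(d_h)               # Eq. (7), shape (t,)
    u_q = torch.dot(key_map(q), query_map(q)) / math.sqrt(d_h)   # Eq. (8), self-attention term
    w = torch.softmax(torch.cat([u, u_q.reshape(1)]), dim=0)     # Eq. (9), shape (t+1,)
    return w[:-1] @ V + w[-1] * q                                # correction term for the query

def reasoning_block(U, V, q):
    # Residual updates of Eq. (10): refine every history round and the query.
    return U + V, q + query_correction(V, q)
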
We achieve multi-step reasoning by stacking several such building blocks to progressively refine D. We consider L\u2032_{t\u22121} of the last block as the summary of the dialog history and Q\u2032_t of the last block as the context-aware query.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "We project [ Q\u2032_t ; L\u2032_{t\u22121} ] to R_{QL} \u2208 R^{d_h} as the final representation of D.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multi-step Reasoning", "sec_num": "3.3.3" }, { "text": "For each candidate answer A_t^j \u2208 A_t , an LSTM is applied to A_t^j to obtain its representation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Decoder", "sec_num": "3.4.1" }, { "text": "R_j \u2208 R^{d_h} . The score of A_t^j is defined by s_j = R_j^T R_{QL} .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Decoder", "sec_num": "3.4.1" }, { "text": "Like (Guo et al., 2019) , we optimize the N-pair loss (Sohn, 2016) :", "cite_spans": [ { "start": 5, "end": 23, "text": "(Guo et al., 2019)", "ref_id": "BIBREF4" }, { "start": 54, "end": 66, "text": "(Sohn, 2016)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Discriminative Decoder", "sec_num": "3.4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_D = log( sum_{j=1}^{100} exp((s_j - s_{gt}) / \u03c4) )", "eq_num": "(11)" } ], "section": "Discriminative Decoder", "sec_num": "3.4.1" }, { "text": "where s_{gt} is the score of the ground truth answer, and we set \u03c4 = 0.25.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discriminative Decoder", "sec_num": "3.4.1" }, { "text": "Inspired by attention-based NMT (Luong et al., 2015) , we develop an attention-based decoder. The decoder is an LSTM initialized with R_{QL} . At each decoding step, instead of directly using the current hidden state to generate the distribution over the vocabulary, we compute similarity weights between the current hidden state and the hidden states of previous time steps, and generate the distribution from the weighted sum of the hidden states.", "cite_spans": [ { "start": 32, "end": 52, "text": "(Luong et al., 2015)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Generative Decoder", "sec_num": "3.4.2" }, { "text": "The VisDial v1.0 training set provides a subset named dense annotations 3 , which contains 2K dialog instances. For each instance, two human annotators assign each of its candidate answers a relevance score based on the ground-truth answer. (Qi et al., 2019b) fine-tunes with dense annotations using a generalized cross entropy loss:", "cite_spans": [ { "start": 264, "end": 282, "text": "(Qi et al., 2019b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_G = - sum_{j=1}^{100} y_j log(softmax(s)[j])", "eq_num": "(12)" } ], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "where s is the score vector of the candidate answers and y_j is the relevance score label of the j-th candidate answer. 
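A minimal sketch of the two objectives (Eqs. (11) and (12)), where scores is the 100-dimensional vector s produced by the discriminative decoder and relevance is the dense label vector y; the function and variable names are ours.

import torch

def n_pair_loss(scores, gt_index, tau=0.25):
    # Eq. (11): log-sum-exp of (s_j - s_gt) / tau over the 100 candidates.
    return torch.logsumexp((scores - scores[gt_index]) / tau, dim=0)

def dense_annotation_ce(scores, relevance):
    # Eq. (12): generalized cross entropy against the dense relevance labels.
    return -(relevance * torch.log_softmax(scores, dim=0)).sum()
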
However, blindly optimizing this objective will significantly hurt non-NDGC metrics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "To mitigate this issue, we propose a reweighting method to make the fine-tuning process aware of the importance of the ground truth answer. Specifically, we update the relevance label y by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "y \u2032 i = y i +2 3 , i = index gt y i 3 , otherwise", "eq_num": "(13)" } ], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "where index gt is the index of the ground truth answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Reweighting Method in Fine-tuning with Dense Annotations", "sec_num": "3.5" }, { "text": "Using the VisDial v1.0 dataset, we experiment with 4 types of SeqDiaN: SeqIPN with GloVe Embedding (Pennington et al., 2014) We trained Dense Symmetric Co-Attention Network (Nguyen and Okatani, 2018) from scratch. We use NDCG 1 , MRR 2 , recall (R@1, 5, 10), and mean rank to evaluate the models' performance.", "cite_spans": [ { "start": 99, "end": 124, "text": "(Pennington et al., 2014)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In discriminative task, the model ranks the 100 candidate answers based on discriminative score, which is defined as the dot product similarity between the representation of dialogue and that of candidate answer.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In training and evaluation phases, to simplify the framework, the generative task is to rank the 100 candidate answers too. Given a candidate answer A, its generative score is defined as", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "lld A \u221a |A| ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "where lld A is the answer's log-likelihood and |A| is the answer's length. Based on generative score, the rank of 100 candidate answers is well defined, as well as the sparse metric MRR and Recall. However, in inference phase, we obtain the answer via distribution over vocabulary and beam search at every step as usual.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "We compare the performance between Se-qDialN models of different configurations. We use Memory Network (MN) (Das et al., 2017) , History-Conditioned Image Attentive Encoder (HCIAE) (Lu et al., 2017) , Sequential Co-Attention Model (CoAtt) (Wu et al., 2018) and ReDAN (Gan et al., 2019) as baselines in this Model NDCG\u2191 MRR\u2191 R@1\u2191 R@5\u2191 R@10\u2191 Mean\u2193 MN-D (Das et al., 2017) 55 study because published work (Gan et al., 2019) reports the performance of these models with both discriminative and generative decoders. In Table 1 , \"-D\" stands for discriminative model and \"-G\" for generative model. SeqMRN-DE-D and SeqMRN-DE-G outperform all baselines and other SeqDialN models on NDCG 1 for both discriminative and generative cases. 
Especially for the generative case, SeqMRN-DE-G outperforms the second place ReDAN-G(T=3) by > 3.6% NDCG. Meanwhile, the MRR difference between ReDAN-G(T=3) and SeqMRN-DE-G is merely 0.3, SeqMRN-DE-G still outperforms ReDAN-G(T=3) on average performance. We arrive at the conclusion that SeqMRN-DE-G is a new state-ofthe-art generative visual dialog model. SeqIPN with GloVe Embedding is the simplest SeqDialN. However, SeqIPN-GE-D achieves better NDCG than well-known discriminative baselines such as MN-D, HCIAE-D and CoAtt-D. In addition, SeqIPN-GE-G even outperforms all generative baselines on NDCG. The model simplicity and performance gain together validate the merit of considering visual dialog as a visual-linguistic vector sequence.", "cite_spans": [ { "start": 108, "end": 126, "text": "(Das et al., 2017)", "ref_id": "BIBREF2" }, { "start": 181, "end": 198, "text": "(Lu et al., 2017)", "ref_id": "BIBREF10" }, { "start": 239, "end": 256, "text": "(Wu et al., 2018)", "ref_id": "BIBREF23" }, { "start": 267, "end": 285, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" }, { "start": 351, "end": 369, "text": "(Das et al., 2017)", "ref_id": "BIBREF2" }, { "start": 402, "end": 420, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 514, "end": 521, "text": "Table 1", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Model Comparison", "sec_num": "4.1.1" }, { "text": "In this section, we add VisDial-BERT (Murahari et al., 2019) as a baseline. At this stage, the comparison is conducted between models trained without dense annotation 3 .", "cite_spans": [ { "start": 37, "end": 60, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Ensemble SeqDialN Analysis", "sec_num": "4.1.2" }, { "text": "As discriminative SeqDialN and generative Seq-DialN rank the 100 candidate answers via discriminative score and generative score respectively, the uniform task definition facilitates the ensemble process. Given a set of SeqDialN models, we sim-Model NDCG\u2191 MRR\u2191 R@1\u2191 R@5\u2191 R@10\u2191 Mean\u2193 ReDAN: 4 Dis. + 4 Gen. (Gan et al., 2019) 65.13 54.19 42.92 66.25 74.88 8.74 ReDAN+ (Diverse Ens.) (Gan et al., 2019) 67.12 56.77 44.65 69.47 79.90 5.96 VisDial-BERT: w/L-only (Murahari et al., 2019) 62.64 67.86 54.54 84.34 92.36 3.44 VisDial-BERT: w/CC+VQA (Murahari et al., 2019) 64 ply average scores of all models to obtain the new score to rank the 100 candidate answers and evaluate the metrics based on the new rank. In Table 2 , \"SeqDialN: 4 Dis.\" is an ensemble of the 4 types of discriminative SeqDialN models while \"SeqDialN: 4 Gen.\" an ensemble of the 4 types of generative SeqDialN models. Our best model outperforms ReDAN and ReDAN+ by significant margin on both NDCG (> 1.5%) and MRR (> 1%). 
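The ensembling procedure itself is plain score averaging; a minimal sketch, assuming each member model exposes a hypothetical candidate_scores(batch) method that returns a (100,)-shaped tensor:

import torch

def ensemble_rank(models, batch):
    # Average per-candidate scores across ensemble members, then rank by the mean.
    mean_scores = torch.stack([m.candidate_scores(batch) for m in models]).mean(dim=0)
    return torch.argsort(mean_scores, descending=True)   # best candidate first
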
Our model also outperforms VisDial-BERT (Murahari et al., 2019) by > 3.5% NDCG despite the latter being pretrained on several large-scale datasets.", "cite_spans": [ { "start": 306, "end": 324, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" }, { "start": 382, "end": 400, "text": "(Gan et al., 2019)", "ref_id": "BIBREF3" }, { "start": 459, "end": 482, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 541, "end": 564, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 1030, "end": 1053, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 710, "end": 717, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ensemble SeqDialN Analysis", "sec_num": "4.1.2" }, { "text": "VisDial-BERT (Murahari et al., 2019) has roughly 250M parameters, the configuration \"w/L-only\" is trained only on VisDial v1.0-train set, which is more suitable to compare with SeqDialN. SeqIPN-GE-G has less than 69M parameters but it can outperform \"w/L-only\" on NDCG (> 0.5%). The ensemble configuration (SeqMRN-DE-D + SeqIPN-GE-G) has roughly the same parameters as \"w/L-only\" and it further outperforms \"w/L-only\" by > 4% NDCG. Actually, it even outperforms \"w/CC+VQA\" by > 2% NDCG. The advantage of VisDial-BERT (Murahari et al., 2019) is the high MRR score it achieves.", "cite_spans": [ { "start": 13, "end": 36, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" }, { "start": 517, "end": 540, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Ensemble SeqDialN Analysis", "sec_num": "4.1.2" }, { "text": "We also evaluate SeqDialN on VisDial v1.0 teststd set. Table 3 shows the comparison between our model and state-of-the-art visual dialog models trained without dense annotations 3 . SeqDialN achieves state-of-the-art performance on NDCG, even a single generative SeqDialN can outperform most previous work on that metric. At present, SeqDialN doesn't perform well on MRR, which is partly because it is hard for generative models to produce exactly the same answer as the ground truth, even when conditioned on the same semantic scenarios.", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 62, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Ensemble SeqDialN Analysis", "sec_num": "4.1.2" }, { "text": "We fine-tune discriminative SeqDialN with dense annotations 3 . Table 4 shows the proposed reweighting method greatly mitigates performance drop in our fine-tuning experiment. We list the Model NDCG\u2191 MRR\u2191 R@1\u2191 R@5\u2191 R@10\u2191 Mean\u2193 GNN 52.82 61.37 47.33 77.98 87.83 4.57 CorefNMN (Kottur et al., 2018) 54.70 61.50 47.55 78.10 88.80 4.40 RvA 55.59 63.03 49.03 80.40 89.83 4.18 DualVD (Jiang et al., 2020) 56.32 63.23 49.25 80.23 89.70 4.11 HACAN (Yang et al., 2019) 57.17 64.22 50.88 80.63 89.45 4.20 SN (Guo et al., 2019) 57.32 62.20 47.90 80.43 89.95 4.17 SN \u2020 (Guo et al., 2019) 57.88 63.42 49.30 80.77 90.68 3.97 NMN (Kottur et al., 2018) 58 fine-tuning statistics for one SeqIPN and one Se-qMRN as representatives. Table 5 compares SeqDialN with state-of-theart models trained with dense annotations. On Vis-Dial v1.0 test-std set, our model achieves comparable NDCG as others while outperforming them on MRR. It is interesting to note that VisDial-BERT (Murahari et al., 2019) outperforms our model on MMR by > 5% before fine-tuning. After finetuning however, our model outperforms it on MRR by nearly 5%. 
This observation validates the effectiveness of the reweighting method in preserving a model's overall performance when trained with dense annotations 3 . In addition, we find finetuning generative models don't improve NDCG as much as discriminative case.", "cite_spans": [ { "start": 275, "end": 296, "text": "(Kottur et al., 2018)", "ref_id": "BIBREF8" }, { "start": 378, "end": 398, "text": "(Jiang et al., 2020)", "ref_id": "BIBREF6" }, { "start": 440, "end": 459, "text": "(Yang et al., 2019)", "ref_id": "BIBREF24" }, { "start": 498, "end": 516, "text": "(Guo et al., 2019)", "ref_id": "BIBREF4" }, { "start": 557, "end": 575, "text": "(Guo et al., 2019)", "ref_id": "BIBREF4" }, { "start": 615, "end": 636, "text": "(Kottur et al., 2018)", "ref_id": "BIBREF8" }, { "start": 953, "end": 976, "text": "(Murahari et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 64, "end": 71, "text": "Table 4", "ref_id": "TABREF7" }, { "start": 714, "end": 721, "text": "Table 5", "ref_id": "TABREF9" } ], "eq_spans": [], "section": "Fine-tuning with Dense Annotations", "sec_num": "4.1.3" }, { "text": "We note SeqMRN yeilds the best performance in the single model comparison, we conduct further experiments to analyze contribution of its components. For simplicity, We train discriminative SeqMRN in different configurations to 13 epochs without fine-tuning.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation Study", "sec_num": "4.2" }, { "text": "NDCG\u2191 MRR\u2191 R@1\u2191 R@5\u2191 R@10\u2191 Mean\u2193 MReal-BDAI \u2020 (Qi et al., 2019b) 74 ", "cite_spans": [ { "start": 46, "end": 64, "text": "(Qi et al., 2019b)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": null }, { "text": "We close the modules in DCN (Nguyen and Okatani, 2018) which apply cross modality attention between vision and language features. Thus the two modalities are fused in a simple summation way in DCN.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Effectiveness of visual-linguistic joint representation", "sec_num": "4.2.1" }, { "text": "In this configuration, the two modalities won't be aware of the existence of each other until the masked self-attention step in Transformer. Table 6 shows its performance, which drops on all metrics. Especially on NDCG, it drops 3.14%.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 149, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Effectiveness of visual-linguistic joint representation", "sec_num": "4.2.1" }, { "text": "This experiment demonstrates the positive impact of our early fusion, as we say, the visuallinguistic joint representation. Further analysis reveals early fusion helps enhance the model's capability to filter out irrelevant answers. We find that each image in dense annotation 3 of VisDial v1.0 has on average 12.68 answers with non-zero relevant-score. On average, We find SeqMRN-DE-D-LateFusion ranks 5.58 (44.00%) zero relevantscore answers into the top 12.68 predictions, while this number of SeqMRN-DE-D is 5.36 (42.27%). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Item named SeqMRN-DE-D-LateFusion in", "sec_num": null }, { "text": "In Table 6 , the item SeqMRN-DE-D-NoQC shows the performance of the configuration by closing the Query Correction Layer illustrated in section 3.3.2. We see that performance drops on all metrics as well. 
We find Query Correction Layer enhances the model's capability to integrate history information based on the given query, thus it helps answer the query which requires dialog history. (Agarwal et al., 2020) points out that not all questions in VisDial v1.0 dataset need dialogue history to answer. They have proposed a dataset named VisDialConv (Agarwal et al., 2020) , which is actu-ally a subset of VisDial v1.0 validation dataset including 97 instances which answer needs the reference to dialog history.", "cite_spans": [ { "start": 388, "end": 410, "text": "(Agarwal et al., 2020)", "ref_id": null }, { "start": 549, "end": 571, "text": "(Agarwal et al., 2020)", "ref_id": null } ], "ref_spans": [ { "start": 3, "end": 10, "text": "Table 6", "ref_id": "TABREF11" } ], "eq_spans": [], "section": "Effectiveness of Query Correction Layer", "sec_num": "4.2.2" }, { "text": "We run both SeqMRN-DE-D and SeqMRN-DE-D-NoQC on VisDialConv dataset. SeqMRN-DE-D gets 51.11% NDCG and SeqMRN-DE-D-NoQC gets 50.22%, the former has 1.77% relative improvement. As illustrated in Figure 5 , the score distribution of the two models are similar, which concentrates in range [0.2, 0.9]. However, SeqMRN-DE-D scores significantly more instances in range [0.6, 0.7] than the other. SeqMRN-DE-D also scores less instances in the low range [0.0, 0.2] but scores more instances in the high range [0.8, 1]. These observations support the conclusion that Query Correction Layer helps answer history related questions. ", "cite_spans": [], "ref_spans": [ { "start": 193, "end": 201, "text": "Figure 5", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Effectiveness of Query Correction Layer", "sec_num": "4.2.2" }, { "text": "We use the 3 examples in Fig. 4 to illustrate Se-qMRN's reasoning capability. On the left, the question asks: \"Is the pickle a spear or sliced?\". In SeqMRN's first reasoning block (layer0), the question focus on preserving its own information (its self attention weight being 0.671). However, in the second reasoning block (layer1), the question pays more attention to the first round which has \"pickle\" related information. This example demonstrates the attention gets the right \"correction\" in Query Correction Layer. In the middle, the question asks: \"Does he wear a hat?\" Due to the word \"he\", in SeqMRN's first reasoning block (layer0), the attention is on the caption (0.69), which has words \"man\" and \"his\". However, in the second reasoning block (layer1), the attention turns to the round \"does he wear sunglasses? yes\". Note the semantic similarity between \"wear sunglasses\" and \"wear hat\" (they are both wearables on the head). This example shows the attention making decisions based upon refined knowledge about the context in a deeper stack.", "cite_spans": [], "ref_spans": [ { "start": 25, "end": 31, "text": "Fig. 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4.3" }, { "text": "On the right, the question asks: \"Is the picture in color?\" In SeqMRN's first reasoning block, the attention focuses on itself. However, in the second reasoning block, the attention switches to the caption. Most likely in deeper stack, it make the inference like: only a color image makes a banana look \"yellow\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative Analysis", "sec_num": "4.3" }, { "text": "We presented Sequential Visual Dialog Network (SeqDialN) based on a novel idea that treats dialog rounds as a visual-linguistic vector sequence. 
We explore both discriminative and generative models and set up a new state-of-the-art generative visual dialog model. Even though our model is trained only on VisDial v1.0 dataset, it achieves competitive performance against other models trained on much larger vision-language datasets, which facilitates its deployment in industrial environment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "visdial/challenge2020", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ioannis Konstas, and Verena Rieser. 2020. History for visual dialog: Do we really need it? arXiv preprint", "authors": [ { "first": "Shubham", "middle": [], "last": "Agarwal", "suffix": "" }, { "first": "Trung", "middle": [], "last": "Bui", "suffix": "" }, { "first": "Joon-Young", "middle": [], "last": "Lee", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2005.07493" ] }, "num": null, "urls": [], "raw_text": "Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioan- nis Konstas, and Verena Rieser. 2020. History for visual dialog: Do we really need it? arXiv preprint arXiv:2005.07493.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Bottom-up and top-down attention for image captioning and visual question answering", "authors": [ { "first": "Peter", "middle": [], "last": "Anderson", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Buehler", "suffix": "" }, { "first": "Damien", "middle": [], "last": "Teney", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Gould", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "6077--6086", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 6077-6086.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "authors": [ { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" }, { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "Khushi", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Avi", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Deshraj", "middle": [], "last": "Yadav", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "326--335", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. 
In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Multi-step reasoning via recurrent dual attention for visual dialog", "authors": [ { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Ahmed", "middle": [], "last": "Kholy", "suffix": "" }, { "first": "Linjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6463--6474", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhe Gan, Yu Cheng, Ahmed Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao. 2019. Multi-step reasoning via recurrent dual attention for visual dialog. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 6463- 6474.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Imagequestion-answer synergistic network for visual dialog", "authors": [ { "first": "Dalu", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Chang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Dacheng", "middle": [], "last": "Tao", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "10434--10443", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Image- question-answer synergistic network for visual dia- log. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 10434- 10443.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Long short-term memory", "authors": [ { "first": "Sepp", "middle": [], "last": "Hochreiter", "suffix": "" }, { "first": "J\u00fcrgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 1997, "venue": "Neural computation", "volume": "9", "issue": "8", "pages": "1735--1780", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Dualvd: An adaptive dual encoding model for deep visual understanding in visual dialogue", "authors": [ { "first": "X", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "J", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Y", "middle": [], "last": "Zhuang", "suffix": "" }, { "first": "X", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Y", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Q", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "X. Jiang, J. Yu, Z. Qin, Y. Zhuang, X. Zhang, Y. Hu, and Q. Wu. 2020. Dualvd: An adaptive dual encod- ing model for deep visual understanding in visual dialogue. 
AAAI.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Dual attention networks for visual reference resolution in visual dialog", "authors": [ { "first": "Gi-Cheon", "middle": [], "last": "Kang", "suffix": "" }, { "first": "Jaeseo", "middle": [], "last": "Lim", "suffix": "" }, { "first": "Byoung-Tak", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2024--2033", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gi-Cheon Kang, Jaeseo Lim, and Byoung-Tak Zhang. 2019. Dual attention networks for visual reference resolution in visual dialog. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2024-2033.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Visual coreference resolution in visual dialog using neural module networks", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Moura", "suffix": "" }, { "first": "", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the European Conference on Computer Vision (ECCV)", "volume": "", "issue": "", "pages": "153--169", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satwik Kottur, Jos\u00e9 MF Moura, Devi Parikh, Dhruv Ba- tra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module net- works. In Proceedings of the European Conference on Computer Vision (ECCV), pages 153-169.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2019, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "13--23", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Advances in Neural Information Process- ing Systems, pages 13-23.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Anitha", "middle": [], "last": "Kannan", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "314--324", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra. 2017. 
Best of both worlds: Trans- ferring knowledge from discriminative learning to a generative visual dialog model. In Advances in Neu- ral Information Processing Systems, pages 314-324.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Hierarchical question-image co-attention for visual question answering", "authors": [ { "first": "Jiasen", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jianwei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" } ], "year": 2016, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "289--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In Advances in neural information processing systems, pages 289-297.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1412--1421", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1412-1421.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Large-scale pretraining for visual dialog: A simple state-of-the-art baseline", "authors": [ { "first": "Vishvak", "middle": [], "last": "Murahari", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Abhishek", "middle": [], "last": "Das", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1912.02379" ] }, "num": null, "urls": [], "raw_text": "Vishvak Murahari, Dhruv Batra, Devi Parikh, and Ab- hishek Das. 2019. Large-scale pretraining for visual dialog: A simple state-of-the-art baseline. arXiv preprint arXiv:1912.02379.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Improved fusion of visual and language representations by dense symmetric co-attention for visual question answering", "authors": [ { "first": "Takayuki", "middle": [], "last": "Duy-Kien Nguyen", "suffix": "" }, { "first": "", "middle": [], "last": "Okatani", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6087--6096", "other_ids": {}, "num": null, "urls": [], "raw_text": "Duy-Kien Nguyen and Takayuki Okatani. 2018. Im- proved fusion of visual and language representations by dense symmetric co-attention for visual question answering. 
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6087-6096.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Recursive visual attention in visual dialog", "authors": [ { "first": "Yulei", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Hanwang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Manli", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianhong", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Zhiwu", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6679--6688", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2019. Recur- sive visual attention in visual dialog. Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 6679-6688.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Glove: Global vectors for word representation", "authors": [ { "first": "Jeffrey", "middle": [], "last": "Pennington", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)", "volume": "", "issue": "", "pages": "1532--1543", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Jianqiang Huang, and Hanwang Zhang. 2019a. Two causal principles for improving visual dialog", "authors": [ { "first": "Jiaxin", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yulei", "middle": [], "last": "Niu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaxin Qi, Yulei Niu, Jianqiang Huang, and Hanwang Zhang. 2019a. Two causal principles for improving visual dialog.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Learning to answer: Fine-tuning with generalized cross entropy for visual dialog challenge", "authors": [ { "first": "Jiaxin", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Yulei", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Hanwang", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Jianqiang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Xian-Sheng", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Ji-Rong", "middle": [], "last": "Wen", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaxin Qi, Yulei Niu, Hanwang Zhang, Jianqiang Huang, Xian-Sheng Hua, and Ji-Rong Wen. 2019b. Learning to answer: Fine-tuning with generalized cross entropy for visual dialog challenge 2019. 
[On- line; accessed November 12, 2019].", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "authors": [ { "first": "Kaiming", "middle": [], "last": "Shaoqing Ren", "suffix": "" }, { "first": "Ross", "middle": [], "last": "He", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Girshick", "suffix": "" }, { "first": "", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "91--99", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster r-cnn: Towards real-time ob- ject detection with region proposal networks. In Advances in neural information processing systems, pages 91-99.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "authors": [ { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "NeurIPS EM C 2 Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. In NeurIPS EM C 2 Workshop.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Improved deep metric learning with multi-class n-pair loss objective", "authors": [ { "first": "Kihyuk", "middle": [], "last": "Sohn", "suffix": "" } ], "year": 2016, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "1857--1865", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems, pages 1857-1865.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Are you talking to me? 
reasoned visual dialog generation through adversarial learning", "authors": [ { "first": "Qi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Peng", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Chunhua", "middle": [], "last": "Shen", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6106--6115", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, and An- ton Van Den Hengel. 2018. Are you talking to me? reasoned visual dialog generation through adversar- ial learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6106-6115.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Making history matter: Gold-critic sequence training for visual dialog", "authors": [ { "first": "Tianhao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Zheng-Jun", "middle": [], "last": "Zha", "suffix": "" }, { "first": "Hanwang", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianhao Yang, Zheng-Jun Zha, and Hanwang Zhang. 2019. Making history matter: Gold-critic sequence training for visual dialog. CoRR, abs/1902.09326.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Stacked attention networks for image question answering", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "21--29", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 21-29.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multi-modal factorized bilinear pooling with co-attention learning for visual question answering", "authors": [ { "first": "Zhou", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jun", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Jianping", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Dacheng", "middle": [], "last": "Tao", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the IEEE international conference on computer vision", "volume": "", "issue": "", "pages": "1821--1830", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zhou Yu, Jun Yu, Jianping Fan, and Dacheng Tao. 2017. Multi-modal factorized bilinear pooling with co-attention learning for visual question answering. 
In Proceedings of the IEEE international conference on computer vision, pages 1821-1830.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Reasoning visual dialogs with structural and partial observations", "authors": [ { "first": "Zilong", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Wenguan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Siyuan", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Song-Chun", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "6669--6678", "other_ids": {}, "num": null, "urls": [], "raw_text": "Zilong Zheng, Wenguan Wang, Siyuan Qi, and Song- Chun Zhu. 2019. Reasoning visual dialogs with structural and partial observations. Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 6669-6678.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Fig. 1illustrates a conceptual overview of the proposed method.The visual features and language embeddings are learned from two independent domains." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Conceptual architecture of sequential visual dialog network (SeqDialN)." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Conceptual architecture of Multistep Reasoning Network (SeqMRN)." }, "FIGREF3": { "uris": null, "num": null, "type_str": "figure", "text": "SeqMRN: learn to reason in attention stacks. Color strength indicates attention weight, the darker highlighting the higher attention paid." }, "FIGREF4": { "uris": null, "num": null, "type_str": "figure", "text": "NDCG Distribution Comparison on VisDial-Conv" }, "TABREF2": { "num": null, "text": "Performance of SeqDialN models on VisDial v1.0 validation set. Left: discriminative SeqDialN. Right: generative SeqDialN. \u2191 indicates higher is better. \u2193 indicates lower is better.", "html": null, "content": "", "type_str": "table" }, "TABREF4": { "num": null, "text": "Comparison of SeqDialN to state-of-the-art visual dialog models on VisDial v1.0 validation set.", "html": null, "content": "
", "type_str": "table" }, "TABREF6": { "num": null, "text": "Comparison of SeqDialN to state-of-the-art visual dialog models on VisDial v1.0 test-std set. \u2191 indicates higher is better. \u2193 indicates lower is better. \u2020 denotes ensembles. All models have been trained without dense annotations 3 .", "html": null, "content": "
Model | NDCG\u2191 | MRR\u2191 | R@1\u2191 | R@5\u2191 | R@10\u2191 | Mean\u2193
SeqMRN-DE-D | 70.23 | 38.33 | 23.04 | 55.17 | 71.51 | 9.29
SeqMRN-DE-D * | 70.72 | 53.59 | 42.35 | 65.05 | 77.73 | 7.27
SeqIPN-DE-D | 69.12 | 37.93 | 23.10 | 53.83 | 69.84 | 9.70
SeqIPN-DE-D * | 69.68 | 52.2 | 41.13 | 62.94 | 75.54 | 7.78
", "type_str": "table" }, "TABREF7": { "num": null, "text": "", "html": null, "content": "
Using the reweighting method to lessen performance drop on the VisDial v1.0 validation set. * denotes fine-tuning with the reweighting method.
", "type_str": "table" }, "TABREF9": { "num": null, "text": "Comparison of SeqDialN to state-of-the-art visual dialog models on VisDial v1.0 test-std set. All models have been trained with dense annotations 3", "html": null, "content": "", "type_str": "table" }, "TABREF11": { "num": null, "text": "Ablation Study on VisDial v1.0 validation set.", "html": null, "content": "
", "type_str": "table" } } } }