{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:48.038262Z"
},
"title": "Multimodal Item Categorization Fully Based on Transformers",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Boston",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Wei",
"middle": [],
"last": "Chou",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Boston",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Yandi",
"middle": [],
"last": "Xia",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Boston",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
},
{
"first": "Hirokazu",
"middle": [],
"last": "Miyake",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Rakuten Institute of Technology Boston",
"location": {
"region": "MA",
"country": "USA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Transformer has proven to be a powerful feature extraction method and has gained widespread adoption in natural language processing (NLP). In this paper we propose a multimodal item categorization (MIC) system solely based on the Transformer for both text and image processing. On a multimodal product data set collected from a Japanese ecommerce giant, we tested a new image classification model based on the Transformer and investigated different ways of fusing bi-modal information. Our experimental results on real industry data showed that the Transformerbased image classifier has performance on par with ResNet-based classifiers and is four times faster to train. Furthermore, a cross-modal attention layer was found to be critical for the MIC system to achieve performance gains over text-only and image-only models.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "The Transformer has proven to be a powerful feature extraction method and has gained widespread adoption in natural language processing (NLP). In this paper we propose a multimodal item categorization (MIC) system solely based on the Transformer for both text and image processing. On a multimodal product data set collected from a Japanese ecommerce giant, we tested a new image classification model based on the Transformer and investigated different ways of fusing bi-modal information. Our experimental results on real industry data showed that the Transformerbased image classifier has performance on par with ResNet-based classifiers and is four times faster to train. Furthermore, a cross-modal attention layer was found to be critical for the MIC system to achieve performance gains over text-only and image-only models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Item categorization (IC) is a core technology in modern e-commerce. Since there can be millions of products and hundreds of labels in e-commerce markets, it is important to be able to map these products to their locations in a product category taxonomy tree efficiently and accurately so that buyers can easily find the products they need. Therefore, IC technology with high accuracy is needed to cope with this demanding task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Products can contain text (such as titles) and images. Although most IC research has focused on using text-based cues, images of products also contain useful information. For example, in some sub-areas like fashion, the information conveyed through images is richer and more accurate than through the text channel. In this paper, we propose an MIC model entirely based on the Transformer architecture (Vaswani et al., 2017) for achieving * Equal contributor a simplification of the model and faster training speed. We conducted experiments on real product data collected from an e-commerce giant in Japan to (a) test the performance of the Transformerbased product image classification, and (b) systematically compare several bi-modal fusion methods to jointly use both text and image cues.",
"cite_spans": [
{
"start": 401,
"end": 423,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2 Related works (Zahavy et al., 2016 ) is a seminal work on MIC where multi-label classification using both titles and images was conducted on products listed on the Walmart.com website. They used a convolutional neural network to extract representations from both titles and images, then designed several policies to fuse the outputs of the two models. This led to improved performance over individual models separately. Since this work, further research has been conducted on MIC such as (Wirojwatanakul and Wangperawong, 2019; Nawaz et al., 2018) .",
"cite_spans": [
{
"start": 16,
"end": 36,
"text": "(Zahavy et al., 2016",
"ref_id": "BIBREF15"
},
{
"start": 490,
"end": 529,
"text": "(Wirojwatanakul and Wangperawong, 2019;",
"ref_id": "BIBREF14"
},
{
"start": 530,
"end": 549,
"text": "Nawaz et al., 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recently, a MIC data challenge was organized in the SIGIR'20 e-commerce workshop 1 . Rakuten France provided a dataset containing about 99K products where each product contained a title, an optional detailed description, and a product image. The MIC task was to predict 27 category labels from four major genres: books, children, household, and entertainment. Several teams submitted their MIC systems (Bi et al., 2020; Chordia and Vijay Kumar, 2020; Chou et al., 2020) . A common solution was to fine-tune pre-trained text and image encoders to serve as feature extractors, then use a bi-modal fusion mechanism to combine predictions. Most teams used the Transformer-based BERT model (Devlin et al., 2019 ) for text feature extraction and ResNet (He et al., 2016) for image feature extraction, including the standard ResNet-152 and the recently released Big Transfer (BiT) model . For bi-modal fusion, the methods used were more diverse. Roughly in order of increasing complexity, the methods included simple decision-level late fusion (Bi et al., 2020) , highway network (Chou et al., 2020) , and co-attention (Chordia and Vijay Kumar, 2020) . It is interesting to note that the winning team used the simplest decision-level late fusion method.",
"cite_spans": [
{
"start": 402,
"end": 419,
"text": "(Bi et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 420,
"end": 450,
"text": "Chordia and Vijay Kumar, 2020;",
"ref_id": "BIBREF1"
},
{
"start": 451,
"end": 469,
"text": "Chou et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 685,
"end": 705,
"text": "(Devlin et al., 2019",
"ref_id": "BIBREF3"
},
{
"start": 747,
"end": 764,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
},
{
"start": 1037,
"end": 1054,
"text": "(Bi et al., 2020)",
"ref_id": "BIBREF0"
},
{
"start": 1073,
"end": 1092,
"text": "(Chou et al., 2020)",
"ref_id": "BIBREF2"
},
{
"start": 1112,
"end": 1143,
"text": "(Chordia and Vijay Kumar, 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In other recent work, a cross-modal attention layer which used representations from different modalities to be the key and query vectors to compute attention weights was studied. In (Zhu et al., 2020) , product descriptions and images were jointly used to predict product attributes, e.g., color and size, and their values in an end-to-end fashion. In addition, based on the fact that product images can contain information not clearly aligned with or even contradicting the information conveyed in the text, a special gate was used to control the contribution of the image channel. A similar idea was used in (Sun et al., 2020) on multimodal named entity recognition research on Twitter data.",
"cite_spans": [
{
"start": 182,
"end": 200,
"text": "(Zhu et al., 2020)",
"ref_id": "BIBREF16"
},
{
"start": 610,
"end": 628,
"text": "(Sun et al., 2020)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although the field has converged on using Transformer-based models for processing text in recent years, ResNet-based image processing is still the dominant approach in MIC research. One immediate difficulty in combining the two types of models is the big gap between the training speeds. Owing to the superior parallel running capability enabled by self-attention in the Transformer architecture, text encoder training is much faster than the image encoder, and the training bottleneck of the MIC system becomes solely the image encoder. In addition, using two different deep learning architectures simultaneously makes building and maintaining MIC systems more complex. One solution is to use Transformers as the encoder of choice for both modalities. Furthermore, a detailed comparison of different fusion methods on large-scale multimodal industry product data is still missing. Our work addresses these two directions of research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our MIC model is depicted in Figure 1 2 . It consists of feature extraction components using a Transformer on uni-modal channels (i.e., text titles and images), a fusion part to obtain multimodal representations, and a Multi-Layer Perceptron (MLP) 2 The image of the can of tea is from https://item. rakuten.co.jp/kusurinokiyoshi/10016272/ head to make final predictions.",
"cite_spans": [
{
"start": 248,
"end": 249,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 29,
"end": 37,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": "3"
},
{
"text": "We fine-tuned a Japanese BERT model (Devlin et al., 2019) trained on Japanese Wikipedia data. The BERT model encodes a textual product title,",
"cite_spans": [
{
"start": 36,
"end": 57,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BERT text model",
"sec_num": "3.1"
},
{
"text": "x = ([CLS], x 1 , ..., x N ), into text representation sequence h = (h 0 , h 1 , ...h N )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT text model",
"sec_num": "3.1"
},
{
"text": ", where h i is a vector with a dimension of 768.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BERT text model",
"sec_num": "3.1"
},
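As an editor's illustrative sketch (not the paper's code) of the title-encoding step above: the checkpoint name matches the one cited in footnote 3, while the example title and the use of Hugging Face AutoTokenizer/AutoModel are assumptions.

```python
# Illustrative sketch only: encode a product title with the Japanese BERT
# checkpoint cited in footnote 3; the example title is hypothetical.
import torch
from transformers import AutoTokenizer, AutoModel

name = "cl-tohoku/bert-base-japanese-whole-word-masking"
tokenizer = AutoTokenizer.from_pretrained(name)  # MeCab-based word segmentation
model = AutoModel.from_pretrained(name)

title = "お茶 緑茶 500ml"  # hypothetical Japanese product title
inputs = tokenizer(title, return_tensors="pt")
with torch.no_grad():
    h = model(**inputs).last_hidden_state  # (1, N+1, 768); h[:, 0] is the [CLS] vector h_0
```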
{
"text": "Although originally developed for NLP applications, in recent years the Transformer architecture (Vaswani et al., 2017) has been increasingly applied to the computer vision domain. For example, (Han et al., 2020 ) is a recent survey paper listing many newly emerging visual models using the Transformer. Among the many visual Transformer models we used the ViT model (Dosovitskiy et al., 2020) , which is a pure Transformer that is applied directly on an image's P \u00d7 P patch sequence. ViT utilizes the standard Transformer's encoder part as an image classification feature extractor and adds a MLP head to determine the image labels. The ViT model was pre-trained using a supervised learning task on a massive image data set. The size of the supervised training data set impacts ViT performance significantly. When using Google's in-house JFT 300M image set, ViT can reach a performance superior to other competitive ResNet (He et al., 2016) models.",
"cite_spans": [
{
"start": 97,
"end": 119,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF13"
},
{
"start": 194,
"end": 211,
"text": "(Han et al., 2020",
"ref_id": null
},
{
"start": 367,
"end": 393,
"text": "(Dosovitskiy et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 924,
"end": 941,
"text": "(He et al., 2016)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ViT image model",
"sec_num": "3.2"
},
{
"text": "The ViT model encodes the product image. After converting a product image to P \u00d7 P patches, ViT converts these patches to visual tokens. After adding a special [CLS] visual token to represent the entire image, the M = P \u00d7 P + 1 long sequence is fed into a ViT model to output an encoding as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViT image model",
"sec_num": "3.2"
},
{
"text": "v = (v 0 , v 1 , v 2 , ...v M )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViT image model",
"sec_num": "3.2"
},
{
"text": ", where M = P \u00d7 P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ViT image model",
"sec_num": "3.2"
},
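The sketch below is an editor's illustration (not the paper's implementation) of how an image becomes the P × P + 1 token sequence described above; the 224 × 224 input, 16-pixel patches, 1024-dimensional embedding, truncated encoder depth, and omission of positional embeddings are simplifying assumptions.

```python
# Editor's sketch: turn an image into a (P*P + 1)-token sequence and encode it.
import torch
import torch.nn as nn

image = torch.randn(1, 3, 224, 224)           # dummy product image
patch_size, dim = 16, 1024
P = 224 // patch_size                          # P = 14 patches per side

# Split the image into P*P non-overlapping patches and embed each one.
patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
patches = patch_embed(image).flatten(2).transpose(1, 2)      # (1, P*P, dim)

# Prepend a learnable [CLS] token so the sequence length is P*P + 1.
cls_token = nn.Parameter(torch.zeros(1, 1, dim))
tokens = torch.cat([cls_token.expand(1, -1, -1), patches], dim=1)  # (1, 197, dim)

# A Transformer encoder (positional embeddings omitted, depth truncated for brevity)
# then produces v = (v_0, ..., v_M); v_0 encodes the whole image.
layer = nn.TransformerEncoderLayer(d_model=dim, nhead=16, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
v = encoder(tokens)                                           # (1, P*P + 1, dim)
```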
{
"text": "The fusion method plays an important role in MIC. In this paper we compared three methods, corresponding to Figure 1 (a) , (b), and (c).",
"cite_spans": [],
"ref_spans": [
{
"start": 108,
"end": 120,
"text": "Figure 1 (a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multimodal fusion",
"sec_num": "3.3"
},
{
"text": "The simplest fusion method is combining the decisions made by individual models directly (Bi et al., 2020; Chou et al., 2020) . We used weights \u03b1 and 1 \u2212 \u03b1 to interpolate the probabilities estimated by BERT and ViT models. The \u03b1 value was chosen using a held-out set.",
"cite_spans": [
{
"start": 89,
"end": 106,
"text": "(Bi et al., 2020;",
"ref_id": "BIBREF0"
},
{
"start": 107,
"end": 125,
"text": "Chou et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Late fusion",
"sec_num": "3.3.1"
},
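A minimal sketch of the decision-level late fusion just described, assuming hypothetical per-class probability vectors from the two models; the alpha value shown is arbitrary rather than one actually tuned on the held-out set.

```python
# Editor's sketch: interpolate BERT and ViT class probabilities with weight alpha.
import numpy as np

def late_fusion(p_text: np.ndarray, p_image: np.ndarray, alpha: float) -> int:
    """Return the predicted class index from alpha * p_text + (1 - alpha) * p_image."""
    fused = alpha * p_text + (1.0 - alpha) * p_image
    return int(np.argmax(fused))

# Hypothetical example: the text model is fairly confident about class 2.
p_text = np.array([0.05, 0.15, 0.80])
p_image = np.array([0.30, 0.45, 0.25])
print(late_fusion(p_text, p_image, alpha=0.7))  # -> 2
```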
{
"text": "The [CLS] token, the first token of every input sequence to BERT and ViT, is used to provide a global representation. Therefore we can concatenate the two encoded [CLS] tokens to create a multimodal output. The concatenated feature vectors are sent to an MLP head for predicting multi-class category labels. This method is called a shallow fusion (Siriwardhana et al., 2020) .",
"cite_spans": [
{
"start": 347,
"end": 374,
"text": "(Siriwardhana et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Early fusion -shallow",
"sec_num": "3.3.2"
},
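A minimal sketch of this shallow fusion head, assuming a 768-dimensional text [CLS] vector and a 1024-dimensional image [CLS] vector; the hidden size and number of classes are illustrative placeholders.

```python
# Editor's sketch: concatenate the two [CLS] vectors and classify with an MLP head.
import torch
import torch.nn as nn

class ShallowFusionHead(nn.Module):
    def __init__(self, text_dim=768, image_dim=1024, hidden=512, num_classes=100):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, h_cls: torch.Tensor, v_cls: torch.Tensor) -> torch.Tensor:
        # h_cls: (batch, 768) text [CLS]; v_cls: (batch, 1024) image [CLS]
        return self.mlp(torch.cat([h_cls, v_cls], dim=-1))  # class logits

logits = ShallowFusionHead()(torch.randn(4, 768), torch.randn(4, 1024))  # (4, 100)
```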
{
"text": "A cross-modal attention layer provides a more sophisticated fusion between text and image channels (Zhu et al., 2020; Sun et al., 2020) . Crossmodal attention is computed by combining Key-Value (K-V) pairs from one modality with the Query (Q) from the other modality. In addition, (Zhu et al., 2020) used a gate to moderate potential noise from the visual channel.",
"cite_spans": [
{
"start": 99,
"end": 117,
"text": "(Zhu et al., 2020;",
"ref_id": "BIBREF16"
},
{
"start": 118,
"end": 135,
"text": "Sun et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 281,
"end": 299,
"text": "(Zhu et al., 2020)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
{
"text": "Specifically, the multimodal representation h is computed from the addition of the self-attention (SA) version of text representation h and the crossmodal attention version by considering the visual ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "representation v as h = SA(h, h, h) + V G SA(h, v, v), (1) where SA(q, k, v) = softmax (W Q q)(W K k) T \u221a d k W V v,",
"eq_num": "(2)"
}
],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "V G i = \u03c3(W 1 h i + W 2 v 0 + b),",
"eq_num": "(3)"
}
],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y = softmax W 3 i h i ,",
"eq_num": "(4)"
}
],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
{
"text": "where W 3 is a trainable parameter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Early fusion -cross-modal attention",
"sec_num": "3.3.3"
},
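The sketch below illustrates Eqs. (1)-(4) as an editor's single-head approximation: both modalities are assumed to be projected to a common dimension, and the same W_Q, W_K, W_V are reused for the self-attention and cross-modal attention terms, which the paper does not specify.

```python
# Editor's sketch of the gated cross-modal attention fusion (Eqs. 1-4), simplified.
import torch
import torch.nn as nn

class GatedCrossModalFusion(nn.Module):
    def __init__(self, d=768, num_classes=100):
        super().__init__()
        self.Wq = nn.Linear(d, d, bias=False)
        self.Wk = nn.Linear(d, d, bias=False)
        self.Wv = nn.Linear(d, d, bias=False)
        self.W1 = nn.Linear(d, d, bias=False)
        self.W2 = nn.Linear(d, d, bias=True)   # carries the bias term b of Eq. (3)
        self.W3 = nn.Linear(d, num_classes)
        self.d = d

    def sa(self, q, k, v):
        # SA(q, k, v) = softmax((W_Q q)(W_K k)^T / sqrt(d_k)) W_V v   -- Eq. (2)
        scores = self.Wq(q) @ self.Wk(k).transpose(-2, -1) / self.d ** 0.5
        return scores.softmax(dim=-1) @ self.Wv(v)

    def forward(self, h, v):
        # h: (B, N+1, d) text tokens; v: (B, M+1, d) visual tokens, v[:, 0] = visual [CLS]
        gate = torch.sigmoid(self.W1(h) + self.W2(v[:, :1]))        # Eq. (3)
        fused = self.sa(h, h, h) + gate * self.sa(h, v, v)          # Eq. (1)
        return self.W3(fused.sum(dim=1))  # logits; Eq. (4) applies softmax to these

logits = GatedCrossModalFusion()(torch.randn(2, 32, 768), torch.randn(2, 197, 768))
```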
{
"text": "Data set: Our data consisted of about 500,000 products from a large e-commerce platform in Japan, focusing on three major product categories. Our task, a multi-class classification problem, was to predict the leaf-level product categories from their Japanese titles and images. Further details of our data set are shown in the left part of Table 1 . We used the macro-averaged F1-score to evaluate model performance. Models: We compared the following models.",
"cite_spans": [],
"ref_spans": [
{
"start": 340,
"end": 347,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
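A minimal sketch of the macro-averaged F1 evaluation, using scikit-learn and hypothetical label arrays; macro averaging weights every leaf category equally regardless of its frequency.

```python
# Editor's sketch: macro-F1 averages the per-class F1 scores with equal weight.
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 2, 2, 2]  # hypothetical gold leaf-category labels
y_pred = [0, 1, 1, 2, 2, 0]  # hypothetical model predictions
print(f1_score(y_true, y_pred, average="macro"))
```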
{
"text": "\u2022 Text-only: Japanese BERT model 3 fine-tuned on product titles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "\u2022 Image-BiT: BiT image model fine-tuned on product images. In particular, we used BiT-M. 4 BiT showed a considerable performance advantage than other conventional ResNet models in the SI-GIR'20 MIC data challenge (Chou et al., 2020) . Table 1 : Summary of our data set obtained from a large e-commerce platform in Japan. Right two columns report image classification macro-F1 values using BiT and ViT models, respectively.",
"cite_spans": [
{
"start": 89,
"end": 90,
"text": "4",
"ref_id": null
},
{
"start": 213,
"end": 232,
"text": "(Chou et al., 2020)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 235,
"end": 242,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "\u2022 Image-ViT: ViT image model (Dosovitskiy et al., 2020) fine-tuned on product images. We used ViT-L-16. 5 16 means that we used 16 \u00d7 16 patches when feeding images.",
"cite_spans": [
{
"start": 29,
"end": 55,
"text": "(Dosovitskiy et al., 2020)",
"ref_id": "BIBREF4"
},
{
"start": 104,
"end": 105,
"text": "5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "\u2022 Fusion: The late fusion method described in Section 3.3.1 and depicted in Figure 1 (a) , the early fusion method described in Section 3.3.2 and depicted in Figure 1 (b) , and the cross-modal fusion method described in Section 3.3.3 and depicted in Figure 1 (c).",
"cite_spans": [],
"ref_spans": [
{
"start": 76,
"end": 88,
"text": "Figure 1 (a)",
"ref_id": null
},
{
"start": 158,
"end": 170,
"text": "Figure 1 (b)",
"ref_id": null
},
{
"start": 250,
"end": 258,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
{
"text": "Implementation details: Our models were implemented in PyTorch using a GPU for training and evaluation. The AdamW optimizer (Loshchilov and Hutter, 2017) was used. Tokenization was performed with MeCab. 6 Table 1 reports on macro-F1 values for the three genres using the ResNet-based BiT vs. Transformer-based ViT. ViT shows higher performance compared to BiT on two of the three genres. In addition, consistent with the speed advantage reported in (Dosovitskiy et al., 2020) , we also observed that the training for ViT is about four times faster than BiT. This is critical for an MIC system deployable in industry since image model training time is the main bottleneck. Table 2 reports on uni-modal model performance, i.e., text-BERT and image-ViT separately, 5 https://github.com/asyml/ vision-transformer-pytorch 6 https://taku910.github.io/mecab/ as well as the results of fusing these models in various ways. We found that the early (shallow) fusion method leads to poor model performance. One possible reason is that product images used in e-commerce product catalogs sometimes do not appear to be clearly related to its corresponding titles. For example, a bottle of wine may be packaged in a box and its image only shows the box. We also found that late (decision) fusion does not lead to consistent gains. In the appliance genre, we found that the fused model was worse than the text model. On the other hand, the cross-modal attention fusion method showed consistent gains over both the text and image models separately on all three genres.",
"cite_spans": [
{
"start": 124,
"end": 153,
"text": "(Loshchilov and Hutter, 2017)",
"ref_id": null
},
{
"start": 449,
"end": 475,
"text": "(Dosovitskiy et al., 2020)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 1",
"ref_id": null
},
{
"start": 672,
"end": 679,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Setup",
"sec_num": "4.1"
},
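A self-contained sketch of the fine-tuning setup mentioned above (PyTorch with AdamW); the linear stand-in model, dummy data, learning rate, and weight decay are illustrative assumptions, not the paper's reported configuration.

```python
# Editor's sketch of an AdamW fine-tuning loop in PyTorch with placeholder data.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(768, 10)  # stand-in for the full classifier
data = TensorDataset(torch.randn(64, 768), torch.randint(0, 10, (64,)))
loader = DataLoader(data, batch_size=16, shuffle=True)

optimizer = AdamW(model.parameters(), lr=2e-5, weight_decay=0.01)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for features, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(features), labels)
    loss.backward()
    optimizer.step()
```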
{
"text": "Although various approaches have been explored in MIC research, we found that a MIC system built entirely out of the Transformer architecture was missing. Combining the well-established BERT text model and the newly released ViT image model, we proposed an all-Transformer MIC system on Japanese e-commerce products. From experiments on real industry product data from an e-commerce giant in Japan, we found that the ViT model can be fine-tuned four times faster than BiT and can have improved performance. Furthermore, fusing both text and image inputs in an MIC setup using the cross-modal attention fusion method led to model performance better than each model separately, and we found that this fusion method worked better than late fusion and the early (shallow) fusion of simply concatenating representations from the two modalities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "There are several directions to extend the current work in the future, including (1) considering jointly modeling texts and images in one Transformer model like FashionBERT (Gao et al., 2020) , and (2) using self-training to go beyond the limit caused by the size of labeled image data for the image model.",
"cite_spans": [
{
"start": 173,
"end": 191,
"text": "(Gao et al., 2020)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "5"
},
{
"text": "https://sigir-ecom.github.io/ ecom2020/data-task.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://huggingface.co/cl-tohoku/ bert-base-japanese-whole-word-masking 4 https://tfhub.dev/google/bit/ m-r152x4/1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Multimodal Late Fusion Model for E-Commerce Product Classification",
"authors": [
{
"first": "Ye",
"middle": [],
"last": "Bi",
"suffix": ""
},
{
"first": "Shuo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Zhongrui",
"middle": [],
"last": "Fan",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2008.06179"
]
},
"num": null,
"urls": [],
"raw_text": "Ye Bi, Shuo Wang, and Zhongrui Fan. 2020. A Multi- modal Late Fusion Model for E-Commerce Product Classification. arXiv preprint arXiv:2008.06179.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention",
"authors": [
{
"first": "V",
"middle": [],
"last": "Chordia",
"suffix": ""
},
{
"first": "B",
"middle": [
"G"
],
"last": "Vijay",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. SI-GIR'20 e-Com workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Chordia and B.G. Vijay Kumar. 2020. Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention. In Proc. SI- GIR'20 e-Com workshop.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "CBB-FE, CamemBERT and BiT Feature Extraction for Multimodal Product Classification and Retrieval",
"authors": [
{
"first": "H",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Y",
"middle": [
"H"
],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "W",
"middle": [
"T"
],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. SIGIR'20 e-Com workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Chou, Y.H. Lee, L. Chen, Y. Xia, and W.T. Chen. 2020. CBB-FE, CamemBERT and BiT Feature Ex- traction for Multimodal Product Classification and Retrieval. In Proc. SIGIR'20 e-Com workshop.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805[cs].ArXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs]. ArXiv: 1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An image is worth 16x16 words: Transformers for image recognition at scale",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Dosovitskiy",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Beyer",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Kolesnikov",
"suffix": ""
},
{
"first": "Dirk",
"middle": [],
"last": "Weissenborn",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Unterthiner",
"suffix": ""
},
{
"first": "Mostafa",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Minderer",
"suffix": ""
},
{
"first": "Georg",
"middle": [],
"last": "Heigold",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2010.11929"
]
},
"num": null,
"urls": [],
"raw_text": "Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, and Sylvain Gelly. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fashionbert: Text and image matching with adaptive loss for cross-modal retrieval",
"authors": [
{
"first": "Dehong",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Linbo",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Ben",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Minghui",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "2251--2260",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, and Hao Wang. 2020. Fashion- bert: Text and image matching with adaptive loss for cross-modal retrieval. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2251-2260.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An Xiao, Chunjing Xu, and Yixing Xu. 2020. A Survey on Visual Transformer",
"authors": [
{
"first": "Kai",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Yunhe",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hanting",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xinghao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianyuan",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yehui",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "An",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Chunjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Yixing",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2012.12556"
]
},
"num": null,
"urls": [],
"raw_text": "Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, and Yixing Xu. 2020. A Survey on Vi- sual Transformer. arXiv preprint arXiv:2012.12556.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Big Transfer (BiT): General Visual Representation Learning",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Kolesnikov",
"suffix": ""
},
{
"first": "Lucas",
"middle": [],
"last": "Beyer",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Joan",
"middle": [],
"last": "Puigcerver",
"suffix": ""
},
{
"first": "Jessica",
"middle": [],
"last": "Yung",
"suffix": ""
},
{
"first": "Sylvain",
"middle": [],
"last": "Gelly",
"suffix": ""
},
{
"first": "Neil",
"middle": [],
"last": "Houlsby",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1912.11370[cs].ArXiv:1912.11370"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. 2020. Big Transfer (BiT): General Visual Representation Learning. arXiv:1912.11370 [cs]. ArXiv: 1912.11370.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Learning fused representations for large-scale multimodal classification",
"authors": [
{
"first": "Alessandro",
"middle": [],
"last": "Shah Nawaz",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [
"Kamran"
],
"last": "Calefati",
"suffix": ""
},
{
"first": "Muhammad",
"middle": [],
"last": "Janjua",
"suffix": ""
},
{
"first": "Ignazio",
"middle": [],
"last": "Umer Anwaar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gallo",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE Sensors Letters",
"volume": "3",
"issue": "1",
"pages": "1--4",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shah Nawaz, Alessandro Calefati, Muhammad Kam- ran Janjua, Muhammad Umer Anwaar, and Ignazio Gallo. 2018. Learning fused representations for large-scale multimodal classification. IEEE Sensors Letters, 3(1):1-4.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Rivindu Weerasekera, and Suranga Nanayakkara. 2020. Tuning \"BERT-like\" Self Supervised Models to Improve Multimodal Speech Emotion Recognition",
"authors": [
{
"first": "Shamane",
"middle": [],
"last": "Siriwardhana",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Reis",
"suffix": ""
},
{
"first": "Rivindu",
"middle": [],
"last": "Weerasekera",
"suffix": ""
},
{
"first": "Suranga",
"middle": [],
"last": "Nanayakkara",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shamane Siriwardhana, Andrew Reis, Rivindu Weerasekera, and Suranga Nanayakkara. 2020. Tuning \"BERT-like\" Self Supervised Models to Improve Multimodal Speech Emotion Recognition.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "RIVA: A Pre-trained Tweet Multimodal Model Based on Text-image Relation for Multimodal NER",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jiquan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yindu",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Fangsheng",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Yuxuan",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Zengwei",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Yuanyi",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1852--1862",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Sun, Jiquan Wang, Yindu Su, Fangsheng Weng, Yuxuan Sun, Zengwei Zheng, and Yuanyi Chen. 2020. RIVA: A Pre-trained Tweet Multimodal Model Based on Text-image Relation for Multi- modal NER. In Proceedings of the 28th Inter- national Conference on Computational Linguistics, pages 1852-1862.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Attention is all you need. Advances in neural information processing systems",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Kaiser",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, \\Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in neural information process- ing systems, 30:5998-6008.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multi-Label Product Categorization Using Multi-Modal Fusion Models",
"authors": [
{
"first": "Pasawee",
"middle": [],
"last": "Wirojwatanakul",
"suffix": ""
},
{
"first": "Artit",
"middle": [],
"last": "Wangperawong",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1907.00420"
]
},
"num": null,
"urls": [],
"raw_text": "Pasawee Wirojwatanakul and Artit Wangperawong. 2019. Multi-Label Product Categorization Us- ing Multi-Modal Fusion Models. arXiv preprint arXiv:1907.00420.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deep Multi-Modal Fusion Architecture for Product Classification in e-commerce",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Zahavy",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Magnani",
"suffix": ""
},
{
"first": "Abhinandan",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Shie",
"middle": [],
"last": "Mannor",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1611.09534"
]
},
"num": null,
"urls": [],
"raw_text": "Tom Zahavy, Alessandro Magnani, Abhinandan Krish- nan, and Shie Mannor. 2016. Is a picture worth a thousand words? A Deep Multi-Modal Fusion Ar- chitecture for Product Classification in e-commerce. arXiv preprint arXiv:1611.09534.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Multimodal Joint Attribute Prediction and Value Extraction for E",
"authors": [
{
"first": "Tiangang",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Yue",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Haoran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Youzheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2009.07162"
]
},
"num": null,
"urls": [],
"raw_text": "Tiangang Zhu, Yue Wang, Haoran Li, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. Multi- modal Joint Attribute Prediction and Value Ex- traction for E-commerce Product. arXiv preprint arXiv:2009.07162.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td/><td/><td/><td colspan=\"2\">(a)</td><td/><td/><td/><td/><td>(b)</td><td>(c)</td></tr><tr><td/><td/><td/><td colspan=\"2\">category</td><td/><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"2\">Early fusion</td><td>Early fusion</td></tr><tr><td/><td/><td/><td colspan=\"2\">Late fusion</td><td/><td/><td/><td colspan=\"2\">(Shallow)</td><td>(Cross-Attention)</td></tr><tr><td/><td/><td/><td colspan=\"2\">(decision)</td><td/><td/><td/><td/><td>category</td><td>category</td></tr><tr><td colspan=\"2\">alpha</td><td/><td/><td/><td>1-alpha</td><td/><td/><td/><td/></tr><tr><td>MLP</td><td/><td/><td/><td/><td>MLP</td><td/><td/><td/><td>MLP</td><td>MLP</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Cross-modal</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>attention</td></tr><tr><td>h_0</td><td>h_1</td><td>h_2</td><td>...</td><td>h_N</td><td>v_0</td><td>v_1</td><td>h_2</td><td>...</td><td>v_9</td></tr><tr><td/><td/><td>BERT</td><td/><td/><td/><td/><td>ViT</td><td/><td/></tr><tr><td colspan=\"3\">CLS W_1 W_2</td><td>...</td><td>W_N</td><td colspan=\"2\">CLS P_1</td><td>P_2</td><td>...</td><td>P_9</td></tr><tr><td/><td colspan=\"3\">Word_1, ... Word_N</td><td/><td/><td>1</td><td>2</td><td>3</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td>4</td><td>5</td><td>6</td><td/></tr><tr><td/><td/><td/><td/><td/><td/><td>7</td><td>8</td><td>9</td><td/></tr><tr><td colspan=\"11\">Figure 1: Our Transformer-based MIC system consists</td></tr><tr><td colspan=\"11\">of a BERT model to extract textual information and a</td></tr><tr><td colspan=\"11\">ViT model to extract visual information. Three differ-</td></tr><tr><td colspan=\"11\">ent types of multimodal fusion methods are compared,</td></tr><tr><td colspan=\"11\">including (a) late fusion, (b) early fusion by concatenat-</td></tr><tr><td colspan=\"11\">ing textual and image representations (shallow), and (c)</td></tr><tr><td colspan=\"11\">early fusion by using a cross-modal attention. Wide ar-</td></tr><tr><td colspan=\"5\">rows indicate that</td><td/><td/><td/><td/><td/></tr></table>",
"num": null,
"html": null,
"type_str": "table",
"text": "the entire sequence, e.g., h 0 to h N , is used in the computation. For illustration we show 3 \u00d7 3 patches for ViT but in our actual implementation a higher P was used."
},
"TABREF1": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "W Q , W K , and W V are trainable query, key, and value parameters, d k is the dimension of the key vectors, and the visual gate, V G, can be learned from both the local text representations h i and global visual representation v 0 , with W 1 , W 2 , and b as trainable parameters. The category label prediction\u0177 is determined a\u015d"
},
"TABREF4": {
"content": "<table/>",
"num": null,
"html": null,
"type_str": "table",
"text": "Macro-F1 on the three product genres. Unimodal models, i.e., BERT text model and ViT image model, and different fusion models are compared."
}
}
}
}