{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T10:33:50.020884Z"
},
"title": "Product Titles-to-Attributes As a Text-to-Text Task",
"authors": [
{
"first": "Gilad",
"middle": [],
"last": "Fuchs",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "eBay Research",
"location": {
"country": "Israel"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Online marketplaces use attribute-value pairs, such as brand, size, size type, color, etc. to help define important and relevant facts about a listing. These help buyers to curate their search results using attribute filtering and overall create a richer experience. Although their critical importance for listings' discoverability, getting sellers to input tens of different attribute-value pairs per listing is costly and often results in missing information. This can later translate to the unnecessary removal of relevant listings from the search results when buyers are filtering by attribute values. In this paper we demonstrate using a Text-to-Text hierarchical multilabel ranking model framework to predict the most relevant attributes per listing, along with their expected values, using historic user behavioral data. This solution helps sellers by allowing them to focus on verifying information on attributes that are likely to be used by buyers, and thus, increase the expected recall for their listings. Specifically for eBay's case we show that using this model can improve the relevancy of the attribute extraction process by 33.2% compared to the current highlyoptimized production system. Apart from the empirical contribution, the highly generalized nature of the framework presented in this paper makes it relevant for many high-volume search-driven websites.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "Online marketplaces use attribute-value pairs, such as brand, size, size type, color, etc. to help define important and relevant facts about a listing. These help buyers to curate their search results using attribute filtering and overall create a richer experience. Although their critical importance for listings' discoverability, getting sellers to input tens of different attribute-value pairs per listing is costly and often results in missing information. This can later translate to the unnecessary removal of relevant listings from the search results when buyers are filtering by attribute values. In this paper we demonstrate using a Text-to-Text hierarchical multilabel ranking model framework to predict the most relevant attributes per listing, along with their expected values, using historic user behavioral data. This solution helps sellers by allowing them to focus on verifying information on attributes that are likely to be used by buyers, and thus, increase the expected recall for their listings. Specifically for eBay's case we show that using this model can improve the relevancy of the attribute extraction process by 33.2% compared to the current highlyoptimized production system. Apart from the empirical contribution, the highly generalized nature of the framework presented in this paper makes it relevant for many high-volume search-driven websites.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many online marketplaces have new-listing forms that include both structured and unstructured input types to help sellers describe their listing 1 . While the unstructured part often includes free-text input boxes for title and description, a pictures upload option, etc., the structured part can include the selection of the listing category from a predefined list, or selecting specific attribute-value pairs (e.g. {\"Brand\":\"Apple\", \"Color\":\"Black\"}). Of the two, structured input often enables marketplaces a more streamline use of the data, since it requires less preprocessing and allows for more direct usage (via search results filters, etc.). On the flip-side, entering such data is more labor intensive for the sellers, and therefore, more expensive to get. This can also be intricate work for sellers since in most cases there are tens of different possible attribute names for every listing, with some attributes having more than one possible value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To reduce the seller-inflicted cost of entering listing attribute values we set two solution guidelines: (a) sellers should focus on the top attributes that are expected to impact their listing discoverability. This aims to reduce the number of attributes for which seller attention is required and only focus on those which are likely to be used in the buyerjourney of their target audience. And (b), in an effort to further reduce friction, the marketplace should pre-populate a suggested value for each of these top attributes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To identify the top attributes in a scalable manner we leveraged the rich historical data of buyer behavior on the eBay website. Like many other search-driven websites, eBay allows buyers to curate search results by applying filters on top of the initial results from the free-text-based query. Logging the filtering selections of buyers, alongside with their post-search actions, allows for an opportunity to learn what are the key attributes that buyers value when searching for the right result. For example, a common buyer behavior is to type a general description in the search box, like \"handbag\", and then to filter the results using more granular attributes, like \"Material\", etc. (Figure 1 ). Following this filtering step, the buyer might click on, and potentially purchase, a specific listing that was a part of the filtered results set. Mapping this buyer journey, from search to filtering and listing-click, allows to learn which attributes are most important for the discovery of every listing. A buyer is searching for \"handbag\" in the search box (top) and further filters the results by selecting the attribute value \"Leather\" under \"Material\" (left).",
"cite_spans": [],
"ref_spans": [
{
"start": 689,
"end": 698,
"text": "(Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "From a modeling standpoint, to accommodate both of the solution guidelines above, the output set of the model should include the importance ranking of the top attributes and their expected value. As to the model input, in order for the solution to generalize across different downstream tasks, we need to pick a minimal viable data point that all listings have, but yet, that is highly informative. In our case that would be the listing title. Model design can be examined using different lenses; A supervised model based on the historical mapping between listing titles and multiple attribute-value pairs can be modeled as a multi-label text classification (MLTC) task. However, since there is a hierarchical relationship between the attributes and values (since each attribute has a finite list of possible values), the task can also be viewed as a hierarchical multi-label text classification (HMLTC) task. Last, since we care about the importance ranking of the attributevalue pairs, this can also be viewed as a ranking task. Recent Text-to-Text-driven approaches have shown to be highly valuable for various Natural Language Processing (NLP) tasks including MLTC and HMLTC (Nam et al., 2017; Lin et al., 2018; Raffel et al., 2019) . Inspired by these approaches, we demonstrate using a Text-to-Text framework in a HMLTC ranking task and compare it to other classification models.",
"cite_spans": [
{
"start": 1179,
"end": 1197,
"text": "(Nam et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 1198,
"end": 1215,
"text": "Lin et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 1216,
"end": 1236,
"text": "Raffel et al., 2019)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Specifically in our case, the use of a Text-to-Text model approach is useful since it allows to produce multiple ranked hierarchical predictions, while separating between the probability score for the attributes and values. This introduces further flexibility to the solution (beyond the scope of the above guidelines) by allowing to report high impacting attributes even if we are uncertain about their expected attribute values. Furthermore, in comparison to approaches such as Named Entity Recognition (NER), a Text-to-Text model does not require the reported top attribute values to exist in the input title. This is useful since sellers are not always mentioning the most valuable attribute values in the listing title. Last, from an empirical standpoint, the Text-to-Text models we trained almost always outperformed models from other approaches (see section 4.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
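{
"text": "To make this separation concrete: with the two-token hierarchical targets used later (Section 3.1), the Seq2Seq decoder factorizes the joint score of an attribute-value pair by the chain rule, so attribute and value confidences can be read off separately. A minimal sketch, with illustrative numbers rather than real model outputs:\n\n# chain rule over the two target tokens (numbers are illustrative only)\np_attr = 0.90 # P('material' | title)\np_val_given_attr = 0.60 # P('leather' | title, 'material')\np_pair = p_attr * p_val_given_attr # joint score: 0.54\n# a high p_attr with a low p_val_given_attr lets us report the attribute alone\nprint(p_attr, p_val_given_attr, p_pair)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},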
{
"text": "To conclude, in this work we suggest a scalable and automatic method for using listing titles to identify the most valuable set of attribute-value pairs by learning from the buyers' filtering behavior. In the next section we describe related work in the field of attribute-value extraction and hierarchical classification tasks. In the following section we describe our data collection methodology and the training procedures used for the four models that we trained. This is followed by a quantitative comparison of the results of the models, and a qualitative evaluation of the results of our best performing one. We conclude by discussing the tradeoffs of our current approach, and describe our plans for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Various methods are used to automatically extract attribute-value pairs from product-related text. This ranges from manual rules and regular expressions (Petrovski et al., 2014) to more advanced modern learning algorithms (Ghani et al., 2006; Kannan et al., 2011; de Bakker et al., 2013; Melli, 2014; Joshi et al., 2015; Ristoski and Mika, 2016; More, 2016; Petrovski and Bizer, 2017; Majumder et al., 2018; Charron et al., 2016) . In contrast to our work, these methods are focusing on extracting the most complete set of attribute-value pairs, or limited to only attribute values which appear explicitly in the product-related text. Apart from (Charron et al., 2016) , non of these works have leveraged data from historical user interaction with the attribute-value pairs.",
"cite_spans": [
{
"start": 153,
"end": 177,
"text": "(Petrovski et al., 2014)",
"ref_id": "BIBREF29"
},
{
"start": 222,
"end": 242,
"text": "(Ghani et al., 2006;",
"ref_id": "BIBREF8"
},
{
"start": 243,
"end": 263,
"text": "Kannan et al., 2011;",
"ref_id": "BIBREF13"
},
{
"start": 264,
"end": 287,
"text": "de Bakker et al., 2013;",
"ref_id": "BIBREF5"
},
{
"start": 288,
"end": 300,
"text": "Melli, 2014;",
"ref_id": "BIBREF20"
},
{
"start": 301,
"end": 320,
"text": "Joshi et al., 2015;",
"ref_id": "BIBREF12"
},
{
"start": 321,
"end": 345,
"text": "Ristoski and Mika, 2016;",
"ref_id": "BIBREF31"
},
{
"start": 346,
"end": 357,
"text": "More, 2016;",
"ref_id": "BIBREF23"
},
{
"start": 358,
"end": 384,
"text": "Petrovski and Bizer, 2017;",
"ref_id": "BIBREF28"
},
{
"start": 385,
"end": 407,
"text": "Majumder et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 408,
"end": 429,
"text": "Charron et al., 2016)",
"ref_id": "BIBREF3"
},
{
"start": 646,
"end": 668,
"text": "(Charron et al., 2016)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Hierarchical classification has been of wide interest both in computer vision applications and text related tasks. Early work has been focusing on flattening the labels (Cai and Hofmann, 2004; Hayete and Bienkowska, 2005) or on training multiple local classifiers, where the number of classifiers is dependent on the depth of the label hierarchy (Koller and Sahami, 1997; Sun and Lim, 2001; Cesa-Bianchi et al., 2006) . More recent studies aimed to train a single neural network which can learn the label hierarchy complexity (Johnson and Zhang, 2015; Peng et al., 2018; Mao et al., 2019) , while others combined both a single global network and multiple local classifiers (Wehrmann et al., 2018) . Most recently, several works demonstrated that sequenceto-sequence (Seq2Seq) networks are a promising representation for hierarchical text classification tasks (Nam et al., 2017; Lin et al., 2018) . However, less focus was given to using Seq2Seq for the ranking of multiple hierarchical label data structures, which are commonly being used, especially in online marketplaces.",
"cite_spans": [
{
"start": 169,
"end": 192,
"text": "(Cai and Hofmann, 2004;",
"ref_id": "BIBREF0"
},
{
"start": 193,
"end": 221,
"text": "Hayete and Bienkowska, 2005)",
"ref_id": "BIBREF9"
},
{
"start": 346,
"end": 371,
"text": "(Koller and Sahami, 1997;",
"ref_id": "BIBREF14"
},
{
"start": 372,
"end": 390,
"text": "Sun and Lim, 2001;",
"ref_id": "BIBREF33"
},
{
"start": 391,
"end": 417,
"text": "Cesa-Bianchi et al., 2006)",
"ref_id": "BIBREF1"
},
{
"start": 526,
"end": 551,
"text": "(Johnson and Zhang, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 552,
"end": 570,
"text": "Peng et al., 2018;",
"ref_id": "BIBREF27"
},
{
"start": 571,
"end": 588,
"text": "Mao et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 673,
"end": 696,
"text": "(Wehrmann et al., 2018)",
"ref_id": "BIBREF34"
},
{
"start": 859,
"end": 877,
"text": "(Nam et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 878,
"end": 895,
"text": "Lin et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our training dataset includes information from two major eBay verticals -\"Electronics\" and \"Fashion\", where search-filtering activity is most frequent. The data includes roughly 10M and 3M random entities from Fashion and Electronics (respectively), all from the eBay US website. Each training entity includes a listing title and one matching attribute-value pair which was previously used in a single search filtering session to discover that listing. Since the distribution of attribute-value pairs has a long-tail, we reduced the complexity of the task by truncating the data to include only the top 800 most frequent combinations. Doing so, we kept 90% of all of the filtering activity done by buyers (which is considered sufficient coverage for our use case). We used 5% of the data for validation and and model selection, and an additional 5% for test. For non-hierarchical classification experiments we have concatenated attribute-value pairs to a single token (e.g. {\"Color\":\"Black\"} was transformed to \"Color:Black\"). For Seq2Seq hierarchical classification, we kept the pairs as two separated tokens (e.g. \"Color Black\"). Separating the tokens allows the Seq2Seq model to natively perform hierarchical classification, as the Seq2Seq decoder's predictions are dependent on the previous predicted tokens (e.g. in case the attribute prediction token is \"Color\" the next token prediction is likely to be a color name, such as \"Black\"). All tokens in multi-token attribute names or values were concatenated with an underscore as a delimiter. As duplications in the training set represent a frequent, and therefore more important, listing discovery pattern, the data was not deduplicated in any way. For example, the title \"Color Clash 100% Genuine Leather Snake Ladies Handbag Tote Shoulder Bag\" might appear 20 times in the training data, out of which 12 times it will be coupled with the attribute-value pair {\"Material\":\"Leather\"}, 6 times with {\"Style\":\"Tote\"} and only 2 times with {\"Size\":\"Large\"}. The listing titles dataset was pre-processed by transforming the tokens to lowercase and removing known stopwords and non-alphanumeric characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},
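{
"text": "As a minimal sketch of the preprocessing described above (the helper names and the stopword list are our own, hypothetical choices; only the transformation rules come from this section), the following Python turns one (title, attribute, value) filtering event into a source/target training pair for either label format:\n\nimport re\n\nSTOPWORDS = {'new', 'free', 'lot'} # hypothetical; the actual stopword list is not published\n\ndef clean_title(title):\n    # lowercase, drop non-alphanumeric characters and known stopwords\n    tokens = re.sub(r'[^a-z0-9 ]', ' ', title.lower()).split()\n    return ' '.join(t for t in tokens if t not in STOPWORDS)\n\ndef make_example(title, attribute, value, hierarchical=True):\n    # multi-token names/values are joined with underscores, e.g. 'Frame Color' -> 'frame_color'\n    attr = attribute.lower().replace(' ', '_')\n    val = str(value).lower().replace(' ', '_')\n    # hierarchical: two tokens ('material leather'); flattened: one token ('material:leather')\n    target = f'{attr} {val}' if hierarchical else f'{attr}:{val}'\n    return clean_title(title), target\n\nsrc, tgt = make_example('Color Clash 100% Genuine Leather Snake Ladies Handbag', 'Material', 'Leather')\nprint(src) # color clash 100 genuine leather snake ladies handbag\nprint(tgt) # material leather",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "3.1"
},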
{
"text": "For the Text-to-Text approach we trained a Convolution Neural Network (CNN) Seq2Seq model (Gehring et al., 2017) via the Fairseq framework (Ott et al., 2019) . For this we used a CNN architecture, following (Gehring et al., 2017) , which consists an embedding layer, positional embedding layer, an encoder with 4 convolutional layers, a decoder with 3 convolutional layers and a kernel width of 3. The output of the each encoder convolutional layer is transformed by a non-linear gated linear units (GLU) (Dauphin et al., 2016) with the residual connections linking between the GLU blocks and the convolutional blocks. Each decoder GLU output undergoes a dot-product based attention with the last encoder GLU block output (see also (Gehring et al., 2017) for more details). Training was done with learning rate of 0.25, gradient clipping (clip-norm) of 0.1, dropout of 0.2, maximum number of tokens in a batch (max-tokens) of 4000 and max number of epochs of 15, with a Nesterov Accelerated Gradient (NAG) optimizer (NES-TEROV, 1983) on a single GPU. Prior to training, pre-processing was done with \"fairseq-preprocess\" to build a vocabulary and binarize the data. For predictions, beam search size was set to 5. We trained two versions of the Seq2Seq models -one with attribute-value labels flattened to a single token (Seq2Seq-single), and the other where we kept their hierarchical structure (Seq2Seq-hierarchical), as described in section 3.1 above. Both versions were trained with the same hyper-parameters.",
"cite_spans": [
{
"start": 90,
"end": 112,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 139,
"end": 157,
"text": "(Ott et al., 2019)",
"ref_id": "BIBREF26"
},
{
"start": 207,
"end": 229,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 732,
"end": 754,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
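{
"text": "For reference, the hyper-parameters above map onto Fairseq's standard command-line entry points roughly as follows. This is a sketch, not the authors' released configuration: the data paths, the 'title'/'label' file suffixes, and the hidden size of 512 are assumptions (the paper specifies only the layer counts and the kernel width of 3), while the remaining flags mirror the reported values.\n\nimport subprocess\n\n# build the vocabulary and binarize the source (title) / target (attribute-value) files\nsubprocess.run(['fairseq-preprocess',\n    '--source-lang', 'title', '--target-lang', 'label', # hypothetical file suffixes\n    '--trainpref', 'data/train', '--validpref', 'data/valid', '--testpref', 'data/test',\n    '--destdir', 'data-bin'], check=True)\n\n# convolutional Seq2Seq (Gehring et al., 2017) with the reported hyper-parameters\nsubprocess.run(['fairseq-train', 'data-bin', '--arch', 'fconv',\n    '--encoder-layers', '[(512, 3)] * 4', '--decoder-layers', '[(512, 3)] * 3',\n    '--optimizer', 'nag', '--lr', '0.25', '--clip-norm', '0.1',\n    '--dropout', '0.2', '--max-tokens', '4000', '--max-epoch', '15',\n    '--save-dir', 'checkpoints'], check=True)\n\n# decode with a beam size of 5\nsubprocess.run(['fairseq-generate', 'data-bin',\n    '--path', 'checkpoints/checkpoint_best.pt', '--beam', '5'], check=True)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},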
{
"text": "We tested our Text-to-Text modeling approach for attributes prediction against BERT and ULM-FiT models, which have both been shown to be highly beneficial for multiple text classification tasks (Howard and Ruder, 2018; Devlin et al., 2018) . Apart from their past success, we also selected BERT and ULMFiT because they allowed us to test two different types of pre-training and fine-tuning approaches, as described below. For the multi-classification BERT model (Devlin et al., 2018) , we used the FastBert library 2 which is based on HuggingFace (Wolf et al., 2019) . The model that we fine-tuned was bert-base-uncased which includes 110 million parameters, 12 encoder layers consisting of 12 attention heads per layer and 768 hidden units. Fine-tuning was done for a maximum of 3 epochs with a batch size of 16, learning rate of 5e-5, a maximum sequence length of 128, a LAMB optimizer (You et al., 2019; Lan et al., 2019) and using 4 GPUs.",
"cite_spans": [
{
"start": 194,
"end": 218,
"text": "(Howard and Ruder, 2018;",
"ref_id": "BIBREF10"
},
{
"start": 219,
"end": 239,
"text": "Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 462,
"end": 483,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF6"
},
{
"start": 547,
"end": 566,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF35"
},
{
"start": 888,
"end": 906,
"text": "(You et al., 2019;",
"ref_id": "BIBREF37"
},
{
"start": 907,
"end": 924,
"text": "Lan et al., 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
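{
"text": "A minimal fine-tuning sketch written directly against HuggingFace Transformers rather than the FastBert wrapper we actually used; the toy batch, the label id and the AdamW optimizer (a stand-in for LAMB, which PyTorch does not ship) are illustrative assumptions:\n\nimport torch\nfrom transformers import BertTokenizerFast, BertForSequenceClassification\n\nNUM_LABELS = 800 # top attribute-value combinations, flattened to single-token labels\n\ntokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')\nmodel = BertForSequenceClassification.from_pretrained('bert-base-uncased', num_labels=NUM_LABELS)\n\n# one toy batch: titles -> flattened 'attribute:value' class ids\nenc = tokenizer(['ray-ban g-15 aviator black frame black classic 58mm'],\n    truncation=True, max_length=128, padding=True, return_tensors='pt')\nlabels = torch.tensor([42]) # hypothetical class id for 'frame_color:black'\n\noptimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)\nloss = model(**enc, labels=labels).loss\nloss.backward()\noptimizer.step()",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},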
{
"text": "Next, for the multi-classification ULMFiT (Howard and Ruder, 2018) we used eBay's title corpus to fine-tune an English language model (LM) with an AWD-LSTM architecture (Merity et al., 2017a) , which is an LSTM model with tuned dropout hyper-parameters that consists of an embedding size of 400, 3 layers and 1150 hidden activations per layer which were pre-trained on the Wikitext-103 dataset (Merity et al., 2017b) and downloaded from fast.ai 3 . The LM fine-tuning was done using the same data that is described in section 3.1, with a batch size of 64, a dropout set to 0.5, for 2 epochs using one cycle policy (Smith and Topin, 2019) and with a maximum learning rate of 1e-2 and 1e-3 for each on a single GPU. Next, a classifier model was trained while using the finetuned LM as an encoder, with a batch size of 64, for 3 epochs on 4 GPUs, using one cycle policy, with a discriminative layer training and gradual unfreezing (Howard and Ruder, 2018) . During the first epoch only the last layer was fine-tuned, with a maximum learning rate of 1e-2. For the second epoch we fine-tuned the last two layer groups, with a maximum learning rate ranging between 2.5e-3 and 5e-3, and for the last epoch we fine-tuned all of the layers with a maximum learning rate ranging between 2e-5 and 2e-3. The labels for both BERT and ULMFiT were represented as a single token (see section 3.1 above). We also trained a multi-classification model for both, instead of a multi-label one, since we saw that the latter performed significantly worse.",
"cite_spans": [
{
"start": 169,
"end": 191,
"text": "(Merity et al., 2017a)",
"ref_id": "BIBREF21"
},
{
"start": 394,
"end": 416,
"text": "(Merity et al., 2017b)",
"ref_id": "BIBREF22"
},
{
"start": 614,
"end": 637,
"text": "(Smith and Topin, 2019)",
"ref_id": "BIBREF32"
},
{
"start": 928,
"end": 952,
"text": "(Howard and Ruder, 2018)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
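{
"text": "The two-stage ULMFiT recipe above translates almost line-for-line into the fastai API. A sketch, assuming a pandas DataFrame with 'title' and 'label' columns; the data-loading calls and column names are our own illustrative choices:\n\nimport pandas as pd\nfrom fastai.text.all import *\n\ndf = pd.read_csv('titles.csv') # assumed columns: 'title', 'label'\n\n# stage 1: fine-tune the Wikitext-103 AWD-LSTM language model on the title corpus\ndls_lm = TextDataLoaders.from_df(df, text_col='title', is_lm=True, bs=64)\nlm = language_model_learner(dls_lm, AWD_LSTM, drop_mult=0.5)\nlm.fit_one_cycle(2, 1e-2) # the paper used max LRs of 1e-2 and 1e-3 for the two epochs\nlm.save_encoder('titles_enc')\n\n# stage 2: classifier on top of the fine-tuned encoder, with gradual unfreezing\n# and discriminative learning rates\ndls_clf = TextDataLoaders.from_df(df, text_col='title', label_col='label', bs=64)\nclf = text_classifier_learner(dls_clf, AWD_LSTM, drop_mult=0.5)\nclf.load_encoder('titles_enc')\nclf.fit_one_cycle(1, 1e-2) # epoch 1: last layer only\nclf.freeze_to(-2)\nclf.fit_one_cycle(1, slice(2.5e-3, 5e-3)) # epoch 2: last two layer groups\nclf.unfreeze()\nclf.fit_one_cycle(1, slice(2e-5, 2e-3)) # epoch 3: all layers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},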
{
"text": "All models were trained on data from eBay's Electronics and Fashion verticals as described at 2 https://github.com/kaushaltrivedi/fast-bert 3 https://docs.fast.ai/index.html Section 3.1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Training",
"sec_num": "3.2"
},
{
"text": "As commonly used in similar ranking tasks, we computed Precision at k (Prec@k) and normalized Discounted Cumulative Gain at k (nDCG@k or N@k) for model evaluation. Prec@k is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Prec@k = 1 k k l=1 y rank(l)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "Where rank(l) is the index of the l-th highest predicted label and y \u2208 {0, 1} L is the true binary vector. nDCG@k is defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "DCG@k = k i=1 rel i log(i + 1) iDCG@k = |REL k | i=1 rel i log(i + 1) nDCG@k =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
{
"text": "DCG@k iDCG@k Where rel i is the relevance of the result at position i and REL k represents the list of relevant documents (ordered by their relevance) in the corpus up to position k. The relevance score of each attribute-value pair per listing title is defined as the number of times it was used by buyers to filter the results, prior of clicking that specific listing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},
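{
"text": "These definitions translate directly into code. A minimal sketch; the container types are our own representational choice, and we use a base-2 logarithm where the formulas above write log(i + 1):\n\nimport math\n\ndef prec_at_k(pred, gold, k):\n    # pred: labels ordered by model score; gold: set of true labels\n    return sum(p in gold for p in pred[:k]) / k\n\ndef ndcg_at_k(pred, rel, k):\n    # rel maps each label to its relevance: how often buyers filtered by it\n    # before clicking the listing (0 for labels never used)\n    dcg = sum(rel.get(p, 0) / math.log2(i + 2) for i, p in enumerate(pred[:k]))\n    ideal = sorted(rel.values(), reverse=True)[:k]\n    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))\n    return dcg / idcg if idcg > 0 else 0.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.1"
},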
{
"text": "To compare the performance of the different models we computed the ranking accuracy of each using historic attribute-value pairs that were used by buyers to filter their results, prior of clicking a specific listing. As seen in Table 1 , the Seq2Seqhierarchical model outperformed the other models in most of the test criteria. Interestingly, both of the Seq2Seq models (single and hierarchical) outperformed BERT and ULMFiT in almost all of the metrics, which demonstrates the advantage of using a Text-to-Text frameworks in both hierarchical and non-hierarchical learning tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 228,
"end": 235,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Quantitative Evaluation",
"sec_num": "4.2"
},
{
"text": "In theory, the results from Table 1 could be purely due to better attribute value prediction by the Seq2Seq-hierarchical model, and not necessarily because of better attribute ranking. Therefore, to further examine the robustness of these results, we disconnected the ranking evaluation from the value prediction one, and tested the above models just on attribute ranking. To conduct this comparison we split the models' concatenated attribute-value predictions to attribute and attribute value predictions (i.e. {\"Color:Black\"} was split to \"color\" and \"black\") and re-computed the evaluation metrics only on the former. As seen in Table 2 , the models' performance-ranking is overall consistent with previous experiments, with the Seq2Seq-hierarchical model also outperforming for the attribute ranking task.",
"cite_spans": [],
"ref_spans": [
{
"start": 633,
"end": 640,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Quantitative Evaluation",
"sec_num": "4.2"
},
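{
"text": "The attribute-only re-evaluation amounts to stripping the value from each prediction before recomputing the metrics. A minimal sketch, assuming the flattened 'attribute:value' token format from Section 3.1 (the helper name is our own):\n\ndef attribute_only(predictions):\n    # 'color:black' -> 'color'; for hierarchical two-token outputs\n    # ('color black') we would instead keep only the first token\n    return [p.split(':', 1)[0] for p in predictions]\n\nprint(attribute_only(['color:black', 'material:leather'])) # ['color', 'material']",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Evaluation",
"sec_num": "4.2"
},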
{
"text": "In addition, from a pure technical perspective, Seq2Seq was the fastest model to train (x15 faster than BERT and x5 faster than ULMFiT), did not require any pre-trained models, and consisted of only a single training step (unlike ULMFiT, which also required an LM fine-tune step).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Evaluation",
"sec_num": "4.2"
},
{
"text": "To get a sense of the magnitude of impact that the Seq2Seq-hierarchical model could have on eBay's on-site experience, we compared our results to those from eBay's Attribute Extraction Service (AES). AES is a production system that has been highly optimized over the years, and is in charge of automatically extracting attribute-value pairs from titles that sellers provide. Currently it is mostly reliant on extensively curated rules that got added and optimized over the years. To compare the performance of the two methods we used around 15K attribute-value pairs that were used by buyers to filter search results and to discover a specific listing from the Electronics and Fashion verticals. For each we computed whether the attribute extraction method could automatically provide the relevant attribute-value given only the listing's title. This count was later divided by the number of attributevalue pairs to compute a percentage. As seen in Table 3, Seq2Seq-hierarchical led to an overall 33.2% improvement in relevant attribute-value extraction compared to AES. Table 3 : A comparison between eBay's current production system (AES) and the Seq2Seq-hierarchical (S2S-hier) model for the task of relevant attribute-value extraction. The number of attribute-value pairs which were used for the evaluation is denoted as N. For each method we show the percentage of cases that the relevant attribute-value pairs were extracted correctly (as defined by buyer behaviour). ",
"cite_spans": [],
"ref_spans": [
{
"start": 1071,
"end": 1078,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Quantitative Evaluation",
"sec_num": "4.2"
},
{
"text": "Since Seq2Seq-hierarchical outperformed the other models (Table 1) , we focused our qualitative evaluation only on its predictions. Table 4 shows examples of the top predictions of five different listings, ordered by the model likelihood score (descending order).",
"cite_spans": [],
"ref_spans": [
{
"start": 57,
"end": 66,
"text": "(Table 1)",
"ref_id": "TABREF0"
},
{
"start": 132,
"end": 139,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Qualitative Evaluation",
"sec_num": "4.3"
},
{
"text": "As seen in Table 4 , {\"Brand\":\"Ray-Ban\"} was only the 3rd most important attribute-value pair picked by the model for the title \"Ray-Ban G-15 Aviator Black Frame Black Classic 58mm\". This can be counterintuitive from a domain expertise standpoint, since the latter is clearly a ",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Qualitative Evaluation",
"sec_num": "4.3"
},
{
"text": "Ray-Ban G-15 Aviator Black Frame Black Classic {\"Frame Color\":\"Black\", \"Lens Color\": \"Black\", \"Brand\": \"Ray-Ban\"} Asus Strix Gaming LGA1151 DDR4 Motherboard {\"Form Factor\": \"microATX\", \"Compatible CPU Brand\": \"Intel\"} DJI Phantom 4 Aerial UAV Drone Quadcopter {\"Camera\": \"Included\", \"Features\": \"4K HD Video Recording\"} Nike Air Max Shoes Men's Size 7-9 {\"US Shoe Size (men's)\": [8, 8.5, 9, 7.5, 7 ]} Men's Slim Fit Coat Jean Denim Jacket Size S-XL {\"Size (men's)\": [\"M\", \"L\", \"XL\", \"S\"]} more differential attribute-value pair for the category of sunglasses than, for example, {\"Frame Color\":\"Black\"}, which was picked first. However, looking at a sample of the search queries that were prior to the filtering steps (not shown here), we see that 93% of them already contained some variation of the term \"Ray-Ban\" (e.g. \"rayban sunglasses\", \"ray ban sunglasses aviator\", \"ray-ban aviator\"). Therefore most of the search engine's out-of-thebox results already included \"Ray-Ban\" branded sunglasses, which mitigated the need to further filter by brand. In contrast, only 2% of the queries mentioned the color \"black\", which explains the frequent buyer behavior of further filtering the results by color after seeing the search results (which included sunglasses from various colors). Such ranking results are in-line with our solution guideline to identify the top attributes that are expected to be used in the listing's buyer-discovery-journey, and therefore, help maximize the listing's chances to be discovered.",
"cite_spans": [
{
"start": 379,
"end": 397,
"text": "[8, 8.5, 9, 7.5, 7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Title Predictions",
"sec_num": null
},
{
"text": "In Table 4 we provide further prediction examples which show that our Text-to-Text model does not require the reported top attribute values to be included in the input title. In addition, we evaluated the model's predictions in cases where attributes can include multiple values, like with 'size', and show that the model successfully extracts all of the relevant values from the ranges that appear in the titles. Note that the different likelihood prediction for each size value can serve as proxy to its popularity among buyers.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Title Predictions",
"sec_num": null
},
{
"text": "In this paper we demonstrate using filtering behavior data to predict the most relevant listing attribute-value pairs, and the superiority of using a Text-to-Text approach for modeling a hierarchical multi-label text classification (HMLTC) task that combines ranking. We identify several key advantages of this solution framework: First, acquiring the training data we use is a scalable and inexpensive process which does not require manual labor. Therefore, the volume of data collected in high-volume websites is likely to be sufficient for training deep-learning-based models such as Seq2Seq. Second, unlike methods such as NER, using a Text-to-Text approach enables to identify attribute-value pairs that do not necessarily exist in the title, to extract multiple values per attribute (Table 4 ) and to separately analyze the importance of every possible attribute-value pair. Third, as to the choice of hierarchical modeling, this allows us to separately analyze the likelihood probabilities of the expected attributes and values, which further generalizes the model for additional downstream tasks.",
"cite_spans": [],
"ref_spans": [
{
"start": 789,
"end": 797,
"text": "(Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "As for classifiers performance, the Seq2Seq models provided better results for most metrics compared to BERT and ULMFiT. Unlike the latter two, the Seq2Seq models didn't use a Transfer Learning approach that leverages a pre-trained Language Models. We suspect that the relatively short length of listing titles (12 tokens on average), combined with the unique jargon in eBay's data, which is hard to fully capture in the fine-tune process, might have negatively impacted the performance of BERT and ULMFiT.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Regardless to the classifier of choice, we keep in mind that the model's attribute ranking is clearly affected by the set of filtering options that were presented to the buyers on the site, and thus, cannot find attribute pairs that have not been historically used for filtering. Therefore, to avoid a closed feedback loop scenario, we would avoid using the model's attribute ranking results as an input to decide these filtering options. Also, to further increase the quality of the attribute ranking we can use a training data that consists of a sample of buyers that were served with a random (or partly random) list of filtering options. Nonetheless, even without this sample, the model can still provide sellers with meaningful information about their potential buyers' current attribute priority ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "or service; for simplicity we'll continue with the listing notation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Hierarchical document categorization with support vector machines",
"authors": [
{
"first": "Lijuan",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management, CIKM '04",
"volume": "",
"issue": "",
"pages": "78--87",
"other_ids": {
"DOI": [
"10.1145/1031171.1031186"
]
},
"num": null,
"urls": [],
"raw_text": "Lijuan Cai and Thomas Hofmann. 2004. Hierarchi- cal document categorization with support vector ma- chines. In Proceedings of the Thirteenth ACM Inter- national Conference on Information and Knowledge Management, CIKM '04, page 78-87, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical classification: Combining bayes with svm",
"authors": [
{
"first": "Nicol\u00f2",
"middle": [],
"last": "Cesa-Bianchi",
"suffix": ""
},
{
"first": "Claudio",
"middle": [],
"last": "Gentile",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Zaniboni",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23rd International Conference on Machine Learning, ICML '06",
"volume": "",
"issue": "",
"pages": "177--184",
"other_ids": {
"DOI": [
"10.1145/1143844.1143867"
]
},
"num": null,
"urls": [],
"raw_text": "Nicol\u00f2 Cesa-Bianchi, Claudio Gentile, and Luca Zani- boni. 2006. Hierarchical classification: Combining bayes with svm. In Proceedings of the 23rd Interna- tional Conference on Machine Learning, ICML '06, page 177-184, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Secseq: Semantic coding for sequence-to-sequence based extreme multi-label classification",
"authors": [
{
"first": "Wei-Cheng",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Hsiang-Fu",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Inderjit",
"middle": [
"S"
],
"last": "Dhillon",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei-Cheng Chang, Hsiang-Fu Yu, Inderjit S. Dhillon, and Yiming Yang. 2018. Secseq: Semantic coding for sequence-to-sequence based extreme multi-label classification.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Extracting semantic information for e-commerce",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Charron",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Hirate",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Purcell",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Rezk",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "273--290",
"other_ids": {
"DOI": [
"10.1007/978-3-319-46547-0_27"
]
},
"num": null,
"urls": [],
"raw_text": "Bruno Charron, Yu Hirate, David Purcell, and Martin Rezk. 2016. Extracting semantic information for e-commerce. pages 273-290.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Language modeling with gated convolutional networks",
"authors": [
{
"first": "N",
"middle": [],
"last": "Yann",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. 2016. Language modeling with gated con- volutional networks.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A hybrid model words-driven approach for web product duplicate detection",
"authors": [
{
"first": "Bakker",
"middle": [],
"last": "Marnix De",
"suffix": ""
}
],
"year": 2013,
"venue": "CAiSE",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marnix de Bakker, Flavius Frasincar, and Damir Vandic. 2013. A hybrid model words-driven approach for web product duplicate detection. In CAiSE.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In Proc. of ICML.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text mining for product attribute extraction",
"authors": [
{
"first": "Rayid",
"middle": [],
"last": "Ghani",
"suffix": ""
},
{
"first": "Katharina",
"middle": [],
"last": "Probst",
"suffix": ""
},
{
"first": "Yan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Marko",
"middle": [],
"last": "Krema",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Fano",
"suffix": ""
}
],
"year": 2006,
"venue": "SIGKDD Explor. Newsl",
"volume": "8",
"issue": "1",
"pages": "41--48",
"other_ids": {
"DOI": [
"10.1145/1147234.1147241"
]
},
"num": null,
"urls": [],
"raw_text": "Rayid Ghani, Katharina Probst, Yan Liu, Marko Krema, and Andrew Fano. 2006. Text mining for prod- uct attribute extraction. SIGKDD Explor. Newsl., 8(1):41-48.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Gotrees: Predicting go associations from protein domain composition using decision trees",
"authors": [
{
"first": "Boris",
"middle": [],
"last": "Hayete",
"suffix": ""
},
{
"first": "Jadwiga",
"middle": [],
"last": "Bienkowska",
"suffix": ""
}
],
"year": 2005,
"venue": "Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing",
"volume": "10",
"issue": "",
"pages": "127--165",
"other_ids": {
"DOI": [
"10.1142/9789812702456_0013"
]
},
"num": null,
"urls": [],
"raw_text": "Boris Hayete and Jadwiga Bienkowska. 2005. Gotrees: Predicting go associations from protein domain com- position using decision trees. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, 10:127-38.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Universal language model fine-tuning for text classification",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "328--339",
"other_ids": {
"DOI": [
"10.18653/v1/P18-1031"
]
},
"num": null,
"urls": [],
"raw_text": "Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Effective use of word order for text categorization with convolutional neural networks",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "103--112",
"other_ids": {
"DOI": [
"10.3115/v1/N15-1011"
]
},
"num": null,
"urls": [],
"raw_text": "Rie Johnson and Tong Zhang. 2015. Effective use of word order for text categorization with convolutional neural networks. In Proceedings of the 2015 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 103-112, Denver, Col- orado. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed word representations improve NER for e-commerce",
"authors": [
{
"first": "Mahesh",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Hart",
"suffix": ""
},
{
"first": "Mirko",
"middle": [],
"last": "Vogel",
"suffix": ""
},
{
"first": "Jean-David",
"middle": [],
"last": "Ruvini",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {
"DOI": [
"10.3115/v1/W15-1522"
]
},
"num": null,
"urls": [],
"raw_text": "Mahesh Joshi, Ethan Hart, Mirko Vogel, and Jean-David Ruvini. 2015. Distributed word representations im- prove NER for e-commerce. In Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pages 160-167, Denver, Col- orado. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Matching unstructured product offers to structured product specifications",
"authors": [
{
"first": "Anitha",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Inmar",
"suffix": ""
},
{
"first": "Rakesh",
"middle": [],
"last": "Givoni",
"suffix": ""
},
{
"first": "Ariel",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Fuxman",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '11",
"volume": "",
"issue": "",
"pages": "404--412",
"other_ids": {
"DOI": [
"10.1145/2020408.2020474"
]
},
"num": null,
"urls": [],
"raw_text": "Anitha Kannan, Inmar E. Givoni, Rakesh Agrawal, and Ariel Fuxman. 2011. Matching unstructured product offers to structured product specifications. In Pro- ceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, KDD '11, page 404-412, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Hierarchically classifying documents using very few words",
"authors": [
{
"first": "Daphne",
"middle": [],
"last": "Koller",
"suffix": ""
},
{
"first": "Mehran",
"middle": [],
"last": "Sahami",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Fourteenth International Conference on Machine Learning, ICML '97",
"volume": "",
"issue": "",
"pages": "170--178",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daphne Koller and Mehran Sahami. 1997. Hierarchi- cally classifying documents using very few words. In Proceedings of the Fourteenth International Confer- ence on Machine Learning, ICML '97, page 170-178, San Francisco, CA, USA. Morgan Kaufmann Pub- lishers Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sememe prediction: Learning semantic knowledge from unstructured textual wiki descriptions",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Xuancheng",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Damai",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Yunfang",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Li, Xuancheng Ren, Damai Dai, Yunfang Wu, Houfeng Wang, and Xu Sun. 2018. Sememe pre- diction: Learning semantic knowledge from unstruc- tured textual wiki descriptions.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Semantic-unit-based dilated convolution for multi-label text classification",
"authors": [
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junyang Lin, Qi Su, Pengcheng Yang, Shuming Ma, and Xu Sun. 2018. Semantic-unit-based dilated con- volution for multi-label text classification.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Deep recurrent neural networks for product attribute extraction in ecommerce",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Bodhisattwa Prasad Majumder",
"suffix": ""
},
{
"first": "Abhinandan",
"middle": [],
"last": "Subramanian",
"suffix": ""
},
{
"first": "Shreyansh",
"middle": [],
"last": "Krishnan",
"suffix": ""
},
{
"first": "Ajinkya",
"middle": [],
"last": "Gandhi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "More",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bodhisattwa Prasad Majumder, Aditya Subramanian, Abhinandan Krishnan, Shreyansh Gandhi, and Ajinkya More. 2018. Deep recurrent neural networks for product attribute extraction in ecommerce.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Hierarchical text classification with reinforced label assignment",
"authors": [
{
"first": "Yuning",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Xiang",
"middle": [],
"last": "Ren",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "445--455",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1042"
]
},
"num": null,
"urls": [],
"raw_text": "Yuning Mao, Jingjing Tian, Jiawei Han, and Xiang Ren. 2019. Hierarchical text classification with reinforced label assignment. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Pro- cessing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 445-455, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Shallow semantic parsing of product offering titles (for better automatic hyperlink insertion)",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Melli",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14",
"volume": "",
"issue": "",
"pages": "1670--1678",
"other_ids": {
"DOI": [
"10.1145/2623330.2623343"
]
},
"num": null,
"urls": [],
"raw_text": "Gabor Melli. 2014. Shallow semantic parsing of prod- uct offering titles (for better automatic hyperlink in- sertion). In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '14, page 1670-1678, New York, NY, USA. Association for Computing Machin- ery.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Regularizing and optimizing lstm language models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2017a. Regularizing and optimizing lstm language models.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pointer sentinel mixture models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Merity",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017b. Pointer sentinel mixture models.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attribute extraction from product titles in ecommerce",
"authors": [
{
"first": "Ajinkya",
"middle": [],
"last": "More",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ajinkya More. 2016. Attribute extraction from product titles in ecommerce. CoRR, abs/1608.04670.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Maximizing subset accuracy with recurrent neural networks in multilabel classification",
"authors": [
{
"first": "Jinseok",
"middle": [],
"last": "Nam",
"suffix": ""
},
{
"first": "Eneldo",
"middle": [],
"last": "Loza Menc\u00eda",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hyunwoo",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "F\u00fcrnkranz",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5413--5423",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinseok Nam, Eneldo Loza Menc\u00eda, Hyunwoo J Kim, and Johannes F\u00fcrnkranz. 2017. Maximizing subset accuracy with recurrent neural networks in multi- label classification. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 30, pages 5413-5423. Curran Associates, Inc.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A method for solving the convex programming problem with convergence rate o(1/k 2 )",
"authors": [
{
"first": "Y",
"middle": [
"E"
],
"last": "Nesterov",
"suffix": ""
}
],
"year": 1983,
"venue": "Dokl. Akad. Nauk SSSR",
"volume": "269",
"issue": "",
"pages": "543--547",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. E. NESTEROV. 1983. A method for solving the convex programming problem with convergence rate o(1/k 2 ). Dokl. Akad. Nauk SSSR, 269:543-547.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "fairseq: A fast, extensible toolkit for sequence modeling",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/N19-4009"
]
},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of the 2019 Con- ference of the North American Chapter of the Associa- tion for Computational Linguistics (Demonstrations), pages 48-53, Minneapolis, Minnesota. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Large-scale hierarchical text classification with recursively regularized deep graph-cnn",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Jianxin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yaopeng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mengjiao",
"middle": [],
"last": "Bao",
"suffix": ""
},
{
"first": "Lihong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yangqiu",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Qiang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 World Wide Web Conference, WWW '18",
"volume": "",
"issue": "",
"pages": "1063--1072",
"other_ids": {
"DOI": [
"10.1145/3178876.3186005"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Peng, Jianxin Li, Yu He, Yaopeng Liu, Mengjiao Bao, Lihong Wang, Yangqiu Song, and Qiang Yang. 2018. Large-scale hierarchical text classification with recursively regularized deep graph-cnn. In Pro- ceedings of the 2018 World Wide Web Conference, WWW '18, page 1063-1072, Republic and Canton of Geneva, CHE. International World Wide Web Con- ferences Steering Committee.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Extracting attribute-value pairs from product specifications on the web",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Petrovski",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the International Conference on Web Intelligence, WI '17",
"volume": "",
"issue": "",
"pages": "558--565",
"other_ids": {
"DOI": [
"10.1145/3106426.3106449"
]
},
"num": null,
"urls": [],
"raw_text": "Petar Petrovski and Christian Bizer. 2017. Extracting attribute-value pairs from product specifications on the web. In Proceedings of the International Con- ference on Web Intelligence, WI '17, page 558-565, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Learning regular expressions for the extraction of product attributes from e-commerce microdata",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Petrovski",
"suffix": ""
},
{
"first": "Volha",
"middle": [],
"last": "Bryl",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Second International Conference on Linked Data for Information Extraction",
"volume": "1267",
"issue": "",
"pages": "45--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Petar Petrovski, Volha Bryl, and Christian Bizer. 2014. Learning regular expressions for the extraction of product attributes from e-commerce microdata. In Proceedings of the Second International Conference on Linked Data for Information Extraction -Volume 1267, LD4IE'14, page 45-54, Aachen, DEU. CEUR- WS.org.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Exploring the limits of transfer learning with a unified text-to",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Enriching product ads with metadata from html annotations",
"authors": [
{
"first": "Petar",
"middle": [],
"last": "Ristoski",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Mika",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 13th International Conference on The Semantic Web. Latest Advances and New Domains -Volume",
"volume": "9678",
"issue": "",
"pages": "151--167",
"other_ids": {
"DOI": [
"10.1007/978-3-319-34129-3_10"
]
},
"num": null,
"urls": [],
"raw_text": "Petar Ristoski and Peter Mika. 2016. Enriching product ads with metadata from html annotations. In Pro- ceedings of the 13th International Conference on The Semantic Web. Latest Advances and New Domains -Volume 9678, page 151-167, Berlin, Heidelberg. Springer-Verlag.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Superconvergence: very fast training of neural networks using large learning rates",
"authors": [
{
"first": "Leslie",
"middle": [
"N"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Nicholay",
"middle": [],
"last": "Topin",
"suffix": ""
}
],
"year": 2019,
"venue": "Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications",
"volume": "11006",
"issue": "",
"pages": "369--386",
"other_ids": {
"DOI": [
"10.1117/12.2520589"
]
},
"num": null,
"urls": [],
"raw_text": "Leslie N. Smith and Nicholay Topin. 2019. Super- convergence: very fast training of neural networks using large learning rates. In Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, volume 11006, pages 369 -386. Inter- national Society for Optics and Photonics, SPIE.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Hierarchical text classification and evaluation",
"authors": [
{
"first": "Aixin",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Ee-Peng",
"middle": [],
"last": "Lim",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 IEEE International Conference on Data Mining, ICDM '01",
"volume": "",
"issue": "",
"pages": "521--528",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aixin Sun and Ee-Peng Lim. 2001. Hierarchical text classification and evaluation. In Proceedings of the 2001 IEEE International Conference on Data Mining, ICDM '01, page 521-528, USA. IEEE Computer Society.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Hierarchical multi-label classification networks",
"authors": [
{
"first": "Jonatas",
"middle": [],
"last": "Wehrmann",
"suffix": ""
},
{
"first": "Ricardo",
"middle": [],
"last": "Cerri",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Barros",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "5075--5084",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonatas Wehrmann, Ricardo Cerri, and Rodrigo Bar- ros. 2018. Hierarchical multi-label classification net- works. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 5075-5084, Stockholmsm\u00e4ssan, Stockholm Sweden. PMLR.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Huggingface's transformers: State-of-the-art natural language processing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Lysandre",
"middle": [],
"last": "Debut",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Moi",
"suffix": ""
},
{
"first": "Pierric",
"middle": [],
"last": "Cistac",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rault",
"suffix": ""
},
{
"first": "R'emi",
"middle": [],
"last": "Louf",
"suffix": ""
},
{
"first": "Morgan",
"middle": [],
"last": "Funtowicz",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Brew",
"suffix": ""
}
],
"year": 2019,
"venue": "ArXiv",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R'emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Sgm: Sequence generation model for multi-label classification",
"authors": [
{
"first": "Pengcheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Shuming",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pengcheng Yang, Xu Sun, Wei Li, Shuming Ma, Wei Wu, and Houfeng Wang. 2018. Sgm: Sequence gen- eration model for multi-label classification.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Large batch optimization for deep learning: Training bert in 76 minutes",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "You",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Sashank",
"middle": [],
"last": "Reddi",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Hseu",
"suffix": ""
},
{
"first": "Sanjiv",
"middle": [],
"last": "Kumar",
"suffix": ""
},
{
"first": "Srinadh",
"middle": [],
"last": "Bhojanapalli",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Demmel",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Keutzer",
"suffix": ""
},
{
"first": "Cho-Jui",
"middle": [],
"last": "Hsieh",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training bert in 76 minutes.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "An example of a typical buyer search session.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"content": "<table><tr><td colspan=\"3\">Data Metric BERT ULM</td><td>S2S</td><td>S2S -hier</td></tr><tr><td>P@1 P@3 Elec P@5 N@1 N@3 N@5</td><td>54 33.3 24.4 50.6 56.7 60.5</td><td>59.4 37.2 26.9 56.2 63.4 66.7</td><td>61.6 39.4 28.7 58.1 67.2 71.1</td><td>62.7 40.1 29.2 59.5 68.3 72.1</td></tr><tr><td>P@1 P@3 Fash P@5 N@1 N@3 N@5</td><td>61 33.8 23 59.4 64.7 67.7</td><td>62.8 35.4 24.1 61.2 67.7 70.9</td><td>62.8 36.2 25.2 61.2 68.8 72.6</td><td>63.1 36.3 25.2 61.5 69 72.6</td></tr></table>",
"text": "Model performance measured by Precision@k (P@k) and nDCG@k (N@k) comparison of the four models -ULMFiT (ULM), BERT, Seq2Seq-single (S2S) and Seq2Seq-hierarchical (S2S-hier) for the Electronics (Elec) and Fashion (Fash) verticals. Best results are marked in bold.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF1": {
"content": "<table><tr><td colspan=\"4\">Dataset Metric BERT ULM Attr Attr</td><td>S2S Attr</td><td>S2S -hier Attr</td></tr><tr><td>Elec</td><td>P@1 P@3 P@5 N@1 N@3 N@5</td><td>92.4 74 51.8 78.9 82.8 83.1</td><td>94 76 53.8 81.9 85.2 85.6</td><td>93 76.2 56 79.4 84.3 85.7</td><td>94.6 78 57.8 82.1 86.6 87.8</td></tr><tr><td>Fash</td><td>P@1 P@3 P@5 N@1 N@3 N@5</td><td>95.7 61.9 40.1 86.2 88.3 87.7</td><td>95.5 61.2 40.2 85.9 88.5 88.4</td><td>95.5 63.2 43.2 87.2 89.5 90.2</td><td>96 63.6 43.5 88 90.3 90.7</td></tr></table>",
"text": "Model performance comparison solely for the attributes ranking task. Best results are marked in bold.",
"num": null,
"html": null,
"type_str": "table"
},
"TABREF3": {
"content": "<table/>",
"text": "Example of Seq2Seq-hierarchical prediction, including values which are not explicitly mentioned in the title and multi-values attributes. Values are ordered by their importance rank.",
"num": null,
"html": null,
"type_str": "table"
}
}
}
}