{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T10:33:52.938506Z" }, "title": "Campaign Keyword Augmentation via Generative Methods", "authors": [ { "first": "Haoran", "middle": [], "last": "Shi", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon.com Inc Seattle", "location": { "settlement": "Washington", "country": "USA" } }, "email": "haoransh@amazon.com" }, { "first": "Zhibiao", "middle": [], "last": "Rao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon.com Inc Seattle", "location": { "settlement": "Washington", "country": "USA" } }, "email": "zhibiar@amazon.com" }, { "first": "Yongning", "middle": [], "last": "Wu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon.com Inc Seattle", "location": { "settlement": "Washington", "country": "USA" } }, "email": "yongning@amazon.com" }, { "first": "Zuohua", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon.com Inc Seattle", "location": { "settlement": "Washington", "country": "USA" } }, "email": "zhzhang@amazon.com" }, { "first": "Chu", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Amazon.com Inc Seattle", "location": { "settlement": "Washington", "country": "USA" } }, "email": "chuwang@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating highquality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. 
In this paper, we propose a keyword augmentation method based on a generative seq2seq model and a trie-based search mechanism, which is able to generate high-quality keywords for any product or product list. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experimental results demonstrate that our method is able to generate more valid keywords, which can serve as an efficient addition to advertiser-selected keywords.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Keyword augmentation is a fundamental problem for sponsored search modeling and business. Machine-generated keywords can be recommended to advertisers for better campaign discoverability as well as used as features for sourcing and ranking models. Generating high-quality keywords is difficult, especially for cold campaigns with limited or even no historical logs; and the industry trend of including multiple products in a single ad campaign is making the problem more challenging. In this paper, we propose a keyword augmentation method based on a generative seq2seq model and a trie-based search mechanism, which is able to generate high-quality keywords for any product or product list. We conduct human annotations, offline analysis, and online experiments to evaluate the performance of our method against benchmarks in terms of augmented keyword quality as well as lifted ad exposure. The experimental results demonstrate that our method is able to generate more valid keywords, which can serve as an efficient addition to advertiser-selected keywords.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Sponsored search has proved to be an efficient and inspiring way of connecting shoppers with interesting products. 
Advertisers have the freedom to provide a list of targeting keywords with associated bidding prices to the ad platform, so that their ad campaigns can be matched to shopper queries either lexically or semantically. The quantity and quality of targeting keywords are fundamental to the performance of the ad campaign: insufficient keywords can hardly bring campaigns enough exposure, while low-quality ones will match shopper queries with irrelevant ads, leading to low conversion and damage to the customer experience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Efficient and optimal keyword selection is challenging and time-consuming because it requires a deep understanding of the ad industry as well as the sponsored search platform. Furthermore, an ad campaign was traditionally designed for a single product, but ads with richer information have started to appear in recent years. Nowadays, an ad campaign can contain multiple products, brand stores, or even rich media content. Consequently, the keyword selection task becomes even more crucial and challenging for advertisers' campaign creation and management.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present an end-to-end machine learning solution to generate keywords for ad campaigns. The method applies to single-product campaigns as well as campaigns with any number of products. It relies only on product information such as product titles, and is therefore effective on newly created campaigns without any historical performance logs. We conduct offline and online experiments on the proposed method and observe significant improvements over traditional statistical methods in terms of keyword quality. 
Specifically, we highlight our contributions as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We propose an end-to-end solution for keyword generation. It can be applied to recommending high-quality keywords to advertisers as well as to semantic augmentation for better ad exposure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The keyword generation method relies on product metadata rather than historical ad performance data. Therefore, the method applies to tail or newly created campaigns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 Our method is able to handle single-product campaigns as well as multi-product campaigns by leveraging the semantic meaning of each product in the latent space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 The quality and superiority of the generated keywords are validated by human audits, offline analysis, and online experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Considerable research has been devoted to keyword augmentation techniques because of their important applications in information retrieval, indexing, and digital library management. The majority of existing work focuses on processing documents with statistical information including term co-occurrence and frequency (Campos et al., 2020) . In particular, Rose et al. (2010) proposed RAKE to split the document into candidate phrases by word delimiters and calculate their scores with co-occurrence counts. Ravi et al. (2010) first applied a statistical machine translation model for keyword candidate generation and ranking. 
With the rapid development of deep learning models, neural machine translation has surpassed statistical translation on many benchmarks, where recurrent neural networks (RNNs) and gating mechanisms are popular building blocks for modeling sequence dependencies and alignments (Hochreiter and Schmidhuber, 1997; Cho et al., 2014) . However, extracting high-quality and diverse keywords from short documents like ad campaigns remains a difficult problem due to the lack of context. Query expansion for improved product or ad discovery, as an application of keyword augmentation, is crucial to e-commerce search engines and recommender systems. He et al. (2016) applies an LSTM architecture to rewrite queries into the web document index space. However, the long-tail distribution of the query space hinders the deployment of complicated generative models. It is well known that infrequent queries account for a large portion of daily e-commerce queries. In Lian et al. (2019) , a lightweight neural network for infrequent queries is trained, incurring even more engineering burden for deployment. That work also proposed using trie-based search to normalize decoding within the constrained semantic space, which is further investigated in Chen et al. (2020) .", "cite_spans": [ { "start": 319, "end": 340, "text": "(Campos et al., 2020)", "ref_id": null }, { "start": 358, "end": 376, "text": "Rose et al. (2010)", "ref_id": "BIBREF11" }, { "start": 508, "end": 526, "text": "Ravi et al. (2010)", "ref_id": "BIBREF10" }, { "start": 893, "end": 927, "text": "(Hochreiter and Schmidhuber, 1997;", "ref_id": "BIBREF6" }, { "start": 928, "end": 945, "text": "Cho et al., 2014)", "ref_id": "BIBREF4" }, { "start": 1257, "end": 1273, "text": "He et al. (2016)", "ref_id": "BIBREF5" }, { "start": 1567, "end": 1585, "text": "Lian et al. (2019)", "ref_id": "BIBREF7" }, { "start": 1857, "end": 1875, "text": "Chen et al. 
(2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Expanding advertiser bidding keywords is another growing research area. Qiao et al. (2017) applies keyword clustering and topic modeling to retrieve similar keywords and Zhou et al. (2019) conducts keywords expansion in the constrained domains through neural generative models. In addition, Zhang et al. (2014) formulates the keyword recommendation problem as a mixed integer optimization problem, where they collect candidate keywords whose relevance score to the ad group exceed a threshold and handle the keyword selection problem by maximizing revenue. Such methods rely on the quality of advertiser bidding keywords. Campaigns with sub-optimal or misused keywords may suffer significantly.", "cite_spans": [ { "start": 72, "end": 90, "text": "Qiao et al. (2017)", "ref_id": "BIBREF8" }, { "start": 170, "end": 188, "text": "Zhou et al. (2019)", "ref_id": "BIBREF13" }, { "start": 291, "end": 310, "text": "Zhang et al. (2014)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In this section, we present our products-to-keyword framework and algorithm for campaign keyword augmentation. The framework is compatible with any seq2seq components with encoders and decoders. Given an ad campaign C including a set of products {p 1 , p 2 , . . . , p n }, our goal is to generate a list of relevant keywords {k 1 , k 2 , . . . , k m }. We will describe how we generate keywords for each product first and later generalize to ad campaigns with multiple products.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "3" }, { "text": "We choose to use organic search click data for model training, which includes the pairs of queries and clicked products in search log. 
Compared to sponsored search data, it can guide the model to generate more keywords than the existing ads system, as shown in Lian et al. (2019) . We lowercase shopper queries and product titles, and then apply the pretrained T5 tokenizer (Raffel et al., 2020) for tokenization. Note that the vocabulary spaces for shopper queries and product titles are ever-growing, but the subword encoding space is stable. Therefore, subword tokenization is an efficient way to handle the out-of-vocabulary issue, which hurts the fluency of generated queries.", "cite_spans": [ { "start": 256, "end": 274, "text": "Lian et al. (2019)", "ref_id": "BIBREF7" }, { "start": 365, "end": 386, "text": "(Raffel et al., 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset and Preprocessing", "sec_num": "3.1" }, { "text": "In the following, we use X = [x_1, x_2, ..., x_L] to denote the tokenized product title of length L. Let \u03b8 be the trainable model parameters, and let Q = [q_0, q_1, q_2, ..., q_S] be the padded tokenized target query, where q_0 is the special start token and q_S the special end token. For training, we feed the model with the product title X and the first s query tokens", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modeling", "sec_num": "3.2" }, { "text": "Q