{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T02:12:41.094575Z"
},
"title": "ISA: An Intelligent Shopping Assistant",
"authors": [
{
"first": "Tuan",
"middle": [
"Manh"
],
"last": "Lai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Illinois at Urbana-Champaign",
"location": {}
},
"email": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Nedim",
"middle": [],
"last": "Lipka",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Despite the growth of e-commerce, brick-andmortar stores are still the preferred destinations for many people. In this paper, we present ISA, a mobile-based intelligent shopping assistant that is designed to improve shopping experience in physical stores. ISA assists users by leveraging advanced techniques in computer vision, speech processing, and natural language processing. An in-store user only needs to take a picture or scan the barcode of the product of interest, and then the user can talk to the assistant about the product. The assistant can also guide the user through the purchase process or recommend other similar products to the user. We take a data-driven approach in building the engines of ISA's natural language processing component, and the engines achieve good performance.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Despite the growth of e-commerce, brick-andmortar stores are still the preferred destinations for many people. In this paper, we present ISA, a mobile-based intelligent shopping assistant that is designed to improve shopping experience in physical stores. ISA assists users by leveraging advanced techniques in computer vision, speech processing, and natural language processing. An in-store user only needs to take a picture or scan the barcode of the product of interest, and then the user can talk to the assistant about the product. The assistant can also guide the user through the purchase process or recommend other similar products to the user. We take a data-driven approach in building the engines of ISA's natural language processing component, and the engines achieve good performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Shopping in physical stores is a popular option for many people. Each week, a lot of people enter supermarkets in which they are immersed with many different product choices. In many shopping centers, customer service representatives (CSRs) are employed to answer questions from customers about products. However, a customer may experience long waiting time for assistance if all CSRs are busy interacting with other customers. Therefore, automated solutions can increase customer satisfaction and retention.",
"cite_spans": [
{
"start": 234,
"end": 240,
"text": "(CSRs)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we introduce a mobile-based intelligent shopping assistant, ISA, which is based on advanced techniques in computer vision, speech processing, and natural language processing. A user just needs to take a picture or scan the barcode of the product of interest. After that, the user can ask ISA a variety of questions such as product 1 The work was conducted while the first author interned at Adobe Research. features, specifications and return policies. The assistant can also guide the user through the purchase process or recommend other similar products. This work can be used as the first step in fully automating customer service in shopping centers. With ISA, no CSRs will be needed as customers can simply turn to their phones for assistance. We have developed a fully functional prototype of ISA.",
"cite_spans": [
{
"start": 346,
"end": 347,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of the paper is organized as follows. Section 2 introduces some related work. Section 3 gives an overview of the design and implementation of the system. Finally, Section 4 concludes the paper and suggests future directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The most closely related branches of work to ours are probably customer service chatbots for e-commerce websites. For example, SuperAgent (Cui et al., 2017 ) is a powerful chatbot that leverages large-scale and publicly available e-commerce data. The researchers demonstrate SuperAgent as an add-on extension to mainstream web browsers. When a user visits a product page, SuperAgent crawls the information of the product from multi- ple data sources within the page. After that, the user can ask SuperAgent about the product. Unlike SuperAgent, ISA is designed to assist users at physical stores ( Figure 1 ). In addition to natural language processing techniques, ISA also needs to use techniques in computer vision and speech processing when interacting with the users.",
"cite_spans": [
{
"start": 138,
"end": 155,
"text": "(Cui et al., 2017",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 598,
"end": 606,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "3 System Description",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "When an in-store user wants to get more information about a specific product, the user just needs to take a picture or scan the barcode of the product. The system then retrieves the information of the product of interest from a database by using computer vision techniques. After that, the user can ask natural language questions about the product specifications to the system. The user can either type in the questions or directly speak out the questions using voice. ISA is integrated with both speech recognition and speech synthesis abilities, which allows users to ask questions without typing. Figure 2 shows the system overview of ISA. As the figure shows, a mobile client communicates with the backend through a well-defined HTTP REST API. This creates a separation between the client and the server, which allows ISA to be scaled without much difficulty. The backend consists of three main components: 1) speech processing, 2) computer vision, 3) natural language processing. Users can chat with ISA in speech. The speech recognition and speech synthesis are implemented by calling third-party services. The computer vision component is responsible for recognizing the products that the user is facing. Given an image of a product of interest, a fine-grained visual object classification model will be used to identify the product and retrieve its information. This task is challenging because many products are visually very similar (e.g., washers and dryers usually have similar shape). Therefore, we enhance the component with highly accurate standard algorithms for barcode recognition. In case it is difficult for the object classification model to identify the product of interest accurately, the user can simply scan the barcode of the product. Finally, the natural language processing component is responsible for generating a response from a text query or question. We will next detail each part of the natural language processing component in the following sections.",
"cite_spans": [],
"ref_spans": [
{
"start": 600,
"end": 608,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
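The exact REST interface is not described in the paper, so the following is only a minimal sketch of how a mobile client could talk to such a backend over HTTP; the endpoint names (/recognize, /ask), the host, and the payload fields are assumptions for illustration, not ISA's actual API.

import requests

BASE_URL = "https://isa-backend.example.com"  # hypothetical backend host

# Step 1: identify the product from a photo (a scanned barcode string could be sent instead).
with open("product_photo.jpg", "rb") as f:
    reply = requests.post(f"{BASE_URL}/recognize", files={"image": f})
product = reply.json()  # e.g. {"product_id": "123", "name": "Office Chair"}

# Step 2: ask a natural language question about the recognized product.
reply = requests.post(
    f"{BASE_URL}/ask",
    json={"product_id": product["product_id"], "query": "What is the return policy?"},
)
print(reply.json()["answer"])

Because the client only exchanges JSON over HTTP, the backend components can be scaled or replaced independently of the app, which is the separation the paragraph above describes.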
{
"text": "When ISA receives a query from a user, the intent recognition engine is used to determine the intent of the query. Based on the recognized intent, the appropriate domain-specific engine will be triggered. We define four different types of intent as shown in Table 1 . Intent detection can be naturally treated as a classification problem. In this work we build a random forest model (Breiman, 2001) for the problem and it achieves good performance. Other popular classifiers like support vector machines (Haffner et al., 2003) and deep neural network methods (Sarikaya et al., 2011) can also be applied in this case.",
"cite_spans": [
{
"start": 383,
"end": 398,
"text": "(Breiman, 2001)",
"ref_id": "BIBREF1"
},
{
"start": 504,
"end": 526,
"text": "(Haffner et al., 2003)",
"ref_id": "BIBREF5"
},
{
"start": 559,
"end": 582,
"text": "(Sarikaya et al., 2011)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 258,
"end": 265,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Intent Recognition",
"sec_num": "3.2"
},
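Table 1 with the four intent types is not reproduced in this parse; assuming the intents mirror the four engines of Sections 3.3-3.6 (product specification QA, recommendation, purchase, and chit chat), a minimal sketch of the dispatch step could look as follows, with illustrative names only.

# Hypothetical routing from a recognized intent to a domain-specific engine.
def handle_query(query, intent_classifier, engines):
    intent = intent_classifier.predict([query])[0]  # e.g. "spec_qa", "recommendation", "purchase", "chit_chat"
    return engines[intent](query)                   # trigger the matching engine

engines = {
    "spec_qa":        lambda q: "answer from the product specification QA engine",
    "recommendation": lambda q: "list of similar products",
    "purchase":       lambda q: "purchase-flow message for the client app",
    "chit_chat":      lambda q: "small-talk reply from the seq2seq model",
}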
{
"text": "We create a dataset of 500 different queries and use it to build a random forest (RF) for intent classification. Approximately 2/3 of the cases are used as training set, whereas the rest (1/3) are used as test set, in order to estimate the model's performance. We create a bag-of-words feature vector for each query and use it as input for the RF. The number of trees in the forest is set to be 80. For each node split during the growing of a tree, the number of features used to determine the best split is set to be \u221a k where k is the total number of features of the dataset. The accuracy of the trained RF model evaluated on the test set is 98.20%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Intent Recognition",
"sec_num": "3.2"
},
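A minimal scikit-learn sketch consistent with the setup described above (bag-of-words features, a random forest with 80 trees, √k features per split, and a 2/3-1/3 split); the 500-query dataset is not public, so the few example queries below are placeholders.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Placeholder queries and intent labels standing in for the 500-query dataset.
queries = ["How much does this chair weigh?", "Show me similar chairs",
           "I would like to buy this product", "How are you doing?"]
labels = ["spec_qa", "recommendation", "purchase", "chit_chat"]

X_train, X_test, y_train, y_test = train_test_split(queries, labels, test_size=1/3, random_state=0)

model = make_pipeline(
    CountVectorizer(),                               # bag-of-words feature vector per query
    RandomForestClassifier(n_estimators=80,          # 80 trees
                           max_features="sqrt",      # sqrt(k) features per split
                           random_state=0),
)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))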
{
"text": "The product specification QA engine is used to answer questions regarding the specifications of a product. For every product, there is a list of specifications in the form of (specification name, specification value). We formalize the task of the engine as follows: Given a question Q about a product P and the list of specifications (s 1 , s 2 , ..., s M ) of P , the goal is to identify the specification that is most relevant to the question Q. M is the number of specifications of the product, and s i is the sequence of words in the name of the i th specification. In this formulation, the task is similar to the answer selection problem. 'Answers' shall be individual Previous methods for answer selection typically relies on feature engineering, linguistic tools, or external resources (Wang and Manning, 2010; Heilman and Smith, 2010; Yih et al., 2013; Yao et al., 2013) . Recently, with the renaissance of neural network models, many deep learning based methods have been proposed to tackle the answer selection problem (Rao et al., 2016; Zhiguo Wang, 2017; Bian et al., 2017; Shen et al., 2017; Tran et al., 2018; Lai et al., 2018a; Tay et al., 2018; Lai et al., 2018b,c; Rao et al., 2019; Lai et al., 2019; Garg et al., 2019; Kamath et al., 2019; Laskar et al., 2020) . These deep learning based methods typically outperform traditional techniques without relying on any feature engineering or expensive external resources. For example, the IWAN model proposed in (Shen et al., 2017) achieves competitive performance on public datasets such as TrecQA (Wang et al., 2007) and WikiQA (Yang et al., 2015) .",
"cite_spans": [
{
"start": 793,
"end": 817,
"text": "(Wang and Manning, 2010;",
"ref_id": "BIBREF24"
},
{
"start": 818,
"end": 842,
"text": "Heilman and Smith, 2010;",
"ref_id": "BIBREF6"
},
{
"start": 843,
"end": 860,
"text": "Yih et al., 2013;",
"ref_id": "BIBREF28"
},
{
"start": 861,
"end": 878,
"text": "Yao et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 1029,
"end": 1047,
"text": "(Rao et al., 2016;",
"ref_id": "BIBREF17"
},
{
"start": 1048,
"end": 1066,
"text": "Zhiguo Wang, 2017;",
"ref_id": "BIBREF29"
},
{
"start": 1067,
"end": 1085,
"text": "Bian et al., 2017;",
"ref_id": "BIBREF0"
},
{
"start": 1086,
"end": 1104,
"text": "Shen et al., 2017;",
"ref_id": "BIBREF20"
},
{
"start": 1105,
"end": 1123,
"text": "Tran et al., 2018;",
"ref_id": "BIBREF23"
},
{
"start": 1124,
"end": 1142,
"text": "Lai et al., 2018a;",
"ref_id": "BIBREF8"
},
{
"start": 1143,
"end": 1160,
"text": "Tay et al., 2018;",
"ref_id": "BIBREF22"
},
{
"start": 1161,
"end": 1181,
"text": "Lai et al., 2018b,c;",
"ref_id": null
},
{
"start": 1182,
"end": 1199,
"text": "Rao et al., 2019;",
"ref_id": "BIBREF18"
},
{
"start": 1200,
"end": 1217,
"text": "Lai et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 1218,
"end": 1236,
"text": "Garg et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 1237,
"end": 1257,
"text": "Kamath et al., 2019;",
"ref_id": "BIBREF7"
},
{
"start": 1258,
"end": 1278,
"text": "Laskar et al., 2020)",
"ref_id": "BIBREF12"
},
{
"start": 1475,
"end": 1494,
"text": "(Shen et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 1562,
"end": 1581,
"text": "(Wang et al., 2007)",
"ref_id": "BIBREF25"
},
{
"start": 1593,
"end": 1612,
"text": "(Yang et al., 2015)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Product Specification QA",
"sec_num": "3.3"
},
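Restated as a formula (our paraphrase of the formalization above, not notation taken from the paper), the engine selects

s^{*} = \arg\max_{1 \le i \le M} \mathrm{rel}(Q, s_i),

where rel(Q, s_i) is the relevance score that the answer selection model assigns to the question Q and the name of the i-th specification.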
{
"text": "Using Amazon Mechanical Turk, a popular crowdsourcing platform, we create a dataset of 6,922 questions that are related to 369 specifications and 148 products listed in the Home Depot website. We implement the IWAN model and train the model on the collected dataset. The top-1 accuracy, top-2 accuracy, and top-3 accuracy of the model evaluated on a held-out test set are 85.60%, 95.80%, and 97.60%, respectively. In production, given a question about a product, the trained model is used to rank every specification of the product based on how relevant the specification is. We select the top-ranked specification and use it to generate the response sentence using predefined templates (Cui et al., 2017) . An example of the product specification QA engine's outputs is shown in Figure 3 . The first question from the user is matched to the product weight specification, whereas the second question is matched to the return policy specification.",
"cite_spans": [
{
"start": 687,
"end": 705,
"text": "(Cui et al., 2017)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 780,
"end": 788,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Product Specification QA",
"sec_num": "3.3"
},
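A minimal sketch of the production flow described above: score every specification name against the question, pick the top-ranked one, and fill a predefined template. The word-overlap scorer and the template wording below are only stand-ins for the trained IWAN model and the paper's actual templates.

import re

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(question, spec_name):
    # Toy relevance score standing in for the trained IWAN model.
    return len(tokens(question) & tokens(spec_name))

def answer_spec_question(question, specs, score=overlap_score):
    # specs: list of (specification name, specification value) pairs.
    name, value = max(specs, key=lambda s: score(question, s[0]))
    return f"The {name.lower()} of this product is {value}."   # illustrative template

specs = [("Product Weight", "35 lb"), ("Return Policy", "90 days"), ("Seat Height", "19 in")]
print(answer_spec_question("What is the weight of this chair?", specs))
# -> The product weight of this product is 35 lb.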
{
"text": "The recommendation engine is responsible for giving new suggestions and recommendations to users. When a user wants to look for similar products (e.g., by saying \"Are there any other similar products?\"), the engine will search the database for related products and then send the information of them to the app for displaying to the user (Figure 4) .",
"cite_spans": [],
"ref_spans": [
{
"start": 337,
"end": 347,
"text": "(Figure 4)",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Recommendation",
"sec_num": "3.4"
},
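The retrieval criterion for "related products" is not specified in the paper; the sketch below simply treats products in the same category as related, purely as an illustration of the engine's role.

def find_similar(product_id, catalog):
    # Return every other product that shares the current product's category.
    current = catalog[product_id]
    return [item for pid, item in catalog.items()
            if pid != product_id and item["category"] == current["category"]]

catalog = {
    "123": {"name": "Office Chair A", "category": "office chairs"},
    "456": {"name": "Office Chair B", "category": "office chairs"},
    "789": {"name": "Standing Desk", "category": "desks"},
}
print(find_similar("123", catalog))  # -> [{'name': 'Office Chair B', 'category': 'office chairs'}]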
{
"text": "The purchase engine is responsible for guiding the user through the purchase process. When a user wants to buy a specific product (e.g., by saying \"I would like to purchase this product.\"), the engine will first query the database for information such as the product listing price, available discounts, and user payment information. After that, the engine will craft a special response message and send it to the client app in the user's mobile device. The response message will instruct the app how to assist the user through the purchase process or provide personalized discounts if applicable ( Figure 5 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 598,
"end": 606,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Purchase",
"sec_num": "3.5"
},
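The format of the special response message is not given in the paper; the dictionary below is one plausible shape for such a message, with every field name and value invented for illustration (the 5% discount mirrors the example in Figure 5).

import json

# Hypothetical purchase-flow message crafted by the backend for the client app.
purchase_message = {
    "type": "purchase_flow",
    "product_id": "123",
    "listing_price": 129.99,
    "discount": {"percent": 5, "reason": "personalized offer"},
    "payment_method": "card on file",
    "steps": ["confirm_address", "confirm_payment", "place_order"],
}

print(json.dumps(purchase_message))  # serialized and returned through the REST API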
{
"text": "The chit chat engine is used to reply to greeting queries such as \"How are you doing?\" or queries that are off the subject such as \"Is the sky blue?\". Our approach to building the engine is based on the sequence-to-sequence (seq2seq) framework (Sutskever et al., 2014) . The model consists of two recurrent neural networks: an encoder and a decoder. The encoder converts the input query into a fixed size feature vector. Based on that feature vector, the decoder generates the output response, one word at a time. The model is integrated with the global attention mechanism (Luong et al., 2015) so that the decoder can attend to specific parts of the input query when decoding instead of relying only on the fixed size feature vector. We collect about 3M query-response pairs from Reddit and use them to train the seq2seq model. Examples of the engine's outputs are shown below: Q: How are you doing? A: I'm doing well. Q: Is the sky blue? A: Yes.",
"cite_spans": [
{
"start": 244,
"end": 268,
"text": "(Sutskever et al., 2014)",
"ref_id": "BIBREF21"
},
{
"start": 574,
"end": 594,
"text": "(Luong et al., 2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Chit Chat",
"sec_num": "3.6"
},
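A minimal PyTorch sketch of the encoder-decoder architecture with global (dot-product) attention described above; it omits tokenization, training on the Reddit pairs, and decoding, and it is not the authors' implementation.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)

    def forward(self, src):                          # src: (batch, src_len) token ids
        outputs, hidden = self.gru(self.embed(src))
        return outputs, hidden                       # outputs: (batch, src_len, hidden)

class AttnDecoder(nn.Module):
    def __init__(self, vocab_size, hidden_size):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size * 2, vocab_size)

    def forward(self, tgt, hidden, enc_outputs):     # teacher-forced decoding
        dec_outputs, hidden = self.gru(self.embed(tgt), hidden)
        # Global attention: score every encoder state against every decoder state.
        scores = torch.bmm(dec_outputs, enc_outputs.transpose(1, 2))
        context = torch.bmm(torch.softmax(scores, dim=-1), enc_outputs)
        logits = self.out(torch.cat([dec_outputs, context], dim=-1))
        return logits, hidden                        # logits: (batch, tgt_len, vocab)

# Smoke test with random token ids.
enc, dec = Encoder(1000, 256), AttnDecoder(1000, 256)
enc_out, h = enc(torch.randint(0, 1000, (2, 7)))     # a batch of 2 queries, 7 tokens each
logits, _ = dec(torch.randint(0, 1000, (2, 5)), h, enc_out)
print(logits.shape)                                  # torch.Size([2, 5, 1000])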
{
"text": "In this paper, we present ISA, a powerful intelligent shopping assistant. ISA is designed to achieve the goal of improving shopping experience in physical stores by leveraging advanced techniques in computer vision, speech processing, and natural language processing. A user only needs to take a picture or scan the barcode of the product of interest, and then the user can ask ISA a variety of questions about the product. The system can also guide the user through the purchase decision or recommend other similar products to the user.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
},
{
"text": "There are many fronts on which we will be exploring in the future. Currently the product specification QA engine answers only questions regarding the specifications of a product. We will implement engines for addressing other kinds of questions. We will also extend ISA to better support other languages and informal text Martin et al., 2020) . In addition, we will conduct a user study to evaluate our system in the future. Finally, we wish to extend this work to other domains such as building an as-sistant for handling image editing requests (Brixey et al., 2018) .",
"cite_spans": [
{
"start": 322,
"end": 342,
"text": "Martin et al., 2020)",
"ref_id": "BIBREF14"
},
{
"start": 546,
"end": 567,
"text": "(Brixey et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "4"
}
],
"back_matter": [
{
"text": "The authors wish to thank Dr. Hung Bui (VinAI Research) and Dr. Sheng Li (University of Georgia) for their guidance and feedback on this project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": "5"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A compare-aggregate model with dynamic-clip attention for answer selection",
"authors": [
{
"first": "Weijie",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Guang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhiqing",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 ACM on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1987--1990",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weijie Bian, Si Li, Zhao Yang, Guang Chen, and Zhiqing Lin. 2017. A compare-aggregate model with dynamic-clip attention for answer selection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06 -10, 2017, pages 1987-1990.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Random forests",
"authors": [
{
"first": "Leo",
"middle": [],
"last": "Breiman",
"suffix": ""
}
],
"year": 2001,
"venue": "Mach. Learn",
"volume": "45",
"issue": "1",
"pages": "5--32",
"other_ids": {
"DOI": [
"10.1023/A:1010933404324"
]
},
"num": null,
"urls": [],
"raw_text": "Leo Breiman. 2001. Random forests. Mach. Learn., 45(1):5-32.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A system for automated image editing from natural language commands",
"authors": [
{
"first": "Jacqueline",
"middle": [],
"last": "Brixey",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Manuvinakurike",
"suffix": ""
},
{
"first": "Nham",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1812.01083"
]
},
"num": null,
"urls": [],
"raw_text": "Jacqueline Brixey, Ramesh Manuvinakurike, Nham Le, Tuan Lai, Walter Chang, and Trung Bui. 2018. A system for automated image editing from natural language commands. arXiv preprint arXiv:1812.01083.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Superagent: A customer service chatbot for e-commerce websites",
"authors": [
{
"first": "Lei",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Shaohan",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chuanqi",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Chaoqun",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "97--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Cui, Furu Wei, Shaohan Huang, Chuanqi Tan, Chaoqun Duan, and Ming Zhou. 2017. Superagent: A customer service chatbot for e-commerce web- sites. In Proceedings of ACL 2017, System Demon- strations, pages 97-102. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Tanda: Transfer and adapt pre-trained transformer models for answer sentence selection",
"authors": [
{
"first": "Siddhant",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Thuy",
"middle": [],
"last": "Vu",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Moschitti",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.04118"
]
},
"num": null,
"urls": [],
"raw_text": "Siddhant Garg, Thuy Vu, and Alessandro Moschitti. 2019. Tanda: Transfer and adapt pre-trained trans- former models for answer sentence selection. arXiv preprint arXiv:1911.04118.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Optimizing svms for complex call classification",
"authors": [
{
"first": "P",
"middle": [],
"last": "Haffner",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "J",
"middle": [
"H"
],
"last": "Wright",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings. (ICASSP '03). 2003 IEEE International Conference on",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2003.1198860"
]
},
"num": null,
"urls": [],
"raw_text": "P. Haffner, G. Tur, and J. H. Wright. 2003. Optimizing svms for complex call classification. In Acoustics, Speech, and Signal Processing, 2003. Proceedings. (ICASSP '03). 2003 IEEE International Conference on, volume 1, pages I-632-I-635 vol.1.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Tree edit models for recognizing textual entailments, paraphrases, and answers to questions",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Heilman",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Noah",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Smith",
"suffix": ""
}
],
"year": 2010,
"venue": "Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10",
"volume": "",
"issue": "",
"pages": "1011--1019",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Heilman and Noah A. Smith. 2010. Tree edit models for recognizing textual entailments, para- phrases, and answers to questions. In Human Language Technologies: The 2010 Annual Confer- ence of the North American Chapter of the Associa- tion for Computational Linguistics, HLT '10, pages 1011-1019, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Predicting and integrating expected answer types into a simple recurrent neural network model for answer sentence selection",
"authors": [
{
"first": "B",
"middle": [],
"last": "Sanjay Kamath",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Grau",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanjay Kamath, B. Grau, and Y. Ma. 2019. Predicting and integrating expected answer types into a simple recurrent neural network model for answer sentence selection. Computaci\u00f3n y Sistemas, 23.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A simple end-to-end question answering model for product information",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Nedim",
"middle": [],
"last": "Lipka",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the First Workshop on Economics and Natural Language Processing",
"volume": "",
"issue": "",
"pages": "38--43",
"other_ids": {
"DOI": [
"10.18653/v1/W18-3105"
]
},
"num": null,
"urls": [],
"raw_text": "Tuan Lai, Trung Bui, Sheng Li, and Nedim Lipka. 2018a. A simple end-to-end question answering model for product information. In Proceedings of the First Workshop on Economics and Natural Lan- guage Processing, pages 38-43, Melbourne, Aus- tralia. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Supervised transfer learning for product information question answering",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "Nedim",
"middle": [],
"last": "Lipka",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "17th IEEE International Conference on Machine Learning and Applications (ICMLA)",
"volume": "",
"issue": "",
"pages": "1109--1114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Lai, Trung Bui, Nedim Lipka, and Sheng Li. 2018b. Supervised transfer learning for product in- formation question answering. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 1109-1114. IEEE.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A gated self-attention memory network for answer selection",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Quan Hung Tran",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kihara",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5953--5959",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1610"
]
},
"num": null,
"urls": [],
"raw_text": "Tuan Lai, Quan Hung Tran, Trung Bui, and Daisuke Kihara. 2019. A gated self-attention memory net- work for answer selection. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5953-5959, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A review on deep learning techniques applied to answer selection",
"authors": [
{
"first": "Trung",
"middle": [],
"last": "Tuan Manh Lai",
"suffix": ""
},
{
"first": "Sheng",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2132--2144",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tuan Manh Lai, Trung Bui, and Sheng Li. 2018c. A review on deep learning techniques applied to an- swer selection. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2132-2144, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Contextualized embeddings based transformer encoder for sentence similarity modeling in answer selection task",
"authors": [
{
"first": "Jimmy",
"middle": [
"Xiangji"
],
"last": "Md Tahmid Rahman Laskar",
"suffix": ""
},
{
"first": "Enamul",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hoque",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of The 12th Language Resources and Evaluation Conference",
"volume": "",
"issue": "",
"pages": "5505--5514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Md Tahmid Rahman Laskar, Jimmy Xiangji Huang, and Enamul Hoque. 2020. Contextualized embed- dings based transformer encoder for sentence sim- ilarity modeling in answer selection task. In Pro- ceedings of The 12th Language Resources and Eval- uation Conference, pages 5505-5514, Marseille, France. European Language Resources Association.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christo- pher D. Manning. 2015. Effective approaches to attention-based neural machine translation. CoRR, abs/1508.04025.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot",
"authors": [
{
"first": "Louis",
"middle": [],
"last": "Martin",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Muller",
"suffix": ""
},
{
"first": "Pedro Javier Ortiz",
"middle": [],
"last": "Su\u00e1rez",
"suffix": ""
},
{
"first": "Yoann",
"middle": [],
"last": "Dupont",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Romary",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7203--7219",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.645"
]
},
"num": null,
"urls": [],
"raw_text": "Louis Martin, Benjamin Muller, Pedro Javier Or- tiz Su\u00e1rez, Yoann Dupont, Laurent Romary,\u00c9ric de la Clergerie, Djam\u00e9 Seddah, and Beno\u00eet Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7203-7219, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "PhoBERT: Pre-trained language models for Vietnamese",
"authors": [
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen and Anh Tuan Nguyen. 2020. PhoBERT: Pre-trained language models for Viet- namese. Findings of EMNLP.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "BERTweet: A pre-trained language model for English Tweets",
"authors": [
{
"first": "Thanh",
"middle": [],
"last": "Dat Quoc Nguyen",
"suffix": ""
},
{
"first": "Anh",
"middle": [
"Tuan"
],
"last": "Vu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Nguyen",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Quoc Nguyen, Thanh Vu, and Anh Tuan Nguyen. 2020. BERTweet: A pre-trained language model for English Tweets. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing: System Demonstrations.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Noisecontrastive estimation for answer selection with deep neural networks",
"authors": [
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 25th ACM International on Conference on Information and Knowledge Management",
"volume": "",
"issue": "",
"pages": "1913--1916",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jinfeng Rao, Hua He, and Jimmy Lin. 2016. Noise- contrastive estimation for answer selection with deep neural networks. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 1913-1916. ACM.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bridging the gap between relevance matching and semantic matching for short text similarity modeling",
"authors": [
{
"first": "Jinfeng",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Linqing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "5370--5381",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1540"
]
},
"num": null,
"urls": [],
"raw_text": "Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, and Jimmy Lin. 2019. Bridging the gap be- tween relevance matching and semantic matching for short text similarity modeling. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5370-5381, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Deep belief nets for natural language call-routing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sarikaya",
"suffix": ""
},
{
"first": "G",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Ramabhadran",
"suffix": ""
}
],
"year": 2011,
"venue": "2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
"volume": "",
"issue": "",
"pages": "5680--5683",
"other_ids": {
"DOI": [
"10.1109/ICASSP.2011.5947649"
]
},
"num": null,
"urls": [],
"raw_text": "R. Sarikaya, G. E. Hinton, and B. Ramabhadran. 2011. Deep belief nets for natural language call-routing. In 2011 IEEE International Conference on Acous- tics, Speech and Signal Processing (ICASSP), pages 5680-5683.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inter-weighted alignment network for sentence pair modeling",
"authors": [
{
"first": "Gehui",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yunlun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zhi-Hong",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1190--1200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gehui Shen, Yunlun Yang, and Zhi-Hong Deng. 2017. Inter-weighted alignment network for sentence pair modeling. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 1190-1200.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Quoc",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Multi-cast attention networks",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Tay",
"suffix": ""
},
{
"first": "Anh",
"middle": [],
"last": "Luu",
"suffix": ""
},
{
"first": "Siu Cheung",
"middle": [],
"last": "Tuan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hui",
"suffix": ""
}
],
"year": 2018,
"venue": "KDD",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Multi-cast attention networks. In KDD.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The context-dependent additive recurrent neural net",
"authors": [
{
"first": "Tuan",
"middle": [],
"last": "Quan Hung Tran",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Lai",
"suffix": ""
},
{
"first": "Ingrid",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trung",
"middle": [],
"last": "Zukerman",
"suffix": ""
},
{
"first": "Hung",
"middle": [],
"last": "Bui",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bui",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1274--1283",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1115"
]
},
"num": null,
"urls": [],
"raw_text": "Quan Hung Tran, Tuan Lai, Gholamreza Haffari, Ingrid Zukerman, Trung Bui, and Hung Bui. 2018. The context-dependent additive recurrent neural net. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1274-1283, New Orleans, Louisiana. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Probabilistic tree-edit models with structured latent variables for textual entailment and question answering",
"authors": [
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics, COLING '10",
"volume": "",
"issue": "",
"pages": "1164--1172",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengqiu Wang and Christopher D. Manning. 2010. Probabilistic tree-edit models with structured latent variables for textual entailment and question answer- ing. In Proceedings of the 23rd International Con- ference on Computational Linguistics, COLING '10, pages 1164-1172, Stroudsburg, PA, USA. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "What is the jeopardy model? a quasisynchronous grammar for qa",
"authors": [
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
}
],
"year": 2007,
"venue": "EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mengqiu Wang, Noah A. Smith, and Teruko Mita- mura. 2007. What is the jeopardy model? a quasi- synchronous grammar for qa. In EMNLP-CoNLL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "WikiQA: A challenge dataset for open-domain question answering",
"authors": [
{
"first": "Yi",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yih",
"middle": [],
"last": "Wen-Tau",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Meek",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2013--2018",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1237"
]
},
"num": null,
"urls": [],
"raw_text": "Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain ques- tion answering. In Proceedings of the 2015 Con- ference on Empirical Methods in Natural Language Processing, pages 2013-2018, Lisbon, Portugal. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Answer extraction as sequence tagging with tree edit distance",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callisonburch",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2013,
"venue": "North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xuchen Yao, Benjamin Van Durme, Chris Callison- burch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In North American Chapter of the Association for Computa- tional Linguistics (NAACL).",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Question answering using enhanced lexical semantic models",
"authors": [
{
"first": "Ming-Wei",
"middle": [],
"last": "Wen-Tau Yih",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Meek",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pastusiak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1744--1753",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1744-1753, Sofia, Bulgaria. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Bilateral multi-perspective matching for natural language sentences",
"authors": [
{
"first": "Radu Florian Zhiguo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Wael",
"middle": [],
"last": "Hamza",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17",
"volume": "",
"issue": "",
"pages": "4144--4150",
"other_ids": {
"DOI": [
"10.24963/ijcai.2017/579"
]
},
"num": null,
"urls": [],
"raw_text": "Radu Florian Zhiguo Wang, Wael Hamza. 2017. Bilat- eral multi-perspective matching for natural language sentences. In Proceedings of the Twenty-Sixth Inter- national Joint Conference on Artificial Intelligence, IJCAI-17, pages 4144-4150.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "ISA assists users at physical stores",
"type_str": "figure"
},
"FIGREF1": {
"uris": null,
"num": null,
"text": "The system overview of ISA",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Answering questions regarding product specifications",
"type_str": "figure"
},
"FIGREF3": {
"uris": null,
"num": null,
"text": "ISA recommends similar products to the user product specifications.",
"type_str": "figure"
},
"FIGREF4": {
"uris": null,
"num": null,
"text": "The user purchased an office chair with 5% discount",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table",
"html": null
}
}
}
}