{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:41.440080Z" }, "title": "MFAQ: a Multilingual FAQ Dataset", "authors": [ { "first": "Maxime", "middle": [], "last": "De Bruyn", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" }, { "first": "Ehsan", "middle": [], "last": "Lotfi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" }, { "first": "Jeska", "middle": [], "last": "Buhmann", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" }, { "first": "Walter", "middle": [], "last": "Daelemans", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Antwerp", "location": { "country": "Belgium" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "In this paper, we present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages. Although this is significantly larger than existing FAQ retrieval datasets, it comes with its own challenges: duplication of content and uneven distribution of topics. We adopt a similar setup as Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) and test various bi-encoders on this dataset. Our experiments reveal that a multilingual model based on XLM-RoBERTa (Conneau et al., 2019) achieves the best results, except for English. Lower resources languages seem to learn from one another as a multilingual model achieves a higher MRR than language-specific ones. Our qualitative analysis reveals the brittleness of the model on simple word changes. We publicly release our dataset 1 , model 2 and training script 3 .", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "In this paper, we present the first multilingual FAQ dataset publicly available. We collected around 6M FAQ pairs from the web, in 21 different languages. Although this is significantly larger than existing FAQ retrieval datasets, it comes with its own challenges: duplication of content and uneven distribution of topics. We adopt a similar setup as Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) and test various bi-encoders on this dataset. Our experiments reveal that a multilingual model based on XLM-RoBERTa (Conneau et al., 2019) achieves the best results, except for English. Lower resources languages seem to learn from one another as a multilingual model achieves a higher MRR than language-specific ones. Our qualitative analysis reveals the brittleness of the model on simple word changes. We publicly release our dataset 1 , model 2 and training script 3 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Organizations create Frequently Asked Questions (FAQ) pages on their website to provide a better service to their users. FAQs are also useful to automatically answer the most frequent questions on different communication channels: email, chatbot, or search bar. FAQ retrieval is the task of locating the right answer within a collection of candidate question and answer pairs. It is closely related to the tasks of non-factoid QA and community QA, although it has its own specificities. 
The total number of possible answers is generally small (the average FAQ page on the web has 6 answers), and only one is correct. Retrieval systems cannot rely on named entities, as these are typically shared by many possible answers. For example, three out of four answers in Table 1 share the COVID-19 entity. Lastly, new user queries are matched against pairs of questions and answers, as opposed to passages for non-factoid QA. 1 https://huggingface.co/datasets/clips/mfaq 2 https://huggingface.co/clips/mfaq 3 https://github.com/clips/mfaq", "cite_spans": [], "ref_spans": [ { "start": 763, "end": 771, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Is it safe for my child to get a COVID-19 vaccine? Yes. Studies show that COVID-19 vaccines are safe and effective. [...] If I am pregnant, can I get a COVID-19 vaccine? Yes, if you are pregnant, you can receive a COVID-19 vaccine. What are the ingredients in COVID-19 vaccines? Vaccine ingredients can vary by manufacturer. How long does protection from a COVID-19 vaccine last? We don't know how long protection lasts for those who are vaccinated. [...]", "cite_spans": [ { "start": 116, "end": 121, "text": "[...]", "ref_id": null }, { "start": 451, "end": 456, "text": "[...]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "Since FAQ-Finder (Hammond et al., 1995), researchers have applied different methods to the task of FAQ retrieval (Sneiders, 1999; Jijkoun and de Rijke, 2005; Riezler et al., 2007; Karan and \u0160najder, 2016; Sakata et al., 2019). However, since the advent of deep learning and Transformers, interest has somewhat faded compared to other areas of QA (Rogers et al., 2021). One possible explanation is the lack of a dedicated large-scale dataset. The datasets available are mostly limited to English and domain-specific.", "cite_spans": [ { "start": 17, "end": 39, "text": "(Hammond et al., 1995)", "ref_id": "BIBREF13" }, { "start": 109, "end": 125, "text": "(Sneiders, 1999;", "ref_id": "BIBREF41" }, { "start": 126, "end": 153, "text": "Jijkoun and de Rijke, 2005;", "ref_id": "BIBREF15" }, { "start": 154, "end": 175, "text": "Riezler et al., 2007;", "ref_id": "BIBREF36" }, { "start": 176, "end": 200, "text": "Karan and \u0160najder, 2016;", "ref_id": "BIBREF17" }, { "start": 201, "end": 221, "text": "Sakata et al., 2019)", "ref_id": "BIBREF38" }, { "start": 347, "end": 368, "text": "(Rogers et al., 2021)", "ref_id": "BIBREF37" } ], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "On the other hand, the task of factoid question answering has received the attention of many researchers. Recently, Transformer encoders such as Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) have been successfully applied to the retrieval part of factoid QA, overcoming strong baselines such as TF-IDF and BM25. However, we show that DPR's performance on passage retrieval does not directly transfer to FAQ retrieval. Lewis et al. (2021) recently released PAQ, a dataset of 65M pairs of Probably Asked Questions. However, answers are typically short in PAQ (a few words), which differs from FAQs, where answers are longer than questions.", "cite_spans": [ { "start": 172, "end": 196, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF19" }, { "start": 426, "end": 445, "text": "Lewis et al. 
(2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "Another way to answer users' questions is to use Knowledge Grounded Conversation models as it does not require the pre-generation of all possible pairs of questions and answers (Komeili et al., 2021; De Bruyn et al., 2020) . However, at the time of writing these models can hallucinate knowledge , which limits their attractiveness in a corporate environment.", "cite_spans": [ { "start": 177, "end": 199, "text": "(Komeili et al., 2021;", "ref_id": "BIBREF22" }, { "start": 200, "end": 222, "text": "De Bruyn et al., 2020)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "In this paper, we provide the first multilingual dataset of FAQs. We collected around 6M FAQ pairs from the web in 21 different languages. This is significantly larger than existing datasets. However, collecting data from the web brings its own challenges: duplication of content and uneven distribution of topics. We also provide the first multilingual FAQ retriever. We show that models trained on all languages at once outperform monolingual models (except for English).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "The remainder of the paper is organized as follows. We first review the existing models and datasets available for the task of FAQ retrieval. We then present our own dataset and apply different models to it. We finally perform some analysis on the results and conclude. Our dataset and model are available on the HuggingFace Hub 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Example FAQs", "sec_num": null }, { "text": "In this section, we review the existing literature on FAQ retrieval. We first start by reviewing available models and then look at the available datasets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Since the release of FAQ-Finder (Hammond et al., 1995; Burke et al., 1997) and Auto-FAQ (Whitehead, 1995) , several methods have been presented. We grouped them into three categories: lexical, unsupervised, and supervised.", "cite_spans": [ { "start": 32, "end": 54, "text": "(Hammond et al., 1995;", "ref_id": "BIBREF13" }, { "start": 55, "end": 74, "text": "Burke et al., 1997)", "ref_id": "BIBREF2" }, { "start": 79, "end": 105, "text": "Auto-FAQ (Whitehead, 1995)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2.1" }, { "text": "Lexical FAQ-Finder (Hammond et al., 1995; Burke et al., 1997) matches user queries to FAQ questions of the Usenet dataset using Term Frequency-Inverse Document Frequency (TF-IDF). The system tries to bridge the lexical gap between users' queries and FAQ pairs by using the semantic network WordNet (Miller, 1995) to establish correlations between related terms. FAQ-Finder assumes that the question half of the QA pair is the most 4 dataset, model and training script relevant for matching to a new query. Tomuro and Lytinen (2004) improved upon FAQ-Finder by including the other half of the QA pair (the answer). Xie et al. (2020) uses a knowledge graph-based QA framework that considers entities and triples in texts as knowledge anchors. This approach requires the customization of a knowledge graph, which is labor-intensive and domain-specific. 
Sneiders (1999) used a rule-based technique called Prioritized Keyword Matching on top of a traditional TF-IDF approach. The use of shallow language understanding means that the matching is based on keyword comparison. Each FAQ entry must be manually annotated with a set of required and optional keywords. Sneiders (2002a,b, 2009, 2010) brought further developments to this idea. Moreo et al. (2013) propose an approach based on the semi-automatic generation of regular expressions for matching queries with answers. Yang (2009) integrates a domain ontology, user modeling, and a template-based approach to tackle this problem.", "cite_spans": [ { "start": 19, "end": 41, "text": "(Hammond et al., 1995;", "ref_id": "BIBREF13" }, { "start": 42, "end": 61, "text": "Burke et al., 1997)", "ref_id": "BIBREF2" }, { "start": 298, "end": 312, "text": "(Miller, 1995)", "ref_id": "BIBREF30" }, { "start": 506, "end": 531, "text": "Tomuro and Lytinen (2004)", "ref_id": "BIBREF46" }, { "start": 850, "end": 865, "text": "Sneiders (1999)", "ref_id": "BIBREF41" }, { "start": 1157, "end": 1172, "text": "Sneiders (2002a", "ref_id": "BIBREF42" }, { "start": 1173, "end": 1192, "text": "Sneiders ( ,b, 2009", "ref_id": null }, { "start": 1193, "end": 1210, "text": "Sneiders ( , 2010", "ref_id": "BIBREF45" }, { "start": 1252, "end": 1271, "text": "Moreo et al. (2013)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2.1" }, { "text": "Unsupervised Seo (2008, 2006) presented a method that clusters previous user queries to retrieve the right FAQ pair. The authors used a Latent Semantic Analysis (LSA) method to overcome the lexical mismatch between related queries. Jijkoun and de Rijke (2005) experimented with several combinations of TF-IDF retrievers based on the indexing of different fields (question, answer, with or without stop words, the full text of the page). Riezler et al. (2007) extended this method by incorporating a translation-based query expansion, as initially investigated in Berger et al. (2000).", "cite_spans": [ { "start": 13, "end": 29, "text": "Seo (2008, 2006)", "ref_id": null }, { "start": 238, "end": 265, "text": "Jijkoun and de Rijke (2005)", "ref_id": "BIBREF15" }, { "start": 443, "end": 464, "text": "Riezler et al. (2007)", "ref_id": "BIBREF36" }, { "start": 569, "end": 589, "text": "Berger et al. (2000)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2.1" }, { "text": "Supervised Moschitti et al. (2007) proposed an approach based on tree kernels. Tree kernels can be defined as similarity metrics that compare a query to an FAQ pair by parsing both texts and calculating the similarity based on the resulting parse trees. Semantic word similarity can also be added to the computation. Filice et al. (2016) expanded on this method and achieved first place in the Community QA shared task at SemEval 2015 (Nakov et al., 2015). Sakata et al. (2019) were the first to use BERT-based models (Devlin et al., 2018) for the specific task of FAQ retrieval. The relevance between the query and the answers is learned with a fine-tuned BERT model, which outputs a probability score for each (query, answer) pair; the scores are then combined with a query-question similarity score. Mass et al. (2020) also used a BERT model. Their method is based on an initial retrieval of FAQ candidates followed by three re-rankers. De Bruyn et al. 
(2021) used a ConveRT (Henderson et al., 2019) model to automatically answer FAQ questions in Dutch.", "cite_spans": [ { "start": 11, "end": 34, "text": "Moschitti et al. (2007)", "ref_id": "BIBREF32" }, { "start": 317, "end": 337, "text": "Filice et al. (2016)", "ref_id": "BIBREF10" }, { "start": 435, "end": 455, "text": "(Nakov et al., 2015)", "ref_id": "BIBREF33" }, { "start": 458, "end": 478, "text": "Sakata et al. (2019)", "ref_id": "BIBREF38" }, { "start": 518, "end": 539, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 786, "end": 804, "text": "Mass et al. (2020)", "ref_id": "BIBREF29" }, { "start": 961, "end": 985, "text": "(Henderson et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "2.1" }, { "text": "In this section, we review the different datasets publicly available. FAQ retrieval datasets can be evaluated on four axes: source of data (community or organizational), the existence of user queries (paraphrases), domain, and language. See Table 2 for an overview.", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 248, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Datasets", "sec_num": "2.2" }, { "text": "FAQ-Finder (Hammond et al., 1995; Burke et al., 1997) used a dataset collected from Usenet newsgroups. FAQs were created on several topics so that newcomers would not ask the same questions again and again. This dataset is multi-domain. More recently, Karan and \u0160najder (2016) released the FAQIR dataset. It was collected from the \"maintenance & repairs\" section of the QA website Yahoo! Answers. The StackFAQ (Karan and \u0160najder, 2018) dataset was collected from the \"web apps\" section of StackExchange. Feng et al. (2015) collected a QA dataset from the insurancelibrary.com website, where a community of insurance experts replies to users' questions. Several authors (for example Filice et al., 2016) also rely on SemEval 2015 Task 3 (Nakov et al., 2015) on Answer Selection in Community Question Answering. It contains pairs of questions and answers in English and Arabic.", "cite_spans": [ { "start": 11, "end": 33, "text": "(Hammond et al., 1995;", "ref_id": "BIBREF13" }, { "start": 34, "end": 53, "text": "Burke et al., 1997)", "ref_id": "BIBREF2" }, { "start": 250, "end": 274, "text": "Karan and \u0160najder (2016)", "ref_id": "BIBREF17" }, { "start": 408, "end": 433, "text": "(Karan and \u0160najder, 2018)", "ref_id": "BIBREF18" }, { "start": 503, "end": 521, "text": "Feng et al. (2015)", "ref_id": "BIBREF9" }, { "start": 732, "end": 752, "text": "(Nakov et al., 2015)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2.2" }, { "text": "There exist few publicly available datasets of organizational FAQs. OrgFAQ (Lev et al., 2020) is a notable exception. At the time of writing, it is not yet publicly available.", "cite_spans": [ { "start": 76, "end": 93, "text": "(Lev et al., 2020", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Datasets", "sec_num": "2.2" }, { "text": "In this section, we introduce our new multilingual FAQ dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Multilingual FAQ dataset", "sec_num": "3" }, { "text": "Instead of implementing our own web crawler, we used Common Crawl, a non-profit organization which provides an open repository of the web. 
5 Common Crawl's complete web archive consists of petabytes of data collected over 10 years of web crawling (Ortiz Su\u00e1rez et al., 2020). The repository is organized in monthly buckets of crawled data.", "cite_spans": [ { "start": 251, "end": 278, "text": "(Ortiz Su\u00e1rez et al., 2020)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "Web pages are saved in three different formats: WARC files for the raw HTML data, WAT files for the metadata, and WET files for the plain text extracts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "For our purposes, we used WARC files, as we are interested in the raw HTML data. Similar to Lev et al. (2020), we looked for JSON-LD 6 tags containing an FAQPage item. Web developers use this tag to make it easy for search engines to parse FAQs from a web page. 7 The language of each FAQ pair is determined with fastText (Joulin et al., 2016). We also apply some filtering to remove unwanted noise. 8 Using this method, we collected 155M FAQ pairs from 24M different pages.", "cite_spans": [ { "start": 91, "end": 108, "text": "Lev et al. (2020)", "ref_id": "BIBREF26" }, { "start": 262, "end": 263, "text": "7", "ref_id": null }, { "start": 322, "end": 343, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data collection", "sec_num": "3.1" }, { "text": "A common issue with datasets collected from the web is the redundancy of data (Lee et al., 2021). For example, hotel pages on TripAdvisor typically have an FAQ pair referring to shuttle services from the airport to the hotel. 9 The only changing term is the name of the hotel.", "cite_spans": [ { "start": 78, "end": 96, "text": "(Lee et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Deduplication", "sec_num": "3.2" }, { "text": "Algorithms such as SimHash (Charikar, 2002) and MinHash (Broder, 1997) can detect such duplicates. MinHash is an approximate matching algorithm widely used in large-scale deduplication tasks (Lee et al., 2021; Versley and Panchenko, 2012; Gabriel et al., 2018; Gyawali et al., 2020). The main idea of MinHash is to efficiently estimate the Jaccard similarity between two documents, each represented by its set of n-grams. Because of the sparse nature of n-grams, computing the full Jaccard similarity between all document pairs is prohibitive. MinHash alleviates this issue by reducing each document to a fixed-length hash which can be used to efficiently approximate the Jaccard similarity between two documents. 
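As an illustration, here is a minimal pure-Python sketch of such a signature scheme (the helper names and hashing details are ours, not the paper's; a production pipeline would typically use a library such as datasketch):

```python
import hashlib
import random

def shingles(text, n=3):
    """Represent a document by its set of n-token shingles (the paper uses n=3).
    Assumes the page has at least n tokens."""
    tokens = text.split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def minhash_signature(shingle_set, num_perm=100, seed=0):
    """Fixed-length signature: the minimum hash under each of num_perm salted hash functions."""
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(num_perm)]
    return [
        min(int.from_bytes(hashlib.md5(f"{salt}|{s}".encode()).digest()[:8], "big")
            for s in shingle_set)
        for salt in salts
    ]

def estimated_jaccard(sig_a, sig_b):
    """The fraction of agreeing signature positions estimates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```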
MinHash has the additional property that similar documents have similar hashes; we can therefore use Locality Sensitive Hashing (LSH) (Leskovec et al., 2014) to efficiently retrieve similar documents.", "cite_spans": [ { "start": 27, "end": 43, "text": "(Charikar, 2002)", "ref_id": "BIBREF4" }, { "start": 56, "end": 70, "text": "(Broder, 1997)", "ref_id": "BIBREF1" }, { "start": 191, "end": 209, "text": "(Lee et al., 2021;", "ref_id": null }, { "start": 210, "end": 238, "text": "Versley and Panchenko, 2012;", "ref_id": "BIBREF47" }, { "start": 239, "end": 260, "text": "Gabriel et al., 2018;", "ref_id": "BIBREF11" }, { "start": 261, "end": 282, "text": "Gyawali et al., 2020)", "ref_id": "BIBREF12" }, { "start": 841, "end": 864, "text": "(Leskovec et al., 2014)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Deduplication", "sec_num": "3.2" }, { "text": "In our experiments, we represented each page as a set of 3 consecutive tokens (n-grams). We worked with a document signature length of 100, and 20 bands with 5 rows as parameters for LSH. 6 JavaScript Object Notation for Linked Data 7 More information on FAQPage markup 8 Questions need to contain a question mark (including the Arabic question mark) to avoid keyword questions. Question and answer cannot start with a \"<\", \"{\", or \"[\" to remove \"code like\" data.", "cite_spans": [ { "start": 188, "end": 189, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Deduplication", "sec_num": "3.2" }, { "text": "9 Does Ritz Paris have an airport shuttle? Does Four Seasons Hotel George V have an airport shuttle?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduplication", "sec_num": "3.2" }, { "text": "Name | Size | Lang. | Domain | Source | Q>1 | A>1
Usenet (Hammond et al., 1995) | - | En | Multi-domain | Usenet | No | No
FAQIR (Karan and \u0160najder, 2016) | 4,313 | En | Maintenance | Yahoo! Answers | Yes | Yes
StackFAQ (Karan and \u0160najder, 2018) | 719 | En | Web apps | StackExchange | Yes | Yes
InsuranceQA (Feng et al., 2015) | 12,887 | En | Insurance | Insurance Library | No | Yes
CQA-QL (Nakov et al., 2015) | 2,600 | En | Qatar | Qatar living forum | No | Yes
Fatwa corpus (Nakov et al., 2015) | 1,300 | Ar | Quran | Fatwa website | No | Yes
M-FAQ (ours) | 6,134,533 | Multi | Multi | Multi | No | No
Table 2 : List of the common datasets used in FAQ retrieval. Size is the number of pairs available. Q>1 denotes whether the dataset has multiple available questions for a single answer (i.e., whether the dataset has paraphrases), while A>1 denotes whether the dataset has multiple answers for a given question.", "cite_spans": [ { "start": 40, "end": 62, "text": "(Hammond et al., 1995)", "ref_id": "BIBREF13" }, { "start": 99, "end": 124, "text": "(Karan and \u0160najder, 2016)", "ref_id": "BIBREF17" }, { "start": 178, "end": 203, "text": "(Karan and \u0160najder, 2018)", "ref_id": "BIBREF18" }, { "start": 254, "end": 273, "text": "(Feng et al., 2015)", "ref_id": "BIBREF9" }, { "start": 326, "end": 346, "text": "(Nakov et al., 2015)", "ref_id": "BIBREF33" }, { "start": 401, "end": 421, "text": "(Nakov et al., 2015)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 428, "end": 435, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Name", "sec_num": null }, { "text": "These parameters ensure a 99.6% probability that documents with a Jaccard similarity of 0.75 will be identified. 
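This figure follows from the standard LSH banding analysis (Leskovec et al., 2014): with $b$ bands of $r$ rows, a pair of pages with Jaccard similarity $s$ collides in at least one band with probability $1 - (1 - s^r)^b$. A quick check of the quoted number:

```python
# b bands of r rows over a signature of length b * r = 100.
b, r = 20, 5
s = 0.75  # target Jaccard similarity
p_detect = 1 - (1 - s**r) ** b
print(f"{p_detect:.3%}")  # -> 99.557%, i.e. the ~99.6% quoted above
```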
We subsequently compute the true Jaccard similarity for all matches.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name", "sec_num": null }, { "text": "We follow the approach of NearDup (Lee et al., 2021) and subsequently create a graph of documents. Each node in the graph is an FAQ page, and two nodes share an edge if their true Jaccard similarity is larger than 0.75. We then compute all the independent sub-graphs (connected components), each representing a group of duplicated pages. We only keep one page per sub-graph.", "cite_spans": [ { "start": 34, "end": 52, "text": "(Lee et al., 2021)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Name", "sec_num": null }, { "text": "Using this method, we trimmed the number of FAQ pages from 24M to 1M.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Name", "sec_num": null }, { "text": "After deduplication, our dataset contains around 6M FAQ pairs coming from 1M different web pages, spread over 26K root web domains. 10 This is significantly bigger than other FAQ datasets publicly available at the time of writing (see Table 2 for comparison).", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 240, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Description", "sec_num": "3.3" }, { "text": "Our dataset is composed of pairs of FAQs grouped by language and source page (URL). We collected data in 21 different languages. 11 The most common one is English, with 58% of the FAQ pairs, followed by German and Spanish with 13% and 8% respectively. 10 We define a root web domain as the last substring before the extension (e.g. TripAdvisor is the root web domain in fr.tripadvisor.com). In other words, we strip the extension and any subdomain. 11 We did not target specific languages; however, we removed languages with fewer than 250 pairs. Common languages such as Chinese, Hindi, Arabic and Japanese are missing. Although we cannot establish the exact reason, we think it may be due to our initial filtering or to the fact that JSON-LD markup is not widely used in these languages.", "cite_spans": [ { "start": 129, "end": 131, "text": "11", "ref_id": null }, { "start": 252, "end": 254, "text": "10", "ref_id": null }, { "start": 449, "end": 451, "text": "11", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Description", "sec_num": "3.3" }, { "text": "For a given language, the target size of the validation set is equal to 10% of the total number of pairs. However, two features of our dataset call for a more fine-grained approach.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training and validation sets", "sec_num": "3.4" }, { "text": "Even though we deduplicated the dataset, FAQ pages tend to originate from the same root domain. As an example, kayak (kayak.com, kayak.es, etc.) is the largest contributor to the dataset. While this is not a problem for the training set (one can always restrict the number of pages per domain), it is an issue for the validation set, as we want to assess the quality of the model on a broad set of topics. Having several large root domain contributors skews the dataset towards these topics. We make the simplifying assumption that different web domains have different topics of interest. 
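The grouping above relies on the notion of a root web domain (footnote 10: strip the extension and any subdomain). A minimal sketch of such a helper; the use of the tldextract package is our assumption, as the paper does not name its tooling:

```python
import tldextract  # assumption: the paper does not specify how root domains are extracted

def root_domain(host):
    """'fr.tripadvisor.com' -> 'tripadvisor'; 'domain.co.uk' -> 'domain'."""
    return tldextract.extract(host).domain

# The public-suffix list keeps help.domain.com and domain.co.uk in the same group
# (the concern raised in footnote 12).
assert root_domain("help.domain.com") == root_domain("domain.co.uk") == "domain"
```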
Research on the true topic distribution is left for future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Root domain distribution", "sec_num": "3.4.1" }, { "text": "We artificially increased the topic breadth of the validation set by restricting the contribution of each root domain. In the validation set, a single root domain can contribute at most 3 FAQ pages. This method reduces the contribution of the largest domain from 21% in the training set to 3% in the validation set. Furthermore, we make sure there is no overlap of root domains between the training and validation sets. 12", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Root domain distribution", "sec_num": "3.4.1" }, { "text": "The distribution of the number of pairs per page is highly uneven (see Figure 1). Around 50% of the pages have 5 or fewer pairs per page. Intuitively, we prefer pages with a higher number of FAQs, as it is harder to pick the right answer amongst 100 candidates than amongst 5. We thus artificially increased the difficulty of the validation set by first selecting pages with a higher number of FAQ pairs per page. See Figure 1 for a comparison between the training and validation sets.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 79, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 407, "end": 415, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Pairs per page concentration", "sec_num": "3.4.2" }, { "text": "The fact that our dataset is multilingual can lead to issues of cross-lingual leakage. Having pages from expedia.fr in the training set and pages from expedia.es in the validation set can overstate the performance of the models. We avoid such problems by restricting the validation set to root domains associated with only one language (e.g. expedia would be excluded from the validation set because it is associated with French and Spanish pages).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-lingual leakage", "sec_num": "3.4.3" }, { "text": "In this section, we describe the FAQ retrieval models used in our experiments. Let $P$ be the set of all user queries and $F = \{(q_1, a_1), \dots, (q_n, a_n)\}$ be the set of all FAQ pairs for a given domain. An FAQ retrieval model takes as input a user's query $p_i \in P$ and an FAQ pair $f_j \in F$, and outputs a relevance score $h(p_i, f_j)$ for $f_j$ with respect to $p_i$. However, our dataset does not contain live user queries (or paraphrases); we thus use the questions as queries, $P = \{q_1, \dots, q_n\}$, and restrict the FAQ set to the answers, $F = \{a_1, \dots, a_n\}$. The task becomes to rank the answers according to the questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models", "sec_num": "4" }, { "text": "We experimented with several baselines: two unsupervised and one supervised.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.1" }, { "text": "The traditional information retrieval method (Salton et al., 1975) uses a vector representation for $q_i$ and $a_i$ and computes a dot-product as the similarity relevance score $h(q_i, a_i)$. We use n-grams of size (1, 3) and fit one model per FAQ page.", "cite_spans": [ { "start": 45, "end": 65, "text": "(Salton et al., 1975", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "TF-IDF", "sec_num": "4.1.1" }, { "text": "Encoding the semantics of a question $q_i$ and an answer $a_i$ can be achieved with the Universal Sentence Encoder (Cer et al., 2018). 
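Returning briefly to the TF-IDF baseline of Section 4.1.1, a minimal sketch using scikit-learn (the function name is ours; the paper does not publish this exact code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def tfidf_scores(questions, answers):
    """One TF-IDF model per FAQ page: word n-grams of size 1-3, dot-product scoring."""
    vectorizer = TfidfVectorizer(ngram_range=(1, 3))
    vectorizer.fit(questions + answers)
    Q = vectorizer.transform(questions)  # one row per query q_i
    A = vectorizer.transform(answers)    # one row per candidate answer a_i
    return (Q @ A.T).toarray()           # scores[i, j] ~ h(q_i, a_j)
```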
The Universal Sentence Encoder works on both monolingual and multilingual data. We encode each question and answer independently, and then compute the dot-product of the questions' and answers' representations.", "cite_spans": [ { "start": 111, "end": 129, "text": "(Cer et al., 2018)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Universal Sentence Encoder", "sec_num": "4.1.2" }, { "text": "Dense Passage Retrieval (DPR) (Karpukhin et al., 2020) is a state-of-the-art method for passage retrieval. It uses a bi-encoder to encode questions and passages into a shared embedding space. We fine-tune DPR on our dataset using the same procedure described in Section 4.2.2.", "cite_spans": [ { "start": 30, "end": 54, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Dense Passage Retrieval (DPR)", "sec_num": "4.1.3" }, { "text": "Bi-encoders encode questions $q_i$ and answers $a_i$ independently and output a fixed $d$-dimensional representation for each query and answer. The question and answer encoders can be shared or independent. 13 At run-time, new queries are encoded with the encoder, and the top-k closest answers are returned. The representations of the answers can be computed once and cached for later use. Similarity is typically computed using a dot product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "XLM-Roberta as bi-encoders", "sec_num": "4.2" }, { "text": "State-of-the-art encoders such as RoBERTa (Liu et al., 2019) and BERT (Devlin et al., 2018) are trained for English only. As our dataset is multilingual, we opted for XLM-RoBERTa (Conneau et al., 2019), which was trained using masked language modeling on one hundred languages, using more than 2TB of filtered CommonCrawl data. This choice allows us to leverage the size of the English data for less represented languages.", "cite_spans": [ { "start": 46, "end": 64, "text": "(Liu et al., 2019)", "ref_id": "BIBREF28" }, { "start": 74, "end": 95, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 182, "end": 204, "text": "(Conneau et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Multilingual", "sec_num": "4.2.1" }, { "text": "Given pairs of questions and answers, along with a list of non-relevant answers, the bi-encoder model is trained to minimize the negative log-likelihood of picking the positive answer amongst the non-relevant answers. Non-relevant answers can be divided into in-batch negatives and hard negatives.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2.2" }, { "text": "In-batch negatives In-batch negatives are the other answers from the batch; including them in the set of non-relevant answers is extremely efficient, as their representations are already computed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2.2" }, { "text": "Hard negatives Hard negatives are close but incorrect answers to the questions. Including them improves the performance of retrieval models (Karpukhin et al., 2020; Xiong et al., 2020). Hard negatives can come either from a standard retrieval system such as BM25, or from an earlier iteration of the dense model (Xiong et al., 2020; Oguz et al., 2021). The structure of our dataset, pages of FAQs, facilitates the search for hard negatives. As an example, in Table 1 three out of four answers share the term COVID-19. 
The model now has to understand the semantics of sentences instead of matching on shared named entities. By including all the pairs of the same page in the same training batch, we ensure that in-batch negatives act as hard negatives. 14", "cite_spans": [ { "start": 140, "end": 164, "text": "(Karpukhin et al., 2020;", "ref_id": "BIBREF19" }, { "start": 165, "end": 184, "text": "Xiong et al., 2020)", "ref_id": null }, { "start": 308, "end": 328, "text": "(Xiong et al., 2020;", "ref_id": null }, { "start": 329, "end": 347, "text": "Oguz et al., 2021)", "ref_id": null } ], "ref_spans": [ { "start": 455, "end": 462, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Training", "sec_num": "4.2.2" }, { "text": "[Figure 2 appeared here; only its caption is recoverable.] Figure 2: Diagram of our architecture. A shared encoder encodes the questions and the answers independently. Each question's representation (vector) is compared to each answer's representation from the same batch using a dot-product.", "cite_spans": [], "ref_spans": [ { "start": 26, "end": 34, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "Training", "sec_num": "4.2.2" }, { "text": "Multilingual Although XLM-RoBERTa is multilingual, we do not expect the model to perform cross-lingual retrieval (i.e. using one language for the query and another for the answer). We make sure that each batch is composed of pairs from the same language. This increases the difficulty of the task: otherwise, the model could rely on the language of the answers as a differentiating factor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training", "sec_num": "4.2.2" }, { "text": "In this section, we evaluate the retrieval performance of our model on MFAQ. In all our experiments, we use three metrics to evaluate the performance: precision-at-one (P@1), mean reciprocal rank (MRR), and recall-at-5 (R@5). For space reasons, we only report MRR in the main text; the full results are available in the annex. We used the same parameters for all experiments unless mentioned otherwise. 15 We insert a special token before questions to let the shared encoder know it is encoding a question; answers are likewise prepended with their own special token. All of our experiments use a subset of the training set (only one page per domain), as this technique achieves better results. Refer to Section 5.3 for more information. We start by studying the performance of multilingual models, then compare them against monolingual models. 15 We used a batch size of 800; sequences were limited to 128 tokens (capturing the entirety of 90% of the dataset); an Adam optimizer with a learning rate of 0.0001 (warmup of 1000 steps); dropout of 25%.", "cite_spans": [ { "start": 406, "end": 408, "text": "15", "ref_id": null }, { "start": 604, "end": 606, "text": "15", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5" }, { "text": "We present in Table 4 a summary of the results of our multilingual training. The model is trained concurrently on the 21 available languages. XLM-RoBERTa achieves a higher MRR on every language compared to the baselines. 
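For reference, the training objective shared by the bi-encoders compared here (Section 4.2.2) reduces to a cross-entropy over in-batch dot-product scores; a minimal PyTorch sketch, with tensor names of our choosing:

```python
import torch
import torch.nn.functional as F

def in_batch_loss(question_emb, answer_emb):
    """question_emb, answer_emb: (batch, dim) outputs of the shared encoder.
    Within a batch built from a single FAQ page, every other answer acts as a hard negative."""
    scores = question_emb @ answer_emb.T                          # (batch, batch) dot products
    targets = torch.arange(scores.size(0), device=scores.device)  # positives on the diagonal
    return F.cross_entropy(scores, targets)                       # negative log-likelihood
```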
Low-resource languages achieve relatively high scores, which could indicate inter-language transfer learning.", "cite_spans": [], "ref_spans": [ { "start": 14, "end": 21, "text": "Table 4", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Multilingual", "sec_num": "5.1" }, { "text": "Next, we study whether a collection of monolingual models is better suited than a single multilingual model. We use language-specific BERT-like models for each language. The list of BERT models per language is available in the annex. We followed the same procedure as described in Section 4.2, except for the encoder, which is language-specific.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual", "sec_num": "5.2" }, { "text": "We limited our study of monolingual models to the ten largest languages of MFAQ. We chose these languages as they have sufficient training examples, and pre-trained BERT-like models are readily available for them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Monolingual", "sec_num": "5.2" }, { "text": "The results in Table 5 indicate that the multilingual model outperforms monolingual models in all cases, except for English. These results indicate that leveraging additional languages is beneficial for the task of FAQ retrieval, especially for languages with fewer resources available. Interestingly, RoBERTa slightly beats DPR in English. This underperformance could be explained by the difference in batch size: because of the dual-encoder nature of DPR, we had to reduce the batch size to 320, compared to 800 for RoBERTa.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 5", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Monolingual", "sec_num": "5.2" }, { "text": "Our training procedure ensures that the model never has to use language as a cue to select the appropriate answer: batches of training data all share the same language. We tested the cross-lingual retrieval capabilities of our multilingual model by translating the queries to English while keeping the answers in the original language. The French performance drops from 80.7 to 78.2, which is still better than the unsupervised baselines. The full results are presented in Table 6. Subset of training data We tested the effect of limiting the number of FAQ pages per domain by restricting the training set to one page per web domain. Using this technique, we achieved an average MRR of 80.8, whereas using all the training data reaches an average MRR of only 76.7. Filtering the training set flattens the topic distribution and better matches the validation set. Another possible approach is to randomly select a given page from each domain at every epoch; this technique would act as a natural regularization. It is left for future work.", "cite_spans": [], "ref_spans": [ { "start": 473, "end": 481, "text": "Table 6.", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Cross-lingual", "sec_num": "5.3" }, { "text": "In this section, we dive into the model's predictions and try to understand why and where it goes wrong. We do so by focusing on a single FAQ page from the admission center of the Tepper School of Business. 16 The FAQs are displayed in Table 8. Keyword search We replace some questions with a single keyword, reducing questions 12, 14, 16 and 20 to \"cohort\", \"payment plan\", \"soldier veteran\" and \"technical requirements\". 
In all cases, the model guessed correctly, showing that the model can perform keyword-based search.", "cite_spans": [ { "start": 207, "end": 209, "text": "16", "ref_id": null } ], "ref_spans": [ { "start": 236, "end": 243, "text": "Table 8", "ref_id": "TABREF12" } ], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "6" }, { "text": "Although it can cope with some synonyms (replacing activities with experiences), this qualitative analysis shows that our model is overly reliant on keywords for matching questions and answers. Further research on adversarial training for FAQ retrieval is needed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualitative analysis", "sec_num": "6" }, { "text": "First, important non-Indo-European languages such as Chinese, Hindi, or Japanese are missing from this dataset. Future work is needed to improve data collection in these languages. Second, we did not evaluate the model on a real-life FAQ retrieval dataset (with user queries). Future work is needed to see whether our model can perform question-to-question retrieval, or whether it needs further training to do so. A linguistic study could analyze the model's strengths and weaknesses by studying its performance across types of questions, answers, and entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "7" }, { "text": "In this work, we presented the first publicly available multilingual dataset of FAQs. Its size and breadth of languages are significantly larger than those of other available datasets. While language-specific BERT-like models can be applied to the task of FAQ retrieval, we showed it is beneficial to use a multilingual model and train on all languages at once. This method of training outperforms all monolingual models, except for English. Our qualitative analysis reveals that our model is overly reliant on keywords to match questions and answers. [Table 7 appeared here; only its caption is recoverable.] Table 7 : Results of our experiments on MFAQ. XLM-RoBERTa (1 page per domain) is consistently better than the rest, except for English, where a RoBERTa model achieves a higher MRR. P@1 = Precision-at-1 (accuracy), MRR = Mean Reciprocal Rank, R@5 = Recall-at-5, One page per domain = subset of the training set. 1 Are the hours flexible enough for full-time working adults? Yes, the MSBA program accommodates students working full-time. Required weekly live sessions, lasting 75 minutes, are held in the evening, and the three residential components, two strongly recommended and one optional, take place over weekends. Students complete all other coursework on their own schedule, but must adhere to deadlines and be prepared to participate in weekly live sessions.", "cite_spans": [], "ref_spans": [ { "start": 622, "end": 629, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "2 Can I take a course from a third-party provider, like Lynda or Coursera, to prepare for the programming requirements of this program? Our goal is to make sure that everyone entering the program has the necessary background to be successful. 
We strongly recommend that applicants who feel they need additional preparation in programming languages take a for-credit course from an accredited two-or four-year institution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "3 Can I transfer credits into the program?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "No, the Tepper School does not accept transfer credits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "8" }, { "text": "Can the GMAT or GRE requirement be waived?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "No, these test scores are required.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "4", "sec_num": null }, { "text": "Do I have to maintain a certain GPA in the program to graduate? Yes, MSBA degree candidates must maintain a minimum cumulative GPA of 3.0 to graduate.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "5", "sec_num": null }, { "text": "Do you offer the opportunity to preview courses in your program to get a feel for what they are like? Yes we do. To preview one of our courses, please visit our Virtual Class Visit page. You'll be able to register to virtually participate in a course of your choosing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "6", "sec_num": null }, { "text": "How do I learn more about the online learning environment? To preview one of our courses, please visit our Virtual Class Visit page. You'll be able to view upcoming courses and register to virtually attend a course of your choosing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "7", "sec_num": null }, { "text": "How many hours per week should be dedicated to coursework? Students take two classes at a time and should expect to spend at least 10 hours on each course, or 20 hours total for the week. Coursework includes live synchronous meetings, assignments, projects, readings, and quizzes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "8", "sec_num": null }, { "text": "If I need to withdraw from the program, will I get a refund?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "If I need to withdraw from the program, will I get a refund?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "10 If I'm already proficient in basic programming and probability/statistics, do I have to take these courses? Yes, the 46-880 Introduction to Probability and Statistics and 46-881 Programming in R and Python courses are required for all MSBA students. These courses ensure that all students have the necessary skills and knowledge to succeed in courses that follow. For more information, visit the Curriculum page on our website.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "9", "sec_num": null }, { "text": "Is the MSBA offered exclusively on campus? No, the MSBA degree is offered only online, with three optional on-campus experiences. Though they all are optional, we strongly recommend that students attend the BaseCamp and Capstone Project experiences, which occur at the beginning and end of the degree program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "12 Is the MSBA program structured in cohorts? 
Yes, the part-time, online MSBA is structured in cohorts to optimize student interaction and success in the program.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "11", "sec_num": null }, { "text": "Is the Tepper School participating in the Yellow Ribbon Program? Yes, the Tepper School is participating in the Yellow Ribbon Program. For more information, please visit the Tuition page or contact Mike Danko at uro-vaedbenefits@andrew.cmu.edu.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "13", "sec_num": null }, { "text": "14 Is there a Tuition Payment Plan available? Yes, for more information about a monthly payment plan and debt minimization services, please review our payment options.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "13", "sec_num": null }, { "text": "It's a part-time online program, but are there any on-campus opportunities for students? We have three on-campus experiences. The first is an orientation basecamp, where the students are introduced to the program, interact with faculty, and learn about their cohort. The second, an immersive analytics experience led by top CMU faculty, takes place mid-program. We end the program with a capstone experience where students can present their work to real-world clients and celebrate the end of the program. For more information, visit the On-Campus Experiences page on our website.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "15", "sec_num": null }, { "text": "I'm an active duty soldier/veteran. Am I eligible for an application fee waiver? Yes, as a GMAC military-friendly business school, we waive the $125 application fee for active duty U.S. military personnel, veterans and retirees. Please contact Mike Danko at urovaedbenefits@andrew.cmu.edu to discuss the fee waiver.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "16", "sec_num": null }, { "text": "Must international students come to campus?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "17", "sec_num": null }, { "text": "We recommend attendance at the on-campus experiences, but students who are unable to attend may participate remotely and still meet the requirements of the program. Please note that because the program is delivered online, enrollment in the MSBA will not qualify students for a student visa to enter the United States.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "17", "sec_num": null }, { "text": "What are some examples of roles a graduate could pursue after the program? Business analytics professionals hold a range of positions across sectors and industries. They have titles such as business intelligence analyst, operations research analyst, market research analyst and statistician. Other job titles for these professionals are available here.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "18", "sec_num": null }, { "text": "What are the programming languages that I should have experience in before applying to the program? Basic programming knowledge in a modern language is required for admission. You do not need to be familiar with any specific language or build advanced programming skills before applying to the MSBA program. Your courses in the program will introduce you to relevant languages and provide hands-on experience.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "19", "sec_num": null }, { "text": "What are the technical requirements for the MSBA program? 
All students must have access to the following technologies in order to participate in the program: Laptop with the following requirements: -Windows -Intel Core i5 processor or higher; 8GB RAM, 256+ hard drive capacity -Macintosh -8GB RAM, 256+ hard drive capacity, Apple Boot Camp -Ability to run Microsoft Windows (run virtually for Macintosh computers) -Ability to install software locally (not cloud based) Full administrative rights for downloading software. -Camera and microphone for remote conferencing. Broadband internet connection (at least 5 MBPS) Ability to access the CMU Learning Management System, Canvas Ability to access the MSBA Web Conferencing Software, Vidyo Continual and unfettered use of the required IT environment, including uninterrupted access to a broadband internet connection, are integral to the program and to each student's ability to fully participate in, and complete, each aspect of the program and the program in its entirety. Students who do not have such access, and/or are not able to maintain such access, may be advised by a representative of the program to withdraw from the program, following the university's prescribed process for withdrawal, and subject to the tuition refund policy as outlined on the Student Financial Services website.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "20", "sec_num": null }, { "text": "What career resources are available for MSBA students and alumni? The Master's Career Center helps students develop strategies focused on their career needs through a variety of services. For example, the career center hosts workshops and webinars in job search fundamentals, such as resume writing, interviewing, and networking. One-on-one career coaching is offered for individualized career planning. Our coaches are steeped in experience in various industries and functions and can support a variety of student interests. Sessions are delivered virtually during times convenient to working professionals. Tepper leverages an extensive network of Fortune 500 companies to identify opportunities for MSBA candidates.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "21", "sec_num": null }, { "text": "What happens if I need to defer starting or withdraw from the program? Deferrals are granted only if an applicant must complete military service or has an extreme emergency. Deposits are refunded in these instances. Students are re-admitted the following year and must submit their deposit before the deadline for their start date. A withdrawal from the Tepper School or Carnegie Mellon University indicates that a student has no intention to return to the program. This process can be initiated by first meeting with an academic advisor to terminate a student's Campus ID and email account. A student who leaves the Tepper School or Carnegie Mellon with a sincere intention to return may petition for a leave of absence (LOA) for up to two calendar years. This process can be initiated by first meeting with an academic advisor to complete the required paperwork.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "22", "sec_num": null }, { "text": "What is the average Quant and Verbal scores for the GRE and GMAT? There is no average score expectation. 
The test scores are simply one component of the multifaceted admissions process that we consider when making an admissions decision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "23", "sec_num": null }, { "text": "What separates the Tepper School of Business' online MSBA program from other MSBA programs, either online or on-campus? The Tepper School of Business is globally renowned for its analytical approach to business problem solving. It is an integral part of Carnegie Mellon University, a top-tier research university that has become the center for disciplines including data science, robotics, business intelligence and additive manufacturing. Several faculty members of the online MS in Business Analytics also hold appointments in other schools including Carnegie Mellon's top-ranked School of Computer Science. Our MSBA program provides students with exceptionally robust coursework in business analytics techniques, with a special focus on machine learning and optimization. All of the advanced analytics skills we teach are delivered in a business context, ensuring that students graduate knowing how to efficiently, effectively and creatively apply their analytics expertise to business problems. Furthermore, the Tepper School's Accelerate Leadership Center offers students the opportunity to improve their leadership, inter-personal and communication skills, through online assessments and one-on-one coaching. Additionally, the program's on-campus experiences provide opportunities for online students to interact closely with faculty, each other and industry professionals working at companies that look to Carnegie Mellon and the Tepper School for high-caliber talent.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "24", "sec_num": null }, { "text": "What time(s) do the synchronous sessions take place? The weekly live sessions are in the evening (U.S. Eastern Time) and typically last 75 minutes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "25", "sec_num": null }, { "text": "What types of financial aid or scholarships are available to online students? Students may be eligible to take out federal and/or private education loans to cover tuition and other education-related costs. Please view our Tuition page for details. At this time, the Tepper School does not provide scholarships for the MSBA program. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "26", "sec_num": null }, { "text": "https://commoncrawl.org/about/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use the root domain instead of the regular domain name to avoid having help.domain.com in the training set and domain.co.uk in the validation set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We use a shared encoder, which means we use the same network to compute the representation for questions and answers. DPR uses independent encoders.14 To create our batches of training data, we incrementally augment the batch with pairs of a given page. 
When the batch", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "It was the first page with less than 25 pairs to end with a .edu extension.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "replacing opportunities with events does not work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research received funding from the Flemish Government under the \"Onderzoeksprogramma Artifici\u00eble Intelligentie (AI) Vlaanderen\" programme.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "9" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Bridging the lexical chasm: statistical approaches to answer-finding", "authors": [ { "first": "Adam", "middle": [], "last": "Berger", "suffix": "" }, { "first": "Rich", "middle": [], "last": "Caruana", "suffix": "" }, { "first": "David", "middle": [], "last": "Cohn", "suffix": "" } ], "year": 2000, "venue": "Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval", "volume": "", "issue": "", "pages": "192--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Adam Berger, Rich Caruana, David Cohn, Dayne Fre- itag, and Vibhu Mittal. 2000. Bridging the lexical chasm: statistical approaches to answer-finding. In Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 192-199.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "On the resemblance and containment of documents", "authors": [ { "first": "A", "middle": [ "Z" ], "last": "Broder", "suffix": "" } ], "year": 1997, "venue": "Proceedings. Compression and Complexity of SEQUENCES 1997 (Cat. No.97TB100171)", "volume": "", "issue": "", "pages": "21--29", "other_ids": { "DOI": [ "10.1109/SEQUEN.1997.666900" ] }, "num": null, "urls": [], "raw_text": "A.Z. Broder. 1997. On the resemblance and con- tainment of documents. In Proceedings. Compres- sion and Complexity of SEQUENCES 1997 (Cat. No.97TB100171), pages 21-29.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Question answering from frequently asked question files: Experiences with the faq finder system", "authors": [ { "first": "Robin", "middle": [ "D" ], "last": "Burke", "suffix": "" }, { "first": "Kristian", "middle": [ "J" ], "last": "Hammond", "suffix": "" }, { "first": "Vladimir", "middle": [], "last": "Kulyukin", "suffix": "" }, { "first": "Steven", "middle": [ "L" ], "last": "Lytinen", "suffix": "" }, { "first": "Noriko", "middle": [], "last": "Tomuro", "suffix": "" }, { "first": "Scott", "middle": [], "last": "Schoenberg", "suffix": "" } ], "year": 1997, "venue": "", "volume": "18", "issue": "", "pages": "57--57", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robin D. Burke, Kristian J. Hammond, Vladimir Ku- lyukin, Steven L. Lytinen, Noriko Tomuro, and Scott Schoenberg. 1997. Question answering from fre- quently asked question files: Experiences with the faq finder system. 
18(2):57-57.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St", "suffix": "" }, { "first": "Noah", "middle": [], "last": "John", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "", "middle": [], "last": "Yuan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Con- stant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Similarity estimation techniques from rounding algorithms", "authors": [ { "first": "Moses", "middle": [ "S" ], "last": "Charikar", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the Thiry-Fourth Annual ACM Symposium on Theory of Computing, STOC '02", "volume": "", "issue": "", "pages": "380--388", "other_ids": { "DOI": [ "10.1145/509907.509965" ] }, "num": null, "urls": [], "raw_text": "Moses S. Charikar. 2002. Similarity estimation tech- niques from rounding algorithms. In Proceedings of the Thiry-Fourth Annual ACM Symposium on The- ory of Computing, STOC '02, page 380-388, New York, NY, USA. Association for Computing Machin- ery.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bart for knowledge grounded conversations", "authors": [ { "first": "Ehsan", "middle": [], "last": "Maxime De Bruyn", "suffix": "" }, { "first": "Jeska", "middle": [], "last": "Lotfi", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Buhmann", "suffix": "" }, { "first": "", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2020, "venue": "Converse@ KDD", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2020. Bart for knowledge grounded conversations. In Converse@ KDD.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Convert for faq answering", "authors": [ { "first": "Ehsan", "middle": [], "last": "Maxime De Bruyn", "suffix": "" }, { "first": "Jeska", "middle": [], "last": "Lotfi", "suffix": "" }, { "first": "Walter", "middle": [], "last": "Buhmann", "suffix": "" }, { "first": "", "middle": [], "last": "Daelemans", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Maxime De Bruyn, Ehsan Lotfi, Jeska Buhmann, and Walter Daelemans. 2021. 
Convert for faq answer- ing.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. CoRR, abs/1810.04805.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Applying deep learning to answer selection: A study and an open task", "authors": [ { "first": "Minwei", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Bing", "middle": [], "last": "Xiang", "suffix": "" }, { "first": "Michael", "middle": [ "R" ], "last": "Glass", "suffix": "" }, { "first": "Lidan", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2015, "venue": "2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)", "volume": "", "issue": "", "pages": "813--820", "other_ids": {}, "num": null, "urls": [], "raw_text": "Minwei Feng, Bing Xiang, Michael R. Glass, Lidan Wang, and Bowen Zhou. 2015. Applying deep learn- ing to answer selection: A study and an open task. In 2015 IEEE Workshop on Automatic Speech Recog- nition and Understanding (ASRU), pages 813-820. IEEE.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "KeLP at SemEval-2016 task 3: Learning semantic relations between questions and answers", "authors": [ { "first": "Simone", "middle": [], "last": "Filice", "suffix": "" }, { "first": "Danilo", "middle": [], "last": "Croce", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)", "volume": "", "issue": "", "pages": "1116--1123", "other_ids": { "DOI": [ "10.18653/v1/S16-1172" ] }, "num": null, "urls": [], "raw_text": "Simone Filice, Danilo Croce, Alessandro Moschitti, and Roberto Basili. 2016. KeLP at SemEval-2016 task 3: Learning semantic relations between ques- tions and answers. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1116-1123. Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Identifying and characterizing highly similar notes in big clinical note datasets", "authors": [ { "first": "A", "middle": [], "last": "Rodney", "suffix": "" }, { "first": "Tsung-Ting", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Kuo", "suffix": "" }, { "first": "Chun-Nan", "middle": [], "last": "Mcauley", "suffix": "" }, { "first": "", "middle": [], "last": "Hsu", "suffix": "" } ], "year": 2018, "venue": "Journal of biomedical informatics", "volume": "82", "issue": "", "pages": "63--69", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rodney A Gabriel, Tsung-Ting Kuo, Julian McAuley, and Chun-Nan Hsu. 2018. Identifying and char- acterizing highly similar notes in big clinical note datasets. 
Journal of biomedical informatics, 82:63- 69.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Deduplication of scholarly documents using locality sensitive hashing and word embeddings", "authors": [ { "first": "Bikash", "middle": [], "last": "Gyawali", "suffix": "" }, { "first": "Lucas", "middle": [], "last": "Anastasiou", "suffix": "" }, { "first": "Petr", "middle": [], "last": "Knoth", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bikash Gyawali, Lucas Anastasiou, and Petr Knoth. 2020. Deduplication of scholarly documents using locality sensitive hashing and word embeddings.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "FAQ finder: a case-based approach to knowledge navigation", "authors": [ { "first": "Kristian", "middle": [], "last": "Hammond", "suffix": "" }, { "first": "Robin", "middle": [], "last": "Burke", "suffix": "" }, { "first": "Charles", "middle": [], "last": "Martin", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Lytinen", "suffix": "" } ], "year": 1995, "venue": "Proceedings the 11th Conference on Artificial Intelligence for Applications", "volume": "", "issue": "", "pages": "80--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristian Hammond, Robin Burke, Charles Martin, and Steven Lytinen. 1995. FAQ finder: a case-based ap- proach to knowledge navigation. In Proceedings the 11th Conference on Artificial Intelligence for Appli- cations, pages 80-86. IEEE.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Convert: Efficient and accurate conversational representations from transformers", "authors": [ { "first": "Matthew", "middle": [], "last": "Henderson", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Vulic", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Henderson, I\u00f1igo Casanueva, Nikola Mrk- sic, Pei-Hao Su, Tsung-Hsien Wen, and Ivan Vulic. 2019. Convert: Efficient and accurate conversa- tional representations from transformers. CoRR, abs/1911.03688.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Retrieving answers from frequently asked questions pages on the web", "authors": [ { "first": "Valentin", "middle": [], "last": "Jijkoun", "suffix": "" }, { "first": "", "middle": [], "last": "Maarten De Rijke", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 14th ACM international conference on Information and knowledge management", "volume": "", "issue": "", "pages": "76--83", "other_ids": {}, "num": null, "urls": [], "raw_text": "Valentin Jijkoun and Maarten de Rijke. 2005. Retriev- ing answers from frequently asked questions pages on the web. 
In Proceedings of the 14th ACM inter- national conference on Information and knowledge management, pages 76-83.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1607.01759" ] }, "num": null, "urls": [], "raw_text": "Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Faqir-a frequently asked questions retrieval test collection", "authors": [ { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" } ], "year": 2016, "venue": "International Conference on Text, Speech, and Dialogue", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mladen Karan and Jan \u0160najder. 2016. Faqir-a fre- quently asked questions retrieval test collection. In International Conference on Text, Speech, and Dia- logue, pages 74-81. Springer.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Paraphrasefocused learning to rank for domain-specific frequently asked questions retrieval", "authors": [ { "first": "Mladen", "middle": [], "last": "Karan", "suffix": "" }, { "first": "Jan", "middle": [], "last": "\u0160najder", "suffix": "" } ], "year": 2018, "venue": "", "volume": "91", "issue": "", "pages": "418--433", "other_ids": { "DOI": [ "10.1016/j.eswa.2017.09.031" ] }, "num": null, "urls": [], "raw_text": "Mladen Karan and Jan \u0160najder. 2018. Paraphrase- focused learning to rank for domain-specific fre- quently asked questions retrieval. 91:418-433.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.550" ] }, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Highperformance FAQ retrieval using an automatic clustering method of query logs", "authors": [ { "first": "Harksoo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jungyun", "middle": [], "last": "Seo", "suffix": "" } ], "year": 2006, "venue": "", "volume": "42", "issue": "", "pages": "650--661", "other_ids": { "DOI": [ "10.1016/j.ipm.2005.04.002" ] }, "num": null, "urls": [], "raw_text": "Harksoo Kim and Jungyun Seo. 2006. High- performance FAQ retrieval using an automatic clus- tering method of query logs. 42(3):650-661.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Cluster-based FAQ retrieval using latent term weights", "authors": [ { "first": "Harksoo", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Jungyun", "middle": [], "last": "Seo", "suffix": "" } ], "year": 2008, "venue": "Conference Name: IEEE Intelligent Systems", "volume": "23", "issue": "", "pages": "58--65", "other_ids": { "DOI": [ "10.1109/MIS.2008.23" ] }, "num": null, "urls": [], "raw_text": "Harksoo Kim and Jungyun Seo. 2008. Cluster-based FAQ retrieval using latent term weights. 23(2):58- 65. Conference Name: IEEE Intelligent Systems.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Internet-augmented dialogue generation", "authors": [ { "first": "Mojtaba", "middle": [], "last": "Komeili", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better", "authors": [ { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Daphne", "middle": [], "last": "Ippolito", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Nystrom", "suffix": "" }, { "first": "Chiyuan", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Douglas", "middle": [], "last": "Eck", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck, Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data makes language models better.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Mining of Massive Datasets", "authors": [ { "first": "Jure", "middle": [], "last": "Leskovec", "suffix": "" }, { "first": "Anand", "middle": [], "last": "Rajaraman", "suffix": "" }, { "first": "Jeffrey", "middle": [ "David" ], "last": "Ullman", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jure Leskovec, Anand Rajaraman, and Jeffrey David Ullman. 2014. Mining of Massive Datasets, 2nd edi- tion. 
Cambridge University Press, USA.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A new dataset and analysis on organizational FAQs and user questions", "authors": [ { "first": "Guy", "middle": [], "last": "Lev", "suffix": "" }, { "first": "Michal", "middle": [], "last": "Shmueli-Scheuer", "suffix": "" }, { "first": "Achiya", "middle": [], "last": "Jerbi", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Guy Lev, Michal Shmueli-Scheuer, Achiya Jerbi, and David Konopnicki. 2020. orgFAQ: A new dataset and analysis on organizational FAQs and user ques- tions.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Aleksandra Piktus, Pontus Stenetorp, and Sebastian Riedel. 2021. Paq: 65 million probably-asked questions and what you can do with them", "authors": [ { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yuxiang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Linqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Pasquale", "middle": [], "last": "Minervini", "suffix": "" }, { "first": "Heinrich", "middle": [], "last": "K\u00fcttler", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.07033" ] }, "num": null, "urls": [], "raw_text": "Patrick Lewis, Yuxiang Wu, Linqing Liu, Pasquale Minervini, Heinrich K\u00fcttler, Aleksandra Piktus, Pon- tus Stenetorp, and Sebastian Riedel. 2021. Paq: 65 million probably-asked questions and what you can do with them. arXiv preprint arXiv:2102.07033.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Roberta: A robustly optimized BERT pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. 
CoRR, abs/1907.11692.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Unsupervised FAQ retrieval with question generation and BERT", "authors": [ { "first": "Yosi", "middle": [], "last": "Mass", "suffix": "" }, { "first": "Boaz", "middle": [], "last": "Carmeli", "suffix": "" }, { "first": "Haggai", "middle": [], "last": "Roitman", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "807--812", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yosi Mass, Boaz Carmeli, Haggai Roitman, and David Konopnicki. 2020. Unsupervised FAQ retrieval with question generation and BERT. In Proceedings of the 58th Annual Meeting of the Association for Com- putational Linguistics, pages 807-812.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Wordnet: a lexical database for english", "authors": [ { "first": "A", "middle": [], "last": "George", "suffix": "" }, { "first": "", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1995, "venue": "Communications of the ACM", "volume": "38", "issue": "11", "pages": "39--41", "other_ids": {}, "num": null, "urls": [], "raw_text": "George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39- 41.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Learning regular expressions to templatebased FAQ retrieval systems", "authors": [ { "first": "A", "middle": [], "last": "Moreo", "suffix": "" }, { "first": "E", "middle": [ "M" ], "last": "Eisman", "suffix": "" }, { "first": "J", "middle": [ "L" ], "last": "Castro", "suffix": "" }, { "first": "J", "middle": [ "M" ], "last": "Zurita", "suffix": "" } ], "year": 2013, "venue": "", "volume": "53", "issue": "", "pages": "108--128", "other_ids": { "DOI": [ "10.1016/j.knosys.2013.08.018" ] }, "num": null, "urls": [], "raw_text": "A. Moreo, E. M. Eisman, J. L. Castro, and J. M. Zu- rita. 2013. Learning regular expressions to template- based FAQ retrieval systems. 53:108-128.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Exploiting syntactic and shallow semantic kernels for question answer classification", "authors": [ { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Silvia", "middle": [], "last": "Quarteroni", "suffix": "" }, { "first": "Roberto", "middle": [], "last": "Basili", "suffix": "" }, { "first": "Suresh", "middle": [], "last": "Manandhar", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "776--783", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting syntactic and shallow semantic kernels for question answer classification. In Proceedings of the 45th An- nual Meeting of the Association of Computational Linguistics, pages 776-783. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "SemEval-2015 task 3: Answer selection in community question answering", "authors": [ { "first": "Preslav", "middle": [], "last": "Nakov", "suffix": "" }, { "first": "Llu\u00eds", "middle": [], "last": "M\u00e0rquez", "suffix": "" }, { "first": "Walid", "middle": [], "last": "Magdy", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Moschitti", "suffix": "" }, { "first": "Jim", "middle": [], "last": "Glass", "suffix": "" }, { "first": "Bilal", "middle": [], "last": "Randeree", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 9th International Workshop on Semantic Evaluation (Se-mEval 2015)", "volume": "", "issue": "", "pages": "269--281", "other_ids": { "DOI": [ "10.18653/v1/S15-2047" ] }, "num": null, "urls": [], "raw_text": "Preslav Nakov, Llu\u00eds M\u00e0rquez, Walid Magdy, Alessan- dro Moschitti, Jim Glass, and Bilal Randeree. 2015. SemEval-2015 task 3: Answer selection in commu- nity question answering. In Proceedings of the 9th International Workshop on Semantic Evaluation (Se- mEval 2015), pages 269-281, Denver, Colorado. As- sociation for Computational Linguistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "A monolingual approach to contextualized word embeddings for mid-resource languages", "authors": [ { "first": "Pedro Javier Ortiz", "middle": [], "last": "Su\u00e1rez", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Romary", "suffix": "" }, { "first": "Beno\u00eet", "middle": [], "last": "Sagot", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1703--1714", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pedro Javier Ortiz Su\u00e1rez, Laurent Romary, and Beno\u00eet Sagot. 2020. A monolingual approach to contextual- ized word embeddings for mid-resource languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1703-1714, Online. Association for Computational Linguistics.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Statistical machine translation for query expansion in answer retrieval", "authors": [ { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Vasserman", "suffix": "" }, { "first": "Ioannis", "middle": [], "last": "Tsochantaridis", "suffix": "" }, { "first": "O", "middle": [], "last": "Vibhu", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mittal", "suffix": "" }, { "first": "", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics", "volume": "", "issue": "", "pages": "464--471", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stefan Riezler, Alexander Vasserman, Ioannis Tsochan- taridis, Vibhu O. Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. 
In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics, pages 464-471.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "QA dataset explosion: A taxonomy of NLP resources for question answering and reading comprehension", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Isabelle", "middle": [], "last": "Augenstein", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Matt Gardner, and Isabelle Augenstein. 2021. QA dataset explosion: A taxonomy of NLP resources for question answering and reading com- prehension. CoRR, abs/2107.12708.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "FAQ retrieval using queryquestion similarity and BERT-based query-answer relevance", "authors": [ { "first": "Wataru", "middle": [], "last": "Sakata", "suffix": "" }, { "first": "Tomohide", "middle": [], "last": "Shibata", "suffix": "" }, { "first": "Ribeka", "middle": [], "last": "Tanaka", "suffix": "" }, { "first": "Sadao", "middle": [], "last": "Kurohashi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval", "volume": "", "issue": "", "pages": "1113--1116", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wataru Sakata, Tomohide Shibata, Ribeka Tanaka, and Sadao Kurohashi. 2019. FAQ retrieval using query- question similarity and BERT-based query-answer relevance. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Develop- ment in Information Retrieval, pages 1113-1116.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "A vector space model for automatic indexing", "authors": [ { "first": "Gerard", "middle": [], "last": "Salton", "suffix": "" }, { "first": "Anita", "middle": [], "last": "Wong", "suffix": "" }, { "first": "Chung-Shu", "middle": [], "last": "Yang", "suffix": "" } ], "year": 1975, "venue": "Communications of the ACM", "volume": "18", "issue": "11", "pages": "613--620", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerard Salton, Anita Wong, and Chung-Shu Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613-620.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Retrieval augmentation reduces hallucination in conversation", "authors": [ { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Spencer", "middle": [], "last": "Poff", "suffix": "" }, { "first": "Moya", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.07567" ] }, "num": null, "urls": [], "raw_text": "Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, and Jason Weston. 2021. Retrieval augmentation re- duces hallucination in conversation. arXiv preprint arXiv:2104.07567.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "Automated faq answering: Continued experience with shallow language understanding", "authors": [ { "first": "Eriks", "middle": [], "last": "Sneiders", "suffix": "" } ], "year": 1999, "venue": "Question Answering Systems. 
Papers from the 1999 AAAI Fall Symposium", "volume": "", "issue": "", "pages": "97--107", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eriks Sneiders. 1999. Automated faq answering: Con- tinued experience with shallow language understand- ing. In Question Answering Systems. Papers from the 1999 AAAI Fall Symposium, pages 97-107.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Automated question answering: template-based approach", "authors": [ { "first": "Eriks", "middle": [], "last": "Sneiders", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eriks Sneiders. 2002a. Automated question answering: template-based approach.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Automated question answering using question templates that cover the conceptual model of the database", "authors": [ { "first": "Eriks", "middle": [], "last": "Sneiders", "suffix": "" } ], "year": 2002, "venue": "International Conference on Application of Natural Language to Information Systems", "volume": "", "issue": "", "pages": "235--239", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eriks Sneiders. 2002b. Automated question answering using question templates that cover the conceptual model of the database. In International Conference on Application of Natural Language to Information Systems, pages 235-239. Springer.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "Automated FAQ answering with question-specific knowledge representation for web self-service", "authors": [ { "first": "Eriks", "middle": [], "last": "Sneiders", "suffix": "" } ], "year": 2009, "venue": "2009 2nd Conference on Human System Interactions", "volume": "", "issue": "", "pages": "298--305", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eriks Sneiders. 2009. Automated FAQ answering with question-specific knowledge representation for web self-service. In 2009 2nd Conference on Human Sys- tem Interactions, pages 298-305. IEEE.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Automated email answering by text pattern matching", "authors": [ { "first": "Eriks", "middle": [], "last": "Sneiders", "suffix": "" } ], "year": 2010, "venue": "International Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "381--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eriks Sneiders. 2010. Automated email answering by text pattern matching. In International Conference on Natural Language Processing, pages 381-392. Springer.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Retrieval models and q and a learning with FAQ files", "authors": [ { "first": "Noriko", "middle": [], "last": "Tomuro", "suffix": "" }, { "first": "Steven", "middle": [ "L" ], "last": "Lytinen", "suffix": "" } ], "year": 2004, "venue": "New Directions in Question Answering", "volume": "", "issue": "", "pages": "183--202", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noriko Tomuro and Steven L. Lytinen. 2004. Retrieval models and q and a learning with FAQ files. 
In New Directions in Question Answering, pages 183-202.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "Not just bigger: Towards better-quality web corpora", "authors": [ { "first": "Yannick", "middle": [], "last": "Versley", "suffix": "" }, { "first": "Yana", "middle": [], "last": "Panchenko", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the seventh Web as Corpus Workshop (WAC7)", "volume": "", "issue": "", "pages": "44--52", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yannick Versley and Yana Panchenko. 2012. Not just bigger: Towards better-quality web corpora. In Pro- ceedings of the seventh Web as Corpus Workshop (WAC7), pages 44-52.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Bucketing of our dataset according to the number of FAQs per page. To make the validation set more challenging, we started by selecting pages with a higher number of pairs.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "eligible for the MSBA program? Yes, international students are eligible for the MSBA program. Please review the International Applicants page for specific requirements.", "type_str": "figure" }, "TABREF0": { "text": "Example FAQs about the COVID-19 vaccine from the CDC website.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF2": { "text": "Summary statistics about our dataset.", "type_str": "table", "content": "
", "num": null, "html": null }, "TABREF4": { "text": "", "type_str": "table", "content": "
", "num": null, "html": null }, "TABREF6": { "text": "", "type_str": "table", "content": "
: MRR of monolingual models versus a single multilingual model. The multilingual model outperforms monolingual models in all languages, except for English.
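For readers comparing these numbers across tables, the metric can be stated concretely. The following is a minimal sketch of mean reciprocal rank as reported here: the standard definition, scaled by 100 to match the scale of the tables. It is not the released evaluation script, and the example ranks are illustrative.

```python
def mrr(ranks):
    """Mean reciprocal rank, scaled by 100 as in the result tables.

    `ranks` holds, for each query, the 1-based position of the gold
    answer in the model's ranked list of candidate answers.
    """
    return 100.0 * sum(1.0 / r for r in ranks) / len(ranks)

# Example: the gold answer is ranked 1st, 3rd and 2nd for three queries.
print(mrr([1, 3, 2]))  # 100 * (1 + 1/3 + 1/2) / 3 ≈ 61.1
```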
", "num": null, "html": null }, "TABREF8": { "text": "MRR results of our cross-lingual analysis. Questions were translated to English while answers remained in the original language.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF9": { "text": "Can the GMAT or GRE requirement be waived? No, these test scores are required. The model is unable to guess that test scores refer to GMAT or GRE. By changing the answer to No, the GMAT or GRE scores are required, the model correctly picks the right answer. In this case, the model correctly identifies the right answer. However, if we remove the fulltime cue, the right answer only arrives in the fourth position. Next, we look at question 15, the model makes a wrong prediction as opportunities is not mentioned in the answer. Changing the question to \u00ab It's a part-time online program, but are there any on-campus [experiences|activities] for students? \u00bb leads to a correct prediction.17", "type_str": "table", "content": "
in the annex. The multilingual model is correct on 74.07% of the pairs, with an MRR of 85.49. Our qualitative analysis reveals that the model is bad at coreference resolution and depends on keywords for query-answer matching.
Coreference Resolution The model makes a wrong prediction in question 4.
Paraphrase To study if the model is robust to paraphrasing, we change question 1 from \u00ab Are the hours flexible enough for full-time working adults? \u00bb to \u00ab Is it manageable if I already have a full-time job? \u00bb
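This paraphrase probe is straightforward to reproduce against the released checkpoint. Below is a minimal sketch, assuming the public clips/mfaq model loads through sentence-transformers and uses the "<Q>" / "<A>" prefix convention from its model card; the candidate answers are illustrative stand-ins for the Tepper FAQ page, not the exact evaluation data.

```python
from sentence_transformers import SentenceTransformer, util

# Assumptions: the released MFAQ bi-encoder loads via sentence-transformers
# and expects a "<Q>" prefix on questions and an "<A>" prefix on answers.
model = SentenceTransformer("clips/mfaq")

original = "<Q>Are the hours flexible enough for full-time working adults?"
paraphrase = "<Q>Is it manageable if I already have a full-time job?"

# Illustrative candidates standing in for one FAQ page's answers.
answers = [
    "<A>Yes, the program is designed for working professionals; "
    "weekly live sessions are held in the evening.",
    "<A>No, the GMAT or GRE scores are required.",
    "<A>The career center hosts workshops on resume writing and interviewing.",
]
answer_embeddings = model.encode(answers)

# Rank the candidates by dot product for the original question and its paraphrase.
for query in (original, paraphrase):
    scores = util.dot_score(model.encode(query), answer_embeddings)[0]
    best = int(scores.argmax())
    print(f"{query!r} -> {answers[best]!r} ({float(scores[best]):.2f})")
```

If the model were fully robust to paraphrasing, both queries would retrieve the same top answer with comparable scores.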
", "num": null, "html": null }, "TABREF10": { "text": "Steven D. Whitehead. 1995. Auto-FAQ: an experiment in cyberspace leveraging. 28(1):137-146. Ruobing Xie, Yanan Lu, Fen Lin, and Leyu Lin. 2020. FAQ-based question answering via knowledge anchors. In CCF International Conference on Natural Language Processing and Chinese Computing, pages 3-15. Springer.", "type_str": "table", "content": "
Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2020. Approximate nearest neighbor negative contrastive learning for dense text retrieval. CoRR, abs/2007.00808.
Sheng-Yuan Yang. 2009. Developing of an ontological interface agent with template-based linguistic processing technique for FAQ services. 36(2):4049-4060. Publisher: Elsevier.
XLM-RoBERTa (full training set)
XLM-RoBERTa (1 page per domain)
USE
TF-IDF
Random
Language
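For context on the baseline rows above, a TF-IDF run of the kind listed can be sketched as follows. This is our own minimal illustration with scikit-learn, not the paper's exact configuration; the example answers are loose stand-ins drawn from the appendix FAQ page.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

# A lexical baseline: fit TF-IDF on one page's candidate answers,
# then rank them against an incoming user question.
answers = [
    "Yes, international students are eligible for the MSBA program.",
    "The weekly live sessions are in the evening and typically last 75 minutes.",
    "At this time, the Tepper School does not provide scholarships.",
]
vectorizer = TfidfVectorizer()
answer_matrix = vectorizer.fit_transform(answers)

question = "What time do the synchronous sessions take place?"
# TfidfVectorizer L2-normalizes rows, so the linear kernel equals cosine similarity.
scores = linear_kernel(vectorizer.transform([question]), answer_matrix)[0]
ranking = scores.argsort()[::-1]  # indices of answers, best match first
print([answers[i] for i in ranking])
```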
", "num": null, "html": null }, "TABREF12": { "text": "FAQ pairs from the Tepper School of Business", "type_str": "table", "content": "", "num": null, "html": null } } } }