{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:37:49.632544Z"
},
"title": "A Multilingual Reading Comprehension System for more than 100 Languages",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Ferritto",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Sara",
"middle": [],
"last": "Rosenthal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Mihaela",
"middle": [],
"last": "Bornea",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Kazi",
"middle": [],
"last": "Hasan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Watson",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Rishav",
"middle": [],
"last": "Chakravarti",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": "[email protected]"
},
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "IBM Research AI",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents M-GAAMA, a Multilingual Question Answering architecture and demo system. This is the first multilingual machine reading comprehension (MRC) demo which is able to answer questions in over 100 languages. M-GAAMA answers questions from a given passage in the same or a different language. It incorporates several existing multilingual models that can be used interchangeably in the demo such as M-BERT and XLM-R. The M-GAAMA demo also improves language accessibility by incorporating the IBM Watson machine translation widget to provide additional capabilities to the user to see an answer in their desired language. We also show how M-GAAMA can be used in downstream tasks by incorporating it into an END-TO-END-QA system using CFO (Chakravarti et al., 2019). We experiment with our system architecture on the MultiLingual Question Answering (MLQA) and the CORD-19 COVID (Wang et al., 2020; Tang et al., 2020) datasets to provide insights into the performance of the system.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents M-GAAMA, a Multilingual Question Answering architecture and demo system. This is the first multilingual machine reading comprehension (MRC) demo which is able to answer questions in over 100 languages. M-GAAMA answers questions from a given passage in the same or a different language. It incorporates several existing multilingual models that can be used interchangeably in the demo such as M-BERT and XLM-R. The M-GAAMA demo also improves language accessibility by incorporating the IBM Watson machine translation widget to provide additional capabilities to the user to see an answer in their desired language. We also show how M-GAAMA can be used in downstream tasks by incorporating it into an END-TO-END-QA system using CFO (Chakravarti et al., 2019). We experiment with our system architecture on the MultiLingual Question Answering (MLQA) and the CORD-19 COVID (Wang et al., 2020; Tang et al., 2020) datasets to provide insights into the performance of the system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recent advances in open domain question answering (QA) have mostly revolved around machine reading comprehension (MRC) (Rajpurkar et al., 2018; Yang et al., 2018) . The MRC task is to read and comprehend a given text and then answer questions based on it. Our monolingual MRC approach (Pan et al., 2019) has the capability of being applied to train many Language Models (LMs) such as BERT and RoBERTa (Liu et al., 2019) . We achieve the 2nd rank 1 on the Google Natural Questions (Kwiatkowski et al., 2019 ) leaderboard 2 . In this paper, we expand our approach by introducing new multilingual capabilities using models such as Multilingual-BERT (M-BERT) and XLM-R . This addition has the capability of transcending language boundaries to 104 languages. Figure 1 shows examples of QA pairs from the MLQA dataset . To the best of our knowledge, this is the first published demo of a Multi-Lingual QA system. We achieve this by introducing a novel multilingual component to our QA GAAMA (Go Ahead, Ask Me Anything) (Chakravarti et al., 2019) pipeline.",
"cite_spans": [
{
"start": 119,
"end": 143,
"text": "(Rajpurkar et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 144,
"end": 162,
"text": "Yang et al., 2018)",
"ref_id": "BIBREF17"
},
{
"start": 285,
"end": 303,
"text": "(Pan et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 401,
"end": 419,
"text": "(Liu et al., 2019)",
"ref_id": "BIBREF7"
},
{
"start": 480,
"end": 505,
"text": "(Kwiatkowski et al., 2019",
"ref_id": "BIBREF4"
},
{
"start": 1013,
"end": 1039,
"text": "(Chakravarti et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 754,
"end": 762,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We introduce M-GAAMA, a new system that performs cross-lingual MRC where a given question and context can be in the same or different languages. The system extracts the answer from the context language. Then, the demo utilizes the SOTA IBM Watson machine translation widget to return the answer translated in the desired language of the user 3 . This breaks the language barrier for users who don't understand the given source text but want their question answered effectively and accurately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In addition, we also show how M-GAAMA can be used in downstream tasks by incorporating it with CFO (Chakravarti et al., 2019) , in an end-to-end QA system and demo. We show that this can be extended to perform multilingual QA by utilizing a language identifier to first gather the (target) language in which What record company did Kesha sign with?",
"cite_spans": [
{
"start": 99,
"end": 125,
"text": "(Chakravarti et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "After failing to negotiate with Lava Records and Atlantic Records in 2009, Kesha signed a multialbum deal with +++++++++++++++++ through Dr. Luke's imprint. Having spent the previous six years working on material for her debut album, she began putting finishing touches to the album with Luke and Max Martin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Poco despu\u00e9s, Kesha firm\u00f3 un contrato por varios discos con RCA a trav\u00e9s de Luke, despu\u00e9s de haber sido buscada por Lava Records y el sello de Flo Rida, como tambi\u00e9n Atlantic Records. ++++++ hab\u00eda notado sus seguidores en los medios sociales cuando negoci\u00f3 su contrato, por lo tanto se bas\u00f3 en construir su primer sencillo, \u00abTik Tok\u00bb, ofreciendo la canci\u00f3n en MySpace en julio.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ENGLISH \u00bfCon qu\u00e9 compa\u00f1\u00eda discogr\u00e1fica firm\u00f3 Kesha?",
"sec_num": null
},
{
"text": "Apr\u00e8s avoir \u00e9chou\u00e9 \u00e0 n\u00e9gocier avec Lava Records et Atlantic Records en 2009, Kesha a sign\u00e9 un contrat de plusieurs albums avec +++++++++++++++++ sous l'empreinte du Dr Luke. Apr\u00e8s avoir pass\u00e9 les six derni\u00e8res ann\u00e9es \u00e0 travailler sur le mat\u00e9riel de son premier album, elle a commenc\u00e9 \u00e0 mettre la touche finale \u00e0 l'album avec Luke et Max Martin. Pour l'album, elle a \u00e9crit 200 chansons. FRENCH Figure 1 : Examples of Q/C pairs about Kesha in three languages that our system answers correctly: English, Spanish, and French. The first two examples originate from the MLQA challenge. The answers are shown as answer.",
"cite_spans": [],
"ref_spans": [
{
"start": 393,
"end": 401,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Avec quelle maison de disques Kesha a-t-elle sign\u00e9?",
"sec_num": null
},
{
"text": "the question was asked. END-TO-END-QA then retrieves passages from an index in the appropriate target language and runs our multilingual MRC system on it. Since the answer is extracted from the target language, no translation is required.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Avec quelle maison de disques Kesha a-t-elle sign\u00e9?",
"sec_num": null
},
{
"text": "We first demonstrate the effectiveness of M-GAAMA on the MLQA dataset and then also show its effectiveness on the CORD-19 (Wang et al., 2020; Tang et al., 2020) corpus which contains research articles regarding COVID-19. The COVID-19 pandemic has caused an abundance of research to be published on a daily basis. Not all of the articles are available in English, and people want to ask questions in their native language. Providing the capability to ask questions on research in all languages is vital for ensuring that important and recent information is not overlooked and available to everyone. We show that M-GAAMA has the capability of providing this information for all language speakers and articles by finding answers in translated CORD-19 articles.",
"cite_spans": [
{
"start": 122,
"end": 141,
"text": "(Wang et al., 2020;",
"ref_id": null
},
{
"start": 142,
"end": 160,
"text": "Tang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Avec quelle maison de disques Kesha a-t-elle sign\u00e9?",
"sec_num": null
},
{
"text": "In summary, our contribution is the first published multi-lingual QA demo which works in over 100 languages. It returns an appropriate answer in the language that the question was originally asked. It incorporates several multilingual components including multilingual LMs, machine translation, and indexed corpora in multiple languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Avec quelle maison de disques Kesha a-t-elle sign\u00e9?",
"sec_num": null
},
{
"text": "The rest of the paper is organized as follows: We first discuss related work, then talk about the data used in our experiments and models. Sections 4 and 5 discuss the demo system and model architecture. Finally, we discuss the Model and Runtime Experiments on MLQA and the COVID-19 CORD-19 dataset (Wang et al., 2020; Tang et al., 2020) in Section 6.",
"cite_spans": [
{
"start": 299,
"end": 318,
"text": "(Wang et al., 2020;",
"ref_id": null
},
{
"start": 319,
"end": 337,
"text": "Tang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Avec quelle maison de disques Kesha a-t-elle sign\u00e9?",
"sec_num": null
},
{
"text": "Few other QA demos exist; BERTSerini (Yang et al., 2019) , leverages the Anserini IR toolkit (Yang et al., 2017) to extract relevant documents given a question, then uses BERT-based techniques to extract the correct answer. However, their demo is designed to perform only mono-lingual English QA. The GAAMA and CFO (Chakravarti et al., 2019) demos also only performs English QA. In contrast, M-GAAMA and our downstream END-TO-END-QA task perform cross-lingual QA.",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF18"
},
{
"start": 93,
"end": 112,
"text": "(Yang et al., 2017)",
"ref_id": "BIBREF16"
},
{
"start": 315,
"end": 341,
"text": "(Chakravarti et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Several cross lingual large scale representations have been created by training a large scale transformer (Vaswani et al., 2017 ) based masked language model on text in multiple languages. The use of pretrained multilingual language models such as M-BERT , XLM (Lample and Conneau, 2019) , and XLM-R achieve the previous SOTA on cross-lingual tasks including question answering . We train our underlying MRC system with these pre-trained language models and achieve results that are consistently as strong as prior work.",
"cite_spans": [
{
"start": 106,
"end": 127,
"text": "(Vaswani et al., 2017",
"ref_id": "BIBREF14"
},
{
"start": 261,
"end": 287,
"text": "(Lample and Conneau, 2019)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Many datasets for English MRC have been introduced with annotated Wikipedia documents including (Rajpurkar et al., 2016; Rajpurkar et al., 2018; Yang et al., 2018; Kwiatkowski et al., 2019) . Fewer resources are available for the cross-lingual setting. The MLQA dataset contains parallel instances in 7 languages where the context is found in Wikipedia. The TyDiQA (Clark et al., 2020) dataset containes instances in 11 languages. However, TyDiQA is not parallel and it only has instances where the question and context are in the same language. For an individual case with exposure lying between 1 E and 2 E , the likelihood function for an incubation observation was 12 (\\nCommission of China, reporting an incubation time 1 14 -------------------------------. Statistical estimation of the distribution of incubation periods has been done in two other studies.",
"cite_spans": [
{
"start": 96,
"end": 120,
"text": "(Rajpurkar et al., 2016;",
"ref_id": "BIBREF10"
},
{
"start": 121,
"end": 144,
"text": "Rajpurkar et al., 2018;",
"ref_id": "BIBREF11"
},
{
"start": 145,
"end": 163,
"text": "Yang et al., 2018;",
"ref_id": "BIBREF17"
},
{
"start": 164,
"end": 189,
"text": "Kwiatkowski et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 365,
"end": 385,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Para un caso individual con exposici\u00f3n entre 1 E y 2 E, la funci\u00f3n de probabilidad de una observaci\u00f3n de incubaci\u00f3n fue de 12 (\\ nComisi\u00f3n de China, informando un tiempo de incubaci\u00f3n 1 14 \u00ed -----------------------. Se realiz\u00f3 una estimaci\u00f3n estad\u00edstica de la distribuci\u00f3n de los per\u00edodos de incubaci\u00f3n en otros dos estudios ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We test the multilingual capabilities incorporated into our QA system by running experiments on the MLQA dataset . The dataset consists of seven languages: English (en), Spanish (es), German (de), Arabic (ar), Hindi (hi), Vietnamese (vi), and Chinese (zh). To achieve a multilingual parallel QA benchmark the authors apply a novel alignment strategy on Wikipedia articles by identifying Wikipedia sentences with the same meaning in multiple languages. Passages containing these sentences are then presented to the annotators who write questions that are now answerable in multiple languages. We consider this a good resource for evaluating multilingual capabilities on different pairs of languages (e.g., context (c) in English, question (q) in German) due to the parallel q/c pairs available in the corpus. In addition, we use SQUAD 1.1 (Rajpurkar et al., 2016) , which is significantly larger, but only contains English data, for training in a zero-shot scenario.",
"cite_spans": [
{
"start": 838,
"end": 862,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
{
"text": "We also explore QA on COVID-19 articles in a zero-shot scenario to show the relevance and importance of multilingual QA in current events. CovidQA 0.1 (Wang et al., 2020) contains 124 question and document pairs. The dataset comprises of (question, scientific article, exact answer) triples that have been manually created from the literature review page of Kaggles COVID-19 Open Research Dataset Challenge (Tang et al., 2020) . They manually identified the exact answer span as a verbatim extract from the document. We converted their data into SQUAD format for our experiments. We create a multilingual COVID-19 QA dataset using machine translation. We translate both the questions and context in Spanish and Chinese. We align the gold answer between the English and the translated dataset by marking the gold answer with pseudo-HTML tags prior to translation. We recovered the translated answers for all questions. An example of a QA pair in English and Spanish is shown in Figure 2 .",
"cite_spans": [
{
"start": 151,
"end": 170,
"text": "(Wang et al., 2020)",
"ref_id": null
},
{
"start": 407,
"end": 426,
"text": "(Tang et al., 2020)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 977,
"end": 985,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3"
},
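The answer-alignment step described above can be sketched as follows (a minimal sketch, assuming a generic translate() helper stands in for the machine translation service; this is not the authors' released code): the gold answer is wrapped in pseudo-HTML tags before translation and recovered from the translated context afterwards.

```python
import re

ANSWER_TAG_RE = re.compile(r"<ans>(.*?)</ans>", re.DOTALL)

def mark_answer(context: str, answer_start: int, answer_text: str) -> str:
    """Wrap the gold answer span in pseudo-HTML tags so it survives translation."""
    end = answer_start + len(answer_text)
    return context[:answer_start] + "<ans>" + answer_text + "</ans>" + context[end:]

def recover_answer(translated_context: str):
    """Find the translated answer span and strip the tags from the context again."""
    match = ANSWER_TAG_RE.search(translated_context)
    if match is None:
        return None  # the tags were lost in translation; drop the example
    answer = match.group(1)
    clean_context = translated_context.replace(match.group(0), answer, 1)
    return answer, match.start(), clean_context

# Usage, with translate() standing in for the MT service (hypothetical helper):
# tagged = mark_answer(context, ex["answer_start"], ex["answer_text"])
# answer, start, es_context = recover_answer(translate(tagged, target_lang="es"))
```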
{
"text": "In this section we describe the interface for our M-GAAMA demo. We then show an END-TO-END-QA demo as an example that builds upon M-GAAMA using prior work (Chakravarti et al., 2019) .",
"cite_spans": [
{
"start": 155,
"end": 181,
"text": "(Chakravarti et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Demo",
"sec_num": "4"
},
{
"text": "M-GAAMA is a gRPC (Talvar, 2016) server which wraps our LMs for MRC. M-GAAMA provides an MRC interface which can answer questions in over 100 languages. We use M-BERT and XLM-R LMs to drive M-GAAMA's multilingual support. In addition, we provide a Language Translation component made available as a Javascript widget 4 to allow the user to see the answer in the question language or any other language of choice. The M-GAAMA interface weaves the components together using the ReactJS framework 5 . Providing M-GAAMA as a gRPC server allows it to be quite flexible. This enables it to seamlessly transition between being a standalone system and integrating with larger systems. We show this via the downstream END-TO-END-QA task described below.",
"cite_spans": [
{
"start": 18,
"end": 32,
"text": "(Talvar, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Demo",
"sec_num": "4"
},
{
"text": "END-TO-END-QA builds upon M-GAAMA, with a full IR-MRC pipeline. Information Retrieval is obtained using an Elasticsearch index 6 for each language 7 . The user can ask a question in any language for which an index exists. The language of the question is identified using the 'langid' toolkit (Lui and Baldwin, 2012) to determine the appropriate index. The appropriate index is then searched for documents in the target language. These documents are then evaluated together with the user's question by M-GAAMA. Finally, answer spans are de-duplicated and sorted by score before being returned to the user. The END-TO-END-QA demo weaves these components together using the CFO framework (Chakravarti et al., 2019) , which is a novel approach for orchestrating services.",
"cite_spans": [
{
"start": 292,
"end": 315,
"text": "(Lui and Baldwin, 2012)",
"ref_id": null
},
{
"start": 685,
"end": 711,
"text": "(Chakravarti et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Demo",
"sec_num": "4"
},
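A rough sketch of the retrieval-plus-MRC flow described above, assuming per-language Elasticsearch indices named like wiki-en and a hypothetical mrc_answer() wrapper around the M-GAAMA gRPC service (neither name comes from the paper):

```python
import langid                              # pip install langid
from elasticsearch import Elasticsearch    # pip install elasticsearch

es = Elasticsearch("http://localhost:9200")

def mrc_answer(question: str, passage: str) -> dict:
    """Hypothetical wrapper around the M-GAAMA gRPC MRC service; returns
    {"text": ..., "score": ...} for the best span in the passage."""
    raise NotImplementedError

def answer_question(question: str, top_k: int = 5):
    # 1. Identify the question language to pick the matching index (e.g. "wiki-es").
    lang, _ = langid.classify(question)
    index = f"wiki-{lang}"                 # index naming is an assumption

    # 2. Retrieve candidate passages in the target language.
    #    (Recent elasticsearch-py clients accept query=; older ones use body=.)
    hits = es.search(index=index, query={"match": {"text": question}},
                     size=top_k)["hits"]["hits"]

    # 3. Run the multilingual MRC model over each retrieved passage.
    spans = [mrc_answer(question, hit["_source"]["text"]) for hit in hits]

    # 4. De-duplicate answer spans and sort by model score before returning.
    best = {}
    for span in spans:
        key = span["text"].strip().lower()
        if key not in best or span["score"] > best[key]["score"]:
            best[key] = span
    return sorted(best.values(), key=lambda s: s["score"], reverse=True)
```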
{
"text": "Our MRC QA model accepts a single query-document pair as its input and produces a span from the document along with a prediction score as its output. The underlying QA model is based on (Pan et al., 2019) . The base layer of the QA system encodes the question and the candidate paragraph using the cross-lingual M-BERT and XLM-R representations. An output feed forward layer is added on top of the base layer to produce 3 sets of scores: scores at each token offset marking the likelihood of an answer chunk (1) starting at this offset, (2) ending at this offset, and (3) the entire sequence marking the likelihood of the question being answerable given the current context.",
"cite_spans": [
{
"start": 186,
"end": 204,
"text": "(Pan et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Architecture",
"sec_num": "5"
},
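As an illustration of this head, here is a minimal sketch under the assumption of a standard extractive-QA formulation (not the authors' released code): the encoder output feeds a linear layer that yields per-token start and end logits, and the representation of the first token feeds a separate linear layer for the answerability score.

```python
import torch
from torch import nn
from transformers import AutoModel

class MRCHead(nn.Module):
    """Cross-lingual encoder (e.g. bert-base-multilingual-cased or xlm-roberta-large)
    with per-token start/end span scores and a sequence-level answerability score."""

    def __init__(self, model_name: str = "bert-base-multilingual-cased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.span_head = nn.Linear(hidden, 2)        # start and end logits per token
        self.answerable_head = nn.Linear(hidden, 1)  # one score for the whole sequence

    def forward(self, input_ids, attention_mask):
        hidden_states = self.encoder(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        start_logits, end_logits = self.span_head(hidden_states).split(1, dim=-1)
        answerable_logit = self.answerable_head(hidden_states[:, 0])  # first token
        return start_logits.squeeze(-1), end_logits.squeeze(-1), answerable_logit
```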
{
"text": "We experiment with several multi-lingual models on the MLQA test set prior to integration in our system. We explore zero-shot learning as in prior work by training and fine-tuning on SQUAD 1.1 (Rajpurkar et al., 2016) for M-BERT and the XLM-R QA models. Refer to (Chakravarti et al., 2019; Pan et al., 2019) for additional details about model architecture and implementation. We train our models using the Huggingface code 8 with the default parameters except 3e-5 learning rate, 2 training epochs, 32 batch size and, 790 warmup steps. We show results for the cross-lingual task (XLT), where the question and context are in the same language (e.g. question (q) and context (c) in Chinese) on the left side of Table 1 . We find that our comparable re-implementations of models reported in prior work perform significantly better. We expect the improvement is due to using the Hugging Face implementation and hyper-parameter tuning values. Our best results using XLM-R large are consistently as strong as prior work in all languages. We also show results for the generalized cross-lingual task (G-XLT) where the question and context are in different languages in Table 1 ; XLM-R large achieves the best results in this experiment as well. We also compare the performance of the multilingual models with the performance of the English ROBERTA large model on the English MLQA dataset and find the results are similar.",
"cite_spans": [
{
"start": 193,
"end": 217,
"text": "(Rajpurkar et al., 2016)",
"ref_id": "BIBREF10"
},
{
"start": 263,
"end": 289,
"text": "(Chakravarti et al., 2019;",
"ref_id": "BIBREF0"
},
{
"start": 290,
"end": 307,
"text": "Pan et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 709,
"end": 716,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 1161,
"end": 1168,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
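For reference, the four reported hyper-parameters map onto Hugging Face training arguments roughly as follows (a sketch only; the authors used the transformers example scripts, and the output path below is a placeholder):

```python
from transformers import TrainingArguments

# Reported settings: 3e-5 learning rate, 2 epochs, batch size 32, 790 warmup steps.
# Everything else stays at the example script's defaults; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="mgaama-squad11-finetune",
    learning_rate=3e-5,
    num_train_epochs=2,
    per_device_train_batch_size=32,
    warmup_steps=790,
)
```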
{
"text": "We also provide additional analysis for the MLQA results by showing the difference per each question type in Table 2, for the XLT task. Having this information is useful for understanding which question types should be explored in more detail. We notice that the XLM-R L performance is more stable across all question types. All systems obtain the best performance for the \"when\" questions and the lowest for the \"why\" questions. We expect this is because \"why\" questions are more of an explanation making them more challenging while \"when\" questions tend to be easier because they are usually dates or numbers. We determine the question type by examining the English questions. Since MLQA has parallel examples, we used the question id to determine the question type when the question is in different languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
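The question-type bucketing described above can be approximated by checking the English question for a wh-word and propagating the label to the parallel questions via the shared MLQA question id (a sketch of the analysis step; the field names below are assumptions):

```python
QUESTION_TYPES = ("who", "why", "where", "what", "how", "which", "when")

def question_type(english_question: str) -> str:
    """Return a wh-word that appears in the English question, else 'other'."""
    tokens = english_question.lower().replace("?", " ").split()
    for wh in QUESTION_TYPES:
        if wh in tokens:
            return wh
    return "other"

def types_by_qid(english_examples) -> dict:
    """Map MLQA question ids to types so parallel non-English questions inherit them."""
    return {ex["qas_id"]: question_type(ex["question"]) for ex in english_examples}
```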
{
"text": "Further, we show the value of having a multilingual model by also exploring QA for COVID-19 using the CORD-19 (Wang et al., 2020; Tang et al., 2020 ) dataset on English and translated data in Spanish and Chinese. The ability to answers questions in other languages is especially important in this use-case",
"cite_spans": [
{
"start": 102,
"end": 129,
"text": "CORD-19 (Wang et al., 2020;",
"ref_id": null
},
{
"start": 130,
"end": 147,
"text": "Tang et al., 2020",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "6"
},
{
"text": "who why where what how which when other avg M-BERT 64.6 44.2 54.9 61.9 62.4 62.6 64.9 61.0 61.8 XLM-R B 68.4 56.9 60.7 63.3 67.4 64.1 75.9 65.5 65.1 XLM-R L 76.5 66.7 68.9 70.9 74.4 71.9 81.7 71.9 72.7 Table 2 : F1 score on the MLQA test set for the cross-lingual transfer task (XLT). Training data is SQUAD 1.1. B is the Base model and L is the Large model. The best performing question types are shown in bold.",
"cite_spans": [],
"ref_spans": [
{
"start": 202,
"end": 209,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "We also include the XLT averages from because the corpus is rapidly growing and some papers may only be available in a single language. The results are shown on the right side of Table 1 . Although the overall performance is lower than MLQA, results are consistent across languages. XLM-R is still the best performing model. In contrast to passage level QA in MLQA and SQUAD, the CORD-19 dataset is document level. We expect this causes a large detriment to the performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 186,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
{
"text": "Finally, while the best performing model is XLM-R large, there is merit to including the M-BERT model in the demo due to its reduced size which makes deployment more scalable. We use our M-BERT model for runtime experiments. A single x86-64 Intel R core is used as the CPU whereas one Nvidia R Tesla R V100 is used as the GPU. For brevity we show results for a random subset of five context question language pairs in Table 3 . As expected running the model on GPU is faster than CPU: on average a given language pair is processed 19 times faster on GPU than CPU as shown in Table 3 . The GPU also produces more consistent runtimes than CPU: standard deviation in CPU runtimes for each language pair is 32 times more than on GPU. We also find that not all languages decode equally quickly. Language pairs including English, particularly as the context, are the quickest to decode on GPU. Chinese and Hindi contexts take 2 to 3 times as long. The same trend holds on CPU, where the multiplier is approximately 1.5. Additionally, these differences are not fully explained by differing context sizes. On average Chinese and Hindi contexts are 1.4 and 0.9 times as long as their English counterparts respectively as seen in Table 4 of . Question sizes are an order to two of magnitude shorter than contexts. This indicates that some languages decode faster than others even when accounting for context sizes.",
"cite_spans": [],
"ref_spans": [
{
"start": 418,
"end": 425,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 575,
"end": 582,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 1220,
"end": 1228,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "F1",
"sec_num": null
},
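The CPU/GPU timing comparison can be reproduced with a simple wall-clock loop like the following (a sketch, assuming a PyTorch model and tokenizer; synchronisation is needed so GPU times are not under-reported):

```python
import time
import torch

def time_inference(model, tokenizer, question, context, device="cpu", n_runs=10):
    """Average wall-clock seconds per forward pass for one question/context pair."""
    model = model.to(device).eval()
    inputs = tokenizer(question, context, return_tensors="pt",
                       truncation=True, max_length=384).to(device)
    timings = []
    with torch.no_grad():
        model(**inputs)                      # warm-up run, excluded from timing
        for _ in range(n_runs):
            start = time.perf_counter()
            model(**inputs)
            if device.startswith("cuda"):
                torch.cuda.synchronize()     # wait for GPU kernels to finish
            timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

# Example (model and tokenizer loaded elsewhere):
# t_cpu = time_inference(model, tokenizer, question, context, device="cpu")
# t_gpu = time_inference(model, tokenizer, question, context, device="cuda")
```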
{
"text": "In this paper we present our M-GAAMA demo, an interface for interacting with our multilingual QA MRC system. To the best of our knowledge we are the first to present a QA demo with multilingual capabilities in over 100 languages. We enable the user to be able to ask a question in one language, find the answer in another language, and with the use of machine translation the user can see the answer in the question language or another desired language. We also show how M-GAAMA can be used in a downstream task in our END-TO-END-QA demo. Finally, we show that our system achieves results that are consistently as strong as prior work on the MLQA dataset using XLM-R-Large on all seven languages. It can also be used to perform QA in current events via the CORD-19 COVID-19 (Wang et al., 2020; Tang et al., 2020) dataset. In the future we plan on experimenting with additional QA datsets such as Natural Questions (Kwiatkowski et al., 2019) and TyDiQA (Clark et al., 2020) .",
"cite_spans": [
{
"start": 757,
"end": 793,
"text": "CORD-19 COVID-19 (Wang et al., 2020;",
"ref_id": null
},
{
"start": 794,
"end": 812,
"text": "Tang et al., 2020)",
"ref_id": null
},
{
"start": 914,
"end": 940,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF4"
},
{
"start": 952,
"end": 972,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://www.ibm.com/watson/services/language-translator/ 5 https://reactjs.org/ 6 https://hub.docker.com/_/elasticsearch/ 7 In our implementation we built an index in English and Spanish as a proof of concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/huggingface/transformers",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Andy Sakrajda for the help with IBM Watson Translation pipeline and Vittorio Castelli and Cezar Pendus with the help in building the multiligual search corpus. We would also like to thank the authors of the MLQA and XLM-R paper for helping us by sharing the hyper-parameters to repeat some of their experiments and help us while we debug the XLM-R models for Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "8"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "CFO: A framework for building production nlp systems",
"authors": [
{
"first": "Rishav",
"middle": [],
"last": "Chakravarti",
"suffix": ""
},
{
"first": "Cezar",
"middle": [],
"last": "Pendus",
"suffix": ""
},
{
"first": "Andrzej",
"middle": [],
"last": "Sakrajda",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Ferritto",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Vittorio",
"middle": [],
"last": "Castelli",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Murdock",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Avirup",
"middle": [],
"last": "Sil",
"suffix": ""
}
],
"year": 2019,
"venue": "EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rishav Chakravarti, Cezar Pendus, Andrzej Sakrajda, Anthony Ferritto, Lin Pan, Michael Glass, Vittorio Castelli, J William Murdock, Radu Florian, Salim Roukos, and Avirup Sil. 2019. CFO: A framework for building production nlp systems. EMNLP-IJCNLP, Demo Track.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Nikolaev",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jenni- maria Palomaki. 2020. Tydi qa: A benchmark for information-seeking question answering in typologically diverse languages. TACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.02116"
]
},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual repre- sentation learning at scale. arXiv preprint arXiv:1911.02116.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirec- tional transformers for language understanding. In NAACL-HLT.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Natural Questions: a benchmark for question answering research. TACL",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Kelcey",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Matthew Kelcey, Jacob Devlin, Kenton Lee, Kristina N. Toutanova, Llion Jones, Ming- Wei Chang, Andrew Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: a benchmark for question answering research. TACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Cross-lingual language model pretraining",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems (NeurIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. Advances in Neural Information Processing Systems (NeurIPS).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Mlqa: Evaluating crosslingual extractive question answering",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Ouz",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Barlas Ouz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Evaluating cross- lingual extractive question answering.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "RoBERTa: A robustly optimized BERT pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. CoRR.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "2012. langid.py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the ACL 2012 System Demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea, July. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Radu Florian, and Avirup Sil. 2019. Frustratingly easy natural question answering",
"authors": [
{
"first": "Lin",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Rishav",
"middle": [],
"last": "Chakravarti",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Ferritto",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Glass",
"suffix": ""
},
{
"first": "Alfio",
"middle": [],
"last": "Gliozzo",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lin Pan, Rishav Chakravarti, Anthony Ferritto, Michael Glass, Alfio Gliozzo, Salim Roukos, Radu Florian, and Avirup Sil. 2019. Frustratingly easy natural question answering.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "SQuAD: 100,000+ questions for machine comprehension of text",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Konstantin",
"middle": [],
"last": "Lopyrev",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2016,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. EMNLP.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Know what you don't know: Unanswerable questions for SQuAD",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Robin",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.03822"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don't know: Unanswerable questions for SQuAD. arXiv preprint arXiv:1806.03822.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "grpc design and implementation, 5. Talk by Varun Talwar",
"authors": [
{
"first": "Varun",
"middle": [],
"last": "Talvar",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Varun Talvar. 2016. grpc design and implementation, 5. Talk by Varun Talwar, Product Manager at Google at Stanford, California [Accessed: 2019 06 20].",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Phng Ths. B\u00f9i Cm, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly bootstrapping a question answering dataset for covid-19",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Tang",
"suffix": ""
},
{
"first": "Rodrigo",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "Edwin",
"middle": [
"M"
],
"last": "Zhang",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Gupta",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Raphael Tang, Rodrigo Nogueira, Edwin M. Zhang, Nikhil Gupta, Phng Ths. B\u00f9i Cm, Kyunghyun Cho, and Jimmy Lin. 2020. Rapidly bootstrapping a question answering dataset for covid-19. ArXiv, abs/2004.11339.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. Curran Associates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Anserini: Enabling the use of lucene for information retrieval research",
"authors": [
{
"first": "Peilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of lucene for information retrieval research. SIGIR. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "HotpotQA: A dataset for diverse, explainable multi-hop question answering",
"authors": [
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "William",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1809.09600"
]
},
"num": null,
"urls": [],
"raw_text": "Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W Cohen, Ruslan Salakhutdinov, and Christo- pher D Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. arXiv preprint arXiv:1809.09600.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "End-toend open-domain question answering with bertserini",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yuqing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Aileen",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xingyu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Luchen",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Kun",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to- end open-domain question answering with bertserini.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "What is the incubation period of the virus?"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Quelle est la p\u00e9riode d'incubation du virus?ENGLISH SPANISH Examples of Q/C pairs about COVID-19. The answers are shown as answer."
},
"TABREF0": {
"type_str": "table",
"text": "Table 1: (Left) F1 score on the MLQA test set for the cross-lingual transfer task (XLT) per language and the mean XLT and G-XLT scores. Training data is SQUAD 1.1. B is the Base model and L is the Large model. (Right) F1 XLT scores on the CORD-19 dataset when training on SQUAD 1.1 in three languages.",
"content": "<table><tr><td/><td/><td/><td/><td/><td colspan=\"2\">MLQA</td><td/><td/><td/><td/><td>COVID-19</td></tr><tr><td>F1</td><td>en</td><td>es</td><td>de</td><td>ar</td><td>hi</td><td>vi</td><td>zh</td><td colspan=\"3\">XLT G-XLT en</td><td>es</td><td>zh</td></tr><tr><td colspan=\"3\">ROBERTA L 84.4 -</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td colspan=\"2\">27.1 -</td><td>-</td></tr><tr><td>M-BERT</td><td colspan=\"9\">80.4 66.7 61.3 51.9 50.7 61.6 60.2 61.8 52.1</td><td colspan=\"2\">22.3 17.0 20.5</td></tr><tr><td>XLM-R B</td><td colspan=\"9\">80.1 67.6 63.0 56.3 61.1 66.2 61.6 65.1 41.2</td><td colspan=\"2\">21.9 19.0 17.0</td></tr><tr><td>XLM-R L</td><td colspan=\"9\">83.9 74.0 69.9 66.3 71.2 74.0 69.9 72.7 67.9</td><td colspan=\"2\">27.0 28.7 25.0</td></tr></table>",
"html": null,
"num": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"content": "<table><tr><td/><td/><td colspan=\"2\">for comparison.</td><td/></tr><tr><td colspan=\"5\">Context Question # Examples T GP U T CP U</td></tr><tr><td>hi</td><td>ar</td><td>186</td><td>50</td><td>721</td></tr><tr><td>en</td><td>de</td><td>512</td><td>35</td><td>1525</td></tr><tr><td>zh</td><td>hi</td><td>189</td><td>61</td><td>628</td></tr><tr><td>en</td><td>ar</td><td>517</td><td>69</td><td>1215</td></tr><tr><td>zh</td><td>ar</td><td>188</td><td>57</td><td>615</td></tr></table>",
"html": null,
"num": null
},
"TABREF2": {
"type_str": "table",
"text": "Dev Set Performance on MLQA benchmarked on the CPU and GPU. Times in seconds.",
"content": "<table/>",
"html": null,
"num": null
}
}
}
}