{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:27:54.632105Z"
},
"title": "Cross-lingual Information Retrieval with BERT",
"authors": [
{
"first": "Zhuolin",
"middle": [],
"last": "Jiang",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Amro",
"middle": [],
"last": "El-Jaroudi",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "William",
"middle": [],
"last": "Hartmann",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Damianos",
"middle": [],
"last": "Karakos",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
},
{
"first": "Lingjun",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {
"laboratory": "Raytheon BBN Technologies",
"institution": "",
"location": {
"postCode": "02138",
"settlement": "Cambridge",
"region": "MA"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Multiple neural language models have been developed recently, e.g., BERT and XLNet, and achieved impressive results in various NLP tasks including sentence classification, question answering and document ranking. In this paper, we explore the use of the popular bidirectional language model, BERT, to model and learn the relevance between English queries and foreign-language documents in the task of cross-lingual information retrieval. A deep relevance matching model based on BERT is introduced and trained by finetuning a pretrained multilingual BERT model with weak supervision, using home-made CLIR training data derived from parallel corpora. Experimental results of the retrieval of Lithuanian documents against short English queries show that our model is effective and outperforms the competitive baseline approaches.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Multiple neural language models have been developed recently, e.g., BERT and XLNet, and achieved impressive results in various NLP tasks including sentence classification, question answering and document ranking. In this paper, we explore the use of the popular bidirectional language model, BERT, to model and learn the relevance between English queries and foreign-language documents in the task of cross-lingual information retrieval. A deep relevance matching model based on BERT is introduced and trained by finetuning a pretrained multilingual BERT model with weak supervision, using home-made CLIR training data derived from parallel corpora. Experimental results of the retrieval of Lithuanian documents against short English queries show that our model is effective and outperforms the competitive baseline approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A traditional cross-lingual information retrieval (CLIR) system consists of two components: machine translation and monolingual information retrieval (Nie, 2010) . The idea is to solve the translation problem first, then the crosslingual IR problem become monolingual IR. However, the performance of translation-based approaches is limited by the quality of the machine translation and it needs to handle to translation ambiguity (Zhou et al., 2012) . One possible solution is to consider the translation alternatives of individual words of queries or documents as in (Zbib et al., 2019; Xu and Weischedel, 2000) , which provides more possibilities for matching query words in relevant documents compared to using single translations. But the alignment information is necessarily required in the training stage of the CLIR system to extract target-source word pairs from parallel data and this is not a trivial task. To achieve good performance in IR, deep neural networks have been widely used in this task. These approaches can be roughly divided into two categories. The first class of approaches uses pretrained word representations or embeddings, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014) , directly to improve IR models. Usually these word embeddings are pretrained on large scale text corpora using co-occurrence statistics, so they have modeled the underlying data distribution implicitly and should be helpful for building discriminative models. (Vulic and Moens, 2015) and (Litschko et al., 2018) used pretrained bilingual embeddings to represent queries and foreign documents, and then ranked documents by cosine similarity. (Zheng and Callan, 2015) used word2vec embeddings to learn query term weights. However, their training objectives of trained neural embeddings are different from the objective of IR. The second set of approaches design and train deep neural networks based on IR objectives. These methods have shown impressive results on monolingual IR datasets (Xiong et al., 2017; Guo et al., 2016; Dehghani et al., 2017) . They usually rely on large amounts of query-document relevance annotated data that are expensive to obtain, especially for low-resource language pairs in crosslingual IR tasks. Moreover, it is not clear whether they generalize well when documents and queries are in different languages. Recently multiple pretrained language models have been developed such as BERT (Devlin et al., 2019) and XL-Net (Yang et al., 2019) , that model the underlying data distribution and learn the linguistic patterns or features in language. These models have outperformed traditional word embeddings on various NLP tasks (Yang et al., 2019; Devlin et al., 2019; Peters et al., 2018; Lan et al., 2019) . These pretrained models also provided new opportunities for IR. Therefore, several recent works have successfully applied BERT pretrained models for monolingual IR (Dai and Callan, 2019; Akkalyoncu Yilmaz et al., 2019) and passage re-ranking (Nogueira and Cho, 2019) . In this paper, we extend and apply BERT as a ranker for CLIR. We introduce a cross-lingual deep relevance matching model for CLIR based on BERT. We finetune a pretrained multilingual model with home-made CLIR data and obtain very promising results. In order to finetune the model, we construct a large amount of training data from parallel data, which is mainly used for machine translation and is much easier to obtain compared to the relevance labels of query-document pairs. 
In addition, we don't require the source-target alignment information to construct training samples and avoid the quality issues of machine translation in traditional CLIR. The entire model is specifically optimized using a CLIR objective. Our main contributions are:",
"cite_spans": [
{
"start": 150,
"end": 161,
"text": "(Nie, 2010)",
"ref_id": "BIBREF3"
},
{
"start": 430,
"end": 449,
"text": "(Zhou et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 568,
"end": 587,
"text": "(Zbib et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 588,
"end": 612,
"text": "Xu and Weischedel, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 1169,
"end": 1191,
"text": "(Mikolov et al., 2013)",
"ref_id": null
},
{
"start": 1202,
"end": 1227,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1489,
"end": 1512,
"text": "(Vulic and Moens, 2015)",
"ref_id": "BIBREF9"
},
{
"start": 1517,
"end": 1540,
"text": "(Litschko et al., 2018)",
"ref_id": null
},
{
"start": 1670,
"end": 1694,
"text": "(Zheng and Callan, 2015)",
"ref_id": "BIBREF15"
},
{
"start": 2015,
"end": 2035,
"text": "(Xiong et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 2036,
"end": 2053,
"text": "Guo et al., 2016;",
"ref_id": null
},
{
"start": 2054,
"end": 2076,
"text": "Dehghani et al., 2017)",
"ref_id": "BIBREF2"
},
{
"start": 2439,
"end": 2465,
"text": "BERT (Devlin et al., 2019)",
"ref_id": null
},
{
"start": 2477,
"end": 2496,
"text": "(Yang et al., 2019)",
"ref_id": "BIBREF12"
},
{
"start": 2682,
"end": 2701,
"text": "(Yang et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 2702,
"end": 2722,
"text": "Devlin et al., 2019;",
"ref_id": null
},
{
"start": 2723,
"end": 2743,
"text": "Peters et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 2744,
"end": 2761,
"text": "Lan et al., 2019)",
"ref_id": null
},
{
"start": 2928,
"end": 2950,
"text": "(Dai and Callan, 2019;",
"ref_id": "BIBREF1"
},
{
"start": 2951,
"end": 2982,
"text": "Akkalyoncu Yilmaz et al., 2019)",
"ref_id": "BIBREF0"
},
{
"start": 3006,
"end": 3030,
"text": "(Nogueira and Cho, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We introduce a cross-lingual deep relevance architecture with BERT, where a pretrained multilingual BERT model is adapted for cross-lingual IR.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "\u2022 We define a proxy CLIR task which can be used to easily construct CLIR training data from bitext data, without requiring any amount of relevance labels of query-document pairs in different languages. 2. Our approach 2.1. Motivation BERT (Devlin et al., 2019) is the first bidirectional language model, which makes use of left and right word contexts simultaneously to predict word tokens. It is trained by optimizing two objectives: masked word prediction and next sentence prediction. As shown in Figure 1 , the inputs are a pair of masked sentences in the same language, where some tokens in the both sentences are replaced by symbol ' [Mask] '. The BERT model is trained to predict these masked tokens, by capturing within or across sentence meaning (or context), which is important for IR. The second objective aims to judge whether the sentences are consecutive or not. It encourages the BERT model to model the relationship between two sentences. The self-attention mechanism in BERT models the local interactions of words in sentence A with words in sentence B, so it can learn pairwise sentence or word-token relevance patterns. The entire BERT model is pretrained on large scale text corpora and learns linguistic patterns in language. So search tasks with little training data can still benefit from the pretrained model. Finetuning BERT on search task makes it learn IR specific features. It can capture query-document exact term matching, bi-gram features for monolingual IR as introduced in (Dai and Callan, 2019) . Local matchings of words and ngrams have proven to be strong neural IR features. Bigram modeling is important, because it can learn the meaning of word compounds (bi-grams) from the meanings of individual words. Motivated by this work, we aim to finetune the pretrained BERT model for cross-lingual IR. dict the relevance score, which is the probability, p(q|s), of query q occurring in sentence s. There are three types of parameterized layers in this model:",
"cite_spans": [
{
"start": 234,
"end": 260,
"text": "BERT (Devlin et al., 2019)",
"ref_id": null
},
{
"start": 640,
"end": 646,
"text": "[Mask]",
"ref_id": null
},
{
"start": 1506,
"end": 1528,
"text": "(Dai and Callan, 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [
{
"start": 500,
"end": 508,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "(1) an embedding layer including token embedding, sentence embedding and positional embedding (Devlin et al., 2019); (2) BERT layers which are 12 layers of transformer blocks;",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT for CLIR",
"sec_num": "2.2."
},
{
"text": "(3) a feed-forward neural network (FFNN) which is a single layer neural network in our implementation. The embedding layer and BERT layer are initialized with the pretrained BERT model 1 , while the FFNN is learned from scratch. During finetuning, the entire model is tuned to learn more CLIR-specific features. We only train the model using single-word queries since the queries in MATERIAL dataset are typically short and keyword based, but our approach can be easily extended to be multi-word queries or query phrases. After finetuning, this model produces a sentence-level relevance score for a pair of input query and foreign language sentence. For the CLIR task, given a user-issued query Q, the foreignlanguage document Doc is ranked by its relevance score with respect to Q. The document-level relevance score P (Doc is R|Q) is calculated by aggregating the sentencelevel scores with a Noisy-OR model: P (Doc is R|Q) = P (Q occurs at least in one sentence in Doc)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT for CLIR",
"sec_num": "2.2."
},
{
"text": "= 1 \u2212 s\u2208Doc (1 \u2212 P (Q|s)) (1) = 1 \u2212 s\u2208Doc (1 \u2212 q\u2208Q p(q|s))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT for CLIR",
"sec_num": "2.2."
},
{
"text": "Note that a multi-word query will be split into multiple single-word queries when computing document-level relevance scores. The individual query terms q \u2208 Q are modeled independently. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Finetuning BERT for CLIR",
"sec_num": "2.2."
},
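For concreteness, the scoring procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' released implementation; it assumes the HuggingFace transformers package, and the model name and helper functions below are placeholders (in the paper, the classifier head is the finetuned FFNN rather than a freshly initialized one).

```python
# Minimal sketch of the CLIR scoring described above: a BERT ranker produces a
# sentence-level probability p(q|s) for a (query, foreign sentence) pair, and
# document scores are aggregated with a Noisy-OR over sentences and query terms.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Placeholder: the paper finetunes multilingual BERT on weakly supervised CLIR data.
tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
model.eval()

def sentence_relevance(query_term: str, sentence: str) -> float:
    """p(q|s): probability that query term q occurs in a plausible translation of s."""
    # Input is packed as [CLS] q [SEP] s [SEP]; the [CLS] embedding feeds the classifier head.
    inputs = tokenizer(query_term, sentence, return_tensors="pt",
                       truncation=True, max_length=128)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def document_relevance(query_terms, document_sentences) -> float:
    """Noisy-OR aggregation: P(Doc is R|Q) = 1 - prod_s (1 - prod_q p(q|s))."""
    prod_over_sentences = 1.0
    for s in document_sentences:
        p_q_given_s = 1.0
        for q in query_terms:          # multi-word queries are split into terms
            p_q_given_s *= sentence_relevance(q, s)
        prod_over_sentences *= (1.0 - p_q_given_s)
    return 1.0 - prod_over_sentences
```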
{
"text": "To finetune the BERT CLIR model, we start with bitext data in English and the desired foreign-language. We then define a proxy CLIR task to construct training samples: Given a foreign-language sentence s and an English query term q, sentence s is relevant to q if q occurs in one plausible translation of s. Any non-stop English word in the bitext can serve as a single-word query. The English word and its the corresponding foreign-language sentence constitute a positive example. Similarly, we randomly select other words from the English vocabulary, which are not in the English sentence, to be query words to construct negative examples. Table 1 shows an illustration of constructing four training examples from a bitext in Lithuanian and English. We select 'doctors' and 'allege' in the English sentence as two single-word queries and use the Lithuanian sentence to construct two positive examples, and pick another two words \"controller\" and \"leisure\" in the English vocabulary, which are not in the English sentence, to construct negative examples. In this way, we can construct a large-scale training corpus for CLIR using parallel data only, which are much easier to obtain compared to query-document relevance annotated data.",
"cite_spans": [],
"ref_spans": [
{
"start": 642,
"end": 649,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Finetuning using Weak Supervision",
"sec_num": "2.3."
},
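A rough sketch of this data-construction step is shown below. It is not the authors' code; the stop-word list, vocabulary and sampling strategy are simplified assumptions, with two negatives per positive to match the 1:2 ratio used in the experiments.

```python
# Sketch of the weak-supervision data construction: every non-stop English word in a
# bitext line yields a positive (query, foreign sentence) pair, and randomly sampled
# vocabulary words absent from the English side yield negative pairs.
import random

STOP_WORDS = {"the", "that", "in", "is", "a", "of"}   # placeholder stop list

def make_examples(foreign_sentence, english_sentence, english_vocab, neg_per_pos=2):
    english_tokens = set(english_sentence.lower().split())
    examples = []
    for word in english_tokens - STOP_WORDS:
        examples.append((word, foreign_sentence, 1))           # positive example
        candidates = [w for w in english_vocab if w not in english_tokens]
        for neg in random.sample(candidates, neg_per_pos):
            examples.append((neg, foreign_sentence, 0))        # negative example
    return examples

# The Table 1 bitext, for illustration:
lt = "medik\u0173 teigimu dabar veikianti sistema efektyvi"
en = "doctors allege that the system currently in operation is effective"
vocab = ["controller", "leisure", "doctors", "allege", "system"]
print(make_examples(lt, en, vocab)[:4])
```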
{
"text": "We report experimental results on the retrieval of Lithuanian text and speech documents against short English queries. We use queries and retrieval corpora provided by the IARPA MATERIAL program. The retrieval corpora have two datasets: an analysis set (about 800 documents) and a development set (about 400 documents). The query set Q1 contains 300 queries. To construct the training set, we use parallel sentences released under the MATERIAL (MAT, 2017) and the LO-RILEI (LOR, 2015) programs. We also include a parallel lexicon downloaded from Panlex (Kamholz et al., 2014). These parallel data contain about 2.6 million pairs of bitexts. We extract about 54 million training samples from these parallel data to finetune BERT. The positive-negative ratio of CLIR training data is 1 : 2. To finetune BERT, we use the ADAM optimizer with an initial learning rate set to 1 \u00d7 10 \u22125 , batch size of 32 and max sequence length of 128. We report the results from the model trained for one epoch. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
{
"text": "AQW V = 1 \u2212 P M iss \u2212 \u03b2P F A ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
{
"text": "where P M iss is the average per-query miss rate, P F A is the average per-query false alarm rate and \u03b2 is a constant that changes the relative importance of the two types of error. We use \u03b2 = 40. AQWV is the score using a single selected detection threshold. MQWV is the score that could be obtained with the optimal detection threshold.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
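As a worked example of the metric above (a sketch with illustrative numbers only, not results from the paper):

```python
def aqwv(p_miss: float, p_fa: float, beta: float = 40.0) -> float:
    """AQWV = 1 - P_Miss - beta * P_FA (average per-query miss / false-alarm rates)."""
    return 1.0 - p_miss - beta * p_fa

# Example: a system with a 20% average miss rate and a 0.5% average false-alarm rate.
print(aqwv(0.20, 0.005))   # 1 - 0.20 - 40 * 0.005 = 0.60
```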
{
"text": "To verify the effectiveness of our BERT CLIR model, we compare against four baselines: Probabilistic CLIR Model (Xu and Weischedel, 2000) is a generative probabilistic model which requires a probabilistic translation dictionary. The translation dictionary is generated from the word alignments of the parallel data. We used the GIZA++ (Och and Ney, 2003) and the Berkeley aligner (Haghighi et al., 2009) to estimate lexical translation probabilities. Probabilistic Occurrence Model (Zbib et al., 2019) computes the document relevance score as the probability that each query term q occurs at least once in the document. P (Doc is R|Q)",
"cite_spans": [
{
"start": 112,
"end": 137,
"text": "(Xu and Weischedel, 2000)",
"ref_id": "BIBREF11"
},
{
"start": 335,
"end": 354,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF5"
},
{
"start": 380,
"end": 403,
"text": "(Haghighi et al., 2009)",
"ref_id": null
},
{
"start": 482,
"end": 501,
"text": "(Zbib et al., 2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
{
"text": "= q\u2208Q 1 \u2212 f \u2208Doc (1 \u2212 p(q|f ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
{
"text": ", where f is a foreign term in the document. Query Relevance Attentional Neural Network Model (QRANN) (Zhao et al., 2019) uses an attention mechanism to compute a context vector derived from word embeddings in the foreign sentences, followed by a feed-forward layer to capture the relationship between query words. The idea is similar to a single transformer layer. The QRANN models are trained on multi-word queries, which are noun phrases in the English sentences of bitexts, and single-word queries. Dot-product Model is a simplified version of QRANN, that computes a context vector from the word embeddings of foreign sentence using multiplicative attention, followed by the dot product of between the query embeddings and the context vector. The dot-product model is trained using single-word queries only.",
"cite_spans": [
{
"start": 102,
"end": 121,
"text": "(Zhao et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3."
},
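Returning to the probabilistic occurrence baseline above, a minimal sketch of its document scoring is given below; the translation-probability table here is a toy placeholder, not the table estimated from GIZA++ or Berkeley-aligner alignments.

```python
# Probabilistic occurrence baseline: P(Doc is R|Q) = prod_q [1 - prod_f (1 - p(q|f))],
# where p(q|f) is the lexical translation probability of query term q given foreign term f.
def occurrence_score(query_terms, doc_terms, trans_prob):
    score = 1.0
    for q in query_terms:
        p_not_occurring = 1.0
        for f in doc_terms:
            p_not_occurring *= (1.0 - trans_prob.get((q, f), 0.0))
        score *= (1.0 - p_not_occurring)
    return score

# Toy translation table (illustrative values only).
trans_prob = {("doctors", "medik\u0173"): 0.7, ("system", "sistema"): 0.9}
doc = "medik\u0173 teigimu dabar veikianti sistema efektyvi".split()
print(occurrence_score(["doctors", "system"], doc, trans_prob))   # 0.7 * 0.9 = 0.63
```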
{
"text": "The QRANN and Dot-product models are trained using the same CLIR training data used to train BERT model described earlier. The classification results of different neural CLIR approaches are shown in Table 2 . The CLIR BERT model achieves the best result compared to other two neural models. From the confusion matrix in the ",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 206,
"text": "Table 2",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Classification Accuracy of different neural CLIR models",
"sec_num": "3.1."
},
{
"text": "We compare the MAP score of the BERT model with those of other CLIR models in Table 3 . In the table, we report MAP scores on the phrase query subset and the entire query set separately, to see how our model trained with singleword queries performs on query phrases. In the model training stage, QRANN model is the only model that is trained with the query phrases directly, all other models (including BERT) in this experiment will split a multi-word query or query phrase into multiple single-word queries. Surprisingly, the BERT MAP scores for the phrase query subset is the best compared with the performances of other approaches. It shows that BERT model can produce better relevance model for single-word queries and foreignlanguage sentence.The table also shows that BERT outperforms the other neural approaches over the entire query set.",
"cite_spans": [],
"ref_spans": [
{
"start": 78,
"end": 85,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "MAP scores of different CLIR models",
"sec_num": "3.2."
},
{
"text": "We compare BERT models with other CLIR models in terms of MQWV scores. The results are summarized in Table 4 . The first row in the table shows the best results of non-neural CLIR models, which are probabilistic CLIR model and probabilistic occurrence model. In this table, we separate the results based on the type of source documents: text or speech. Speech documents are converted into text documents via automatic speech recognition (Povey et al., 2011) . The results of the BERT model on the speech sets are the best, compared with the non-neural CLIR systems, QRANN and Dot-product models, while the results on the text sets are comparable to those from the non-neural systems, and better than the other neural systems. ",
"cite_spans": [
{
"start": 437,
"end": 457,
"text": "(Povey et al., 2011)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 101,
"end": 108,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "MQWV scores of different CLIR models",
"sec_num": "3.3."
},
{
"text": "In Figure 3 , we visualize the attention patterns produced by the attention heads from a transformer layer for the input English query 'writing well' and the foreign-language sentence 'mano nuomone \u0161i autore ra\u0161o arba gerai arba blogai arba vidutini\u0161kai'. The query term 'writing' attends to the foreign word 'ra\u0161o' (source-target word matching), while also attends to the foreign word 'gerai' , which correspond to the next English word 'well' in the query (bigram modeling). BERT CLIR model is able to capture these local matching features, which have been proven to be strong neural IR features.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Analysis on attention patterns from BERT",
"sec_num": "3.4."
},
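The kind of inspection behind Figure 3 can be approximated with the HuggingFace transformers API as sketched below. This is an assumption-laden illustration (it uses the pretrained, not the finetuned, multilingual BERT model), not the authors' visualization code.

```python
# Sketch of extracting per-head attention patterns, as visualized in Figure 3.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertModel.from_pretrained("bert-base-multilingual-cased", output_attentions=True)
model.eval()

query = "writing well"
sentence = "mano nuomone \u0161i autore ra\u0161o arba gerai arba blogai arba vidutini\u0161kai"
inputs = tokenizer(query, sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
last_layer = outputs.attentions[-1][0]   # shape: (num_heads, seq_len, seq_len)
head = last_layer[11]                    # head 12 (0-indexed) of the last layer
q_pos = 1                                # first word piece of the query ([CLS] is position 0)
top = torch.topk(head[q_pos], k=3)       # tokens most attended to by that query piece
print(tokens[q_pos],
      [(tokens[int(i)], round(float(v), 3)) for v, i in zip(top.values, top.indices)])
```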
{
"text": "We introduce a deep relevance matching model based on BERT language modeling architecture for cross-lingual document retrieval. The self-attention based architecture models the interactions of query words with words in the foreign-language sentence. The relevance model is initialized by the pretrained multi-lingual BERT model, and then finetuned with home-made CLIR training data that are derived from parallel data. The results of the CLIR BERT model on the data released by the MATERIAL program are better than two other competitive neural baselines, and comparable to the results of the probabilistic CLIR model. Our future work will use public IR datasets in English to learn IR features with BERT and transfer them to crosslingual IR. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "4."
},
{
"text": "We used the pretrained multi-lingual BERT model, which is trained on the concatenation of monolingual Wikipedia corpora from 104 languages. It has 12 layers, 768 hidden dimensions, 12 self-attention heads and 110 million parameters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense US Air Force Research Laboratory contract number FA8650-17-C-9118.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
},
{
"text": " '14) . Lan, Z.-Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019) . Albert: A lite bert for selfsupervised learning of language representations. ArXiv. Litschko, R., Glavas, G., Ponzetto, S. P., and Vulic, I. (2018) . Unsupervised cross-lingual information retrieval using monolingual data only. In Proceedings of the International ACM SIGIR Conference on Research and Development in Information Retrieval. (2015). Darpa lorelei program -broad agency announcement (baa). https: //www.darpa.mil/program/ low-resource-languages-for-emergent-incidents. (2017). Iarpa material program -broad agency announcement (baa). https://www.iarpa.gov/index. php/research-programs/material. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and",
"cite_spans": [
{
"start": 55,
"end": 89,
"text": "Sharma, P., and Soricut, R. (2019)",
"ref_id": null
},
{
"start": 190,
"end": 239,
"text": "Glavas, G., Ponzetto, S. P., and Vulic, I. (2018)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1,
"end": 5,
"text": "'14)",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Applying BERT to document retrieval with birch",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Akkalyoncu Yilmaz",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 EMNLP-IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akkalyoncu Yilmaz, Z., Wang, S., Yang, W., Zhang, H., and Lin, J. (2019). Applying BERT to document re- trieval with birch. In Proceedings of the 2019 EMNLP- IJCNLP.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Deeper text understanding for ir with contextual neural language modeling",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dai, Z. and Callan, J. (2019). Deeper text understanding for ir with contextual neural language modeling. In Pro- ceedings of the 42nd International ACM SIGIR Confer- ence on Research and Development in Information Re- trieval.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "M",
"middle": [],
"last": "Dehghani",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Zamani",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Severyn",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Kamps",
"suffix": ""
},
{
"first": "W",
"middle": [
"B"
],
"last": "Croft",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dehghani, M., Zamani, H., Severyn, A., Kamps, J., and Croft, W. B. (2017). Neural ranking models with weak supervision. In Proceedings of the 40th International Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Cross-Language Information Retrieval",
"authors": [
{
"first": "J.-Y",
"middle": [],
"last": "Nie",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nie, J.-Y. (2010). Cross-Language Information Retrieval. Morgan and Claypool Publishers.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Passage re-ranking with BERT",
"authors": [
{
"first": "R",
"middle": [],
"last": "Nogueira",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nogueira, R. and Cho, K. (2019). Passage re-ranking with BERT. volume abs/1901.04085.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "F",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Och, F. J. and Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "J",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "C",
"middle": [
"D"
],
"last": "Mannin",
"suffix": ""
}
],
"year": 2014,
"venue": "Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pennington, J., Socher, R., and Mannin, C. D. (2014). Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "M",
"middle": [],
"last": "Peters",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextu- alized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The kaldi speech recognition toolkit",
"authors": [
{
"first": "D",
"middle": [],
"last": "Povey",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ghoshal",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Boulianne",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Burget",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Glembek",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Goel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Hannemann",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Motlicek",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Schwarz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Silovsky",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Stemmer",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Vesely",
"suffix": ""
}
],
"year": 2011,
"venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glem- bek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., Silovsky, J., Stemmer, G., and Vesely, K. (2011). The kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Monolingual and crosslingual information retrieval models based on (bilingual) word embeddings",
"authors": [
{
"first": "I",
"middle": [],
"last": "Vulic",
"suffix": ""
},
{
"first": "M.-F",
"middle": [],
"last": "Moens",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vulic, I. and Moens, M.-F. (2015). Monolingual and cross- lingual information retrieval models based on (bilingual) word embeddings. In Proceedings of the 38th Interna- tional ACM SIGIR Conference on Research and Devel- opment in Information Retrieval.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "End-to-end neural ad-hoc ranking with kernel pooling",
"authors": [
{
"first": "C",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Callan",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Power",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiong, C., Dai, Z., Callan, J., Liu, Z., and Power, R. (2017). End-to-end neural ad-hoc ranking with kernel pooling. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in In- formation Retrieval.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Cross-lingual information retrieval using hidden markov models",
"authors": [
{
"first": "J",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xu, J. and Weischedel, R. (2000). Cross-lingual informa- tion retrieval using hidden markov models. In Proceed- ings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Xlnet: Generalized autoregressive pretraining for language understanding",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Carbonell",
"suffix": ""
},
{
"first": "R",
"middle": [
"R"
],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Q",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2019,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. (2019). Xlnet: Generalized au- toregressive pretraining for language understanding. In Advances in Neural Information Processing Systems 32.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Neural-network lexical translation for cross-lingual IR from text and speech",
"authors": [
{
"first": "R",
"middle": [],
"last": "Zbib",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Hartmann",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Deyoung",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Rivkin",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zbib, R., Zhao, L., Karakos, D., Hartmann, W., DeYoung, J., Huang, Z., Jiang, Z., Rivkin, N., Zhang, L., Schwartz, R. M., and Makhoul, J. (2019). Neural-network lexi- cal translation for cross-lingual IR from text and speech. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Informa- tion Retrieval.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Weakly supervised attentional model for low resource ad-hoc cross-lingual information retrieval",
"authors": [
{
"first": "L",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Zbib",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Karakos",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhao, L., Zbib, R., Jiang, Z., Karakos, D., and Huang, Z. (2019). Weakly supervised attentional model for low resource ad-hoc cross-lingual information retrieval. In Proceedings of the 2nd Workshop on Deep Learning Ap- proaches for Low-Resource NLP (DeepLo 2019).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Learning to reweight terms with distributed representations",
"authors": [
{
"first": "G",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Callan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zheng, G. and Callan, J. (2015). Learning to reweight terms with distributed representations. In Proceedings of the 38th International ACM SIGIR Conference on Re- search and Development in Information Retrieval.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Translation techniques in crosslanguage information retrieval",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Truran",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Brailsford",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Wade",
"suffix": ""
},
{
"first": "Ashman",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2012,
"venue": "ACM Comput. Surv",
"volume": "45",
"issue": "1",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou, D., Truran, M., Brailsford, T., Wade, V., and Ashman, H. (2012). Translation techniques in cross- language information retrieval. ACM Comput. Surv., 45(1):1-44.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "BERT pretraining architecture (Devlin et al., 2019). FFNN denotes feed-forward neural network.",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "shows the proposed CLIR model architecture with BERT. The inputs are pairs of single-word queries q in English and foreign-language sentences s. This is different from the pretraining model inFigure 1, where the model is fed with pairs of sentences in the same language. We concatenate the query q and the foreign-language sentence s into a text sequence '[[CLS], q, [SEP], s,[SEP]]'. The output embedding of the first token '[CLS]' is used as a representation of the entire query-sentence pair. Then it is fed into a single layer feed-forward neural network to pre-",
"type_str": "figure"
},
"FIGREF2": {
"num": null,
"uris": null,
"text": "Fine-tuned CLIR BERT model architecture.",
"type_str": "figure"
},
"FIGREF3": {
"num": null,
"uris": null,
"text": "Visualization of CLIR BERT model. Colors identify the corresponding attention heads, while the line weight reflects the attention score. Different heads from layer 12 can capture different matching features. Word pieces' ra' , '##\u0161o' in Lithuanian correspond to ''write' in English while 'ger', '##ai' are for 'well' in English. Head 12 and head 4 in (a)(c) can capture source-target word matching, head9 and head1 in (b)(d) could attend to its previous or next words (bigram modeling).",
"type_str": "figure"
},
"TABREF1": {
"text": "Four training examples derived from a bitext: Source-Lithuanian: medik\u0173 teigimu dabar veikianti sistema efektyvi; Target-English: doctors allege that the system currently in operation is effective.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Approach</td><td colspan=\"3\">Accuracy Confusion Matrix</td></tr><tr><td>BERT</td><td>95.3%</td><td>0.93 0.02</td><td>0.07 0.98</td></tr><tr><td>Dot-Product</td><td>84.2%</td><td>0.74 0.07</td><td>0.26 0.93</td></tr><tr><td>QRANN</td><td>87.3%</td><td>0.73 0.003</td><td>0.27 0.997</td></tr><tr><td/><td/><td/><td>BERT sig-</td></tr><tr><td/><td/><td/><td>nificantly improves the performance of classifying relevant</td></tr><tr><td/><td/><td/><td>query-sentence pairs (i.e., true positives), while matching</td></tr><tr><td/><td/><td/><td>the performance of classifying irrelevant query-sentence</td></tr></table>",
"html": null
},
"TABREF4": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"3\">: Performance of classification accuracy on the gen-</td></tr><tr><td colspan=\"3\">erated query-sentence pairs from the bitexts of the MATE-</td></tr><tr><td colspan=\"3\">RIAL analysis set. The first column in the confusion ma-</td></tr><tr><td colspan=\"3\">trix corresponds to the positive class (i.e., relevant query-</td></tr><tr><td colspan=\"3\">sentence pair) while the second the column is the negative</td></tr><tr><td>class.</td><td/><td/></tr><tr><td>Approach</td><td colspan=\"2\">phrase query subset entire query set</td></tr><tr><td>Prob. CLIR</td><td>57.4</td><td>61.2</td></tr><tr><td>Prob. Occurrence</td><td>51.4</td><td>56.9</td></tr><tr><td>BERT</td><td>61.3</td><td>56.8</td></tr><tr><td>Dot-Product</td><td>50.8</td><td>39.2</td></tr><tr><td>QRANN</td><td>55.8</td><td>45.5</td></tr></table>",
"html": null
},
"TABREF5": {
"text": "Performance of MAP scores on the MATERIAL analysis set and Q1 queries.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}