Transformers
PyTorch
xlm-roberta
clir
colbertx
plaidx
xlm-roberta-large
Inference Endpoints
eugene-yang committed
Commit bacc415
1 Parent(s): 9c69d6f

push model
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,97 @@
  ---
+ language:
+ - en
+ - zh
+ - fa
+ - ru
+ tags:
+ - clir
+ - colbertx
+ - plaidx
+ - xlm-roberta-large
+ datasets:
+ - ms_marco
+ - hltcoe/tdist-msmarco-scores
+ task_categories:
+ - text-retrieval
+ - information-retrieval
+ task_ids:
+ - passage-retrieval
+ - cross-language-retrieval
  license: mit
  ---
+
+ # ColBERT-X for English-Chinese/Persian/Russian MLIR using Multilingual Translate-Distill
+
+ ## CLIR Model Setting
+
+ - Query language: English
+ - Query length: 32 tokens max
+ - Document language: Chinese/Persian/Russian
+ - Document length: 180 tokens max (please use MaxP to aggregate passage scores if needed; see the sketch after this list)
+
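+ A minimal MaxP sketch (an illustration, not part of the released code): split a long document into overlapping passages of at most 180 tokens, score each one against the query, and keep the maximum passage score as the document score. `score_passage` is a hypothetical stand-in for whatever ColBERT-X scoring call you use.
+
+ ```python
+ def maxp_score(query, doc_tokens, score_passage, window=180, stride=90):
+     """Score a long document as the max over its passage scores (MaxP)."""
+     scores = []
+     # Slide a `window`-token window over the document; the loop bound
+     # guarantees the tail of the document is covered by some passage.
+     for start in range(0, max(len(doc_tokens) - stride, 0) + 1, stride):
+         passage = doc_tokens[start:start + window]
+         scores.append(score_passage(query, passage))
+     return max(scores)
+ ```
+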
+ ## Model Description
+
+ Multilingual Translate-Distill is a training technique that produces state-of-the-art MLIR dense retrieval models through translation and distillation.
+ `plaidx-large-neuclir-mtd-round-robin-entries-mt5xxl-engeng` is trained with KL-divergence against the `mt5xxl` MonoT5 reranker
+ [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k)
+ run on English MS MARCO training queries and passages.
+ The teacher scores can be found in
+ [`hltcoe/tdist-msmarco-scores`](https://huggingface.co/datasets/hltcoe/tdist-msmarco-scores/blob/main/t53b-monot5-msmarco-engeng.jsonl.gz).
+
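+ The distillation objective is a KL-divergence between the student's and the teacher's score distributions over the sampled passages for each query. A minimal sketch of this loss shape (an assumption for illustration, not the exact PLAID-X training code):
+
+ ```python
+ import torch.nn.functional as F
+
+ def kd_loss(student_scores, teacher_scores):
+     # Both tensors have shape (batch, nway): one score per sampled
+     # passage. The student's log-softmax is pulled toward the
+     # teacher's (MonoT5) softmax.
+     return F.kl_div(
+         F.log_softmax(student_scores, dim=-1),
+         F.softmax(teacher_scores, dim=-1),
+         reduction="batchmean",
+     )
+ ```
+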
+ ### Training Parameters
+
+ - learning rate: 5e-6
+ - update steps: 200,000
+ - nway (number of passages per query): 6 (randomly selected from 50; 2 if using `round-robin-entries`, see below)
+ - per device batch size (number of query-passage sets): 8
+ - training GPUs: 8 NVIDIA V100s with 32 GB of memory
+
+ ### Mixing Strategies
+
+ - `mix-passages`: languages are randomly assigned to the 6 sampled passages for a given query during training.
+ - `mix-entries`: all passages in a given query-passage set are randomly assigned to the same language.
+ - `round-robin-entries`: for each query, the query-passage set is repeated `n` times to iterate through all `n` languages (the three strategies are sketched after this list).
+
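+ A toy illustration of the three mixing strategies (an illustrative sketch, not the training code; `passages` is one query's sampled passage set and `languages` the document languages):
+
+ ```python
+ import random
+
+ def mix_passages(passages, languages):
+     # Each sampled passage independently draws a random language.
+     return [(random.choice(languages), p) for p in passages]
+
+ def mix_entries(passages, languages):
+     # The whole query-passage set shares one randomly drawn language.
+     lang = random.choice(languages)
+     return [(lang, p) for p in passages]
+
+ def round_robin_entries(passages, languages):
+     # The set is repeated once per language, cycling through all of them.
+     return [[(lang, p) for p in passages] for lang in languages]
+ ```
+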
+ ## Usage
+
+ To properly load ColBERT-X models from the Hugging Face Hub, please use the following version of PLAID-X.
+ ```bash
+ pip install "PLAID-X>=0.3.1"
+ ```
+
+ The following code snippet loads the model through the Hugging Face API.
+ ```python
+ from colbert.modeling.checkpoint import Checkpoint
+ from colbert.infra import ColBERTConfig
+
+ checkpoint = Checkpoint('hltcoe/plaidx-large-neuclir-mtd-round-robin-entries-mt5xxl-engeng', colbert_config=ColBERTConfig())
+ ```
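+
+ As a quick check, you can encode text with the loaded checkpoint. This sketch assumes PLAID-X keeps the upstream ColBERT `Checkpoint` API (`queryFromText`/`docFromText`); consult the PLAID-X documentation if these differ.
+
+ ```python
+ # Token-level embeddings: queries are padded/truncated to 32 tokens,
+ # and each token is mapped to a 128-dimensional vector.
+ Q = checkpoint.queryFromText(["what causes the aurora borealis"])
+ D = checkpoint.docFromText(["极光是一种出现在高纬度夜空中的发光现象。"])  # a Chinese passage
+ print(Q.shape)  # e.g. torch.Size([1, 32, 128])
+ ```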
+
+ For a full tutorial, please refer to the [PLAID-X Jupyter Notebook](https://colab.research.google.com/github/hltcoe/clir-tutorial/blob/main/notebooks/clir_tutorial_plaidx.ipynb),
+ which is part of the [SIGIR 2023 CLIR Tutorial](https://github.com/hltcoe/clir-tutorial).
+
+ ## BibTeX entry and Citation Info
+
+ Please cite the following two papers if you use the model.
+
+ ```bibtex
+ @inproceedings{mtt,
+   title = {Neural Approaches to Multilingual Information Retrieval},
+   author = {Dawn Lawrie and Eugene Yang and Douglas W Oard and James Mayfield},
+   booktitle = {Proceedings of the 45th European Conference on Information Retrieval (ECIR)},
+   year = {2023},
+   doi = {10.1007/978-3-031-28244-7_33},
+   url = {https://arxiv.org/abs/2209.01335}
+ }
+ ```
+
+ ```bibtex
+ @inproceedings{mtd,
+   author = {Eugene Yang and Dawn Lawrie and James Mayfield},
+   title = {Distillation for Multilingual Information Retrieval},
+   booktitle = {Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR) (Short Paper) (Accepted)},
+   year = {2024}
+ }
+ ```
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "[unused0]": 250002,
+   "[unused1]": 250003
+ }
artifact.metadata ADDED
@@ -0,0 +1,75 @@
+ {
+   "query_token_id": "[unused0]",
+   "doc_token_id": "[unused1]",
+   "query_token": "[Q]",
+   "doc_token": "[D]",
+   "ncells": null,
+   "centroid_score_threshold": null,
+   "ndocs": null,
+   "index_path": null,
+   "nbits": 1,
+   "kmeans_niters": 4,
+   "resume": false,
+   "max_sampled_pid": -1,
+   "max_num_partitions": -1,
+   "use_lagacy_build_ivf": false,
+   "reuse_centroids_from": null,
+   "similarity": "cosine",
+   "bsize": 2,
+   "accumsteps": 1,
+   "lr": 5e-6,
+   "maxsteps": 400000,
+   "save_every": null,
+   "resume_optimizer": false,
+   "fix_broken_optimizer_state": false,
+   "warmup": null,
+   "warmup_bert": null,
+   "relu": false,
+   "nway": 6,
+   "n_query_alternative": 1,
+   "use_ib_negatives": false,
+   "kd_loss": "KLD",
+   "reranker": false,
+   "distillation_alpha": 1.0,
+   "ignore_scores": false,
+   "model_name": "xlm-roberta-large",
+   "force_resize_embeddings": true,
+   "shuffle_passages": true,
+   "sampling_max_beta": 1.0,
+   "over_one_epoch": true,
+   "multilang": true,
+   "nolangreg": true,
+   "query_maxlen": 32,
+   "attend_to_mask_tokens": false,
+   "interaction": "colbert",
+   "dim": 128,
+   "doc_maxlen": 220,
+   "mask_punctuation": true,
+   "checkpoint": "xlm-roberta-large",
+   "triples": "\/expscratch\/eyang\/workspace\/plaid-aux\/training_triples\/msmarco-passages\/triples_mt5xxl-monot5-mmarco-engeng.jsonl",
+   "collection": "Combination(all)[irds:neumarco\/zh\/train:docs+irds:neumarco\/fa\/train:docs+irds:neumarco\/ru\/train:docs]",
+   "queries": "irds:msmarco-passage\/train:queries",
+   "index_name": null,
+   "debug": false,
+   "overwrite": false,
+   "root": "\/expscratch\/eyang\/workspace\/plaid-aux\/experiments",
+   "experiment": "mtt-tdistill",
+   "index_root": null,
+   "name": "multi.allentriesnoreg-KLD-shuf-5e-6\/mt5xxl-monot5-mmarco-engeng\/16bat.6way",
+   "rank": 0,
+   "nranks": 8,
+   "amp": true,
+   "ivf_num_processes": 20,
+   "ivf_use_tempdir": false,
+   "ivf_merging_ways": 2,
+   "gpus": 8,
+   "meta": {
+     "hostname": "r5n03",
+     "git_branch": "eugene-training",
+     "git_hash": "d4f2493b700ceeea4592ffaf34d73dcd5c7926ba",
+     "git_commit_datetime": "2023-11-22 22:38:49-05:00",
+     "current_datetime": "Nov 23, 2023 ; 4:28PM EST (-0500)",
+     "cmd": "train.py --model_name xlm-roberta-large --training_triples \/expscratch\/eyang\/workspace\/plaid-aux\/training_triples\/msmarco-passages\/triples_mt5xxl-monot5-mmarco-engeng.jsonl --training_queries msmarco-passage\/train --training_collection neumarco\/zh\/train neumarco\/fa\/train neumarco\/ru\/train --training_collection_mixing all --other_args nolangreg=True --maxsteps 400000 --learning_rate 5e-6 --kd_loss KLD --per_device_batch_size 2 --nway 6 --run_tag multi.allentriesnoreg-KLD-shuf-5e-6\/mt5xxl-monot5-mmarco-engeng --experiment mtt-tdistill",
+     "version": "colbert-v0.4"
+   }
+ }
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "xlm-roberta-large",
+   "architectures": [
+     "HF_ColBERT"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "xlm-roberta",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "output_past": true,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.28.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "vocab_size": 250004
+ }
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:273912cf490377802bd321e8234c6d7c9643ef84120455be817ed4ac50140f6f
+ size 2240233969
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+ size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token": "<s>",
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "unk_token": "<unk>"
+ }
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c19ec03247ee31e5f42772ac32bde8dca2727b30c8310c2e585df4980a8db230
+ size 17083032
tokenizer_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "eos_token": "</s>",
+   "mask_token": {
+     "__type": "AddedToken",
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "tokenizer_class": "XLMRobertaTokenizer",
+   "unk_token": "<unk>"
+ }