zhichao-geng committed · verified
Commit fe9cbf9 · 1 parent: d0505a0

Update README.md

Files changed (1): README.md (+157, −3)
README.md CHANGED
@@ -1,3 +1,157 @@
- ---
- license: apache-2.0
- ---

---
language:
- en
- zh
- fr
- bn
- te
- es
- id
- hi
- ru
- ar
- fa
- ja
- fi
- sw
- ko
license: apache-2.0
tags:
- learned sparse
- opensearch
- transformers
- retrieval
- passage-retrieval
- document-expansion
- bag-of-words
datasets:
- miracl/miracl
---

# opensearch-neural-sparse-encoding-multilingual-v1

## Select the model
The model should be selected by weighing search relevance, model inference cost, and retrieval efficiency (FLOPS). We benchmark the model's performance on a subset of the MIRACL benchmark (we exclude Thai (th) because the uncased backbone cannot encode it).

**We recommend using this model with max_ratio pruning** (see the pruning sketch after the table below).

| Model | Inference-free for Retrieval | Model Parameters | AVG NDCG@10 | AVG FLOPS |
|-------|------------------------------|------------------|-------------|-----------|
| [opensearch-neural-sparse-encoding-multilingual-v1](https://huggingface.co/opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1) | | 160M | 0.629 | 1.3 |
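
As a rough illustration of what max_ratio pruning does (a minimal sketch, not the OpenSearch implementation; the `prune_ratio` value and the token weights below are illustrative only, not outputs of this model), the idea is to drop every entry whose weight falls below a fixed fraction of the vector's largest weight:

```python
# Minimal sketch of max_ratio pruning on a {token: weight} sparse vector.
# Assumption: prune_ratio=0.1 and the example weights are illustrative only.
def prune_by_max_ratio(sparse_vector: dict, prune_ratio: float = 0.1) -> dict:
    if not sparse_vector:
        return {}
    threshold = prune_ratio * max(sparse_vector.values())
    # keep a token only if its weight reaches prune_ratio * (largest weight)
    return {token: w for token, w in sparse_vector.items() if w >= threshold}

doc_vector = {"ny": 1.34, "weather": 1.28, "now": 0.90, "in": 0.18, "york": 0.05}
print(prune_by_max_ratio(doc_vector, prune_ratio=0.1))
# {'ny': 1.34, 'weather': 1.28, 'now': 0.9, 'in': 0.18}  -- the lowest-weight expansion token is dropped
```

Pruning away the long tail of low-weight expansion tokens shrinks the inverted index and reduces FLOPS, typically at little cost in relevance.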

## Overview

- **Paper**: [Towards Competitive Search Relevance For Inference-Free Learned Sparse Retrievers](https://arxiv.org/abs/2411.04403)
- **Fine-tuning sample**: [opensearch-sparse-model-tuning-sample](https://github.com/zhichao-aws/opensearch-sparse-model-tuning-sample)

This is a learned sparse retrieval model. It encodes documents into 105,879-dimensional **sparse vectors**. For queries, it only uses a tokenizer and a weight look-up table to generate sparse vectors. Each non-zero dimension index corresponds to a token in the vocabulary, and the weight reflects the importance of that token. The similarity score is the inner product of the query and document sparse vectors.
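
As a toy illustration of that scoring rule (the token weights below are made up, not actual model outputs), the inner product reduces to a sum of weight products over the tokens shared by the query and document vectors:

```python
# Toy {token: weight} sparse vectors; the weights are illustrative only.
query_vector = {"weather": 3.07, "ny": 1.27, "now": 1.64}
document_vector = {"weather": 1.28, "ny": 1.34, "rain": 1.05}

# Inner product of sparse vectors = sum of weight products over shared tokens.
score = sum(w * document_vector[t] for t, w in query_vector.items() if t in document_vector)
print(round(score, 4))  # 5.6314
```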

The OpenSearch neural sparse feature supports learned sparse retrieval with a Lucene inverted index (see https://opensearch.org/docs/latest/query-dsl/specialized/neural-sparse/). Indexing and search can be performed with the OpenSearch high-level API.
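
The snippet below is a minimal, hedged sketch of that flow with the `opensearch-py` Python client; it is not taken from the OpenSearch documentation. The host, credentials, index and field names, example weights, and `model_id` are placeholders, and it assumes the model has already been registered and deployed in the cluster.

```python
# Hedged sketch using the opensearch-py client.
# Assumptions: a running cluster with the neural-search plugin, this model already
# registered and deployed in the cluster, and placeholder values (host, credentials,
# index/field names, example weights, <model_id>) that you must replace.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],  # placeholder host
    http_auth=("admin", "admin"),                 # placeholder credentials
    use_ssl=True,
    verify_certs=False,
)

# A rank_features field stores {token: weight} sparse vectors in the inverted index.
client.indices.create(
    index="my-nlp-index",
    body={"mappings": {"properties": {
        "passage_text": {"type": "text"},
        "passage_embedding": {"type": "rank_features"},
    }}},
)

# Index a document with a precomputed sparse vector (e.g. produced by the
# HuggingFace snippet in the next section); the weights here are illustrative.
client.index(
    index="my-nlp-index",
    body={
        "passage_text": "Currently New York is rainy.",
        "passage_embedding": {"ny": 1.34, "weather": 1.28, "rain": 1.05},
    },
    refresh=True,
)

# neural_sparse query: the deployed model (model_id) encodes the query text at search time.
response = client.search(
    index="my-nlp-index",
    body={"query": {"neural_sparse": {"passage_embedding": {
        "query_text": "What's the weather in ny now?",
        "model_id": "<your deployed model_id>",
    }}}},
)
print(response["hits"]["hits"])
```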

## Usage (HuggingFace)
This model is intended to run inside an OpenSearch cluster, but you can also use it outside the cluster with the HuggingFace transformers API.

```python
import json
import itertools
import torch

from transformers import AutoModelForMaskedLM, AutoTokenizer


# get sparse vector from dense vectors with shape batch_size * seq_len * vocab_size
def get_sparse_vector(feature, output):
    values, _ = torch.max(output * feature["attention_mask"].unsqueeze(-1), dim=1)
    values = torch.log(1 + torch.relu(values))
    values[:, special_token_ids] = 0
    return values

# transform the sparse vector to a dict of (token, weight)
def transform_sparse_vector_to_dict(sparse_vector):
    sample_indices, token_indices = torch.nonzero(sparse_vector, as_tuple=True)
    non_zero_values = sparse_vector[(sample_indices, token_indices)].tolist()
    number_of_tokens_for_each_sample = torch.bincount(sample_indices).cpu().tolist()
    tokens = [transform_sparse_vector_to_dict.id_to_token[_id] for _id in token_indices.tolist()]

    output = []
    end_idxs = list(itertools.accumulate([0] + number_of_tokens_for_each_sample))
    for i in range(len(end_idxs) - 1):
        token_strings = tokens[end_idxs[i]:end_idxs[i + 1]]
        weights = non_zero_values[end_idxs[i]:end_idxs[i + 1]]
        output.append(dict(zip(token_strings, weights)))
    return output

# download the idf file from model hub. idf is used to give weights for query tokens
def get_tokenizer_idf(tokenizer):
    from huggingface_hub import hf_hub_download
    local_cached_path = hf_hub_download(repo_id="opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1", filename="idf.json")
    with open(local_cached_path) as f:
        idf = json.load(f)
    idf_vector = [0] * tokenizer.vocab_size
    for token, weight in idf.items():
        _id = tokenizer._convert_token_to_id_with_added_voc(token)
        idf_vector[_id] = weight
    return torch.tensor(idf_vector)

# load the model
model = AutoModelForMaskedLM.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1")
tokenizer = AutoTokenizer.from_pretrained("opensearch-project/opensearch-neural-sparse-encoding-multilingual-v1")
idf = get_tokenizer_idf(tokenizer)

# set the special tokens and id_to_token transform for post-process
special_token_ids = [tokenizer.vocab[token] for token in tokenizer.special_tokens_map.values()]
get_sparse_vector.special_token_ids = special_token_ids
id_to_token = ["" for i in range(tokenizer.vocab_size)]
for token, _id in tokenizer.vocab.items():
    id_to_token[_id] = token
transform_sparse_vector_to_dict.id_to_token = id_to_token


query = "What's the weather in ny now?"
document = "Currently New York is rainy."

# encode the query
feature_query = tokenizer([query], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False)
input_ids = feature_query["input_ids"]
batch_size = input_ids.shape[0]
query_vector = torch.zeros(batch_size, tokenizer.vocab_size)
query_vector[torch.arange(batch_size).unsqueeze(-1), input_ids] = 1
query_sparse_vector = query_vector * idf

# encode the document
feature_document = tokenizer([document], padding=True, truncation=True, return_tensors='pt', return_token_type_ids=False)
output = model(**feature_document)[0]
document_sparse_vector = get_sparse_vector(feature_document, output)


# get similarity score
sim_score = torch.matmul(query_sparse_vector[0], document_sparse_vector[0])
print(sim_score)   # tensor(7.7400, grad_fn=<DotBackward0>)


query_token_weight = transform_sparse_vector_to_dict(query_sparse_vector)[0]
document_query_token_weight = transform_sparse_vector_to_dict(document_sparse_vector)[0]
for token in sorted(query_token_weight, key=lambda x: query_token_weight[x], reverse=True):
    if token in document_query_token_weight:
        print("score in query: %.4f, score in document: %.4f, token: %s" % (query_token_weight[token], document_query_token_weight[token], token))


# result:
# score in query: 3.0699, score in document: 1.2821, token: weather
# score in query: 1.6406, score in document: 0.9018, token: now
# score in query: 1.6108, score in document: 0.3141, token: ?
# score in query: 1.2721, score in document: 1.3446, token: ny
# score in query: 0.6005, score in document: 0.1804, token: in
```

The code above walks through an example of neural sparse search. Although there are no overlapping tokens in the original query and document, this model still produces a good match.

## License

This project is licensed under the [Apache v2.0 License](https://github.com/opensearch-project/neural-search/blob/main/LICENSE).

## Copyright

Copyright OpenSearch Contributors. See [NOTICE](https://github.com/opensearch-project/neural-search/blob/main/NOTICE) for details.