samheym committed on
Commit e363b49 · verified · 1 Parent(s): fe574c0

Add new SentenceTransformer model

1_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 768, "out_features": 128, "bias": false, "activation_function": "torch.nn.modules.linear.Identity"}
1_Dense/model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c6bfcb02d8053d8b2c52365d37e44880184d11e0e8f361e58fb6271c22bda0bb
+ size 393304
README.md ADDED
@@ -0,0 +1,239 @@
+ ---
+ language:
+ - de
+ tags:
+ - ColBERT
+ - PyLate
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ pipeline_tag: sentence-similarity
+ library_name: PyLate
+ ---
+
+ # GerColBERT
+
+ This is a [PyLate](https://github.com/lightonai/pylate) model. It maps sentences & paragraphs to sequences of 128-dimensional dense vectors and can be used for semantic textual similarity using the MaxSim operator.
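For intuition, MaxSim (late interaction) scores a query–document pair by matching each query token embedding against its most similar document token embedding and summing those maxima. A minimal PyTorch sketch of the idea (the shapes and normalization here are illustrative assumptions, not values read from this repository, beyond the 128-dimensional token vectors):

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_tokens: torch.Tensor, doc_tokens: torch.Tensor) -> torch.Tensor:
    """Late-interaction MaxSim: sum, over query tokens, of the best-matching document token.

    Assumes both inputs are (num_tokens, dim) tensors of L2-normalized token embeddings,
    so the dot product equals cosine similarity.
    """
    sim = query_tokens @ doc_tokens.T   # (num_query_tokens, num_doc_tokens)
    return sim.max(dim=1).values.sum()  # best document match per query token, summed

# Stand-in embeddings with the model's 128-dim output (32 query tokens, 180 document tokens)
query_embedding = F.normalize(torch.randn(32, 128), dim=-1)
document_embedding = F.normalize(torch.randn(180, 128), dim=-1)
print(maxsim_score(query_embedding, document_embedding))
```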
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** PyLate model
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Document Length:** 180 tokens
+ - **Query Length:** 32 tokens
+ - **Output Dimensionality:** 128 dimensions
+ - **Similarity Function:** MaxSim
+ <!-- - **Training Dataset:** Unknown -->
+ - **Language:** de
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [PyLate Documentation](https://lightonai.github.io/pylate/)
+ - **Repository:** [PyLate on GitHub](https://github.com/lightonai/pylate)
+ - **Hugging Face:** [PyLate models on Hugging Face](https://huggingface.co/models?library=PyLate)
+
+ ### Full Model Architecture
+
+ ```
+ ColBERT(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
+ )
+ ```
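The Dense module above (stored under `1_Dense/` in this commit) is a bias-free linear projection from the 768-dimensional BERT token states down to 128 dimensions, with an identity activation. A rough PyTorch stand-in, not how PyLate constructs the layer internally:

```python
import torch

# Stand-in for the (1): Dense module: 768 -> 128, bias=False, identity activation.
projection = torch.nn.Linear(in_features=768, out_features=128, bias=False)

# hidden_states mimics the per-token outputs of the BertModel; the shapes are illustrative.
hidden_states = torch.randn(1, 180, 768)      # (batch, tokens, hidden_size)
token_embeddings = projection(hidden_states)  # (batch, tokens, 128)
print(token_embeddings.shape)                 # torch.Size([1, 180, 128])
```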
+
+ ## Usage
+ First install the PyLate library:
+
+ ```bash
+ pip install -U pylate
+ ```
+
+ ### Retrieval
+
+ PyLate provides a streamlined interface to index and retrieve documents using ColBERT models. The index leverages the Voyager HNSW index to efficiently handle document embeddings and enable fast retrieval.
+
+ #### Indexing documents
+
+ First, load the ColBERT model and initialize the Voyager index, then encode and index your documents:
+
+ ```python
+ from pylate import indexes, models, retrieve
+
+ # Step 1: Load the ColBERT model
+ model = models.ColBERT(
+     model_name_or_path="samheym/GerColBERT",
+ )
+
+ # Step 2: Initialize the Voyager index
+ index = indexes.Voyager(
+     index_folder="pylate-index",
+     index_name="index",
+     override=True,  # This overwrites the existing index if any
+ )
+
+ # Step 3: Encode the documents
+ documents_ids = ["1", "2", "3"]
+ documents = ["document 1 text", "document 2 text", "document 3 text"]
+
+ documents_embeddings = model.encode(
+     documents,
+     batch_size=32,
+     is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
+     show_progress_bar=True,
+ )
+
+ # Step 4: Add document embeddings to the index by providing embeddings and corresponding ids
+ index.add_documents(
+     documents_ids=documents_ids,
+     documents_embeddings=documents_embeddings,
+ )
+ ```
+
+ Note that you do not have to recreate the index and encode the documents every time. Once you have created an index and added the documents, you can re-use the index later by loading it:
+
+ ```python
+ # To load an index, simply instantiate it with the correct folder/name and without overriding it
+ index = indexes.Voyager(
+     index_folder="pylate-index",
+     index_name="index",
+ )
+ ```
+
+ #### Retrieving top-k documents for queries
+
+ Once the documents are indexed, you can retrieve the top-k most relevant documents for a given set of queries.
+ To do so, initialize the ColBERT retriever with the index you want to search in, encode the queries, and then retrieve the top-k documents to get the ids and relevance scores of the top matches:
+
+ ```python
+ # Step 1: Initialize the ColBERT retriever
+ retriever = retrieve.ColBERT(index=index)
+
+ # Step 2: Encode the queries
+ queries_embeddings = model.encode(
+     ["query for document 3", "query for document 1"],
+     batch_size=32,
+     is_query=True,  # Ensure that it is set to True to indicate that these are queries
+     show_progress_bar=True,
+ )
+
+ # Step 3: Retrieve top-k documents
+ scores = retriever.retrieve(
+     queries_embeddings=queries_embeddings,
+     k=10,  # Retrieve the top 10 matches for each query
+ )
+ ```
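The returned `scores` hold one result list per query, with each entry carrying a document id and its relevance score. A quick way to inspect them (the `id`/`score` field names follow the PyLate documentation and may differ between versions):

```python
# Inspect the retrieval results: one list of hits per query.
queries = ["query for document 3", "query for document 1"]
for query, hits in zip(queries, scores):
    print(query)
    for hit in hits:
        print(f"  id={hit['id']}  score={hit['score']:.4f}")
```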
+
+ ### Reranking
+ If you only want to use the ColBERT model to perform reranking on top of your first-stage retrieval pipeline without building an index, you can simply use the `rank` function and pass the queries and documents to rerank:
+
+ ```python
+ from pylate import rank, models
+
+ queries = [
+     "query A",
+     "query B",
+ ]
+
+ documents = [
+     ["document A", "document B"],
+     ["document 1", "document C", "document B"],
+ ]
+
+ documents_ids = [
+     [1, 2],
+     [1, 3, 2],
+ ]
+
+ model = models.ColBERT(
+     model_name_or_path="samheym/GerColBERT",
+ )
+
+ queries_embeddings = model.encode(
+     queries,
+     is_query=True,
+ )
+
+ documents_embeddings = model.encode(
+     documents,
+     is_query=False,
+ )
+
+ reranked_documents = rank.rerank(
+     documents_ids=documents_ids,
+     queries_embeddings=queries_embeddings,
+     documents_embeddings=documents_embeddings,
+ )
+ ```
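As with retrieval, `reranked_documents` should contain one list per query with the document ids reordered by score (again assuming the `id`/`score` fields described in the PyLate documentation):

```python
# Print the reranked document ids per query; field names assumed from the PyLate docs.
for query, ranking in zip(queries, reranked_documents):
    ordered_ids = [entry["id"] for entry in ranking]
    print(f"{query}: {ordered_ids}")
```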
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Framework Versions
+ - Python: 3.12.3
+ - Sentence Transformers: 3.4.1
+ - PyLate: 1.1.4
+ - Transformers: 4.48.2
+ - PyTorch: 2.6.0+cu124
+ - Accelerate: 1.4.0
+ - Datasets: 2.21.0
+ - Tokenizers: 0.21.0
+
+
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "[unused0]": 31102,
+   "[unused1]": 31103
+ }
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "evaluate2/data/5f8c1f1d-5f9d-4472-bec6-2cc88eb7bca9/colbert-80000",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 31104
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,49 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.2",
+     "pytorch": "2.6.0+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "MaxSim",
+   "query_prefix": "[unused0]",
+   "document_prefix": "[unused1]",
+   "query_length": 32,
+   "document_length": 180,
+   "attend_to_expansion_tokens": false,
+   "skiplist_words": [
+     "!",
+     "\"",
+     "#",
+     "$",
+     "%",
+     "&",
+     "'",
+     "(",
+     ")",
+     "*",
+     "+",
+     ",",
+     "-",
+     ".",
+     "/",
+     ":",
+     ";",
+     "<",
+     "=",
+     ">",
+     "?",
+     "@",
+     "[",
+     "\\",
+     "]",
+     "^",
+     "_",
+     "`",
+     "{",
+     "|",
+     "}",
+     "~"
+   ]
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a8efc83f5d5ef0a784dadc8313b0ac2b85f7bad615278c7dba7a61fddd7c59f
+ size 439739232
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Dense",
+     "type": "pylate.models.Dense.Dense"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "[MASK]",
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,75 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "104": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "31102": {
+       "content": "[unused0]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "31103": {
+       "content": "[unused1]",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_token": "[MASK]",
+   "sep_token": "[SEP]",
+   "strip_accents": false,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff