Corran committed · verified
Commit b94be35 · 1 Parent(s): 60a9730

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
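
The pooling settings above select mean pooling only: the sentence embedding is the average of the token embeddings over non-padding positions. A minimal sketch of that operation in PyTorch, using illustrative tensors rather than the library's internal code:

```python
import torch

# Illustrative shapes: batch of 2 sequences, 12 tokens each, 768-dim embeddings.
token_embeddings = torch.randn(2, 12, 768)
attention_mask = torch.ones(2, 12)  # 1 for real tokens, 0 for padding

# Mean pooling: zero out padding positions, then average over the real tokens.
mask = attention_mask.unsqueeze(-1)  # (2, 12, 1)
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```
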
README.md ADDED
@@ -0,0 +1,590 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:35964
+ - loss:MatryoshkaLoss
+ - loss:MultipleNegativesRankingLoss
+ base_model: nomic-ai/nomic-embed-text-v1.5
+ widget:
+ - source_sentence: Despite the crucial role of phosphorus in global food production,
+     there is a lack of comprehensive analysis on the economic and policy aspects of
+     phosphorus supply and demand, highlighting a significant knowledge gap in the
+     field of natural resource economics.
+   sentences:
+   - The human brain is intrinsically organized into dynamic, anticorrelated functional
+     networks
+   - 'The story of phosphorus: Global food security and food for thought'
+   - Identifying a knowledge gap in the field of study
+ - source_sentence: Despite the comprehensive data sources used in this analysis, it
+     is important to note that uncertainties remain in the estimation of global precipitation,
+     particularly in data-sparse regions, and careful interpretation of the findings
+     is advised.
+   sentences:
+   - The shuttle radar topography mission—a new class of digital elevation models acquired
+     by spaceborne radar
+   - Advising cautious interpretation of the findings
+   - 'Global Precipitation: A 17-Year Monthly Analysis Based on Gauge Observations,
+     Satellite Estimates, and Numerical Model Outputs'
+ - source_sentence: The study found that participants' value functions were characterized
+     by loss aversion, risk aversion, and the concavity of the utility function in
+     gains and the convexity in losses.
+   sentences:
+   - Ordered mesoporous molecular sieves synthesized by a liquid-crystal template mechanism
+   - 'Prospect theory: An analysis of decision under risk'
+   - Summarising the results section
+ - source_sentence: Further research is needed to explore the potential role of individual
+     amino acids in optimizing protein intake and promoting optimal health outcomes.
+   sentences:
+   - Suggestions for future work
+   - Validation of a modified Early Warning Score in medical admissions
+   - Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol,
+     Protein and Amino Acids
+ - source_sentence: The IANA Task Force (2021) builds upon previous research suggesting
+     that slower gait speed is associated with increased risk of adverse outcomes in
+     older adults (Levine et al., 2015; Schoenfeld et al., 2016).
+   sentences:
+   - 'Transdisciplinary research in sustainability science: practice, principles, and
+     challenges'
+   - Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling
+     older people an International Academy on Nutrition and Aging (IANA) Task Force
+   - Referring to another writer’s idea(s) or position
+ datasets:
+ - Corran/SciTopicTriplets
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ model-index:
+ - name: nomic-ai/nomic-embed-text-v1.5
+   results:
+   - task:
+       type: information-retrieval
+       name: Information Retrieval
+     dataset:
+       name: SciGen Eval Set
+       type: SciGen-Eval-Set
+     metrics:
+     - type: cosine_accuracy@1
+       value: 0.19750889679715303
+       name: Cosine Accuracy@1
+     - type: cosine_accuracy@3
+       value: 0.5547153024911032
+       name: Cosine Accuracy@3
+     - type: cosine_accuracy@5
+       value: 0.81605871886121
+       name: Cosine Accuracy@5
+     - type: cosine_accuracy@10
+       value: 0.9893238434163701
+       name: Cosine Accuracy@10
+     - type: cosine_precision@1
+       value: 0.19750889679715303
+       name: Cosine Precision@1
+     - type: cosine_precision@3
+       value: 0.1849051008303677
+       name: Cosine Precision@3
+     - type: cosine_precision@5
+       value: 0.16321174377224199
+       name: Cosine Precision@5
+     - type: cosine_precision@10
+       value: 0.098932384341637
+       name: Cosine Precision@10
+     - type: cosine_recall@1
+       value: 0.19750889679715303
+       name: Cosine Recall@1
+     - type: cosine_recall@3
+       value: 0.5547153024911032
+       name: Cosine Recall@3
+     - type: cosine_recall@5
+       value: 0.81605871886121
+       name: Cosine Recall@5
+     - type: cosine_recall@10
+       value: 0.9893238434163701
+       name: Cosine Recall@10
+     - type: cosine_ndcg@10
+       value: 0.5663698287874538
+       name: Cosine Ndcg@10
+     - type: cosine_mrr@10
+       value: 0.43265442297915546
+       name: Cosine Mrr@10
+     - type: cosine_map@100
+       value: 0.433292401944685
+       name: Cosine Map@100
+ ---
+
+ # nomic-ai/nomic-embed-text-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) on the [sci_topic_triplets](https://huggingface.co/datasets/Corran/SciTopicTriplets) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [nomic-ai/nomic-embed-text-v1.5](https://huggingface.co/nomic-ai/nomic-embed-text-v1.5) <!-- at revision ac6fcd72429d86ff25c17895e47a9bfcfc50c1b2 -->
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ - **Training Dataset:**
+   - [sci_topic_triplets](https://huggingface.co/datasets/Corran/SciTopicTriplets)
+ - **Language:** en
+ - **License:** apache-2.0
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Corran/SciTopicNomicEmbed")
+ # Run inference
+ sentences = [
+     'The IANA Task Force (2021) builds upon previous research suggesting that slower gait speed is associated with increased risk of adverse outcomes in older adults (Levine et al., 2015; Schoenfeld et al., 2016).',
+     'Gait speed at usual pace as a predictor of adverse outcomes in community-dwelling older people an International Academy on Nutrition and Aging (IANA) Task Force',
+     'Referring to another writer’s idea(s) or position',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
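Since the model was trained with MatryoshkaLoss at dimensionalities 768, 384, 256, 128, and 64 (see Training Details below), embeddings can also be truncated to a smaller size at load time. A sketch using the standard `truncate_dim` argument of Sentence Transformers; the quality trade-off at each size is not reported in this card:

```python
from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions; MatryoshkaLoss trains the
# leading dimensions to remain useful on their own.
model = SentenceTransformer("Corran/SciTopicNomicEmbed", truncate_dim=256)

embeddings = model.encode([
    "Identifying a knowledge gap in the field of study",
    "Suggestions for future work",
])
print(embeddings.shape)
# (2, 256)
```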
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Dataset: `SciGen-Eval-Set`
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1   | 0.1975     |
+ | cosine_accuracy@3   | 0.5547     |
+ | cosine_accuracy@5   | 0.8161     |
+ | cosine_accuracy@10  | 0.9893     |
+ | cosine_precision@1  | 0.1975     |
+ | cosine_precision@3  | 0.1849     |
+ | cosine_precision@5  | 0.1632     |
+ | cosine_precision@10 | 0.0989     |
+ | cosine_recall@1     | 0.1975     |
+ | cosine_recall@3     | 0.5547     |
+ | cosine_recall@5     | 0.8161     |
+ | cosine_recall@10    | 0.9893     |
+ | **cosine_ndcg@10**  | **0.5664** |
+ | cosine_mrr@10       | 0.4327     |
+ | cosine_map@100      | 0.4333     |
+
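These figures come from the evaluator linked above. As a sketch of how such numbers are produced, assuming `queries`, `corpus`, and `relevant_docs` are built from the evaluation split (the toy data here is illustrative, not from the card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Corran/SciTopicNomicEmbed")

# Toy stand-ins; in practice these are derived from the SciTopicTriplets eval split.
queries = {"q1": "Further research is needed on individual amino acids."}
corpus = {
    "d1": "Dietary Reference Intakes for Protein and Amino Acids",
    "d2": "Suggestions for future work",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="SciGen-Eval-Set")
results = evaluator(model)  # dict of accuracy@k, precision@k, recall@k, NDCG, MRR, MAP
```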
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### sci_topic_triplets
+
+ * Dataset: [sci_topic_triplets](https://huggingface.co/datasets/Corran/SciTopicTriplets) at [8bf9936](https://huggingface.co/datasets/Corran/SciTopicTriplets/tree/8bf9936b3b007670b076d43959cdc261383ff88f)
+ * Size: 35,964 training samples
+ * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query | positive | negative |
+   |:--------|:------|:---------|:---------|
+   | type    | string | string | string |
+   | details | <ul><li>min: 17 tokens</li><li>mean: 40.37 tokens</li><li>max: 93 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 18.75 tokens</li><li>max: 56 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.74 tokens</li><li>max: 23 tokens</li></ul> |
+ * Samples:
+   | query | positive | negative |
+   |:------|:---------|:---------|
+   | <code>This study provides comprehensive estimates of life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death and 195 countries and territories from 1980 to 2015, allowing for a detailed understanding of global health trends and patterns over the past four decades.</code> | <code>Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: a systematic analysis for the Global Burden of Disease Study 2015</code> | <code>Explaining the significance of the current study</code> |
+   | <code>This paper explores the relationship between the expected value and the volatility of the nominal excess return on stocks using a econometric approach.</code> | <code>On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks</code> | <code>Stating the focus, aim, or argument of a short paper</code> |
+   | <code>Despite the increasing attention given to the role of audit committees and board of directors in mitigating earnings management, several studies have reported inconclusive or even negative findings.</code> | <code>Audit committee, board of director characteristics, and earnings management</code> | <code>General reference to previous research or scholarship: highlighting negative outcomes</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           384,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
+
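In the Sentence Transformers API, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss, roughly as follows (the `trust_remote_code` flag is an assumption, needed because the Nomic base model ships a custom architecture):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Inner loss: in-batch negatives over (query, positive, negative) triplets.
inner = MultipleNegativesRankingLoss(model)

# Outer loss: apply the inner loss at each truncated dimensionality, equally weighted.
loss = MatryoshkaLoss(
    model,
    inner,
    matryoshka_dims=[768, 384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```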
+ ### Evaluation Dataset
+
+ #### sci_topic_triplets
+
+ * Dataset: [sci_topic_triplets](https://huggingface.co/datasets/Corran/SciTopicTriplets) at [8bf9936](https://huggingface.co/datasets/Corran/SciTopicTriplets/tree/8bf9936b3b007670b076d43959cdc261383ff88f)
+ * Size: 4,495 evaluation samples
+ * Columns: <code>query</code>, <code>positive</code>, and <code>negative</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | query | positive | negative |
+   |:--------|:------|:---------|:---------|
+   | type    | string | string | string |
+   | details | <ul><li>min: 18 tokens</li><li>mean: 40.1 tokens</li><li>max: 87 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 18.75 tokens</li><li>max: 58 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 10.74 tokens</li><li>max: 23 tokens</li></ul> |
+ * Samples:
+   | query | positive | negative |
+   |:------|:---------|:---------|
+   | <code>In this cluster-randomised controlled trial, the authors aimed to evaluate the effectiveness of introducing the Medical Emergency Team (MET) system in reducing response times and improving patient outcomes in emergency departments.</code> | <code>Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial</code> | <code>Some ways of introducing quotations</code> |
+   | <code>In the data collection phase of our study, we employed both surveys and interviews as research methods. Specifically, we administered surveys to 200 participants and conducted interviews with 10 key industry experts to gather proportional data on various aspects of management science practices.</code> | <code>Research Methodology: A Step-by-Step Guide for Beginners</code> | <code>Surveys and interviews: Reporting proportions</code> |
+   | <code>Several density functional theory (DFT) based chemical reactivity indexes, such as the Fukui functions and the electrophilic and nucleophilic indices, are discussed in detail for their ability to predict chemical reactivity.</code> | <code>Chemical reactivity indexes in density functional theory</code> | <code>General comments on the relevant literature</code> |
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
+   ```json
+   {
+       "loss": "MultipleNegativesRankingLoss",
+       "matryoshka_dims": [
+           768,
+           384,
+           256,
+           128,
+           64
+       ],
+       "matryoshka_weights": [
+           1,
+           1,
+           1,
+           1,
+           1
+       ],
+       "n_dims_per_step": -1
+   }
+   ```
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `learning_rate`: 2e-05
+ - `num_train_epochs`: 10
+ - `warmup_ratio`: 0.1
+ - `fp16`: True
+ - `load_best_model_at_end`: True
+
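As a sketch, these non-default values map onto the trainer's argument object along the following lines (the output directory is illustrative, not from the card):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="sci-topic-nomic-embed",  # illustrative path
    eval_strategy="steps",
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    load_best_model_at_end=True,
)
```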
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 256
+ - `per_device_eval_batch_size`: 256
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 2e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1.0
+ - `num_train_epochs`: 10
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.1
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: True
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: True
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: proportional
+
+ </details>
+
+ ### Training Logs
+ | Epoch  | Step | Training Loss | Validation Loss | SciGen-Eval-Set_cosine_ndcg@10 |
+ |:------:|:----:|:-------------:|:---------------:|:------------------------------:|
+ | 0      | 0    | -             | -               | 0.5454                         |
+ | 0.1418 | 20   | 4.4872        | 3.1379          | 0.5468                         |
+ | 0.2837 | 40   | 2.241         | 1.7162          | 0.5497                         |
+ | 0.4255 | 60   | 1.5937        | 1.4834          | 0.5524                         |
+ | 0.5674 | 80   | 1.5356        | 1.3911          | 0.5541                         |
+ | 0.7092 | 100  | 1.4106        | 1.3277          | 0.5549                         |
+ | 0.8511 | 120  | 1.2612        | 1.2919          | 0.5561                         |
+ | 0.9929 | 140  | 1.3147        | 1.2642          | 0.5572                         |
+ | 1.1348 | 160  | 1.1527        | 1.2529          | 0.5582                         |
+ | 1.2766 | 180  | 1.2103        | 1.2388          | 0.5593                         |
+ | 1.4184 | 200  | 1.2407        | 1.2235          | 0.5598                         |
+ | 1.5603 | 220  | 1.1356        | 1.2101          | 0.5607                         |
+ | 1.7021 | 240  | 1.1644        | 1.1938          | 0.5605                         |
+ | 1.8440 | 260  | 1.1927        | 1.1864          | 0.5612                         |
+ | 1.9858 | 280  | 1.1909        | 1.1800          | 0.5613                         |
+ | 2.1277 | 300  | 1.0549        | 1.1785          | 0.5620                         |
+ | 2.2695 | 320  | 1.0745        | 1.1755          | 0.5630                         |
+ | 2.4113 | 340  | 1.1485        | 1.1656          | 0.5637                         |
+ | 2.5532 | 360  | 1.1159        | 1.1654          | 0.5637                         |
+ | 2.6950 | 380  | 1.0686        | 1.1623          | 0.5640                         |
+ | 2.8369 | 400  | 1.1436        | 1.1594          | 0.5632                         |
+ | 2.9787 | 420  | 1.0899        | 1.1534          | 0.5644                         |
+ | 3.1206 | 440  | 1.0756        | 1.1512          | 0.5647                         |
+ | 3.2624 | 460  | 1.0203        | 1.1536          | 0.5645                         |
+ | 3.4043 | 480  | 1.1073        | 1.1564          | 0.5650                         |
+ | 3.5461 | 500  | 1.0423        | 1.1594          | 0.5651                         |
+ | 3.6879 | 520  | 1.069         | 1.1514          | 0.5652                         |
+ | 3.8298 | 540  | 1.0101        | 1.1538          | 0.5645                         |
+ | 3.9716 | 560  | 1.0685        | 1.1647          | 0.5650                         |
+ | 4.1135 | 580  | 1.0326        | 1.1618          | 0.5653                         |
+ | 4.2553 | 600  | 1.0729        | 1.1587          | 0.5648                         |
+ | 4.3972 | 620  | 1.0417        | 1.1515          | 0.5655                         |
+ | 4.5390 | 640  | 1.0438        | 1.1528          | 0.5657                         |
+ | 4.6809 | 660  | 1.025         | 1.1433          | 0.5660                         |
+ | 4.8227 | 680  | 1.0526        | 1.1382          | 0.5662                         |
+ | 4.9645 | 700  | 1.0485        | 1.1392          | 0.5663                         |
+ | 5.1064 | 720  | 1.0348        | 1.1411          | 0.5665                         |
+ | 5.2482 | 740  | 1.1001        | 1.1511          | 0.5663                         |
+ | 5.3901 | 760  | 1.0926        | 1.1625          | 0.5662                         |
+ | 5.5319 | 780  | 1.0885        | 1.1487          | 0.5662                         |
+ | 5.6738 | 800  | 1.0942        | 1.1492          | 0.5665                         |
+ | 5.8156 | 820  | 1.0457        | 1.1465          | 0.5666                         |
+ | 5.9574 | 840  | 1.0479        | 1.1461          | 0.5664                         |
+
+ ### Framework Versions
+ - Python: 3.11.11
+ - Sentence Transformers: 3.3.1
+ - Transformers: 4.47.1
+ - PyTorch: 2.5.1+cu124
+ - Accelerate: 1.2.1
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MatryoshkaLoss
+ ```bibtex
+ @misc{kusupati2024matryoshka,
+     title={Matryoshka Representation Learning},
+     author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
+     year={2024},
+     eprint={2205.13147},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+     title={Efficient Natural Language Response Suggestion for Smart Reply},
+     author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+     year={2017},
+     eprint={1705.00652},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->

config.json ADDED
@@ -0,0 +1,58 @@
+ {
+   "_name_or_path": "nomic-ai/nomic-embed-text-v1.5",
+   "activation_function": "swiglu",
+   "architectures": [
+     "NomicBertModel"
+   ],
+   "attn_pdrop": 0.0,
+   "auto_map": {
+     "AutoConfig": "nomic-ai/nomic-bert-2048--configuration_hf_nomic_bert.NomicBertConfig",
+     "AutoModel": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertModel",
+     "AutoModelForMaskedLM": "nomic-ai/nomic-bert-2048--modeling_hf_nomic_bert.NomicBertForPreTraining"
+   },
+   "bos_token_id": null,
+   "causal": false,
+   "dense_seq_output": true,
+   "embd_pdrop": 0.0,
+   "eos_token_id": null,
+   "fused_bias_fc": true,
+   "fused_dropout_add_ln": true,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-12,
+   "max_trained_positions": 2048,
+   "mlp_fc1_bias": false,
+   "mlp_fc2_bias": false,
+   "model_type": "nomic_bert",
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": 3072,
+   "n_layer": 12,
+   "n_positions": 8192,
+   "pad_vocab_size_multiple": 64,
+   "parallel_block": false,
+   "parallel_block_tied_norm": false,
+   "prenorm": false,
+   "qkv_proj_bias": false,
+   "reorder_and_upcast_attn": false,
+   "resid_pdrop": 0.0,
+   "rotary_emb_base": 1000,
+   "rotary_emb_fraction": 1.0,
+   "rotary_emb_interleaved": false,
+   "rotary_emb_scale_base": null,
+   "rotary_scaling_factor": null,
+   "scale_attn_by_inverse_layer_idx": false,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.0,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.47.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "use_flash_attn": true,
+   "use_rms_norm": false,
+   "use_xentropy": true,
+   "vocab_size": 30528
+ }

config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.3.1",
+     "transformers": "4.47.1",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }

model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7bdfd89a53ee34a3daf46e1b10e3814ae8ed9527b728ad2c6205c855d2214b68
+ size 546938168

modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]

sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 8192,
+   "do_lower_case": false
+ }

special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }

tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,56 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "model_max_length": 8192,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }

vocab.txt ADDED
The diff for this file is too large to render. See raw diff