nomic-text-embed COVID QA Matryoshka test

This is a sentence-transformers model finetuned from nomic-ai/nomic-embed-text-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/nomic-embed-text-v1.5
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Language: en
  • License: apache-2.0
  • Model Size: 137M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
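
The Pooling module above produces the sentence embedding by mean pooling: averaging the token embeddings while ignoring padding. A minimal sketch of the equivalent computation with plain transformers, assuming the base model is loaded with trust_remote_code=True (NomicBertModel is a custom architecture):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nomic-ai/nomic-embed-text-v1.5")
model = AutoModel.from_pretrained("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

batch = tokenizer(["example text"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**batch)[0]  # (batch_size, seq_len, 768)

# Mean over real tokens only: zero out padding positions before averaging
mask = batch["attention_mask"].unsqueeze(-1).float()
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])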

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub; trust_remote_code=True is likely required because
# the base model uses the custom NomicBertModel architecture
model = SentenceTransformer("JerryO3/test", trust_remote_code=True)
# Run inference
sentences = [
    'The monolayers were removed from their plastic surfaces and serially passaged whenever they became confluent. Cells were plated out onto 96-well culture plates for cytotoxicity and anti-influenza assays, and propagated at 37 °C in an atmosphere of 5% CO2. The influenza strain A/Leningrad/134/17/1957 (H2N2) was purchased from National Control Institute of Veterinary Bioproducts and Pharmaceuticals (Beijing, China). Virus was routinely grown on MDCK cells. The stock cultures were prepared from supernatants of infected cells and stored at −80 °C. The cellular toxicity of patchouli alcohol on MDCK cells was assessed by the MTT method. Briefly, cells were seeded on a microtiter plate in the absence or presence of various concentrations (20 µM to 0.0098 µM) of patchouli alcohol (eight replicates) and incubated at 37 °C in a humidified atmosphere of 5% CO2 for 72 h. The supernatants were discarded, washed with PBS twice and MTT reagent (5 mg/mL in PBS) was added to each well. After incubation at 37 °C for 4 h, the supernatants were removed, then 200 μL DMSO was added and incubated at 37 °C for another 30 min.',
    'What method was used to measure the inhibition of viral replication?',
    'What can be a factor in using common vectors for the delivery of vaccines?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 768)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
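
Because the model was trained with MatryoshkaLoss (see Training Details below), its embeddings can also be truncated to 512, 256, 128, or 64 dimensions with a modest drop in retrieval quality (see Evaluation). A short sketch using the truncate_dim option of Sentence Transformers:

from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional vectors;
# 256 is one of the Matryoshka dimensions the model was trained with
model = SentenceTransformer("JerryO3/test", truncate_dim=256, trust_remote_code=True)
embeddings = model.encode([
    "What method was used to measure the inhibition of viral replication?",
])
print(embeddings.shape)  # (1, 256)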

Evaluation

Metrics

Retrieval quality was evaluated at each Matryoshka truncation size; each of the five tables below corresponds to one embedding dimension, matching the dim_* columns in the training logs.

Information Retrieval (dim_768)

Metric Value
cosine_accuracy@1 0.321
cosine_accuracy@3 0.6049
cosine_accuracy@5 0.7222
cosine_accuracy@10 0.858
cosine_precision@1 0.321
cosine_precision@3 0.2016
cosine_precision@5 0.1444
cosine_precision@10 0.0858
cosine_recall@1 0.321
cosine_recall@3 0.6049
cosine_recall@5 0.7222
cosine_recall@10 0.858
cosine_ndcg@10 0.5726
cosine_mrr@10 0.4832
cosine_map@100 0.4877

Information Retrieval (dim_512)

Metric Value
cosine_accuracy@1 0.3395
cosine_accuracy@3 0.6173
cosine_accuracy@5 0.6914
cosine_accuracy@10 0.8395
cosine_precision@1 0.3395
cosine_precision@3 0.2058
cosine_precision@5 0.1383
cosine_precision@10 0.084
cosine_recall@1 0.3395
cosine_recall@3 0.6173
cosine_recall@5 0.6914
cosine_recall@10 0.8395
cosine_ndcg@10 0.577
cosine_mrr@10 0.4943
cosine_map@100 0.5

Information Retrieval (dim_256)

Metric Value
cosine_accuracy@1 0.3148
cosine_accuracy@3 0.5864
cosine_accuracy@5 0.6605
cosine_accuracy@10 0.7901
cosine_precision@1 0.3148
cosine_precision@3 0.1955
cosine_precision@5 0.1321
cosine_precision@10 0.079
cosine_recall@1 0.3148
cosine_recall@3 0.5864
cosine_recall@5 0.6605
cosine_recall@10 0.7901
cosine_ndcg@10 0.5455
cosine_mrr@10 0.468
cosine_map@100 0.4775

Information Retrieval (dim_128)

Metric Value
cosine_accuracy@1 0.2716
cosine_accuracy@3 0.537
cosine_accuracy@5 0.6543
cosine_accuracy@10 0.7284
cosine_precision@1 0.2716
cosine_precision@3 0.179
cosine_precision@5 0.1309
cosine_precision@10 0.0728
cosine_recall@1 0.2716
cosine_recall@3 0.537
cosine_recall@5 0.6543
cosine_recall@10 0.7284
cosine_ndcg@10 0.4966
cosine_mrr@10 0.4221
cosine_map@100 0.4335

Information Retrieval (dim_64)

Metric Value
cosine_accuracy@1 0.2407
cosine_accuracy@3 0.4753
cosine_accuracy@5 0.5864
cosine_accuracy@10 0.6728
cosine_precision@1 0.2407
cosine_precision@3 0.1584
cosine_precision@5 0.1173
cosine_precision@10 0.0673
cosine_recall@1 0.2407
cosine_recall@3 0.4753
cosine_recall@5 0.5864
cosine_recall@10 0.6728
cosine_ndcg@10 0.4509
cosine_mrr@10 0.3798
cosine_map@100 0.3911
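
A hedged sketch of how tables like these are typically produced with sentence-transformers' InformationRetrievalEvaluator; the queries, corpus, and relevant_docs below are hypothetical placeholders, not the actual evaluation split:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("JerryO3/test", trust_remote_code=True)

# Hypothetical mini evaluation set: query id -> text, doc id -> text,
# and query id -> set of relevant doc ids
queries = {"q1": "What method was used to measure the inhibition of viral replication?"}
corpus = {"d1": "The cellular toxicity of patchouli alcohol on MDCK cells was assessed by the MTT method."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dim_768")
results = evaluator(model)
# results contains cosine_accuracy@k, cosine_precision@k, cosine_recall@k,
# cosine_ndcg@10, cosine_mrr@10, and cosine_map@100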

Training Details

Training Dataset

Unnamed Dataset

  • Size: 1,453 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:

    Column    Type    Min         Mean           Max
    positive  string  112 tokens  319.17 tokens  778 tokens
    anchor    string  6 tokens    14.84 tokens   65 tokens

  • Samples (each positive is a passage, each anchor the question paired with it):

    positive: We find that the slowing growth in daily reported deaths in Italy is consistent with a significant impact of interventions implemented several weeks earlier. In Italy, we estimate that the effective reproduction number, Rt, dropped to close to 1 around the time of lockdown (11th March), although with a high level of uncertainty. Overall, we estimate that countries have managed to reduce their reproduction number. Our estimates have wide credible intervals and contain 1 for countries that have implemented a …
    anchor: …

    positive: [46] Where the biological samples are taken from also play a role in the sensitivity of these tests. For SARS-CoV and MERS-CoV, specimens collected from the lower respiratory tract such as sputum and tracheal aspirates have higher and more prolonged levels of viral RNA because of the tropism of the virus. MERS-CoV viral loads are also higher for severe cases and have longer viral shedding compared to mild cases. Although upper respiratory tract specimens such as nasopharyngeal or oropharyngeal swabs can be used, they have potentially lower viral loads and may have higher risk of false-negatives among the mild MERS and SARS cases [102, 103], and likely among the 2019-nCoV cases. The existing practices in detecting genetic material of coronaviruses such as SARS-CoV and MERS-CoV include (a) reverse transcription-polymerase chain reaction (RT-PCR), (b) real-time RT-PCR (rRT-PCR), (c) reverse transcription loop-mediated isothermal amplification (RT-LAMP) and (d) real-time RT-LAMP [104]. Nucleic amplification tests (NAAT) are usually preferred as in the case of MERS-CoV diagnosis as it has the highest sensitivity at the earliest time point in the acute phase of infection [102]. Chinese health authorities have recently posted the full genome of 2019-nCoV in the GenBank and in GISAID portal to facilitate in the detection of the virus [11]. Several laboratory assays have been developed to detect the novel coronavirus in Wuhan, as highlighted in WHO's interim guidance on nCoV laboratory testing of suspected cases.
    anchor: Why are Nucleic amplification tests (NAAT) usually preferred as in the case of MERS-CoV diagnosis?

    positive: By the time symptoms appear in HCPS, both strong antiviral responses, and, for the more virulent viral genotypes, viral RNA can be detected in blood plasma or nucleated blood cells respectively [63, 64]. At least three studies have correlated plasma viral RNA with disease severity for HCPS and HFRS, suggesting that the replication of the virus plays an ongoing and real-time role in viral pathogenesis [65][66][67]. Several hallmark pathologic changes have been identified that occur in both HFRS and HCPS. A critical feature of both is a transient (~ 1-5 days) capillary leak involving the kidney and retroperitoneal space in HFRS and the lungs in HCPS. The resulting leakage is exudative in character, with chemical composition high in protein and resembling plasma. The continued experience indicating the strong tissue tropism for endothelial cells, specifically, is among the several factors that make β3 integrin an especially attractive candidate as an important in vivo receptor for hantaviruses. It is likely that hantaviruses arrive at their target tissues through uptake by regional lymph nodes, perhaps with or within an escorting lung histiocyte. The virus seeds local endothelium, where the first few infected cells give rise, ultimately, to a primary viremia, a process that appears to take a long time for hantavirus infections [62, 63].
    anchor: Which is an especially attractive candidate as an important in vivo receptor for hantaviruses?
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
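
A sketch of how this loss is constructed in sentence-transformers (variable names are illustrative; the actual training script is not included in this card):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)

# Apply the in-batch-negatives ranking loss at every Matryoshka dimension,
# with equal weight per dimension
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # use all dimensions at every training step
)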
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • learning_rate: 2e-05
  • num_train_epochs: 4
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • auto_find_batch_size: True
  • batch_sampler: no_duplicates
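
A sketch of how the values above map onto SentenceTransformerTrainingArguments (output_dir is a hypothetical placeholder; everything not listed keeps the defaults shown under "All Hyperparameters"):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/nomic-covid-qa-matryoshka",  # hypothetical path
    eval_strategy="epoch",
    learning_rate=2e-5,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    auto_find_batch_size=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate texts within a batch
)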

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 8
  • per_device_eval_batch_size: 8
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 4
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: True
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_128_cosine_map@100 dim_256_cosine_map@100 dim_512_cosine_map@100 dim_64_cosine_map@100 dim_768_cosine_map@100
0.0549 10 5.6725 - - - - -
0.1099 20 4.6781 - - - - -
0.1648 30 3.9597 - - - - -
0.2198 40 3.2221 - - - - -
0.2747 50 2.2144 - - - - -
0.3297 60 2.8916 - - - - -
0.3846 70 1.7038 - - - - -
0.4396 80 2.4738 - - - - -
0.4945 90 1.8951 - - - - -
0.5495 100 1.515 - - - - -
0.6044 110 1.5431 - - - - -
0.6593 120 2.4492 - - - - -
0.7143 130 1.656 - - - - -
0.7692 140 1.7953 - - - - -
0.8242 150 1.8679 - - - - -
0.8791 160 2.1551 - - - - -
0.9341 170 1.5363 - - - - -
0.9890 180 1.2529 - - - - -
1.0 182 - 0.3894 0.4585 0.4805 0.3287 0.4926
1.0440 190 1.319 - - - - -
1.0989 200 1.0985 - - - - -
1.1538 210 1.0403 - - - - -
1.2088 220 0.4363 - - - - -
1.2637 230 0.2102 - - - - -
1.3187 240 0.3584 - - - - -
1.3736 250 0.2683 - - - - -
1.4286 260 0.4438 - - - - -
1.4835 270 0.34 - - - - -
1.5385 280 0.4296 - - - - -
1.5934 290 0.2323 - - - - -
1.6484 300 0.3259 - - - - -
1.7033 310 0.4339 - - - - -
1.7582 320 0.1524 - - - - -
1.8132 330 0.0782 - - - - -
1.8681 340 0.4306 - - - - -
1.9231 350 0.312 - - - - -
1.9780 360 0.2112 - - - - -
2.0 364 - 0.4139 0.4526 0.4762 0.3761 0.4672
2.0330 370 0.2341 - - - - -
2.0879 380 0.1965 - - - - -
2.1429 390 0.3019 - - - - -
2.1978 400 0.1518 - - - - -
2.2527 410 0.0203 - - - - -
2.3077 420 0.0687 - - - - -
2.3626 430 0.0206 - - - - -
2.4176 440 0.3615 - - - - -
2.4725 450 0.4674 - - - - -
2.5275 460 0.0623 - - - - -
2.5824 470 0.0222 - - - - -
2.6374 480 0.1049 - - - - -
2.6923 490 0.4955 - - - - -
2.7473 500 0.439 - - - - -
2.8022 510 0.0052 - - - - -
2.8571 520 0.16 - - - - -
2.9121 530 0.0583 - - - - -
2.9670 540 0.0127 - - - - -
3.0 546 - 0.4427 0.4765 0.508 0.397 0.5021
3.0220 550 0.0143 - - - - -
3.0769 560 0.0228 - - - - -
3.1319 570 0.0704 - - - - -
3.1868 580 0.0086 - - - - -
3.2418 590 0.001 - - - - -
3.2967 600 0.002 - - - - -
3.3516 610 0.0016 - - - - -
3.4066 620 0.021 - - - - -
3.4615 630 0.0013 - - - - -
3.5165 640 0.0723 - - - - -
3.5714 650 0.0045 - - - - -
3.6264 660 0.0048 - - - - -
3.6813 670 0.1005 - - - - -
3.7363 680 0.0018 - - - - -
3.7912 690 0.0101 - - - - -
3.8462 700 0.0104 - - - - -
3.9011 710 0.0025 - - - - -
3.9560 720 0.014 - - - - -
4.0 728 - 0.4335 0.4775 0.5000 0.3911 0.4877 *
  • The starred row denotes the saved checkpoint; its map@100 values match the Evaluation tables above.

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.0.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.31.0
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1
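
To reproduce this environment, the listed versions can be pinned at install time (a suggestion based on the table above, not a requirements file published with the model):

pip install "sentence-transformers==3.0.1" "transformers==4.41.2" "torch==2.1.2" "accelerate==0.31.0" "datasets==2.19.1" "tokenizers==0.19.1"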

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning}, 
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply}, 
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}