SentenceTransformer based on nomic-ai/nomic-embed-text-v1

This is a sentence-transformers model finetuned from nomic-ai/nomic-embed-text-v1. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: nomic-ai/nomic-embed-text-v1
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~137M parameters (F32, safetensors)

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
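
In other words, the encoder's token embeddings are mean-pooled over non-padding positions and the pooled vector is L2-normalized, so dot products between outputs equal cosine similarities. A minimal sketch of that pooling step (illustrative only, not the library's internals):

import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # Average token embeddings over real (non-padding) tokens only.
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # [batch, seq, 1]
    summed = (token_embeddings * mask).sum(dim=1)                   # [batch, 768]
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # [batch, 1]
    return summed / counts

# The Normalize() module then L2-normalizes each pooled vector:
# embeddings = torch.nn.functional.normalize(pooled, p=2, dim=1)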

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("ptpedroVortal/nomic_vortal_v3.3")
# Run inference
sentences = [
    'Collect the details that are associated with product \'\' \'Macbook Air 13" com processador M1/M2 e 8 GB de RAM (Telado PT-PT)\', with quantity 1, unit UN',
    'Apresenta -se de seguida a configuração financeira para a fornecimento  dos produtos \nrequeridos , mediante opções por cor e diferentes características:\nNOTA:  Valores válidos  até 23 de Fevereiro e mediante adjudicação de 2 ou mais \nequipamentos  portáteis (excluindo Teclado)\nPART-NUMBER QTD. DESCRIÇÃOVALOR\nUNITÁRIOVALOR\nTOTAL\nMLY03PO/A 1Apple Macbook AIR  13,6"  (Disco 512GB SSD; 10 core) 1 545,08 €                 1 545,08 €\nMLXY3PO/A 1Apple Macbook AIR  13,6" (Disco 256GB SSD, 8 core) 1 227,48 €                 1 227,48 €',
    'LOTE 5\n1 MESA APOIO MESA DE APOIO EM INOX AISI 304 2,0 279,000 23,0 558,000\nMesa com 4 rodas , 2 com travão\nTabuleiro inferior\nDimens: C 700 x L 500 x A 800mm\nPrateleira inferior - profundidade 250mm\nFabrico Nacional e por medida\nTotal do do lote 5: 558,00€ Quinhentos e cinquenta e oito euros',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
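
Because the embeddings are unit-normalized and the card's similarity function is cosine, a simple semantic-search pattern follows directly from the snippet above. The query string below is hypothetical and reuses the model, sentences, and embeddings already defined:

# Hypothetical follow-up: rank the corpus above against a new query.
query_embeddings = model.encode(["Which entry describes the Apple Macbook Air 13?"])

# Cosine similarity between the query and each corpus sentence: shape [1, 3]
scores = model.similarity(query_embeddings, embeddings)
best = scores.argmax().item()
print(f"Best match (score {scores[0, best].item():.4f}): {sentences[best][:80]}")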

Evaluation

Metrics

Semantic Similarity

  • Evaluated with main.CustomEvaluator

    Metric           Value
    pearson_cosine   nan
    spearman_cosine  nan
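
main.CustomEvaluator is project-specific code that is not published with this card. Conventionally, pearson_cosine and spearman_cosine correlate the model's cosine scores against gold similarity labels, and nan typically means the correlation could not be computed (for example, constant or missing labels). A sketch under that assumption, reusing the model from the Usage section with hypothetical pairs and labels:

from scipy.stats import pearsonr, spearmanr

pairs = [("sentence A1", "sentence B1"), ("sentence A2", "sentence B2")]  # hypothetical eval pairs
gold = [0.9, 0.1]                                                         # hypothetical gold labels

emb1 = model.encode([a for a, _ in pairs])
emb2 = model.encode([b for _, b in pairs])
cosine_scores = model.similarity_pairwise(emb1, emb2).tolist()  # one cosine score per pair

print("pearson_cosine:", pearsonr(cosine_scores, gold)[0])
print("spearman_cosine:", spearmanr(cosine_scores, gold)[0])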

Training Details

Training Dataset

Unnamed Dataset

  • Size: 222 training samples
  • Columns: query and correct_node
  • Approximate statistics based on the first 222 samples:
                 query          correct_node
    type         string         string
    min tokens   15             22
    mean tokens  55.17          109.22
    max tokens   154            2920
  • Samples:
    • query: Collect the details that are associated with Lot 4 product '' 'Mesas de Mayo', with quantity 2, unit Subcontracting Unit
      correct_node: LOTE 4
        1 MESA DE MAYO 82JM 10.ME.1831 2,000 842,00000 23 1 684,00
        oitocentos e quarenta e dois euros
        Origem : Nacional
        Marca : MOBIT
        Prazo de entrega: 30 dias
        Garantia: 2 anos
        Transporte
    • query: Collect the details that are associated with Lot 7 product '' 'Carro transporte de roupa suja ', with quantity 1, unit US
      correct_node: Lote 7 nan nan nan nan nan
        Ref. Description Qt. Un. Un. Price Total
        9856 Carros para Transporte de Roupa Suja e Limpa 1 US 16.23 16.23
    • query: Collect the details that are associated with product '' '2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.)', with quantity 72, unit UN
      correct_node: 2202000014 - FIO SUT. SEDA NÃO ABS. 2/0 MULTIF. SEM AGULHA (CART.) 0.36
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
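
For reference, a loss object with exactly these parameters can be constructed as follows in Sentence Transformers (a minimal sketch; the actual training script is not published with this card):

from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# nomic-embed-text-v1 uses custom modeling code, hence trust_remote_code=True.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# scale=20.0 and cosine similarity match the parameters listed above; the other
# (query, correct_node) pairs in a batch serve as in-batch negatives.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)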
    

Evaluation Dataset

Unnamed Dataset

  • Size: 27 evaluation samples
  • Columns: query and correct_node
  • Approximate statistics based on the first 27 samples:
                 query          correct_node
    type         string         string
    min tokens   17             40
    mean tokens  56.85          228.15
    max tokens   121            2963
  • Samples:
    • query: Collect the details that are associated with product '' '2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.)', with quantity 1656, unit UN
      correct_node: 2202000055 - FIO SUT. POLIAMIDA NÃO ABS. 2/0 MONOF. AG. LANC. 39 MM 3/8 C (CART.) 1.28
    • query: Collect the details that are associated with Lot 3 product 'Portaria do Parque Coberto dos Olhos de Água' 'Vigilância e segurança humana contínua - Olhos de Água - período de 3 meses - todos os dias da semana, incluindo feriados, total estimado de 2754H', with quantity 1, unit UN
      correct_node: Lote 3:
        Preço Unitário: 10,00€ (dez euros) /hora
        Preço Total: 27.540,00€ (vinte sete mil quinhentos e quarenta euros)
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • bf16: True
  • load_best_model_at_end: True
  • batch_sampler: no_duplicates
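
Put together, a training run matching these non-default values could look like the following sketch, using the Sentence Transformers 3.x trainer API (the dataset loading is illustrative; the actual 222/27-sample splits are not published):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/nomic-embed-text-v1", trust_remote_code=True)

# Placeholder (query, correct_node) pairs standing in for the private datasets.
train_dataset = Dataset.from_dict({"query": ["q1", "q2"], "correct_node": ["n1", "n2"]})
eval_dataset = Dataset.from_dict({"query": ["q3"], "correct_node": ["n3"]})

args = SentenceTransformerTrainingArguments(
    output_dir="nomic_vortal_v3.3",
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=10,
    warmup_ratio=0.1,
    bf16=True,
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=MultipleNegativesRankingLoss(model, scale=20.0),
)
trainer.train()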

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step   Training Loss   Validation Loss   spearman_cosine
7.1429   100    0.0965          0.2395            nan

  • The logged row (step 100) corresponds to the saved checkpoint.

Framework Versions

  • Python: 3.10.14
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.0.dev0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.4

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}