---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:807656
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      <p id="pa01" num="0001">An decoding method according to an embodiment
      includes a deriving step and an decoding step. The deriving step derives a
      first reference value that is a reference value of a weighting factor
      based on fixed point precision representing roughness of the weighting
      factor that is used for making a motion-compensated prediction of a change
      in a pixel value by multiplying a reference image by the weighting factor.
      The decoding step decodes a first difference value that is a difference
      value between the weighting factor and the first reference value. The
      weighting factor is included in a range of predetermined bit precision
      having the first reference value at approximate center.

      <img id="iaf01" file="imgaf001.tif" wi="146" he="85" img-content="drawing"
      img-format="tif"/></p>
    sentences:
      - DECODING METHOD AND DECODING DEVICE
      - >-
        METHOD FOR DETERMINING SEMI-SYNCHRONOUS EXPOSURE PARAMETERS AND
        ELECTRONIC DEVICE
      - HOISTING ROPE MONITORING DEVICE
  - source_sentence: >-
      <p id="pa01" num="0001">A layered sheet 10 includes a substrate layer 1,
      and surface layers 2 and 3 configured to be layered on at least one
      surface of the substrate layer 1. The substrate layer 1 contains a first
      thermoplastic resin and inorganic fillers. The surface layers 2 and 3
      contain a second thermoplastic resin and a conductive material. A content
      of the inorganic fillers in the substrate layer 1 is 0.3 to 28 mass% based
      on a total amount of the substrate layer.<img id="iaf01"
      file="imgaf001.tif" wi="86" he="70" img-content="drawing"
      img-format="tif"/><img id="iaf02" file="imgaf002.tif" wi="165" he="117"
      img-content="drawing" img-format="tif"/></p>
    sentences:
      - >-
        LAYERED SHEET, CONTAINER, CARRIER TAPE, AND ELECTRONIC COMPONENT
        PACKAGING BODY
      - BLOCK COPOLYMERS FOR GEL COMPOSITIONS WITH IMPROVED EFFICIENCY
      - AN INDICATOR SYSTEM FOR A PERISHABLE PRODUCT CONTAINER
  - source_sentence: >-
      <p id="pa01" num="0001">A method for manufacturing a gear which
      effectively prevent a crack from occurring inside a tooth part when
      rolling processing is performed on a teeth part of a gear raw material is
      achieved. A method according to one embodiment for manufacturing a gear 15
      by performing rolling processing on a tooth part 2a of a sintered gear raw
      material 2. The method includes, when the rolling processing is performed
      on the tooth part 2a of the gear raw material 2, pressing the gear raw
      material 2 toward a center of rotation of the gear raw material 2 by a
      rolling machine 4 and, when at least the rolling processing is performed
      on the tooth part 2a of the gear raw material 2 toward a center of a
      thickness thereof by a pressing machine 5, pressing a region A where an
      internal density of the tooth part 2a of the gear raw material 2
      decreases.</p><p id="pa02" num="0002">The invention also relates to an
      apparatus for manufacturing a gear.

      <img id="iaf01" file="imgaf001.tif" wi="106" he="68" img-content="drawing"
      img-format="tif"/></p>
    sentences:
      - >-
        COMMUNICATION METHOD, RELATED APPARATUS AND DEVICE AND COMPUTER-READABLE
        STORAGE MEDIUM
      - METHOD AND APPARATUS FOR MANUFACTURING GEAR
      - >-
        IMPLANTABLE MEDICAL DEVICE AND METHOD OF PROVIDING WIRE CONNECTIONS FOR
        IT
  - source_sentence: >-
      <p id="pa01" num="0001">This application discloses a data reading method,
      apparatus, and system, and a distributed system, and belongs to the field
      of storage technologies. The method includes: receiving a data read
      request sent by a terminal, where the data read request includes a logical
      address of target data; locally searching, based on the logical address, a
      first slave node for a latest version of the target data; and when it is
      determined that the latest version of the target data has been stored in
      each of a plurality of slave nodes, sending the latest version of the
      target data to the terminal. This application can avoid a rollback of a
      version of read data, and this application applies to data reading.<img
      id="iaf01" file="imgaf001.tif" wi="62" he="86" img-content="drawing"
      img-format="tif"/><img id="iaf02" file="imgaf002.tif" wi="155" he="233"
      img-content="drawing" img-format="tif"/></p>
    sentences:
      - SLIDING MECHANISM AND TERMINAL DEVICE PROVIDED WITH SAME
      - >-
        PRESSURE-APPLYING DEVICE FOR A SWITCHING MODULE AND METHOD OF CHANGING A
        SWITCHING MODULE USING THE SAME
      - DATA READING METHOD, DEVICE, SYSTEM, AND DISTRIBUTED SYSTEM
  - source_sentence: >-
      <p id="pa01" num="0001">An application apparatus (100) includes: an
      application needle (24) that applies, to a target, an application material
      having its viscosity changing under shear; a drive unit (90) that moves
      the application needle (24) up and down; and a controller (80) that
      controls the drive unit (90) to move the application needle such that
      shear is applied to the application material at a shear speed depending on
      a type of the application material and depending on a target application
      amount or a target application diameter.<img id="iaf01"
      file="imgaf001.tif" wi="78" he="56" img-content="drawing"
      img-format="tif"/></p>
    sentences:
      - HEAT PROCESSING DEVICE
      - Electric motor
      - COATING APPARATUS AND COATING METHOD
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-mpnet-base-v2
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: sentence transformers/all mpnet base v2
          type: sentence-transformers/all-mpnet-base-v2
        metrics:
          - type: cosine_accuracy@1
            value: 0.592
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.711
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.751
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.814
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.592
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.237
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.1502
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0814
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.592
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.711
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.751
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.814
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.6987639783179386
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.6624964285714287
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.6665468875517868
            name: Cosine Map@100
---

SentenceTransformer based on sentence-transformers/all-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-mpnet-base-v2 on the json dataset of 807,656 patent abstract-title pairs (see Training Details). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-mpnet-base-v2
  • Maximum Sequence Length: 384 tokens
  • Output Dimensionality: 768 dimensions (see the quick check below)
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
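
The sequence length and output dimensionality above can be verified directly on the loaded model:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("stephenhib/all-mpnet-base-v2-patabs-1epoc-batch32-100000")
print(model.max_seq_length)                      # 384
print(model.get_sentence_embedding_dimension())  # 768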

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
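
Conceptually, module (0) runs the MPNet encoder, module (1) mean-pools the token embeddings using the attention mask, and module (2) L2-normalizes the result so that dot products equal cosine similarities. A minimal sketch of the same computation in plain transformers (sentence-transformers checkpoints also ship the underlying Transformer weights, so AutoModel can load them):

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

repo = "stephenhib/all-mpnet-base-v2-patabs-1epoc-batch32-100000"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

batch = tokenizer(["HOISTING ROPE MONITORING DEVICE"], padding=True,
                  truncation=True, max_length=384, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (1, seq_len, 768)

# (1) Pooling: mean over real tokens only, guided by the attention mask
mask = batch["attention_mask"].unsqueeze(-1).float()
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)

# (2) Normalize: unit length, so dot product == cosine similarity
embedding = F.normalize(embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 768])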

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("stephenhib/all-mpnet-base-v2-patabs-1epoc-batch32-100000")
# Run inference
sentences = [
    '<p id="pa01" num="0001">An application apparatus (100) includes: an application needle (24) that applies, to a target, an application material having its viscosity changing under shear; a drive unit (90) that moves the application needle (24) up and down; and a controller (80) that controls the drive unit (90) to move the application needle such that shear is applied to the application material at a shear speed depending on a type of the application material and depending on a target application amount or a target application diameter.<img id="iaf01" file="imgaf001.tif" wi="78" he="56" img-content="drawing" img-format="tif"/></p>',
    'COATING APPARATUS AND COATING METHOD',
    'Electric motor',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
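
Since the training pairs (see Training Details) are patent abstracts matched to their titles, a natural use is retrieving the best-matching title for a given abstract. A minimal retrieval sketch with util.semantic_search; the three-title corpus here is purely illustrative:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("stephenhib/all-mpnet-base-v2-patabs-1epoc-batch32-100000")

# Hypothetical corpus; in practice this would be the full title collection
titles = [
    "COATING APPARATUS AND COATING METHOD",
    "Electric motor",
    "HEAT PROCESSING DEVICE",
]
abstract = "An application apparatus includes an application needle that applies, to a target, an application material having its viscosity changing under shear."

title_embeddings = model.encode(titles)
query_embedding = model.encode(abstract)

# Returns one ranked hit list per query; each hit is {"corpus_id", "score"}
hits = util.semantic_search(query_embedding, title_embeddings, top_k=3)
for hit in hits[0]:
    print(f'{hit["score"]:.3f}  {titles[hit["corpus_id"]]}')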

Evaluation

Metrics

Information Retrieval

Metric               Value
cosine_accuracy@1    0.592
cosine_accuracy@3    0.711
cosine_accuracy@5    0.751
cosine_accuracy@10   0.814
cosine_precision@1   0.592
cosine_precision@3   0.237
cosine_precision@5   0.1502
cosine_precision@10  0.0814
cosine_recall@1      0.592
cosine_recall@3      0.711
cosine_recall@5      0.751
cosine_recall@10     0.814
cosine_ndcg@10       0.6988
cosine_mrr@10        0.6625
cosine_map@100       0.6665
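
These figures match what sentence-transformers' InformationRetrievalEvaluator reports (the evaluator evidently also produced the cosine_map@100 column in the training logs below). A sketch of how comparable numbers could be computed on your own evaluation split; the toy queries, corpus, and relevance judgments here are purely illustrative:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("stephenhib/all-mpnet-base-v2-patabs-1epoc-batch32-100000")

# Toy data: query ids -> abstracts, corpus ids -> titles,
# relevant_docs: the set of relevant corpus ids per query
queries = {"q1": "A layered sheet includes a substrate layer and surface layers."}
corpus = {
    "d1": "LAYERED SHEET, CONTAINER, CARRIER TAPE, AND ELECTRONIC COMPONENT PACKAGING BODY",
    "d2": "HOISTING ROPE MONITORING DEVICE",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="patabs-eval")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_precision@k, cosine_ndcg@10, cosine_map@100, ...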

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 807,656 training samples
  • Columns: positive and anchor
  • Approximate statistics based on the first 1000 samples:

    Column    Type    Min        Mean           Max
    positive  string  45 tokens  237.14 tokens  384 tokens
    anchor    string  3 tokens   12.34 tokens   101 tokens
  • Samples (positive = abstract, anchor = title):

    positive: The invention relates to an image fusion method and device, which includes: obtaining a first short-focus image and a first long-focus image acquired by a short-focus sensor and a long-focus sensor at the same time; according to the focal lengths of a short-focus lens and a long-focus lens, calculating a reduction coefficient corresponding to the first long-focus image when the sizes of the same target in the first long-focus image and the first short-focus image are matched; performing a reduction processing on the first long-focus image according to the reduction coefficient to obtain a second long-focus image; according to a relative angle of the current long-focus lens and short-focus lens, calculating a position of the second long-focus image in the first short-focus image when the positions of the same target in the second long-focus image and the first short-focus image are matched; and according to the position of the second long-focus image in the first short-focus image, covering the first short-focus image by the second long-focus image to obtain a fused image. According to embodiments of the present application, on the premise of considering both the monitoring range and the definition, the monitoring cost is reduced, and the monitoring efficiency is improved.
    anchor: IMAGE FUSION METHOD AND DEVICE

    positive: The present invention discloses an ex vivo method for the diagnostic and/or prognostic assessment of the acute-on-chronic liver failure (ACLF) syndrome in a patient with a liver disorder characterized in that it comprises the steps of: (a) measuring a panel of metabolites related with acylcarnitines-sialic acid-acetylated amino acids and/or sugar alcohols and derivatives-tryptophan metabolism-catecholamines derivatives in a biological sample of said patient; and (b) comparing the level of said metabolites in the sample with the level of said metabolites in healthy patients; and wherein an increase of at least 1.2 times of the level of said metabolites is indicative of ACLF syndrome.
    anchor: METHOD FOR THE DIAGNOSTIC AND/OR PROGNOSTIC ASSESSMENT OF ACUTE-ON-CHRONIC LIVER FAILURE SYNDROME IN PATIENTS WITH LIVER DISORDERS

    positive: A valve housing receives a spool 34 and the spool has a regulating chamber 52 selectively communicating a supply line to a return line. The spool 34 is biased in one direction by a spring force and there is a second force biasing the spool in an opposed direction whith the second bias force being provided by a fluid pressure within a hydraulic system associated which the pressure regulating valve. The amount of communication between the supply port 111 and the return port 99 is regulated by a position of the spool 34 as the bias force from the fluid pressure change. Damper chambers are provided on opposed sides of the spool and serve to dampen a speed of movement of the spool and a supply line for supplying fluid into the damper chambers through check valves 44, 64. The supply line serves to assist in purging air outwardly of the damper chambers.
    anchor: Air purging pressure regulating valve
  • Loss: MultipleNegativesRankingLoss (constructed in the sketch after this list) with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
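
MultipleNegativesRankingLoss treats, for each anchor, all other positives in the batch as negatives, and applies a cross-entropy over the similarity scores multiplied by scale. A sketch of constructing the loss with the parameters above:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.util import cos_sim

model = SentenceTransformer("sentence-transformers/all-mpnet-base-v2")

# scale=20.0 multiplies the cosine similarities before the softmax
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=cos_sim)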
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 2
  • learning_rate: 2e-05
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
  • batch_sampler: no_duplicates
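
In Sentence Transformers 3.x, the non-default values above map directly onto SentenceTransformerTrainingArguments; a hedged sketch (the output_dir is hypothetical):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="models/all-mpnet-base-v2-patabs",  # hypothetical path
    num_train_epochs=1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    # no_duplicates keeps repeated texts out of a batch, which matters when
    # MultipleNegativesRankingLoss uses the rest of the batch as negatives
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)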

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 2
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   cosine_map@100
0.032   100    0.1433          0.6217
0.064   200    0.0953          0.6447
0.096   300    0.1084          0.6612
0.128   400    0.0817          0.6546
0.16    500    0.0768          0.6512
0.192   600    0.0779          0.6466
0.224   700    0.0709          0.6594
0.256   800    0.0813          0.6441
0.288   900    0.0597          0.6454
0.32    1000   0.0744          0.6496
0.352   1100   0.0669          0.6608
0.384   1200   0.0657          0.6566
0.416   1300   0.0489          0.6660
0.448   1400   0.0643          0.6597
0.48    1500   0.0593          0.6587
0.512   1600   0.0598          0.6613
0.544   1700   0.0737          0.6570
0.576   1800   0.0661          0.6655
0.608   1900   0.0499          0.6613
0.64    2000   0.0641          0.6616
0.672   2100   0.0679          0.6662
0.704   2200   0.0521          0.6715
0.736   2300   0.0569          0.6651
0.768   2400   0.0507          0.6679
0.8     2500   0.0405          0.6678
0.832   2600   0.0548          0.6690
0.864   2700   0.0403          0.6692
0.896   2800   0.0613          0.6649
0.928   2900   0.0485          0.6673
0.96    3000   0.0495          0.6674
0.992   3100   0.0546          0.6665

Framework Versions

  • Python: 3.11.9
  • Sentence Transformers: 3.2.1
  • Transformers: 4.45.2
  • PyTorch: 2.3.1.post300
  • Accelerate: 1.0.1
  • Datasets: 3.0.1
  • Tokenizers: 0.20.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}