---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:500
  - loss:MarginDistillationLoss
base_model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
widget:
  - source_sentence: qualify leads job description
    sentences:
      - >-
        job description Qualify leads – can the customers use our solutions and
        do they have budget
      - 'job description QUALIFICATION STANDARDS:'
      - >-
        job description Has passion and convictions and the innate ability to
        inspire passion in others
  - source_sentence: what kind of work can you do in a fast paced work environment
    sentences:
      - 'job description Customer Care:'
      - >-
        cv Worked effectively in a heavily cross-functional, fast paced
        environment
      - >-
        job description Ability to work in a fast-paced, team environment to
        meet required deadlines
  - source_sentence: what is the job description of architect at cisco
    sentences:
      - job description Proven track record of meeting quota
      - >-
        job description You will provide an architectural perspective across the
        Cisco product portfolio and can use your technical specialization for
        specific opportunities.
      - >-
        job description Work with network architect to direct network
        engineering within the US/ Canada region.
  - source_sentence: what is a product partner for a bank
    sentences:
      - >-
        job description Assist customers with product features and installation
        needs for TV, Internet, phone and security services
      - >-
        job description Partners effectively with Credit, Product Partners,
        Closers, Servicing, Technical Services, and other partners to identify
        cross-sell opportunities and deepen client relationships as well as
        solve internal obstacles and deliver a seamless execution.
      - >-
        job description Partner with the ad product/experience team to ensure
        strategies can be effectively executed
  - source_sentence: what is the job description of outcome-oriented
    sentences:
      - >-
        job description Outcome-oriented -- results-focused with strong
        performance culture
      - >-
        job description Results oriented—takes personal accountability and
        drives to “get it done” (solve customer problems and close deals) and
        “do it right” (act with integrity and sustain strong relationships)
      - >-
        job description Provide consistent assessment of each associate’s sales
        performance and work within the store to give feedback on areas of
        strength and opportunity while keeping in line with Company objectives.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---

# SentenceTransformer based on sentence-transformers/paraphrase-multilingual-mpnet-base-v2

This is a sentence-transformers model finetuned from sentence-transformers/paraphrase-multilingual-mpnet-base-v2. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- Model Type: Sentence Transformer
- Base model: sentence-transformers/paraphrase-multilingual-mpnet-base-v2
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 768 dimensions

### Model Sources

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'what is the job description of outcome-oriented',
    'job description Outcome-oriented -- results-focused with strong performance culture',
    'job description Results oriented—takes personal accountability and drives to “get it done” (solve customer problems and close deals) and “do it right” (act with integrity and sustain strong relationships)',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
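
Because all inputs are embedded into the same 768-dimensional space, the model also supports simple semantic search, e.g. ranking job-description sentences against a query. A minimal sketch, reusing the placeholder model id and the query/candidate strings from the widget examples above:

```python
import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id, as above

query = "what is the job description of architect at cisco"
candidates = [
    "job description Proven track record of meeting quota",
    "job description You will provide an architectural perspective across the Cisco "
    "product portfolio and can use your technical specialization for specific opportunities.",
    "job description Work with network architect to direct network engineering within the US/ Canada region.",
]

# Encode the query and the candidate sentences into the shared embedding space
query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)

# Score every candidate against the query and print them best-first
scores = model.similarity(query_embedding, candidate_embeddings)[0]
for idx in torch.argsort(scores, descending=True).tolist():
    print(f"{scores[idx]:.4f}  {candidates[idx]}")
```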

## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 500 training samples
- Columns: `sentence_0`, `sentence_1`, `sentence_2`, and `label`
- Approximate statistics based on the first 500 samples:

  |         | sentence_0 | sentence_1 | sentence_2 | label |
  |---------|------------|------------|------------|-------|
  | type    | string | string | string | float |
  | details | min: 4 tokens, mean: 10.23 tokens, max: 24 tokens | min: 5 tokens, mean: 26.02 tokens, max: 128 tokens | min: 5 tokens, mean: 21.6 tokens, max: 128 tokens | min: -13.07, mean: 3.46, max: 21.29 |

- Samples:

  | sentence_0 | sentence_1 | sentence_2 | label |
  |------------|------------|------------|-------|
  | what is a cv | cv Jan 2003 to Jan 2005 Company Name Implemented a database package for the nuclear power plant historical design calculations | cv Member and Provider Services | -2.188136577606201 |
  | what is the minimum requirement to work in a warehouse | job description Experience/Minimum Requirements | job description Minimum Requirements: | -2.951946973800659 |
  | what type of education does a client service executive need | job description Bachelor’s degree from an accredited college/university or equivalent B2B client service experience | job description Education Requirements | 9.929327964782715 |

- Loss: `gpl.toolkit.loss.MarginDistillationLoss` (sketched below)
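
With this loss, `label` is the margin assigned by a cross-encoder teacher, i.e. teacher_score(`sentence_0`, `sentence_1`) minus teacher_score(`sentence_0`, `sentence_2`), and the bi-encoder student is trained so that its own score margin matches it. A minimal sketch of that objective in plain PyTorch (an illustration of the idea, not the `gpl.toolkit` implementation):

```python
import torch
from torch import nn

def margin_distillation_loss(query_emb, pos_emb, neg_emb, teacher_margin):
    """Regress the student's score margin onto the teacher's margin.

    query_emb, pos_emb, neg_emb: [batch, 768] embeddings of sentence_0,
    sentence_1 and sentence_2; teacher_margin: [batch] labels as in the table above.
    """
    student_pos = (query_emb * pos_emb).sum(dim=-1)  # dot-product score for the positive
    student_neg = (query_emb * neg_emb).sum(dim=-1)  # dot-product score for the negative
    student_margin = student_pos - student_neg
    return nn.functional.mse_loss(student_margin, teacher_margin)
```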

### Training Hyperparameters

#### Non-Default Hyperparameters

- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 1
- max_steps: 50
- multi_dataset_batch_sampler: round_robin
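
In Sentence Transformers 3.x these options map onto `SentenceTransformerTrainingArguments`. A minimal sketch with only the non-default values above (the `output_dir` is a placeholder, not taken from the original run):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder path
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=1,
    max_steps=50,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
```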

#### All Hyperparameters

<details><summary>Click to expand</summary>

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: 50
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin

</details>

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.5.1+cu121
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```