SentenceTransformer based on sentence-transformers/multi-qa-MiniLM-L6-cos-v1

This is a sentence-transformers model finetuned from sentence-transformers/multi-qa-MiniLM-L6-cos-v1. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/multi-qa-MiniLM-L6-cos-v1
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: ~22.7M parameters (F32, Safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
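
The final Normalize() module L2-normalizes the pooled output, so every embedding has unit length and dot-product similarity coincides with cosine similarity. A minimal sketch to check this (the query string is only an illustrative placeholder):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Atharva26/sentence-transformer-finetuned-faq")

# Encoding a single string returns one 384-dimensional vector
embedding = model.encode("How do I reset my password?")
print(embedding.shape)             # (384,)

# The Normalize() module makes each vector unit-length
print(np.linalg.norm(embedding))   # ~1.0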

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Atharva26/sentence-transformer-finetuned-faq")
# Run inference
sentences = [
    'How can I access and customize my Preferences in Choice?',
    'Preferences\n\nTo access your Preferences in Choice and customize how you view and use key features, follow these steps:\n1. Open your account and tap on "More."\n2. Select "My Preferences."\n\nDark Theme:\nEnjoy our sleek Dark Theme by selecting the very first option in the menu labeled "DARK THEME." To switch back to the default Light Theme, open the menu and select the Light Theme option.\n\n---\n**Cleaned Answer:**\nIn the Preferences section of Choice, you can adjust how you view and utilize key features. To do this, open your account and tap on "More," then select "My Preferences." Switch to the Dark Theme by choosing the first option labeled "DARK THEME" in the menu. To go back to the default Light Theme, open the menu and select the Light Theme option.',
    "To find a specific company or ETF on the Choice platform, you can follow these steps:\n\n1. Using Search:\n- Type the first three letters of the company's name (e.g., ICICI) to see a list of affiliated companies/ETFs on NSE/BSE. \n- Choose your segment (Equity, Derivatives, Commodities, or Currency).\n- Select the specific company from the results drop-down to view its overview.\n- Add the company to your WatchList or set a price alert.\n\n2. Scrip Page - Cash and F&O:\n- View detailed information about a company.\n- Search by typing the first 3 letters of the company name and selecting it for the overview.\n\n3. Overview Section:\n- Provides essential details like OPEN, HIGH, LOW, CLOSE.\n- Click (i) for more details like PRICE TICK, MARKET LOT, CIRCUIT RANGE, etc.\n- Market Depth shows BID and ASK numbers with quantities.\n\n4. Technical Section:\n- View Technical data and use the CHART tool for analysis.\n\n5. Pivot Points:\n- Shows RESISTANCE and SUPPORT levels.\n- Displays numbers for DAILY assessment by default, switch timeframes if needed.\n\n6. Futures Section:\n- Displays future prices for near term, mid term, and far term.\n\n7. Recent News:\n- Displays news related to the company from various sources.\n\n8. Scrip Page - Derivatives (F&O):\n- Search for company and view open Call/Put contracts.\n- Select a contract and view the Option Chain.\n- Buy/Sell contracts and execute orders.\n\nThese steps will help you navigate and use the features available on the Choice platform effectively.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
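
Beyond pairwise similarity, the same API supports simple FAQ retrieval: encode an incoming question and the stored answers, then pick the best-scoring answer. The question and answers below are hypothetical placeholders; only the model id comes from this card.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Atharva26/sentence-transformer-finetuned-faq")

# Hypothetical FAQ corpus; in practice these would be your support articles
faq_answers = [
    "Open your account, tap 'More', then select 'My Preferences' to change the theme.",
    "Type the first three letters of the company's name in Search to find it on NSE/BSE.",
    "Pivot Points show RESISTANCE and SUPPORT levels, with the DAILY timeframe as default.",
]
query = "How do I switch to the dark theme?"

query_embedding = model.encode(query)
answer_embeddings = model.encode(faq_answers)

# similarity() returns a [1, 3] tensor of cosine scores
scores = model.similarity(query_embedding, answer_embeddings)
best = scores.argmax().item()
print(faq_answers[best])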

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • learning_rate: 2e-05
  • num_train_epochs: 2
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.3
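
The TripletLoss citation at the end of this card suggests the model was trained on (anchor, positive, negative) triplets. Below is a minimal, hedged sketch of how these non-default hyperparameters could be reproduced with the Sentence Transformers trainer; the toy triplets and output directory are placeholders, not the actual FAQ training data.

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss

model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")

# Toy (anchor, positive, negative) triplets; the real FAQ dataset is not published here
train_dataset = Dataset.from_dict({
    "anchor": ["How do I enable the dark theme?"],
    "positive": ["Open 'More' > 'My Preferences' and pick DARK THEME."],
    "negative": ["Pivot Points show RESISTANCE and SUPPORT levels."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="sentence-transformer-finetuned-faq",  # placeholder path
    num_train_epochs=2,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.3,
    # eval_strategy="steps" (as listed above) additionally requires an eval_dataset
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=TripletLoss(model),
)
trainer.train()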

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.3
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch   Step   Training Loss   Validation Loss
0.0345 2 4.7333 -
0.0690 4 4.706 -
0.1034 6 4.7013 -
0.1379 8 4.721 -
0.1724 10 4.6913 -
0.2069 12 4.6712 -
0.2414 14 4.6583 -
0.2586 15 - 4.6563
0.2759 16 4.6551 -
0.3103 18 4.6552 -
0.3448 20 4.6531 -
0.3793 22 4.5423 -
0.4138 24 4.5717 -
0.4483 26 4.5314 -
0.4828 28 4.5324 -
0.5172 30 4.453 4.5032
0.5517 32 4.5038 -
0.5862 34 4.4599 -
0.6207 36 4.3689 -
0.6552 38 4.4138 -
0.6897 40 4.3598 -
0.7241 42 4.3962 -
0.7586 44 4.2948 -
0.7759 45 - 4.3505
0.7931 46 4.3345 -
0.8276 48 4.4072 -
0.8621 50 4.2972 -
0.8966 52 4.3006 -
0.9310 54 4.3104 -
0.9655 56 4.2059 -
1.0 58 4.2059 -
1.0345 60 4.2079 4.2349
1.0690 62 4.23 -
1.1034 64 4.2325 -
1.1379 66 4.1432 -
1.1724 68 4.235 -
1.2069 70 4.1383 -
1.2414 72 4.1703 -
1.2759 74 4.1145 -
1.2931 75 - 4.1638
1.3103 76 4.0703 -
1.3448 78 4.1306 -
1.3793 80 4.0792 -
1.4138 82 4.102 -
1.4483 84 4.1091 -
1.4828 86 4.1437 -
1.5172 88 4.1011 -
1.5517 90 4.0618 4.1283
1.5862 92 4.0696 -
1.6207 94 4.1508 -
1.6552 96 4.0182 -
1.6897 98 4.1442 -
1.7241 100 4.2017 -
1.7586 102 4.097 -
1.7931 104 4.2886 -
1.8103 105 - 4.1213
1.8276 106 4.0573 -
1.8621 108 4.1101 -
1.8966 110 4.1942 -
1.9310 112 4.122 -
1.9655 114 4.1533 -
2.0 116 4.0961 -

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.45.2
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
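
To approximate this environment, the listed library versions can be pinned (the PyTorch 2.5.1+cu121 build additionally needs the matching CUDA wheel index):

pip install sentence-transformers==3.1.1 transformers==4.45.2 accelerate==1.1.1 datasets==3.1.0 tokenizers==0.20.3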

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

TripletLoss

@misc{hermans2017defense,
    title={In Defense of the Triplet Loss for Person Re-Identification},
    author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
    year={2017},
    eprint={1703.07737},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}