metadata
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:267
- loss:ContrastiveLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: >-
hypertension
The patient's primary diagnosis is hypertension, as stated in the visit
note.
BP medications
The patient is on BP medications which are used to treat hypertension.
BP management
The visit note mentions follow-up on BP management, indicating ongoing
treatment for hypertension.
HTN
HTN is the abbreviation for hypertension, which is the patient's diagnosed
condition.
BP was measured at 138/90
This blood pressure reading supports the diagnosis of hypertension as it
is elevated.
monthly bp at home have been around that number or higher
Consistently high blood pressure readings confirm the presence of
hypertension.
most likely diagnosis for this patient is hypertension
The visit note explicitly states that hypertension is the most likely
diagnosis.
sentences:
- Anemia, Unspecified
- Essential (Primary) Hypertension
- Dehydration
- source_sentence: >-
BMI ABOVE NORMAL PARAM F/U DOCUMENTED
This phrase indicates that the patient's BMI is above normal parameters
and requires follow-up, which is a key indicator for obesity
classification.
34.11
The specific BMI value of 34.11 falls within the range for Class 1 obesity
(30.0-34.9), providing numerical confirmation of the diagnosis.
Class 1 obesity
This is the explicit statement of the patient's condition, directly
aligning with the ICD code E66.811 for Class 1 obesity.
sentences:
- Obesity, Class 1
- Hypothyroidism, Unspecified
- Overweight
- source_sentence: >-
anxious and uses food for comfort
This phrase indicates the presence of anxiety symptoms, specifically using
food as a coping mechanism, which aligns with an unspecified anxiety
disorder.
sentences:
- Essential (Primary) Hypertension
- Essential (Primary) Hypertension
- Anxiety Disorder, Unspecified
- source_sentence: >-
compression stockings
Compression stockings are a treatment for venous insufficiency, which can
cause localized edema.
venous insufficiency
Venous insufficiency is a condition that leads to leg edema, which is a
type of localized edema.
Leg edema
Leg edema is a direct symptom of localized edema.
edema
Edema refers to swelling caused by fluid retention, which aligns with the
ICD code R60.0 for Localized Edema.
sentences:
- Nasal Congestion
- Localized Edema
- Essential (Primary) Hypertension
- source_sentence: >-
Had lithotripsy and passed an 8x5 mm stone on L.
This phrase indicates a history of urinary calculi as evidenced by the
treatment (lithotripsy) for kidney stones.
sentences:
- Pure Hypercholesterolemia, Unspecified
- Personal History Of Urinary Calculi
- Menopausal And Female Climacteric States
pipeline_tag: sentence-similarity
library_name: sentence-transformers
SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
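The same stack can be assembled module by module. A minimal sketch using the sentence-transformers modules API, assuming the base checkpoint is pulled from the Hub:
from sentence_transformers import SentenceTransformer, models

# Transformer backbone (BertModel), truncating inputs at 256 tokens.
word_embedding_model = models.Transformer(
    "sentence-transformers/all-MiniLM-L6-v2", max_seq_length=256
)

# Mean pooling over token embeddings -> one 384-dimensional vector per input.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(), pooling_mode="mean"
)

# L2-normalization so that dot product equals cosine similarity.
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize])
print(model)  # should mirror the architecture printed above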
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
'Had lithotripsy and passed an 8x5 mm stone on L.\nThis phrase indicates a history of urinary calculi as evidenced by the treatment (lithotripsy) for kidney stones.',
'Personal History Of Urinary Calculi',
'Pure Hypercholesterolemia, Unspecified',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
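Building on the snippet above, here is a small ranking sketch for the clinical-phrase-to-ICD-description use case shown in the widget examples. The query and candidate descriptions are taken from this card's examples; the model id remains the placeholder from the snippet above.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence_transformers_model_id")

# Evidence phrase plus rationale, as in the widget examples above.
query = (
    "anxious and uses food for comfort\n"
    "This phrase indicates the presence of anxiety symptoms, specifically using "
    "food as a coping mechanism, which aligns with an unspecified anxiety disorder."
)
candidates = [
    "Anxiety Disorder, Unspecified",
    "Essential (Primary) Hypertension",
    "Localized Edema",
]

query_embedding = model.encode([query])
candidate_embeddings = model.encode(candidates)

# Cosine similarities between the query and each candidate (shape: [1, 3]).
scores = model.similarity(query_embedding, candidate_embeddings)[0]

# Rank candidates from most to least similar.
for score, candidate in sorted(zip(scores.tolist(), candidates), reverse=True):
    print(f"{score:.4f}  {candidate}")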
Training Details
Training Dataset
Unnamed Dataset
- Size: 267 training samples
- Columns: anchor, positive, and label
- Approximate statistics based on the first 267 samples:

|  | anchor | positive | label |
|---|---|---|---|
| type | string | string | float |
| details | min: 12 tokens, mean: 94.12 tokens, max: 256 tokens | min: 3 tokens, mean: 9.77 tokens, max: 23 tokens | min: 1.0, mean: 1.0, max: 1.0 |
- Samples:

| anchor | positive | label |
|---|---|---|
| T2DM: Directly indicates the diagnosis of Type 2 Diabetes Mellitus without complications as stated in the Problem/Dx section. | Type 2 Diabetes Mellitus Without Complications | 1.0 |
| Atorvastatin: Atorvastatin is a statin medication prescribed to lower cholesterol levels, directly addressing hypercholesterolemia. Hyperlipidemia: Hyperlipidemia is a broader term that includes high cholesterol (hypercholesterolemia), which is explicitly mentioned in the assessment. statin therapy: Statin therapy, including Atorvastatin, is specifically noted as part of the treatment plan for managing high cholesterol. Hypercholesterolemia: Explicitly listed under assessment as a condition being managed, aligning with the ICD code E78.00. | Pure Hypercholesterolemia, Unspecified | 1.0 |
| Encounter for immunization (Z23): This phrase directly indicates the ICD code Z23 and its description as the reason for the encounter. | Encounter For Immunization | 1.0 |
- Loss: ContrastiveLoss with these parameters:

  {
      "distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
      "margin": 0.5,
      "size_average": true
  }
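The 267-sample training set itself is not published with this card, so the exact loading code is unknown. As a minimal sketch, a dataset with the same anchor / positive / label columns can be assembled with the datasets library; the single row below is borrowed from the card's widget examples rather than from the real training data.
from datasets import Dataset

# Illustrative one-row dataset with the column layout described above;
# the actual 267 training pairs are not included in this card.
train_dataset = Dataset.from_dict({
    "anchor": [
        "Had lithotripsy and passed an 8x5 mm stone on L.\n"
        "This phrase indicates a history of urinary calculi as evidenced by "
        "the treatment (lithotripsy) for kidney stones."
    ],
    "positive": ["Personal History Of Urinary Calculi"],
    "label": [1.0],
})
print(train_dataset)  # Dataset with features ['anchor', 'positive', 'label'], num_rows: 1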
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
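The training script is not included in this card. The following is a minimal sketch of how a run with the reported ContrastiveLoss parameters and the non-default hyperparameters above could look; it assumes the illustrative train_dataset from the Training Dataset section, uses library defaults for everything not listed, and treats output_dir as a placeholder.
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# ContrastiveLoss with the parameters reported above.
loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=losses.SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)

# Only the non-default hyperparameters from this card are set explicitly;
# "outputs" is a placeholder output directory.
args = SentenceTransformerTrainingArguments(
    output_dir="outputs",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=1,
    warmup_ratio=0.1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # anchor / positive / label dataset as sketched earlier
    loss=loss,
)
trainer.train()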
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
| Epoch | Step | Training Loss |
|---|---|---|
| 0.0588 | 1 | 0.1007 |
| 0.1176 | 2 | 0.1131 |
| 0.1765 | 3 | 0.099 |
| 0.2353 | 4 | 0.0867 |
| 0.2941 | 5 | 0.0682 |
| 0.3529 | 6 | 0.1019 |
| 0.4118 | 7 | 0.0618 |
| 0.4706 | 8 | 0.0623 |
| 0.5294 | 9 | 0.0564 |
| 0.5882 | 10 | 0.0521 |
| 0.6471 | 11 | 0.0545 |
| 0.7059 | 12 | 0.0335 |
| 0.7647 | 13 | 0.0593 |
| 0.8235 | 14 | 0.0381 |
| 0.8824 | 15 | 0.0308 |
| 0.9412 | 16 | 0.0487 |
| 1.0 | 17 | 0.0398 |
Framework Versions
- Python: 3.11.12
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.5.0
- Tokenizers: 0.21.1
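To approximate this environment, the reported versions can be pinned at install time; this is a suggested command rather than part of the original setup, and PyTorch 2.6.0 (CUDA 12.4 build) should be installed separately following the official PyTorch instructions.
pip install sentence-transformers==3.4.1 transformers==4.51.3 accelerate==1.5.2 datasets==3.5.0 tokenizers==0.21.1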
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
ContrastiveLoss
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}