SentenceTransformer based on intfloat/multilingual-e5-large-instruct

This is a sentence-transformers model finetuned from intfloat/multilingual-e5-large-instruct on the measuring-embeddings-v4 dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: intfloat/multilingual-e5-large-instruct
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: measuring-embeddings-v4

Model Sources

  • Documentation: Sentence Transformers documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
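
Because the final Normalize() module L2-normalizes the mean-pooled output, cosine similarity and dot product coincide for this model. A minimal sanity-check sketch of the configuration above (it assumes the model is loaded as shown in the Usage section below):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")

embedding = model.encode("What is a Calibration Point?")
print(embedding.shape)            # (1024,), a single 1024-dimensional vector
print(model.max_seq_length)       # 512, longer inputs are truncated
print(np.linalg.norm(embedding))  # ~1.0, embeddings are unit-length after Normalize()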

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")
# Run inference
sentences = [
    'uncertainty points',
    'What is a Fluid?\nA Fluid is the substance measured within a measurement system. It can be a gas or liquid, such as hydrocarbons, water, or other industrial fluids. Proper classification of fluids is essential for ensuring measurement accuracy, regulatory compliance, and operational efficiency. By identifying fluids correctly, the system applies the appropriate measurement techniques, processing methods, and reporting standards.',
    'What is a Calibration Point?\nA Calibration Point represents a specific data entry in a calibration process, comparing an expected reference value to an actual measured value. These points are fundamental in ensuring measurement accuracy and identifying deviations.\n\nKey Aspects of Calibration Points:\n- Calibration Report Association: Each calibration point belongs to a specific calibration report, linking it to a broader calibration procedure.\n- Reference Values: Theoretical or expected values used as a benchmark for measurement validation.\n- Measured Values: The actual recorded values during calibration, reflecting the instrument’s response.\n- Errors: The difference between reference and measured values, indicating possible measurement inaccuracies.\nCalibration points are essential for evaluating instrument performance, ensuring compliance with standards, and maintaining measurement reliability.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
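
The same similarity call can also rank a small corpus against a query. A minimal sketch follows; the query and documents are illustrative examples, not items from the training data:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Lauther/measuring-embeddings-v4.2")

# Illustrative query and candidate documents (not taken from the dataset)
query = "difference between reference and measured values during calibration"
documents = [
    "A Calibration Point compares an expected reference value to an actual measured value.",
    "A Fluid is the substance measured within a measurement system, such as a gas or liquid.",
]

query_embedding = model.encode(query)
document_embeddings = model.encode(documents)

# Cosine similarity; embeddings are unit-length, so this equals the dot product
scores = model.similarity(query_embedding, document_embeddings)
print(scores)  # shape [1, 2]; the higher score marks the better match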

Training Details

Training Dataset

measuring-embeddings-v4

  • Dataset: measuring-embeddings-v4 at 1e3ca2c
  • Size: 3,075 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
    • sentence1 (string): min 3 tokens, mean 7.55 tokens, max 17 tokens
    • sentence2 (string): min 80 tokens, mean 180.22 tokens, max 406 tokens
    • score (float): min 0.07, mean 0.21, max 0.95
  • Samples:

    sentence1: last calibrated span
    sentence2: What are historical report values?
    These represent the recorded data points within flow computer reports. Unlike the report index, which serves as a reference to locate reports, these values contain the actual measurements and calculated data stored in the historical records.

    Flow computer reports store two types of data values:

    - Hourly data values: Contain measured or calculated values (e.g., operational minutes, alarms set, etc.) recorded on an hourly basis.
    - Daily data values: Contain measured or calculated values (e.g., operational minutes, alarms set, etc.) recorded on a daily basis.
    Each value is directly linked to its respective report index, ensuring traceability to the original flow computer record. These values maintain their raw integrity, providing a reliable source for analysis and validation.
    score: 0.1

    sentence1: flow computer configuration
    sentence2: What is a Measurement Type?
    Measurement types define the classification of measurements used within a system based on their purpose and regulatory requirements. These types include fiscal, appropriation, operational, and custody measurements.

    - Fiscal measurements are used for tax and regulatory reporting, ensuring accurate financial transactions based on measured quantities.
    - Appropriation measurements track resource allocation and ownership distribution among stakeholders.
    - Operational measurements support real-time monitoring and process optimization within industrial operations.
    - Custody measurements are essential for legal and contractual transactions, ensuring precise handover of fluids between parties.

    These classifications play a crucial role in compliance, financial accuracy, and operational efficiency across industries such as oil and gas, water management, and energy distribution.
    score: 0.1

    sentence1: uncertainty certificate number
    sentence2: What is an Uncertainty Composition?
    An Uncertainty Composition represents a specific factor that contributes to the overall uncertainty of a measurement system. These components are essential for evaluating the accuracy and reliability of measurements by identifying and quantifying the sources of uncertainty.

    Key Aspects of an Uncertainty Component:
    - Component Name: Defines the uncertainty factor (e.g., diameter, density, variance, covariance) influencing the measurement system.
    - Value of Composition: Quantifies the component’s contribution to the total uncertainty, helping to analyze which factors have the greatest impact.
    - Uncertainty File ID: Links the component to a specific uncertainty dataset for traceability and validation.
    Understanding these components is critical for uncertainty analysis, ensuring compliance with industry standards and improving measurement precision.
    score: 0.1
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
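
These values match the CoSENTLoss defaults in Sentence Transformers. A minimal sketch of how the loss could be instantiated for training (variable names are illustrative):

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CoSENTLoss
from sentence_transformers.util import pairwise_cos_sim

model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

# scale=20.0 with pairwise cosine similarity, as listed above
loss = CoSENTLoss(model=model, scale=20.0, similarity_fct=pairwise_cos_sim)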
    

Evaluation Dataset

measuring-embeddings-v4

  • Dataset: measuring-embeddings-v4 at 1e3ca2c
  • Size: 659 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 659 samples:
    • sentence1 (string): min 3 tokens, mean 7.63 tokens, max 17 tokens
    • sentence2 (string): min 80 tokens, mean 186.36 tokens, max 406 tokens
    • score (float): min 0.07, mean 0.2, max 0.9
  • Samples:

    sentence1: measurement system details
    sentence2: What is an Uncertainty Composition?
    An Uncertainty Composition represents a specific factor that contributes to the overall uncertainty of a measurement system. These components are essential for evaluating the accuracy and reliability of measurements by identifying and quantifying the sources of uncertainty.

    Key Aspects of an Uncertainty Component:
    - Component Name: Defines the uncertainty factor (e.g., diameter, density, variance, covariance) influencing the measurement system.
    - Value of Composition: Quantifies the component’s contribution to the total uncertainty, helping to analyze which factors have the greatest impact.
    - Uncertainty File ID: Links the component to a specific uncertainty dataset for traceability and validation.
    Understanding these components is critical for uncertainty analysis, ensuring compliance with industry standards and improving measurement precision.
    score: 0.15

    sentence1: measurement system tag EMED-3102-02-010
    sentence2: What is a report index or historic index?
    Indexes represent the recorded reports generated by flow computers, classified into two types:
    - Hourly reports Index: Store data for hourly events.
    - Daily reports Index: Strore data for daily events.

    These reports, also referred to as historical data or flow computer historical records, contain raw, first-hand measurements directly collected from the flow computer. The data has not been processed or used in any calculations, preserving its original state for analysis or validation.

    The index is essential for locating specific values within the report.
    score: 0.24

    sentence1: static pressure
    sentence2: What is a Meter Stream?
    A Meter Stream represents a measurement system configured within a flow computer. It serves as the interface between the physical measurement system and the computational processes that record and analyze flow data.

    Key Aspects of a Meter Stream:
    - Status: Indicates whether the meter stream is active or inactive.
    - Measurement System Association: Links the meter stream to a specific measurement system, ensuring that the data collected corresponds to a defined physical setup.
    - Flow Computer Association: Identifies the flow computer responsible for managing and recording the measurement system's data.
    Why is a Meter Stream Important?
    A meter stream is a critical component in flow measurement, as it ensures that the measurement system is correctly integrated into the flow computer for accurate monitoring and reporting. Since each flow computer can handle multiple meter streams, proper configuration is essential for maintaining data integrity and traceability.
    score: 0.1
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • gradient_accumulation_steps: 4
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
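
These non-default values map directly onto SentenceTransformerTrainingArguments; a minimal sketch (the output directory is a placeholder):

from sentence_transformers import SentenceTransformerTrainingArguments

# Only the non-default hyperparameters listed above are set explicitly
args = SentenceTransformerTrainingArguments(
    output_dir="measuring-embeddings-v4.2",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
)

With gradient_accumulation_steps=4 and a per-device batch size of 4, the effective training batch size is 16 per device.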

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 4
  • per_device_eval_batch_size: 4
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 4
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss
2.3953 460 0.8121 -
2.4473 470 1.7843 -
2.4993 480 3.0975 -
2.5514 490 0.8585 -
2.6034 500 2.7931 -
2.6554 510 1.4479 -
2.7074 520 1.6132 -
2.7594 530 0.8279 -
2.8114 540 2.0968 -
2.8635 550 1.5086 -
2.9155 560 1.7022 -
2.9675 570 1.7252 -
3.0208 580 0.329 -
3.0728 590 3.0231 -
3.1248 600 1.2077 0.4939
3.1769 610 1.7389 -
3.2289 620 1.747 -
3.2809 630 2.608 -
3.3329 640 2.3748 -
3.3849 650 0.9898 -
3.4369 660 3.6768 -
3.4889 670 1.7257 -
3.5410 680 1.2324 -
3.5930 690 1.4847 -
3.6450 700 0.5312 -
3.6970 710 2.6352 -
3.7490 720 3.3293 -
3.8010 730 1.0756 -
3.8531 740 1.2176 -
3.9051 750 1.4641 0.2318
3.9571 760 0.4642 -
4.0052 770 0.8467 -
4.0572 780 0.6422 -
4.1092 790 1.2341 -
4.1612 800 1.2382 -
4.2133 810 0.8518 -
4.2653 820 2.2545 -
4.3173 830 1.0461 -
4.3693 840 1.4097 -
4.4213 850 1.6382 -
4.4733 860 3.3653 -
4.5254 870 1.6778 -
4.5774 880 2.4592 -
4.6294 890 2.3244 -
4.6814 900 0.7048 0.2351
4.7334 910 1.507 -
4.7854 920 1.9508 -
4.8375 930 0.9046 -
4.8895 940 1.3923 -
4.9415 950 2.8222 -
4.9935 960 0.8341 -
5.0416 970 1.7129 -
5.0936 980 0.5792 -
5.1456 990 1.5091 -
5.1977 1000 0.8392 -
5.2497 1010 1.3499 -
5.3017 1020 1.1315 -
5.3537 1030 0.8192 -
5.4057 1040 0.3839 -
5.4577 1050 0.887 0.3572
5.5098 1060 0.9957 -
5.5618 1070 1.4341 -
5.6138 1080 0.5888 -
5.6658 1090 1.4963 -
5.7178 1100 1.5912 -
5.7698 1110 1.3382 -
5.8218 1120 1.4406 -
5.8739 1130 1.0845 -
5.9259 1140 0.2931 -
5.9779 1150 0.8994 -
6.0260 1160 1.1391 -
6.0780 1170 1.4646 -
6.1300 1180 0.509 -
6.1821 1190 0.4108 -
6.2341 1200 0.418 0.2573
6.2861 1210 1.4609 -
6.3381 1220 1.4237 -
6.3901 1230 0.6612 -
6.4421 1240 1.52 -
6.4941 1250 0.9426 -
6.5462 1260 1.5047 -
6.5982 1270 0.2918 -
6.6502 1280 0.96 -
6.7022 1290 1.6685 -
6.7542 1300 0.6779 -
6.8062 1310 0.0522 -
6.8583 1320 1.5055 -
6.9103 1330 0.2947 -
6.9623 1340 0.7499 -
7.0104 1350 2.6794 0.1881
7.0624 1360 1.4322 -
7.1144 1370 0.1859 -
7.1664 1380 1.0946 -
7.2185 1390 1.0941 -
7.2705 1400 0.8873 -
7.3225 1410 0.3996 -
7.3745 1420 0.159 -
7.4265 1430 0.7672 -
7.4785 1440 0.6511 -
7.5306 1450 0.2682 -
7.5826 1460 1.5488 -
7.6346 1470 0.4513 -
7.6866 1480 0.7482 -
7.7386 1490 1.4327 -
7.7906 1500 1.0277 0.1801
7.8427 1510 0.4197 -
7.8947 1520 3.3415 -
7.9467 1530 0.7131 -
7.9987 1540 0.7276 -
8.0468 1550 1.1939 -
8.0988 1560 0.4333 -
8.1508 1570 1.3594 -
8.2029 1580 0.9792 -
8.2549 1590 0.4581 -
8.3069 1600 0.5785 -
8.3589 1610 0.4015 -
8.4109 1620 0.5693 -
8.4629 1630 1.4925 -
8.5150 1640 0.6028 -
8.5670 1650 0.2087 0.1802
8.6190 1660 1.0404 -
8.6710 1670 0.8293 -
8.7230 1680 1.1231 -
8.7750 1690 0.4747 -
8.8270 1700 1.0668 -
8.8791 1710 1.2665 -
8.9311 1720 0.3004 -
8.9831 1730 0.1333 -
9.0312 1740 1.0171 -
9.0832 1750 1.3999 -
9.1352 1760 0.1939 -
9.1873 1770 0.1591 -
9.2393 1780 0.1243 -
9.2913 1790 0.8689 -
9.3433 1800 0.4325 0.1501
9.3953 1810 0.5094 -
9.4473 1820 0.3178 -
9.4993 1830 0.211 -
9.5514 1840 1.3497 -
9.6034 1850 0.6287 -
9.6554 1860 0.4895 -
9.7074 1870 0.3925 -
9.7594 1880 0.4384 -
9.8114 1890 0.8487 -
9.8635 1900 0.9134 -
9.9155 1910 0.1522 -
9.9675 1920 0.3798 -

Framework Versions

  • Python: 3.11.0
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0
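
To approximate this environment, the versions above can be pinned at install time (a hedged example; the CUDA-specific PyTorch build, 2.6.0+cu124, is typically installed from the PyTorch index rather than plain PyPI):

pip install "sentence-transformers==3.4.1" "transformers==4.49.0" "accelerate==1.4.0" "datasets==3.3.2" "tokenizers==0.21.0"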

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}