SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
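
These values can be checked directly on a loaded model. A minimal sketch, using the repository name from the Usage section below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("danicafisher/dfisher-fine-tuned-sentence-transformer")
print(model.max_seq_length)                      # 256
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # cosine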

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
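
The stack above is a BERT encoder followed by mean pooling over token embeddings and L2 normalization. As a rough illustrative sketch of the same pipeline using the transformers library directly (not an officially documented usage path for this checkpoint):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_name = "danicafisher/dfisher-fine-tuned-sentence-transformer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)

sentences = ["What risks do GAI systems pose?", "GAI can spread misinformation at scale."]

# Tokenize with the same 256-token limit as the Transformer module above
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = encoder(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Pooling: mean over tokens, ignoring padding positions
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize(): unit-length vectors, so cosine similarity reduces to a dot product
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings.shape)  # torch.Size([2, 384])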

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("danicafisher/dfisher-fine-tuned-sentence-transformer")
# Run inference
sentences = [
    'What are the primary information security risks associated with GAI-based systems in the context of cybersecurity?',
    '10 \nGAI systems can ease the unintentional production or dissemination of false, inaccurate, or misleading \ncontent (misinformation) at scale, particularly if the content stems from confabulations.  \nGAI systems can also ease the deliberate production or dissemination of false or misleading information \n(disinformation) at scale, where an actor has the explicit intent to deceive or cause harm to others. Even \nvery subtle changes to text or images can manipulate human and machine perception. \nSimilarly, GAI systems could enable a higher degree of sophistication for malicious actors to produce \ndisinformation that is targeted towards specific demographics. Current and emerging multimodal models \nmake it possible to generate both text-based disinformation and highly realistic “deepfakes” – that is, \nsynthetic audiovisual content and photorealistic images.12 Additional disinformation threats could be \nenabled by future GAI models trained on new data modalities. \nDisinformation and misinformation – both of which may be facilitated by GAI – may erode public trust in \ntrue or valid evidence and information, with downstream effects. For example, a synthetic image of a \nPentagon blast went viral and briefly caused a drop in the stock market. Generative AI models can also \nassist malicious actors in creating compelling imagery and propaganda to support disinformation \ncampaigns, which may not be photorealistic, but could enable these campaigns to gain more reach and \nengagement on social media platforms. Additionally, generative AI models can assist malicious actors in \ncreating fraudulent content intended to impersonate others. \nTrustworthy AI Characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable and \nExplainable \n2.9. Information Security \nInformation security for computer systems and data is a mature field with widely accepted and \nstandardized practices for offensive and defensive cyber capabilities. GAI-based systems present two \nprimary information security risks: GAI could potentially discover or enable new cybersecurity risks by \nlowering the barriers for or easing automated exercise of offensive capabilities; simultaneously, it \nexpands the available attack surface, as GAI itself is vulnerable to attacks like prompt injection or data \npoisoning.  \nOffensive cyber capabilities advanced by GAI systems may augment cybersecurity attacks such as \nhacking, malware, and phishing. Reports have indicated that LLMs are already able to discover some \nvulnerabilities in systems (hardware, software, data) and write code to exploit them. Sophisticated threat \nactors might further these risks by developing GAI-powered security co-pilots for use in several parts of \nthe attack chain, including informing attackers on how to proactively evade threat detection and escalate \nprivileges after gaining system access. \nInformation security for GAI models and systems also includes maintaining availability of the GAI system \nand the integrity and (when applicable) the confidentiality of the GAI code, training data, and model \nweights. To identify and secure potential attack points in AI systems or specific components of the AI \n \n \n12 See also https://doi.org/10.6028/NIST.AI.100-4, to be published.',
    '7 \nunethical behavior. Text-to-image models also make it easy to create images that could be used to \npromote dangerous or violent messages. Similar concerns are present for other GAI media, including \nvideo and audio. GAI may also produce content that recommends self-harm or criminal/illegal activities.  \nMany current systems restrict model outputs to limit certain content or in response to certain prompts, \nbut this approach may still produce harmful recommendations in response to other less-explicit, novel \nprompts (also relevant to CBRN Information or Capabilities, Data Privacy, Information Security, and \nObscene, Degrading and/or Abusive Content). Crafting such prompts deliberately is known as \n“jailbreaking,” or, manipulating prompts to circumvent output controls. Limitations of GAI systems can be \nharmful or dangerous in certain contexts. Studies have observed that users may disclose mental health \nissues in conversations with chatbots – and that users exhibit negative reactions to unhelpful responses \nfrom these chatbots during situations of distress. \nThis risk encompasses difficulty controlling creation of and public exposure to offensive or hateful \nlanguage, and denigrating or stereotypical content generated by AI. This kind of speech may contribute \nto downstream harm such as fueling dangerous or violent behaviors. The spread of denigrating or \nstereotypical content can also further exacerbate representational harms (see Harmful Bias and \nHomogenization below).  \nTrustworthy AI Characteristics: Safe, Secure and Resilient \n2.4. Data Privacy \nGAI systems raise several risks to privacy. GAI system training requires large volumes of data, which in \nsome cases may include personal data. The use of personal data for GAI training raises risks to widely \naccepted privacy principles, including to transparency, individual participation (including consent), and \npurpose specification. For example, most model developers do not disclose specific data sources on \nwhich models were trained, limiting user awareness of whether personally identifiably information (PII) \nwas trained on and, if so, how it was collected.  \nModels may leak, generate, or correctly infer sensitive information about individuals. For example, \nduring adversarial attacks, LLMs have revealed sensitive information (from the public domain) that was \nincluded in their training data. This problem has been referred to as data memorization, and may pose \nexacerbated privacy risks even for data present only in a small number of training samples.  \nIn addition to revealing sensitive information in GAI training data, GAI models may be able to correctly \ninfer PII or sensitive data that was not in their training data nor disclosed by the user by stitching \ntogether information from disparate sources. These inferences can have negative impact on an individual \neven if the inferences are not accurate (e.g., confabulations), and especially if they reveal information \nthat the individual considers sensitive or that is used to disadvantage or harm them. \nBeyond harms from information exposure (such as extortion or dignitary harm), wrong or inappropriate \ninferences of PII can contribute to downstream or secondary harmful impacts. For example, predictive \ninferences made by GAI models based on PII or protected attributes can contribute to adverse decisions, \nleading to representational or allocative harms to individuals or groups (see Harmful Bias and \nHomogenization below).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (3, 384)

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# torch.Size([3, 3])
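
Because the example above encodes one question and two document passages, the same embeddings can double as a small retrieval check. The query/corpus split below is illustrative and not part of the original card:

# Treat the first sentence (the question) as the query and the two passages as the corpus
query_embedding = embeddings[0]
passage_embeddings = embeddings[1:]

scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(f"Most relevant passage: index {best + 1}, score {scores[0, best].item():.4f}")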

Training Details

Training Dataset

Unnamed Dataset

  • Size: 128 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 128 samples:
    • sentence_0: string; min: 17 tokens, mean: 23.14 tokens, max: 38 tokens
    • sentence_1: string; min: 56 tokens, mean: 247.42 tokens, max: 256 tokens
  • Samples:
    sentence_0: How should fairness assessments be conducted to measure systemic bias across demographic groups in GAI systems?
    sentence_1: 36
    MEASURE 2.11: Fairness and bias – as identified in the MAP function – are evaluated and results are documented.
    Action ID
    Suggested Action
    GAI Risks
    MS-2.11-001
    Apply use-case appropriate benchmarks (e.g., Bias Benchmark Questions, Real
    Hateful or Harmful Prompts, Winogender Schemas15) to quantify systemic bias,
    stereotyping, denigration, and hateful content in GAI system outputs;
    Document assumptions and limitations of benchmarks, including any actual or
    possible training/test data cross contamination, relative to in-context
    deployment environment.
    Harmful Bias and Homogenization
    MS-2.11-002
    Conduct fairness assessments to measure systemic bias. Measure GAI system
    performance across demographic groups and subgroups, addressing both
    quality of service and any allocation of services and resources. Quantify harms
    using: field testing with sub-group populations to determine likelihood of
    exposure to generated content exhibiting harmful bias, AI red-teaming with
    counterfactual and low-context (e.g., “leader,” “bad guys”) prompts. For ML
    pipelines or business processes with categorical or numeric outcomes that rely
    on GAI, apply general fairness metrics (e.g., demographic parity, equalized odds,
    equal opportunity, statistical hypothesis tests), to the pipeline or business
    outcome where appropriate; Custom, context-specific metrics developed in
    collaboration with domain experts and affected communities; Measurements of
    the prevalence of denigration in generated content in deployment (e.g., sub-
    sampling a fraction of traffic and manually annotating denigrating content).
    Harmful Bias and Homogenization;
    Dangerous, Violent, or Hateful
    Content
    MS-2.11-003
    Identify the classes of individuals, groups, or environmental ecosystems which
    might be impacted by GAI systems through direct engagement with potentially
    impacted communities.
    Environmental; Harmful Bias and
    Homogenization
    MS-2.11-004
    Review, document, and measure sources of bias in GAI training and TEVV data:
    Differences in distributions of outcomes across and within groups, including
    intersecting groups; Completeness, representativeness, and balance of data
    sources; demographic group and subgroup coverage in GAI system training
    data; Forms of latent systemic bias in images, text, audio, embeddings, or other
    complex or unstructured data; Input data features that may serve as proxies for
    demographic group membership (i.e., image metadata, language dialect) or
    otherwise give rise to emergent bias within GAI systems; The extent to which
    the digital divide may negatively impact representativeness in GAI system
    training and TEVV data; Filtering of hate speech or content in GAI system
    training data; Prevalence of GAI-generated data in GAI system training data.
    Harmful Bias and Homogenization


    15 Winogender Schemas is a sample set of paired sentences which differ only by gender of the pronouns used,
    which can be used to evaluate gender bias in natural language processing coreference resolution systems.
    sentence_0: How should organizations adjust their AI system inventory requirements to account for GAI risks?
    sentence_1: 16
    GOVERN 1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned, and
    organizational roles and responsibilities are clearly defined, including determining the frequency of periodic review.
    Action ID
    Suggested Action
    GAI Risks
    GV-1.5-001 Define organizational responsibilities for periodic review of content provenance
    and incident monitoring for GAI systems.
    Information Integrity
    GV-1.5-002
    Establish organizational policies and procedures for after action reviews of GAI
    system incident response and incident disclosures, to identify gaps; Update
    incident response and incident disclosure processes as required.
    Human-AI Configuration;
    Information Security
    GV-1.5-003
    Maintain a document retention policy to keep history for test, evaluation,
    validation, and verification (TEVV), and digital content transparency methods for
    GAI.
    Information Integrity; Intellectual
    Property
    AI Actor Tasks: Governance and Oversight, Operation and Monitoring

    GOVERN 1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
    Action ID
    Suggested Action
    GAI Risks
    GV-1.6-001 Enumerate organizational GAI systems for incorporation into AI system inventory
    and adjust AI system inventory requirements to account for GAI risks.
    Information Security
    GV-1.6-002 Define any inventory exemptions in organizational policies for GAI systems
    embedded into application software.
    Value Chain and Component
    Integration
    GV-1.6-003
    In addition to general model, governance, and risk information, consider the
    following items in GAI system inventory entries: Data provenance information
    (e.g., source, signatures, versioning, watermarks); Known issues reported from
    internal bug tracking or external information sharing resources (e.g., AI incident
    database, AVID, CVE, NVD, or OECD AI incident monitor); Human oversight roles
    and responsibilities; Special rights and considerations for intellectual property,
    licensed works, or personal, privileged, proprietary or sensitive data; Underlying
    foundation models, versions of underlying models, and access modes.
    Data Privacy; Human-AI
    Configuration; Information
    Integrity; Intellectual Property;
    Value Chain and Component
    Integration
    AI Actor Tasks: Governance and Oversight
    sentence_0: What framework is suggested for evaluating and monitoring third-party entities' performance and adherence to content provenance standards?
    sentence_1: 21
    GV-6.1-005
    Implement a use-cased based supplier risk assessment framework to evaluate and
    monitor third-party entities’ performance and adherence to content provenance
    standards and technologies to detect anomalies and unauthorized changes;
    services acquisition and value chain risk management; and legal compliance.
    Data Privacy; Information
    Integrity; Information Security;
    Intellectual Property; Value Chain
    and Component Integration
    GV-6.1-006 Include clauses in contracts which allow an organization to evaluate third-party
    GAI processes and standards.
    Information Integrity
    GV-6.1-007 Inventory all third-party entities with access to organizational content and
    establish approved GAI technology and service provider lists.
    Value Chain and Component
    Integration
    GV-6.1-008 Maintain records of changes to content made by third parties to promote content
    provenance, including sources, timestamps, metadata.
    Information Integrity; Value Chain
    and Component Integration;
    Intellectual Property
    GV-6.1-009
    Update and integrate due diligence processes for GAI acquisition and
    procurement vendor assessments to include intellectual property, data privacy,
    security, and other risks. For example, update processes to: Address solutions that
    may rely on embedded GAI technologies; Address ongoing monitoring,
    assessments, and alerting, dynamic risk assessments, and real-time reporting
    tools for monitoring third-party GAI risks; Consider policy adjustments across GAI
    modeling libraries, tools and APIs, fine-tuned models, and embedded tools;
    Assess GAI vendors, open-source or proprietary GAI tools, or GAI service
    providers against incident or vulnerability databases.
    Data Privacy; Human-AI
    Configuration; Information
    Security; Intellectual Property;
    Value Chain and Component
    Integration; Harmful Bias and
    Homogenization
    GV-6.1-010
    Update GAI acceptable use policies to address proprietary and open-source GAI
    technologies and data, and contractors, consultants, and other third-party
    personnel.
    Intellectual Property; Value Chain
    and Component Integration
    AI Actor Tasks: Operation and Monitoring, Procurement, Third-party entities

    GOVERN 6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be
    high-risk.
    Action ID
    Suggested Action
    GAI Risks
    GV-6.2-001
    Document GAI risks associated with system value chain to identify over-reliance
    on third-party data and to identify fallbacks.
    Value Chain and Component
    Integration
    GV-6.2-002
    Document incidents involving third-party GAI data and systems, including open-
    data and open-source software.
    Intellectual Property; Value Chain
    and Component Integration
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
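
This is the standard MultipleNegativesRankingLoss from Sentence Transformers; a minimal sketch of how the parameters above map onto its constructor (the surrounding setup is illustrative):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each (sentence_0, sentence_1) pair is treated as a positive; the other sentence_1
# entries in the same batch act as in-batch negatives.
loss = losses.MultipleNegativesRankingLoss(
    model,
    scale=20.0,                   # "scale": 20.0
    similarity_fct=util.cos_sim,  # "similarity_fct": "cos_sim"
)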
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • multi_dataset_batch_sampler: round_robin
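
A minimal sketch of a training run with these settings, assuming the 128 (sentence_0, sentence_1) pairs are available as a Hugging Face Dataset (the actual pair data is not published with this card, and the output path below is arbitrary):

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Hypothetical stand-in for the real 128 training pairs
train_dataset = Dataset.from_dict({
    "sentence_0": ["What are the primary information security risks of GAI-based systems?"],
    "sentence_1": ["GAI-based systems present two primary information security risks: ..."],
})

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model)  # configured as shown under Training Dataset

args = SentenceTransformerTrainingArguments(
    output_dir="output/dfisher-fine-tuned-sentence-transformer",  # arbitrary output path
    num_train_epochs=3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    multi_dataset_batch_sampler="round_robin",  # only relevant when training on multiple datasets
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()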

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • eval_use_gather_object: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.1.1
  • Transformers: 4.44.2
  • PyTorch: 2.4.1+cu121
  • Accelerate: 0.34.2
  • Datasets: 3.0.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}