BGE base Horizon Europe Matryoshka
This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-base-en-v1.5
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
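The same three-module stack can be assembled by hand from the base model. The following is a minimal sketch using the standard sentence-transformers building blocks (Transformer, Pooling, Normalize); it mirrors the configuration above rather than reproducing this repository's exact files.

from sentence_transformers import SentenceTransformer, models

# BERT encoder with a 512-token limit and lowercasing, as in the config above.
word_embedding = models.Transformer(
    "BAAI/bge-base-en-v1.5",
    max_seq_length=512,
    do_lower_case=True,
)
# CLS-token pooling over the 768-dimensional word embeddings.
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),
    pooling_mode="cls",
)
# L2-normalize the sentence embedding so dot product equals cosine similarity.
normalize = models.Normalize()

model = SentenceTransformer(modules=[word_embedding, pooling, normalize])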
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("daniellgoncalves/bge-base-horizon-europe-matryoshka")
# Run inference
sentences = [
'including data. They are globally unique and long -lasting references to digital objects \r\n(such as data, publications and other research outputs) or non -digital objects such as \r\nresearchers, research institutions, grants, etc. Frequently used persistent identifiers \r\ninclude digital object identifiers (DOIs), Handles, and others. For further reading on PID \r\ntypes, please refer to https://www.dpconline.org/handbook/technical -solutions -and-\r\ntools/persistent -identifiers . \r\nTo enhance the findability of r esearch outputs, and their potential reuse, standardised \r\nmetadata frameworks are essential, ensuring that data and other research outputs \r\nare accompanied by rich metadata that provides them with context. \r\nTo enhance the re -usability of research data, they must be licenced. For more \r\ninformation on the licences required for data under Horizon Europe, please refer to the \r\nAGA (article 17). \r\nTrusted repositories assume a central role in the Horizon Europe for the deposition of \r\nand access to publications and resea rch data. For a definition of trusted repositories in \r\nHorizon Europe please refer to the AGA (article 17). Proposers, with the help of data \r\nand research support staff (e.g. data stewards, data librarians, et c), should check \r\nwhether the repositories that th ey plan to deposit their data have the features of \r\ntrusted repositories, and justify this accordingly in their Data Management Plans. \r\nData management plans (DMPs) are a cornerstone for responsible management of \r\nresearch outputs, notably data and are mandatory in Horizon Europe for projects \r\ngenerating and/or reusing data (on requirements and the frequency of DMPs as \r\ndeliverables consult the AGA article 17). A template for a DMP is provided under the \r\nreporting templates in the reference documents of the Funding and Tenders portal of \r\nthe European Commission. Its use is recom mended but not mandatory. DMPs are',
'What is a cornerstone for responsible management of research outputs in Horizon Europe?\r\n',
'How can a Beneficiary submit a Financial Statement?\r\n',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
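Because the model was trained with a Matryoshka objective (dimensions 768, 512, 256, 128 and 64, see Training Details below), the embeddings can also be truncated to a smaller dimensionality with only a modest quality loss. A minimal sketch, assuming a sentence-transformers release that supports the truncate_dim argument (2.7.0 or newer); the example sentences are illustrative:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 embedding dimensions.
model = SentenceTransformer(
    "daniellgoncalves/bge-base-horizon-europe-matryoshka",
    truncate_dim=256,
)

sentences = [
    "What is a cornerstone for responsible management of research outputs in Horizon Europe?",
    "Data management plans (DMPs) are a cornerstone for responsible management of research outputs.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# (2, 256)

# Similarity works the same way on truncated embeddings.
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# (2, 2)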
Training Details
Training Dataset
Unnamed Dataset
- Size: 45 training samples
- Columns: positive and anchor (a construction sketch follows the samples below)
- Approximate statistics based on the first 1000 samples:
  - positive: type string; min: 54 tokens, mean: 309.0 tokens, max: 512 tokens
  - anchor: type string; min: 8 tokens, mean: 18.09 tokens, max: 34 tokens
- Samples:
  - Sample 1
    - positive: and rules and procedures for sharing of data including licensing. Research outputs — Results to which access can be given in the form of scientific publications, data or other engineered results and processes such as software, algorithms, protocols, models, workflows and electronic notebooks. [Scope of the obligations: For this section, references to ‘beneficiary’ or ‘beneficiaries’ do not include affiliated entities (if any).] [Agreement on background — Background free from restrictions: The beneficiaries must identify in a written agreement the background as needed for implementing the action or for exploiting its results. Where the call conditions restrict control due to strategic interests reasons, background that is subject to control or other restrictions by a country (or entity from a country) which is not one of the eligible countries or target countries set out in the call conditions and that impact the exploitation of the results (i.e. would make the exploitation of the results subject to control or restrictions) must not be used and must be explicitly excluded in the agreement on background, unless otherwise agreed with the granting authority.] [Results free from restrictions: Where the call conditions restrict control due to [strategic interests reasons], the beneficiaries must ensure that the results of the action are not subject to control or other restrictions by a country (or entity from a country) which is not one of the eligible countries or target countries set out in the call conditions, unless otherwise agreed with the granting authority.] Ownership of results: Results are owned by the beneficiaries that generate them. [[…]]
    - anchor: Who owns the results of the action in the given context?
  - Sample 2
    - positive: EU Grants: HE Programme Guide: V4.0 – 15.10.2023. To a degree matching the type of research being proposed (from basic to precompetitive) and as appropriate, AI-based systems or techniques should be, or be developed to become (implicitly or explicitly contributing to one or several of the following objectives): technically robust, accurate and reproducible, and able to deal with and inform about possible failures, inaccuracies and errors, proportionate to the assessed risk posed by the AI-based system or technique; socially robust, in that they duly consider the context and environment in which they operate; reliable and to function as intended, minimising unintentional and unexpected harm, preventing unacceptable harm and safeguarding the physical and mental integrity of humans; able to provide a suitable explanation of its decision-making process, whenever an AI-based system can have a significant impact on people’s lives.
    - anchor: What should AI-based systems be able to provide in terms of decision-making process?
  - Sample 3
    - positive: EU Funding & Tenders Portal: Online Manual: V1.1 – 15.09.2022. 1 Participant Contact (PaCo) per beneficiary; 1 LEAR per organisation; 1 Project Legal Signatory (PLSIGN) per organisation; 1 Project Financial Signatory (PFSIGN) per organisation. One person can have several roles at the same time. Organisations that participate as Affiliated Entities (or other type of participant — Associated Partner, Subcontractor, etc) do NOT need any access roles in the Portal, since they are not allowed to use it. All actions in the Portal are handled for them by the Coordinator/Beneficiary they are linked to. FAQ: FAQ on users' roles and access rights. 1.3 Accepting the Terms and Conditions of Use: The Portal is part of the Single Electronic Data Exchange Area set up under Articles 147 and 128 of the EU Financial Regulation. On the first login to My Area, users must agree to the Portal Terms and Conditions and the Portal Privacy Statement. Organisations will be asked to agree to the Terms and Conditions when they appoint their LEAR (declaration of consent). Every time you access My Area, you are implicitly reaffirming your acceptance of the Terms and Conditions valid at that time. Personal data will be kept and processed for the purposes of the Single Electronic Data Exchange Area, i.e. for the management and implementation of your EU grants, contracts, prizes and other transactions managed through the Portal. The detailed conditions for the processing of your personal data are set out in the Portal Privacy Statement. 2. Participant Register — Register your organisation
    - anchor: Who is responsible for agreeing to the Portal Terms and Conditions and the Portal Privacy Statement for an organization?
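The training data is a plain two-column dataset of (positive, anchor) pairs like the samples above. Below is a minimal sketch of how such a dataset can be assembled with the datasets library; the example row reuses text already shown in this card and is only illustrative.

from datasets import Dataset

# Each row pairs a passage ("positive") with a question it answers ("anchor").
train_dataset = Dataset.from_dict({
    "positive": [
        "Data management plans (DMPs) are a cornerstone for responsible management of "
        "research outputs, notably data, and are mandatory in Horizon Europe for projects "
        "generating and/or reusing data.",
    ],
    "anchor": [
        "What is a cornerstone for responsible management of research outputs in Horizon Europe?",
    ],
})
print(train_dataset)
# Dataset({features: ['positive', 'anchor'], num_rows: 1})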
- Loss: MatryoshkaLoss with these parameters:
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [768, 512, 256, 128, 64],
      "matryoshka_weights": [1, 1, 1, 1, 1],
      "n_dims_per_step": -1
  }
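In code, this configuration corresponds roughly to the following setup. This is a minimal sketch using the standard sentence-transformers loss classes; the variable names are illustrative and not taken from the original training script.

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

# Wrap the in-batch-negatives ranking loss so it is applied at every
# Matryoshka dimensionality, each with equal weight.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)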
Training Hyperparameters
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 8
- per_device_eval_batch_size: 8
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 3.0
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
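These values are essentially the library defaults (batch size 8, learning rate 5e-05, 3 epochs, linear scheduler). A training run with the sentence-transformers v3 trainer API would look roughly like the sketch below, assuming the model, train_dataset and loss objects from the earlier sketches; it is illustrative, not the original training script.

from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# Mirror the hyperparameters listed above (all defaults except output_dir).
args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-horizon-europe-matryoshka",
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    warmup_ratio=0.0,
    seed=42,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()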
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}