---
license: apache-2.0
---
Model Card for industry-bert-insurance-v0.1
industry-bert-insurance-v0.1 is part of a series of industry fine-tuned Sentence Transformer embedding models.
It is a BERT-based model that produces 768-dimensional embeddings and is intended as a drop-in substitute for general-purpose, non-industry-specific embedding models. The model was trained on a wide range of publicly available materials related to the insurance industry.
Model Details
Model Description
- Developed by: llmware
- Shared by [optional]: Darren Oberst
- Model type: BERT-based Industry domain fine-tuned Sentence Transformer architecture
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model [optional]: BERT-based model, fine-tuning methodology described below.
Model Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
This model is intended to be used as a sentence embedding model, specifically for the insurance industry.
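As a sketch of this intended use, the snippet below ranks insurance passages against a query by cosine similarity. It assumes the checkpoint is published on the Hugging Face Hub as llmware/industry-bert-insurance-v0.1 (inferred from the model name above) and that it loads through the sentence-transformers library, which builds a default mean-pooling head for plain BERT checkpoints; the query and passages are illustrative.

```python
# Hypothetical usage sketch: semantic search over insurance text.
from sentence_transformers import SentenceTransformer, util

# Repo id inferred from the model name; verify before use.
model = SentenceTransformer("llmware/industry-bert-insurance-v0.1")

query = "What does the policy cover for flood damage?"
passages = [
    "Flood damage is excluded unless a separate rider is purchased.",
    "The claims process begins with a first notice of loss.",
]

query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity between the query and each passage, shape [1, 2].
scores = util.cos_sim(query_emb, passage_embs)
print(scores)
```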
Downstream Use [optional]
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
How to Get Started with the Model
Use the code below to get started with the model.
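The following is a minimal sketch using the Hugging Face transformers library, not an official quickstart. It assumes the repository id llmware/industry-bert-insurance-v0.1 (inferred from the model name above) and applies standard masked mean pooling over the final hidden states to obtain sentence embeddings.

```python
# Minimal sketch: embed insurance-domain sentences with this model via the
# Hugging Face transformers library. The repository id below is inferred
# from the model name above; verify it before use.

import torch
from transformers import AutoModel, AutoTokenizer

model_name = "llmware/industry-bert-insurance-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

sentences = [
    "The policyholder filed a claim for water damage.",
    "Premiums are adjusted annually based on underwriting risk.",
]

inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Masked mean pooling over the final hidden states yields one
# 768-dimensional vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([2, 768])
```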
Training Details
Training Data
[More Information Needed]
Training Procedure
This model was fine-tuned using a custom self-supervised procedure that combined contrastive techniques with stochastic injections of distortions into the samples. The methodology was derived from, adapted from, and inspired primarily by three research papers cited below: TSDAE (Wang et al.), DeCLUTR (Giorgi et al.), and Contrastive Tension (Carlsson et al.).
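The exact llmware protocol is not published here. Purely as an illustration of the TSDAE-style component (stochastic distortions paired with a reconstruction objective), the sketch below uses the sentence-transformers TSDAE utilities; the base checkpoint, corpus, and hyperparameters are placeholders, and the contrastive components from DeCLUTR and Contrastive Tension are not shown.

```python
# Illustrative only: a TSDAE-style denoising objective in the spirit of the
# cited papers, built on the sentence-transformers TSDAE utilities. The
# actual llmware protocol is custom; everything below is a placeholder.
# Requires nltk for the default noise function's word tokenization.

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, datasets, losses, models

base = "bert-base-uncased"  # placeholder, not the base checkpoint llmware used
word_embedding = models.Transformer(base)
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "cls")
model = SentenceTransformer(modules=[word_embedding, pooling])

train_sentences = [
    "The insurer denied the claim citing a pre-existing condition exclusion.",
    "Reinsurance treaties transfer a portion of the underwriting risk.",
]  # in practice: a large unlabeled insurance-domain corpus

# DenoisingAutoEncoderDataset stochastically distorts each sentence
# (token deletion by default) and pairs the noisy input with the original.
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
loader = DataLoader(train_dataset, batch_size=8, shuffle=True)

# The loss trains the encoder by reconstructing the original sentence
# from the embedding of the distorted input.
loss = losses.DenoisingAutoEncoderLoss(
    model, decoder_name_or_path=base, tie_encoder_decoder=True
)

model.fit(
    train_objectives=[(loader, loss)],
    epochs=1,
    weight_decay=0,
    scheduler="constantlr",
    optimizer_params={"lr": 3e-5},
)
```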
Model Examination [optional]
[More Information Needed]
Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- Hardware Type: [More Information Needed]
- Hours used: [More Information Needed]
- Cloud Provider: [More Information Needed]
- Compute Region: [More Information Needed]
- Carbon Emitted: [More Information Needed]
Technical Specifications [optional]
Model Architecture and Objective
[More Information Needed]
Compute Infrastructure
[More Information Needed]
Hardware
[More Information Needed]
Software
[More Information Needed]
Citation [optional]
The custom training protocol used to train the model was derived from and inspired by the following papers:
@article{wang-2021-TSDAE, title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning", author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", journal = "arXiv preprint arXiv:2104.06979", month = "4", year = "2021", url = "https://arxiv.org/abs/2104.06979", }
@inproceedings{giorgi-etal-2021-declutr, title = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations}, author = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary}, year = 2021, month = aug, booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)}, publisher = {Association for Computational Linguistics}, address = {Online}, pages = {879--895}, doi = {10.18653/v1/2021.acl-long.72}, url = {https://aclanthology.org/2021.acl-long.72} }
@inproceedings{Carlsson-2021-CT, title = {Semantic Re-tuning with Contrastive Tension}, author = {Carlsson, Fredrik and Cuba Gyllensten, Amaru and Gogoulou, Evangelia and Ylipää Hellqvist, Erik and Sahlgren, Magnus}, booktitle = {International Conference on Learning Representations (ICLR)}, year = {2021} }
Model Card Authors [optional]
[More Information Needed]
Model Card Contact
[More Information Needed]