---
license: apache-2.0
---

# Model Card for industry-bert-insurance-v0.1

<!-- Provide a quick summary of what the model is/does. -->
industry-bert-insurance-v0.1 is part of a series of industry fine-tuned sentence_transformer embedding models.

It is a BERT-based model that produces 768-dimensional embeddings and serves as a drop-in substitute for general-purpose, non-industry-specific embedding models. The model was trained on a wide range of publicly available materials related to the insurance industry.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Shared by [optional]:** Darren Oberst
- **Model type:** BERT-based, industry-domain fine-tuned Sentence Transformer architecture
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model [optional]:** BERT-based model; fine-tuning methodology described below.

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
This model is intended to be used as a sentence embedding model, specifically for the insurance industry; a brief usage sketch follows.

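As a minimal, hypothetical sketch of direct use, assuming the model is published on the Hugging Face Hub under `llmware/industry-bert-insurance-v0.1` and is loadable with the `sentence-transformers` library (neither is confirmed by this card):

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical repo id inferred from the model name.
model = SentenceTransformer("llmware/industry-bert-insurance-v0.1")

# Illustrative insurance-domain sentences.
sentences = [
    "The policyholder filed a claim for water damage to the insured property.",
    "A claim was submitted under the homeowner's policy for flood-related losses.",
    "The annual premium is due at the start of each coverage period.",
]

# Encode each sentence into a 768-dimensional embedding.
embeddings = model.encode(sentences)

# Cosine similarity of the first sentence against the other two.
print(util.cos_sim(embeddings[0], embeddings[1:]))
```
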
### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

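The snippet below is a minimal sketch rather than official example code; it assumes the repository ships standard BERT weights loadable with Hugging Face `transformers`, and the repo id is inferred from the model name:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Hypothetical repo id inferred from the model name.
model_name = "llmware/industry-bert-insurance-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

texts = ["The insurer denied coverage because the policy had lapsed."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings (ignoring padding) into one
# 768-dimensional sentence embedding per input.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([1, 768])
```
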
## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

This model was fine-tuned using a custom self-supervised procedure that combined contrastive techniques with stochastic injection of distortions into the training samples. The methodology was adapted from, and primarily inspired by, three research papers cited below: TSDAE (Wang et al.), DeCLUTR (Giorgi et al.), and Contrastive Tension (Carlsson et al.).

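The exact training protocol has not been published. Purely as an illustrative sketch of one ingredient of this family of methods, the following shows a TSDAE-style denoising objective using the `sentence-transformers` training utilities; the base checkpoint and corpus are stand-in assumptions, and the actual procedure additionally combined contrastive objectives with custom sample distortions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, datasets, losses

# Stand-in corpus: in practice, a large set of unlabeled
# insurance-domain sentences.
train_sentences = [
    "The policy covers losses arising from fire, theft, and vandalism.",
    "Reinsurance treaties transfer a portion of the underwriting risk.",
    "Actuarial tables are used to price life insurance premiums.",
]

# Build a sentence transformer from a BERT encoder with mean pooling.
# "bert-base-uncased" is an assumed base; the card does not name one.
word_embedding = models.Transformer("bert-base-uncased")
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(), "mean")
model = SentenceTransformer(modules=[word_embedding, pooling])

# The dataset applies stochastic noise (token deletion) to each sentence,
# yielding (damaged, original) training pairs.
train_dataset = datasets.DenoisingAutoEncoderDataset(train_sentences)
train_dataloader = DataLoader(train_dataset, batch_size=8, shuffle=True)

# The loss trains the encoder to reconstruct the original sentence
# from the distorted input, as in the TSDAE paper.
train_loss = losses.DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```
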
#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

A custom training protocol was used to train the model, derived from and inspired by the following papers:

@article{wang-2021-TSDAE,
  title   = {TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning},
  author  = {Wang, Kexin and Reimers, Nils and Gurevych, Iryna},
  journal = {arXiv preprint arXiv:2104.06979},
  month   = {4},
  year    = {2021},
  url     = {https://arxiv.org/abs/2104.06979},
}

@inproceedings{giorgi-etal-2021-declutr,
  title     = {{D}e{CLUTR}: Deep Contrastive Learning for Unsupervised Textual Representations},
  author    = {Giorgi, John and Nitski, Osvald and Wang, Bo and Bader, Gary},
  year      = {2021},
  month     = aug,
  booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)},
  publisher = {Association for Computational Linguistics},
  address   = {Online},
  pages     = {879--895},
  doi       = {10.18653/v1/2021.acl-long.72},
  url       = {https://aclanthology.org/2021.acl-long.72},
}

@inproceedings{Carlsson-2021-CT,
  title     = {Semantic Re-tuning with Contrastive Tension},
  author    = {Carlsson, Fredrik and Cuba Gyllensten, Amaru and Gogoulou, Evangelia and Ylipää Hellqvist, Erik and Sahlgren, Magnus},
  year      = {2021},
  month     = jan,
  booktitle = {International Conference on Learning Representations (ICLR)},
}

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]