---
language:
- "eng"
thumbnail: "url to a thumbnail used in social sharing"
tags:
- "pytorch"
- "tensorflow"
license: "apache-2.0"
---

# vBERT-2021-BASE

### Model Info:
<ul>
<li> Authors: R&D AI Lab, VMware Inc.</li>
<li> Model date: April, 2022</li>
<li> Model version: 2021-base</li>
<li> Model type: Pretrained language model</li>
<li> License: Apache 2.0</li>
</ul>

+
#### Motivation
|
25 |
+
Traditional BERT models struggle with VMware-specific words (Tanzu, vSphere, etc.), technical terms, and compound words. (<a href =https://medium.com/@rickbattle/weaknesses-of-wordpiece-tokenization-eb20e37fec99>Weaknesses of WordPiece Tokenization</a>)
|
26 |
+
|
27 |
+
We have created our vBERT model to address the aforementioned issues. We have replaced the first 1k unused tokens of BERT's vocabulary with VMware-specific terms to create a modified vocabulary. We then pretrained the 'bert-base-uncased' model for additional 78K steps (71k With MSL_128 and 7k with MSL_512) (approximately 5 epochs) on VMware domain data.
|
28 |
+
|
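
As a rough illustration of what the modified vocabulary is meant to achieve: the stock 'bert-base-uncased' tokenizer tends to fragment VMware-specific terms into several word pieces, while the vBERT tokenizer is intended to keep them intact. The term below is only an example; the actual splits depend on the released vocabulary.

```python
from transformers import BertTokenizer

# Compare the stock BERT tokenizer with the vBERT tokenizer (modified vocabulary).
base_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
vbert_tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')

term = "vSphere"  # example VMware-specific term
print(base_tokenizer.tokenize(term))   # stock WordPiece typically splits this into several pieces
print(vbert_tokenizer.tokenize(term))  # the modified vocabulary is intended to keep such terms whole
```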

#### Intended Use
The model functions as a VMware-specific language model.

#### How to Use
Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')
model = BertModel.from_pretrained('VMware/vbert-2021-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```

and in TensorFlow:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')
model = TFBertModel.from_pretrained('VMware/vbert-2021-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```
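
The snippets above return token-level hidden states. If a single sentence-level vector is needed, one common approach (an illustrative choice, not an official recommendation) is to mean-pool the last hidden state over non-padding tokens, sketched here in PyTorch:

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('VMware/vbert-2021-base')
model = BertModel.from_pretrained('VMware/vbert-2021-base')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    output = model(**encoded_input)

# Mean-pool over real (non-padding) tokens to obtain one vector per sentence.
mask = encoded_input['attention_mask'].unsqueeze(-1).float()
sentence_embedding = (output.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)  # (1, 768) for this base-sized model
```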

### Training

#### - Datasets
Publicly available VMware text data, such as VMware Docs and Blogs, was used to create the pretraining corpus (~320,000 documents, sourced in May 2021).
#### - Preprocessing
The corpus was cleaned with the following steps (a rough, illustrative sketch of such a pipeline follows the list):
<ul>
<li>Decoding HTML</li>
<li>Decoding Unicode</li>
<li>Stripping repeated characters</li>
<li>Splitting compound words</li>
<li>Spelling correction</li>
</ul>

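
The exact preprocessing code is an internal tool (see Limitations below). Purely as an illustration of the kind of cleaning described above, a minimal sketch using only the Python standard library might look like the following; compound-word splitting and spelling correction are omitted because they rely on internal, domain-specific resources.

```python
import html
import re
import unicodedata

def clean_text(raw: str) -> str:
    """Rough sketch of the cleaning steps above (not the internal vNLP Preprocessor)."""
    text = html.unescape(raw)                     # decode HTML entities, e.g. '&amp;' -> '&'
    text = re.sub(r'<[^>]+>', ' ', text)          # drop leftover HTML tags
    text = unicodedata.normalize('NFKC', text)    # normalize Unicode representations
    text = re.sub(r'(.)\1{3,}', r'\1\1\1', text)  # cap long runs of repeated characters
    return re.sub(r'\s+', ' ', text).strip()      # collapse whitespace

print(clean_text("<p>vSphere&nbsp;is&nbsp;grrrrrrreat</p>"))
```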

#### - Model performance measures
We benchmarked vBERT on various VMware-specific NLP downstream tasks (IR, classification, etc.). The model scored higher than the 'bert-base-uncased' model on all benchmarks.
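
The benchmark datasets themselves are internal. As a generic, hypothetical illustration of how this checkpoint can be fine-tuned for one such downstream classification task (labels, data, and hyperparameters below are placeholders, not from our benchmarks):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the checkpoint with a freshly initialized classification head.
tokenizer = AutoTokenizer.from_pretrained('VMware/vbert-2021-base')
model = AutoModelForSequenceClassification.from_pretrained('VMware/vbert-2021-base', num_labels=2)

# Placeholder example; a real fine-tuning run would iterate over a labeled dataset.
batch = tokenizer(["Example VMware support ticket text"], return_tensors='pt', truncation=True)
labels = torch.tensor([1])

outputs = model(**batch, labels=labels)  # SequenceClassifierOutput with .loss and .logits
outputs.loss.backward()                  # gradients for one illustrative training step
```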

### Limitations and bias
Since the model is further pretrained from the original BERT checkpoint, it may carry the same biases embedded in the original BERT model.

The data needs to be preprocessed using our internal vNLP Preprocessor (not available to the public) to maximize the model's performance.