ankitcodes committed
Commit be63a2f
1 Parent(s): 779ad59

Update README.md

Files changed (1)
  1. README.md +0 -45
README.md CHANGED

@@ -46,21 +46,7 @@ language:
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # Personal Identifiable Information (PII Model)

- This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the generator dataset.
- It achieves the following results:
-
- - Training Loss: 0.003900
- - Validation Loss: 0.051071
- - Precision: 95.53%
- - Recall: 96.60%
- - F1: 96%
- - Accuracy: 99.11%
-
- ## Model description
-
- Meet our digital safeguard, a savvy token classification model with a knack for spotting personally identifiable information (PII) entities. Trained on the illustrious Bert architecture and fine-tuned on a custom dataset, this model is like a superhero for privacy, swiftly detecting names, addresses, dates of birth, and more. With each token it encounters, it acts as a vigilant guardian, ensuring that sensitive information remains shielded from prying eyes, making the digital realm a safer and more secure place to explore.

  ## Model can Detect Following Entity Group
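The description removed in the hunk above documents a BERT-based token-classification model for detecting PII entities. For orientation, here is a minimal usage sketch with the Transformers `pipeline` API; the checkpoint name `ankitcodes/pii-model` is a hypothetical placeholder, as the actual repo id is not stated in this commit:

```python
# Minimal sketch (not from this commit): running a PII token-classification
# checkpoint with the Hugging Face Transformers pipeline API.
from transformers import pipeline

pii_detector = pipeline(
    "token-classification",
    model="ankitcodes/pii-model",    # hypothetical repo id; replace with the real checkpoint
    aggregation_strategy="simple",   # merge word-piece tokens into whole entity spans
)

text = "John Doe was born on 1990-05-14 and lives at 42 Baker Street, London."
for entity in pii_detector(text):
    # Each prediction carries the entity group, the matched text, and a confidence score.
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```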
 
@@ -99,42 +85,11 @@ Meet our digital safeguard, a savvy token classification model with a knack for

- ### Training hyperparameters
- The following hyperparameters were used during training:
-
- | Hyperparameter            | Value |
- |---------------------------|-------|
- | Learning Rate             | 5e-5  |
- | Train Batch Size          | 16    |
- | Eval Batch Size           | 16    |
- | Number of Training Epochs | 7     |
- | Weight Decay              | 0.01  |
- | Save Strategy             | Epoch |
- | Load Best Model at End    | True  |
- | Metric for Best Model     | F1    |
- | Push to Hub               | True  |
- | Evaluation Strategy       | Epoch |
- | Early Stopping Patience   | 3     |
-
-
- ### Training results
-
- | Epoch | Training Loss | Validation Loss | Precision (%) | Recall (%) | F1 Score (%) | Accuracy (%) |
- |-------|---------------|-----------------|---------------|------------|--------------|--------------|
- | 1     | 0.0443        | 0.038108        | 91.88         | 95.17      | 93.50        | 98.80        |
- | 2     | 0.0318        | 0.035728        | 94.13         | 96.15      | 95.13        | 98.90        |
- | 3     | 0.0209        | 0.032016        | 94.81         | 96.42      | 95.61        | 99.01        |
- | 4     | 0.0154        | 0.040221        | 93.87         | 95.80      | 94.82        | 98.88        |
- | 5     | 0.0084        | 0.048183        | 94.21         | 96.06      | 95.13        | 98.93        |
- | 6     | 0.0037        | 0.052281        | 94.49         | 96.60      | 95.53        | 99.07        |
-

- ### Author

  ### Framework versions
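The hyperparameters removed in the hunk above map directly onto a Transformers `TrainingArguments` configuration plus an early-stopping callback. The sketch below is an assumption about how such a setup is typically written, not code taken from this repository; the output directory is a placeholder, and the model, datasets, and metric function are not shown:

```python
# Sketch (assumed, not from this commit): the removed hyperparameter table
# expressed as a Hugging Face TrainingArguments configuration.
from transformers import TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="pii-bert",              # placeholder output directory
    learning_rate=5e-5,                 # Learning Rate
    per_device_train_batch_size=16,     # Train Batch Size
    per_device_eval_batch_size=16,      # Eval Batch Size
    num_train_epochs=7,                 # Number of Training Epochs
    weight_decay=0.01,                  # Weight Decay
    save_strategy="epoch",              # Save Strategy
    evaluation_strategy="epoch",        # Evaluation Strategy (newer releases name this `eval_strategy`)
    load_best_model_at_end=True,        # Load Best Model at End
    metric_for_best_model="f1",         # Metric for Best Model
    push_to_hub=True,                   # Push to Hub
)

# Early Stopping Patience = 3; passed to Trainer via `callbacks=[early_stopping]`
# alongside the model, tokenized datasets, and a compute_metrics function.
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```

With this configuration the Trainer evaluates and saves once per epoch and reloads the checkpoint with the best F1 score once training (or early stopping) finishes.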
 
 