CompactBioBERT is a distilled version of the [BioBERT](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2?text=The+goal+of+life+is+%5BMASK%5D.) model, trained for 100k distillation steps with a total batch size of 192 on the PubMed dataset.

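
A minimal usage sketch with Hugging Face Transformers; note that the checkpoint id `nlpie/compact-biobert` is an assumption inferred from the sibling models linked below, not something stated in this README:

```python
from transformers import pipeline

# Hypothetical checkpoint id, inferred from the naming of the related
# nlpie/distil-biobert and nlpie/tiny-biobert models; adjust if it differs.
fill_mask = pipeline("fill-mask", model="nlpie/compact-biobert")

# The model is distilled from cased BioBERT, so it uses the [MASK] token.
print(fill_mask("Aspirin lowers the risk of [MASK] attacks."))
```
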
# Distillation Procedure

This model has the same overall architecture as [DistilBioBERT](https://huggingface.co/nlpie/distil-biobert), but combines the distillation approaches of DistilBioBERT and [TinyBioBERT](https://huggingface.co/nlpie/tiny-biobert). We use the same initialisation technique as DistilBioBERT and apply layer-to-layer distillation with three major components: MLM, layer, and output distillation.

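
Purely as an illustration of these three components, a combined objective might look like the sketch below. It assumes Hugging Face-style model outputs with `logits` and `hidden_states`, equal weighting of the terms, and a student-to-teacher hidden-state mapping of `i -> 2i`, none of which is specified by this README (the exact formulation is given in the paper):

```python
import torch.nn.functional as F

def distillation_loss(student_out, teacher_out, labels, temperature=2.0):
    """Illustrative combination of MLM, layer, and output distillation."""
    # 1) MLM loss on the gold masked-token labels
    #    (positions that are not masked carry the ignore label -100).
    vocab = student_out.logits.size(-1)
    mlm = F.cross_entropy(
        student_out.logits.view(-1, vocab), labels.view(-1), ignore_index=-100
    )

    # 2) Layer distillation: MSE between student hidden state i and teacher
    #    hidden state 2i (7 student states vs. 13 teacher states, embeddings
    #    included). This exact mapping is an assumption of the sketch.
    s_hidden, t_hidden = student_out.hidden_states, teacher_out.hidden_states
    layer = sum(
        F.mse_loss(s, t_hidden[2 * i]) for i, s in enumerate(s_hidden)
    ) / len(s_hidden)

    # 3) Output distillation: KL divergence between temperature-softened
    #    student and teacher output distributions.
    out = F.kl_div(
        F.log_softmax(student_out.logits / temperature, dim=-1),
        F.softmax(teacher_out.logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    # Equal weighting here is illustrative only.
    return mlm + layer + out
```
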
# Initialisation

Following [DistilBERT](https://huggingface.co/distilbert-base-uncased?text=The+goal+of+life+is+%5BMASK%5D.), we initialise the student model by taking weights from every other layer of the teacher.

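
A sketch of this initialisation, assuming a DistilBERT-style "one layer out of two" mapping (exactly which teacher layers are kept is an assumption of this example, not something this README states):

```python
from transformers import AutoModelForMaskedLM, BertConfig, BertForMaskedLM

# Teacher: the original 12-layer BioBERT.
teacher = AutoModelForMaskedLM.from_pretrained("dmis-lab/biobert-base-cased-v1.2")

# Student: a 6-layer model with the configuration from the Architecture
# section below, still randomly initialised at this point.
student = BertForMaskedLM(BertConfig(num_hidden_layers=6, vocab_size=28996))

# Copy the embeddings and the MLM head verbatim.
student.bert.embeddings.load_state_dict(teacher.bert.embeddings.state_dict())
student.cls.load_state_dict(teacher.cls.state_dict())

# Take every other transformer layer: student layer i receives the weights
# of teacher layer 2i + 1 (0-indexed), i.e. teacher layers 1, 3, 5, 7, 9, 11.
for i, layer in enumerate(student.bert.encoder.layer):
    layer.load_state_dict(teacher.bert.encoder.layer[2 * i + 1].state_dict())
```
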
# Architecture

In this model, the hidden dimension and the embedding size are both set to 768. The vocabulary size is 28996, the number of transformer layers is 6, and the expansion rate of the feed-forward layer is 4. Overall, this model has around 65 million parameters.

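
These numbers expressed as a Hugging Face `BertConfig` (the attention-head count is not stated above; 12, the BERT-base default, is an assumption):

```python
from transformers import BertConfig, BertForMaskedLM

config = BertConfig(
    hidden_size=768,            # hidden and embedding dimension
    num_hidden_layers=6,        # transformer layers
    num_attention_heads=12,     # assumption: BERT-base default
    intermediate_size=4 * 768,  # feed-forward expansion rate of 4
    vocab_size=28996,
)

model = BertForMaskedLM(config)
print(sum(p.numel() for p in model.parameters()))
# Prints roughly 66M parameters including the MLM head, in line with
# the "around 65 million" figure above.
```
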
# Citation

If you use this model, please consider citing the following paper: