The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the official Google BERT repository. These BERT variants were introduced in the paper Well-Read Students Learn Better: On the Importance of Pre-training Compact Models. These models are intended to be fine-tuned on a downstream task.
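Since the checkpoints ship without a task head, a typical first step is to load one into a fine-tuning setup. Here is a minimal sketch, assuming the Hugging Face transformers library; prajjwal1/bert-medium, the sequence-classification head, and num_labels=2 are illustrative choices, not part of the original release.

```python
# A minimal sketch of loading one of these checkpoints for fine-tuning,
# assuming the Hugging Face transformers library is installed.
# "prajjwal1/bert-medium" and the 2-label head are illustrative choices;
# swap in any variant and task head you need.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("prajjwal1/bert-medium")
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-medium",
    num_labels=2,  # this head is randomly initialized until fine-tuned
)

inputs = tokenizer(
    "These models need fine-tuning on a downstream task.",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```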
If you use the model, please consider citing the paper:
@misc{bhargava2021generalization,
      title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
      author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
      year={2021},
      eprint={2110.01518},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
The original implementation and more information can be found in this GitHub repository.
You can check out the compact variants (see the sketch after this list for what L and H denote):
- prajjwal1/bert-tiny (L=2, H=128)
- prajjwal1/bert-mini (L=4, H=256)
- prajjwal1/bert-small (L=4, H=512)
- prajjwal1/bert-medium (L=8, H=512)
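L and H above are the transformer depth and hidden size. A quick sketch, again assuming the transformers library, to confirm them from each checkpoint's config without downloading any weights:

```python
# Sketch: L maps to num_hidden_layers and H to hidden_size in each
# checkpoint's config, which can be fetched on its own.
from transformers import AutoConfig

for name in ["prajjwal1/bert-tiny", "prajjwal1/bert-mini",
             "prajjwal1/bert-small", "prajjwal1/bert-medium"]:
    cfg = AutoConfig.from_pretrained(name)
    print(f"{name}: L={cfg.num_hidden_layers}, H={cfg.hidden_size}")
```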