---
language:
- en
license:
- mit
tags:
- BERT
- MNLI
- NLI
- transformer
- pre-training
---

The following model is a PyTorch pre-trained model obtained by converting the TensorFlow checkpoint found in the [official Google BERT repository](https://github.com/google-research/bert).

This is one of the smaller pre-trained BERT variants, together with [bert-mini](https://huggingface.co/prajjwal1/bert-mini), [bert-small](https://huggingface.co/prajjwal1/bert-small) and [bert-medium](https://huggingface.co/prajjwal1/bert-medium). They were introduced in the study `Well-Read Students Learn Better: On the Importance of Pre-training Compact Models` ([arXiv](https://arxiv.org/abs/1908.08962)) and ported to HF for the study `Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics` ([arXiv](https://arxiv.org/abs/2110.01518)). These models are intended to be fine-tuned on a downstream task.
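
As a minimal sketch, one of these checkpoints can be loaded with the Hugging Face `transformers` library and given a fresh classification head for NLI fine-tuning; the model ID `prajjwal1/bert-tiny` below is only an illustrative assumption, so substitute the ID of the checkpoint this card describes.

```python
# Minimal sketch: load a compact BERT checkpoint and attach a
# sequence-classification head for NLI fine-tuning.
# "prajjwal1/bert-tiny" is an assumed model ID; replace it with the
# checkpoint this card actually describes.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "prajjwal1/bert-tiny"  # assumption
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)  # 3 NLI labels

# Encode a premise/hypothesis pair the way BERT expects for sentence-pair tasks.
inputs = tokenizer(
    "A soccer game with multiple males playing.",
    "Some men are playing a sport.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits  # the head is randomly initialised until fine-tuned
print(logits.shape)  # torch.Size([1, 3])
```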

If you use the model, please consider citing both of the papers:

```
@misc{bhargava2021generalization,
      title={Generalization in NLI: Ways (Not) To Go Beyond Simple Heuristics},
      author={Prajjwal Bhargava and Aleksandr Drozd and Anna Rogers},
      year={2021},
      eprint={2110.01518},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

@misc{turc2019wellread,
      title={Well-Read Students Learn Better: On the Importance of Pre-training Compact Models},
      author={Iulia Turc and Ming-Wei Chang and Kenton Lee and Kristina Toutanova},
      year={2019},
      eprint={1908.08962},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```