arxiv:2004.04037

DynaBERT: Dynamic BERT with Adaptive Width and Depth

Published on Apr 8, 2020

Abstract

Pre-trained language models like BERT, though powerful in many natural language processing tasks, are both computation- and memory-expensive. To alleviate this problem, one approach is to compress them for specific tasks before deployment. However, recent works on BERT compression usually compress the large BERT model to a fixed smaller size and therefore cannot fully satisfy the requirements of edge devices with differing hardware capabilities. In this paper, we propose a novel dynamic BERT model (abbreviated as DynaBERT), which can flexibly adjust its size and latency by selecting an adaptive width and depth. The training process of DynaBERT consists of first training a width-adaptive BERT and then allowing both adaptive width and depth, by distilling knowledge from the full-sized model to small sub-networks. Network rewiring is also used so that the more important attention heads and neurons are shared by more sub-networks. Comprehensive experiments under various efficiency constraints demonstrate that our proposed dynamic BERT (or RoBERTa) at its largest size has performance comparable to BERT-base (or RoBERTa-base), while at smaller widths and depths it consistently outperforms existing BERT compression methods. Code is available at https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/DynaBERT.
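
The abstract describes the core mechanism: a single set of weights from which sub-networks of different widths (fewer attention heads, fewer feed-forward neurons) and depths (fewer layers) can be sliced, with each sub-network trained by distilling knowledge from the full-sized model. The snippet below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation (which lives in the linked repository); the toy TinyEncoder, the width_mult/depth_mult arguments, and the plain MSE logit-distillation loss are simplifying assumptions, and the actual DynaBERT also distills hidden representations and trains in two stages (width-adaptive first, then width- and depth-adaptive).

```python
# Illustrative sketch only (not the DynaBERT code): slicing width/depth
# sub-networks out of a tiny transformer encoder and distilling the
# full-sized model's logits into them.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoderLayer(nn.Module):
    def __init__(self, hidden=128, heads=4, ffn=512):
        super().__init__()
        self.heads = heads
        self.head_dim = hidden // heads
        self.qkv = nn.Linear(hidden, 3 * hidden)
        self.out = nn.Linear(hidden, hidden)
        self.ffn_in = nn.Linear(hidden, ffn)
        self.ffn_out = nn.Linear(ffn, hidden)
        self.norm1 = nn.LayerNorm(hidden)
        self.norm2 = nn.LayerNorm(hidden)

    def forward(self, x, width_mult=1.0):
        # Width-adaptive attention: keep only the first k heads (network
        # rewiring would have placed the most important heads first).
        k = max(1, int(self.heads * width_mult))
        b, t, _ = x.shape
        qkv = self.qkv(x).view(b, t, 3, self.heads, self.head_dim)
        q, key, v = (z[:, :, :k].transpose(1, 2) for z in (qkv[:, :, 0], qkv[:, :, 1], qkv[:, :, 2]))
        attn = torch.softmax(q @ key.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, t, k * self.head_dim)
        # Slice the output projection to match the reduced number of heads.
        ctx = F.linear(ctx, self.out.weight[:, : k * self.head_dim], self.out.bias)
        x = self.norm1(x + ctx)
        # Width-adaptive FFN: keep only the first fraction of neurons.
        n = max(1, int(self.ffn_in.out_features * width_mult))
        h1 = F.gelu(F.linear(x, self.ffn_in.weight[:n], self.ffn_in.bias[:n]))
        h2 = F.linear(h1, self.ffn_out.weight[:, :n], self.ffn_out.bias)
        return self.norm2(x + h2)


class TinyEncoder(nn.Module):
    def __init__(self, vocab=1000, hidden=128, layers=4, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.layers = nn.ModuleList([TinyEncoderLayer(hidden) for _ in range(layers)])
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, ids, width_mult=1.0, depth_mult=1.0):
        x = self.embed(ids)
        keep = max(1, int(len(self.layers) * depth_mult))
        for layer in self.layers[:keep]:       # depth-adaptive: drop trailing layers
            x = layer(x, width_mult)           # width-adaptive: drop heads / neurons
        return self.classifier(x.mean(dim=1))  # mean-pool, then classify


model = TinyEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
ids = torch.randint(0, 1000, (8, 16))          # toy batch of token ids

# Distill the full-sized model's logits into each (width, depth) sub-network.
with torch.no_grad():
    teacher_logits = model(ids, width_mult=1.0, depth_mult=1.0)

loss = 0.0
for w in (1.0, 0.75, 0.5):
    for d in (1.0, 0.75, 0.5):
        student_logits = model(ids, width_mult=w, depth_mult=d)
        loss = loss + F.mse_loss(student_logits, teacher_logits)
loss.backward()
opt.step()
```

Because all sub-networks share one set of parameters, the gradients from every (width, depth) configuration accumulate into the same weights before the optimizer step, which is what lets a single trained model later serve different latency budgets by simply choosing a smaller width and depth at inference time.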
