Simple is Better and Large is Not Enough: Towards Ensembling of Foundational Language Models
Abstract
Foundational Language Models (FLMs) have advanced natural language processing (NLP) research. Researchers are currently developing ever-larger FLMs (e.g., XLNet, T5) to enable contextualized language representation, classification, and generation. While developing larger FLMs has brought significant advantages, it is also a liability with respect to hallucination and predictive uncertainty. Fundamentally, larger FLMs are built on the same foundations as smaller FLMs (e.g., BERT); hence, one must recognize the potential of smaller FLMs, which can be realized through an ensemble. In this work, we perform a reality check on FLMs and their ensembles on benchmark and real-world datasets. We hypothesize that ensembling FLMs can influence the individualistic attention of each FLM and unravel the strength of coordination and cooperation among different FLMs. We utilize BERT and define three ensemble techniques: Shallow, Semi, and Deep, wherein the Deep-Ensemble introduces a knowledge-guided reinforcement learning approach. We find that the proposed Deep-Ensemble BERT outperforms its larger variant, BERT-large, by a factor of many times on datasets that demonstrate the usefulness of NLP in sensitive fields such as mental health.
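For intuition, the sketch below shows a shallow ensemble of BERT-family classifiers that averages class probabilities across members, assuming the Hugging Face transformers library. The checkpoint names and the averaging scheme are illustrative assumptions; the paper's Shallow, Semi, and Deep ensembles (including the knowledge-guided reinforcement learning component of the Deep-Ensemble) are not reproduced here.

```python
# Minimal sketch of a shallow (probability-averaging) ensemble of BERT-family classifiers.
# Checkpoints and averaging scheme are illustrative assumptions, not the paper's method.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoints; in practice, substitute models fine-tuned on the target task.
CHECKPOINTS = ["bert-base-uncased", "distilbert-base-uncased"]

tokenizers = [AutoTokenizer.from_pretrained(c) for c in CHECKPOINTS]
models = [
    AutoModelForSequenceClassification.from_pretrained(c, num_labels=2).eval()
    for c in CHECKPOINTS
]

def shallow_ensemble_predict(text: str) -> int:
    """Average class probabilities across ensemble members and return the argmax label."""
    probs = []
    with torch.no_grad():
        for tok, model in zip(tokenizers, models):
            inputs = tok(text, return_tensors="pt", truncation=True)
            logits = model(**inputs).logits
            probs.append(torch.softmax(logits, dim=-1))
    avg_probs = torch.stack(probs).mean(dim=0)
    return int(avg_probs.argmax(dim=-1).item())

print(shallow_ensemble_predict("I have been feeling anxious and unable to sleep."))
```

Probability averaging is only one shallow combination rule; the Semi and Deep variants described in the paper combine members at intermediate or learned levels rather than at the output layer.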