# BERT-mini model finetuned with M-FAC

This model is finetuned on the STS-B dataset with the state-of-the-art second-order optimizer M-FAC. See the NeurIPS 2021 paper for more details on M-FAC: [https://arxiv.org/pdf/2107.03356.pdf](https://arxiv.org/pdf/2107.03356.pdf).

## Finetuning setup

For a fair comparison against the default Adam baseline, we finetune the model in the same framework as described here [https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) and simply swap the Adam optimizer for M-FAC.

Hyperparameters used by the M-FAC optimizer:

```bash
learning rate = 1e-4
number of gradients = 512
dampening = 1e-6
```

## Results

We share the best model out of 5 runs, with the following scores on the STS-B validation set:

```bash
pearson = 85.03
spearman = 85.06
```

Mean and standard deviation over 5 runs on the STS-B validation set:

|       | Pearson      | Spearman     |
|:-----:|:------------:|:------------:|
| Adam  | 82.09 ± 0.54 | 82.64 ± 0.71 |
| M-FAC | 84.66 ± 0.30 | 84.65 ± 0.30 |

Results can be reproduced by adding the M-FAC optimizer code to [https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) and running the following bash script (a minimal sketch of the optimizer swap itself is given at the end of this card):

```bash
CUDA_VISIBLE_DEVICES=0 python run_glue.py \
    --seed 7 \
    --model_name_or_path prajjwal1/bert-mini \
    --task_name stsb \
    --do_train \
    --do_eval \
    --max_seq_length 128 \
    --per_device_train_batch_size 32 \
    --learning_rate 1e-4 \
    --num_train_epochs 5 \
    --output_dir out_dir/ \
    --optim MFAC \
    --optim_args '{"lr": 1e-4, "num_grads": 512, "damp": 1e-6}'
```

We believe these results could be improved with modest tuning of the hyperparameters `per_device_train_batch_size`, `learning_rate`, `num_train_epochs`, `num_grads`, and `damp`. For the sake of a fair comparison and a robust default setup, we use the same hyperparameters across all models (`bert-tiny`, `bert-mini`) and all datasets (SQuAD version 2 and GLUE).

Our code for M-FAC can be found here: [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC).

A step-by-step tutorial on how to integrate and use M-FAC with any repository can be found here: [https://github.com/IST-DASLab/M-FAC/tree/master/tutorials](https://github.com/IST-DASLab/M-FAC/tree/master/tutorials).

## BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-2107-03356,
  author     = {Elias Frantar and Eldar Kurtic and Dan Alistarh},
  title      = {Efficient Matrix-Free Approximations of Second-Order Information, with Applications to Pruning and Optimization},
  journal    = {CoRR},
  volume     = {abs/2107.03356},
  year       = {2021},
  url        = {https://arxiv.org/abs/2107.03356},
  eprinttype = {arXiv},
  eprint     = {2107.03356},
  timestamp  = {Tue, 20 Jul 2021 15:08:33 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2107-03356.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
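## Example: swapping in the M-FAC optimizer

For reference, here is a minimal sketch of the optimizer swap described above, using the Hugging Face `Trainer`. This is not the authors' exact integration: the import path `mfac_optim.MFAC` is hypothetical, and the optimizer class itself should be taken from [https://github.com/IST-DASLab/M-FAC](https://github.com/IST-DASLab/M-FAC) (see its tutorials for the exact location and signature). The hyperparameter names `lr`, `num_grads`, and `damp` follow the `--optim_args` used in the script above.

```python
from transformers import (
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

# Hypothetical import: copy the MFAC optimizer class from the M-FAC repository
# and adjust this path to wherever you place it.
from mfac_optim import MFAC

# STS-B is a regression task, hence num_labels=1.
model = AutoModelForSequenceClassification.from_pretrained(
    "prajjwal1/bert-mini", num_labels=1
)

# Same hyperparameters as in --optim_args above.
optimizer = MFAC(model.parameters(), lr=1e-4, num_grads=512, damp=1e-6)

training_args = TrainingArguments(
    output_dir="out_dir/",
    per_device_train_batch_size=32,
    num_train_epochs=5,
    seed=7,
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset / eval_dataset: tokenized STS-B splits, prepared as in run_glue.py
    optimizers=(optimizer, None),  # None: Trainer builds its default LR scheduler
)
trainer.train()
```

Passing the optimizer through the `optimizers` argument of `Trainer` avoids editing the training loop itself; `run_glue.py` can alternatively be patched to construct `MFAC` in place of the default optimizer, which is what the `--optim MFAC` flag in the script above refers to.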