An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods
Abstract
Despite the apparently human-level performance of deep neural networks (DNNs), they behave fundamentally differently from humans. They easily change their predictions when small corruptions such as blur and noise are applied to the input (lack of robustness), and they often produce confident predictions on out-of-distribution samples (improper uncertainty measures). While a number of studies have aimed to address these issues, the proposed solutions are typically expensive and complicated (e.g., Bayesian inference and adversarial training). Meanwhile, many simple and cheap regularization methods have been developed to enhance the generalization of classifiers. Such regularization methods have largely been overlooked as baselines for addressing the robustness and uncertainty issues, as they are not specifically designed for these purposes. In this paper, we provide extensive empirical evaluations of the robustness and uncertainty estimates of image classifiers (CIFAR-100 and ImageNet) trained with state-of-the-art regularization methods. Furthermore, our experimental results show that certain regularization methods can serve as strong baselines for the robustness and uncertainty estimation of DNNs.
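As a concrete illustration of the two evaluation axes the abstract describes, accuracy under input corruptions and confidence on out-of-distribution samples, the following is a minimal PyTorch sketch. It assumes a trained classifier `model` and standard `DataLoader`s (all hypothetical names); the Gaussian-noise corruption and the max-softmax confidence score are simplifying stand-ins for illustration, not the paper's exact evaluation protocol.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def corruption_accuracy(model, loader, noise_std=0.1, device="cpu"):
    """Accuracy under additive Gaussian noise, a simple stand-in for
    input corruptions such as blur and noise (robustness axis)."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        images = images.to(device)
        # Corrupt the inputs; a robust classifier keeps its predictions.
        noisy = images + noise_std * torch.randn_like(images)
        preds = model(noisy).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total

@torch.no_grad()
def mean_ood_confidence(model, ood_loader, device="cpu"):
    """Average max-softmax confidence on out-of-distribution inputs;
    lower values indicate better-behaved uncertainty estimates."""
    model.eval()
    confs = []
    for images, _ in ood_loader:
        probs = F.softmax(model(images.to(device)), dim=1)
        confs.append(probs.max(dim=1).values.mean().item())
    return sum(confs) / len(confs)
```

Under this sketch, comparing these two numbers across regularization methods (at fixed clean accuracy) is one simple way to realize the kind of empirical comparison the abstract reports.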