arxiv:1511.04561

8-Bit Approximations for Parallelism in Deep Learning

Published on Nov 14, 2015
Authors: Tim Dettmers

Abstract

The creation of practical deep learning data products often requires parallelization across processors and computers to make deep learning feasible on large data sets, but bottlenecks in communication bandwidth make it difficult to attain good speedups through parallelism. Here we develop and test 8-bit approximation algorithms which make better use of the available bandwidth by compressing 32-bit gradients and nonlinear activations to 8-bit approximations. We show that these approximations do not decrease predictive performance on MNIST, CIFAR10, and ImageNet for both model and data parallelism and provide a data transfer speedup of 2x relative to 32-bit parallelism. We build a predictive model for speedups based on our experimental data, verify its validity on known speedup data, and show that we can obtain a speedup of 50x and more on a system of 96 GPUs compared to a speedup of 23x for 32-bit. We compare our data types with other methods and show that 8-bit approximations achieve state-of-the-art speedups for model parallelism. Thus 8-bit approximation is an efficient method to parallelize convolutional networks on very large systems of GPUs.
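The abstract describes compressing 32-bit gradients and activations to 8-bit approximations before they are communicated between GPUs. The paper evaluates several 8-bit data types; the sketch below instead uses a simple symmetric per-tensor linear (min–max) quantizer, which is an assumption chosen for illustration rather than the paper's exact scheme, and the function names (`quantize_8bit`, `dequantize_8bit`) are hypothetical.

```python
import numpy as np

def quantize_8bit(grad: np.ndarray):
    """Compress a float32 gradient tensor to int8 codes plus a scale.

    Symmetric linear scheme: map [-max|g|, +max|g|] onto [-127, 127].
    Illustrative only; the paper compares richer 8-bit data types.
    """
    scale = float(np.max(np.abs(grad))) / 127.0
    if scale == 0.0:
        scale = 1.0  # avoid division by zero for an all-zero gradient
    codes = np.clip(np.round(grad / scale), -127, 127).astype(np.int8)
    return codes, np.float32(scale)

def dequantize_8bit(codes: np.ndarray, scale: np.float32) -> np.ndarray:
    """Recover an approximate float32 tensor from the 8-bit codes."""
    return codes.astype(np.float32) * scale

# A worker compresses its gradient before sending it over the network
# (4x fewer bytes than float32); the receiver reconstructs an approximation.
g = np.random.randn(1024).astype(np.float32) * 0.01
codes, scale = quantize_8bit(g)
g_hat = dequantize_8bit(codes, scale)
print(np.max(np.abs(g - g_hat)))  # worst-case error is about scale / 2
```

Because each element shrinks from 32 bits to 8 bits (plus one scale per tensor), the bytes transferred drop by roughly 4x, which is what allows the communication bandwidth to be used more effectively during parallel training.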

