AdamW
AdamW is a variant of the Adam optimizer that separates weight decay from the gradient update, based on the observation that the weight decay formulation is different when applied to SGD and Adam.
bitsandbytes also supports paged optimizers, which take advantage of CUDA's unified memory to transfer optimizer state from the GPU to the CPU when GPU memory is exhausted.
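All of the optimizers below are drop-in replacements for torch.optim.AdamW. The following is a minimal, illustrative sketch; the model, data, and hyperparameters are placeholders, and a CUDA-capable GPU is assumed since the 8-bit and paged variants rely on CUDA kernels.

```python
import torch
import bitsandbytes as bnb

# Placeholder model and data; any torch.nn.Module can be used the same way.
model = torch.nn.Linear(4096, 4096).cuda()
batch = torch.randn(8, 4096, device="cuda")

# Swap torch.optim.AdamW for a bitsandbytes variant; the constructor
# arguments mirror the parameters documented below.
optimizer = bnb.optim.AdamW8bit(model.parameters(), lr=1e-3, weight_decay=1e-2)

loss = model(batch).pow(2).mean()  # dummy loss for illustration
loss.backward()
optimizer.step()
optimizer.zero_grad()
```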
AdamW
class bitsandbytes.optim.AdamW
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) - Whether the optimizer is a paged optimizer or not.
Base AdamW optimizer.
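Since the base class exposes optim_bits, the precision of the optimizer state can also be selected directly instead of using the dedicated 8-bit or 32-bit classes below. A minimal sketch, assuming a CUDA GPU and a placeholder model; passing optim_bits=8 should behave like the dedicated AdamW8bit class.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()  # placeholder model

# Request 8-bit optimizer states through the base class.
optimizer = bnb.optim.AdamW(model.parameters(), lr=1e-3, optim_bits=8)
```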
AdamW8bit
class bitsandbytes.optim.AdamW8bit
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) - Whether the optimizer is a paged optimizer or not.
8-bit AdamW optimizer.
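To illustrate min_8bit_size, the sketch below mixes a small layer with a large one; parameter tensors with fewer elements than min_8bit_size keep 32-bit state, while larger tensors get 8-bit state. The layer sizes are arbitrary placeholders.

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Sequential(
    torch.nn.Linear(64, 32),    # 2,048-element weight: below min_8bit_size, kept in 32-bit
    torch.nn.Linear(32, 4096),  # 131,072-element weight: quantized to 8-bit state
).cuda()

optimizer = bnb.optim.AdamW8bit(
    model.parameters(),
    lr=1e-3,
    min_8bit_size=4096,       # default threshold for 8-bit optimization
    percentile_clipping=100,  # default; tracks the last 100 gradient norms for clipping
)
```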
AdamW32bit
class bitsandbytes.optim.AdamW32bit
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True, is_paged = False )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
- is_paged (bool, defaults to False) - Whether the optimizer is a paged optimizer or not.
32-bit AdamW optimizer.
PagedAdamW
class bitsandbytes.optim.PagedAdamW
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
Paged AdamW optimizer.
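The paged variant is used exactly like the non-paged one; optimizer state is moved between the GPU and CPU transparently via CUDA unified memory when GPU memory runs low, so no extra code is needed. A minimal sketch with a placeholder model:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()  # placeholder model

# Same constructor arguments as AdamW, minus is_paged; paging is automatic.
optimizer = bnb.optim.PagedAdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)

loss = model(torch.randn(8, 4096, device="cuda")).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```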
PagedAdamW8bit
class bitsandbytes.optim.PagedAdamW8bit
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
Paged 8-bit AdamW optimizer.
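When both savings are needed at once, 8-bit state and paging can be combined, which is often useful when fine-tuning large models under tight GPU memory. An illustrative sketch with a placeholder model:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(4096, 4096).cuda()  # placeholder model

# 8-bit optimizer states that can additionally be paged to CPU memory
# when GPU memory is exhausted.
optimizer = bnb.optim.PagedAdamW8bit(model.parameters(), lr=1e-3)
```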
PagedAdamW32bit
class bitsandbytes.optim.PagedAdamW32bit
__init__( params, lr = 0.001, betas = (0.9, 0.999), eps = 1e-08, weight_decay = 0.01, amsgrad = False, optim_bits = 32, args = None, min_8bit_size = 4096, percentile_clipping = 100, block_wise = True )
Parameters
- params (torch.tensor) - The input parameters to optimize.
- lr (float, defaults to 1e-3) - The learning rate.
- betas (tuple(float, float), defaults to (0.9, 0.999)) - The beta values are the decay rates of the first and second moments of the optimizer.
- eps (float, defaults to 1e-8) - The epsilon value prevents division by zero in the optimizer.
- weight_decay (float, defaults to 1e-2) - The weight decay value for the optimizer.
- amsgrad (bool, defaults to False) - Whether to use the AMSGrad variant of Adam, which uses the maximum of past squared gradients instead.
- optim_bits (int, defaults to 32) - The number of bits of the optimizer state.
- args (object, defaults to None) - An object with additional arguments.
- min_8bit_size (int, defaults to 4096) - The minimum number of elements of the parameter tensors for 8-bit optimization.
- percentile_clipping (int, defaults to 100) - Adapts the clipping threshold automatically by tracking the last 100 gradient norms and clipping the gradient at a certain percentile to improve stability.
- block_wise (bool, defaults to True) - Whether to independently quantize each block of tensors to reduce outlier effects and improve stability.
Paged 32-bit AdamW optimizer.