st184800 | Hi, @dskhudia, thank you for your response. I can give some reproduction steps; I will try to put together a more detailed reproducible example over the weekend.
1: Download the imagenet1k dataset.
2: pip install torchvision==0.5.0; this will upgrade torch to 1.4.0.
3: Run the script with the command
python -m torch.distributed.launch --nproc_per_node=8 --use_env train_quant.py --data-path=./imagenet_1k
The train_quant.py script is borrowed from torchvision reference code.
Summary
from __future__ import print_function
import datetime
import os
import time
import sys
import copy
import torch
import torch.utils.data
from torch import nn
import torchvision
import torch.quantization
import train_utils as utils
from train import train_one_epoch, evaluate, load_data
def main(args):
    if args.output_dir:
        utils.mkdir(args.output_dir)

    utils.init_distributed_mode(args)
    print(args)

    if args.post_training_quantize and args.distributed:
        raise RuntimeError("Post training quantization example should not be performed "
                           "on distributed mode")

    # Set backend engine to ensure that quantized model runs on the correct kernels
    if args.backend not in torch.backends.quantized.supported_engines:
        raise RuntimeError("Quantized backend not supported: " + str(args.backend))
    torch.backends.quantized.engine = args.backend

    device = torch.device(args.device)
    torch.backends.cudnn.benchmark = True

    # Data loading code
    print("Loading data")
    train_dir = os.path.join(args.data_path, 'train')
    val_dir = os.path.join(args.data_path, 'val')

    dataset, dataset_test, train_sampler, test_sampler = load_data(train_dir, val_dir,
                                                                   args.cache_dataset, args.distributed)
    data_loader = torch.utils.data.DataLoader(
        dataset, batch_size=args.batch_size,
        sampler=train_sampler, num_workers=args.workers, pin_memory=True)

    data_loader_test = torch.utils.data.DataLoader(
        dataset_test, batch_size=args.eval_batch_size,
        sampler=test_sampler, num_workers=args.workers, pin_memory=True)

    print("Creating model", args.model)
    # when training quantized models, we always start from a pre-trained fp32 reference model
    model = torchvision.models.quantization.__dict__[args.model](pretrained=True, quantize=args.test_only)
    model.to(device)

    if not (args.test_only or args.post_training_quantize):
        model.fuse_model()
        model.qconfig = torch.quantization.get_default_qat_qconfig(args.backend)
        torch.quantization.prepare_qat(model, inplace=True)

    optimizer = torch.optim.SGD(
        model.parameters(), lr=args.lr, momentum=args.momentum,
        weight_decay=args.weight_decay)

    lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer,
                                                   step_size=args.lr_step_size,
                                                   gamma=args.lr_gamma)

    criterion = nn.CrossEntropyLoss()

    model_without_ddp = model
    if args.distributed:
        model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu])
        model_without_ddp = model.module
        print(model.module)

    model.apply(torch.quantization.enable_observer)
    model.apply(torch.quantization.enable_fake_quant)
    start_time = time.time()
    for epoch in range(args.start_epoch, args.epochs):
        if args.distributed:
            train_sampler.set_epoch(epoch)
        print('Starting training for epoch', epoch)
        train_one_epoch(model, criterion, optimizer, data_loader, device, epoch,
                        args.print_freq)
        lr_scheduler.step()
        with torch.no_grad():
            if epoch >= args.num_observer_update_epochs:
                print('Disabling observer for subseq epochs, epoch = ', epoch)
                model.apply(torch.quantization.disable_observer)
            if epoch >= args.num_batch_norm_update_epochs:
                print('Freezing BN for subseq epochs, epoch = ', epoch)
                model.apply(torch.nn.intrinsic.qat.freeze_bn_stats)
            print('Evaluate QAT model')
            evaluate(model, criterion, data_loader_test, device=device)
            quantized_eval_model = copy.deepcopy(model)
            quantized_eval_model.eval()
            quantized_eval_model.to(torch.device('cpu'))
            torch.quantization.convert(quantized_eval_model, inplace=True)
            print('Evaluate Quantized model')
            evaluate(quantized_eval_model, criterion, data_loader_test,
                     device=torch.device('cpu'))
        model.train()
        print('Saving models after epoch ', epoch)

    total_time = time.time() - start_time
    total_time_str = str(datetime.timedelta(seconds=int(total_time)))
    print('Training time {}'.format(total_time_str))
def parse_args():
    import argparse
    parser = argparse.ArgumentParser(description='PyTorch Classification Training')

    parser.add_argument('--data-path',
                        default='/datasets01/imagenet_full_size/061417/',
                        help='dataset')
    parser.add_argument('--model',
                        default='mobilenet_v2',
                        help='model')
    parser.add_argument('--backend',
                        default='qnnpack',
                        help='fbgemm or qnnpack')
    parser.add_argument('--device',
                        default='cuda',
                        help='device')
    parser.add_argument('-b', '--batch-size', default=32, type=int,
                        help='batch size for calibration/training')
    parser.add_argument('--eval-batch-size', default=128, type=int,
                        help='batch size for evaluation')
    parser.add_argument('--epochs', default=90, type=int, metavar='N',
                        help='number of total epochs to run')
    parser.add_argument('--num-observer-update-epochs',
                        default=4, type=int, metavar='N',
                        help='number of total epochs to update observers')
    parser.add_argument('--num-batch-norm-update-epochs', default=3,
                        type=int, metavar='N',
                        help='number of total epochs to update batch norm stats')
    parser.add_argument('--num-calibration-batches',
                        default=32, type=int, metavar='N',
                        help='number of batches of training set for \
                              observer calibration ')
    parser.add_argument('-j', '--workers', default=16, type=int, metavar='N',
                        help='number of data loading workers (default: 16)')
    parser.add_argument('--lr',
                        default=0.0001, type=float,
                        help='initial learning rate')
    parser.add_argument('--momentum',
                        default=0.9, type=float, metavar='M',
                        help='momentum')
    parser.add_argument('--wd', '--weight-decay', default=1e-4, type=float,
                        metavar='W', help='weight decay (default: 1e-4)',
                        dest='weight_decay')
    parser.add_argument('--lr-step-size', default=30, type=int,
                        help='decrease lr every step-size epochs')
    parser.add_argument('--lr-gamma', default=0.1, type=float,
                        help='decrease lr by a factor of lr-gamma')
    parser.add_argument('--print-freq', default=10, type=int,
                        help='print frequency')
    parser.add_argument('--output-dir', default='.', help='path where to save')
    parser.add_argument('--resume', default='', help='resume from checkpoint')
    parser.add_argument('--start-epoch', default=0, type=int, metavar='N',
                        help='start epoch')
    parser.add_argument(
        "--cache-dataset",
        dest="cache_dataset",
        help="Cache the datasets for quicker initialization. \
              It also serializes the transforms",
        action="store_true",
    )
    parser.add_argument(
        "--test-only",
        dest="test_only",
        help="Only test the model",
        action="store_true",
    )
    parser.add_argument(
        "--post-training-quantize",
        dest="post_training_quantize",
        help="Post training quantize the model",
        action="store_true",
    )

    # distributed training parameters
    parser.add_argument('--world-size', default=1, type=int,
                        help='number of distributed processes')
    parser.add_argument('--dist-url',
                        default='env://',
                        help='url used to set up distributed training')

    args = parser.parse_args()
    return args


if __name__ == "__main__":
    args = parse_args()
    main(args) |
st184801 | The torchvision reference script (train_quantization.py) has not been tested for multi-GPU support yet. Recently, we landed fixes to the code that should solve this issue:
github.com/pytorch/pytorch
PR: [quant] Regsiter fake_quant and observer attributes as buffers (pytorch:gh/supriyar/56/base ← pytorch:gh/supriyar/56/head, opened Feb 21, 2020 by supriyar, +18 -13)
Note that syncBN is not yet supported for quantization aware training. |
st184802 | I gave the changed files a try, but the bug remains:
dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: Tensors must be CUDA and dense |
st184803 | Creating a github issue to track this problem.
github.com/pytorch/pytorch
Issue: Broadcasting does not work for Quantization aware training with multiple GPUs (opened Apr 25, 2020 by raghuramank100; labels: quantization, triaged)
Repro code and error info are at:
https://discuss.pytorch.org/t/quantization-awareness-training-multi-gpu-suport/66106
Snippet of error at:
Traceback (most recent call last):
File "train_quantization.py", line 258, in
main(args)
File "train_quantization.py", line 77,... |
st184804 | You can try setting default values for the scale and zero point, because a None tensor cannot be broadcast. |
st184805 | I have tried to set default values of scale, zero point, quant_min and quant_max, and I could see the same error:
“dist._broadcast_coalesced(self.process_group, tensors, buffer_size)
RuntimeError: Tensors must be CUDA and dense”. |
st184806 | scale = torch.FloatTensor([1])
zero_point = torch.FloatTensor([0])
min_val = torch.FloatTensor([0])
max_val = torch.FloatTensor([255]) |
st184807 | We have now added multi-GPU support for Quantization aware training in the nightly build, let us know if you see any issues |
st184808 | Get the information “There is still work to do on verifying that BN is working correctly in
QAT + DDP, but saving that for a separate PR.” from https://github.com/pytorch/vision/pull/2230 8.
Could you provide the PR for tracking? Thanks. |
st184809 | the PR to make BN work correctly with QAT+DDP is here: https://github.com/pytorch/pytorch/pull/38478 18 . This enables SyncBatchNorm to be swapped in to a fused QAT Conv-BN. I will update the issue. There were also a couple of bug fixes landed, such as https://github.com/pytorch/pytorch/pull/38368 6 |
st184810 | Hello Everyone,
I have fine-tuned a bert-base model with amp and without amp using MAX_SEQ_LEN=512. I compared the performance between these models in terms of:
Fine-tuning time
Inference time on CPU/GPU
Model size
In the first experiment, I observed that in terms of fine-tuning time, the bert model with amp performs better than the one without amp.
However, when I compare the inference time and model size, both models have the same inference time and model size.
Could anyone please explain why this is the case? |
st184811 | The model size regarding its parameters won't change, as the operations (and intermediates) will be cast to FP16 for "safe" ops.
So while you might be able to increase the batch size during training or inference, the state_dict won’t be smaller in size.
Which batch size are you using for inference? If you are seeing a speedup during training, you should also see it during inference. However, if your batch size is low (e.g. a single sample), the performance gain might be too small compared to the overheads of launching all kernels. |
st184812 | @ptrblck thanks for your answer.
I have tried using different batch sizes, e.g. 8, 16, 64, 128, but I am not seeing any difference.
Regarding the code, I am following examples here: https://pytorch.org/docs/stable/notes/amp_examples.html 3
Update: I am able to see the difference in inference time on GPU using
with autocast():
with torch.no_grad():
outputs = model(**inputs)
But when I compare the inference time on CPU, I do not notice any difference. |
st184813 | Ramesh_Kumar:
But when I compare the inference time on CPU, I do not notice any difference.
Automatic mixed precision is implemented for CUDA operations (and is thus in the torch.cuda namespace). By applying amp your GPU could use TensorCores for certain operations, which would yield a speedup. I don't know if anything like that is implemented for CPU operations (and, if I'm not mistaken, not all operations are implemented for HalfTensors on the CPU). |
st184814 | Thanks for your response. No, I am not using dynamic quantization. But since I cannot do mixed precision on CPU, I guess I have to switch to dynamic quantization for CPU. Am I right? |
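For reference, a minimal sketch of dynamic quantization for CPU inference (assuming a float model is already loaded; the layer set and dtype here are illustrative, not taken from this thread):
import torch
quantized_model = torch.quantization.quantize_dynamic(
    model,              # float model
    {torch.nn.Linear},  # module types to quantize dynamically
    dtype=torch.qint8,  # int8 weights, activations quantized on the fly
)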
st184815 | So do you think post-training quantization is a better idea if we want to reduce inference time on CPU? |
st184816 | I’ve unfortunately never profiled the quantized models, so unsure what the expected speedup is.
However, please let us know, once you are using the post-training quantized models, how large the performance gain is. |
st184817 | @ptrblck just to give you an update regarding dynamic quantization on CPU: there is an issue with the quantized bert model which I and many others are facing. Here is the github issue link: https://github.com/huggingface/transformers/issues/2542 |
st184818 | Hi
I am trying to quantize a text detection model based on Mobilenet (model definition here).
After inserting the quant and dequant stubs, fusing all the conv+bn+relu and conv+relu, and replacing cat with skip_add.cat(), I perform static quantization (script: https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_quantization.py).
After performing quantization, the model size doesn't go down (in fact it increases).
Original Size:
Size (MB): 6.623636
Fused model Size:
Size (MB): 6.638188
Quantized model Size:
Size (MB): 7.928258
I have even printed the final quantized model here 6
I changed the qconfig to fused_model.qconfig = torch.quantization.default_qconfig but still quantized_model size is Size (MB): 6.715115
Why doesn’t the model size reduce ? |
st184819 | Solved by Vasiliy_Kuznetsov in post #10
Hi Raghav,
For post training quantization, we want the model to be in eval mode (see https://github.com/pytorch/pytorch/blob/530d48e93a3f04a5ec63a1b789c19a5f775bf497/torch/quantization/fuse_modules.py#L63). So, you can add a model.eval() call before you fuse modules:
model.eval()
torch.quantizati… |
st184820 | Looking at the model def you posted, it looks like it is not yet quantized. One missing thing is calibration. You can add a calibration step after you call prepare and before you call convert:
torch.quantization.prepare(fused_model, inplace=True)
# calibrate your model by feeding it example inputs
for inputs in your_dataset:
    fused_model(inputs)
print('Quantized model Size:')
quantized = torch.quantization.convert(fused_model, inplace=False)
print_size_of_model(quantized) |
st184821 | Hi @Vasiliy_Kuznetsov
Thank you for your input, I have updated my script to pass in a few images into the fused model as inputs for calibration.
Please see the updated script here
https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_quantization.py (embedded file preview truncated)
But still the quantized model size is bigger than the original model -
Original Size:
Size (MB): 6.623636
Fused model Size:
Size (MB): 6.638188
Quantized model Size:
Size (MB): 6.712286
there seems to be some improvement due to the calibration, but the quantized model size is still not satisfactory compared to the original size
Could you suggest what’s going wrong here ? |
st184822 | @Vasiliy_Kuznetsov
I also tried a script with Quantized Aware Training -
https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_qat.py (embedded file preview truncated)
But still the quantized model is bigger than the original model
I don’t know what’s going wrong here
Original Size:
Size (MB): 6.623636
Fused model Size:
Size (MB): 6.638188
Quantized model Size:
Size (MB): 6.712286
QAT model Size:
Size (MB): 6.712286 |
st184823 | in the paste here (https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/quantized_model.txt 5), the model doesn’t look quantized. One would expect to see QuantizedConv instead of Conv and QuantizedLinear instead of Linear. One thing to try could be to make sure to run the convert script and ensure that you see the quantized module equivalents afterwards. |
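As a quick sanity check (a sketch, assuming quantized is the model returned by convert), you can list which modules were actually swapped to quantized equivalents:
for name, module in quantized.named_modules():
    if 'Quantized' in type(module).__name__:
        print(name, type(module).__name__)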
st184824 | Hi @Vasiliy_Kuznetsov
Please check the updated quantized_model now -
https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/quantized_model.txt
Size (MB): 6.712286
DataParallel(
(module): East(
(mobilenet): MobileNetV2(
(features): Sequential(
(0): Sequential(
(0): ConvBnReLU2d(
(0): Conv2d(
3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False
(activation_post_process): MinMaxObserver(min_val=-694.3411254882812, max_val=765.30712890625)
)
(1): BatchNorm2d(
32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True
(activation_post_process): MinMaxObserver(min_val=-4.2157487869262695, max_val=4.755300998687744)
)
(2): ReLU(
(activation_post_process): MinMaxObserver(min_val=0.0, max_val=4.755300998687744)
)
)
(1): Identity()
(file preview truncated; see the full file at the link above)
it seems to have quantized convolutions (line 100 onwards). I don't know why the layers before line 100 do not have quantized modules.
Do you think my quantstub and dequantstub placement is incorrect ?
Here’s the model (with quant and dequant stub)
https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/model.py (embedded file preview truncated)
Main script here - https://github.com/raghavgurbaxani/Quantization_Experiments/blob/master/try_qat.py
I suspect my quant and dequant stub placement may be incorrect, but apart from that I've followed all the steps as posted in the static quantization tutorial.
Really appreciate your help |
st184825 | Hi Raghav,
For post training quantization, we want the model to be in eval mode (see https://github.com/pytorch/pytorch/blob/530d48e93a3f04a5ec63a1b789c19a5f775bf497/torch/quantization/fuse_modules.py#L63 24). So, you can add a model.eval() call before you fuse modules:
model.eval()
torch.quantization.fuse_modules(...) |
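For concreteness, a sketch of those two calls with hypothetical module names (the names must match the modules in your own model):
model.eval()
torch.quantization.fuse_modules(
    model,
    [['conv1', 'bn1', 'relu1'], ['conv2', 'bn2']],  # hypothetical layer names
    inplace=True,
)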
st184826 | To the best of my knowledge, the existing quantization method operates on 32-bit.
In order to quantize the weights of a CNN, reduce the memory footprint, and then port the quantized model to a mobile device, how can I convert a 32-bit operation to a 4-bit or 8-bit operation on CPU? |
st184827 | PyTorch quantization supports int8 (but not int4), with fast kernels for CPU on mobile via QNNPACK. https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 30 has some information to get started, and you would want to set the backend to qnnpack to target mobile CPUs. |
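A rough sketch of targeting the qnnpack backend for a mobile-oriented post-training flow (assuming a model already prepared for eager-mode quantization; the calibration loop is a placeholder):
import torch
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
torch.quantization.prepare(model, inplace=True)
# ... feed a few calibration batches through model(...) here ...
torch.quantization.convert(model, inplace=True)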
st184828 | class quantizeModel(object):
    """docstring for quantizePytorchModel"""

    def __init__(self):
        super(quantizeModel, self).__init__()
        self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
        self.train_loader, self.test_loader = get_imagenet()
        self.quant()

    def quant(self):
        model = self.load_model()
        model.eval()
        self.print_size_of_model(model)
        self.validate(model, "original_resnet18", self.test_loader)
        model.fuse_model()
        self.print_size_of_model(model)
        self.quantize(model)

    def load_model(self):
        model = resnet18()
        state_dict = torch.load("CIFAR10_resnet18.pth", map_location=self.device)
        model.load_state_dict(state_dict)
        model.to(self.device)
        return model

    def print_size_of_model(self, model):
        torch.save(model.state_dict(), "temp.p")
        print('Size (MB):', os.path.getsize("temp.p") / 1e6)
        os.remove('temp.p')

    def validate(self, model, name, data_loader):
        with torch.no_grad():
            correct = 0
            total = 0
            acc = 0
            for data in data_loader:
                images, labels = data
                images, labels = images.to(self.device), labels.to(self.device)
                output = model(images)
                _, predicted = torch.max(output, dim=1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
                if total == 1024:
                    break
            acc = round(100 * correct / total, 3)
            print('{{"metric": "{}_val_accuracy", "value": {}%}}'.format(name, acc))
            return acc

    def quantize(self, model):
        #model.qconfig = torch.quantization.default_qconfig
        #model.qconfig = torch.quantization.default_per_channel_qconfig
        model.qconfig = torch.quantization.QConfig(
            activation=torch.quantization.observer.MinMaxObserver.with_args(reduce_range=True),
            weight=torch.quantization.observer.PerChannelMinMaxObserver.with_args(dtype=torch.qint8,
                                                                                  qscheme=torch.per_channel_affine))
        pmodel = torch.quantization.prepare(model)
        #calibration
        self.validate(pmodel, "quntize_per_channel_resent18_train", self.train_loader)
        qmodel = torch.quantization.convert(pmodel)
        self.validate(qmodel, "quntize_per_chaannel_resent18_test", self.test_loader)
        self.print_size_of_model(qmodel)
        torch.jit.save(torch.jit.script(qmodel), "quantization_per_channel_model18.pth") |
st184829 | Below is my quantization process:
class quantizeModel(object):
    def __init__(self):
        super(quantizeModel, self).__init__()
        self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
        self.train_loader, self.test_loader = get_imagenet()
        self.quant()

    def quant(self):
        model = self.load_model()
        model.eval()
        self.print_size_of_model(model)
        self.validate(model, "original_resnet18", self.test_loader)
        model.fuse_model()
        self.print_size_of_model(model)
        self.quantize(model)

    def load_model(self):
        model = resnet18()
        state_dict = torch.load("CIFAR10_resnet18.pth", map_location=self.device)
        model.load_state_dict(state_dict)
        model.to(self.device)
        return model

    def print_size_of_model(self, model):
        torch.save(model.state_dict(), "temp.p")
        print('Size (MB):', os.path.getsize("temp.p") / 1e6)
        os.remove('temp.p')

    def validate(self, model, name, data_loader):
        with torch.no_grad():
            correct = 0
            total = 0
            acc = 0
            for data in data_loader:
                images, labels = data
                images, labels = images.to(self.device), labels.to(self.device)
                output = model(images)
                _, predicted = torch.max(output, dim=1)
                total += labels.size(0)
                correct += (predicted == labels).sum().item()
                if total == 1024:
                    break
            acc = round(100 * correct / total, 3)
            print('{{"metric": "{}_val_accuracy", "value": {}%}}'.format(name, acc))
            return acc

    def quantize(self, model):
        #model.qconfig = torch.quantization.default_qconfig
        #model.qconfig = torch.quantization.default_per_channel_qconfig
        model.qconfig = torch.quantization.QConfig(
            activation=torch.quantization.observer.MinMaxObserver.with_args(reduce_range=True),
            weight=torch.quantization.observer.PerChannelMinMaxObserver.with_args(dtype=torch.qint8,
                                                                                  qscheme=torch.per_channel_affine))
        pmodel = torch.quantization.prepare(model)
        #calibration
        self.validate(pmodel, "quntize_per_channel_resent18_train", self.train_loader)
        qmodel = torch.quantization.convert(pmodel)
        self.validate(qmodel, "quntize_per_chaannel_resent18_test", self.test_loader)
        self.print_size_of_model(qmodel)
        torch.jit.save(torch.jit.script(qmodel), "quantization_per_channel_model18.pth")
The program was executed three times, and three different sets of quantization results were obtained, as follows:
1.
Size (MB): 44.786115
{“metric”: “original_resnet18_val_accuracy”, “value”: 75.098%}
Size (MB): 44.717413
{“metric”: “quntize_per_channel_resent18_train_val_accuracy”, “value”: 46.387%}
{“metric”: “quntize_per_chaannel_resent18_test_val_accuracy”, “value”: 75.586%}
Size (MB): 11.290618
2.
Size (MB): 44.786115
{“metric”: “original_resnet18_val_accuracy”, “value”: 75.098%}
Size (MB): 44.717413
{“metric”: “quntize_per_channel_resent18_train_val_accuracy”, “value”: 45.996%}
{“metric”: “quntize_per_chaannel_resent18_test_val_accuracy”, “value”: 76.953%}
Size (MB): 11.290618
3.
Size (MB): 44.786115
{“metric”: “original_resnet18_val_accuracy”, “value”: 75.098%}
Size (MB): 44.717413
{“metric”: “quntize_per_channel_resent18_train_val_accuracy”, “value”: 43.945%}
{“metric”: “quntize_per_chaannel_resent18_test_val_accuracy”, “value”: 75.195%}
Size (MB): 11.290618
I think the weight parameters are unchanged, the calibration data is unchanged, the quantization configuration algorithm is unchanged, and the result after quantization should also be unchanged, but the accuracy after three quantizations is different. What is the reason? |
st184830 | does it reproduce with torch.manual_seed(0)? The pasted code should give the same results, perhaps the get_imagenet has some randomness? |
st184831 | Thank you for your reply. After torch.manual_seed(191009), the code gives the same result. |
st184832 | Solved by raghuramank100 in post #4
That is correct, we will work on adding support for fusing relu6 soon. For now, if you are doing post training quantization, you could replace relu6 with relu and proceed as a work around.
Thanks, |
st184833 | Yes you can. ReLU6 was added to DEFAULT_MODULE_MAPPING. See Quantized hard sigmoid 48 |
st184834 | hi @pshashk, I did try to fuse a model with the ReLU6 activation function. It throws an error. I see in the pytorch source code that fuse_modules currently supports 4 types of module sequences:
Fuses only the following sequence of modules:
conv, bn
conv, bn, relu
conv, relu
linear, relu
https://github.com/pytorch/pytorch/blob/master/torch/quantization/fuse_modules.py 22 |
st184835 | That is correct, we will work on adding support for fusing relu6 soon. For now, if you are doing post training quantization, you could replace relu6 with relu and proceed as a work around.
Thanks, |
st184836 | @raghuramank100 can you provide an example of how to replace relu6 with relu ? I am trying to quantize a network with relu6 activations. |
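For illustration, one possible way to do that swap (a sketch, assuming a float model; not an official recipe):
import torch.nn as nn

def replace_relu6_with_relu(module):
    # recursively swap every ReLU6 child module for ReLU
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU6):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            replace_relu6_with_relu(child)

replace_relu6_with_relu(model)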
st184837 | I have a model in production. I want to be able to increase my model's throughput for speed reasons. I've tried quantizing the model, but for some reason, if I increase the batch size I still run into an OOM error. I thought that quantizing the model from fp32 to, say, fp16 would allow bigger batch sizes? If that's not the case, what is the use case of quantizing models? |
st184838 | Quantizing model will enable to run the model at lower precision (int8) so it runs faster. Also since the tensors as quantized to 8 bit they will occupy less storage space as well. |
st184839 | Hello
I’d like to convert fp32 model supported in torchvision.models to INT8 model to accelerate CPU inference.
As I understand, using prepare() and convert() can convert the model (https://pytorch.org/docs/stable/quantization.html#id1 15).
Any ideas to handle below error messages?
thanks a lot!
Environment:
Nvidia Jetson TX2
pytorch 1.4.0
My code:
model = torchvision.models.vgg16(pretrained=True).eval()
img = np.random.randint(255, size=(1,3,224,224), dtype=np.uint8)
img = torch.FloatTensor(img)#.cuda()
model.qconfig = torch.quantization.get_default_qconfig('qnnpack')
config = torch.quantization.get_default_qat_qconfig('qnnpack')
torch.backends.quantized.engine = 'qnnpack'
model.qconfig = torch.quantization.default_qconfig
model = torch.quantization.prepare(model)
model = torch.quantization.convert(model)
model.eval()
quant = QuantStub()
img = quant(img)
for loop in range(100):
start = time.time()
output = model.forward(img)#, layer[1])
_, predicted = torch.max(output, 1)
end = time.time()
print(end-start)
Result:
/home/user/.local/lib/python3.6/site-packages/torch/quantization/observer.py:172: UserWarning: Must run observer before calling calculate_qparams. Returning default scale and zero point.
Returning default scale and zero point.")
Traceback (most recent call last):
File "quan2.py", line 37, in <module>
output = model.forward(img)#, layer[1])
File "/usr/local/lib/python3.6/dist-packages/torchvision-0.5.0a0+85b8fbf-py3.6-linux-aarch64.egg/torchvision/models/vgg.py", line 43, in forward
x = self.features(x)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/user/.local/lib/python3.6/site-packages/torch/nn/quantized/modules/conv.py", line 215, in forward
self.dilation, self.groups, self.scale, self.zero_point)
RuntimeError: Could not run 'quantized::conv2d' with arguments from the 'CPUTensorId' backend. 'quantized::conv2d' is only available for these backends: [QuantizedCPUTensorId]. (dispatch_ at /home/user/pytorch/aten/src/ATen/core/dispatch/Dispatcher.h:257)
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x78 (0x7f98d36258 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x145a9a8 (0x7f67d9c9a8 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #2: <unknown function> + 0x484b200 (0x7f6b18d200 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libtorch.so)
frame #3: <unknown function> + 0x6518f0 (0x7f9060e8f0 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #4: <unknown function> + 0x61a868 (0x7f905d7868 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
frame #5: <unknown function> + 0x25ee04 (0x7f9021be04 in /home/user/.local/lib/python3.6/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
frame #7: python3() [0x529958]
frame #9: python3() [0x527860]
frame #11: python3() [0x5f2bcc]
frame #14: python3() [0x528ff0]
frame #17: python3() [0x5f2bcc]
frame #19: python3() [0x595e5c]
frame #21: python3() [0x529738]
frame #23: python3() [0x527860]
frame #25: python3() [0x5f2bcc]
frame #28: python3() [0x528ff0]
frame #31: python3() [0x5f2bcc]
frame #33: python3() [0x595e5c]
frame #35: python3() [0x529738]
frame #37: python3() [0x527860]
frame #38: python3() [0x5297dc]
frame #40: python3() [0x528ff0]
frame #45: __libc_start_main + 0xe0 (0x7f9a22d6e0 in /lib/aarch64-linux-gnu/libc.so.6)
frame #46: python3() [0x420e94] |
st184840 | There are a couple of things that might be related to the error:
You are using original vgg16, you need some modifications so that it can be quantized. You can take a look at torchvision/models/quantization/resnet.py as well as the tutorial: https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html 56 to see how to modify the model.
QuantStub() need to be added to the model instead of using it directly.
Between prepare() and convert() you need to run the model to collect histogram, which will be used to determine the scale and zero point of the quantized model. |
st184841 | Also, are you running it on GPU? PyTorch quantization currently only supports CPU and mobile backends. |
st184842 | Thanks a lot. By the way, in torchvision/models/quantization/resnet.py the specified backend is "fbgemm", while in torchvision/models/quantization/mobilenet.py the specified backend is 'qnnpack'. Because I am working on an ARM processor, I have to use the 'qnnpack' backend. Is there any difference in implementation methodology between 'fbgemm' and 'qnnpack'? |
st184843 | The quantization workflow and methodology is the same for ‘fbgemm’ and ‘qnnpack’, they are just different backends for different platforms. |
st184844 | Although I carefully read the attached link in your answer, it is hard to find the 'observer' phase. Could you explain the observer phase? Which function in the attached link performs the observer phase?
Even if there is no batch norm layer, is fuse always necessary? |
st184845 | The observer phase is called calibration in the tutorial link; basically it runs several iterations and uses observers to collect the statistics of the activations and weights, which will be used to do quantization in convert():
# Calibrate with the training set
evaluate(myModel, criterion, data_loader, neval_batches=num_calibration_batches)
You need to fuse batch norm; you can refer to this post: Static quantizing and batch norm error (could not run aten::native_batch_norm with args from QuantCPUTensorid backend')
Other than that, fusion is not necessary, but it will help with performance and accuracy. |
st184846 | I see.
To summarize what I understood, the quantization steps are as follows (see the sketch below).
Load the pretrained fp32 model.
Run prepare() to prepare the pretrained fp32 model for conversion to an int8 model.
Run fp32model.forward() to calibrate the fp32 model by running it a sufficient number of times. However, this calibration phase is a kind of `blackbox' process, so I cannot tell whether the calibration has actually been done.
Run convert() to finally convert the calibrated model to a usable int8 model. |
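Put together, those steps look roughly like this (a sketch; fuse_model() and calibration_loader stand in for your own fusion code and calibration data, they are not APIs from this thread):
import torch

model.eval()                                            # 1. pretrained fp32 model in eval mode
model.fuse_model()                                      #    fuse conv+bn(+relu) where applicable
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)         # 2. insert observers
with torch.no_grad():
    for images, _ in calibration_loader:                # 3. calibration on representative data
        model(images)
torch.quantization.convert(model, inplace=True)         # 4. convert to int8 modules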
st184847 | I am trying to quantize a model which has upsampling layers at specific parts of the network, but I am unable to quantize it due to this error:
Expected a value of type ‘Tensor’ for argument ‘target_size’ but instead found type ‘List[int]’.
Inferred ‘target_size’ to be of type ‘Tensor’ because it was not annotated with an explicit type.
Implementation of the upsampling layer
class Upsample(nn.Module):
    def __init__(self):
        super(Upsample, self).__init__()

    def forward(self, x, target_size):
        # assert (x.data.dim() == 4)
        _, _, tH, tW = target_size[0], target_size[1], target_size[2], target_size[3]
        B = x.data.size(0)
        C = x.data.size(1)
        H = x.data.size(2)
        W = x.data.size(3)
        return x.view(B, C, H, 1, W, 1).expand(B, C, H, tH // H, W, tW // W).contiguous().view(B, C, tH, tW)
Upsampling function usage
up = self.upsample1(x7, downsample4.size())
Quantization code
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(model.qconfig)
torch.quantization.prepare(model, inplace=True)
print('Post Training Quantization Prepare : Inserting Observers')
print('\n Downsampling1 Block: After observer insertion \n\n', model.down1.conv1)
torch.quantization.convert(model, inplace=True)
print("Post Training Quantization : Convert Done!")
print("\n Downsampling1Block: After quantization \n\n", model.down1.conv1)
torch.jit.save(torch.jit.script(model), quantized_model_path)
This is my first time trying to quantize a model in pytorch so I am totally clueless on how to solve this. Thanks in advance. |
st184848 | The error doesn’t seem related to quantization. Since the upsample1 is a custom module, you cannot quantize it currently so the quant, dequant stubs need to be inserted in the model correctly. |
st184849 | Htut_Lynn_Aung:
def forward(self, x, target_size):
# assert (x.data.dim() == 4)
this is an error from torch.jit.script; you can annotate the forward like the following:
def forward(self, x, target_size):
    # type: (Tensor, List[int]) -> Tensor
    # assert (x.data.dim() == 4)
to make it scriptable. |
st184850 | Thanks for the reply. Yes, that seems to be the case. I converted target_size to a torch tensor before passing it to the upsample function, which more or less solved the problem.
From this
x7 = self.conv7(x6)
# UPSAMPLE
up = self.upsample1(x7, downsample4.size())
to
x7 = self.conv7(x6)
# UPSAMPLE
featuremap_size = torch.tensor(downsample4.size())
up = self.upsample1(x7, featuremap_size, self.inference)
But the model that I am trying to optimize is YoloV4, and it has some activation functions (mish and softplus) not supported by pytorch's quantization. Therefore, even after this solution, I was not able to quantize the model in the end. |
st184851 | How do we print quantized model weights in PyTorch?
To print using normal PyTorch representation, I understand we use the following approach…
print_parameters = lambda model: [print(name, param.data) for name, param in model.named_parameters() if param.requires_grad]
Similarly, if I defined a model as follows…
import torch
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = torch.nn.Conv2d(2, 3, 1, bias=False)
        self.conv2 = torch.nn.Conv2d(3, 1, 2, bias=False)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        return x
model = Model()
Then we can print using pretty straight-forward syntax…
print(model.conv1.weight)
print(model.conv2.weight)
However, both of these approaches fail when the model is converted to a quantized form. Specifically, after the following procedures…
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
Printing model.conv1.weight returns a method and the loop provided at the beginning does not print anything. |
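For what it's worth, on a converted module the packed weight is exposed through a weight() method rather than a .weight parameter, so something like the following sketch (using the quantized model defined above) prints the quantized weights:
qw = model.conv1.weight()   # quantized tensor
print(qw)                   # values together with scale and zero_point
print(qw.int_repr())        # underlying int8 storage
print(qw.dequantize())      # float view of the quantized weights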
st184852 | We have recently prototyped PyTorch Numeric Suite that can compare the numerics between float and quantized model. It’s in nightly build and will be in 1.6.
You can take a look at the following example to see how to compare the weights using it:
https://github.com/pytorch/pytorch/blob/9c66c88e19e8c6cc90c59409a8074cc31cb4654c/test/quantization/test_numeric_suite.py#L81
class TestEagerModeNumericSuite(QuantizationTestCase):
    def test_compare_weights(self):
        r"""Compare the weights of float and quantized conv layer
        """
        def compare_and_validate_results(float_model, q_model):
            weight_dict = compare_weights(
                float_model.state_dict(), q_model.state_dict()
            )
            self.assertEqual(len(weight_dict), 1)
            for k, v in weight_dict.items():
                self.assertTrue(v["float"].shape == v["quantized"].shape) |
st184853 | Is it possible to carry the quantization over to Caffe? Let's say I created a quantized model using PyTorch and now I want to export the model to Caffe; can I do that by using the scale/zero_point parameters, or is it mandatory to use PyTorch for the quantization? |
st184854 | you can take a look at ONNX, but we don’t have very good quantization support in ONNX right now, I’m not sure about the ONNX - caffe path either. |
st184855 | Is the quantization done once and then usable (with the scale and zero_point), or does it need special support to run in int8 during inference? |
st184856 | quantization is done before inference, it transforms a floating point model to a quantized model. |
st184857 | I have scoured all documentation I could locate and am still confused on certain accounts:
class docstring (https://pytorch.org/docs/stable/quantization.html#torch.quantization.QuantStub 52) says
Quantize stub module, before calibration, this is same as an observer, it will be swapped as nnq.Quantize in convert.
which unfortunately isn’t very helpful at all (which “observer”?). That last part of that sentence seems to suggest that at inference time this module will be replaced with one that does actual float-to-int8 data conversion. But what does this module do at calibration time?
furthermore, tutorials seem to suggest that QuantStub and DeQuantStub act as delimiters of parts of the model stack that will actually be subject to quantization; however, some other commentary ([quantization] how to quantize model which include not support to quantize layer 21) seems to suggest that these also “record tensor statistics” and hence unique instances of them are needed – what, one unique pair per each contiguous quantization region?
Some more details would be very much appreciated. |
st184858 | QuantStub is just a place holder for quantize op, it needs to be unique since it has state.
DeQuantStub is a place holder for dequantize op, but it does not need to be unique since it’s stateless.
In eager mode quantization, users need to manually place QuantStub and DeQuantStub in the model whenever the activation in the code crosses the quantized and non-quantized boundary.
One thing to remember is for a quantized module, we always quantize the output of the module, but we don’t quantize the input of the module, so the quantization of the input Tensor should be taken care of by the previous module, that’s why we have QuantStub here, basically to quantize the input for the next quantized module in the sequence.
So in prepare, we’ll attach observer for the output of QuantStub to record the Tensor statistics of the output Tensor, just like for other modules like nn.Conv2d. observer is specified by the qconfig.
And in convert QuantStub will be swapped as nnq.Quantize module, and output of nnq.Quantize(input of nnq.Conv2d) will be quantized. |
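As a toy illustration of that placement (not a model from this thread), a module whose whole forward runs quantized would look like:
import torch.nn as nn
from torch.quantization import QuantStub, DeQuantStub

class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.quant = QuantStub()      # quantizes the input for the first quantized module
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.dequant = DeQuantStub()  # converts the quantized output back to float

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.conv(x))
        return self.dequant(x)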
st184859 | Thank you, Jerry, this helps.
To clarify
In eager mode quantization
– that is the only mode available at the moment, correct? The JIT does not quantize (yet?).
This might sound like a nit, but I think the following actually reflects a fundamental difficulty in quantizing in eager mode:
so the quantization of the input Tensor should be taken care of by the previous module,
How does pytorch decide what is “previous”? The true sequence of layer invocations is determined by the procedural code in forward() and it can involve branching at runtime or just data flow merging like with skip connections.
(I am yet to succeed in quantizing resnet 16 because of this, I suspect) |
st184860 | – that is the only mode available at the moment, correct? The JIT does not quantize (yet?).
yeah, eager mode is the only mode that’s supported in public release, but graph mode is coming up in 1.6 as well.
How does pytorch decide what is “previous”?
PyTorch doesn't do this in eager mode. That's why in eager mode users need to manually place QuantStub and DeQuantStub themselves. This is done automatically in graph mode quantization.
eager mode will just swap all modules that has a qconfig, so user need to make sure the swap makes sense and set qconfig and place QuantStub/DeQuantStub correctly. |
st184861 | What types of layers are supported in PyTorch’s quantization framework? (especially ones that are more related to convnets).
I found Conv2D, Conv3D, ReLU, but I couldn't find any type of BatchNorm. |
st184862 | I’ll move the category to Quantization to get a bit more visibility for this topic. |
st184863 | you can checkout https://pytorch.org/docs/stable/quantization.html#operation-coverage 2, this might be a little bit out of date, to find the most up to date supported ops, you can take a look at: https://github.com/pytorch/pytorch/tree/master/aten/src/ATen/native/quantized/cpu 2 |
st184864 | I have seen the static quantization tutorial, where the layers are fused beforehand. I got good results with fused layers, but if I don't fuse the layers, my accuracy is very poor.
What is the effect of layer fusion?
Please do help me with this. |
st184865 | layer fusion is going to fuse Conv+BN into a Conv module or Conv + BN + ReLU into a ConvRelu module. this does not change numerics itself. Without fusion conv, bn and relu will be quantized independently, that might be the reason why the accuracy drops. |
st184866 | But, what is the drawback of quantizing convolution, batchnorm, relu operations independently? |
st184867 | quantizing them independently will have worse performance, and also may suffer from bigger accuracy loss. |
st184868 | In my original model, the upsample part is
F.interpolate(l7, scale_factor=2.0, mode='bilinear', align_corners=True)
When I got the QAT model.pt and tried it on Android, the inference time of the model.pt was slow, similar to the float.pt.
So I changed the upsample part to
F.interpolate(l7, scale_factor=2.0, mode='nearest')
and the inference time speeds up.
But the result of the segmentation model is too bad.
Why is bilinear slower than nearest after QAT?
Can anyone explain and give some suggestions?
Thx |
st184869 | Hi,
Actually, I do not know about QAT, but nearest is always faster than bilinear. In bilinear, a transformation needs to be computed, while nearest is just copy/paste with almost no computation.
Although for large tensors, linear is possibly preferred even in terms of speed.
Bests |
st184870 | Quantization-aware training (QAT) is the quantization method that typically results in the highest accuracy.
You are right, nearest is always faster than bilinear.
I tested it on Android: for the float model the difference is less than 5 ms, but for the quantized model it is more than about 100 ms. |
st184871 | I’m trying to quantize a mobilenetv2 + SSDLite model from https://github.com/qfgaohao/pytorch-ssd
I followed the tutorial here https://pytorch.org/tutorials/advanced/static_quantization_tutorial.html doing Post-training static quantization
Before quantizing the model definition looks like this
SSD(
(base_net): Sequential(
(0): Sequential(
(0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
(3): Conv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): Conv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU6(inplace=True)
(3): Conv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=96, bias=False)
(4): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): ReLU6(inplace=True)
(6): Conv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
**#Removed some stuff to stay under 32K characters**
(5): Conv2d(64, 24, kernel_size=(1, 1), stride=(1, 1))
)
(source_layer_add_ons): ModuleList()
)
Quantization is done using :
model.eval().to('cpu')
model.fuse_model()
model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
After quantization the model definition looks like this :
SSD(
(base_net): Sequential(
(0): Sequential(
(0): QuantizedConv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
)
(1): InvertedResidual(
(conv): Sequential(
(0): QuantizedConv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), scale=1.0, zero_point=0, padding=(1, 1), groups=32, bias=False)
(1): BatchNorm2d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
(3): QuantizedConv2d(32, 16, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual(
(conv): Sequential(
(0): QuantizedConv2d(16, 96, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(1): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): QuantizedReLU6(inplace=True)
(3): QuantizedConv2d(96, 96, kernel_size=(3, 3), stride=(2, 2), scale=1.0, zero_point=0, padding=(1, 1), groups=96, bias=False)
(4): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): QuantizedReLU6(inplace=True)
(6): QuantizedConv2d(96, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0, bias=False)
(7): BatchNorm2d(24, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
**#Removed some stuff to stay under 32K characters**
(5): QuantizedConv2d(64, 24, kernel_size=(1, 1), stride=(1, 1), scale=1.0, zero_point=0)
)
(source_layer_add_ons): ModuleList()
)
Model size decreased from 14MB to 4MB.
but with this new definition, how can I load the quantized model?
I'm trying the following and getting the below error:
#Saving
torch.save(q_model.state_dict(), project.quantized_trained_model_dir / file_name)
#Loading the saved quatized model
lq_model = create_mobilenetv2_ssd_lite(len(class_names), is_test=True)
lq_model.load(project.quantized_trained_model_dir / file_name)
#Error
RuntimeError: Error(s) in loading state_dict for SSD:
Unexpected key(s) in state_dict: "base_net.0.0.scale", "base_net.0.0.zero_point", "base_net.0.0.bias", "base_net.1.conv.0.scale", "base_net.1.conv.0.zero_point", "base_net.1.conv.0.bias", "base_net.1.conv.3.scale", "base_net.1.conv.3.zero_point", "base_net.1.conv.3.bias", "base_net.2.conv.0.scale"...
I do understand that after quantization some layers are changed (Conv2d -> QuantizedConv2d), but does that mean that I have to keep 2 model definitions, one for the original and one for the quantized version?
This is a diff of the definitions:
[screenshot: diff of the original and quantized model definitions, 3584×2278] |
st184872 | Solved by jerryzh168 in post #3
yeah, you’ll need to quantize lq_model after lq_model = create_mobilenetv2_ssd_lite(len(class_names), is_test=True) before you load from the quantized model |
st184873 | Have you tried this?: How do I save and load quantization model 15
i.e., prepare and convert steps before loading the state_dict
Also, I would expect conv+batchnorm+relu to be fused into QuantizedConvReLU2d but I think you are using relu6 and fusion of conv+batchnorm+relu6 isn’t currently supported. |
st184874 | yeah, you’ll need to quantize lq_model after lq_model = create_mobilenetv2_ssd_lite(len(class_names), is_test=True) before you load from the quantized model |
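In other words, something along these lines (a sketch reusing names from this thread; the qconfig and fusion must match whatever was used when the quantized state_dict was saved):
lq_model = create_mobilenetv2_ssd_lite(len(class_names), is_test=True)
lq_model.eval()
lq_model.fuse_model()
lq_model.qconfig = torch.quantization.get_default_qconfig('fbgemm')
torch.quantization.prepare(lq_model, inplace=True)
torch.quantization.convert(lq_model, inplace=True)
lq_model.load_state_dict(torch.load(project.quantized_trained_model_dir / file_name))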
st184875 | This worked, even though there is around 1.6 seconds of overhead to quantize the vanilla model before loading the weights. Thank you |
st184876 | @wassimseif: How satisfied were you with the results of the quantization? Where did you add the QuantStub and DeQuantStub functions in the forward pass? In particular, I was wondering if you dequantize the localization and confidence predictions at the very end of the model or whether you dequantize beforehand. |
st184877 | Quantization alone resulted in a huge drop in the mAP, but doing calibration while quantizing resulted in the same mAP as the original model, so make sure to explore calibration also. Quant & DeQuant stubs were added just in the model (before & after the forward pass). |
st184878 | Thanks a lot for your kind and informative reply, @wassimseif !
When I tried quantization exclusively, the mAP also dropped enormously, and I wondered whether I was doing it correctly. Will try calibration too - thanks for the suggestion!
Just to make sure though: At the end of the forward pass, there are two dequant stubs added then, one for the locations and one for the confidences? Initially I thought that I should dequantize after the base net because I suspected that such fine-grained localization cannot be done well with 8 bits but requires the more expressive 32-bit representation. |
st184879 | You don't need 2 DeQuant stubs, just 1, and you can reuse it. Yes, dequantizing after the base net might work, but this would result in the SSD layers not being quantized. In my case fp16 or fp32 instructions were not available, so I had to quantize the whole model. |
st184880 | I want to know whether the quantized model obtained by Post Training Static Quantization can be run on CUDA? |
st184881 | Solved by jerryzh168 in post #2
No, it only works on CPU right now, we will consider adding CUDA support in the second half of the year |
st184882 | No, it only works on CPU right now, we will consider adding CUDA support in the second half of the year |
st184883 | As per the documentation, PyTorch supports int8 quantization. Does PyTorch currently support int16 quantization? |
st184884 | Solved by supriyar in post #2
We currently do not support int16 quantization. There is support for fp16 dynamic quantization. |
st184885 | We currently do not support int16 quantization. There is support for fp16 dynamic quantization. |
st184886 | Hi
I am experimenting with PyTorch 1.3 quantization for resnet50. I took the pre-trained model from the model zoo.
Please find below the accuracy (for 100 images) and size at different stages of my experiment:
Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 93.75}
Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
Size (MB): 102.145116
{"metric": "fused_resnet50_val_accuracy", "value": 0.0}
ConvReLU2d(
(0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3))
(1): ReLU()
)
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 0.0}
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 0.0}
Size (MB): 25.65341
QuantizedConvReLU2d(3, 64, kernel_size=(7, 7), stride=(2, 2), scale=1.0, zero_point=0, padding=(3, 3))
Size (MB): 25.957137
QuantizedConvReLU2d(3, 64, kernel_size=(7, 7), stride=(2, 2), scale=1.0, zero_point=0, padding=(3, 3))
Not sure where it went wrong.
Please find below the code for fusing the layers:
def fuse_model(m):
    modules_to_fuse = [['conv1', 'bn1', "relu"]]
    torch.quantization.fuse_modules(m, modules_to_fuse, inplace=True)
    for mod in m.layer1:
        torch.quantization.fuse_modules(mod, [["conv1", "bn1"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv2", "bn2"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv3", "bn3", "relu"]], inplace=True)
        if mod.downsample:
            torch.quantization.fuse_modules(mod.downsample, [["0", "1"]], inplace=True)
    for mod in m.layer2:
        torch.quantization.fuse_modules(mod, [["conv1", "bn1"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv2", "bn2"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv3", "bn3", "relu"]], inplace=True)
        if mod.downsample:
            torch.quantization.fuse_modules(mod.downsample, [["0", "1"]], inplace=True)
    for mod in m.layer3:
        torch.quantization.fuse_modules(mod, [["conv1", "bn1"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv2", "bn2"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv3", "bn3", "relu"]], inplace=True)
        if mod.downsample:
            torch.quantization.fuse_modules(mod.downsample, [["0", "1"]], inplace=True)
    for mod in m.layer4:
        torch.quantization.fuse_modules(mod, [["conv1", "bn1"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv2", "bn2"]], inplace=True)
        torch.quantization.fuse_modules(mod, [["conv3", "bn3", "relu"]], inplace=True)
        if mod.downsample:
            torch.quantization.fuse_modules(mod.downsample, [["0", "1"]], inplace=True)
    return m |
st184887 | I modified my fused_model function and now I am seeing the same accuracy as the non-fused model, but the quantization accuracy is zero.
Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 93.75}
Size (MB): 102.143772
{"metric": "fused_resnet50_val_accuracy", "value": 93.75}
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 0.0}
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 0.0}
Size (MB): 25.653416
Size (MB): 25.957149
The change in the fused function is that I have added relu in layers 1 to 4. |
st184888 | It's great to see that the accuracy post fusion is high. What are you using for calibration before you quantize the model? |
st184889 | The issue was with the calibration dataset. I randomly selected 1024 training samples from the imagenet dataset. Now I can see some good results.
Size (MB): 102.491395
{"metric": "original_resnet50_val_accuracy", "value": 90.234375}
Size (MB): 102.143772
{"metric": "fused_resnet50_val_accuracy", "value": 90.234375}
calibration
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 78.22265625}
after quantization
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 88.28125}
calibration
{"metric": "quntize_per_tensor_resent50_val_accuracy", "value": 76.26953125}
after quantization
{"metric": "quntize_per_channel_resent50_val_accuracy", "value": 89.84375}
size of quantization per tensor model
Size (MB): 25.653446
size of quantization per channel model
Size (MB): 25.957137 |
st184890 | Hi @Tiru_B, can you share your code? I am having the same issue in spite of adding a relu layer. |
st184891 | https://github.com/tiru1930/resnet_quantization |
st184892 | OS: Win7 64bit
Pytorch 1.5.0_CPU
when I try to quantize a unet model, I get the error below:
RuntimeError: Could not run 'aten::slow_conv_transpose2d' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::slow_conv_transpose2d' is only available for these backends: [CPUTensorId, VariableTensorId].
Is there any way to work around this? |
st184893 | Had the exact same problem. I worked around this by inserting torch.quantization.DeQuantStub and torch.quantization.QuantStub before and after the ConvTranspose2d layer. I don’t know if this affects performance or anything.
Just take a look at the following class I converted:
from torch.quantization import DeQuantStub, QuantStub

class UpsamplerBlock(nn.Module):
    def __init__(self, ninput, noutput):
        super(UpsamplerBlock, self).__init__()
        self.conv = nn.ConvTranspose2d(ninput, noutput, 3, stride=2, padding=1, output_padding=1, bias=True)
        self.bn = nn.BatchNorm2d(noutput, eps=1e-3)
        self.quant = QuantStub()
        self.dequant = DeQuantStub()

    def forward(self, input):
        output = self.conv(self.dequant(input))
        output = self.bn(self.quant(output))
        return F.relu(output) |
st184894 | yeah, quantized conv transpose 2d is not supported yet, but @Zafar is working on it right now |
st184895 | Pytorch 1.5.0+cu92
Windows7 64 bit
model.qconfig = torch.quantization.default_qconfig
torch.quantization.prepare(model, inplace=True)
torch.quantization.convert(model, inplace=True)
<---- Here
Exception has occurred: RuntimeError
Didn’t find engine for operation quantized::conv2d_prepack NoQEngine |
st184896 | Solved by peterjc123 in post #3
We use VS 14.11 to build binaries for CUDA 9.2, so there is no FBGEMM support. If you need FBGEMM, then please use the binaries with other CUDA versions instead. |
st184897 | Could you check if the USE_FBGEMM macro is turned on in your build environment? I believe it should be supported.
cc @dskhudia who might be able to add more info. |
st184898 | We use VS 14.11 to build binaries for CUDA 9.2, so there is no FBGEMM support. If you need FBGEMM, then please use the binaries with other CUDA versions instead. |
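A quick way to check what a given binary supports (this is the same attribute checked in the training script earlier in this thread):
import torch
print(torch.backends.quantized.supported_engines)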
st184899 | I have an input image of shape 32*1*32*128, where 32 is my batch size (index 0).
I want to quantize my model. When I call my evaluate function, it shows this error:
File "/media/ai/ashish/OCR/Text_Recognition/modules/transformation.py", line 158, in build_P_prime
batch_size, 3, 2).float().to(device)), dim=1) # batch_size x F+3 x 2
RuntimeError: Could not run 'aten::_cat' with arguments from the 'QuantizedCPUTensorId' backend. 'aten::_cat' is only available for these backends: [CUDATensorId, CPUTensorId, VariableTensorId]. |
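One possible direction (a sketch, not a verified fix for this model): the error says aten::_cat has no kernel for quantized CPU tensors, so the concatenation can be routed through nn.quantized.FloatFunctional, similar to the skip_add.cat() replacement mentioned earlier in this thread.
import torch.nn as nn

class CatBlock(nn.Module):
    def __init__(self):
        super(CatBlock, self).__init__()
        self.ff = nn.quantized.FloatFunctional()  # functional wrapper that the quantization passes can observe

    def forward(self, a, b):
        return self.ff.cat([a, b], dim=1)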